WorldWideScience

Sample records for reliable cloud pressures

  1. Reliability and Availability of Cloud Computing

    CERN Document Server

    Bauer, Eric

    2012-01-01

    A holistic approach to service reliability and availability of cloud computing Reliability and Availability of Cloud Computing provides IS/IT system and solution architects, developers, and engineers with the knowledge needed to assess the impact of virtualization and cloud computing on service reliability and availability. It reveals how to select the most appropriate design for reliability diligence to assure that user expectations are met. Organized in three parts (basics, risk analysis, and recommendations), this resource is accessible to readers of diverse backgrounds and experience le

  2. A Framework to Improve Communication and Reliability Between Cloud Consumer and Provider in the Cloud

    OpenAIRE

    Vivek Sridhar

    2014-01-01

    Cloud services consumers demand reliable methods for choosing appropriate cloud service provider for their requirements. Number of cloud consumer is increasing day by day and so cloud providers, hence requirement for a common platform for interacting between cloud provider and cloud consumer is also on the raise. This paper introduces Cloud Providers Market Platform Dashboard. This will act as not only just cloud provider discoverability but also provide timely report to consumer on cloud ser...

  3. Assessment of physical server reliability in multi cloud computing system

    Science.gov (United States)

    Kalyani, B. J. D.; Rao, Kolasani Ramchand H.

    2018-04-01

    Business organizations nowadays functioning with more than one cloud provider. By spreading cloud deployment across multiple service providers, it creates space for competitive prices that minimize the burden on enterprises spending budget. To assess the software reliability of multi cloud application layered software reliability assessment paradigm is considered with three levels of abstractions application layer, virtualization layer, and server layer. The reliability of each layer is assessed separately and is combined to get the reliability of multi-cloud computing application. In this paper, we focused on how to assess the reliability of server layer with required algorithms and explore the steps in the assessment of server reliability.

  4. Dynamic Scheduling for Cloud Reliability using Transportation Problem

    OpenAIRE

    P. Balasubramanie; S. K. Senthil Kumar

    2012-01-01

    Problem statement: Cloud is purely a dynamic environment and the existing task scheduling algorithms are mostly static and considered various parameters like time, cost, make span, speed, scalability, throughput, resource utilization, scheduling success rate and so on. Available scheduling algorithms are mostly heuristic in nature and more complex, time consuming and does not consider reliability and availability of the cloud computing environment. Therefore there is a need to implement a sch...

  5. Reliability analysis of reactor pressure vessel intensity

    International Nuclear Information System (INIS)

    Zheng Liangang; Lu Yongbo

    2012-01-01

    This paper performs the reliability analysis of reactor pressure vessel (RPV) with ANSYS. The analysis method include direct Monte Carlo Simulation method, Latin Hypercube Sampling, central composite design and Box-Behnken Matrix design. The RPV integrity reliability under given input condition is proposed. The result shows that the effects on the RPV base material reliability are internal press, allowable basic stress and elasticity modulus of base material in descending order, and the effects on the bolt reliability are allowable basic stress of bolt material, preload of bolt and internal press in descending order. (authors)

  6. A Cloud Top Pressure Algorithm for DSCOVR-EPIC

    Science.gov (United States)

    Min, Q.; Morgan, E. C.; Yang, Y.; Marshak, A.; Davis, A. B.

    2017-12-01

    The Earth Polychromatic Imaging Camera (EPIC) sensor on the Deep Space Climate Observatory (DSCOVR) satellite presents unique opportunities to derive cloud properties of the entire daytime Earth. In particular, the Oxygen A- and B-band and corresponding reference channels provide cloud top pressure information. In order to address the in-cloud penetration depth issue—and ensuing retrieval bias—a comprehensive sensitivity study has been conducted to simulate satellite-observed radiances for a wide variety of cloud structures and optical properties. Based on this sensitivity study, a cloud top pressure algorithm for DSCOVR-EPIC has been developed. Further, the algorithm has been applied to EPIC measurements.

  7. Multiobjective Reliable Cloud Storage with Its Particle Swarm Optimization Algorithm

    Directory of Open Access Journals (Sweden)

    Xiyang Liu

    2016-01-01

    Full Text Available Information abounds in all fields of the real life, which is often recorded as digital data in computer systems and treated as a kind of increasingly important resource. Its increasing volume growth causes great difficulties in both storage and analysis. The massive data storage in cloud environments has significant impacts on the quality of service (QoS of the systems, which is becoming an increasingly challenging problem. In this paper, we propose a multiobjective optimization model for the reliable data storage in clouds through considering both cost and reliability of the storage service simultaneously. In the proposed model, the total cost is analyzed to be composed of storage space occupation cost, data migration cost, and communication cost. According to the analysis of the storage process, the transmission reliability, equipment stability, and software reliability are taken into account in the storage reliability evaluation. To solve the proposed multiobjective model, a Constrained Multiobjective Particle Swarm Optimization (CMPSO algorithm is designed. At last, experiments are designed to validate the proposed model and its solution PSO algorithm. In the experiments, the proposed model is tested in cooperation with 3 storage strategies. Experimental results show that the proposed model is positive and effective. The experimental results also demonstrate that the proposed model can perform much better in alliance with proper file splitting methods.

  8. LOFT pressurizer safety: relief valve reliability

    International Nuclear Information System (INIS)

    Brown, E.S.

    1978-01-01

    The LOFT pressurizer self-actuating safety-relief valves are constructed to the present state-of-the-art and should have reliability equivalent to the valves in use on PWR plants in the U.S. There have been no NRC incident reports on valve failures to lift that would challenge the Technical Specification Safety Limit. Fourteen valves have been reported as lifting a few percentage points outside the +-1% Tech. Spec. surveillance tolerance (9 valves tested over and 5 valves tested under specification). There have been no incident reports on failures to reseat. The LOFT surveillance program for assuring reliability is equivalent to nuclear industry practice

  9. LOFT pressurizer safety: relief valve reliability

    Energy Technology Data Exchange (ETDEWEB)

    Brown, E.S.

    1978-01-18

    The LOFT pressurizer self-actuating safety-relief valves are constructed to the present state-of-the-art and should have reliability equivalent to the valves in use on PWR plants in the U.S. There have been no NRC incident reports on valve failures to lift that would challenge the Technical Specification Safety Limit. Fourteen valves have been reported as lifting a few percentage points outside the +-1% Tech. Spec. surveillance tolerance (9 valves tested over and 5 valves tested under specification). There have been no incident reports on failures to reseat. The LOFT surveillance program for assuring reliability is equivalent to nuclear industry practice.

  10. Reliability issues related to the usage of Cloud Computing in Critical Infrastructures

    OpenAIRE

    Diez Gonzalez, Oscar Manuel; Silva Vazquez, Andrés

    2011-01-01

    The use of cloud computing is extending to all kind of systems, including the ones that are part of Critical Infrastructures, and measuring the reliability is becoming more difficult. Computing is becoming the 5th utility, in part thanks to the use of cloud services. Cloud computing is used now by all types of systems and organizations, including critical infrastructure, creating hidden inter-dependencies on both public and private cloud models. This paper investigates the use of cloud co...

  11. Radiation pressure - a stabilizing agent of dust clouds in comets?

    International Nuclear Information System (INIS)

    Froehlich, H.E.; Notni, P.

    1988-01-01

    The internal dynamics of an illuminated dust cloud of finite optical thickness is investigated. The dependence of the radiation pressure on the optical depth makes the individual particles oscillate, in one dimension, around the accelerated centre of gravity of the cloud. The cloud moves as an entity, irrespectively of the velocity dispersion of the particles and their efficiency for radiation pressure. If the optical depth does not change, i.e. if the cloud does not expand laterally, its lifetime is unlimited. A contraction caused by energy dissipation in mechanical collisions between the dust particles is expected. The range of particle sizes which can be transported by such a 'coherent cloud' is estimated, as well as the acceleration of the whole cloud. The structure of the cloud in real space and in velocity space is investigated. A comparison with the 'striae' observed in the dust tails of great comets shows that the parent clouds of these striae may have been of the kind considered. (author)

  12. A Quantitative Risk Analysis Framework for Evaluating and Monitoring Operational Reliability of Cloud Computing

    Science.gov (United States)

    Islam, Muhammad Faysal

    2013-01-01

    Cloud computing offers the advantage of on-demand, reliable and cost efficient computing solutions without the capital investment and management resources to build and maintain in-house data centers and network infrastructures. Scalability of cloud solutions enable consumers to upgrade or downsize their services as needed. In a cloud environment,…

  13. Proactive replica checking to assure reliability of data in cloud storage with minimum replication

    Science.gov (United States)

    Murarka, Damini; Maheswari, G. Uma

    2017-11-01

    The two major issues for cloud storage systems are data reliability and storage costs. For data reliability protection, multi-replica replication strategy which is used mostly in current clouds acquires huge storage consumption, leading to a large storage cost for applications within the loud specifically. This paper presents a cost-efficient data reliability mechanism named PRCR to cut back the cloud storage consumption. PRCR ensures data reliability of large cloud information with the replication that might conjointly function as a price effective benchmark for replication. The duplication shows that when resembled to the standard three-replica approach, PRCR will scale back to consume only a simple fraction of the cloud storage from one-third of the storage, thence considerably minimizing the cloud storage price.

  14. Cyber Security and Reliability in a Digital Cloud

    Science.gov (United States)

    2013-01-01

    Runs the Mission  Mr. Bret Hartman  RSA  The Intelligent Security  Operations Center and  Advanced Persistent Threats  Mr. Chris C. Kemp  OpenStack  Cloud...Software  OpenStack  Cloud Software  Mr. Pravin Kothari  CipherCloud  Cloud Data Protection  Dr. John C. Mitchell  Stanford University  Innovation in

  15. Neural network cloud top pressure and height for MODIS

    Science.gov (United States)

    Håkansson, Nina; Adok, Claudia; Thoss, Anke; Scheirer, Ronald; Hörnquist, Sara

    2018-06-01

    Cloud top height retrieval from imager instruments is important for nowcasting and for satellite climate data records. A neural network approach for cloud top height retrieval from the imager instrument MODIS (Moderate Resolution Imaging Spectroradiometer) is presented. The neural networks are trained using cloud top layer pressure data from the CALIOP (Cloud-Aerosol Lidar with Orthogonal Polarization) dataset. Results are compared with two operational reference algorithms for cloud top height: the MODIS Collection 6 Level 2 height product and the cloud top temperature and height algorithm in the 2014 version of the NWC SAF (EUMETSAT (European Organization for the Exploitation of Meteorological Satellites) Satellite Application Facility on Support to Nowcasting and Very Short Range Forecasting) PPS (Polar Platform System). All three techniques are evaluated using both CALIOP and CPR (Cloud Profiling Radar for CloudSat (CLOUD SATellite)) height. Instruments like AVHRR (Advanced Very High Resolution Radiometer) and VIIRS (Visible Infrared Imaging Radiometer Suite) contain fewer channels useful for cloud top height retrievals than MODIS, therefore several different neural networks are investigated to test how infrared channel selection influences retrieval performance. Also a network with only channels available for the AVHRR1 instrument is trained and evaluated. To examine the contribution of different variables, networks with fewer variables are trained. It is shown that variables containing imager information for neighboring pixels are very important. The error distributions of the involved cloud top height algorithms are found to be non-Gaussian. Different descriptive statistic measures are presented and it is exemplified that bias and SD (standard deviation) can be misleading for non-Gaussian distributions. The median and mode are found to better describe the tendency of the error distributions and IQR (interquartile range) and MAE (mean absolute error) are found

  16. Reliability Evaluation for the Surface to Air Missile Weapon Based on Cloud Model

    Directory of Open Access Journals (Sweden)

    Deng Jianjun

    2015-01-01

    Full Text Available The fuzziness and randomness is integrated by using digital characteristics, such as Expected value, Entropy and Hyper entropy. The cloud model adapted to reliability evaluation is put forward based on the concept of the surface to air missile weapon. The cloud scale of the qualitative evaluation is constructed, and the quantitative variable and the qualitative variable in the system reliability evaluation are corresponded. The practical calculation result shows that it is more effective to analyze the reliability of the surface to air missile weapon by this way. The practical calculation result also reflects the model expressed by cloud theory is more consistent with the human thinking style of uncertainty.

  17. Pressure Algometry in Icelandic Horses : Interexaminer and Intraexaminer Reliability

    NARCIS (Netherlands)

    Menke, Eveline S.; Blom, Guy; van Loon, Johannes P A M; Back, Willem

    2016-01-01

    Reliability of pressure algometry as an outcome measure in equine research and therapy needs to be studied. The aim of the present study was to establish interexaminer and intraexaminer reliability of pressure algometry in Icelandic horses and to determine reference mechanical nociceptive threshold

  18. NOAA GOES-R Series Advanced Baseline Imager (ABI) Level 2+ Cloud Top Pressure (CTP)

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — The Cloud Top Pressure product contains an image with pixel values identifying the atmospheric pressure at the top of a cloud layer. The product is generated in...

  19. Estimating the Reliability of the Elements of Cloud Services

    Directory of Open Access Journals (Sweden)

    Iuliia V. Ignatova

    2017-01-01

    Full Text Available Cloud technologies are a very considerable area that influence IT infrastructure, network services and applications. Research has highlighted difficulties in the functioning of cloud infrastructure. For instance, if a server is subjected to malicious attacks or a force majeure causes a failure in the cloud's service, it is required to determine the time that it takes the system to return to being fully functional after the crash. This will determine the technological and financial risks faced by the owner and end users of cloud services. Therefore, to solve the problem of determining the expected time before service is resumed after a failure, we propose to apply Markovian queuing systems, specifically a model of a multi-channel queuing system with Poisson input flow and denial-of-service (breakdown. (original abstract

  20. A Fast Optimization Method for Reliability and Performance of Cloud Services Composition Application

    Directory of Open Access Journals (Sweden)

    Zhao Wu

    2013-01-01

    Full Text Available At present the cloud computing is one of the newest trends of distributed computation, which is propelling another important revolution of software industry. The cloud services composition is one of the key techniques in software development. The optimization for reliability and performance of cloud services composition application, which is a typical stochastic optimization problem, is confronted with severe challenges due to its randomness and long transaction, as well as the characteristics of the cloud computing resources such as openness and dynamic. The traditional reliability and performance optimization techniques, for example, Markov model and state space analysis and so forth, have some defects such as being too time consuming and easy to cause state space explosion and unsatisfied the assumptions of component execution independence. To overcome these defects, we propose a fast optimization method for reliability and performance of cloud services composition application based on universal generating function and genetic algorithm in this paper. At first, a reliability and performance model for cloud service composition application based on the multiple state system theory is presented. Then the reliability and performance definition based on universal generating function is proposed. Based on this, a fast reliability and performance optimization algorithm is presented. In the end, the illustrative examples are given.

  1. Probabilistic assessment of pressure vessel and piping reliability

    International Nuclear Information System (INIS)

    Sundararajan, C.

    1986-01-01

    The paper presents a critical review of the state-of-the-art in probabilistic assessment of pressure vessel and piping reliability. First the differences in assessing the reliability directly from historical failure data and indirectly by a probabilistic analysis of the failure phenomenon are discussed and the advantages and disadvantages are pointed out. The rest of the paper deals with the latter approach of reliability assessment. Methods of probabilistic reliability assessment are described and major projects where these methods are applied for pressure vessel and piping problems are discussed. An extensive list of references is provided at the end of the paper

  2. Reliability of blood pressure measurement and cardiovascular risk prediction

    NARCIS (Netherlands)

    van der Hoeven, N.V.

    2016-01-01

    High blood pressure is one of the leading risk factors for cardiovascular disease, but difficult to reliably assess because there are many factors which can influence blood pressure including stress, exercise or illness. The first part of this thesis focuses on possible ways to improve the

  3. Cloud Incubator Car: A Reliable Platform for Autonomous Driving

    Directory of Open Access Journals (Sweden)

    Raúl Borraz

    2018-02-01

    Full Text Available It appears clear that the future of road transport is going through enormous changes (intelligent transport systems, the main one being the Intelligent Vehicle (IV. Automated driving requires a huge research effort in multiple technological areas: sensing, control, and driving algorithms. We present a comprehensible and reliable platform for autonomous driving technology development as well as for testing purposes, developed in the Intelligent Vehicles Lab at the Technical University of Cartagena. We propose an open and modular architecture capable of easily integrating a wide variety of sensors and actuators which can be used for testing algorithms and control strategies. As a proof of concept, this paper presents a reliable and complete navigation application for a commercial vehicle (Renault Twizy. It comprises a complete perception system (2D LIDAR, 3D HD LIDAR, ToF cameras, Real-Time Kinematic (RTK unit, Inertial Measurement Unit (IMU, an automation of the driving elements of the vehicle (throttle, steering, brakes, and gearbox, a control system, and a decision-making system. Furthermore, two flexible and reliable algorithms are presented for carrying out global and local route planning on board autonomous vehicles.

  4. Porting Your Applications and Saving Data In Cloud As Reliable Entity.

    Directory of Open Access Journals (Sweden)

    Cosmin Cătălin Olteanu

    2013-12-01

    Full Text Available The main purpose of the paper is to illustrate the importance of a reliable service in the meanings of cloud computing. The dynamics of an organization shows us that porting customs applications in cloud can makes the difference to be a successful company and to deliver what the client need just in time. Every employ should be able to access and enter data from everywhere. Remember that the office is moving along with the employ nowadays. But this concept comes with disadvantages of how safe is your data if you cannot control exactly, by your employs, those machines.

  5. Reliability Assessment of Cloud Computing Platform Based on Semiquantitative Information and Evidential Reasoning

    Directory of Open Access Journals (Sweden)

    Hang Wei

    2016-01-01

    Full Text Available A reliability assessment method based on evidential reasoning (ER rule and semiquantitative information is proposed in this paper, where a new reliability assessment architecture including four aspects with both quantitative data and qualitative knowledge is established. The assessment architecture is more objective in describing complex dynamic cloud computing environment than that in traditional method. In addition, the ER rule which has good performance for multiple attribute decision making problem is employed to integrate different types of the attributes in assessment architecture, which can obtain more accurate assessment results. The assessment results of the case study in an actual cloud computing platform verify the effectiveness and the advantage of the proposed method.

  6. Towards a More Reliable and Available Docker-based Container Cloud

    OpenAIRE

    Verma, Mudit; Dhawan, Mohan

    2017-01-01

    Operating System-level virtualization technology, or containers as they are commonly known, represents the next generation of light-weight virtualization, and is primarily represented by Docker. However, Docker's current design does not complement the SLAs from Docker-based container cloud offerings promising both reliability and high availability. The tight coupling between the containers and the Docker daemon proves fatal for the containers' uptime during daemon's unavailability due to eith...

  7. Dusty Cloud Acceleration by Radiation Pressure in Rapidly Star-forming Galaxies

    Science.gov (United States)

    Zhang, Dong; Davis, Shane W.; Jiang, Yan-Fei; Stone, James M.

    2018-02-01

    We perform two-dimensional and three-dimensional radiation hydrodynamic simulations to study cold clouds accelerated by radiation pressure on dust in the environment of rapidly star-forming galaxies dominated by infrared flux. We utilize the reduced speed of light approximation to solve the frequency-averaged, time-dependent radiative transfer equation. We find that radiation pressure is capable of accelerating the clouds to hundreds of kilometers per second while remaining dense and cold, consistent with observations. We compare these results to simulations where acceleration is provided by entrainment in a hot wind, where the momentum injection of the hot flow is comparable to the momentum in the radiation field. We find that the survival time of the cloud accelerated by the radiation field is significantly longer than that of a cloud entrained in a hot outflow. We show that the dynamics of the irradiated cloud depends on the initial optical depth, temperature of the cloud, and intensity of the flux. Additionally, gas pressure from the background may limit cloud acceleration if the density ratio between the cloud and background is ≲ {10}2. In general, a 10 pc-scale optically thin cloud forms a pancake structure elongated perpendicular to the direction of motion, while optically thick clouds form a filamentary structure elongated parallel to the direction of motion. The details of accelerated cloud morphology and geometry can also be affected by other factors, such as the cloud lengthscale, reduced speed of light approximation, spatial resolution, initial cloud structure, and dimensionality of the run, but these have relatively little affect on the cloud velocity or survival time.

  8. Stress Rupture Life Reliability Measures for Composite Overwrapped Pressure Vessels

    Science.gov (United States)

    Murthy, Pappu L. N.; Thesken, John C.; Phoenix, S. Leigh; Grimes-Ledesma, Lorie

    2007-01-01

    Composite Overwrapped Pressure Vessels (COPVs) are often used for storing pressurant gases onboard spacecraft. Kevlar (DuPont), glass, carbon and other more recent fibers have all been used as overwraps. Due to the fact that overwraps are subjected to sustained loads for an extended period during a mission, stress rupture failure is a major concern. It is therefore important to ascertain the reliability of these vessels by analysis, since the testing of each flight design cannot be completed on a practical time scale. The present paper examines specifically a Weibull statistics based stress rupture model and considers the various uncertainties associated with the model parameters. The paper also examines several reliability estimate measures that would be of use for the purpose of recertification and for qualifying flight worthiness of these vessels. Specifically, deterministic values for a point estimate, mean estimate and 90/95 percent confidence estimates of the reliability are all examined for a typical flight quality vessel under constant stress. The mean and the 90/95 percent confidence estimates are computed using Monte-Carlo simulation techniques by assuming distribution statistics of model parameters based also on simulation and on the available data, especially the sample sizes represented in the data. The data for the stress rupture model are obtained from the Lawrence Livermore National Laboratories (LLNL) stress rupture testing program, carried out for the past 35 years. Deterministic as well as probabilistic sensitivities are examined.

  9. Reliability aspects of radiation damage in reactor pressure vessel mterials

    International Nuclear Information System (INIS)

    Brumovsky, M.

    1985-01-01

    The service life estimate is a major factor in the evaluation of the operating reliability and safety of a nuclear reactor pressure vessel. The evaluation of the service life of the pressure vessel is based on a comparison of fracture toughness values with stress intensity factors. Notch toughness curves are used for the indirect determination of fracture toughness. The dominant degradation effect is radiation embrittlement. Factors having the greatest effect on the result are the properties of the starting material of the vessel and the impurity content, mainly the Cu and P content. The design life is affected by the evaluation of residual lifetime which is made by periodical nondestructive inspections and using surveillance samples. (M.D.)

  10. RSAM: An enhanced architecture for achieving web services reliability in mobile cloud computing

    Directory of Open Access Journals (Sweden)

    Amr S. Abdelfattah

    2018-04-01

    Full Text Available The evolution of the mobile landscape is coupled with the ubiquitous nature of the internet with its intermittent wireless connectivity and the web services. Achieving the web service reliability results in low communication overhead and retrieving the appropriate response. The middleware approach (MA is highly tended to achieve the web service reliability. This paper proposes a Reliable Service Architecture using Middleware (RSAM that achieves the reliable web services consumption. The enhanced architecture focuses on ensuring and tracking the request execution under the communication limitations and service temporal unavailability. It considers the most measurement factors including: request size, response size, and consuming time. We conducted experiments to compare the enhanced architecture with the traditional one. In these experiments, we covered several cases to prove the achievement of reliability. Results also show that the request size was found to be constant, the response size is identical to the traditional architecture, and the increase in the consuming time was less than 5% of the transaction time with the different response sizes. Keywords: Reliable web service, Middleware architecture, Mobile cloud computing

  11. Modeling Message Queueing Services with Reliability Guarantee in Cloud Computing Environment Using Colored Petri Nets

    Directory of Open Access Journals (Sweden)

    Jing Li

    2015-01-01

    Full Text Available Motivated by the need for loosely coupled and asynchronous dissemination of information, message queues are widely used in large-scale application areas. With the advent of virtualization technology, cloud-based message queueing services (CMQSs with distributed computing and storage are widely adopted to improve availability, scalability, and reliability; however, a critical issue is its performance and the quality of service (QoS. While numerous approaches evaluating system performance are available, there is no modeling approach for estimating and analyzing the performance of CMQSs. In this paper, we employ both the analytical and simulation modeling to address the performance of CMQSs with reliability guarantee. We present a visibility-based modeling approach (VMA for simulation model using colored Petri nets (CPN. Our model incorporates the important features of message queueing services in the cloud such as replication, message consistency, resource virtualization, and especially the mechanism named visibility timeout which is adopted in the services to guarantee system reliability. Finally, we evaluate our model through different experiments under varied scenarios to obtain important performance metrics such as total message delivery time, waiting number, and components utilization. Our results reveal considerable insights into resource scheduling and system configuration for service providers to estimate and gain performance optimization.

  12. Mechanical factors affecting reliability of pressure components (fatigue, cracking)

    International Nuclear Information System (INIS)

    Lebey, J.; Garnier, C.; Roche, R.; Barrachin, B.

    1978-01-01

    The reliability of a pressure component can be seriously affected by the formation and development of cracks. The experimental studies presented in this paper are devoted to three different aspects of crack propagation phenomena which have been relatively little described. In close connection with safety analyses of PWR, the authors study the influence of the environment by carrying out fatigue tests with samples bathed in hot pressurized water. Ferritic, austenitic and Incolloy 800 steels were used and the results are presented in the form of fatigue curves in the oligocyclic region. The second part of the paper relates to crack initiation cirteria in ductile steels weakened by notches. The CT samples used make it possible to study almost all types of fracture (ductile, intermediate and brittle). The use of two criteria based on the load limit and on the toughness of the material constitutes a practical way of evaluating crack propagation conditions. A series of tests carried out on notched spherical vessels of different size shows that large vessels are relatively brittle; fast unstable fracture is observed as size increases. Crack growth rate in PWR primary circuits (3/6 steel) is studied on piping elements (0.25 scale) subjected to cyclic stress variations (285 0 C and with pressure varying between 1 and 160 bar in each cycle). By calculating the stress intensity factor, correlation with results obtained in the laboratory on CT samples is possible. (author)

  13. Spectral shifting strongly constrains molecular cloud disruption by radiation pressure on dust

    Science.gov (United States)

    Reissl, Stefan; Klessen, Ralf S.; Mac Low, Mordecai-Mark; Pellegrini, Eric W.

    2018-03-01

    Aim. We aim to test the hypothesis that radiation pressure from young star clusters acting on dust is the dominant feedback agent disrupting the largest star-forming molecular clouds and thus regulating the star-formation process. Methods: We performed multi-frequency, 3D, radiative transfer calculations including both scattering and absorption and re-emission to longer wavelengths for model clouds with masses of 104-107 M⊙, containing embedded clusters with star formation efficiencies of 0.009-91%, and varying maximum grain sizes up to 200 μm. We calculated the ratio between radiative and gravitational forces to determine whether radiation pressure can disrupt clouds. Results: We find that radiation pressure acting on dust almost never disrupts star-forming clouds. Ultraviolet and optical photons from young stars to which the cloud is optically thick do not scatter much. Instead, they quickly get absorbed and re-emitted by the dust at thermal wavelengths. As the cloud is typically optically thin to far-infrared radiation, it promptly escapes, depositing little momentum in the cloud. The resulting spectrum is more narrowly peaked than the corresponding Planck function, and exhibits an extended tail at longer wavelengths. As the opacity drops significantly across the sub-mm and mm wavelength regime, the resulting radiative force is even smaller than for the corresponding single-temperature blackbody. We find that the force from radiation pressure falls below the strength of gravitational attraction by an order of magnitude or more for either Milky Way or moderate starbust conditions. Only for unrealistically large maximum grain sizes, and star formation efficiencies far exceeding 50% do we find that the strength of radiation pressure can exceed gravity. Conclusions: We conclude that radiation pressure acting on dust does not disrupt star-forming molecular clouds in any Local Group galaxies. Radiation pressure thus appears unlikely to regulate the star

  14. EEC-sponsored theoretical studies of gas cloud explosion pressure loadings

    International Nuclear Information System (INIS)

    Briscoe, F.; Curtress, N.; Farmer, C.L.; Fogg, G.J.; Vaughan, G.J.

    1979-01-01

    Estimates of the pressure loadings produced by unconfined gas cloud explosions on the surface of structures are required to assist the design of strong secondary containments in countries where the protection of nuclear installations against these events is considered to be necessary. At the present time, one difficulty in the specification of occurate pressure loadings arises from our lack of knowledge concerning the interaction between the incident pressure waves produced by unconfined gas cloud explosions and large structures. Preliminary theoretical studies include (i) general theoretical considerations, especially with regard to scaling (ii) investigations of the deflagration wave interaction with a wall based on an analytic solution for situations with planar symmetry and the application of an SRD gas cloud explosion code (GASEX 1) for situations with planar and spherical symmetry, and (iii) investigations of the interaction between shock waves and structures for situations with two-dimensional symmetry based on the application of another SRD gas cloud explosion code (GASEX 2)

  15. High pressure, high current, low inductance, high reliability sealed terminals

    Science.gov (United States)

    Hsu, John S [Oak Ridge, TN; McKeever, John W [Oak Ridge, TN

    2010-03-23

    The invention is a terminal assembly having a casing with at least one delivery tapered-cone conductor and at least one return tapered-cone conductor routed there-through. The delivery and return tapered-cone conductors are electrically isolated from each other and positioned in the annuluses of ordered concentric cones at an off-normal angle. The tapered cone conductor service can be AC phase conductors and DC link conductors. The center core has at least one service conduit of gate signal leads, diagnostic signal wires, and refrigerant tubing routed there-through. A seal material is in direct contact with the casing inner surface, the tapered-cone conductors, and the service conduits thereby hermetically filling the interstitial space in the casing interior core and center core. The assembly provides simultaneous high-current, high-pressure, low-inductance, and high-reliability service.

  16. Cloud-based calculators for fast and reliable access to NOAA's geomagnetic field models

    Science.gov (United States)

    Woods, A.; Nair, M. C.; Boneh, N.; Chulliat, A.

    2017-12-01

    While the Global Positioning System (GPS) provides accurate point locations, it does not provide pointing directions. Therefore, the absolute directional information provided by the Earth's magnetic field is of primary importance for navigation and for the pointing of technical devices such as aircrafts, satellites and lately, mobile phones. The major magnetic sources that affect compass-based navigation are the Earth's core, its magnetized crust and the electric currents in the ionosphere and magnetosphere. NOAA/CIRES Geomagnetism (ngdc.noaa.gov/geomag/) group develops and distributes models that describe all these important sources to aid navigation. Our geomagnetic models are used in variety of platforms including airplanes, ships, submarines and smartphones. While the magnetic field from Earth's core can be described in relatively fewer parameters and is suitable for offline computation, the magnetic sources from Earth's crust, ionosphere and magnetosphere require either significant computational resources or real-time capabilities and are not suitable for offline calculation. This is especially important for small navigational devices or embedded systems, where computational resources are limited. Recognizing the need for a fast and reliable access to our geomagnetic field models, we developed cloud-based application program interfaces (APIs) for NOAA's ionospheric and magnetospheric magnetic field models. In this paper we will describe the need for reliable magnetic calculators, the challenges faced in running geomagnetic field models in the cloud in real-time and the feedback from our user community. We discuss lessons learned harvesting and validating the data which powers our cloud services, as well as our strategies for maintaining near real-time service, including load-balancing, real-time monitoring, and instance cloning. We will also briefly talk about the progress we achieved on NOAA's Big Earth Data Initiative (BEDI) funded project to develop API

  17. Reliable intraocular pressure measurement using automated radio-wave telemetry

    Science.gov (United States)

    Paschalis, Eleftherios I; Cade, Fabiano; Melki, Samir; Pasquale, Louis R; Dohlman, Claes H; Ciolino, Joseph B

    2014-01-01

    Purpose To present an autonomous intraocular pressure (IOP) measurement technique using a wireless implantable transducer (WIT) and a motion sensor. Methods The WIT optical aid was implanted within the ciliary sulcus of a normotensive rabbit eye after extracapsular clear lens extraction. An autonomous wireless data system (AWDS) comprising of a WIT and an external antenna aided by a motion sensor provided continuous IOP readings. The sensitivity of the technique was determined by the ability to detect IOP changes resulting from the administration of latanoprost 0.005% or dorzolamide 2%, while the reliability was determined by the agreement between baseline and vehicle (saline) IOP. Results On average, 12 diurnal and 205 nocturnal IOP measurements were performed with latanoprost, and 26 diurnal and 205 nocturnal measurements with dorzolamide. No difference was found between mean baseline IOP (13.08±2.2 mmHg) and mean vehicle IOP (13.27±2.1 mmHg) (P=0.45), suggesting good measurement reliability. Both antiglaucoma medications caused significant IOP reduction compared to baseline; latanoprost reduced mean IOP by 10% (1.3±3.54 mmHg; P<0.001), and dorzolamide by 5% (0.62±2.22 mmHg; P<0.001). Use of latanoprost resulted in an overall twofold higher IOP reduction compared to dorzolamide (P<0.001). Repeatability was ±1.8 mmHg, assessed by the variability of consecutive IOP measurements performed in a short period of time (≤1 minute), during which the IOP is not expected to change. Conclusion IOP measurements in conscious rabbits obtained without the need for human interactions using the AWDS are feasible and provide reproducible results. PMID:24531415

  18. Reliable intraocular pressure measurement using automated radio-wave telemetry.

    Science.gov (United States)

    Paschalis, Eleftherios I; Cade, Fabiano; Melki, Samir; Pasquale, Louis R; Dohlman, Claes H; Ciolino, Joseph B

    2014-01-01

    To present an autonomous intraocular pressure (IOP) measurement technique using a wireless implantable transducer (WIT) and a motion sensor. The WIT optical aid was implanted within the ciliary sulcus of a normotensive rabbit eye after extracapsular clear lens extraction. An autonomous wireless data system (AWDS) comprising of a WIT and an external antenna aided by a motion sensor provided continuous IOP readings. The sensitivity of the technique was determined by the ability to detect IOP changes resulting from the administration of latanoprost 0.005% or dorzolamide 2%, while the reliability was determined by the agreement between baseline and vehicle (saline) IOP. On average, 12 diurnal and 205 nocturnal IOP measurements were performed with latanoprost, and 26 diurnal and 205 nocturnal measurements with dorzolamide. No difference was found between mean baseline IOP (13.08±2.2 mmHg) and mean vehicle IOP (13.27±2.1 mmHg) (P=0.45), suggesting good measurement reliability. Both antiglaucoma medications caused significant IOP reduction compared to baseline; latanoprost reduced mean IOP by 10% (1.3±3.54 mmHg; P<0.001), and dorzolamide by 5% (0.62±2.22 mmHg; P<0.001). Use of latanoprost resulted in an overall twofold higher IOP reduction compared to dorzolamide (P<0.001). Repeatability was ±1.8 mmHg, assessed by the variability of consecutive IOP measurements performed in a short period of time (≤1 minute), during which the IOP is not expected to change. IOP measurements in conscious rabbits obtained without the need for human interactions using the AWDS are feasible and provide reproducible results.

  19. Reliable intraocular pressure measurement using automated radio-wave telemetry

    Directory of Open Access Journals (Sweden)

    Paschalis EI

    2014-01-01

    Full Text Available Eleftherios I Paschalis,* Fabiano Cade,* Samir Melki, Louis R Pasquale, Claes H Dohlman, Joseph B CiolinoMassachusetts Eye and Ear Infirmary, Harvard Medical School, Boston, MA, USA*These authors contributed equally to this workPurpose: To present an autonomous intraocular pressure (IOP measurement technique using a wireless implantable transducer (WIT and a motion sensor.Methods: The WIT optical aid was implanted within the ciliary sulcus of a normotensive rabbit eye after extracapsular clear lens extraction. An autonomous wireless data system (AWDS comprising of a WIT and an external antenna aided by a motion sensor provided continuous IOP readings. The sensitivity of the technique was determined by the ability to detect IOP changes resulting from the administration of latanoprost 0.005% or dorzolamide 2%, while the reliability was determined by the agreement between baseline and vehicle (saline IOP.Results: On average, 12 diurnal and 205 nocturnal IOP measurements were performed with latanoprost, and 26 diurnal and 205 nocturnal measurements with dorzolamide. No difference was found between mean baseline IOP (13.08±2.2 mmHg and mean vehicle IOP (13.27±2.1 mmHg (P=0.45, suggesting good measurement reliability. Both antiglaucoma medications caused significant IOP reduction compared to baseline; latanoprost reduced mean IOP by 10% (1.3±3.54 mmHg; P<0.001, and dorzolamide by 5% (0.62±2.22 mmHg; P<0.001. Use of latanoprost resulted in an overall twofold higher IOP reduction compared to dorzolamide (P<0.001. Repeatability was ±1.8 mmHg, assessed by the variability of consecutive IOP measurements performed in a short period of time (≤1 minute, during which the IOP is not expected to change.Conclusion: IOP measurements in conscious rabbits obtained without the need for human interactions using the AWDS are feasible and provide reproducible results.Keywords: IOP, pressure transducer, wireless, MEMS, implant, intraocular

  20. About reliability of WWER pressure vessel neutron fluence calculation

    Energy Technology Data Exchange (ETDEWEB)

    Belousov, S; Ilieva, K; Antonov, S [Bylgarska Akademiya na Naukite, Sofia (Bulgaria). Inst. za Yadrena Izsledvaniya i Yadrena Energetika

    1996-12-31

    This reliability study was carried out under a Research Contracts F111 and TH-324 of the Bulgarian Ministry of Higher Education and the IAEA. The effect of geometry approximation and the choice of neutron cross-sections on the calculation model is estimated. The neutron flux onto reactor pressure vessel at locations, important for metal embrittlement surveillance, has been calculated using the codes TORT and DORT. The flux values calculated for both WWER-440 and WWER-1000 show good consistency within the limits of solution accuracy. It is concluded that the synthesis method (DORT) can be used for calculations at a reasonable cost whenever metal embrittlement surveillance is considered. Using an iron sphere benchmark measurement, a comparison of an experimental leakage spectrum with spectrum calculated using multigroup neutron cross-sections based on ENDF/B-4 and ENDF/B-6 data is performed. In the energy region above 1 MeV the best agreement with the experiment is achieved for ENDF/B-6 in VITAMIN-E group structure. 7 refs., 1 fig., 4 tabs.

  1. About reliability of WWER pressure vessel neutron fluence calculation

    International Nuclear Information System (INIS)

    Belousov, S.; Ilieva, K.; Antonov, S.

    1995-01-01

    This reliability study was carried out under a Research Contracts F111 and TH-324 of the Bulgarian Ministry of Higher Education and the IAEA. The effect of geometry approximation and the choice of neutron cross-sections on the calculation model is estimated. The neutron flux onto reactor pressure vessel at locations, important for metal embrittlement surveillance, has been calculated using the codes TORT and DORT. The flux values calculated for both WWER-440 and WWER-1000 show good consistency within the limits of solution accuracy. It is concluded that the synthesis method (DORT) can be used for calculations at a reasonable cost whenever metal embrittlement surveillance is considered. Using an iron sphere benchmark measurement, a comparison of an experimental leakage spectrum with spectrum calculated using multigroup neutron cross-sections based on ENDF/B-4 and ENDF/B-6 data is performed. In the energy region above 1 MeV the best agreement with the experiment is achieved for ENDF/B-6 in VITAMIN-E group structure. 7 refs., 1 fig., 4 tabs

  2. A Comparison and Calibration of a Wrist-Worn Blood Pressure Monitor for Patient Management: Assessing the Reliability of Innovative Blood Pressure Devices

    Science.gov (United States)

    Melville, Sarah; Teskey, Robert; Philip, Shona; Simpson, Jeremy A; Lutchmedial, Sohrab

    2018-01-01

    Background Clinical guidelines recommend monitoring of blood pressure at home using an automatic blood pressure device for the management of hypertension. Devices are not often calibrated against direct blood pressure measures, leaving health care providers and patients with less reliable information than is possible with current technology. Rigorous assessments of medical devices are necessary for establishing clinical utility. Objective The purpose of our study was 2-fold: (1) to assess the validity and perform iterative calibration of indirect blood pressure measurements by a noninvasive wrist cuff blood pressure device in direct comparison with simultaneously recorded peripheral and central intra-arterial blood pressure measurements and (2) to assess the validity of the measurements thereafter of the noninvasive wrist cuff blood pressure device in comparison with measurements by a noninvasive upper arm blood pressure device to the Canadian hypertension guidelines. Methods The cloud-based blood pressure algorithms for an oscillometric wrist cuff device were iteratively calibrated to direct pressure measures in 20 consented patient participants. We then assessed measurement validity of the device, using Bland-Altman analysis during routine cardiovascular catheterization. Results The precalibrated absolute mean difference between direct intra-arterial to wrist cuff pressure measurements were 10.8 (SD 9.7) for systolic and 16.1 (SD 6.3) for diastolic. The postcalibrated absolute mean difference was 7.2 (SD 5.1) for systolic and 4.3 (SD 3.3) for diastolic pressures. This is an improvement in accuracy of 33% systolic and 73% diastolic with a 48% reduction in the variability for both measures. Furthermore, the wrist cuff device demonstrated similar sensitivity in measuring high blood pressure compared with the direct intra-arterial method. The device, when calibrated to direct aortic pressures, demonstrated the potential to reduce a treatment gap in high blood

  3. Validity and reliability of central blood pressure estimated by upper arm oscillometric cuff pressure.

    Science.gov (United States)

    Climie, Rachel E D; Schultz, Martin G; Nikolic, Sonja B; Ahuja, Kiran D K; Fell, James W; Sharman, James E

    2012-04-01

    Noninvasive central blood pressure (BP) independently predicts mortality, but current methods are operator-dependent, requiring skill to obtain quality recordings. The aims of this study were first, to determine the validity of an automatic, upper arm oscillometric cuff method for estimating central BP (O(CBP)) by comparison with the noninvasive reference standard of radial tonometry (T(CBP)). Second, we determined the intratest and intertest reliability of O(CBP). To assess validity, central BP was estimated by O(CBP) (Pulsecor R6.5B monitor) and compared with T(CBP) (SphygmoCor) in 47 participants free from cardiovascular disease (aged 57 ± 9 years) in supine, seated, and standing positions. Brachial mean arterial pressure (MAP) and diastolic BP (DBP) from the O(CBP) device were used to calibrate in both devices. Duplicate measures were recorded in each position on the same day to assess intratest reliability, and participants returned within 10 ± 7 days for repeat measurements to assess intertest reliability. There was a strong intraclass correlation (ICC = 0.987, P difference (1.2 ± 2.2 mm Hg) for central systolic BP (SBP) determined by O(CBP) compared with T(CBP). Ninety-six percent of all comparisons (n = 495 acceptable recordings) were within 5 mm Hg. With respect to reliability, there were strong correlations but higher limits of agreement for the intratest (ICC = 0.975, P difference 0.6 ± 4.5 mm Hg) and intertest (ICC = 0.895, P difference 4.3 ± 8.0 mm Hg) comparisons. Estimation of central SBP using cuff oscillometry is comparable to radial tonometry and has good reproducibility. As a noninvasive, relatively operator-independent method, O(CBP) may be as useful as T(CBP) for estimating central BP in clinical practice.

  4. Optical nucleation of bubble clouds in a high pressure spherical resonator.

    Science.gov (United States)

    Anderson, Phillip; Sampathkumar, A; Murray, Todd W; Gaitan, D Felipe; Glynn Holt, R

    2011-11-01

    An experimental setup for nucleating clouds of bubbles in a high-pressure spherical resonator is described. Using nanosecond laser pulses and multiple phase gratings, bubble clouds are optically nucleated in an acoustic field. Dynamics of the clouds are captured using a high-speed CCD camera. The images reveal cloud nucleation, growth, and collapse and the resulting emission of radially expanding shockwaves. These shockwaves are reflected at the interior surface of the resonator and then reconverge to the center of the resonator. As the shocks reconverge upon the center of the resonator, they renucleate and grow the bubble cloud. This process is repeated over many acoustic cycles and with each successive shock reconvergence, the bubble cloud becomes more organized and centralized so that subsequent collapses give rise to stronger, better defined shockwaves. After many acoustic cycles individual bubbles cannot be distinguished and the cloud is then referred to as a cluster. Sustainability of the process is ultimately limited by the detuning of the acoustic field inside the resonator. The nucleation parameter space is studied in terms of laser firing phase, laser energy, and acoustic power used.

  5. The Effects of Ram Pressure on the Cold Clouds in the Centers of Galaxy Clusters

    Science.gov (United States)

    Li, Yuan; Ruszkowski, Mateusz; Tremblay, Grant

    2018-02-01

    We discuss the effect of ram pressure on the cold clouds in the centers of cool-core galaxy clusters, and in particular, how it reduces cloud velocity and sometimes causes an offset between the cold gas and young stars. The velocities of the molecular gas in both observations and our simulations fall in the range of 100–400 km s‑1, which is much lower than expected if they fall from a few tens of kiloparsecs ballistically. If the intracluster medium (ICM) is at rest, the ram pressure of the ICM only slightly reduces the velocity of the clouds. When we assume that the clouds are actually “fluffier” because they are co-moving with a warm-hot layer, the velocity becomes smaller. If we also consider the active galactic nucleus wind in the cluster center by adding a wind profile measured from the simulation, the clouds are further slowed down at small radii, and the resulting velocities are in general agreement with the observations and simulations. Because ram pressure only affects gas but not stars, it can cause a separation between a filament and young stars that formed in the filament as they move through the ICM together. This separation has been observed in Perseus and also exists in our simulations. We show that the star-filament offset, combined with line-of-sight velocity measurements, can help determine the true motion of the cold gas, and thus distinguish between inflows and outflows.

  6. Inferences about pressures and vertical extension of cloud layers from POLDER3/PARASOL measurements in the oxygen A-band

    Science.gov (United States)

    Desmons, Marine; Ferlay, Nicolas; Parol, Frédéric; Vanbauce, Claudine; Mcharek, Linda

    2013-05-01

    We present new inferences about cloud vertical structures from multidirectionnal measurements in the oxygen A-band. The analysis of collocated data provided by instruments onboard satellite platforms within the A-Train, as well as simulations have shown that for monolayered clouds, the cloud oxygen pressure PO2 derived from the POLDER3 instrument was sensitive to the cloud vertical structure in two ways: First, PO2 is actually close to the pressure of the geometrical middle of cloud and we propose a method to correct it to get the cloud top pressure (CTP), and then to obtain the cloud geometrical extent. Second, for the liquid water clouds, the angular standard deviation σPO2 of PO2 is correlated with the geometrical extent of cloud layers, which makes possible a second estimation of the cloud geometrical thickness. The determination of the vertical location of cloud layers from passive measurements, eventually completed from other observations, would be useful in many applications for which cloud macrophysical properties are needed.

  7. Inter- and intra-observer reliability of masking in plantar pressure measurement analysis.

    Science.gov (United States)

    Deschamps, K; Birch, I; Mc Innes, J; Desloovere, K; Matricali, G A

    2009-10-01

    Plantar pressure measurement is an important tool in gait analysis. Manual placement of small masks (masking) is increasingly used to calculate plantar pressure characteristics. Little is known concerning the reliability of manual masking. The aim of this study was to determine the reliability of masking on 2D plantar pressure footprints, in a population with forefoot deformity (i.e. hallux valgus). Using a random repeated-measure design, four observers identified the third metatarsal head on a peak-pressure barefoot footprint, using a small mask. Subsequently, the location of all five metatarsal heads was identified, using the same size of masks and the same protocol. The 2D positional variation of the masks and the peak pressure (PP) and pressure time integral (PTI) values of each mask were calculated. For single-masking the lowest inter-observer reliability was found for the distal-proximal direction, causing a clear, adverse impact on the reliability of the pressure characteristics (PP and PTI). In the medial-lateral direction the inter-observer reliability could be scored as high. Intra-observer reliability was better and could be scored as high or good for both directions, with a correlated improved reliability of the pressure characteristics. Reliability of multi-masking showed a similar pattern, but overall values tended to be lower. Therefore, small sized masking in order to define pressure characteristics in the forefoot should be done with care.

  8. On the propagation of the pressure pulse due to an unconfined gas cloud explosion

    International Nuclear Information System (INIS)

    Essers, J.A.

    1985-01-01

    A critical analysis of flow models used in computer codes for the simulation of the propagation in air of a pressure pulse due to a gas cloud explosion is presented. In particular, weaknesses of the simple linear acoustic model are pointed out, and a more reliable non-linear isentropic model is proposed. A simple one-dimensional theory is used to evaluate, as a function of the relative overpressure, the speed of an incident normal shock-wave, as well as the strength and speed of the wave after reflection on a simplified rigid obstacle. Results obtained with the different models are compared to those obtained from the full Euler equations. A theoretical analysis of pulse deformation during its propagation is presented, and the ability of each model to correctly simulate that purely non-linear phenomenon is discussed. In particular, the formation of a sharp pressure pulse (shock-up phenomenon) is analyzed in detail. From the analysis, the accuracy of the linear acoustic model for the evaluation of strength and speed of incident and reflected waves is found to be quite poor except for very weak overpressures. Additionally, such a model is completely unable to simulate pulse deformations. As a result, it should be expected to lead to significant errors in the simulation of pulse interaction with non-rigid obstacles, even at very weak overpressures. As opposed to that very simple model, the proposed non-linear isentropic model is found to lead to an excellent accuracy in the prediction of all wave characteristics mentioned above and in the simulation of pulse deformation if overpressure is not too large. (author)
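    For a perfect gas, the incident shock speed follows from the relative overpressure through the standard normal-shock relation p2/p1 = 1 + 2*gamma/(gamma+1) * (M^2 - 1); the snippet below is a generic textbook implementation of that one-dimensional evaluation, not the author's code.

```python
import math

def shock_mach(dp_over_p0, gamma=1.4):
    """Mach number of a normal shock producing relative overpressure dp/p0:
    p2/p1 = 1 + 2*gamma/(gamma+1) * (M^2 - 1), solved for M."""
    return math.sqrt(1.0 + (gamma + 1.0) / (2.0 * gamma) * dp_over_p0)

a0 = 340.0  # ambient speed of sound, m/s
for dp in (0.01, 0.1, 0.5, 1.0):   # relative overpressure dp/p0
    m = shock_mach(dp)
    print(f"dp/p0 = {dp:4.2f}: M = {m:.3f}, shock speed = {m*a0:6.1f} m/s")
```

    At dp/p0 = 0.01 the wave travels essentially at the sound speed, while at dp/p0 = 1 it is about 36% faster, which is why the linear acoustic model degrades quickly as the overpressure grows.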

  9. Pressurizer pump reliability analysis high flux isotope reactor

    International Nuclear Information System (INIS)

    Merryman, L.; Christie, B.

    1993-01-01

    During a prolonged outage from November 1986 to May 1990, numerous changes were made at the High Flux Isotope Reactor (HFIR). Some of these changes involved the pressurizer pumps. An analysis was performed to calculate the impact of these changes on the pressurizer system availability. The analysis showed that the availability of the pressurizer system dropped from essentially 100% to approximately 96%. The primary reason for the decrease in availability is that off-site power grid disturbances sometimes result in a reactor trip with the present pressurizer pump configuration. Changes are being made to the present pressurizer pump configuration to regain some of the lost availability

  10. Method of core thermodynamic reliability determination in pressurized water reactors

    Energy Technology Data Exchange (ETDEWEB)

    Ackermann, G.; Horche, W. (Ingenieurhochschule Zittau (German Democratic Republic). Sektion Kraftwerksanlagenbau und Energieumwandlung)

    1983-01-01

    A statistical model appropriate to determine the thermodynamic reliability and the power-limiting parameter of PWR cores is described for cases of accidental transients. The model is compared with the hot channel model hitherto applied.

  11. Method of core thermodynamic reliability determination in pressurized water reactors

    International Nuclear Information System (INIS)

    Ackermann, G.; Horche, W.

    1983-01-01

    A statistical model appropriate to determine the thermodynamic reliability and the power-limiting parameter of PWR cores is described for cases of accidental transients. The model is compared with the hot channel model hitherto applied. (author)

  12. A Correlated Model for Evaluating Performance and Energy of Cloud System Given System Reliability

    Directory of Open Access Journals (Sweden)

    Hongli Zhang

    2015-01-01

    Full Text Available The serious issue of energy consumption for high performance computing systems has attracted much attention, and performance and energy-saving have become important measures of a computing system. In the cloud computing environment, the system usually allocates various resources (such as CPU, memory, and storage) to multiple virtual machines (VMs) for executing tasks. Therefore, the problem of resource allocation for running VMs has a significant influence on both system performance and energy consumption. For different processor utilizations assigned to a VM, there exists a tradeoff between energy consumption and task completion time when a given task is executed. Moreover, the hardware failure, software failure and restoration characteristics also have obvious influences on overall performance and energy. In this paper, a correlated model is built to analyze both performance and energy in the VM execution environment under a reliability restriction, and an optimization model is presented to derive the most effective processor utilization for the VM. Then, the tradeoff between energy-saving and task completion time is studied and balanced when the VMs execute given tasks. Numerical examples are given to illustrate the performance-energy correlated model and to evaluate the expected values of task completion time and consumed energy.
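    The abstract does not give the model equations, so the sketch below only illustrates the kind of utilization tradeoff it describes, using a generic power model P(u) = P_idle + (P_max - P_idle) * u^alpha and a brute-force search over processor utilization; W, C, P_IDLE, P_MAX and ALPHA are all assumed values, not the paper's.

```python
# Illustrative energy/completion-time tradeoff for one VM, assuming a
# generic power model P(u) = P_idle + (P_max - P_idle) * u**alpha.

W = 1.0e12        # task size in CPU cycles (assumed)
C = 2.0e9         # processor capacity in cycles/s (assumed)
P_IDLE, P_MAX, ALPHA = 70.0, 250.0, 2.0   # watts, assumed

def completion_time(u):
    return W / (u * C)                    # seconds

def energy(u):
    return (P_IDLE + (P_MAX - P_IDLE) * u**ALPHA) * completion_time(u)

def best_utilization(weight=0.5, grid=200):
    """Minimize a weighted sum of normalized time and energy over u."""
    us = [i / grid for i in range(1, grid + 1)]
    t_max = completion_time(us[0])
    e_max = max(energy(u) for u in us)
    score = lambda u: (weight * completion_time(u) / t_max
                       + (1 - weight) * energy(u) / e_max)
    return min(us, key=score)

u_star = best_utilization(weight=0.5)
print(f"u* = {u_star:.2f}, T = {completion_time(u_star):.0f} s, "
      f"E = {energy(u_star)/1e3:.1f} kJ")
```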

  13. High-pressure cloud point data for the system glycerol + olive oil + n-butane + AOT

    OpenAIRE

    Bender, J. P.; Junges, A.; Franceschi, E.; Corazza, F. C.; Dariva, C.; Oliveira, J. Vladimir; Corazza, M. L.

    2008-01-01

    This work reports high-pressure cloud point data for the quaternary system glycerol + olive oil + n-butane + AOT surfactant. The static synthetic method, using a variable-volume view cell, was employed for obtaining the experimental data at pressures up to 27 MPa. The effects of glycerol/olive oil concentration and surfactant addition on the pressure transition values were evaluated in the temperature range from 303 K to 343 K. For the system investigated, vapor-liquid (VLE), liquid-liquid (L...

  14. Improving the capacity, reliability & life of mobile devices with cloud computing

    CSIR Research Space (South Africa)

    Nkosi, MT

    2011-05-01

    Full Text Available devices. The approach in this paper is to model the mobile cloud computing process in a 3GPP IMS software development and emulator environment, and to show that multimedia and security operations can be performed in the cloud, allowing mobile service...

  15. Reliability of in-Shoe Plantar Pressure Measurements in Rheumatoid Arthritis Patients

    Science.gov (United States)

    Vidmar, Gaj; Novak, Primoz

    2009-01-01

    Plantar pressure measurement is a frequently used method in rehabilitation and related research. Metric characteristics of the F-Scan system have been assessed from different standpoints and in different patients, but not its reliability in rheumatoid arthritis patients. Therefore, our objective was to assess the reliability of the F-Scan plantar…

  16. Studying the influence of temperature and pressure on microphysical properties of mixed-phase clouds using airborne measurements

    Science.gov (United States)

    Andreea, Boscornea; Sabina, Stefan; Sorin-Nicolae, Vajaiac; Mihai, Cimpuieru

    2015-04-01

    One cloud type for which the formation and evolution process is not well understood is the mixed-phase type. In general, mixed-phase clouds consist of liquid droplets and ice crystals. The temperature interval within which liquid droplets and ice crystals can potentially coexist is limited to between 0 °C and -40 °C. Mixed-phase clouds account for 20% to 30% of the global cloud coverage. The need to understand the microphysical characteristics of mixed-phase clouds to improve numerical forecast modeling and radiative transfer calculations is of major interest to the atmospheric community. In the past, studies of cloud phase composition were significantly limited by a lack of aircraft instruments capable of discriminating between the ice and liquid phase for a wide range of particle sizes. Presently, in situ airborne measurements provide the most accurate information about cloud microphysical characteristics. This information can be used for verification of both numerical models and cloud remote-sensing techniques. Knowledge of the temperature and pressure variation during airborne measurements is crucial in order to understand their influence on cloud dynamics and their role in cloud formation processes like accretion and coalescence. Therefore, this paper presents a comprehensive study of cloud microphysical properties in mixed-phase clouds, focusing on the influence of temperature and pressure variation on both cloud dynamics and cloud formation processes, using measurements performed with the ATMOSLAB - Airborne Laboratory for Environmental Atmospheric Research operated by the National Institute for Aerospace Research "Elie Carafoli" (INCAS). The airborne laboratory, equipped for special research missions, is based on a Hawker Beechcraft - King Air C90 GTx aircraft and carries a CAPS - Cloud, Aerosol and Precipitation Spectrometer (30 bins, 0.51-50 µm) and a HAWKEYE cloud probe. The analyzed data in this

  17. High-pressure cloud point data for the system glycerol + olive oil + n-butane + AOT

    Directory of Open Access Journals (Sweden)

    J. P. Bender

    2008-09-01

    Full Text Available This work reports high-pressure cloud point data for the quaternary system glycerol + olive oil + n-butane + AOT surfactant. The static synthetic method, using a variable-volume view cell, was employed for obtaining the experimental data at pressures up to 27 MPa. The effects of glycerol/olive oil concentration and surfactant addition on the pressure transition values were evaluated in the temperature range from 303 K to 343 K. For the system investigated, vapor-liquid (VLE), liquid-liquid (LLE) and vapor-liquid-liquid (VLLE) equilibria were recorded. It was experimentally observed that, at a given temperature and surfactant content, an increase in the glycerol/oil ratio led to a pronounced increase in the slope of the liquid-liquid coexistence curve. A comparison with results reported for the same system but using propane as solvent showed that much lower pressure transition values are obtained when using n-butane.

  18. Analysis of fatigue reliability for high temperature and high pressure multi-stage decompression control valve

    Science.gov (United States)

    Yu, Long; Xu, Juanjuan; Zhang, Lifang; Xu, Xiaogang

    2018-03-01

    A reliability mathematical model for the high temperature and high pressure multi-stage decompression control valve (HMDCV) is established based on stress-strength interference theory, and a temperature correction coefficient is introduced to revise the material fatigue limit at high temperature. The reliability of the key dangerous components and the fatigue sensitivity curve of each component are calculated and analyzed by combining the fatigue life analysis of the control valve with reliability theory. The contribution of each component to fatigue failure of the control valve system was obtained. The results show that the temperature correction factor makes the theoretical reliability calculations more accurate, that the predicted life expectancy of the main pressure-bearing parts meets the technical requirements, and that the valve body and the sleeve have an obvious influence on control system reliability; stress concentration in key parts of the control valve can be reduced in the design process by improving the structure.
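    A minimal sketch of a stress-strength interference calculation with a temperature correction on the fatigue limit, assuming normally distributed stress and strength, so that R = Phi((mu_strength - mu_stress) / sqrt(sd_strength^2 + sd_stress^2)); the distributions and the correction coefficient k_T are illustrative values, not the paper's data.

```python
from math import sqrt
from statistics import NormalDist

def reliability(mu_strength, sd_strength, mu_stress, sd_stress):
    """Stress-strength interference: R = P(strength > stress) for
    independent normal stress and strength distributions."""
    z = (mu_strength - mu_stress) / sqrt(sd_strength**2 + sd_stress**2)
    return NormalDist().cdf(z)

# Illustrative values (MPa).  k_T derates the room-temperature fatigue
# limit for operation at high temperature (assumed coefficient).
k_T = 0.75
mu_fatigue = 420.0 * k_T    # temperature-corrected mean fatigue strength
sd_fatigue = 35.0
mu_stress, sd_stress = 220.0, 30.0

print(f"R = {reliability(mu_fatigue, sd_fatigue, mu_stress, sd_stress):.5f}")
```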

  19. Problems and chances for probabilistic fracture mechanics in the analysis of steel pressure boundary reliability

    Energy Technology Data Exchange (ETDEWEB)

    Staat, M [Forschungszentrum Juelich GmbH (Germany). Inst. fuer Sicherheitsforschung und Reaktortechnik

    1996-12-01

    It is shown that the difficulty for probabilistic fracture mechanics (PFM) is the general problem of the high reliability of a small population. There is no way around the problem as yet. Therefore what PFM can contribute to the reliability of steel pressure boundaries is demonstrated with the example of a typical reactor pressure vessel and critically discussed. Although no method is distinguishable that could give exact failure probabilities, PFM has several additional chances. Upper limits for failure probability may be obtained together with trends for design and operating conditions. Further, PFM can identify the most sensitive parameters, improved control of which would increase reliability. Thus PFM should play a vital role in the analysis of steel pressure boundaries despite all shortcomings. (author). 19 refs, 7 figs, 1 tab.

  20. Problems and chances for probabilistic fracture mechanics in the analysis of steel pressure boundary reliability

    International Nuclear Information System (INIS)

    Staat, M.

    1996-01-01

    It is shown that the difficulty for probabilistic fracture mechanics (PFM) is the general problem of the high reliability of a small population. There is no way around the problem as yet. Therefore what PFM can contribute to the reliability of steel pressure boundaries is demonstrated with the example of a typical reactor pressure vessel and critically discussed. Although no method is distinguishable that could give exact failure probabilities, PFM has several additional chances. Upper limits for failure probability may be obtained together with trends for design and operating conditions. Further, PFM can identify the most sensitive parameters, improved control of which would increase reliability. Thus PFM should play a vital role in the analysis of steel pressure boundaries despite all shortcomings. (author). 19 refs, 7 figs, 1 tab

  1. Reliability analysis of pipelines and pressure vessels at nuclear power plants

    International Nuclear Information System (INIS)

    Klemin, A.I.; Shiverskij, E.A.

    1979-01-01

    A reliability analysis of pipelines and pressure vessels at NPPs is given. The main causes and failure mechanisms of these elements, the ways of improving reliability, and the prevention of major damage are considered. Reliability estimation methods are given both for cases where statistical operating data exist and for conditions where failure statistics are absent. The main characteristics and actual reliability factors of the pipelines and pressure vessels of three domestic NPPs are presented: the world's first NPP, VK-50, and the Beloyarsk NPP. Since start-up there have been practically no failures of the pipelines and pressure vessels at the VK-50 pilot installation. The analysis of the operating experience of the first and second units of the Beloyarsk NPP, as well as the world's first NPP, shows that most failures of the pipelines and pressure vessels of these units with channel reactors are connected with coolant leakage in a minority of small-diameter pipelines. Most of the failures of individual pipelines of the first and second units of the Beloyarsk NPP are connected with leakages of the stuffing boxes of shut-off devices. It is noted that no serious failures of large pipelines and pressure vessels have been observed at any domestic NPP in operation

  2. Reliability

    OpenAIRE

    Condon, David; Revelle, William

    2017-01-01

    Separating the signal in a test from the irrelevant noise is a challenge for all measurement. Low test reliability limits test validity, attenuates important relationships, and can lead to regression artifacts. Multiple approaches to the assessment and improvement of reliability are discussed. The advantages and disadvantages of several different approaches to reliability are considered. Practical advice on how to assess reliability using open source software is provided.
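    As a language-neutral companion to the open-source advice mentioned above, here is a self-contained computation of Cronbach's alpha, one of the classical internal-consistency estimates such a tutorial typically covers; the response matrix is toy data, not material from the chapter.

```python
# Cronbach's alpha for a persons-by-items score matrix:
#   alpha = k/(k-1) * (1 - sum(item variances) / variance of total score)

def cronbach_alpha(scores):
    """scores: list of rows, one row of item responses per person."""
    k = len(scores[0])

    def var(xs):                      # sample variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    item_vars = [var([row[i] for row in scores]) for i in range(k)]
    total_var = var([sum(row) for row in scores])
    return k / (k - 1) * (1 - sum(item_vars) / total_var)

# Five people answering four Likert-type items (toy data)
data = [[4, 4, 3, 4], [2, 3, 2, 2], [5, 4, 4, 5], [3, 3, 3, 2], [4, 5, 4, 4]]
print(f"alpha = {cronbach_alpha(data):.3f}")
```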

  3. A comparison of ground and satellite observations of cloud cover to saturation pressure differences during a cold air outbreak

    Energy Technology Data Exchange (ETDEWEB)

    Alliss, R.J.; Raman, S. [North Carolina State Univ., Raleigh, NC (United States)

    1996-04-01

    The role of clouds in the atmospheric general circulation and the global climate is twofold. First, clouds owe their origin to large-scale dynamical forcing, radiative cooling in the atmosphere, and turbulent transfer at the surface. In addition, they provide one of the most important mechanisms for the vertical redistribution of momentum and sensible and latent heat for the large scale, and they influence the coupling between the atmosphere and the surface as well as the radiative and dynamical-hydrological balance. In existing diagnostic cloudiness parameterization schemes, relative humidity is the most frequently used variable for estimating total cloud amount or stratiform cloud amount. However, the prediction of relative humidity in general circulation models (GCMs) is usually poor. Even for the most comprehensive GCMs, the predicted relative humidity may deviate greatly from that observed, as far as the frequency distribution of relative humidity is concerned. Recently, there has been an increased effort to improve the representation of clouds and cloud-radiation feedback in GCMs, but the verification of cloudiness parameterization schemes remains a severe problem because of the lack of observational data sets. In this study, saturation pressure differences (as opposed to relative humidity) and satellite-derived cloud heights and amounts are compared with ground determinations of cloud cover over the Gulf Stream Locale (GSL) during a cold air outbreak.

  4. Evaluation of the Validity and Reliability of the Waterlow Pressure Ulcer Risk Assessment Scale.

    Science.gov (United States)

    Charalambous, Charalambos; Koulori, Agoritsa; Vasilopoulos, Aristidis; Roupa, Zoe

    2018-04-01

    Prevention is the ideal strategy to tackle the problem of pressure ulcers. Pressure ulcer risk assessment scales are one of the most pivotal measures applied to tackle the problem, but much criticism has been raised regarding the validity and reliability of these scales. To investigate the validity and reliability of the Waterlow pressure ulcer risk assessment scale. The methodology used is a narrative literature review; the bibliography was reviewed through Cinahl, Pubmed, EBSCO, Medline and Google Scholar, and 26 scientific articles were identified. The articles were chosen due to their direct correlation with the objective under study and their scientific relevance. The construct and face validity of the Waterlow appear adequate, but with regard to content validity, changes in the age and gender categories could be beneficial. The concurrent validity cannot be assessed. The predictive validity of the Waterlow is characterized by high specificity and low sensitivity. The inter-rater reliability has been demonstrated to be inadequate; this may be due to a lack of clear definitions within the categories and differing levels of knowledge between users. Due to the limitations presented regarding the validity and reliability of the Waterlow pressure ulcer risk assessment scale, the scale should be used in conjunction with clinical assessment to provide optimum results.

  5. Evaluation of the Validity and Reliability of the Waterlow Pressure Ulcer Risk Assessment Scale

    Science.gov (United States)

    Charalambous, Charalambos; Koulori, Agoritsa; Vasilopoulos, Aristidis; Roupa, Zoe

    2018-01-01

    Introduction Prevention is the ideal strategy to tackle the problem of pressure ulcers. Pressure ulcer risk assessment scales are one of the most pivotal measures applied to tackle the problem, but much criticism has been raised regarding the validity and reliability of these scales. Objective To investigate the validity and reliability of the Waterlow pressure ulcer risk assessment scale. Method The methodology used is a narrative literature review; the bibliography was reviewed through Cinahl, Pubmed, EBSCO, Medline and Google Scholar, and 26 scientific articles were identified. The articles were chosen due to their direct correlation with the objective under study and their scientific relevance. Results The construct and face validity of the Waterlow appear adequate, but with regard to content validity, changes in the age and gender categories could be beneficial. The concurrent validity cannot be assessed. The predictive validity of the Waterlow is characterized by high specificity and low sensitivity. The inter-rater reliability has been demonstrated to be inadequate; this may be due to a lack of clear definitions within the categories and differing levels of knowledge between users. Conclusion Due to the limitations presented regarding the validity and reliability of the Waterlow pressure ulcer risk assessment scale, the scale should be used in conjunction with clinical assessment to provide optimum results. PMID:29736104

  6. Impact of inservice inspection on the reliability of pressure vessels and piping

    International Nuclear Information System (INIS)

    Bush, S.H.

    1975-01-01

    The reliability of pressure components of a nuclear reactor is a function of the as-fabricated quality plus a program of continuing inspection during operation. Since insufficient data exist to quantitatively determine failure probabilities of nuclear pressure vessels and piping, it is necessary to utilize information from comparable non-nuclear systems such as power boilers. Based on probabilistic studies it is inferred that in-service inspection improves component reliability one-to-two orders of magnitude depending on the type and completeness of the inspections. An attempt is made to assess the significance of the ASME Section XI Code as to relative completeness of inspection and the probable improvement in reliability. (U.S.)

  7. Impact of inservice inspection on the reliability of pressure vessels and piping

    International Nuclear Information System (INIS)

    Bush, S.H.

    1974-01-01

    The reliability of pressure components of a nuclear reactor is a function of the quality as-fabricated plus a program of continuing inspection during operation. Since insufficient data exist to quantitatively determine failure probabilities of nuclear pressure vessels and piping, it is necessary to utilize information from comparable non-nuclear systems such as power boilers. Based on probabilistic studies it is inferred that in-service inspection improves component reliability one-to-two orders of magnitude depending on the type and completeness of the inspections. An attempt is made to assess the significance of the ASME Section XI Code as to relative completeness of inspection and the probable improvement in reliability. (U.S.)

  8. Validity and reliability of the Turkish version of the pressure ulcer prevention knowledge assessment instrument.

    Science.gov (United States)

    Tulek, Zeliha; Polat, Cansu; Ozkan, Ilknur; Theofanidis, Dimitris; Togrol, Rifat Erdem

    2016-11-01

    Sound knowledge of pressure ulcers is important to enable good prevention, yet instruments assessing pressure ulcer knowledge are limited. The Pressure Ulcer Prevention Knowledge Assessment Instrument is among the scales whose psychometric properties have been studied rigorously, and it reflects the latest evidence. This study aimed to evaluate the validity and reliability of the Turkish version of the Pressure Ulcer Prevention Knowledge Assessment Instrument (PUPKAI-T), an instrument that assesses knowledge of pressure ulcer prevention using multiple-choice questions. Linguistic validity was verified through forward-backward translation. Psychometric properties of the instrument were studied on a sample of 150 nurses working in a tertiary hospital in Istanbul, Turkey. The content validity index of the translated instrument was 0.94, intra-class correlation coefficients were between 0.37 and 0.80, item difficulty indices were between 0.21 and 0.88, discrimination indices were 0.20-0.78, and the Kuder-Richardson coefficient for internal consistency was 0.803. The PUPKAI-T was found to be a valid and reliable tool to evaluate nurses' knowledge of pressure ulcer prevention, and may be useful for determining the educational needs of nurses on pressure ulcer prevention. Copyright © 2016 Tissue Viability Society. Published by Elsevier Ltd. All rights reserved.
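    For dichotomously scored multiple-choice items, the Kuder-Richardson coefficient reported above is conventionally KR-20; a minimal sketch of that formula follows, with toy responses rather than the study's data.

```python
# Kuder-Richardson formula 20 for dichotomous (0/1) item scores:
#   KR20 = k/(k-1) * (1 - sum(p_i * q_i) / var(total score))

def kr20(scores):
    """scores: list of rows, one row of 0/1 item scores per respondent."""
    k = len(scores[0])
    n = len(scores)
    pq = 0.0
    for i in range(k):
        p = sum(row[i] for row in scores) / n   # proportion answering correctly
        pq += p * (1 - p)
    totals = [sum(row) for row in scores]
    mean = sum(totals) / n
    var = sum((t - mean) ** 2 for t in totals) / (n - 1)   # sample variance
    return k / (k - 1) * (1 - pq / var)

data = [[1, 1, 0, 1, 1], [0, 1, 0, 0, 1], [1, 1, 1, 1, 1],
        [0, 0, 0, 1, 0], [1, 0, 1, 1, 1], [1, 1, 0, 0, 0]]
print(f"KR-20 = {kr20(data):.3f}")
```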

  9. A fracture mechanics and reliability based method to assess non-destructive testings for pressure vessels

    International Nuclear Information System (INIS)

    Kitagawa, Hideo; Hisada, Toshiaki

    1979-01-01

    No quantitative evaluation has been made of the effects of carrying out preservice and in-service nondestructive tests, on which large expenses and labor are spent to secure the soundness, safety and maintainability of pressure vessels. In particular, the problems concerning the timing and interval of in-service inspections lack a reasonable, quantitative evaluation method. In this paper, these problems are treated with an analysis method developed on the basis of reliability technology and probability theory. The growth of surface cracks in pressure vessels was estimated using the results of previous studies. The effects of nondestructive inspection on defects in pressure vessels were evaluated, and the influences of many factors, such as plate thickness, stress, and the accuracy of inspection, on the effects of inspection were investigated, together with a method of evaluating inspections at unequal intervals. The reliability analysis taking in-service inspection into consideration, the evaluation of in-service inspection and other affecting factors through typical examples of analysis, and a review concerning the timing of inspection are described. The method of analyzing the reliability of pressure vessels, considering the growth of defects and preservice and in-service nondestructive tests, was systematized so as to be practically usable. (Kako, I.)

  10. Reliability analysis of protection system of advanced pressurized water reactor - APR 1400

    International Nuclear Information System (INIS)

    Varde, P. V.; Choi, J. G.; Lee, D. Y.; Han, J. B.

    2003-04-01

    Reliability analysis was carried out for the protection system of the Korean Advanced Pressurized Water Reactor - APR 1400. The main focus of this study was the reliability analysis of the digital protection system; however, toward giving an integrated statement of complete protection reliability, an attempt has been made to include the shutdown devices and other related aspects based on the information available to date. A sensitivity analysis has been carried out for the critical components/functions in the system. Other aspects, such as importance analysis and human error reliability for the critical human actions, form part of this work. The framework provided by this study and the results obtained show that this analysis has the potential to be utilized as part of a risk-informed approach for future design and regulatory applications

  11. Some infant ventilators do not limit peak inspiratory pressure reliably during active expiration.

    Science.gov (United States)

    Kirpalani, H; Santos-Lyn, R; Roberts, R

    1988-09-01

    In order to minimize barotrauma in newborn infants with respiratory failure, peak inspiratory pressures should not exceed those required for adequate gas exchange. We examined whether four commonly used pressure-limited, constant flow ventilators limit pressure reliably during simulated active expiration against the inspiratory stroke of the ventilator. Three machines of each type were tested at 13 different expiratory flow rates (2 to 14 L/min). Flow-dependent pressure overshoot above a dialed pressure limit of 20 cm H2O was observed in all machines. However, the magnitude differed significantly between ventilators from different manufacturers (p = .0009). Pressure overshoot above 20 cm H2O was consistently lowest in the Healthdyne (0.8 cm H2O at 2 L/min, 3.6 cm H2O at 14 L/min) and highest in the Bourns BP200 (3.0 cm H2O at 2 L/min, 15.4 cm H2O at 14 L/min). We conclude that peak inspiratory pressure overshoots on pressure-limited ventilators occur during asynchronous expiration. This shortcoming may contribute to barotrauma in newborn infants who "fight" positive-pressure ventilation.

  12. Improving ATLAS grid site reliability with functional tests using HammerCloud

    CERN Document Server

    Legger, F; The ATLAS collaboration

    2012-01-01

    With the exponential growth of LHC (Large Hadron Collider) data in 2011, and more coming in 2012, distributed computing has become the established way to analyse collider data. The ATLAS grid infrastructure includes almost 100 sites worldwide, ranging from large national computing centers to smaller university clusters. These facilities are used for data reconstruction and simulation, which are centrally managed by the ATLAS production system, and for distributed user analysis. To ensure the smooth operation of such a complex system, regular tests of all sites are necessary to validate the site capability of successfully executing user and production jobs. We report on the development, optimization and results of an automated functional testing suite using the HammerCloud framework. Functional tests are short light-weight applications covering typical user analysis and production schemes, which are periodically submitted to all ATLAS grid sites. Results from those tests are collected and used to evaluate site...

  13. Improving ATLAS grid site reliability with functional tests using HammerCloud

    CERN Document Server

    Legger, F; The ATLAS collaboration; Medrano Llamas, R; Sciacca, G; Van der Ster, D C

    2012-01-01

    With the exponential growth of LHC (Large Hadron Collider) data in 2011, and more coming in 2012, distributed computing has become the established way to analyse collider data. The ATLAS grid infrastructure includes more than 80 sites worldwide, ranging from large national computing centers to smaller university clusters. These facilities are used for data reconstruction and simulation, which are centrally managed by the ATLAS production system, and for distributed user analysis. To ensure the smooth operation of such a complex system, regular tests of all sites are necessary to validate the site capability of successfully executing user and production jobs. We report on the development, optimization and results of an automated functional testing suite using the HammerCloud framework. Functional tests are short light-weight applications covering typical user analysis and production schemes, which are periodically submitted to all ATLAS grid sites. Results from those tests are collected and used to evaluate si...

  14. Reliability and responsiveness of algometry for measuring pressure pain threshold in patients with knee osteoarthritis.

    Science.gov (United States)

    Mutlu, Ebru Kaya; Ozdincler, Arzu Razak

    2015-06-01

    [Purpose] This study aimed to establish the intrarater reliability and responsiveness of a clinically available algometer in patients with knee osteoarthritis as well as to determine the minimum-detectable-change and standard error of measurement of testing to facilitate clinical interpretation of temporal changes. [Subjects] Seventy-three patients with knee osteoarthritis were included. [Methods] Pressure pain threshold measured by algometry was evaluated 3 times at 2-min intervals on the same day over two clinically relevant sites: mediolateral to the medial femoral tubercle (distal) and lateral to the medial malleolus (local). Intrarater reliability was estimated by intraclass correlation coefficients. The minimum-detectable-change and standard error of measurement were calculated. As a measure of responsiveness, the effect size was calculated for the results at baseline and after treatment. [Results] The intrarater reliability was almost perfect (intraclass correlation coefficient = 0.93-0.97). The standard error of measurement and minimum-detectable-change were 0.70-0.66 and 1.62-1.53, respectively. The pressure pain threshold over the distal site was inadequately responsive in knee osteoarthritis, but the local site was responsive. The effect size was 0.70. [Conclusion] Algometry is reliable and responsive to assess measures of pressure pain threshold for evaluating pain patients with knee osteoarthritis.
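    SEM and MDC follow from the ICC and the between-subject standard deviation by the standard formulas SEM = SD * sqrt(1 - ICC) and MDC = z * sqrt(2) * SEM. In the sketch below the SD is back-solved so that the SEM matches the reported 0.70 (the abstract does not give the raw SD), and both the 95% and 90% confidence variants are shown, since conventions differ and the reported MDC values are consistent with the 90% level.

```python
from math import sqrt

def sem(sd, icc):
    """Standard error of measurement from between-subject SD and ICC."""
    return sd * sqrt(1.0 - icc)

def mdc(sem_value, z=1.96):
    """Minimum detectable change: z * sqrt(2) * SEM
    (z = 1.96 for MDC95, z = 1.645 for MDC90)."""
    return z * sqrt(2.0) * sem_value

sd, icc = 2.64, 0.93        # illustrative: sd chosen so SEM matches 0.70
s = sem(sd, icc)
print(f"SEM = {s:.2f}")
print(f"MDC95 = {mdc(s):.2f}, MDC90 = {mdc(s, z=1.645):.2f}")
```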

  15. Design and implementation of a reliable and cost-effective cloud computing infrastructure: the INFN Napoli experience

    International Nuclear Information System (INIS)

    Capone, V; Esposito, R; Pardi, S; Taurino, F; Tortone, G

    2012-01-01

    Over the last few years we have seen an increasing number of services and applications needed to manage and maintain cloud computing facilities. This is particularly true for computing in high energy physics, which often requires complex configurations and distributed infrastructures. In this scenario a cost effective rationalization and consolidation strategy is the key to success in terms of scalability and reliability. In this work we describe an IaaS (Infrastructure as a Service) cloud computing system, with high availability and redundancy features, which is currently in production at INFN-Naples and ATLAS Tier-2 data centre. The main goal we intended to achieve was a simplified method to manage our computing resources and deliver reliable user services, reusing existing hardware without incurring heavy costs. A combined usage of virtualization and clustering technologies allowed us to consolidate our services on a small number of physical machines, reducing electric power costs. As a result of our efforts we developed a complete solution for data and computing centres that can be easily replicated using commodity hardware. Our architecture consists of 2 main subsystems: a clustered storage solution, built on top of disk servers running GlusterFS file system, and a virtual machines execution environment. GlusterFS is a network file system able to perform parallel writes on multiple disk servers, providing this way live replication of data. High availability is also achieved via a network configuration using redundant switches and multiple paths between hypervisor hosts and disk servers. We also developed a set of management scripts to easily perform basic system administration tasks such as automatic deployment of new virtual machines, adaptive scheduling of virtual machines on hypervisor hosts, live migration and automated restart in case of hypervisor failures.
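    The INFN management scripts themselves are not listed in the paper, so the following is only a generic illustration of one of the tasks mentioned (automated restart of virtual machines) using the libvirt Python bindings; the connection URI and the simplistic restart-everything policy are assumptions, not the production configuration.

```python
# Generic watchdog sketch: restart any defined-but-stopped VM on this host.
# Requires the libvirt Python bindings; URI and policy are illustrative.
import libvirt

def restart_stopped_domains(uri="qemu:///system"):
    conn = libvirt.open(uri)
    try:
        for dom in conn.listAllDomains():
            if not dom.isActive():
                print(f"restarting domain {dom.name()}")
                dom.create()          # boot the defined, inactive domain
    finally:
        conn.close()

if __name__ == "__main__":
    restart_stopped_domains()
```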

  16. Design and implementation of a reliable and cost-effective cloud computing infrastructure: the INFN Napoli experience

    Science.gov (United States)

    Capone, V.; Esposito, R.; Pardi, S.; Taurino, F.; Tortone, G.

    2012-12-01

    Over the last few years we have seen an increasing number of services and applications needed to manage and maintain cloud computing facilities. This is particularly true for computing in high energy physics, which often requires complex configurations and distributed infrastructures. In this scenario a cost effective rationalization and consolidation strategy is the key to success in terms of scalability and reliability. In this work we describe an IaaS (Infrastructure as a Service) cloud computing system, with high availability and redundancy features, which is currently in production at INFN-Naples and ATLAS Tier-2 data centre. The main goal we intended to achieve was a simplified method to manage our computing resources and deliver reliable user services, reusing existing hardware without incurring heavy costs. A combined usage of virtualization and clustering technologies allowed us to consolidate our services on a small number of physical machines, reducing electric power costs. As a result of our efforts we developed a complete solution for data and computing centres that can be easily replicated using commodity hardware. Our architecture consists of 2 main subsystems: a clustered storage solution, built on top of disk servers running GlusterFS file system, and a virtual machines execution environment. GlusterFS is a network file system able to perform parallel writes on multiple disk servers, providing this way live replication of data. High availability is also achieved via a network configuration using redundant switches and multiple paths between hypervisor hosts and disk servers. We also developed a set of management scripts to easily perform basic system administration tasks such as automatic deployment of new virtual machines, adaptive scheduling of virtual machines on hypervisor hosts, live migration and automated restart in case of hypervisor failures.

  17. PREDICTED SIZES OF PRESSURE-SUPPORTED HI CLOUDS IN THE OUTSKIRTS OF THE VIRGO CLUSTER

    Energy Technology Data Exchange (ETDEWEB)

    Burkhart, Blakesley; Loeb, Abraham [Harvard-Smithsonian Center for Astrophysics, 60 Garden St. Cambridge, MA (United States)

    2016-06-10

    Using data from the ALFALFA AGES Arecibo HI survey of galaxies and the Virgo cluster X-ray pressure profiles from XMM-Newton, we investigate the possibility that starless dark HI clumps, also known as “dark galaxies,” are supported by external pressure in the surrounding intercluster medium. We find that the starless HI clump masses, velocity dispersions, and positions allow these clumps to be in pressure equilibrium with the X-ray gas near the virial radius of the Virgo cluster. We predict the sizes of these clumps to range from 1 to 10 kpc, in agreement with the range of sizes found for spatially resolved HI starless clumps outside of Virgo. Based on the predicted HI surface density of the Virgo sources, as well as a sample of other similar resolved ALFALFA HI dark clumps with follow-up optical/radio observations, we predict that most of the HI dark clumps are on the cusp of forming stars. These HI sources therefore mark the transition between starless HI clouds and dwarf galaxies with stars.
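    To see how a size prediction follows from mass, velocity dispersion, and confining pressure, here is a back-of-the-envelope version: for a pressure-confined, roughly uniform clump, P_ext ~ rho * sigma^2, which with rho = 3M/(4*pi*R^3) gives R ~ (3*M*sigma^2 / (4*pi*P_ext))^(1/3). The input values below are illustrative, not the catalog values used in the paper.

```python
from math import pi

M_SUN = 1.989e33   # g
PC = 3.086e18      # cm
K_B = 1.381e-16    # erg/K

def clump_radius_kpc(m_hi_msun, sigma_kms, p_over_k):
    """Radius (kpc) of a pressure-confined clump with P_ext ~ rho*sigma^2."""
    m = m_hi_msun * M_SUN
    sigma = sigma_kms * 1e5            # cm/s
    p_ext = p_over_k * K_B             # dyn/cm^2
    r_cm = (3.0 * m * sigma**2 / (4.0 * pi * p_ext)) ** (1.0 / 3.0)
    return r_cm / (1e3 * PC)

# Illustrative numbers: 10^7 M_sun of HI, sigma = 10 km/s, and an ICM
# pressure P/k ~ 1000 cm^-3 K near the Virgo virial radius (assumed).
print(f"R ~ {clump_radius_kpc(1e7, 10.0, 1e3):.1f} kpc")
```

    With these assumed inputs the estimate lands near 1 kpc, at the lower end of the 1-10 kpc range quoted above.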

  18. Human factors assessment of conflict resolution aid reliability and time pressure in future air traffic control.

    Science.gov (United States)

    Trapsilawati, Fitri; Qu, Xingda; Wickens, Chris D; Chen, Chun-Hsien

    2015-01-01

    Though it has been reported that air traffic controllers' (ATCos') performance improves with the aid of a conflict resolution aid (CRA), the effects of imperfect automation on CRA are so far unknown. The main objective of this study was to examine the effects of imperfect automation on conflict resolution. Twelve students with ATC knowledge were instructed to complete ATC tasks in four CRA conditions including reliable, unreliable and high time pressure, unreliable and low time pressure, and manual conditions. Participants were able to resolve the designated conflicts more accurately and faster in the reliable versus unreliable CRA conditions. When comparing the unreliable CRA and manual conditions, unreliable CRA led to better conflict resolution performance and higher situation awareness. Surprisingly, high time pressure triggered better conflict resolution performance as compared to the low time pressure condition. The findings from the present study highlight the importance of CRA in future ATC operations. Practitioner Summary: Conflict resolution aid (CRA) is a proposed automation decision aid in air traffic control (ATC). It was found in the present study that CRA was able to promote air traffic controllers' performance even when it was not perfectly reliable. These findings highlight the importance of CRA in future ATC operations.

  19. Simple, reliable, and nondestructive method for the measurement of vacuum pressure without specialized equipment.

    Science.gov (United States)

    Yuan, Jin-Peng; Ji, Zhong-Hua; Zhao, Yan-Ting; Chang, Xue-Fang; Xiao, Lian-Tuan; Jia, Suo-Tang

    2013-09-01

    We present a simple, reliable, and nondestructive method for the measurement of vacuum pressure in a magneto-optical trap. The vacuum pressure is verified to be proportional to the collision rate constant between cold atoms and the background gas with a coefficient k, which can be calculated by means of the simple ideal gas law. The rate constant for loss due to collisions with all background gases can be derived from the total collision loss rate by a series of loading curves of cold atoms under different trapping laser intensities. The presented method is also applicable for other cold atomic systems and meets the miniaturization requirement of commercial applications.
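    The loss rate enters through the standard MOT loading curve N(t) = (R/Gamma)(1 - exp(-Gamma*t)); fitting it gives Gamma, which the method converts to pressure through the coefficient k. The sketch below shows the fitting step on synthetic data; the noise level and the value of k are assumptions for illustration.

```python
# Fit a MOT loading curve N(t) = (R/Gamma) * (1 - exp(-Gamma*t)) to extract
# the total loss rate Gamma, then convert to pressure via P = Gamma / k.
import numpy as np
from scipy.optimize import curve_fit

def loading(t, big_r, gamma):
    return (big_r / gamma) * (1.0 - np.exp(-gamma * t))

rng = np.random.default_rng(0)
t = np.linspace(0.0, 10.0, 200)                    # s
true_r, true_gamma = 1.0e7, 0.8                    # atoms/s, 1/s
n = loading(t, true_r, true_gamma) * (1 + 0.02 * rng.standard_normal(t.size))

popt, pcov = curve_fit(loading, t, n, p0=(1e6, 0.1))
r_fit, gamma_fit = popt

k = 4.0e7                       # (s*Pa)^-1, assumed proportionality coefficient
print(f"Gamma = {gamma_fit:.3f} 1/s  ->  P ~ {gamma_fit / k:.2e} Pa")
```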

  20. Mobile Laser Scanning along Dieppe coastal cliffs: reliability of the acquired point clouds applied to rockfall assessments

    Science.gov (United States)

    Michoud, Clément; Carrea, Dario; Augereau, Emmanuel; Cancouët, Romain; Costa, Stéphane; Davidson, Robert; Delacourt, Christophe; Derron, Marc-Henri; Jaboyedoff, Michel; Letortu, Pauline; Maquaire, Olivier

    2013-04-01

    Dieppe coastal cliffs, in Normandy, France, are mainly formed by sub-horizontal deposits of chalk and flint. Largely destabilized by intense weathering and erosion by the Channel, small and large rockfalls are regularly observed and contribute to retrogressive cliff processes. During autumn 2012, cliff and intertidal topographies were acquired with a Terrestrial Laser Scanner (TLS) and a Mobile Laser Scanner (MLS), coupled with seafloor bathymetries realized with a multibeam echosounder (MBES). MLS is a recent development of laser scanning based on the same theoretical principles as aerial LiDAR, but using smaller, cheaper and portable devices. The MLS system, which is composed of an accurate dynamic positioning and orientation (INS) device and a long-range LiDAR, is mounted on a marine vessel; it is then possible to quickly acquire georeferenced LiDAR point clouds in motion with a resolution of about 15 cm. For example, it takes about 1 h to scan 2 km of shoreline. MLS is becoming a promising technique supporting erosion and rockfall assessments along the shores of lakes, fjords or seas. In this study, the MLS system used to acquire the cliffs and intertidal areas of the Cap d'Ailly was composed of the INS Applanix POS-MV 320 V4 and the LiDAR Optech ILRIS-LR. On the same day, three MLS scans with large overlaps (J1, J2 and J3) were performed at ranges from 600 m at 4 knots (low tide) up to 200 m at 2.2 knots (rising tide) with a calm sea at 2.5 Beaufort (small wavelets). Mean scan resolutions range from 26 cm for the far scan (J1) to about 8.1 cm for the close scan (J3). Moreover, one TLS point cloud of this test site was acquired with a mean resolution of about 2.3 cm, using a Riegl LMS Z390i. In order to quantify the reliability of the methodology, comparisons between scans were realized with the software Polyworks™, calculating shortest distances between the points of one cloud and the interpolated surface of the reference point cloud. A Mat

  1. Improving ATLAS grid site reliability with functional tests using HammerCloud

    Science.gov (United States)

    Elmsheuser, Johannes; Legger, Federica; Medrano Llamas, Ramon; Sciacca, Gianfranco; van der Ster, Dan

    2012-12-01

    With the exponential growth of LHC (Large Hadron Collider) data in 2011, and more coming in 2012, distributed computing has become the established way to analyse collider data. The ATLAS grid infrastructure includes almost 100 sites worldwide, ranging from large national computing centers to smaller university clusters. These facilities are used for data reconstruction and simulation, which are centrally managed by the ATLAS production system, and for distributed user analysis. To ensure the smooth operation of such a complex system, regular tests of all sites are necessary to validate the site capability of successfully executing user and production jobs. We report on the development, optimization and results of an automated functional testing suite using the HammerCloud framework. Functional tests are short, lightweight applications covering typical user analysis and production schemes, which are periodically submitted to all ATLAS grid sites. Results from those tests are collected and used to evaluate site performances. Sites that fail or are unable to run the tests are automatically excluded from the PanDA brokerage system, thereby preventing user or production jobs from being sent to problematic sites.

  2. Improved Reliability of SiC Pressure Sensors for Long Term High Temperature Applications

    Science.gov (United States)

    Okojie, R. S.; Nguyen, V.; Savrun, E.; Lukco, D.

    2011-01-01

    We report advancement in the reliability of silicon carbide pressure sensors operating at 600 C for extended periods. The large temporal drifts in zero pressure offset voltage at 600 C observed previously were significantly suppressed to allow improved reliable operation. This improvement was the result of further enhancement of the electrical and mechanical integrity of the bondpad/contact metallization, and the introduction of studded bump bonding on the pad. The stud bump contact promoted strong adhesion between the Au bond pad and the Au die-attach. The changes in the zero offset voltage and bridge resistance over time at temperature were explained by the microstructure and phase changes within the contact metallization, that were analyzed with Auger electron spectroscopy (AES) and field emission scanning electron microscopy (FE-SEM).

  3. Poor Reliability of Wrist Blood Pressure Self-Measurement at Home: A Population-Based Study.

    Science.gov (United States)

    Casiglia, Edoardo; Tikhonoff, Valérie; Albertini, Federica; Palatini, Paolo

    2016-10-01

    The reliability of blood pressure measurement with wrist devices, which has not previously been assessed under real-life circumstances in the general population, is dependent on correct positioning of the wrist device at heart level. We determined whether an error was present when blood pressure was self-measured at the wrist in 721 unselected subjects from the general population. After training, blood pressure was measured in the office and self-measured at home with an upper-arm device (the UA-767 Plus) and a wrist device (the UB-542, not provided with a position sensor). The upper-arm-wrist blood pressure difference detected in the office was used as the reference measurement. The discrepancy between office and home differences was the home measurement error. In the office, systolic blood pressure was 2.5% lower at wrist than at arm (P=0.002), whereas at home, systolic and diastolic blood pressures were higher at wrist than at arm (+5.6% and +5.4%, respectively). Wrist self-measurement at home thus overestimated blood pressure values, likely because of poor memory and rendition of the instructions, leading to a wrong position of the wrist. © 2016 American Heart Association, Inc.

  4. How simulation of failure risk can improve structural reliability - application to pressurized components and pipes

    OpenAIRE

    Cioclov, Dimitru Dragos

    2013-01-01

    Probabilistic methods for failure risk assessment are introduced, with reference to load-carrying structures such as pressure vessels (PV) and components of piping systems. The definition of the failure risk associated with structural integrity is made in the context of the general approach to structural reliability. Sources of risk are summarily outlined, with emphasis on the variability and uncertainties (V&U) which might be encountered in the analysis. To highlight the problem, in its practical...

  5. Reliability and responsiveness of algometry for measuring pressure pain threshold in patients with knee osteoarthritis

    OpenAIRE

    Mutlu, Ebru Kaya; Ozdincler, Arzu Razak

    2015-01-01

    [Purpose] This study aimed to establish the intrarater reliability and responsiveness of a clinically available algometer in patients with knee osteoarthritis as well as to determine the minimum-detectable-change and standard error of measurement of testing to facilitate clinical interpretation of temporal changes. [Subjects] Seventy-three patients with knee osteoarthritis were included. [Methods] Pressure pain threshold measured by algometry was evaluated 3 times at 2-min intervals over 2 cl...

  6. [Reliability and validity of the Braden Scale for predicting pressure sore risk].

    Science.gov (United States)

    Boes, C

    2000-12-01

    For more accurate and objective pressure sore risk assessment, various risk assessment tools were developed, mainly in the USA and Great Britain. The Braden Scale for Predicting Pressure Sore Risk is one such example. By means of a literature analysis of German and English texts referring to the Braden Scale, the scientific control criteria reliability and validity are traced and consequences for application of the scale in Germany are demonstrated. Analysis of 4 reliability studies shows an exclusive focus on interrater reliability. Further, even though the 19 validity studies examined cover many different settings, their examination is limited to the criteria of sensitivity and specificity (accuracy). Sensitivity and specificity levels range from 35-100%, and the recommended cut-off points lie in the range of 10 to 19 points. The studies prove not to be comparable with each other. Furthermore, distortions that affect the accuracy of the scale can be found in these studies. The results of the analysis presented here show insufficient proof of reliability and validity in the American studies. In Germany, the Braden Scale has not yet been tested against scientific criteria. Such testing is needed before the scale is used in different German settings; the construction and study procedures of the American studies, as well as the problems identified in this analysis, can serve as its basis.

  7. PRESSURE EQUILIBRIUM BETWEEN THE LOCAL INTERSTELLAR CLOUDS AND THE LOCAL HOT BUBBLE

    Energy Technology Data Exchange (ETDEWEB)

    Snowden, S. L.; Chiao, M.; Collier, M. R.; Porter, F. S.; Thomas, N. E. [NASA/Goddard Space Flight Center, Greenbelt, MD 20771 (United States); Cravens, T.; Robertson, I. P. [Department of Physics and Astronomy, University of Kansas, 1251 Wescoe Hall Drive, Lawrence, KS 66045 (United States); Galeazzi, M.; Uprety, Y.; Ursino, E. [Department of Physics, University of Miami, 1320 Campo Sano Drive, Coral Gables, FL 33146 (United States); Koutroumpa, D. [Université Versailles St-Quentin, Sorbonne Universités, UPMC Univ. Paris 06, CNRS/INSU, LATMOS-IPSL, 11 Boulevard d' Alembert, F-78280 Guyancourt (France); Kuntz, K. D. [The Henry A. Rowland Department of Physics and Astronomy, The Johns Hopkins University, Baltimore, MD 21218 (United States); Lallement, R.; Puspitarini, L. [GEPI, Observatoire de Paris, CNRS UMR8111, Université Paris Diderot, 5 Place Jules Janssen, F-92190 Meudon (France); Lepri, S. T. [University of Michigan, 2455 Hayward Street, Ann Arbor, MI 48109 (United States); McCammon, D.; Morgan, K. [Department of Physics, University of Wisconsin, 1150 University Avenue, Madison, WI 53706 (United States); Walsh, B. M., E-mail: steven.l.snowden@nasa.gov [Space Sciences Laboratory, 7 Gauss Way, Berkeley, CA 94720 (United States)

    2014-08-10

    Three recent results related to the heliosphere and the local interstellar medium (ISM) have provided an improved insight into the distribution and conditions of material in the solar neighborhood. These are the measurement of the magnetic field outside of the heliosphere by Voyager 1, the improved mapping of the three-dimensional structure of neutral material surrounding the Local Cavity using extensive ISM absorption line and reddening data, and a sounding rocket flight which observed the heliospheric helium focusing cone in X-rays and provided a robust estimate of the contribution of solar wind charge exchange emission to the ROSAT All-Sky Survey 1/4 keV band data. Combining these disparate results, we show that the thermal pressure of the plasma in the Local Hot Bubble (LHB) is P/k = 10,700 cm⁻³ K. If the LHB is relatively free of a global magnetic field, it can easily be in pressure (thermal plus magnetic field) equilibrium with the local interstellar clouds, eliminating a long-standing discrepancy in models of the local ISM.
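    The equilibrium statement can be checked with a two-term estimate, P/k = nT + B²/(8πk_B) on the cloud side. The cloud density, temperature, and field strength below are representative values of the order discussed for the local clouds and the Voyager 1 measurement, inserted here as assumptions for illustration, not the paper's fitted numbers.

```python
from math import pi

K_B = 1.381e-16   # erg/K

def thermal_p_over_k(n_cm3, t_k):
    return n_cm3 * t_k                       # cm^-3 K

def magnetic_p_over_k(b_gauss):
    return b_gauss**2 / (8.0 * pi * K_B)     # cm^-3 K

# Local interstellar cloud side: representative (assumed) values
n_lic, t_lic = 0.3, 7000.0        # cm^-3, K
b_lic = 4.5e-6                    # G, of order the Voyager 1 field

p_cloud = thermal_p_over_k(n_lic, t_lic) + magnetic_p_over_k(b_lic)
print(f"cloud P/k ~ {p_cloud:,.0f} cm^-3 K vs LHB P/k ~ 10,700 cm^-3 K")
```

    With these representative inputs the two sides come out within tens of percent of each other, which is the sense in which the clouds "can easily be" in total pressure equilibrium with the LHB.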

  8. FRACTURE MECHANICS UNCERTAINTY ANALYSIS IN THE RELIABILITY ASSESSMENT OF THE REACTOR PRESSURE VESSEL: (2D SUBJECTED TO INTERNAL PRESSURE

    Directory of Open Access Journals (Sweden)

    Entin Hartini

    2016-06-01

    Full Text Available The reactor pressure vessel (RPV) is a pressure boundary in the PWR-type reactor which serves to confine radioactive material during the chain reaction process. The integrity of the RPV must be guaranteed both in normal operation and in accident conditions. In analyzing the integrity of the RPV, especially in relation to crack behavior that could lead to a break of the vessel, a fracture mechanics approach should be taken. The uncertainty of the inputs used in the assessment, such as mechanical properties and the physical environment, means that a purely deterministic assessment is not sufficient; an uncertainty approach should therefore be applied. The aim of this study is to analyze the uncertainty of fracture mechanics calculations in evaluating the reliability of the PWR reactor pressure vessel (2D, subjected to internal pressure). The random character of the input quantities was generated using probabilistic principles and theories. The fracture mechanics analysis is solved by the Finite Element Method (FEM) with the MSC MARC software, while the uncertainty analysis of the inputs is done based on probability density functions with Latin Hypercube Sampling (LHS) using a Python script. The output of MSC MARC is a J-integral value, which is converted into a stress intensity factor for evaluating the reliability of the 2D RPV. From the results of the calculation, it can be concluded that the SIF from the probabilistic method reached the limit value of fracture toughness earlier than the SIF from the deterministic method. The SIF generated by the probabilistic method is 105.240 MPa m^0.5, while the SIF generated by the deterministic method is 100.876 MPa m^0.5. Keywords: uncertainty analysis, fracture mechanics, LHS, FEM, reactor pressure vessels
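    The LHS-plus-FEM chain of the paper cannot be reproduced without the finite element model, but the sampling step it describes can be sketched: Latin Hypercube draws of uncertain inputs propagated to a stress intensity factor. Below, the FEM J-integral evaluation is replaced by the closed-form K_I = Y·σ·√(πa) for illustration; the input ranges, geometry factor, and toughness limit are all assumed, and scipy's qmc module stands in for the paper's own Python script.

```python
# Latin Hypercube propagation of input uncertainty to a stress intensity
# factor.  The FEM J-integral evaluation of the paper is replaced by the
# closed-form K_I = Y * sigma * sqrt(pi * a); all distributions are assumed.
import numpy as np
from scipy.stats import qmc

sampler = qmc.LatinHypercube(d=2, seed=1)
u = sampler.random(n=10_000)

# Scale unit samples: stress sigma in MPa, crack depth a in m (assumed ranges)
lo, hi = [150.0, 0.02], [250.0, 0.08]
samples = qmc.scale(u, lo, hi)
sigma, a = samples[:, 0], samples[:, 1]

Y = 1.12                                   # geometry factor, assumed
k_i = Y * sigma * np.sqrt(np.pi * a)       # MPa*sqrt(m)

k_ic = 105.0                               # toughness limit, assumed value
print(f"mean K_I = {k_i.mean():.1f} MPa*m^0.5, "
      f"P(K_I > K_IC) = {(k_i > k_ic).mean():.4f}")
```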

  9. Reliability of Eustachian tube function measurements in a hypobaric and hyperbaric pressure chamber.

    Science.gov (United States)

    Meyer, M F; Jansen, S; Mordkovich, O; Hüttenbrink, K-B; Beutner, D

    2017-12-01

    Measurement of the Eustachian tube (ET) function is a challenge. The demand for a precise and meaningful diagnostic tool increases, especially because more and more operative therapies are being offered without objective evidence. The measurement of ET function by continuous impedance recording in a pressure chamber is an established method, although the reliability of the measurements is still unclear. Twenty-five participants (50 ears) were exposed to phases of compression and decompression in a hypo- and hyperbaric pressure chamber. The parameters reflecting ET function, namely the ET opening pressure (ETOP), ET opening duration (ETOD) and ET opening frequency (ETOF), were determined three times in a row under exactly the same preconditions. The intraclass correlation coefficient (ICC) and Bland and Altman plots were used to assess test-retest reliability. ICCs revealed a high correlation for ETOP and ETOF in phases of decompression (passive equalisation) as well as for ETOD and ETOP in phases of compression (actively induced equalisation). A very high correlation could be shown for ETOD in decompression and ETOF in compression phases. The Bland and Altman graphs showed that measurements provide results within a 95% confidence interval in compression and decompression phases. We conclude that measurements in a pressure chamber are a very valuable tool for estimating ET opening and closing function. Measurements show some variance between participants, but provide reliable results within a 95% confidence interval on retest. This study is the basis for enabling efficacy measurements of ET treatment modalities. © 2017 John Wiley & Sons Ltd.

  10. Reliability of Pressure Ulcer Rates: How Precisely Can We Differentiate Among Hospital Units, and Does the Standard Signal‐Noise Reliability Measure Reflect This Precision?

    Science.gov (United States)

    Cramer, Emily

    2016-01-01

    Hospital performance reports often include rankings of unit pressure ulcer rates. Differentiating among units on the basis of quality requires reliable measurement. Our objectives were to describe and apply methods for assessing reliability of hospital‐acquired pressure ulcer rates and evaluate a standard signal‐noise reliability measure as an indicator of precision of differentiation among units. Quarterly pressure ulcer data from 8,199 critical care, step‐down, medical, surgical, and medical‐surgical nursing units from 1,299 US hospitals were analyzed. Using beta‐binomial models, we estimated between‐unit variability (signal) and within‐unit variability (noise) in annual unit pressure ulcer rates. Signal‐noise reliability was computed as the ratio of between‐unit variability to the total of between‐ and within‐unit variability. To assess precision of differentiation among units based on ranked pressure ulcer rates, we simulated data to estimate the probabilities of a unit's observed pressure ulcer rate rank in a given sample falling within five and ten percentiles of its true rank, and the probabilities of units with ulcer rates in the highest quartile and highest decile being identified as such. We assessed the signal‐noise measure as an indicator of differentiation precision by computing its correlations with these probabilities. Pressure ulcer rates based on a single year of quarterly or weekly prevalence surveys were too susceptible to noise to allow for precise differentiation among units, and signal‐noise reliability was a poor indicator of precision of differentiation. To ensure precise differentiation on the basis of true differences, alternative methods of assessing reliability should be applied to measures purported to differentiate among providers or units based on quality. © 2016 The Authors. Research in Nursing & Health published by Wiley Periodicals, Inc. PMID:27223598
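
    The signal-noise measure this record evaluates is the ratio of between-unit variance to total (between- plus within-unit) variance; a minimal sketch with illustrative variance components, not the study's data:

```python
# Hedged sketch: signal-noise reliability as a variance ratio.
def signal_noise_reliability(var_between: float, var_within: float) -> float:
    """Reliability = between-unit variance / (between- + within-unit variance)."""
    return var_between / (var_between + var_within)

# When within-unit (noise) variance dominates, reliability is low:
print(signal_noise_reliability(var_between=0.4, var_within=1.6))  # 0.2
```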

  11. Practical applications of probabilistic structural reliability analyses to primary pressure systems of nuclear power plants

    International Nuclear Information System (INIS)

    Witt, F.J.

    1980-01-01

    Primary pressure systems of nuclear power plants are built to exacting codes and standards, with provisions for in-service inspection and repair if necessary. Analyses and experiments have demonstrated by deterministic means that very large margins exist on safety-impacting failures under normal operating and upset conditions. Probabilistic structural reliability analyses provide additional support that failures of significance are very, very remote. They may range in degree of sophistication from very simple calculations to very complex computer analyses involving highly developed mathematical techniques. The end result, however, should be consistent with the desired usage. In this paper a probabilistic structural reliability analysis is performed as a supplement to in-depth deterministic evaluations, with the primary objective of demonstrating an acceptably low probability of failure for the conditions considered. (author)

  12. A low-pressure cloud chamber to study the spatial distribution of ionizations

    International Nuclear Information System (INIS)

    Hodges, D.C.; Marshall, M.

    1977-01-01

    To further the understanding of the biological effects of radiation, a knowledge of the spatial distribution of ionizations in small volumes is required. A cloud chamber capable of resolving the droplets formed on individual ions in the tracks of low-energy electrons has been constructed. It is made to high-vacuum specifications and contains a mixture of permanent gases and vapours, unsaturated before expansion, at a total pressure of 10 kPa. Condensation efficiencies close to 100% are obtained without significant background from condensation on uncharged particles and molecular aggregates. This paper describes the chamber, associated equipment and method of operation and discusses the performance of the system. Photographs of the droplets produced from the interaction of low-energy X-rays in the chamber gas for various modes of operation are presented. The mean energy loss per ion pair for electrons produced by the interaction of Al X-rays in the chamber gas (8130 Pa H₂, 700 Pa C₂H₅OH, 690 Pa H₂O, 400 Pa He, 70 Pa air) has been measured as 29.8 ± 0.7 eV per ion pair, compared with a calculated value of 29.6 ± 0.4 eV per ion pair. (author)

  13. The Magellanic Stream System. I. Ram-Pressure Tails and the Relics of the Collision Between the Magellanic Clouds

    Science.gov (United States)

    Hammer, F.; Yang, Y. B.; Flores, H.; Puech, M.; Fouquet, S.

    2015-11-01

    We have analyzed the Magellanic Stream (MS) using the deepest and the most resolved H i survey of the Southern Hemisphere (the Galactic All-Sky Survey). The overall Stream is structured into two filaments, suggesting two ram-pressure tails lagging behind the Magellanic Clouds (MCs), and resembling two close, transonic, von Karman vortex streets. The past motions of the Clouds appear imprinted in them, implying almost parallel initial orbits, and then a radical change after their passage near the N(H i) peak of the MS. This is consistent with a recent collision between the MCs, 200–300 Myr ago, which has stripped their gas further into small clouds, spreading them out along a gigantic bow shock, perpendicular to the MS. The Stream is formed by the interplay between stellar feedback and the ram pressure exerted by hot gas in the Milky Way (MW) halo with n_h = 10⁻⁴ cm⁻³ at 50–70 kpc, a value necessary to explain the MS multiphase high-velocity clouds. The corresponding hydrodynamic modeling provides the currently most accurate reproduction of the whole H i Stream morphology, of its velocity, and column density profiles along L_MS. The “ram pressure plus collision” scenario requires tidal dwarf galaxies, which are assumed to be the Cloud and dSph progenitors, to have left imprints in the MS and the Leading Arm, respectively. The simulated LMC and SMC have baryonic mass, kinematics, and proper motions consistent with observations. This supports a novel paradigm for the MS System, which could have its origin in material expelled toward the MW by the ancient gas-rich merger that formed M31.

  15. Stress analysis of R2 pressure vessel. Structural reliability benchmark exercise

    International Nuclear Information System (INIS)

    Vestergaard, N.

    1987-05-01

    The Structural Reliability Benchmark Exercise (SRBE) is sponsored by the EEC as part of the Reactor Safety Programme. The objectives of the SRBE are to evaluate and improve 1) inspection procedures, which use non-destructive methods to locate defects in pressure (reactor) vessels, as well as 2) analytical damage accumulation models, which predict the time to failure of vessels containing defects. In order to focus attention, an experimental pressure vessel has been inspected, subjected to fatigue loadings and subsequently analysed by several teams using methods of their choice. The present report contains the first part of the analytical damage accumulation analysis. The stress distributions in the welds of the experimental pressure vessel were determined. These stress distributions will be used to determine the driving forces of the damage accumulation models, which will be addressed in a future report. (author)

  16. STUDY OF THE IMPACT OF THERMAL DRIFT ON RELIABILITY OF PRESSURE SENSORS

    Directory of Open Access Journals (Sweden)

    ABDELAZIZ BEDDIAF

    2017-10-01

    Piezoresistive pressure sensors, which use a Wheatstone bridge of piezoresistors typically supplied with a voltage ranging from 3 to 10 V, exhibit thermal drift caused by Joule heating. In this paper, an accurate numerical model for optimizing and predicting the thermal drift in piezoresistive pressure sensors due to the electrical heating of the piezoresistors is adopted. In this case, by using the solution of the 2D heat transfer equation with Joule heating in Cartesian coordinates for the transient regime, we determine how the temperature affects the sensor when the supply voltage is applied. For this, the elevation of temperature due to Joule heating has been calculated for various values of the supply voltage and for several operating times of the sensor, varying different geometrical parameters. In addition, the variation of the piezoresistive coefficient π44 in p-Si and of the pressure sensitivity as a function of the applied potential, as well as for various times and for different dimensions of the device, has also been established. It is observed that the electrical heating leads to an important temperature rise in the piezoresistor. Consequently, it causes drift in the pressure sensitivity of the sensor upon application of a voltage. Finally, this work allows us to evaluate the reliability of the sensors. It also permits prediction of their behaviour against temperature due to the application of a bridge voltage, and minimization of this effect by optimizing the geometrical parameters of the sensor and by reducing the supply voltage.

  17. [Fasciocutaneous flap based on a deep femoral artery perforator for the treatment of ischial pressure ulcers].

    Science.gov (United States)

    Gebert, L; Boucher, F; Lari, A; Braye, F; Mojallal, A; Ismaïl, M

    2018-04-01

    The surgical management of pressure ulcers in the paraplegic or quadriplegic population is marked by a high risk of recurrence in the long term. In the current era of perforator flaps, newer reconstructive options are available for the management of pressure ulcers, decreasing the need to use the classically described muscular or musculocutaneous locoregional flaps. The coverage of ischial sores described in this article, by a pedicled flap based on a deep femoral artery perforator, appears to be an effective first-line reconstructive option for the management of pressure ulcers of limited size. Fifteen paraplegic or quadriplegic patients, each having at least one ischial bed sore with underlying osteomyelitis, were included in this series. The approximate location of the deep femoral artery perforator was initially identified using "The Atlas of the perforator arteries of the skin, the trunk and limbs" and confirmed with the use of a Doppler device. A fasciocutaneous transposition flap was elevated, with the pivot point based on the cutaneous bridge centered on the perforator, and then transposed to cover the area of tissue loss. The donor site was closed primarily. A total of fifteen patients were operated on from November 2015 to November 2016. The series comprised 16 first presentations of stage 4 pressure ulcers associated with underlying osteomyelitis that were subsequently reconstructed with the pedicled deep femoral artery perforator flap. The healing rate and functional results were both satisfactory. The fasciocutaneous flap based on a deep femoral artery perforator appears to have a promising role in the treatment of ischial pressure sores. It is an attractive option to spare the use of musculocutaneous flaps in the area. Thus this flap could be used as a first-line option to cover ischial pressure ulcers of limited size. Copyright © 2017 Elsevier Masson SAS. All rights reserved.

  18. FRESCO+: an improved O2 A-band cloud retrieval algorithm for tropospheric trace gas retrievals

    Directory of Open Access Journals (Sweden)

    M. van Roozendael

    2008-11-01

    The FRESCO (Fast Retrieval Scheme for Clouds from the Oxygen A-band) algorithm has been used to retrieve cloud information from measurements of the O2 A-band around 760 nm by GOME, SCIAMACHY and GOME-2. The cloud parameters retrieved by FRESCO are the effective cloud fraction and cloud pressure, which are used for cloud correction in the retrieval of trace gases like O3 and NO2. To improve the cloud pressure retrieval for partly cloudy scenes, single Rayleigh scattering has been included in an improved version of the algorithm, called FRESCO+. We compared FRESCO+ and FRESCO effective cloud fractions and cloud pressures using simulated spectra and one month of GOME measured spectra. As expected, FRESCO+ gives more reliable cloud pressures over partly cloudy pixels. Simulations and comparisons with ground-based radar/lidar measurements of clouds show that the FRESCO+ cloud pressure is about the optical midlevel of the cloud. Globally averaged, the FRESCO+ cloud pressure is about 50 hPa higher than the FRESCO cloud pressure, while the FRESCO+ effective cloud fraction is about 0.01 larger. The effect of FRESCO+ cloud parameters on O3 and NO2 vertical column density (VCD) retrievals is studied using SCIAMACHY data and ground-based DOAS measurements. We find that the FRESCO+ algorithm has a significant effect on tropospheric NO2 retrievals but a minor effect on total O3 retrievals. The retrieved SCIAMACHY tropospheric NO2 VCDs using FRESCO+ cloud parameters (v1.1) are lower than the tropospheric NO2 VCDs which used FRESCO cloud parameters (v1.04), in particular over heavily polluted areas with low clouds. The difference between SCIAMACHY tropospheric NO2 VCDs v1.1 and ground-based MAXDOAS measurements performed in Cabauw, The Netherlands, during the DANDELIONS campaign is about −2.12×10¹⁴ molec cm⁻².

  19. Elongated dust clouds in a uniform DC positive column of low pressure gas discharge

    International Nuclear Information System (INIS)

    Usachev, A D; Zobnin, A V; Petrov, O F; Fortov, V E; Thoma, M H; Pustylnik, M Y; Fink, M A; Morfill, G E

    2016-01-01

    Experimental investigations of the formation of elongated dust clouds and their influence on the plasma glow intensity of the uniform direct current (DC) positive column (PC) have been performed under microgravity conditions. For axial stabilization of the dust cloud position, a polarity-switching DC gas discharge with a switching frequency of 250 Hz was used. During the experiment, a spontaneous division of one elongated dust cloud into two smaller steady-state dust clouds was observed. Quantitative data on the dust cloud shape, size and dust number density distribution were obtained. Axial and radial distributions of the plasma emission within the 585.2 nm and 703.2 nm neon spectral lines were measured over the whole discharge volume. It was found that both spectral line intensities in the dust cloud region grew by a factor of 1.7 with respect to the undisturbed positive column region, with the 585.2 nm line intensity increasing by 10% more than the 703.2 nm line intensity. For a semi-quantitative explanation of the observed phenomena, the Schottky approach based on the diffusion equation was used. The model reasonably explains the observed glow enhancement as an increase of the ionization rate in the discharge with a dust cloud, which compensates for ion-electron recombination on the dust grain surfaces. Here, the ionization rate increases due to the growth of the DC axial electric field, and the glow grows in direct proportion to the electric field. It is shown that the fundamental condition for the radial stability of the dusty plasma cloud is equality of the ionization and recombination rates within the cloud volume, which is possible only when the electron density is constant and the radial electric field is absent within the dust cloud. (paper)

  20. Coupling finite elements and reliability methods - application to safety evaluation of pressurized water reactor vessels

    International Nuclear Information System (INIS)

    Pitner, P.; Venturini, V.

    1995-02-01

    When reliability studies are extended from deterministic calculations in mechanics, it is necessary to take into account the variability of input parameters, which is linked to the different sources of uncertainty. Integrals must then be calculated to evaluate the failure risk. This can be performed either by simulation methods or by approximation methods (FORM/SORM). Models in mechanics often require running calculation codes, which must then be coupled with the reliability calculations. These codes can involve long calculation times when they are invoked numerous times during simulation sequences or in complex iterative procedures. The response surface method gives an approximation of the real response from a reduced number of points for which the finite element code is run. Thus, when it is combined with FORM/SORM methods, a coupling can be carried out which gives results in a reasonable calculation time. An application of the response surface method to mechanics-reliability coupling, for a mechanical model which calls a finite element code, is presented. It corresponds to a probabilistic fracture mechanics study of a pressurized water reactor vessel. (authors). 5 refs., 3 figs
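
    A minimal sketch of the coupling idea in this record, assuming a quadratic response surface fitted to a few runs of an expensive model (here a stand-in limit-state function, not a real finite element code) followed by plain Monte Carlo on the cheap surrogate instead of FORM/SORM:

```python
# Hedged sketch: response surface surrogate + Monte Carlo failure estimate.
import numpy as np

rng = np.random.default_rng(0)

def expensive_model(x: np.ndarray) -> float:
    """Stand-in for the finite element code: an illustrative limit state g(x)."""
    return 3.0 - x[0] ** 2 - 0.5 * x[1]

# Run the "expensive" code at a deliberately small number of design points.
X = rng.normal(size=(30, 2))
y = np.array([expensive_model(x) for x in X])

# Quadratic response surface without cross terms: g(x) ~ c0 + c.x + q.x^2
A = np.column_stack([np.ones(len(X)), X, X ** 2])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)

# Monte Carlo on the surrogate; failure is defined as g(x) <= 0.
samples = rng.normal(size=(200_000, 2))
g = coef[0] + samples @ coef[1:3] + (samples ** 2) @ coef[3:5]
print("estimated failure probability:", float(np.mean(g <= 0.0)))
```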

  1. Pressure pain thresholds, clinical assessment, and differential diagnosis: reliability and validity in patients with myogenic pain.

    Science.gov (United States)

    Ohrbach, R; Gale, E N

    1989-11-01

    Four studies are presented testing the validity and reliability of pressure pain thresholds (PPTs) and of examination parameters believed to be important in the clinical assessment of sites commonly used for such measures in patient samples. Forty-five patients with a myogenous temporomandibular disorder were examined clinically prior to PPT measures. Criteria for history and examination included functional aspects of the pain, tissue quality of the pain site, and the type of pain elicited from palpation. Control sites within the same muscle and in the contralateral muscle were also examined. PPTs were measured as an index of tenderness using a strain gauge algometer at these sites. The data from the 5 male subjects were excluded from subsequent analyses due to the higher PPT in the males and to their unequal distribution among the various factorial conditions. The first study demonstrated strong validity in PPT measures between patients (using pain sites replicating the patients' pain) and matched controls (n = 11). The PPT was not significantly different between the primary pain site (referred pain and non-referred pain collapsed) and the no-pain control site in the same muscle (n = 16). The PPT was significantly lower at the pain site compared to the no-pain control site in the contralateral muscle (n = 13). The second study indicated adequate reliability in patient samples of the PPT measures. In the third study, the PPT was significantly lower at sites producing referred pain on palpation compared to sites producing localized pain on palpation. The PPT findings from the control sites were inconsistent on this factor. The fourth study presented preliminary evidence that palpable bands and nodular areas in muscle were most commonly associated with muscle regions that produce pain; such muscle findings were not specific, however, for regions that produce pain. Further, the intraexaminer reliability in reassessing these pain sites qualitatively was only fair

  2. The prediction of reliability and residual life of reactor pressure components

    International Nuclear Information System (INIS)

    Nemec, J.; Antalovsky, S.

    1978-01-01

    The paper deals with the problem of PWR pressure components reliability and residual life evaluation and prediction. A physical model of damage cumulation which serves as a theoretical basis for all considerations presents two major aspects. The first one describes the dependence of the degree of damage in the crack leading-edge in pressure components on the reactor system load-time history, i.e. on the number of transient loads. Both stages, fatigue crack initiation and growth through the wall until the critical length is reached, are investigated. The crack is supposed to initiate at the flaws in a strength weld joint or in the bimetallic weld of the base ferritic steel and the austenitic stainless overlay cladding. The growth rates of developed cracks are analysed in respect to different load-time histories. Important cyclic properties of some steels are derived from the low-cycle fatigue theory. The second aspect is the load-time history-dependent process of precipitation, deformation and radiation aging, characterized entirely by the critical crack-length value mentioned above. The fracture point, defined by the equation ''crack-length=critical value'' and hence the residual life, can be evaluated using this model and verified by in-service inspection. The physical model described is randomized by considering all the parameters of the model as random. Monte Carlo methods are applied and fatigue crack initiation and growth is simulated. This permits evaluation of the reliability and residual life of the component. The distributions of material and load-time history parameters are needed for such simulation. Both the deterministic and computer-simulated probabilistic predictions of reliability and residual life are verified by prior-to-failure sequential testing of data coming from in-service NDT periodical inspections. (author)

  3. A reliability analysis of a natural-gas pressure-regulating installation

    Energy Technology Data Exchange (ETDEWEB)

    Gerbec, Marko, E-mail: marko.gerbec@ijs.s [Jozef Stefan Institute, Jamova 39, 1000 Ljubljana (Slovenia)

    2010-11-15

    A case study involving analyses of the operability, reliability and availability was made for a selected, typical, high-pressure, natural-gas, pressure-regulating installation (PRI). The study was commissioned by the national operator of the natural-gas, transmission-pipeline network for the purpose of validating the existing operability and maintenance practices and policies. The study involved a failure-risk analysis (HAZOP) of the selected typical installation, retrieval and analysis of the available corrective maintenance data for the PRI's equipment at the network level in order to obtain the failure rates followed by an elaboration of the quantitative fault trees. Thus, both operator-specific and generic literature data on equipment failure rates were used. The results obtained show that two failure scenarios need to be considered: the first is related to the PRI's failure to provide gas to the consumer(s) due to a low-pressure state and the second is related to a failure of the gas pre-heating at the high-pressure reduction stage, leading to a low temperature (a non-critical, but unfavorable, PRI state). Related to the first scenario, the most important cause of failure was found to be a transient pressure disturbance back from the consumer side. The network's average PRI failure frequency was assessed to be about once per 32 years, and the average unavailability to be about 4 minutes per year (the confidence intervals were also assessed). Based on the results obtained, some improvements to the monitoring of the PRI are proposed.
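
    A minimal sketch of how quantitative fault trees like those elaborated in this study combine basic-event probabilities; the gate structure and numbers below are illustrative assumptions, not the study's tree:

```python
# Hedged sketch: top-event probability for a tiny fault tree.
# OR-gates combine as 1 - prod(1 - p); AND-gates combine as prod(p),
# assuming independent basic events.
from functools import reduce

def or_gate(*p: float) -> float:
    return 1.0 - reduce(lambda acc, q: acc * (1.0 - q), p, 1.0)

def and_gate(*p: float) -> float:
    return reduce(lambda acc, q: acc * q, p, 1.0)

# Example tree: low-pressure state if (upstream pressure disturbance) OR
# (regulator stuck AND monitor regulator fails to take over).
p_top = or_gate(1e-3, and_gate(5e-3, 2e-2))
print(f"top event probability per demand: {p_top:.2e}")
```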

  5. An investigative case study at local hospital into the current reliability of blood pressure sphygmomanometers

    Directory of Open Access Journals (Sweden)

    Vinícius Naves Rezende Faria

    2017-04-01

    Introduction: Arterial blood pressure is a significant indicator of the current health condition of an individual. The correct detection of hypertension is essential, as this health problem is considered one of the greatest health risk factors affecting the heart and circulatory system. This paper presents the importance of the application of metrological criteria in the diagnosis of hypertension using an aneroid sphygmomanometer. Methods: 72 mechanical aneroid sphygmomanometers were calibrated using a standard manometer and the indication error, hysteresis, air leakage and rapid exhaust were determined; readings of these sphygmomanometers were compared to a properly calibrated and adjusted aneroid sphygmomanometer to carry out pressure measurements like those made during hypertension diagnosis; the uncertainty of measurement associated with the sphygmomanometer calibrations and pressure values was assessed according to the recommendations of the Guide to the Expression of Uncertainty in Measurement, defined by the Joint Committee for Guides in Metrology. Results: The results obtained showed that about 61% of the evaluated aneroid sphygmomanometers did not meet the specifications. The variable that contributed most to the final calibration uncertainty was the hysteresis of the standard manometer, with a 53% contribution, followed by the sphygmomanometer resolution with 27%. Conclusion: Periodic verifications are essential to evaluate the performance of these devices. It was shown that the uncertainty of measurement influences the final diagnosis of hypertension and that the application of metrological criteria can increase the reliability of the final diagnosis.
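
    A minimal sketch of the GUM-style uncertainty combination this record applies: independent standard uncertainties add in quadrature, and an expanded uncertainty uses a coverage factor k (k = 2 for roughly 95% coverage). The contribution values are illustrative, not the study's data:

```python
# Hedged sketch: combining standard uncertainties per the GUM.
import math

contributions_mmHg = {
    "standard manometer hysteresis": 1.5,  # assumed values for illustration
    "sphygmomanometer resolution": 0.8,
    "repeatability": 0.5,
}
u_c = math.sqrt(sum(u ** 2 for u in contributions_mmHg.values()))
U = 2.0 * u_c  # expanded uncertainty with coverage factor k = 2
print(f"combined: {u_c:.2f} mmHg, expanded (k=2): {U:.2f} mmHg")
```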

  6. Evaluation of structural reliability for vacuum vessel under external pressure and electromagnetic force

    International Nuclear Information System (INIS)

    Minato, Akio

    1983-08-01

    Static and dynamic structural analyses of the vacuum vessel for a Swimming Pool Type Tokamak Reactor (SPTR) have been conducted under the external pressure (hydraulic and atmospheric pressure) during normal operation or the electromagnetic force due to plasma disruption. The reactor structural design is based on the concept that the adjacent modules of the vacuum vessel are not mechanically connected to each other with bolts in the torus inboard region, so as to save the space required for inserting the remote handling machine for tightening and untightening bolts in the region and to simplify the repair and maintenance of the reactor. The structural analyses of the vacuum vessel have been carried out under the external pressure and the electromagnetic force, and the structural reliability against the static and dynamic loads is estimated. Several configurations of the lip seal between the modules, which is required to form the plasma vacuum boundary, have been proposed, and the structural strength under the forced displacements due to the deformation of the vacuum vessel is also estimated. (author)

  7. Reliability Analysis Based on a Jump Diffusion Model with Two Wiener Processes for Cloud Computing with Big Data

    Directory of Open Access Journals (Sweden)

    Yoshinobu Tamura

    2015-06-01

    At present, many cloud services are managed by using open source software, such as OpenStack and Eucalyptus, because of unified data management, cost reduction, quick delivery and work savings. The operation phase of cloud computing has unique features, such as the provisioning processes, network-based operation and the diversity of data, because the operation phase changes depending on many external factors. We propose a jump diffusion model with two-dimensional Wiener processes in order to consider the interesting aspects of network traffic and big data on cloud computing. In particular, we assess the stability of cloud software by using the sample paths obtained from the jump diffusion model with two-dimensional Wiener processes. Moreover, we discuss the optimal maintenance problem based on the proposed jump diffusion model. Furthermore, we analyze actual data to show numerical examples of dependability optimization based on the software maintenance cost, considering big data on cloud computing.
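
    A minimal sketch of one Euler-type sample path of a jump diffusion driven by two Wiener processes, in the spirit of the model this record proposes; all parameter values are assumed for illustration:

```python
# Hedged sketch: jump diffusion with two Wiener processes (Euler scheme).
import numpy as np

rng = np.random.default_rng(1)
T, n = 1.0, 1000
dt = T / n
mu, s1, s2 = 0.05, 0.2, 0.1      # drift and the two diffusion coefficients
lam, jump_scale = 3.0, 0.05      # Poisson jump rate and jump-size scale

x = np.empty(n + 1)
x[0] = 1.0
for k in range(n):
    dW1 = rng.normal(0.0, np.sqrt(dt))   # first Wiener increment
    dW2 = rng.normal(0.0, np.sqrt(dt))   # second Wiener increment
    dN = rng.poisson(lam * dt)           # jumps arriving in this step
    jump = rng.normal(0.0, jump_scale) * dN
    x[k + 1] = x[k] * (1.0 + mu * dt + s1 * dW1 + s2 * dW2 + jump)
print(f"terminal value of the sample path: {x[-1]:.3f}")
```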

  8. Relative and absolute test-retest reliabilities of pressure pain threshold in patients with knee osteoarthritis.

    Science.gov (United States)

    Srimurugan Pratheep, Neeraja; Madeleine, Pascal; Arendt-Nielsen, Lars

    2018-04-25

    Pressure pain threshold (PPT) and PPT maps are commonly used to quantify and visualize mechanical pain sensitivity. Although PPTs have frequently been reported for patients with knee osteoarthritis (KOA), the absolute and relative reliability of PPT assessments remain to be determined. Thus, the purpose of this study was to evaluate the test-retest relative and absolute reliability of PPT in KOA. For that purpose, the intraclass correlation coefficient (ICC) as well as the standard error of measurement (SEM) and the minimal detectable change (MDC) values within eight anatomical locations covering the most painful knee of KOA patients were measured. Twenty KOA patients participated in two sessions 2 weeks ± 3 days apart. PPTs were assessed over eight anatomical locations covering the knee and two remote locations over the tibialis anterior and brachioradialis. The patients rated their maximum pain intensity during the past 24 h and prior to the recordings on a visual analog scale (VAS), and completed The Western Ontario and McMaster Universities Osteoarthritis Index (WOMAC) and PainDetect surveys. The ICC, SEM and MDC between the sessions were assessed. The individual variability was expressed with the coefficient of variation (CV). Bland-Altman plots were used to assess potential bias in the dataset. The ICC ranged from 0.85 to 0.96 for all the anatomical locations, which is considered "almost perfect". CV was lowest in session 1 and ranged from 44.2 to 57.6%. SEM for comparison ranged between 34 and 71 kPa and MDC ranged between 93 and 197 kPa, with mean PPT ranging from 273.5 to 367.7 kPa in session 1 and 268.1-331.3 kPa in session 2. The analysis of the Bland-Altman plots showed no systematic bias. PPT maps showed that the patients had lower thresholds in session 2, but no significant difference was observed for the comparison between the sessions for PPT or VAS. No correlations were seen between PainDetect and PPT or between PainDetect and WOMAC
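
    The SEM and MDC values reported in this record follow the standard formulas SEM = SD·sqrt(1 − ICC) and MDC95 = 1.96·sqrt(2)·SEM; a minimal sketch with illustrative inputs, not the study's data:

```python
# Hedged sketch: SEM and MDC95 from an ICC and a between-subject SD.
import math

def sem(sd: float, icc: float) -> float:
    return sd * math.sqrt(1.0 - icc)

def mdc95(sem_value: float) -> float:
    return 1.96 * math.sqrt(2.0) * sem_value

s = sem(sd=120.0, icc=0.90)  # kPa; illustrative inputs
print(f"SEM = {s:.1f} kPa, MDC95 = {mdc95(s):.1f} kPa")
```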

  9. Reliability and validity of pressure and temporal parameters recorded using a pressure-sensitive insole during running.

    Science.gov (United States)

    Mann, Robert; Malisoux, Laurent; Brunner, Roman; Gette, Paul; Urhausen, Axel; Statham, Andrew; Meijer, Kenneth; Theisen, Daniel

    2014-01-01

    Running biomechanics has received increasing interest in recent literature on running-related injuries, calling for new, portable methods for large-scale measurements. Our aims were to define running strike pattern based on output of a new pressure-sensitive measurement device, the Runalyser, and to test its validity regarding temporal parameters describing running gait. Furthermore, reliability of the Runalyser measurements was evaluated, as well as its ability to discriminate different running styles. Thirty-one healthy participants (30.3 ± 7.4 years, 1.78 ± 0.10 m and 74.1 ± 12.1 kg) were involved in the different study parts. Eleven participants were instructed to use a rearfoot (RFS), midfoot (MFS) and forefoot (FFS) strike pattern while running on a treadmill. Strike pattern was subsequently defined using a linear regression (R² = 0.89) between foot strike angle, as determined by motion analysis (1000 Hz), and strike index (SI, point of contact on the foot sole, as a percentage of foot sole length), as measured by the Runalyser. MFS was defined by the 95% confidence interval of the intercept (SI = 43.9-49.1%). High agreement (overall mean difference 1.2%) was found between stance time, flight time, stride time and duty factor as determined by the Runalyser and a force-measuring treadmill (n=16 participants). Measurements of the two devices were highly correlated (R ≥ 0.80) and not significantly different. Test-retest intra-class correlation coefficients for all parameters were ≥ 0.94 (n=14 participants). Significant differences (p<0.05) between FFS, RFS and habitual running were detected regarding SI, stance time and stride time (n=24 participants). The Runalyser is suitable for, and easily applicable in large-scale studies on running biomechanics. Copyright © 2013 Elsevier B.V. All rights reserved.

  10. Intraexaminer and interexaminer reliability of manual palpation and pressure algometry of the lower limb nerves in asymptomatic subjects.

    LENUS (Irish Health Repository)

    Fingleton, Caitriona P

    2014-02-01

    Nerve palpation is a method of clinically identifying mechanosensitivity of neural tissue by means of pressure algometry and manual palpation. There are few investigations of the reliability of lower limb nerve palpation, and femoral nerve palpation has never been previously reported. The aim of this study was to investigate the reliability of nerve palpation of the femoral, sciatic, tibial, and common peroneal nerves and to report normative values for the femoral nerve.

  11. In-service inspection as an aid to steel pressure vessel reliability

    International Nuclear Information System (INIS)

    Nichols, R.W.

    1975-01-01

    In-service inspection has played an important role in non-nuclear pressure vessel technology, being a legal requirement in many countries. Evidence from surveys of reliability of non-nuclear plant has suggested that such inspections can be effective in reducing the risk of subsequent failures. Recent requirements of the ASME XI code which will be summarised have important implications on the techniques to be used for in-service inspection, and so on design and fabrication aspects. Moreover, in-service inspection can only be an effective procedure if its possible weaknesses are recognised. The first problem is to ensure that an ultrasonic technique is used which is capable of detecting defects of an order of magnitude smaller than the critical size for each particular situation, in whatever defect orientation is important. The potential of different ultrasonic techniques will be compared. Next it is necessary to ensure coverage of all the relevant material. In this respect machine operation is superior to manual scanning, so that manipulation and scanning devices have to be developed. Problems of local geometry and of deviations in geometry have to be discussed with designer and fabricator; plate and clad quality have to be controlled (with respect to surface contour, metallurgical condition and freedom from interfering defects) to ensure inspectability in depth. The reliability of the mechanical and electronic equipment has to be assessed and designed to meet high requirements. Some presentational aids to detection and interpretation will be discussed. Having located a potential defect, the application of fracture mechanics treatments requires knowledge of size, shape and orientation. Some of the problems will be discussed together with possible solutions. (author)

  12. Reliability of pressure waveform analysis to determine correct epidural needle placement in labouring women.

    Science.gov (United States)

    Al-Aamri, I; Derzi, S H; Moore, A; Elgueta, M F; Moustafa, M; Schricker, T; Tran, D Q

    2017-07-01

    Pressure waveform analysis provides a reliable confirmatory adjunct to the loss-of-resistance technique to identify the epidural space during thoracic epidural anaesthesia, but its role remains controversial in lumbar epidural analgesia during labour. We performed an observational study in 100 labouring women of the sensitivity and specificity of waveform analysis to determine the correct location of the epidural needle. After obtaining loss-of-resistance, the anaesthetist injected 5 ml saline through the epidural needle (accounting for the volume already used in the loss-of-resistance). Sterile extension tubing, connected to a pressure transducer, was attached to the needle. An investigator determined the presence or absence of a pulsatile waveform, synchronised with the heart rate, on a monitor screen that was not in the view of the anaesthetist or the parturient. A bolus of 4 ml lidocaine 2% with adrenaline 5 μg.ml⁻¹ was administered, and the epidural block was assessed after 15 min. Three women displayed no sensory block at 15 min. The results showed: epidural block present with waveform present, 93; block absent with waveform absent, 2; block present with waveform absent, 4; block absent with waveform present, 1. Compared with the use of a local anaesthetic bolus to ascertain the epidural space, the sensitivity, specificity, positive and negative predictive values of waveform analysis were 95.9%, 66.7%, 98.9% and 33.3%, respectively. Epidural waveform analysis provides a simple adjunct to loss-of-resistance for confirming needle placement during performance of obstetric epidurals; however, further studies are required before its routine implementation in clinical practice. © 2017 The Association of Anaesthetists of Great Britain and Ireland.

  13. Materials technology and the energy problem : application to the reliability and safety of nuclear pressure vessels

    International Nuclear Information System (INIS)

    Garrett, G.G.

    1975-01-01

    In the U.S.A. over the past few months, widespread plant shutdowns because of cracking problems have produced considerable public pressure for a reappraisal of the reliability and safety of nuclear reactors. The awareness of such problems, and their solution, is particularly relevant to South Africa at this time. Some materials problems related to nuclear plant failure are examined in this paper. Since catastrophic failure (without prior warning from slow leakage) is in principle possible for light water (pressurised) reactors under operating conditions, it is essential to maintain rigorous manufacturing and quality control procedures, in conjunction with thorough and frequent examination by non-destructive testing methods. Although tests currently in progress in the U.S.A. on large-scale model reactors suggest that mathematical stress and failure analyses, for simple geometries at least, are sound, current in situ surveillance programmes aimed at categorizing the effects of irradiation are inadequate. In addition, the effects on materials properties and subsequent fracture resistance of the combined effects of irradiation and thermal shock (arising from the injection of emergency cooling water during a loss-of-coolant accident) are unknown. The problem of stress corrosion cracking in stainless steel pipelines is considerable, and at present virtually impossible to predict. Much of the available laboratory data is inapplicable in that it cannot account for the complex interactions of stress state, temperature, material variations and segregation effects, and water chemistry, especially in conjunction with irradiation effects, that are experienced in an operating environment.

  14. Accuracy and reliability of wrist-cuff devices for self-measurement of blood pressure.

    Science.gov (United States)

    Kikuya, Masahiro; Chonan, Kenichi; Imai, Yutaka; Goto, Eiji; Ishii, Masao

    2002-04-01

    dorsiflexion was significantly lower than that in palmar extension. In some cases, the finger plethysmogram did not disappear during maximum inflation of the wrist-cuff (≈250 mmHg), even in palmar extension and especially in palmar flexion, suggesting incomplete obstruction of the radial and/or ulnar arteries during inflation. The results suggest that wrist-cuff devices in their present form are inadequate for self-measurement of blood pressure and, thus, for general, clinical and practical use. However, the wrist-cuff device holds much promise, and its accuracy and reliability may be ensured by improvements in technology.

  15. Validating and calibrating the Nintendo Wii balance board to derive reliable center of pressure measures.

    Science.gov (United States)

    Leach, Julia M; Mancini, Martina; Peterka, Robert J; Hayes, Tamara L; Horak, Fay B

    2014-09-29

    The Nintendo Wii balance board (WBB) has generated significant interest in its application as a postural control measurement device in both the clinical and (basic, clinical, and rehabilitation) research domains. Although the WBB has been proposed as an alternative to the "gold standard" laboratory-grade force plate, additional research is necessary before the WBB can be considered a valid and reliable center of pressure (CoP) measurement device. In this study, we used the WBB and a laboratory-grade AMTI force plate (AFP) to simultaneously measure the CoP displacement of a controlled dynamic load, which has not been done before. A one-dimensional inverted pendulum was displaced at several different displacement angles and load heights to simulate a variety of postural sway amplitudes and frequencies (<1 Hz). Twelve WBBs were tested to address the issue of inter-device variability. There was a significant effect of sway amplitude, frequency, and direction on the WBB's CoP measurement error, with an increase in error as both sway amplitude and frequency increased and a significantly greater error in the mediolateral (ML) (compared to the anteroposterior (AP)) sway direction. There was no difference in error across the 12 WBBs, supporting low inter-device variability. A linear calibration procedure was then implemented to correct the WBB's CoP signals and reduce measurement error. There was a significant effect of calibration on the WBB's CoP signal accuracy, with a significant reduction in CoP measurement error (quantified by root-mean-squared error) from 2-6 mm (before calibration) to 0.5-2 mm (after calibration). WBB-based CoP signal calibration also significantly reduced the percent error in derived (time-domain) CoP sway measures, from -10.5% (before calibration) to -0.05% (after calibration) (percent errors averaged across all sway measures and in both sway directions). In this study, we characterized the WBB's CoP measurement error under controlled, dynamic loading conditions.
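
    A minimal sketch of the kind of linear calibration this record describes: fit the WBB CoP readings to simultaneous force-plate readings by least squares, then apply the correction; the arrays below are stand-ins for real data:

```python
# Hedged sketch: least-squares linear calibration of WBB CoP against a force plate.
import numpy as np

wbb = np.array([1.0, 4.8, 10.1, 15.3, 20.2])  # WBB CoP, mm (illustrative)
afp = np.array([0.5, 4.0, 9.0, 14.0, 19.0])   # force-plate CoP, mm (illustrative)

# Fit afp ~ a * wbb + b, then apply the correction to the WBB signal.
A = np.column_stack([wbb, np.ones_like(wbb)])
(a, b), *_ = np.linalg.lstsq(A, afp, rcond=None)
calibrated = a * wbb + b

rmse = lambda x: float(np.sqrt(np.mean((x - afp) ** 2)))
print(f"RMSE before: {rmse(wbb):.2f} mm, after: {rmse(calibrated):.2f} mm")
```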

  16. Reliability and validity of a new fingertip-shaped pressure algometer for assessing pressure pain thresholds in the temporomandibular joint and masticatory muscles.

    Science.gov (United States)

    Bernhardt, Olaf; Schiffman, Eric L; Look, John O

    2007-01-01

    To test in vitro and in vivo the reliability and accuracy of a new algometer, the pressure algometer for palpation (PAP), for measuring pressure pain thresholds (PPTs), and to compare its features with those of a commercially available pressure algometer. For in vitro accuracy testing, 6 repeated measurements were made at 8 defined test weights from 0.5 to 5 lb. In vivo validity testing compared the PAP to a standard instrument, the hand-held Somedic algometer, at 16 sites including the masticatory muscles, the temporomandibular joints, and the frontalis (as the control site) in 15 temporomandibular disorder (TMD) cases and 15 controls. Intraexaminer reliability was also assessed for both algometers. In vitro reliability was high, with coefficients of variation of the test weights at r = .99. PPT values correlated moderately between the 2 devices (r ranged from 0.38 to 0.66), and each algometer showed high reliability. Concurrent validity was demonstrated by statistically significant correlations between the devices. Both showed equally high capacity for differentiating TMD cases from controls. The PAP yielded significantly higher PPTs than the Somedic algometer.

  17. A reliable method of quantification of trace copper in beverages with and without alcohol by spectrophotometry after cloud point extraction

    Directory of Open Access Journals (Sweden)

    Ramazan Gürkan

    2013-01-01

    A new cloud point extraction (CPE) method was developed for the separation and preconcentration of copper(II) prior to spectrophotometric analysis. For this purpose, 1-(2,4-dimethylphenyl)azonaphthalen-2-ol (Sudan II) was used as a chelating agent and the solution pH was adjusted to 10.0 with borate buffer. Polyethylene glycol tert-octylphenyl ether (Triton X-114) was used as an extracting agent in the presence of sodium dodecylsulphate (SDS). After phase separation, based on the cloud point of the mixture, the surfactant-rich phase was diluted with acetone, and the enriched analyte was spectrophotometrically determined at 537 nm. The variables affecting CPE efficiency were optimized. The calibration curve was linear within the range 0.285-20 µg L⁻¹ with a detection limit of 0.085 µg L⁻¹. The method was successfully applied to the quantification of copper in different beverage samples.

  18. Modeling UV Radiation Feedback from Massive Stars. II. Dispersal of Star-forming Giant Molecular Clouds by Photoionization and Radiation Pressure

    Science.gov (United States)

    Kim, Jeong-Gyu; Kim, Woong-Tae; Ostriker, Eve C.

    2018-05-01

    UV radiation feedback from young massive stars plays a key role in the evolution of giant molecular clouds (GMCs) by photoevaporating and ejecting the surrounding gas. We conduct a suite of radiation hydrodynamic simulations of star cluster formation in marginally bound, turbulent GMCs, focusing on the effects of photoionization and radiation pressure on regulating the net star formation efficiency (SFE) and cloud lifetime. We find that the net SFE depends primarily on the initial gas surface density, Σ0, such that the SFE increases from 4% to 51% as Σ0 increases from 13 to 1300 M⊙ pc⁻². Cloud destruction occurs within 2–10 Myr after the onset of radiation feedback, or within 0.6–4.1 freefall times (increasing with Σ0). Photoevaporation dominates the mass loss in massive, low surface density clouds, but because most photons are absorbed in an ionization-bounded Strömgren volume, the photoevaporated gas fraction is proportional to the square root of the SFE. The measured momentum injection due to thermal and radiation pressure forces is proportional to Σ0^−0.74, and the ejection of neutrals substantially contributes to the disruption of low mass and/or high surface density clouds. We present semi-analytic models for cloud dispersal mediated by photoevaporation and by dynamical mass ejection, and show that the predicted net SFE and mass loss efficiencies are consistent with the results of our numerical simulations.

  1. Reliability of blood pressure parameters for dry weight estimation in hemodialysis patients.

    Science.gov (United States)

    Susantitaphong, Paweena; Laowaloet, Suthanit; Tiranathanagul, Khajohn; Chulakadabba, Adhisabandh; Katavetin, Pisut; Praditpornsilpa, Kearkiat; Tungsanga, Kriang; Eiam-Ong, Somchai

    2013-02-01

    Chronic volume overload resulting from interdialytic weight gain and inadequate fluid removal plays a significant role in poorly controlled high blood pressure. Although bioimpedance has been introduced as an accurate method for assessing hydration status, the instrument is not available in general hemodialysis (HEMO) centers. This study was conducted to explore the correlation between hydration status measured by bioimpedance and blood pressure parameters in chronic HEMO patients. Multifrequency bioimpedance analysis was used to determine pre- and post-dialysis hydration status in 32 stable HEMO patients. Extracellular water/total body water (ECW/TBW), determined by sum of segments from bioimpedance analysis, was used as an index of hydration status. The mean age was 57.9 ± 16.4 years. The mean dry weight and body mass index were 57.7 ± 14.5 kg and 22.3 ± 4.7 kg/m², respectively. Pre-dialysis ECW/TBW was significantly correlated with only pulse pressure (r = 0.5, P = 0.003), whereas post-dialysis ECW/TBW had significant correlations with pulse pressure, systolic blood pressure, and diastolic blood pressure (r = 0.6, P = 0.001; r = 0.4, P = 0.04; and r = -0.4, P = 0.02, respectively). After dialysis, the mean values of ECW/TBW, systolic blood pressure, mean arterial pressure, and pulse pressure were significantly decreased. ECW/TBW was used to classify the patients into normohydration (≤ 0.4) and overhydration (>0.4) groups. Systolic blood pressure, mean arterial pressure, and pulse pressure were significantly reduced after dialysis in the normohydration group but did not significantly change in the overhydration group. Pre-dialysis pulse pressure, post-dialysis pulse pressure, and post-dialysis systolic blood pressure in the overhydration group were significantly higher than in the normohydration group. Owing to their simplicity and low cost, blood pressure parameters, especially pulse pressure, might serve as a simple reference for clinicians to determine hydration status in HEMO patients.

  2. Bayes Analysis and Reliability Implications of Stress-Rupture Testing a Kevlar/Epoxy COPV Using Temperature and Pressure Acceleration

    Science.gov (United States)

    Phoenix, S. Leigh; Kezirian, Michael T.; Murthy, Pappu L. N.

    2009-01-01

    Composite Overwrapped Pressure Vessels (COPVs) that have survived a long service time under pressure generally must be recertified before service is extended. Flight certification is dependent on the reliability analysis to quantify the risk of stress rupture failure in existing flight vessels. Full certification of this reliability model would require a statistically significant number of lifetime tests to be performed and is impractical given the cost and limited flight hardware for certification testing purposes. One approach to confirm the reliability model is to perform a stress rupture test on a flight COPV. Currently, testing of such a Kevlar49 (Dupont)/epoxy COPV is nearing completion. The present paper focuses on a Bayesian statistical approach to analyze the possible failure time results of this test and to assess the implications in choosing between possible model parameter values that in the past have had significant uncertainty. The key uncertain parameters in this case are the actual fiber stress ratio at operating pressure, and the Weibull shape parameter for lifetime; the former has been uncertain due to ambiguities in interpreting the original and a duplicate burst test. The latter has been uncertain due to major differences between COPVs in the database and the actual COPVs in service. Any information obtained that clarifies and eliminates uncertainty in these parameters will have a major effect on the predicted reliability of the service COPVs going forward. The key result is that the longer the vessel survives, the more likely the more optimistic stress ratio model is correct. At the time of writing, the resulting effect on predicted future reliability is dramatic, increasing it by about one "nine," that is, reducing the predicted probability of failure by an order of magnitude. However, testing one vessel does not change the uncertainty on the Weibull shape parameter for lifetime since testing several vessels would be necessary.
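
    A minimal sketch of the Bayesian reasoning this record describes: two candidate stress-ratio models imply different Weibull lifetime distributions, and continued survival of the test vessel shifts the posterior weight toward the optimistic model. All parameter values are assumed for illustration:

```python
# Hedged sketch: posterior over two lifetime models given a survival time.
import math

def weibull_survival(t: float, scale: float, shape: float) -> float:
    """P(lifetime > t) for a Weibull(scale, shape) distribution."""
    return math.exp(-((t / scale) ** shape))

shape = 1.2                                         # assumed Weibull shape
models = {"optimistic": 80.0, "pessimistic": 15.0}  # assumed scales, years
prior = {name: 0.5 for name in models}

t_survived = 10.0                # vessel still intact after this many years
likelihood = {n: weibull_survival(t_survived, s, shape) for n, s in models.items()}
evidence = sum(prior[n] * likelihood[n] for n in models)
posterior = {n: prior[n] * likelihood[n] / evidence for n in models}
print(posterior)  # mass shifts toward the optimistic stress-ratio model
```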

  3. Reliability

    African Journals Online (AJOL)

    eobe

    RELIABILITY ... given by the code of practice. However, checks must ... an optimization procedure over the failure domain F corresponding ... of Concrete Members based on Utility Theory, Technical ...

  4. Seventh regular meeting of the International Working Group on Reliability of Reactor Pressure Components, Vienna, 3-5 September 1985

    International Nuclear Information System (INIS)

    1986-07-01

    The seventh regular meeting of the IAEA International Working Group on Reliability of Reactor Pressure Components was held at the Agency's Headquarters in Vienna from 3 to 5 September 1985. The representatives of Member States and of the Commission of the European Communities reported on the status of the research programmes in this field (12 presentations). A separate abstract was prepared for each of the presentations.

  5. An experimental study of assessment of weld quality on fatigue reliability analysis of a nuclear pressure vessel

    International Nuclear Information System (INIS)

    Dai Shuhe

    1993-01-01

    The steam generator in the primary coolant system of China's Qinshan Nuclear Power Plant (PWR) is a crucial unit belonging to the category of nuclear pressure vessels. The purpose of this research work is to examine the weld quality of the steam generator under fatigue loading and to assess its reliability, using the experimental results of fatigue tests of the nuclear pressure vessel material S-271 (Chinese Standard) and of qualification tests of weld seams of a simulated prototype of the bottom closure head of the steam generator. A guarantee of weld quality is proposed as a subsequent verification for the China National Nuclear Safety Supervision Bureau. The results of the reliability analysis reported in this work can be taken as supplementary material for the Probabilistic Safety Assessment (PSA) of Qinshan Nuclear Power Plant. According to the requirements of Provision II-1500, cyclic testing, ASME Boiler and Pressure Vessel Code, Section III, Rules for Construction of Nuclear Power Plant Components, a simulated prototype of the bottom closure head of the steam generator was made for qualification tests. To quantify the reliability assessment using the test data, two proposals are presented.

  6. Contributions to the determination of the thermal core reliability of pressurized water reactors

    International Nuclear Information System (INIS)

    Ackermann, G.; Horche, W.; Melchior, H.; Prasser, H.M.

    1982-09-01

    The investigations in the field of PWR thermohydraulics aim at a possible increase in the economy and reliability of WWER-type reactors. In detail, the flow distribution at the core entrance, the modification of the power distribution as a result of an irregular temperature distribution at the core entrance, and, based on the theory of hot spots, the thermal core reliability are studied. Qualitatively new methods characterized by low expenditure are applied. (author)

  7. The feasibility and reliability of capillary blood pressure measurements in the fingernail fold

    NARCIS (Netherlands)

    de Graaff, Jurgen C.; Ubbink, Dirk Th; Lagarde, Sjoerd M.; Jacobs, Michael J. H. M.

    2002-01-01

    Capillary blood pressure is an essential parameter in the study of the (patho-)physiology of microvascular perfusion. Currently, capillary pressure measurements in humans are performed using a servo-nulling micropressure system containing an oil-water interface, which suffers some drawbacks. In

  8. Reliability and integrity management program for PBMR helium pressure boundary components - HTR2008-58036

    International Nuclear Information System (INIS)

    Fleming, K. N.; Gamble, R.; Gosselin, S.; Fletcher, J.; Broom, N.

    2008-01-01

    The purpose of this paper is to present the results of a study to establish strategies for the reliability and integrity management (RIM) of passive metallic components for the PBMR. The RIM strategies investigated include design elements, leak detection and testing approaches, and non-destructive examinations. Specific combinations of strategies are determined to be necessary and sufficient to achieve target reliability goals for passive components. This study recommends a basis for the RIM program for the PBMR Demonstration Power Plant (DPP) and provides guidance for the development by the American Society of Mechanical Engineers (ASME) of RIM requirements for Modular High Temperature Gas-Cooled Reactors (MHRs). (authors)

  9. Reliability of Doppler and stethoscope methods of determining systolic blood pressures: considerations for calculating an ankle-brachial index.

    Science.gov (United States)

    Chesbro, Steven B; Asongwed, Elmira T; Brown, Jamesha; John, Emmanuel B

    2011-01-01

    The purposes of this study were to: (1) identify the interrater and intrarater reliability of systolic blood pressures using a stethoscope and Doppler to determine an ankle-brachial index (ABI), and (2) determine the correlation between the 2 methods. Peripheral arterial disease (PAD) affects approximately 8 to 12 million people in the United States, and nearly half of those with this disease are asymptomatic. Early detection and prompt treatment of PAD will improve health outcomes. It is important that clinicians perform tests that determine the presence of PAD. Two individual raters trained in the ABI procedure measured the systolic blood pressures of 20 individuals' upper and lower extremities. Standard ABI measurement protocols were observed. Raters individually recorded the systolic blood pressures of each extremity using a stethoscope and a Doppler, for a total of 640 independent measures. Interrater reliability of Doppler measurements to determine SBP at the ankle was very strong (intraclass correlation coefficient [ICC], 0.93-0.99) compared to moderate to strong reliability using a stethoscope (ICC, 0.64-0.87). Agreement between the 2 devices to determine SBP was moderate to very weak (ICC, 0.13-0.61). Comparisons of the use of Doppler and stethoscope to determine ABI showed weak to very weak intrarater correlation (ICC, 0.17-0.35). Linear regression analysis of the 2 methods to determine ABI showed positive but weak to very weak correlations (r2 = .013, P = .184). A Doppler ultrasound is recommended over a stethoscope for accuracy in systolic pressure readings for ABI measurements.
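
    ICC values of the kind reported here can be computed from a subjects-by-raters matrix with the Shrout-Fleiss two-way random-effects formula; the sketch below implements ICC(2,1) for absolute agreement, with made-up ankle SBP readings standing in for real data.

        # ICC(2,1): two-way random effects, absolute agreement (Shrout & Fleiss).
        import numpy as np

        def icc_2_1(x):
            """x: n subjects x k raters matrix of ratings."""
            x = np.asarray(x, dtype=float)
            n, k = x.shape
            grand = x.mean()
            ss_subj = k * ((x.mean(axis=1) - grand) ** 2).sum()
            ss_rater = n * ((x.mean(axis=0) - grand) ** 2).sum()
            ss_err = ((x - grand) ** 2).sum() - ss_subj - ss_rater
            msr = ss_subj / (n - 1)               # between-subjects mean square
            msc = ss_rater / (k - 1)              # between-raters mean square
            mse = ss_err / ((n - 1) * (k - 1))    # residual mean square
            return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

        # Hypothetical ankle SBP readings (mm Hg) from two raters:
        readings = [[120, 118], [134, 130], [110, 112], [142, 138], [125, 124]]
        print(f"ICC(2,1) = {icc_2_1(readings):.3f}")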

  10. A reevaluation of the reliability analysis of the low pressure injection system for Angra-1

    International Nuclear Information System (INIS)

    Oliveira, L.F.S. de; Fleming, P.V.; Frutuoso e Melo, P.F.F.; Tayt-Sohn, L.C.

    1983-01-01

    The emergency core cooling system of Angra 1 is analysed with emphasis on the low pressure injection system, using the fault tree technique. All failure modes of the components are considered in this analysis. (author) [pt

  11. Cloud Infrastructure Security

    OpenAIRE

    Velev, Dimiter; Zlateva, Plamena

    2010-01-01

    Part 4: Security for Clouds; International audience; Cloud computing can help companies accomplish more by eliminating the physical bonds between an IT infrastructure and its users. Users can purchase services from a cloud environment that could allow them to save money and focus on their core business. At the same time certain concerns have emerged as potential barriers to rapid adoption of cloud services such as security, privacy and reliability. Usually the information security professiona...

  12. Reliability Assessment of 2400 MWth Gas-Cooled Fast Reactor Natural Circulation Decay Heat Removal in Pressurized Situations

    Directory of Open Access Journals (Sweden)

    C. Bassi

    2008-01-01

    As the 2400 MWth gas-cooled fast reactor concept makes use of passive safety features in combination with active safety systems, the assessment of natural circulation decay heat removal (NCDHR) reliability and performance within the ongoing probabilistic safety assessment in support of the reactor design, named “probabilistic engineering assessment” (PEA), constitutes a challenge. Within the 5th Framework Program for Research and Development (FPRD) of the European Community, a methodology has been developed to evaluate the reliability of passive systems characterized by a moving fluid and whose operation is based on physical principles such as natural circulation. This reliability method for passive systems (RMPS) is based on the propagation of uncertainties through thermal-hydraulic (T-H) calculations. The aim of this exercise is to determine the performance reliability of the DHR system operating in a “passive” mode, taking into account the uncertainties of the parameters retained for the thermal-hydraulic calculations performed with the CATHARE 2 code. According to the preliminary PEA results, which exhibit the weight of pressurized scenarios (i.e., with an intact primary circuit boundary) in the core damage frequency (CDF), the RMPS exercise first focuses on NCDHR performance at these T-H conditions.
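
    The core of RMPS - sampling uncertain inputs and propagating them through the thermal-hydraulic code to estimate a failure probability - can be sketched as below. A cheap analytic surrogate stands in for CATHARE 2, and every distribution, coefficient, and the temperature limit is a hypothetical placeholder.

        # RMPS-style Monte Carlo propagation of input uncertainties (sketch).
        import numpy as np

        rng = np.random.default_rng(42)
        N = 100_000
        power = rng.normal(70.0, 3.5, N)        # decay heat, MW (assumed)
        pressure = rng.normal(7.0, 0.35, N)     # backup pressure, MPa (assumed)
        loss = rng.lognormal(0.0, 0.2, N)       # loop form-loss multiplier (assumed)

        def peak_clad_temp(q, p, k):
            """Analytic surrogate for the T-H code: hotter with power and
            flow losses, cooler at higher pressure (denser coolant)."""
            return 560.0 + 8.5 * q / (p / 7.0) ** 0.8 * k ** 0.5

        t_peak = peak_clad_temp(power, pressure, loss)
        print(f"P(T_peak > 1200 K) ~ {np.mean(t_peak > 1200.0):.2e}")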

  13. A Stochastic Reliability Model for Application in a Multidisciplinary Optimization of a Low Pressure Turbine Blade Made of Titanium Aluminide

    Directory of Open Access Journals (Sweden)

    Christian Dresbach

    Currently, there are many research activities dealing with gamma titanium aluminide (γ-TiAl) alloys as new materials for low pressure turbine (LPT) blades. Even though the scatter in mechanical properties of such intermetallic alloys is more pronounced than in conventional metallic alloys, stochastic investigations of γ-TiAl alloys are very rare. For this reason, we analyzed the scatter in static and dynamic mechanical properties of the cast alloy Ti-48Al-2Cr-2Nb. It was found that this alloy shows a size effect in strength which is less pronounced than the size effect of brittle materials. A weakest-link approach is extended to describe a scalable size effect under multiaxial stress states and implemented in a post-processing tool for reliability analysis of real components. The presented approach is a first applicable reliability model for semi-brittle materials. The developed reliability tool was integrated into a multidisciplinary optimization of the geometry of an LPT blade. Some processes of the optimization were distributed in a wide area network, so that specialized tools for each discipline could be employed. The optimization results show that it is possible to increase the aerodynamic efficiency and the structural mechanics reliability at the same time, while ensuring the blade can be manufactured in an investment casting process.
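
    For reference, the classical weakest-link (Weibull) formulation that such an approach builds on writes the failure probability of a component under a multiaxial equivalent stress field \sigma_{\mathrm{eq}}(\mathbf{x}) as (generic textbook form, not the paper's calibration):

        P_f \;=\; 1 - \exp\!\left[-\frac{1}{V_0}\int_{V}\left(\frac{\sigma_{\mathrm{eq}}(\mathbf{x})}{\sigma_0}\right)^{m}\mathrm{d}V\right]

    so the mean strength of two components scales as \bar{\sigma}_1/\bar{\sigma}_2 = (V_2/V_1)^{1/m}; a "less pronounced" size effect, as found here, corresponds to a larger effective Weibull modulus m (or a reduced volume exponent) in the enhanced model.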

  14. Expansion of magnetic clouds

    International Nuclear Information System (INIS)

    Suess, S.T.

    1987-01-01

    Magnetic clouds are a carefully defined subclass of all interplanetary signatures of coronal mass ejections whose geometry is thought to be that of a cylinder embedded in a plane. It has been found that the total magnetic pressure inside the clouds is higher than the ion pressure outside, and that the clouds are expanding at 1 AU at about half the local Alfven speed. The geometry of the clouds is such that even though the magnetic pressure inside is larger than the total pressure outside, expansion need not occur, because the excess pressure can be balanced by magnetic tension - the pinch effect. The evidence for expansion of clouds at 1 AU is nevertheless quite strong, so another reason for it must be found. It is demonstrated that the observations can be reproduced by taking into account the effects of geometrical distortion of the low-plasma-beta clouds as they move away from the Sun.
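
    The tension argument can be made concrete with the standard radial force balance of a cylindrical flux rope (a textbook screw-pinch relation, not taken from the paper):

        \frac{\mathrm{d}}{\mathrm{d}r}\left(p + \frac{B_z^2 + B_\phi^2}{2\mu_0}\right) \;=\; -\,\frac{B_\phi^2}{\mu_0\, r}

    The right-hand side is the inward tension of the azimuthal field, which can confine an interior whose total pressure exceeds the ambient value.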

  15. Reliable blood pressure self-measurement in the obstetric waiting room

    DEFF Research Database (Denmark)

    Wagner, Stefan; Kamper, C. H.; Rasmussen, Niels H

    2014-01-01

    Background: Patients often fail to adhere to clinical recommendations when using current blood pressure self-measurement (BPSM) methods and equipment. As existing BPSM equipment is not able to detect non-adherent behavior, this could result in misdiagnosis and treatment error. To overcome...... patients scheduled for self-measuring their blood pressure (BP) in the waiting room at an obstetrics department's outpatient clinic to perform an additional BPSM using ValidAid. We then compared the automatically measured and classified values from ValidAid with our manual observations. Results: We found...

  16. Modelling massive-star feedback with Monte Carlo radiation hydrodynamics: photoionization and radiation pressure in a turbulent cloud

    Science.gov (United States)

    Ali, Ahmad; Harries, Tim J.; Douglas, Thomas A.

    2018-04-01

    We simulate a self-gravitating, turbulent cloud of 1000 M⊙ with photoionization and radiation pressure feedback from a 34 M⊙ star. We use a detailed Monte Carlo radiative transfer scheme alongside the hydrodynamics to compute photoionization and thermal equilibrium with dust grains and multiple atomic species. Using these gas temperatures, dust temperatures, and ionization fractions, we produce self-consistent synthetic observations of line and continuum emission. We find that all material is dispersed from the (15.5 pc)³ grid within 1.6 Myr or 0.74 free-fall times. Mass exits with a peak flux of 2 × 10⁻³ M⊙ yr⁻¹, showing efficient gas dispersal. The model without radiation pressure has a slight delay in the breakthrough of ionization, but overall its effects are negligible. 85 per cent of the volume, and 40 per cent of the mass, become ionized - dense filaments resist ionization and are swept up into spherical cores with pillars that point radially away from the ionizing star. We use free-free emission at 20 cm to estimate the production rate of ionizing photons. This is almost always underestimated: by a factor of a few at early stages, then by orders of magnitude as mass leaves the volume. We also test the ratio of dust continuum surface brightnesses at 450 and 850 μm to probe dust temperatures. This underestimates the actual temperature by more than a factor of 2 in areas of low column density or high line-of-sight temperature dispersion; the H II region cavity is particularly prone to this discrepancy. However, the probe is accurate in dense locations such as filaments.

  17. Ninth regular meeting of the International Working Group on Reliability of Reactor Pressure Components, Vienna, 18-20 October 1988

    International Nuclear Information System (INIS)

    1990-04-01

    The 9th regular meeting of the International Working Group on Reliability of Pressure Components took place from 18-20 October 1988 at the Agency's Headquarters. The meeting was attended by 25 representatives from 19 Member States and International Organizations. The agenda of the meeting included overviews of the national activities in the field of pressure retaining components of PWRs, review of the past IWGRRPC activities and updating of the working plan for years 1989-1992. A great deal of attention was paid to the involvement of the IWGRRPC in the Agency's programme on nuclear power plant ageing and life extension. Members of the IWGRRPC reviewed the long term plan of the activities and proposed a provisional list and scope of the IAEA Specialists' Meetings planned for the period 1989-1992. Seventeen papers were presented at the meeting. A separate abstract was prepared for each of these papers. Refs, figs and tabs

  18. Operating reliability of valves in French pressurized water nuclear power plants

    International Nuclear Information System (INIS)

    Conte

    1986-10-01

    Taking into account the large number of valves (about 10000) in a PWR nuclear power plant, the importance of some valves to the safety functions, and the cost resulting from their unavailability, the individual operability of this equipment has to be ensured at a high reliability level. This assurance can be obtained by means of an effort at all the stages which contribute to the quality of the product: design, qualification tests, fabrication, tests at the start-up stage, maintenance and tests during power plant operation, and experience feedback. This paper places particular emphasis on the tests carried out on qualification loops [fr

  19. Safety and reliability of pressure components with special emphasis on advanced methods of NDT. Vol. 1

    International Nuclear Information System (INIS)

    1986-01-01

    24 papers discuss various methods for nondestructive testing of materials, e.g. eddy current measurement, EMAG analyser, tomography, ultrasound, holographic interferometry, and the optical sound field camera. Special consideration is given to mathematical programmes and tests that allow fracture-mechanical parameters to be determined and cracks to be assessed in various components, system parts and individual specimens, both in pressurized systems and NPP systems. Studies focus on weld seams and adjacent areas. (DG) [de

  20. Author Correction: Reliability and feasibility of gait initiation centre-of-pressure excursions using a Wii® Balance Board in older adults at risk of falling.

    Science.gov (United States)

    Lee, James; Webb, Graham; Shortland, Adam P; Edwards, Rebecca; Wilce, Charlotte; Jones, Gareth D

    2018-05-12

    In the original publication, the article title was incorrectly published as 'Reliability and feasibility of gait initiation centre-of-pressure excursions using a Wii® Balance Board in older adults at risk of failing'. The correct title should read as 'Reliability and feasibility of gait initiation centre-of-pressure excursions using a Wii® Balance Board in older adults at risk of falling'.

  1. Analysis of the reliability of quality assurance of welded nuclear pressure vessels with regard to catastrophic failure

    Energy Technology Data Exchange (ETDEWEB)

    Ostberg, G [Lund Institute of Technology, Dept. of Materials Engineering (Sweden); Klingenstierna, B [FTL, Military Electronics Laboratory, National Defence Research Institute, Stockholm (Sweden); Sjoberg, L [Goteborg Univ., Dept. of Psychology (Sweden)

    1976-07-01

    The project is described as an analysis of the reliability of quality assurance of welded nuclear pressure vessels with regard to catastrophic failure. Its scope extends beyond previous statistical evaluations of the risk of catastrophic failure, beyond previous studies of human malfunction, and beyond current studies of probabilistic fracture mechanics. The latter deal only with 'normal' data and 'normal' processes and procedures according to established rules and regulations, whereas the present study concerns deficiencies or more or less complete failures of normal procedures and processes. Hopefully such events will prove to be rare enough to be characterized as 'unique'; this, in turn, means that the result of the investigation is not a new statistical figure but rather a survey of types and frequencies of errors and error-producing conditions. The emphasis is on the main pressure vessel; related information on the primary circuit is included only when this can be done without excessive effort or cost. The avenues of approach, in terms of technical-academic disciplines, are reliability techniques and the psychology of analysis work and control processes.

  2. Reliability and Maintainability Analysis of a High Air Pressure Compressor Facility

    Science.gov (United States)

    Safie, Fayssal M.; Ring, Robert W.; Cole, Stuart K.

    2013-01-01

    This paper discusses a Reliability, Availability, and Maintainability (RAM) independent assessment conducted to support the refurbishment of the Compressor Station at the NASA Langley Research Center (LaRC). The paper discusses the methodologies used by the assessment team to derive the repair-by-replacement (RR) strategies to improve the reliability and availability of the Compressor Station (Ref. 1). This includes a RAPTOR simulation model that was used to generate the statistical data analysis needed to derive a 15-year investment plan to support the refurbishment of the facility. To summarize, the study results clearly indicate that the air compressors are well past their design life. The major failures of the compressors indicate that significant latent failure causes are present. Given the occurrence of these high-cost failures following compressor overhauls, future major failures should be anticipated if the compressors are not replaced. Given the results from the RR analysis, the study team recommended a compressor replacement strategy. Based on the data analysis, the RR strategy will lead to sustainable operations through significant improvements in reliability, availability, and the probability of meeting the air demand, with an acceptable investment cost that should translate, in the long run, into major cost savings. For example, the probability of meeting air demand improved from 79.7 percent for the Base Case to 97.3 percent. Expressed in terms of a reduction in the probability of failing to meet demand (1 in 5 days to 1 in 37 days), the improvement is about 700 percent. Similarly, compressor replacement improved the operational availability of the facility from 97.5 percent to 99.8 percent. Expressed in terms of a reduction in system unavailability (1 in 40 to 1 in 500), the improvement is better than 1000 percent (an order of magnitude improvement). It is worthy to note that the methodologies, tools, and techniques used in the LaRC study can be used to evaluate
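
    The "1 in N days" odds quoted above follow directly from the reported probabilities; the short sketch below reproduces the conversions (the input figures are taken from the abstract itself).

        # Converting the reported probabilities into "1 in N" odds.
        base_meet, new_meet = 0.797, 0.973      # P(meeting air demand)
        base_avail, new_avail = 0.975, 0.998    # operational availability

        for label, old, new in [("demand shortfall", base_meet, new_meet),
                                ("unavailability", base_avail, new_avail)]:
            print(f"{label}: 1 in {1/(1-old):.0f} -> 1 in {1/(1-new):.0f} "
                  f"(factor {(1-old)/(1-new):.1f})")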

  3. Evaluation of design, leak monitoring, dnd NDEA strategies to assure PBMR Helium pressure boundary reliability - HTR2008-58037

    International Nuclear Information System (INIS)

    Fleming, K. N.; Smit, K.

    2008-01-01

    This paper discusses the reliability and integrity management (RIM) strategies that have been applied in the design of the PBMR passive metallic components for the helium pressure boundary (HPB) to meet reliability targets, and evaluates which combination of strategies is needed to meet the targets. The strategies considered include deterministic design strategies to reduce or eliminate the potential for specific damage mechanisms, use of an on-line leak monitoring system and associated design provisions that provide a high degree of leak detection reliability, and periodic nondestructive examinations combined with repair and replacement strategies to reduce the probability that degradation would lead to pipe ruptures. The PBMR RIM program for passive metallic piping components uses a leak-before-break philosophy. A Markov model developed for use in LWR risk-informed in-service inspection evaluations was applied to investigate the impact of alternative RIM strategies and plant age assumptions on the pipe rupture frequencies as a function of rupture size. Some key results of this investigation are presented in this paper. (authors)
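
    The kind of Markov model referred to here tracks a pipe segment through discrete degradation states, with inspection and leak detection feeding repair transitions back toward the sound state. The sketch below is a minimal illustration; the four states and all rates are hypothetical, not PBMR values.

        # Minimal continuous-time Markov degradation model with repair.
        # States: 0 = sound, 1 = flaw, 2 = leak, 3 = rupture (absorbing).
        import numpy as np
        from scipy.linalg import expm

        phi, lam = 1.0e-3, 1.0e-2      # flaw initiation, flaw -> leak (per year)
        rho_f, rho_l = 1.0e-3, 1.0e-2  # flaw -> rupture, leak -> rupture
        omega = 0.5                    # repair rate via ISI / leak detection

        Q = np.array([
            [-phi,   phi,                      0.0,              0.0  ],
            [omega, -(omega + lam + rho_f),    lam,              rho_f],
            [omega,  0.0,                     -(omega + rho_l),  rho_l],
            [0.0,    0.0,                      0.0,              0.0  ],
        ])

        p0 = np.array([1.0, 0.0, 0.0, 0.0])
        for t in (10.0, 40.0):                    # plant age, years
            print(f"t = {t:4.0f} y: P(rupture) = {(p0 @ expm(Q * t))[3]:.2e}")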

  4. Application of a Reliability Model Generator to a Pressure Tank System

    Institute of Scientific and Technical Information of China (English)

    Kathryn Stockwell; Sarah Dunnett

    2013-01-01

    A number of mathematical modelling techniques exist which are used to measure the performance of a given system, by assessing each individual component within the system. This can be used to determine the failure frequency or probability of the system. Software is available to undertake the task of analysing these mathematical models after an individual or group of individuals manually create the models. The process of generating these models is time-consuming and reduces the impact of the model on the system design. One way to improve this would be to generate the model automatically. In this work, the procedure to automatically construct a model, based on Petri nets, for systems undergoing a phased mission is applied to a pressure tank system undertaking a four-phase mission.

  5. Benchmarking Cloud Storage Systems

    OpenAIRE

    Wang, Xing

    2014-01-01

    With the rise of cloud computing, many cloud storage systems like Dropbox, Google Drive and Mega have been built to provide decentralized and reliable file storage. It is thus of prime importance to know their features, performance, and the best way to make use of them. In this context, we introduce BenchCloud, a tool designed as part of this thesis to conveniently and efficiently benchmark any cloud storage system. First, we provide a study of six commonly-used cloud storage systems to ident...

  6. A TRUSTWORTHY CLOUD FORENSICS ENVIRONMENT

    OpenAIRE

    Zawoad, Shams; Hasan, Ragib

    2015-01-01

    Part 5: CLOUD FORENSICS; International audience; The rapid migration from traditional computing and storage models to cloud computing environments has made it necessary to support reliable forensic investigations in the cloud. However, current cloud computing environments often lack support for forensic investigations and the trustworthiness of evidence is often questionable because of the possibility of collusion between dishonest cloud providers, users and forensic investigators. This chapt...

  7. Influence of the inservice inspection on the reliability of a pressure vessel under stress corrosion cracking

    International Nuclear Information System (INIS)

    Francisco, Alexandre S.; Melo, Paulo F.F.

    1997-01-01

    The influence of in-service inspection on the structural integrity of a pressure vessel subjected to an aggressive environment is assessed in a quantitative manner with the purpose of establishing an optimal inspection program. For illustration purposes, the specific case of an LPG storage vessel under stress corrosion cracking is analysed, for which some failure data are available. Its failure probability is evaluated by means of a model published in the literature, in which three probabilities are taken into account: the first concerns the distribution of initial crack depths; the second accounts for the probability of detecting these cracks by nondestructive testing; and the third is the probability of not detecting a crack until the next inspection is undertaken. The analysis is performed by means of a trade-off between two important parameters: the inspection sensitivity and the inspection interval. The influence of these parameters on the probability of failure is displayed in a graphical fashion in which the benefits achievable by various inspection programs are clearly revealed. Decisions can then be taken as to the optimal parameters. (author). 8 refs., 2 figs
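
    The trade-off between inspection sensitivity and interval can be sketched as follows: a crack grows toward a critical depth, and each inspection detects it with a probability of detection (POD) that rises with depth. The growth law, POD curve, and all parameters below are hypothetical placeholders, not the model from the paper.

        # Inspection sensitivity vs. inspection interval (illustrative sketch).
        a0, rate, a_crit = 1.0, 0.5, 10.0   # initial depth (mm), growth (mm/y), critical depth

        def pod(a, a50=3.0, slope=1.5):
            """Assumed log-logistic probability of detection vs. crack depth."""
            return 1.0 / (1.0 + (a50 / a) ** slope)

        def p_missed(interval):
            """Probability that every inspection misses the crack before a_crit."""
            p, t = 1.0, interval
            while a0 + rate * t < a_crit:
                p *= 1.0 - pod(a0 + rate * t)
                t += interval
            return p

        for dt in (1, 2, 4, 8):   # shorter intervals sharply cut the miss probability
            print(f"interval {dt} y: P(missed) = {p_missed(dt):.3f}")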

  8. The effect of Web-based Braden Scale training on the reliability and precision of Braden Scale pressure ulcer risk assessments.

    Science.gov (United States)

    Magnan, Morris A; Maklebust, Joann

    2008-01-01

    To evaluate the effect of Web-based Braden Scale training on the reliability and precision of pressure ulcer risk assessments made by registered nurses (RNs) working in acute care settings. Pretest-posttest, 2-group, quasi-experimental design. Five hundred Braden Scale risk assessments were made on 102 acute care patients deemed to be at various levels of risk for pressure ulceration. Assessments were made by RNs working in acute care hospitals at 3 different medical centers where the Braden Scale was in regular daily use (2 medical centers) or new to the setting (1 medical center). The Braden Scale for Predicting Pressure Sore Risk was used to guide pressure ulcer risk assessments. A Web-based version of the Detroit Medical Center Braden Scale Computerized Training Module was used to teach nurses correct use of the Braden Scale and selection of risk-based pressure ulcer prevention interventions. In the aggregate, RNs generated reliable Braden Scale pressure ulcer risk assessments 65% of the time after training. The effect of Web-based Braden Scale training on the reliability and precision of assessments varied according to familiarity with the scale. With training, new users of the scale made reliable assessments 84% of the time and significantly improved the precision of their assessments. The reliability and precision of Braden Scale risk assessments made by its regular users were unaffected by training. Technology-assisted Braden Scale training improved both the reliability and precision of risk assessments made by new users of the scale, but had virtually no effect on the reliability or precision of risk assessments made by regular users of the instrument. Further research is needed to determine the best approaches for improving the reliability and precision of Braden Scale assessments made by its regular users.

  9. OMI/Aura Cloud Pressure and Fraction (Raman Scattering) 1-Orbit L2 Swath 13x24 km V003

    Data.gov (United States)

    National Aeronautics and Space Administration — The reprocessed Aura OMI Version 003 Level 2 Cloud Data Product OMCLDRR is made available (in April 2012) to the public from the NASA Goddard Earth Sciences Data and...

  10. The influence of frequency and reliability of in-service inspection on reactor pressure vessel disruptive failure probability

    International Nuclear Information System (INIS)

    Jordan, G.M.

    1977-01-01

    A simple probabilistic methodology is used to investigate the benefit, in terms of reduction of disruptive failure probability, which comes from the application of periodic in-service inspection (ISI) to nuclear pressure vessels. The analysis indicates the strong interaction between inspection benefit and the intrinsic quality of the structure. In order to quantify the inspection benefit, assumptions are made which allow the quality to be characterized in terms of the parameters governing a log-normal distribution of time-to-failure. Using these assumptions, it is shown that the overall benefit of ISI is unlikely to exceed an order of magnitude in terms of reduction of disruptive failure probability. The method is extended to evaluate the effect of the periodicity and reliability of the inspection process itself. (author)

  12. Ambient Pressure Laser Desorption—Chemical Ionization Mass Spectrometry for Fast and Reliable Detection of Explosives, Drugs, and Their Precursors

    Directory of Open Access Journals (Sweden)

    René Reiss

    2018-06-01

    Fast and reliable information is crucial for first responders to draw correct conclusions at crime scenes. An ambient pressure laser desorption (APLD) mass spectrometer is introduced for this scenario, which enables detecting substances on surfaces without sample pretreatment. It is especially useful for substances with low vapor pressure and thermolabile ones. The APLD allows for the separation of desorption and ionization into two steps and, therefore, both can be optimized separately. Within this work, an improved version of the developed system is shown that achieves limits of detection (LOD) down to 500 pg while remaining fast and flexible. Furthermore, realistic scenarios are applied to prove the usability of this system for real-world issues. For this purpose, post-blast residues of a bomb from the Second World War were analyzed, and the presence of PETN was proven without sample pretreatment. In addition, the analyzable substance range could be expanded to various drugs and drug precursors. Thus, the presented instrumentation can be utilized for an increased number of forensically important compound classes without changing the setup. Drug precursors revealed an LOD ranging from 6 to 100 ng. Drugs such as cocaine hydrochloride, heroin, 3,4-methylenedioxymethamphetamine hydrochloride (MDMA hydrochloride), and others exhibit an LOD between 10 and 200 ng.

  13. Reliability of transpulmonary pressure-time curve profile to identify tidal recruitment/hyperinflation in experimental unilateral pleural effusion.

    Science.gov (United States)

    Formenti, P; Umbrello, M; Graf, J; Adams, A B; Dries, D J; Marini, J J

    2017-08-01

    The stress index (SI) is a parameter that characterizes the shape of the airway pressure-time profile (P/t). It indicates the slope progression of the curve, reflecting both lung and chest wall properties. The presence of pleural effusion alters the mechanical properties of the respiratory system, decreasing transpulmonary pressure (Ptp). We investigated whether the SI computed using the Ptp tracing would provide reliable insight into tidal recruitment/overdistension during the tidal cycle in the presence of unilateral effusion. Unilateral pleural effusion was simulated in anesthetized, mechanically ventilated pigs. Respiratory system mechanics and thoracic computed tomography (CT) were studied to assess P/t curve shape and changes in global lung aeration. SI derived from airway pressure (Paw) was compared with that calculated from Ptp under the same conditions. These results were themselves compared with quantitative CT analysis as a gold standard for tidal recruitment/hyperinflation. Despite marked changes in tidal recruitment, mean values of SI computed either from Paw or Ptp were remarkably insensitive to variations of PEEP or condition. After the instillation of effusion, the SI indicated a preponderant over-distension effect, not detected by CT. After the increment in PEEP level, the extent of CT-determined tidal recruitment suggested a large recruitment effect of PEEP, as reflected by lung compliance; both SI values were unaffected. We showed that the ability of the SI to predict tidal recruitment and overdistension was significantly reduced in a model of altered chest wall-lung relationship, even when the parameter was computed from the Ptp curve profile.

  14. Reliability in the utility computing era: Towards reliable Fog computing

    DEFF Research Database (Denmark)

    Madsen, Henrik; Burtschy, Bernard; Albeanu, G.

    2013-01-01

    This paper considers current paradigms in computing and outlines their most important reliability aspects. The Fog computing paradigm, as a non-trivial extension of the Cloud, is considered, and the reliability of networks of smart devices is discussed. Combining the reliability requirements of grid and cloud paradigms with the reliability requirements of networks of sensors and actuators, it follows that designing a reliable Fog computing platform is feasible.

  15. Progress in evaluation and improvement in nondestructive examination reliability for inservice inspection of Light Water Reactors (LWRs) and characterize fabrication flaws in reactor pressure vessels

    International Nuclear Information System (INIS)

    Doctor, S.R.; Bowey, R.E.; Good, M.S.; Friley, J.R.; Kurtz, R.J.; Simonen, F.A.; Taylor, T.T.; Heasler, P.G.; Andersen, E.S.; Diaz, A.A.; Greenwood, M.S.; Hockey, R.L.; Schuster, G.J.; Spanner, J.C.; Vo, T.V.

    1991-10-01

    This paper is a review of the work conducted under two programs. One (the NDE Reliability Program) is a multi-year program addressing the reliability of nondestructive evaluation (NDE) for the inservice inspection (ISI) of light water reactor components. This program examines the reliability of current NDE and the effectiveness of evolving technologies, and provides assessments and recommendations to ensure that NDE is applied at the right time, in the right place, with sufficient effectiveness that defects of importance to structural integrity will be reliably detected and accurately characterized. The second program (Characterizing Fabrication Flaws in Reactor Pressure Vessels) is assembling a database to quantify the distribution of fabrication flaws that exist in US nuclear reactor pressure vessels with respect to density, size, type, and location. These programs are discussed as two separate sections in this report. 4 refs., 7 figs

  16. Reliability of an HTR-module primary circuit pressure boundary. Influences, sensitivity, and comparison with a PWR

    International Nuclear Information System (INIS)

    Staat, M.

    1995-01-01

    The reliability of the HTR-module for electricity and steam generation was analysed for normal operation as well as accident conditions. The probabilistic fracture mechanics assessment was performed with a modification of the ZERBERUS code on the basis of widely used data. The calculated failure probabilities may thus be compared with similar investigations. The HTR-module primary circuit pressure boundary as a unit showed leak-before-break behaviour in a probabilistic sense, although a break was more probable than a leak for some of its parts. However, the findings may depend greatly on the stochastic data. Therefore a stochastic reference problem is defined and the results are compared with the Japanese round robin on a PWR section. Possible changes of failure probabilities and of the leak-before-break behaviour are discussed for different criteria for the events leading to a leak, and for modifications of the stochastic reference problem such as the inclusion of NDE. The results may be used to identify those stochastic variables which have the greatest influence on the computed failure probabilities, and perhaps to justify further work which would provide more detailed information on these probabilities. Furthermore, there is an obvious need to reduce the non-statistical causes of large variations in failure probabilities. (orig.)

  17. Reliable before-fabrication forecasting of normal and touch mode MEMS capacitive pressure sensor: modeling and simulation

    Science.gov (United States)

    Jindal, Sumit Kumar; Mahajan, Ankush; Raghuwanshi, Sanjeev Kumar

    2017-10-01

    An analytical model and numerical simulation of the performance of MEMS capacitive pressure sensors in both normal and touch modes are required to predict the expected behavior of the sensor prior to fabrication. Obtaining such information should be based on a complete analysis of performance parameters such as the deflection of the diaphragm, the change of capacitance as the diaphragm deflects, and the sensitivity of the sensor. In the literature, limited work has been carried out on this issue; moreover, due to the approximation factors of the polynomials used, a tolerance error cannot be avoided. Reliable before-fabrication forecasting requires exact mathematical calculation of the parameters involved. A second-order polynomial equation is calculated mathematically for the key performance parameters of both modes. This eliminates the approximation factor, and an exact result can be studied, maintaining high accuracy. The elimination of approximation factors and the approach of exact results are based on a new design parameter (δ) that we propose. The design parameter gives the designers an initial hint of how the sensor will behave once it is fabricated. The complete work is aided by extensive mathematical detailing of all the parameters involved. Next, we verified our claims using MATLAB® simulation. Since MATLAB® effectively provides the simulation theory for the design approach, the more complicated finite element method is not used.
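
    The normal-mode behavior described here can be sketched numerically: a clamped circular diaphragm deflects under pressure with the classical small-deflection profile w(r) = w0·[1 - (r/a)²]², and the capacitance follows by integrating over the narrowing gap. This is a generic textbook sketch, not the authors' second-order polynomial model, and all dimensions are hypothetical.

        # Capacitance of a circular-diaphragm capacitive pressure sensor
        # (normal mode, small deflections). All dimensions are hypothetical.
        import numpy as np

        eps0 = 8.854e-12                    # permittivity of free space, F/m
        a, g, h = 500e-6, 2.0e-6, 20e-6     # radius, gap, thickness (m)
        E, nu = 170e9, 0.28                 # silicon, assumed
        D = E * h**3 / (12 * (1 - nu**2))   # flexural rigidity

        def capacitance(p, n=2000):
            w0 = p * a**4 / (64 * D)        # center deflection under pressure p
            r = np.linspace(0.0, a, n)
            w = w0 * (1 - (r / a) ** 2) ** 2
            # C = integral of eps0 / (g - w(r)) over the circular plate area
            return np.trapz(2 * np.pi * r * eps0 / (g - w), r)

        for p in (0.0, 50e3, 100e3):        # applied pressure, Pa
            print(f"p = {p/1e3:5.1f} kPa -> C = {capacitance(p)*1e12:.3f} pF")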

  18. HST IMAGING OF DUST STRUCTURES AND STARS IN THE RAM PRESSURE STRIPPED VIRGO SPIRALS NGC 4402 AND NGC 4522: STRIPPED FROM THE OUTSIDE IN WITH DENSE CLOUD DECOUPLING

    International Nuclear Information System (INIS)

    Abramson, A.; Kenney, J.; Crowl, H.; Tal, T.

    2016-01-01

    We describe and constrain the origins of interstellar medium (ISM) structures likely created by ongoing intracluster medium (ICM) ram pressure stripping in two Virgo Cluster spirals, NGC 4522 and NGC 4402, using Hubble Space Telescope (HST) BVI images of dust extinction and stars, as well as supplementary H I, Hα, and radio continuum images. With a spatial resolution of ∼10 pc in the HST images, this is the highest-resolution study to date of the physical processes that occur during an ICM–ISM ram pressure stripping interaction, ram pressure stripping's effects on the multi-phase, multi-density ISM, and the formation and evolution of ram-pressure-stripped tails. In dust extinction, we view the leading side of NGC 4402 and the trailing side of NGC 4522, and so we see distinct types of features in both. In both galaxies, we identify some regions where dense clouds are decoupling or have decoupled and others where it appears that kiloparsec-sized sections of the ISM are moving coherently. NGC 4522 has experienced stronger, more recent pressure and has the “jellyfish” morphology characteristic of some ram-pressure-stripped galaxies. Its stripped tail extends up from the disk plane in continuous upturns of dust and stars curving up to ∼2 kpc above the disk plane. On the other side of the galaxy, there is a kinematically and morphologically distinct extraplanar arm of young, blue stars and ISM above a mostly stripped portion of the disk, and between it and the disk plane are decoupled dust clouds that have not been completely stripped. The leading side of NGC 4402 contains two kiloparsec-scale linear dust filaments with complex substructure that have partially decoupled from the surrounding ISM. NGC 4402 also contains long dust ridges, suggesting that large parts of the ISM are being pushed out at once. Both galaxies contain long ridges of polarized radio continuum emission indicating the presence of large-scale, ordered magnetic fields. We propose that

  19. Cloud services, networking, and management

    CERN Document Server

    da Fonseca, Nelson L S

    2015-01-01

    Cloud Services, Networking and Management provides a comprehensive overview of the cloud infrastructure and services, as well as their underlying management mechanisms, including data center virtualization and networking, cloud security and reliability, big data analytics, scientific and commercial applications. Special features of the book include: State-of-the-art content. Self-contained chapters for readers with specific interests. Includes commercial applications on Cloud (video services and games).

  20. Security Problems in Cloud Computing

    Directory of Open Access Journals (Sweden)

    Rola Motawie

    2016-12-01

    Cloud is a pool of computing resources which are distributed among cloud users. Cloud computing has many benefits, such as scalability, flexibility, cost savings, reliability, maintenance, and mobile accessibility. Since cloud-computing technology is growing day by day, it comes with many security problems. Securing the data in the cloud environment is one of the most critical challenges and acts as a barrier when implementing the cloud. The new concepts that the cloud introduces, such as resource sharing, multi-tenancy, and outsourcing, create new challenges for the security community. In this work, we provide a comparative study of cloud computing privacy and security concerns. We identify and classify known security threats, cloud vulnerabilities, and attacks.

  1. Pattern description and reliability parameters of six force-time related indices measured with plantar pressure measurements.

    Science.gov (United States)

    Deschamps, Kevin; Roosen, Philip; Bruyninckx, Herman; Desloovere, Kaat; Deleu, Paul-Andre; Matricali, Giovanni A; Peeraer, Louis; Staes, Filip

    2013-09-01

    Functional interpretation of plantar pressure measurements is commonly done through the use of ratios and indices which are preceded by the strategic combination of a subsampling method and selection of physical quantities. However, errors which may arise throughout the determination of these temporal indices/ratio calculations (T-IRC) have not been quantified. The purpose of the current study was therefore to estimate the reliability of T-IRC following semi-automatic total mapping (SATM). Using a repeated-measures design, two experienced therapists performed three subsampling sessions on three left and right pedobarographic footprints of ten healthy participants. Following the subsampling, six T-IRC were calculated: Rearfoot-Forefoot_fti, Rearfoot-Midfoot_fti, Forefoot medial/lateral_fti, First ray_fti, Metatarsal 1-Metatarsal 5_fti, Foot medial-lateral_fti. Patterns of the T-IRC were found to be consistent and in good agreement with corresponding knowledge from the literature. The inter-session errors of both therapists were similar in pattern and magnitude. The lowest peak inter-therapist error was found in the First ray_fti (6.5 a.u.) whereas the highest peak inter-therapist error was observed in the Forefoot medial/lateral_fti (27.0 a.u.) The magnitude of the inter-session and inter-therapist error varied over time, precluding the calculation of a simple numerical value for the error. The difference between both error parameters of all T-IRC was negligible which underscores the repeatability of the SATM protocol. The current study reports consistent patterns for six T-IRC and similar inter-session and inter-therapist error. The proposed SATM protocol and the T-IRC may therefore serve as basis for functional interpretation of footprint data. Copyright © 2013 Elsevier B.V. All rights reserved.
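
    A force-time related index of the kind listed above is built from force-time integrals (fti) of individual plantar regions after subsampling; the sketch below computes one such ratio from synthetic rearfoot and forefoot force curves. The curves and timings are made up, and this is not the SATM protocol itself.

        # Force-time integral (fti) ratio between two plantar regions (sketch).
        import numpy as np

        t = np.linspace(0.0, 0.7, 700)   # stance phase, s (assumed)
        rear = 600 * np.maximum(0.0, np.sin(np.pi * t / 0.35))           # N, loads early
        fore = 500 * np.maximum(0.0, np.sin(np.pi * (t - 0.25) / 0.45))  # N, loads late

        fti_rear = np.trapz(rear, t)     # force-time integral, N*s
        fti_fore = np.trapz(fore, t)
        print(f"Rearfoot-Forefoot fti ratio = {fti_rear / fti_fore:.2f}")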

  2. Reliability and Repetition Effect of the Center of Pressure and Kinematics Parameters That Characterize Trunk Postural Control During Unstable Sitting Test.

    Science.gov (United States)

    Barbado, David; Moreside, Janice; Vera-Garcia, Francisco J

    2017-03-01

    Although unstable seat methodology has been used to assess trunk postural control, the reliability of the variables that characterize it remains unclear. To analyze the reliability and learning effect of center of pressure (COP) and kinematic parameters that characterize trunk postural control performance in unstable sitting. The relationships between kinematic and COP parameters were also explored. Test-retest reliability design. Biomechanics laboratory setting. Twenty-three healthy male subjects. Participants volunteered to perform 3 sessions at 1-week intervals, each consisting of five 70-second balancing trials. A force platform and a motion capture system were used to measure COP and pelvis, thorax, and spine displacements. Reliability was assessed through standard error of measurement (SEM) and intraclass correlation coefficients (ICC 2,1) using 3 methods: (1) comparing the last trial score of each day; (2) comparing the best trial score of each day; and (3) calculating the average of the three last trial scores of each day. Standard deviation and mean velocity were calculated to assess balance performance. Although analyses of variance showed some differences in balance performance between days, these differences were not significant between days 2 and 3. The best-result and average methods showed the greatest reliability. Mean velocity of the COP showed high reliability (ICC above 0.71), kinematic parameters showed lower reliability (ICC above 0.37), and reliability improved using the average method (ICC above 0.62); overall, COP parameters showed higher reliability than kinematic ones. Specifically, mean velocity of the COP showed the highest test-retest reliability, especially for the average and best methods. Although correlations between COP and mean joint angular velocity were high, the few relationships between COP and kinematic standard deviation suggest that different postural behaviors can lead to a similar balance performance during an unstable sitting protocol. III. Copyright © 2017 American Academy of Physical Medicine and Rehabilitation. Published by Elsevier Inc. All rights reserved.
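
    The two balance-performance parameters named in this abstract can be computed directly from a sampled COP trajectory; in the sketch below a random walk stands in for real force-platform data, and the 100 Hz sampling rate is assumed.

        # Standard deviation and mean velocity of a COP trajectory (sketch).
        import numpy as np

        fs = 100.0                                   # sampling rate, Hz (assumed)
        rng = np.random.default_rng(0)
        cop = np.cumsum(rng.normal(0.0, 0.05, (7000, 2)), axis=0)  # 70 s of x,y in mm

        sd_x, sd_y = cop.std(axis=0)                 # dispersion per axis
        path = np.linalg.norm(np.diff(cop, axis=0), axis=1).sum()
        mean_velocity = path / (len(cop) / fs)       # path length / duration, mm/s

        print(f"SD = ({sd_x:.2f}, {sd_y:.2f}) mm; mean velocity = {mean_velocity:.2f} mm/s")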

  3. Reliable experimental setup to test the pressure modulation of Baerveldt Implant tubes for reducing post-operative hypotony

    Science.gov (United States)

    Ramani, Ajay

    future to evaluate custom inserts and their effects on the pressure drop over 4 -- 6 weeks. The design requirements were: simulate physiological conditions [flow rate between 1.25 and 2.5 μl/min], evaluate small inner diameter tubes [50 and 75 μm] and annuli, and demonstrate reliability and repeatability. The current study was focused on benchmarking the experimental setup for the IOP range of 15 -- 20 mm Hg. Repeated experiments have been conducted using distilled water with configurations [diameter of tube, insert diameter, lengths of insert and tube, and flow rate] that produce pressure variations which include the 15 -- 20 mm Hg range. Two similar setups were assembled and evaluated for repeatability between the two. Experimental measurements of pressure drop were validated using theoretical calculations. Theory predicted a range of expected values by considering manufacturing and performance tolerances of the apparatus components: tube diameter, insert diameter, and the flow-rate and pressure [controlled by pump]. In addition, preliminary experiments evaluated the dissolution of suture samples in a balanced salt solution and in distilled water. The balanced salt solution approximates the eye's aqueous humor properties, and it was expected that the salt and acid would help to hydrolyze sutures much faster than distilled water. Suture samples in a balanced salt solution showed signs of deterioration [flaking] within 23 days, and distilled water samples showed only slight signs of deterioration after about 30 days. These preliminary studies indicate that future dissolution and flow experiments should be conducted using the balanced salt solution. Also, the absorbable sutures showed signs of bulk erosion/deterioration in a balanced salt solution after 14 days, which indicates that they may not be suitable as inserts in the implant tubes because flakes could block the tube entrance. Further long-term studies should be performed in order to understand the effects of

  4. HUBBLE SPACE TELESCOPE AND HI IMAGING OF STRONG RAM PRESSURE STRIPPING IN THE COMA SPIRAL NGC 4921: DENSE CLOUD DECOUPLING AND EVIDENCE FOR MAGNETIC BINDING IN THE ISM

    Energy Technology Data Exchange (ETDEWEB)

    Kenney, Jeffrey D. P.; Abramson, Anne [Yale University Astronomy Department, P.O. Box 208101, New Haven, CT 06520-8101 (United States); Bravo-Alfaro, Hector, E-mail: jeff.kenney@yale.edu [Institut d’Astrophysique de Paris, CNRS/UPMC, 98bis, Boulevard Arago F-75014, Paris (France)

    2015-08-15

    Remarkable dust extinction features in the deep Hubble Space Telescope (HST) V and I images of the face-on Coma cluster spiral galaxy NGC 4921 show in unprecedented ways how ram pressure strips the ISM from the disk of a spiral galaxy. New VLA HI maps show a truncated and highly asymmetric HI disk with a compressed HI distribution in the NW, providing evidence for ram pressure acting from the NW. Where the HI distribution is truncated in the NW region, HST images show a well-defined, continuous front of dust that extends over 90° and 20 kpc. This dust front separates the dusty from dust-free regions of the galaxy, and we interpret it as galaxy ISM swept up near the leading side of the ICM–ISM interaction. We identify and characterize 100 pc–1 kpc scale substructure within this dust front caused by ram pressure, including head–tail filaments, C-shaped filaments, and long smooth dust fronts. The morphology of these features strongly suggests that dense gas clouds partially decouple from surrounding lower density gas during stripping, but decoupling is inhibited, possibly by magnetic fields that link and bind distant parts of the ISM.

  5. Is One Trial Sufficient to Obtain Excellent Pressure Pain Threshold Reliability in the Low Back of Asymptomatic Individuals? A Test-Retest Study.

    Science.gov (United States)

    Balaguier, Romain; Madeleine, Pascal; Vuillerme, Nicolas

    2016-01-01

    The assessment of pressure pain threshold (PPT) provides a quantitative value related to the mechanical sensitivity to pain of deep structures. Although excellent reliability of PPT has been reported for numerous anatomical locations, its absolute and relative reliability in the lower back region remains to be determined. Because of the high prevalence of low back pain in the general population, and because low back pain is one of the leading causes of disability in industrialized countries, assessing pressure pain thresholds over the low back is of particular interest. The purpose of this study was (1) to evaluate the intra- and inter-session absolute and relative reliability of PPT within 14 locations covering the low back region of asymptomatic individuals and (2) to determine the number of trials required to ensure reliable PPT measurements. Fifteen asymptomatic subjects were included in this study. PPTs were assessed at 14 anatomical locations in the low back region over two sessions separated by a one-hour interval. In each of the two sessions, three PPT assessments were performed at each location. Reliability was assessed by computing intraclass correlation coefficients (ICC), standard error of measurement (SEM) and minimum detectable change (MDC) for all possible combinations between trials and sessions. Bland-Altman plots were also generated to assess potential bias in the dataset. Relative reliability for both intra- and inter-session was almost perfect, with ICCs ranging from 0.85 to 0.99. With respect to the intra-session, no statistical difference was reported for ICCs and SEM regardless of the conducted comparisons between trials. Conversely, for inter-session, ICCs and SEM values were significantly larger when two consecutive PPT measurements were used for data analysis. No significant difference was observed for the comparison between two consecutive measurements and three measurements. Excellent relative and absolute reliabilities were reported for both intra

  6. Privacy Protection in Cloud Using Rsa Algorithm

    OpenAIRE

    Amandeep Kaur; Manpreet Kaur

    2014-01-01

    The cloud computing architecture has been in high demand nowadays. The cloud has succeeded over grid and distributed environments due to its cost and high reliability, along with high security. However, in the research area it is observed that cloud computing still has some security issues regarding privacy. Cloud brokers provide cloud services to the general public and ensure that data is protected; however, they sometimes lack security and privacy. Thus in this work...

  7. Trusted cloud computing

    CERN Document Server

    Krcmar, Helmut; Rumpe, Bernhard

    2014-01-01

    This book documents the scientific results of the projects related to the Trusted Cloud Program, covering fundamental aspects of trust, security, and quality of service for cloud-based services and applications. These results aim to allow trustworthy IT applications in the cloud by providing a reliable and secure technical and legal framework. In this domain, business models, legislative circumstances, technical possibilities, and realizable security are closely interwoven and thus are addressed jointly. The book is organized in four parts on "Security and Privacy", "Software Engineering and

  8. Reliability, standard error, and minimum detectable change of clinical pressure pain threshold testing in people with and without acute neck pain.

    Science.gov (United States)

    Walton, David M; Macdermid, Joy C; Nielson, Warren; Teasell, Robert W; Chiasson, Marco; Brown, Lauren

    2011-09-01

    Clinical measurement. To evaluate the intrarater, interrater, and test-retest reliability of an accessible digital algometer, and to determine the minimum detectable change in normal healthy individuals and a clinical population with neck pain. Pressure pain threshold testing may be a valuable assessment and prognostic indicator for people with neck pain. To date, most of this research has been completed using algometers that are too resource intensive for routine clinical use. Novice raters (physiotherapy students or clinical physiotherapists) were trained to perform algometry testing over 2 clinically relevant sites: the angle of the upper trapezius and the belly of the tibialis anterior. A convenience sample of normal healthy individuals and a clinical sample of people with neck pain were tested by 2 different raters (all participants) and on 2 different days (healthy participants only). Intraclass correlation coefficient (ICC), standard error of measurement, and minimum detectable change were calculated. A total of 60 healthy volunteers and 40 people with neck pain were recruited. Intrarater reliability was almost perfect (ICC = 0.94-0.97), interrater reliability was substantial to near perfect (ICC = 0.79-0.90), and test-retest reliability was substantial (ICC = 0.76-0.79). Smaller change was detectable in the trapezius compared to the tibialis anterior. This study provides evidence that novice raters can perform digital algometry with adequate reliability for research and clinical use in people with and without neck pain.

  9. Human error probability evaluation as part of reliability analysis of digital protection system of advanced pressurized water reactor - APR 1400

    International Nuclear Information System (INIS)

    Varde, P. V.; Lee, D. Y.; Han, J. B.

    2003-03-01

    A case study on human reliability analysis has been performed as part of the reliability analysis of the digital protection system of the reactor, which automatically actuates the shutdown system of the reactor when demanded. However, the safety analysis takes credit for operator action as a diverse means of tripping the reactor in the ATWS scenario, which, though of low probability, must be considered. Based on the available information, two cases, viz., human error in tripping the reactor and calibration error for instrumentation in the protection system, have been analyzed. Wherever applicable, a parametric study has also been performed

  10. The photoevaporation of interstellar clouds

    International Nuclear Information System (INIS)

    Bertoldi, F.

    1989-01-01

    The dynamics of the photoevaporation of interstellar clouds and its consequences for the structure and evolution of H II regions are studied. An approximate analytical solution for the evolution of photoevaporating clouds is derived under the realistic assumption of axisymmetry. The effects of magnetic fields are taken into account in an approximate way. The evolution of a neutral cloud subjected to the ionizing radiation of an OB star has two distinct stages. When a cloud is first exposed to the radiation, the increase in pressure due to the ionization at the surface of the cloud leads to a radiation-driven implosion: an ionization front drives a shock into the cloud, ionizes part of it and compresses the remainder into a dense globule. The initial implosion is followed by an equilibrium cometary stage, in which the cloud maintains a semistationary comet-shaped configuration; it slowly evaporates while accelerating away from the ionizing star until the cloud has been completely ionized, reaches the edge of the H II region, or dies. Expressions are derived for the cloud mass-loss rate and acceleration. To investigate the effect of cloud photoevaporation on the structure of H II regions, the evolution of an ensemble of clouds with a given mass distribution is studied. It is shown that the compressive effect of the ionizing radiation can induce star formation in clouds that were initially gravitationally stable, both for thermally and for magnetically supported clouds

  11. Cloud Computing and Security Issues

    OpenAIRE

    Rohan Jathanna; Dhanamma Jagli

    2017-01-01

    Cloud computing has become one of the most interesting topics in the IT world today. The cloud model of computing as a resource has changed the landscape of computing, as its promises of greater reliability, massive scalability, and decreased costs have attracted businesses and individuals alike. It adds capabilities to information technology. Over the last few years, cloud computing has grown considerably in information technology. As more and more information of individuals and compan...

  12. Studi Perbandingan Layanan Cloud Computing

    OpenAIRE

    Afdhal, Afdhal

    2013-01-01

    In the past few years, cloud computing has become a dominant topic in the IT area. Cloud computing offers hardware, infrastructure, platforms and applications without requiring end-users' knowledge of the physical location and configuration of the providers who deliver the services. It has been a good solution for increasing reliability, reducing computing cost, and creating opportunities for IT industries to gain more advantages. The purpose of this article is to present a better understanding of cloud d...

  13. A reliable measure of footwear upper comfort enabled by an innovative sock equipped with textile pressure sensors.

    Science.gov (United States)

    Herbaut, Alexis; Simoneau-Buessinger, Emilie; Barbier, Franck; Cannard, Francis; Guéguen, Nils

    2016-10-01

    Footwear comfort is essential, and pressure distribution on the foot has been shown to be a relevant objective measurement for assessing it. However, asperities on the sides of the foot, especially at the metatarsals and the instep, make its evaluation difficult with available equipment. Thus, a sock equipped with textile pressure sensors was designed. Results from mechanical tests showed a high linearity of the sensor response under incremental loadings and allowed the regression equation converting voltage values into pressure measurements to be determined. The sensor response was also highly repeatable, and creep under constant loading was low. Pressure measurements on human feet, combined with a perception questionnaire, showed significant relationships between pressure and perceived comfort on the first, third and fifth metatarsals and the top of the instep. Practitioner Summary: A sock equipped with textile sensors was validated for measuring pressure on the top, medial and lateral sides of the foot to evaluate footwear comfort. This device may be relevant for helping individuals with low sensitivity, such as children, the elderly, or people with neuropathy, to choose the shoes that fit best.

  14. Development, validity and reliability of a new pressure air biofeedback device (PAB) for measuring isometric extension strength of the lumbar spine.

    Science.gov (United States)

    Pienaar, Andries W; Barnard, Justhinus G

    2017-04-01

    This study describes the development of a new portable muscle testing device, using air pressure as a biofeedback and strength testing tool. For this purpose, a pressure air biofeedback device (PAB®) was developed to measure and record the isometric extension strength of the lumbar multifidus muscle in asymptomatic and low back pain (LBP) persons. A total of 42 subjects (mean age 47.58 ± 18.58 years) participated in this study. The validity of PAB® was assessed by comparing a selected measure, air pressure force in millibar (mb), to a standard criterion, calibrated weights in kilograms (kg), during day-to-day tests. Furthermore, clinical trial-to-trial and day-to-day tests of maximum voluntary isometric contraction (MVIC) of the L5 lumbar multifidus were done to compare air pressure force (mb) to electromyography (EMG) in microvolts (μV) and to measure the reliability of PAB®. A highly significant relationship was found between air pressure output (mb) and calibrated weights (kg). In addition, Pearson correlation calculations showed a significant relationship between PAB® force (mb) and EMG activity (μV) for all subjects (n = 42) examined, as well as for the asymptomatic group (n = 24). No relationship was detected for the LBP group (n = 18). In terms of lumbar extension strength, asymptomatic subjects were significantly stronger than LBP subjects. The results of the PAB® test differentiated between LBP and asymptomatic subjects' lumbar isometric extension strength without any risk to the subjects, and also indicate that the lumbar isometric extension test with the new PAB® device is reliable and valid.

  15. Study of Vapour Cloud Explosion Impact from Pressure Changes in the Liquefied Petroleum Gas Sphere Tank Storage Leakage

    Science.gov (United States)

    Rashid, Z. A.; Suhaimi Yeong, A. F. Mohd; Alias, A. B.; Ahmad, M. A.; AbdulBari Ali, S.

    2018-05-01

    This research was carried out to determine the risk impact of Liquefied Petroleum Gas (LPG) storage facilities, especially in the event of an LPG tank explosion. In order to prevent an LPG tank explosion from occurring, it is important to determine the most suitable operating condition for the LPG tank itself, as the explosion of an LPG tank could cause extensive damage to the surroundings. The explosion of an LPG tank usually occurs due to a rise of pressure in the tank. Thus, in this research, a method called Planas-Cuchi was applied to determine the Peak Side-On Overpressure (Po) of the LPG tank during an explosion. Thermodynamic properties of saturated propane (C3H8) were chosen as a reference and basis of calculation to determine parameters such as the Explosion Energy (E), Equivalent Mass of TNT (WTNT), and Scaled Overpressure (Ps). A cylindrical LPG tank at the Feyzin Refinery, France, was selected as a case study, and at the end of this research the most suitable operating pressure of the LPG tank was determined.
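
    (The record names the quantities but not the formulas. A hedged sketch of the standard TNT-equivalence step once an explosion energy E has been estimated, e.g. via the Planas-Cuchi method; the TNT blast energy, the energy-partition factor beta, and the example numbers are assumptions, not the paper's values.)

        E_TNT = 4.68e6  # blast energy of TNT, J/kg (commonly cited value)

        def tnt_equivalent_mass(e_joules: float, beta: float = 0.4) -> float:
            # beta: assumed fraction of explosion energy driving the blast wave
            return beta * e_joules / E_TNT

        def scaled_distance(r_m: float, w_tnt_kg: float) -> float:
            # Hopkinson-Cranz scaling Z = R / W^(1/3); Z is used to read the
            # scaled side-on overpressure Ps off published blast charts
            return r_m / w_tnt_kg ** (1.0 / 3.0)

        E = 5.0e10  # assumed explosion energy from the thermodynamic analysis, J
        W = tnt_equivalent_mass(E)
        print(f"W_TNT = {W:.0f} kg, Z(100 m) = {scaled_distance(100.0, W):.2f} m/kg^(1/3)")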

  16. A new method for improving the reliability of fracture toughness surveillance of nuclear pressure vessel by neutron irradiated embrittlement

    International Nuclear Information System (INIS)

    Zhang Xinping; Shi Yaowu

    1992-01-01

    In order to obtain more information from neutron-irradiated sample specimens and raise the reliability of fracture toughness surveillance tests, it is of particular significance to re-use the broken Charpy-size specimens that have already been tested in surveillance programs. In this work, by redesigning and reusing Charpy-size specimens, 9 fracture toughness data points can be gained from one pre-cracked, side-grooved Charpy-size specimen, whereas at present only 1 to 3 fracture toughness data points can usually be obtained from one Charpy-size specimen. It is thus found that the new method can markedly improve the reliability of fracture toughness surveillance testing and evaluation. Some factors affecting the reasonable design of the pre-cracked, deep-side-grooved Charpy-size compound specimen are also discussed

  17. GEWEX cloud assessment: A review

    Science.gov (United States)

    Stubenrauch, Claudia; Rossow, William B.; Kinne, Stefan; Ackerman, Steve; Cesana, Gregory; Chepfer, Hélène; Di Girolamo, Larry; Getzewich, Brian; Guignard, Anthony; Heidinger, Andy; Maddux, Brent; Menzel, Paul; Minnis, Patrick; Pearl, Cindy; Platnick, Steven; Poulsen, Caroline; Riedi, Jérôme; Sayer, Andrew; Sun-Mack, Sunny; Walther, Andi; Winker, Dave; Zeng, Shen; Zhao, Guangyu

    2013-05-01

    Clouds cover about 70% of the Earth's surface and play a dominant role in the energy and water cycle of our planet. Only satellite observations provide a continuous survey of the state of the atmosphere over the entire globe and across the wide range of spatial and temporal scales that comprise weather and climate variability. Satellite cloud data records now exceed more than 25 years; however, climatologies compiled from different satellite datasets can exhibit systematic biases. Questions therefore arise as to the accuracy and limitations of the various sensors. The Global Energy and Water cycle Experiment (GEWEX) Cloud Assessment, initiated in 2005 by the GEWEX Radiation Panel, provides the first coordinated intercomparison of publicly available, global cloud products (gridded, monthly statistics) retrieved from measurements of multi-spectral imagers (some with multi-angle view and polarization capabilities), IR sounders and lidar. Cloud properties under study include cloud amount, cloud height (in terms of pressure, temperature or altitude), cloud radiative properties (optical depth or emissivity), cloud thermodynamic phase and bulk microphysical properties (effective particle size and water path). Differences in average cloud properties, especially in the amount of high-level clouds, are mostly explained by the inherent instrument measurement capability for detecting and/or identifying optically thin cirrus, especially when overlying low-level clouds. The study of long-term variations with these datasets requires consideration of many factors. The monthly, gridded database presented here facilitates further assessments, climate studies, and the evaluation of climate models.

  18. Structure and characteristics of diffuse interstellar clouds

    International Nuclear Information System (INIS)

    Arshutkin, L.N.; Kolesnik, I.G.

    1978-01-01

    Results of model calculations for spherically symmetric interstellar clouds under external pressure are given. The thermal balance of gas clouds is considered; to this end, the ultraviolet radiation fields inside clouds and the equilibrium abundances of chemical elements are calculated. The calculations address the case in which cooling proceeds mainly through carbon atoms and ions. Clouds with masses up to 700 M☉ under external pressures from 800 to 3000 K cm⁻³ are considered. Under conditions typical of the Galactic disk, clouds settle into a dense (n ≳ 200 cm⁻³) and cold (T ≈ 20-30 K) state depending on the external pressure. The critical mass for clouds in the Galactic disk is approximately 500-600 M☉, smaller than in the isothermal solution by a factor of approximately 1.5. The problem of massive gas-dust cloud formation is discussed

  19. RECENT THREATS TO CLOUD COMPUTING DATA AND ITS PREVENTION MEASURES

    OpenAIRE

    Rahul Neware*

    2017-01-01

    As cloud computing is expanding day by day due to its benefits, such as cost, speed, global scale, productivity, performance, and reliability, everyone, including business vendors and governments, is using the cloud to grow fast. Although cloud computing has the above-mentioned and other benefits, the security of the cloud remains a problem, and because of this security problem the adoption of cloud computing is not growing as it could. This paper gives information about recent threats to cloud computing data and its p...

  20. High-Pressure Spark Plasma Sintering (HP SPS): A Promising and Reliable Method for Preparing Ti-Al-Si Alloys.

    Science.gov (United States)

    Knaislová, Anna; Novák, Pavel; Cygan, Sławomir; Jaworska, Lucyna; Cabibbo, Marcello

    2017-04-27

    Ti-Al-Si alloys are prospective materials for high-temperature applications. Due to their low density, good mechanical properties, and oxidation resistance, these intermetallic alloys can be used in the aerospace and automobile industries. Ti-Al-Si alloys were prepared by powder metallurgy using reactive sintering, milling, and spark plasma sintering. One of the novel SPS techniques is high-pressure spark plasma sintering (HP SPS), which was tested in this work and applied to a Ti-10Al-20Si intermetallic alloy using a pressure of 6 GPa and temperatures ranging from 1318 K (1045 °C) to 1597 K (1324 °C). The low-porosity consolidated samples consist of Ti₅Si₃ silicides in an aluminide (TiAl) matrix. The hardness varied between 720 and 892 HV 5.

  1. Advanced cloud fault tolerance system

    Science.gov (United States)

    Sumangali, K.; Benny, Niketa

    2017-11-01

    Cloud computing has become a prevalent on-demand service on the internet to store, manage and process data. A pitfall that accompanies cloud computing is the failures that can be encountered in the cloud. To overcome these failures, we require a fault tolerance mechanism to abstract faults from users. We have proposed a fault tolerant architecture, which is a combination of proactive and reactive fault tolerance. This architecture essentially increases the reliability and the availability of the cloud. In the future, we would like to compare evaluations of our proposed architecture with existing architectures and further improve it.
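
    (The record does not detail the proposed architecture; a toy Python sketch of how a proactive health check can be combined with reactive retries, purely illustrative and not the authors' design.)

        import random

        def healthy(node: str) -> bool:
            # Proactive side: periodic health check (stubbed with randomness)
            return random.random() > 0.2

        def run_with_fault_tolerance(task, nodes, retries=3):
            # Proactive: skip nodes failing the health check.
            # Reactive: retry on another node if execution still fails.
            candidates = [n for n in nodes if healthy(n)]
            for attempt, node in enumerate(candidates[:retries], start=1):
                try:
                    return task(node)
                except RuntimeError:
                    print(f"attempt {attempt} on {node} failed, retrying")
            raise RuntimeError("all replicas failed")

        def task(node: str) -> str:
            if random.random() < 0.3:
                raise RuntimeError("node crash")
            return f"done on {node}"

        print(run_with_fault_tolerance(task, ["vm-a", "vm-b", "vm-c"]))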

  2. Cloud Governance

    DEFF Research Database (Denmark)

    Berthing, Hans Henrik

    This presentation describes the benefits and value of using cloud computing. It also draws on results from a number of international ISACA analyses of cloud computing.

  3. Sustainable Availability Provision in Distributed Cloud Services

    OpenAIRE

    Yanovskaya, O; Yanovsky, M; Kharchenko, V; Kor, A; Pattinson, C

    2017-01-01

    This article is an extension of an earlier paper [1]. It describes methods for dealing with reliability and fault tolerance issues in cloud-based datacenters. These methods mainly focus on eliminating single points of failure within any component of the cloud infrastructure, on the availability of the infrastructure, and on the accessibility of cloud services. Methods for providing the availability of hardware, software and network components are also presented. The analysis of the actual accessibility of cloud...

  4. A Stochastic Reliability Model for Application in a Multidisciplinary Optimization of a Low Pressure Turbine Blade Made of Titanium Aluminide

    OpenAIRE

    Dresbach, Christian; Becker, Thomas; Reh, Stefan; Wischek, Janine; Zur, Sascha; Buske, Clemens; Schmidt, Thomas; Tiefers, Ruediger

    2016-01-01

    Currently, there are many research activities dealing with gamma titanium aluminide (γ-TiAl) alloys as new materials for low pressure turbine (LPT) blades. Even though the scatter in mechanical properties of such intermetallic alloys is more pronounced than in conventional metallic alloys, stochastic investigations of γ-TiAl alloys are very rare. For this reason, we analyzed the scatter in the static and dynamic mechanical properties of the cast alloy Ti-48Al-2Cr-2Nb. It wa...

  5. Validity and reliability of a novel slow cuff-deflation system for noninvasive blood pressure monitoring in patients with continuous-flow left ventricular assist device.

    Science.gov (United States)

    Lanier, Gregg M; Orlanes, Khristine; Hayashi, Yacki; Murphy, Jennifer; Flannery, Margaret; Te-Frey, Rosie; Uriel, Nir; Yuzefpolskaya, Melana; Mancini, Donna M; Naka, Yoshifumi; Takayama, Hiroo; Jorde, Ulrich P; Demmer, Ryan T; Colombo, Paolo C

    2013-09-01

    Doppler ultrasound is the clinical gold standard for noninvasive blood pressure (BP) measurement among continuous-flow left ventricular assist device patients. The relationship of Doppler BP to systolic BP (SBP) and mean arterial pressure (MAP) is uncertain and Doppler measurements require a clinic visit. We studied the relationship between Doppler BP and both arterial-line (A-line) SBP and MAP. Validity and reliability of the Terumo Elemano BP Monitor, a novel slow cuff-deflation device that could potentially be used by patients at home, were assessed. Doppler and Terumo BP measurements were made in triplicate among 60 axial continuous-flow left ventricular assist device (HeartMate II) patients (30 inpatients and 30 outpatients) at 2 separate exams (360 possible measurements). A-line measures were also obtained among inpatients. Mean absolute differences (MADs) and correlations were used to determine within-device reliability (comparison of second and third BP measures) and between-device validity. Bland-Altman plots assessed BP agreement between A-line, Doppler BP, and Terumo Elemano. Success rates for Doppler and Terumo Elemano were 100% and 91%. Terumo Elemano MAD for repeat SBP and MAP were 4.6±0.6 and 4.2±0.6 mm Hg; repeat Doppler BP MAD was 2.9±0.2 mm Hg. Mean Doppler BP was lower than A-line SBP by 4.1 (MAD=6.4±1.4) mm Hg and higher than MAP by 9.5 (MAD=11.0±1.2) mm Hg; Terumo Elemano underestimated A-line SBP by 0.3 (MAD=5.6±0.9) mm Hg and MAP by 1.7 (MAD=6.0±1.0) mm Hg. Doppler BP more closely approximates SBP than MAP. Terumo Elemano was successful, reliable, and valid when compared with A-line and Doppler.
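
    (The agreement statistics used in this record are standard; a minimal Python sketch of the mean absolute difference and Bland-Altman limits of agreement, with made-up paired readings rather than the study's data.)

        import numpy as np

        def mad(a: np.ndarray, b: np.ndarray) -> float:
            # Mean absolute difference between paired measurements
            return float(np.mean(np.abs(a - b)))

        def bland_altman(a: np.ndarray, b: np.ndarray):
            # Bias and 95% limits of agreement between two methods
            diff = a - b
            bias = float(diff.mean())
            loa = 1.96 * float(diff.std(ddof=1))
            return bias, bias - loa, bias + loa

        # Illustrative paired readings (mm Hg), not the study's data
        doppler = np.array([98.0, 104.0, 91.0, 110.0, 100.0])
        a_line = np.array([102.0, 107.0, 95.0, 113.0, 104.0])
        print("MAD =", round(mad(doppler, a_line), 1))
        print("bias, lower LoA, upper LoA =", bland_altman(doppler, a_line))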

  6. Reliability engineering

    International Nuclear Information System (INIS)

    Lee, Chi Woo; Kim, Sun Jin; Lee, Seung Woo; Jeong, Sang Yeong

    1993-08-01

    This book starts with the question of what reliability is, covering the origin of reliability problems and the definition and use of reliability. It also deals with probability and the calculation of reliability, the reliability function and failure rate, probability distributions used in reliability, estimation of MTBF, stochastic processes, down time, maintainability and availability, breakdown maintenance and preventive maintenance, design for reliability, reliability prediction and statistics, reliability testing, reliability data, and the design and management of reliability.

  7. Studi Perbandingan Layanan Cloud Computing

    Directory of Open Access Journals (Sweden)

    Afdhal Afdhal

    2014-03-01

    In the past few years, cloud computing has become a dominant topic in the IT area. Cloud computing offers hardware, infrastructure, platforms and applications without requiring end-users' knowledge of the physical location and configuration of the providers who deliver the services. It has been a good solution for increasing reliability, reducing computing cost, and creating opportunities for IT industries to gain more advantages. The purpose of this article is to present a better understanding of cloud delivery services, their correlation and inter-dependency. This article compares and contrasts the different levels of delivery services and the deployment models, identifies issues, and outlines future directions for cloud computing. End-users' comprehension of the cloud computing delivery service classification will equip them with the knowledge to determine and decide which business model to choose and adopt securely and comfortably. The last part of this article provides several recommendations for cloud computing service providers and end-users.

  8. CO observations of southern high-latitude clouds

    International Nuclear Information System (INIS)

    Keto, E.R.; Myers, P.C.

    1986-01-01

    Results from a survey of 2.6 mm emission in the J = 1-0 transition of CO are reported for 15 high Galactic latitude clouds and three clouds located on the fringe of a large molecular cloud in the Chameleon dark cloud complex. The line widths, excitation temperatures, sizes, and n(CO)/N(H2) ratios of these clouds are similar to those seen in dark clouds. The densities, extinctions, and masses of the high-latitude clouds are one order of magnitude smaller than those found in dark clouds. For its size and velocity dispersion, the typical cloud has a mass at least 10 times smaller than that needed to bind the cloud by self-gravity alone. External pressures are needed to maintain the typical cloud in equilibrium, and the required values are consistent with several estimates of the intercloud pressure. 32 references

  9. Green Cloud Computing: A Literature Survey

    Directory of Open Access Journals (Sweden)

    Laura-Diana Radu

    2017-11-01

    Cloud computing is a dynamic field of information and communication technologies (ICTs), introducing new challenges for environmental protection. Cloud computing technologies have a variety of application domains, since they offer scalability, are reliable and trustworthy, and offer high performance at relatively low cost. The cloud computing revolution is redesigning modern networking and offering promising environmental protection prospects as well as economic and technological advantages. These technologies have the potential to improve energy efficiency and to reduce carbon footprints and (e-)waste. These features can transform cloud computing into green cloud computing. In this survey, we review the main achievements of green cloud computing. First, an overview of cloud computing is given. Then, recent studies and developments are summarized, and environmental issues are specifically addressed. Finally, future research directions and open problems regarding green cloud computing are presented. This survey is intended to serve as up-to-date guidance for research with respect to green cloud computing.

  10. Assuring reliability of unconventional weld joint configurations in austenitic stainless steel pressure vessels through non-destructive examination

    International Nuclear Information System (INIS)

    Jayakumar, I.; Manimohan, M.; Chandrasekaran, G.V.; Abdul Majeeth, S.; Subrahmanyam, P.S.

    1996-01-01

    The design of weld configurations in engineering structures is based on NDE inspectability, apart from other considerations, and configurations are mostly standardised. This paper deals with the development of an effective NDE methodology for an unconventional weld joint configuration occurring in a critical pressure vessel, with edge preparation orientations different from those normally encountered in the fabrication of such vessels. It is a K-type butt joint between a heavy load-bearing member and a curved vessel wall, resulting in an oblique fillet weld. The heavy load-bearing functional requirement demands a high-integrity, fail-safe joint during its operating life, and the stringent quality level specified by the customer was ensured at every stage of workmanship through effective NDE relying on conventional methods, as explained. (author)

  11. Reliability study of the auxiliary feed-water system of a pressurized water reactor by faults tree and Bayesian Network

    International Nuclear Information System (INIS)

    Lava, Deise Diana; Borges, Diogo da Silva; Guimarães, Antonio Cesar Ferreira; Moreira, Maria de Lourdes

    2017-01-01

    This paper aims to present a study of the reliability of the Auxiliary Feed-water System (AFWS) using the methods of Fault Tree and Bayesian Network. The paper includes a literature review of the history of nuclear energy and of the methodologies used. The AFWS is responsible for providing water to cool the secondary circuit of PWR-type nuclear reactors when the normal feed-water system fails. As this system operates only when the primary system fails, the AFWS failure probability is expected to be very low. The AFWS failure probability is divided into two cases: the first is the probability of failure in the first eight hours of operation, and the second is the probability of failure after eight hours of operation, given that the system has not failed within the first eight hours. The failure probability for the second case was calculated using a Fault Tree and a Bayesian Network constructed from the Fault Tree. The failure probabilities obtained were very close, on the order of 10⁻³. (author)

  12. Reliability study of the auxiliary feed-water system of a pressurized water reactor by faults tree and Bayesian Network

    Energy Technology Data Exchange (ETDEWEB)

    Lava, Deise Diana; Borges, Diogo da Silva; Guimarães, Antonio Cesar Ferreira; Moreira, Maria de Lourdes, E-mail: deise_dy@hotmail.com, E-mail: diogosb@outlook.com, E-mail: tony@ien.gov.br [Instituto de Engenharia Nuclear (IEN/CNEN-RJ), Rio de Janeiro, RJ (Brazil)

    2017-07-01

    This paper aims to present a study of the reliability of the Auxiliary Feed-water System (AFWS) using the methods of Fault Tree and Bayesian Network. The paper includes a literature review of the history of nuclear energy and of the methodologies used. The AFWS is responsible for providing water to cool the secondary circuit of PWR-type nuclear reactors when the normal feed-water system fails. As this system operates only when the primary system fails, the AFWS failure probability is expected to be very low. The AFWS failure probability is divided into two cases: the first is the probability of failure in the first eight hours of operation, and the second is the probability of failure after eight hours of operation, given that the system has not failed within the first eight hours. The failure probability for the second case was calculated using a Fault Tree and a Bayesian Network constructed from the Fault Tree. The failure probabilities obtained were very close, on the order of 10⁻³. (author)
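
    (For readers unfamiliar with the fault-tree arithmetic behind such results, a minimal Python sketch; the gates assume independent basic events, and the probabilities are illustrative, not the AFWS study's data.)

        from functools import reduce

        def or_gate(probs):
            # OR gate fails if any input event occurs (independence assumed)
            return 1.0 - reduce(lambda acc, p: acc * (1.0 - p), probs, 1.0)

        def and_gate(probs):
            # AND gate fails only if all (redundant) input events occur
            return reduce(lambda acc, p: acc * p, probs, 1.0)

        # Illustrative basic-event probabilities, not the study's values:
        pump_train = or_gate([1e-3, 5e-4])               # pump fails OR valve fails
        top_event = and_gate([pump_train, pump_train])   # both redundant trains fail
        print(f"top event probability = {top_event:.2e}")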

  13. Experimental aspects of buoyancy correction in measuring reliable high-pressure excess adsorption isotherms using the gravimetric method

    Science.gov (United States)

    Nguyen, Huong Giang T.; Horn, Jarod C.; Thommes, Matthias; van Zee, Roger D.; Espinal, Laura

    2017-12-01

    Addressing reproducibility issues in adsorption measurements is critical to accelerating the path to discovery of new industrial adsorbents and to understanding adsorption processes. A National Institute of Standards and Technology Reference Material, RM 8852 (ammonium ZSM-5 zeolite), and two gravimetric instruments with asymmetric two-beam balances were used to measure high-pressure adsorption isotherms. This work demonstrates how common approaches to buoyancy correction, a key factor in obtaining the mass change due to surface excess gas uptake from the apparent mass change, can impact the adsorption isotherm data. Three different approaches to buoyancy correction were investigated and applied to the subcritical CO2 and supercritical N2 adsorption isotherms at 293 K. It was observed that measuring a collective volume for all balance components for the buoyancy correction (helium method) introduces an inherent bias in temperature partition when there is a temperature gradient (i.e. analysis temperature is not equal to instrument air bath temperature). We demonstrate that a blank subtraction is effective in mitigating the biases associated with temperature partitioning, instrument calibration, and the determined volumes of the balance components. In general, the manual and subtraction methods allow for better treatment of the temperature gradient during buoyancy correction. From the study, best practices specific to asymmetric two-beam balances and more general recommendations for measuring isotherms far from critical temperatures using gravimetric instruments are offered.
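
    (A minimal sketch of the buoyancy-corrected surface excess described above, assuming the displaced volume has been obtained from helium pycnometry or a blank run as the record discusses; all numbers are illustrative.)

        def excess_uptake(delta_m_apparent_g: float,
                          rho_gas_g_cm3: float,
                          v_displaced_cm3: float) -> float:
            # Surface-excess mass = apparent balance reading + buoyancy term,
            # where v_displaced is the volume of sample, holder and balance
            # components exposed to the gas at the relevant temperatures
            return delta_m_apparent_g + rho_gas_g_cm3 * v_displaced_cm3

        # Illustrative numbers only: CO2 at 293 K and moderate pressure
        print(excess_uptake(delta_m_apparent_g=0.0150,
                            rho_gas_g_cm3=0.0018,
                            v_displaced_cm3=2.5))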

  14. Reliability in engineering '87

    International Nuclear Information System (INIS)

    Tuma, M.

    1987-01-01

    The participants heard 51 papers dealing with the reliability of engineering products. Two of the papers were incorporated in INIS, namely ''Reliability comparison of two designs of low pressure regeneration of the 1000 MW unit at the Temelin nuclear power plant'' and ''Use of probability analysis of reliability in designing nuclear power facilities.''(J.B.)

  15. OMI/Aura Effective Cloud Pressure and Fraction (Raman Scattering) Daily L2 Global 0.25 deg Lat/Lon Grid V003

    Data.gov (United States)

    National Aeronautics and Space Administration — The reprocessed Version-3 Aura OMI Level-2G Cloud data product OMCLDRRG has been made available (in April 2012) to the public from the NASA Goddard Earth Sciences...

  16. OMI/Aura Cloud Pressure and Fraction (O2-O2 Absorption) 1-Orbit L2 Swath 13x24km V003

    Data.gov (United States)

    National Aeronautics and Space Administration — The reprocessed OMI/Aura Level-2 cloud data product OMCLDO2 is now available from the NASA GoddardEarth Sciences Data and Information Services Center (GES DISC) for...

  17. OMI/Aura Effective Cloud Pressure and Fraction (Raman Scattering) 1-Orbit L2 Swath 13x24 km V003 (OMCLDRR) at GES DISC

    Data.gov (United States)

    National Aeronautics and Space Administration — The reprocessed Aura Ozone Monitoring Instrument (OMI) Version 003 Level 2 Cloud Data Product OMCLDRR is available to the public from the NASA Goddard Earth Sciences...

  18. OMI/Aura Cloud Pressure and Fraction (O2-O2 Absorption) 1-Orbit L2 Swath 13x24km V003 (OMCLDO2) at GES DISC

    Data.gov (United States)

    National Aeronautics and Space Administration — The reprocessed OMI/Aura Level-2 cloud data product OMCLDO2 is now available from the NASA GoddardEarth Sciences Data and Information Services Center (GES DISC) for...

  19. Overall Key Performance Indicator to Optimizing Operation of High-Pressure Homogenizers for a Reliable Quantification of Intracellular Components in Pichia pastoris.

    Science.gov (United States)

    Garcia-Ortega, Xavier; Reyes, Cecilia; Montesinos, José Luis; Valero, Francisco

    2015-01-01

    The most commonly used cell disruption procedures may suffer from a lack of reproducibility, which introduces significant errors into the quantification of intracellular components. In this work, an approach consisting of the definition of an overall key performance indicator (KPI) was implemented for a lab-scale high-pressure homogenizer (HPH) in order to determine the disruption settings that allow reliable quantification of a wide range of intracellular components. This innovative KPI was based on the combination of three independent reporting indicators: decrease of absorbance, release of total protein, and release of alkaline phosphatase activity. The yeast Pichia pastoris growing on methanol was selected as the model microorganism because it presents an important thickening of the cell wall, requiring more severe methods and operating conditions than Escherichia coli and Saccharomyces cerevisiae. Based on the outcome of the reporting indicators, the cell disruption efficiency achieved using HPH was about fourfold higher than that of other lab-standard cell disruption methodologies, such as bead milling or cell permeabilization. This approach was also applied to a pilot-plant-scale HPH, validating the methodology in a scale-up of the disruption process. This innovative, non-complex approach developed to evaluate the efficacy of a disruption procedure or piece of equipment can easily be applied to optimize the most common disruption processes, in order to achieve not only reliable quantification but also recovery of intracellular components from cell factories of interest.
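
    (The record does not state how the three reporting indicators are merged into the overall KPI; one plausible reading is a normalize-then-average scheme, sketched below with assumed bounds and equal weights.)

        def normalize(value: float, worst: float, best: float) -> float:
            # Map an indicator onto [0, 1] relative to assumed worst/best bounds
            return max(0.0, min(1.0, (value - worst) / (best - worst)))

        def overall_kpi(abs_decrease, protein_release, alp_release, bounds):
            # Equal-weight mean of the three normalized reporting indicators;
            # the weighting and bounds are assumptions, not the paper's scheme.
            scores = [
                normalize(abs_decrease, *bounds["abs"]),
                normalize(protein_release, *bounds["protein"]),
                normalize(alp_release, *bounds["alp"]),
            ]
            return sum(scores) / len(scores)

        bounds = {"abs": (0.0, 0.8), "protein": (0.0, 50.0), "alp": (0.0, 10.0)}
        print(overall_kpi(0.6, 38.0, 7.5, bounds))  # illustrative values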

  20. Concurrent validity and reliability of using ground reaction force and center of pressure parameters in the determination of leg movement initiation during single leg lift.

    Science.gov (United States)

    Aldabe, Daniela; de Castro, Marcelo Peduzzi; Milosavljevic, Stephan; Bussey, Melanie Dawn

    2016-09-01

    Evaluating postural adjustments during single leg lift requires identification of the initiation of heel lift (T1). Measuring T1 by means of a motion analysis system is the most reliable approach. However, this method involves considerable workspace, expensive cameras, and substantial time for processing data and setting up the laboratory. Using ground reaction force (GRF) and centre of pressure (COP) data is an alternative method, as its data processing and set-up are less time-consuming. Further, kinetic data are normally collected at sample frequencies of 1000 Hz or higher, whereas kinematic data are commonly captured at 50-200 Hz. This study describes the concurrent validity and reliability of GRF and COP measurements in determining T1, using a motion analysis system as the reference standard. Kinematic and kinetic data during single leg lift were collected from ten participants. GRF and COP data were collected using one and two force plates. The displacement of a single heel marker was captured by means of ten Vicon© cameras. Kinetic and kinematic data were collected at a sample frequency of 1000 Hz. Data were analysed in two stages: identifying key events in the kinetic data, and assessing the concurrent validity of T1 based on the chosen key events against T1 provided by the kinematic data. The key event presenting the least systematic bias, along with a narrow 95% CI and limits of agreement against the reference-standard T1, was the Baseline COPy event. The Baseline COPy event was obtained using one force plate and presented excellent between-tester reliability.
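
    (The record does not publish its event-detection code; a common way to define a kinetic onset such as the Baseline COPy event is a threshold of k standard deviations beyond a quiet-stance baseline, sketched here with assumed parameters and synthetic data.)

        import numpy as np

        def onset_from_cop(copy_mm: np.ndarray, fs_hz: int = 1000,
                           baseline_s: float = 0.5, k: float = 3.0) -> float:
            # Baseline window taken from quiet stance before the lift
            n0 = int(baseline_s * fs_hz)
            base_mean = copy_mm[:n0].mean()
            base_sd = copy_mm[:n0].std(ddof=1)
            # First sample deviating more than k baseline SDs from the mean
            idx = np.argmax(np.abs(copy_mm - base_mean) > k * base_sd)
            return idx / fs_hz  # onset time in seconds

        # Synthetic trace: 1 s of quiet stance, then a ramp in COPy
        t = np.arange(0, 2.0, 1.0 / 1000)
        trace = np.random.normal(0.0, 0.3, t.size)
        trace[t > 1.0] += 20.0 * (t[t > 1.0] - 1.0)
        print(f"estimated T1 = {onset_from_cop(trace):.3f} s")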

  1. Reliability of Center of Pressure Parameters in Postural Sway among Athlete and Non-athlete Men in Different Levels of Fatigue and Vision

    Directory of Open Access Journals (Sweden)

    Zohreh Meshkati

    2010-10-01

    Objective: This study aimed to investigate skill-, fatigue- and vision-related differences between athletes and non-athletes in the reliability of center of pressure (COP) measures derived from a force platform. Materials & Methods: Thirty-one healthy male participants (15 athletes and 16 non-athletes) were tested on a force platform in two sessions with a 48-72 hr interval. COP parameters were recorded during two-legged quiet standing before and after a generalized fatiguing exercise on a treadmill, with eyes open (EO) and eyes closed (EC). The standard deviation (SD) of amplitude, the SD of velocity in the anteroposterior (AP) and mediolateral directions, and the mean total velocity were calculated from 30 s of COP data. Results: Higher intraclass correlation coefficients (ICC) were found for COP measures in the athlete group compared with the non-athlete group. ICCs increased in post-fatigue compared with pre-fatigue conditions. Higher ICCs were also found for EC compared with EO tests. Coefficients of variation smaller than 15% were obtained for most of the COP measures. An alpha level of 0.05 was used for all statistical analyses. Across levels of skill, fatigue and vision, mean total velocity (P=0.001) and SD of velocity (AP) (P=0.001) were the most reliable parameters. Conclusion: The results aid researchers in selecting reliable COP measures for future studies of postural control in sports; researchers can use the mean total velocity and SD of velocity (AP) parameters in studies of athletes under similar conditions.

  2. Reliability and feasibility of gait initiation centre-of-pressure excursions using a Wii® Balance Board in older adults at risk of falling.

    Science.gov (United States)

    Lee, James; Webb, Graham; Shortland, Adam P; Edwards, Rebecca; Wilce, Charlotte; Jones, Gareth D

    2018-04-17

    Impairments in dynamic balance have a detrimental effect on older adults at risk of falls (OARF). Gait initiation (GI) is a challenging transitional movement. Centre of pressure (COP) excursions measured with force plates have been used to quantify GI performance. The Nintendo Wii Balance Board (WBB) offers an alternative to a standard force plate for the measurement of COP excursion. The aims were to determine the reliability of COP excursions measured with the WBB, and its feasibility within a 4-week strength and balance intervention (SBI) treating OARF. Ten OARF subjects attending SBI and ten young healthy adults each performed three GI trials after 10 s of quiet stance from a standardised foot position (shoulder width) before walking forward 3 m to pick up an object. Averaged COP mediolateral (ML) and anteroposterior (AP) excursions (distance) and path-length time (GI onset to first toe-off) were analysed. WBB ML (0.866) and AP (0.895) COP excursion reliability (ICC(3,1)) was excellent, and COP path-length reliability was fair (0.517). Compared to OARF, healthy subjects presented larger COP excursions in both directions and shorter COP path lengths. OARF subjects meaningfully improved their timed-up-and-go and ML COP excursion between weeks 1-4, while AP COP excursions, path length, and confidence in balance remained stable. COP path length and excursion directions probably measure different attributes of GI postural control. Limitations in WBB accuracy and precision in transition tasks need to be established before it can viably be used clinically to measure postural aspects of GI. The WBB could provide valuable clinical evaluation of balance function in OARF.

  3. Cloud Computing

    CERN Document Server

    Antonopoulos, Nick

    2010-01-01

    Cloud computing has recently emerged as a subject of substantial industrial and academic interest, though its meaning and scope are hotly debated. For some researchers, clouds are a natural evolution towards the full commercialisation of grid systems, while others dismiss the term as a mere re-branding of existing pay-per-use technologies. From either perspective, 'cloud' is now the label of choice for accountable pay-per-use access to third party applications and computational resources on a massive scale. Clouds support patterns of less predictable resource use for applications and services a

  4. The Ethics of Cloud Computing.

    Science.gov (United States)

    de Bruin, Boudewijn; Floridi, Luciano

    2017-02-01

    Cloud computing is rapidly gaining traction in business. It offers businesses online services on demand (such as Gmail, iCloud and Salesforce) and allows them to cut costs on hardware and IT support. This is the first paper in business ethics dealing with this new technology. It analyzes the informational duties of hosting companies that own and operate cloud computing datacentres (e.g., Amazon). It considers the cloud services providers leasing 'space in the cloud' from hosting companies (e.g., Dropbox, Salesforce). And it examines the business and private 'clouders' using these services. The first part of the paper argues that hosting companies, services providers and clouders have mutual informational (epistemic) obligations to provide and seek information about relevant issues such as consumer privacy, reliability of services, data mining and data ownership. The concept of interlucency is developed as an epistemic virtue governing ethically effective communication. The second part considers potential forms of government restrictions on or proscriptions against the development and use of cloud computing technology. Referring to the concept of technology neutrality, it argues that interference with hosting companies and cloud services providers is hardly ever necessary or justified. It is argued, too, however, that businesses using cloud services (e.g., banks, law firms, hospitals etc. storing client data in the cloud) will have to follow rather more stringent regulations.

  5. Cloud Cover

    Science.gov (United States)

    Schaffhauser, Dian

    2012-01-01

    This article features a major statewide initiative in North Carolina that is showing how a consortium model can minimize risks for districts and help them exploit the advantages of cloud computing. Edgecombe County Public Schools in Tarboro, North Carolina, intends to exploit a major cloud initiative being refined in the state and involving every…

  6. Cloud Control

    Science.gov (United States)

    Ramaswami, Rama; Raths, David; Schaffhauser, Dian; Skelly, Jennifer

    2011-01-01

    For many IT shops, the cloud offers an opportunity not only to improve operations but also to align themselves more closely with their schools' strategic goals. The cloud is not a plug-and-play proposition, however--it is a complex, evolving landscape that demands one's full attention. Security, privacy, contracts, and contingency planning are all…

  7. Performance Evaluation of Cloud Service Considering Fault Recovery

    Science.gov (United States)

    Yang, Bo; Tan, Feng; Dai, Yuan-Shun; Guo, Suchang

    In cloud computing, cloud service performance is an important issue. To improve cloud service reliability, fault recovery may be used. However, the use of fault recovery can itself affect the performance of a cloud service. In this paper, we conduct a preliminary study of this issue. Cloud service performance is quantified by the service response time, whose probability density function, as well as its mean, is derived.
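
    (The paper's derivation is not reproduced in this record; a toy renewal model makes the trade-off between recovery and response time concrete, with all parameters assumed.)

        def mean_response_time(t_service: float, p_fail: float, t_recover: float) -> float:
            # Toy renewal model (not the paper's derivation): with probability
            # p_fail the service fails, pays a recovery delay, and restarts, so
            #   E[T] = t_service + p_fail * (t_recover + E[T])
            #        = (t_service + p_fail * t_recover) / (1 - p_fail)
            assert 0.0 <= p_fail < 1.0
            return (t_service + p_fail * t_recover) / (1.0 - p_fail)

        # Illustrative: 200 ms service time, 5% failure rate, 1 s recovery
        print(f"{mean_response_time(0.200, 0.05, 1.0):.3f} s")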

  8. Factors Influencing Organization Adoption Decision On Cloud Computing

    OpenAIRE

    Ailar Rahimli

    2013-01-01

    Cloud computing is a developing field, used by organizations that require computing resources to meet their organizational computing needs. The goal of this research is to evaluate the factors that influence organizations' decisions to adopt cloud computing in Malaysia. Factors related to cloud computing adoption include: the need for cloud computing, cost-effectiveness, the security effectiveness of cloud computing, and reliability. This paper evaluated the factors that influence the ado...

  9. Security in Service Level Agreements for Cloud Computing

    OpenAIRE

    Bernsmed, Karin; JAATUN, Martin Gilje; Undheim, Astrid

    2011-01-01

    The Cloud computing paradigm promises reliable services, accessible from anywhere in the world, in an on-demand manner. Insufficient security has been identified as a major obstacle to adopting Cloud services. To deal with the risks associated with outsourcing data and applications to the Cloud, new methods for security assurance are urgently needed. This paper presents a framework for security in Service Level Agreements for Cloud computing. The purpose is twofold: to help potential Cloud cu...

  10. Upper tropospheric cloud systems determined from IR Sounders and their influence on the atmosphere

    Science.gov (United States)

    Stubenrauch, Claudia; Protopapadaki, Sofia; Feofilov, Artem; Velasco, Carola Barrientos

    2017-02-01

    Covering about 30% of the Earth, upper tropospheric clouds play a key role in the climate system by modulating the Earth's energy budget and heat transport. Infrared sounders reliably identify cirrus down to an IR optical depth of 0.1. Recently LMD has built global cloud climate data records from AIRS and IASI observations, covering the periods 2003-2015 and 2008-2015, respectively. Upper tropospheric clouds often form mesoscale systems. Their organization and properties are being studied by (1) distinguishing cloud regimes within 2° × 2° regions and (2) applying a spatial composite technique to adjacent cloud pressures, which estimates the horizontal extent of the mesoscale cloud systems. The convective core, cirrus anvil and thin cirrus parts of these systems are then distinguished by their emissivity. Compared to other studies of tropical mesoscale convective systems, our data also include the thinner anvil parts, which make up about 30% of the area of tropical mesoscale convective systems. Once the horizontal and vertical structure of these upper tropospheric cloud systems is known, we can estimate their radiative effects in terms of top-of-atmosphere and surface radiative fluxes and by computing their heating rates.

  11. Context-aware distributed cloud computing using CloudScheduler

    Science.gov (United States)

    Seuster, R.; Leavett-Brown, CR; Casteels, K.; Driemel, C.; Paterson, M.; Ring, D.; Sobie, RJ; Taylor, RP; Weldon, J.

    2017-10-01

    The distributed cloud using the CloudScheduler VM provisioning service is one of the longest running systems for HEP workloads. It has run millions of jobs for ATLAS and Belle II over the past few years using private and commercial clouds around the world. Our goal is to scale the distributed cloud to the 10,000-core level, with the ability to run any type of application (low I/O, high I/O and high memory) on any cloud. To achieve this goal, we have been implementing changes that utilize context-aware computing designs that are currently employed in the mobile communication industry. Context-awareness makes use of real-time and archived data to respond to user or system requirements. In our distributed cloud, we have many opportunistic clouds with no local HEP services, software or storage repositories. A context-aware design significantly improves the reliability and performance of our system by locating the nearest location of the required services. We describe how we are collecting and managing contextual information from our workload management systems, the clouds, the virtual machines and our services. This information is used not only to monitor the system but also to carry out automated corrective actions. We are incrementally adding new alerting and response services to our distributed cloud. This will enable us to scale the number of clouds and virtual machines. Further, a context-aware design will enable us to run analysis or high I/O application on opportunistic clouds. We envisage an open-source HTTP data federation (for example, the DynaFed system at CERN) as a service that would provide us access to existing storage elements used by the HEP experiments.
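
    (CloudScheduler's actual logic is not shown in this record; a hypothetical Python sketch of a context-aware choice, filtering on live capacity and service availability and preferring the lowest-latency site. All fields and the policy are illustrative assumptions.)

        from dataclasses import dataclass

        @dataclass
        class Cloud:
            name: str
            rtt_ms: float        # measured network latency to required services
            free_cores: int
            has_io_proxy: bool   # e.g. access to an HTTP data federation

        def pick_cloud(clouds, cores_needed: int, high_io: bool):
            # Context-aware choice: filter on current capacity and service
            # availability, then prefer the lowest-latency site
            ok = [c for c in clouds
                  if c.free_cores >= cores_needed and (c.has_io_proxy or not high_io)]
            return min(ok, key=lambda c: c.rtt_ms) if ok else None

        clouds = [Cloud("cern-openstack", 12.0, 500, True),
                  Cloud("aws-us-east", 95.0, 10000, True),
                  Cloud("opportunistic-site", 40.0, 200, False)]
        print(pick_cloud(clouds, cores_needed=300, high_io=True).name)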

  12. Cloud Computing Fundamentals

    Science.gov (United States)

    Furht, Borko

    In the introductory chapter we define the concept of cloud computing and cloud services, and we introduce layers and types of cloud computing. We discuss the differences between cloud computing and cloud services. New technologies that enabled cloud computing are presented next. We also discuss cloud computing features, standards, and security issues. We introduce the key cloud computing platforms, their vendors, and their offerings. We discuss cloud computing challenges and the future of cloud computing.

  13. Cloud Computing

    CERN Document Server

    Baun, Christian; Nimis, Jens; Tai, Stefan

    2011-01-01

    Cloud computing is a buzz-word in today's information technology (IT) that nobody can escape. But what is really behind it? There are many interpretations of this term, but no standardized or even uniform definition. Instead, as a result of the multi-faceted viewpoints and the diverse interests expressed by the various stakeholders, cloud computing is perceived as a rather fuzzy concept. With this book, the authors deliver an overview of cloud computing architecture, services, and applications. Their aim is to bring readers up to date on this technology and thus to provide a common basis for d

  14. A MAS-Based Cloud Service Brokering System to Respond Security Needs of Cloud Customers

    Directory of Open Access Journals (Sweden)

    Jamal Talbi

    2017-03-01

    Cloud computing is becoming a key factor in computer science and an important technology for many organizations to deliver different types of services. The companies which provide services to customers are called cloud service providers. Cloud users (CUs) are increasing in number and require secure, reliable and trustworthy cloud service providers (CSPs) from the market, so it is a challenge for a new customer to choose a highly secure provider. This paper presents a cloud service brokering system that analyzes and ranks the most secure cloud service provider among the available providers. This model uses autonomous and flexible agents in a multi-agent system (MAS), which have intelligent behavior and suitable tools for helping the brokering system assess the security risks of the group of cloud providers, decide on the most secure provider, and justify the business needs of users in terms of security and reliability.

  15. OMI/Aura Cloud Pressure and Fraction (O2-O2 Absorption) Zoomed 1-Orbit L2 Swath 13x12km V003

    Data.gov (United States)

    National Aeronautics and Space Administration — The reprocessed OMI/Aura Level-2 zoomed cloud data product OMCLDO2Z at 13x12 km resolution is now available ( http://disc.gsfc.nasa.gov/Aura/OMI/omcldo2z_v003.shtml...

  16. OMI/Aura Cloud Pressure and Fraction (O2-O2 Absorption) Daily L2 Global 0.25 deg Lat/Lon Grid V003

    Data.gov (United States)

    National Aeronautics and Space Administration — The reprocessed OMI/Aura Level-2G cloud data product OMCLDO2G, is now available (http://disc.gsfc.nasa.gov/Aura/OMI/omcldo2g_v003.shtml) from the NASA Goddard Earth...

  17. Reliability Engineering

    International Nuclear Information System (INIS)

    Lee, Sang Yong

    1992-07-01

    This book is about reliability engineering. It describes the definition and importance of reliability and the development of reliability engineering; the failure rate and the failure probability density function and their types; CFR and the exponential distribution; IFR and the normal and Weibull distributions; maintainability and availability; reliability testing and reliability estimation for the exponential, normal and Weibull distribution types; reliability sampling tests; system reliability; design for reliability; and functional failure analysis by FTA.

  18. A point of application study to determine the accuracy, precision and reliability of a low-cost balance plate for center of pressure measurement.

    Science.gov (United States)

    Goble, Daniel J; Khan, Ehran; Baweja, Harsimran S; O'Connor, Shawn M

    2018-04-11

    Changes in postural sway measured via force plate center of pressure (COP) have been associated with many aspects of human motor ability. A previous study validated the accuracy and precision of a relatively new, low-cost and portable force plate called the Balance Tracking System (BTrackS) by comparing it against a laboratory-grade force plate during human-like dynamic sway conditions generated by an inverted pendulum device. The present study sought to extend previous validation attempts for BTrackS using a more traditional point of application (POA) approach. Computer numerical control (CNC) guided application of ~155 N of force was applied five times to each of 21 points on five different BTrackS Balance Plate (BBP) devices with a hex-nose plunger. Results showed excellent agreement (ICC > 0.999) between the POAs and the COP measured by the BBP devices, as well as high accuracy and almost perfect inter-device reliability (ICC > 0.999). Taken together, these results provide an important static corollary to the previously obtained dynamic COP results from inverted-pendulum testing of the BBP.

  19. Cloud Computing

    DEFF Research Database (Denmark)

    Krogh, Simon

    2013-01-01

    with technological changes, the paradigmatic pendulum has swung between increased centralization on one side and a focus on distributed computing that pushes IT power out to end users on the other. With the introduction of outsourcing and cloud computing, centralization in large data centers is again dominating...... the IT scene. In line with the views presented by Nicolas Carr in 2003 (Carr, 2003), it is a popular assumption that cloud computing will be the next utility (like water, electricity and gas) (Buyya, Yeo, Venugopal, Broberg, & Brandic, 2009). However, this assumption disregards the fact that most IT production......), for instance, in establishing and maintaining trust between the involved parties (Sabherwal, 1999). So far, research in cloud computing has neglected this perspective and focused entirely on aspects relating to technology, economy, security and legal questions. While the core technologies of cloud computing (e...

  20. Mobile Clouds

    DEFF Research Database (Denmark)

    Fitzek, Frank; Katz, Marcos

    A mobile cloud is a cooperative arrangement of dynamically connected communication nodes sharing opportunistic resources. In this book, authors provide a comprehensive and motivating overview of this rapidly emerging technology. The book explores how distributed resources can be shared by mobile...... users in very different ways and for various purposes. The book provides many stimulating examples of resource-sharing applications. Enabling technologies for mobile clouds are also discussed, highlighting the key role of network coding. Mobile clouds have the potential to enhance communications...... performance, improve utilization of resources and create flexible platforms to share resources in very novel ways. Energy efficient aspects of mobile clouds are discussed in detail, showing how being cooperative can bring mobile users significant energy saving. The book presents and discusses multiple...

  1. Self-Awareness of Cloud Applications

    NARCIS (Netherlands)

    Iosup, Alexandru; Zhu, Xiaoyun; Merchant, Arif; Kalyvianaki, Eva; Maggio, Martina; Spinner, Simon; Abdelzaher, Tarek; Mengshoel, Ole; Bouchenak, Sara

    2016-01-01

    Cloud applications today deliver an increasingly larger portion of the Information and Communication Technology (ICT) services. To address the scale, growth, and reliability of cloud applications, self-aware management and scheduling are becoming commonplace. How are they used in practice? In this

  2. Service quality of cloud-based applications

    CERN Document Server

    Bauer, Eric

    2014-01-01

    This book explains why applications running on cloud might not deliver the same service reliability, availability, latency and overall quality to end users as they do when the applications are running on traditional (non-virtualized, non-cloud) configurations, and explains what can be done to mitigate that risk.

  3. MEASURING THE FRACTAL STRUCTURE OF INTERSTELLAR CLOUDS

    NARCIS (Netherlands)

    VOGELAAR, MGR; WAKKER, BP; SCHWARZ, UJ

    1991-01-01

    To study the structure of interstellar clouds we used the so-called perimeter-area relation to estimate fractal dimensions. We studied the reliability of the method by applying it to artificial fractals and discussed some of the problems and pitfalls. Results for two different cloud types

  4. Characterization of the pressure wave originating in the explosion of an extended heavy gas cloud: critical analysis of the treatment of its propagation in air and interaction with obstacles

    International Nuclear Information System (INIS)

    Essers, J.A.

    1983-01-01

    The protection of nuclear power plants against external explosions of heavy gas clouds is a relevant topic of nuclear safety studies. The ultimate goal of such studies is to provide realistic inputs for the prediction of structure loadings and transient response. To obtain those inputs, relatively complex computer codes have been constructed to describe the propagation in air of strong perturbations due to unconfined gas cloud explosions. A detailed critical analysis of those codes is presented. In particular, the relative errors on wave speed, induced flow velocity, as well as on reflected wave speed and overpressure, respectively due to the use of a simplified non-linear isentropic approximation and of linear acoustic models, are estimated as functions of the overpressure of the incident pulse. The ability of the various models to accurately predict the time and distance required for sharp pressure front formation is discussed. Simple computer codes using implicit finite-difference discretizations are proposed to compare the results obtained with the various models for spherical wave propagation. Those codes are also useful to study the reflection of the waves on an outer spherical flexible wall and to investigate the effect of the elasticity and damping coefficients of the wall on the characteristics of the reflected pressure pulse

  5. CLOUD COMPUTING SECURITY ISSUES

    Directory of Open Access Journals (Sweden)

    Florin OGIGAU-NEAMTIU

    2012-01-01

    Full Text Available The term “cloud computing” has been in the spotlights of IT specialists the last years because of its potential to transform this industry. The promised benefits have determined companies to invest great sums of money in researching and developing this domain and great steps have been made towards implementing this technology. Managers have traditionally viewed IT as difficult and expensive and the promise of cloud computing leads many to think that IT will now be easy and cheap. The reality is that cloud computing has simplified some technical aspects of building computer systems, but the myriad challenges facing IT environment still remain. Organizations which consider adopting cloud based services must also understand the many major problems of information policy, including issues of privacy, security, reliability, access, and regulation. The goal of this article is to identify the main security issues and to draw the attention of both decision makers and users to the potential risks of moving data into “the cloud”.

  6. Soft Clouding

    DEFF Research Database (Denmark)

    Søndergaard, Morten; Markussen, Thomas; Wetton, Barnabas

    2012-01-01

    Soft Clouding is a blended concept, which describes the aim of a collaborative and transdisciplinary project. The concept is a metaphor implying a blend of cognitive, embodied interaction and semantic web. Furthermore, it is a metaphor describing our attempt of curating a new semantics of sound...... archiving. The Soft Clouding Project is part of LARM - a major infrastructure combining research in and access to sound and radio archives in Denmark. In 2012 the LARM infrastructure will consist of more than 1 million hours of radio, combined with metadata who describes the content. The idea is to analyse...... the concept of ‘infrastructure’ and ‘interface’ on a creative play with the fundamentals of LARM (and any sound archive situation combining many kinds and layers of data and sources). This paper will present and discuss the Soft clouding project from the perspective of the three practices and competencies...

  7. Global Software Development with Cloud Platforms

    Science.gov (United States)

    Yara, Pavan; Ramachandran, Ramaseshan; Balasubramanian, Gayathri; Muthuswamy, Karthik; Chandrasekar, Divya

    Offshore and outsourced distributed software development models and processes are facing challenges, previously unknown, with respect to computing capacity, bandwidth, storage, security, complexity, reliability, and business uncertainty. Clouds promise to address these challenges by adopting recent advances in virtualization, parallel and distributed systems, utility computing, and software services. In this paper, we envision a cloud-based platform that addresses some of these core problems. We outline a generic cloud architecture, its design and our first implementation results for three cloud forms - a compute cloud, a storage cloud and a cloud-based software service- in the context of global distributed software development (GSD). Our ”compute cloud” provides computational services such as continuous code integration and a compile server farm, ”storage cloud” offers storage (block or file-based) services with an on-line virtual storage service, whereas the on-line virtual labs represent a useful cloud service. We note some of the use cases for clouds in GSD, the lessons learned with our prototypes and identify challenges that must be conquered before realizing the full business benefits. We believe that in the future, software practitioners will focus more on these cloud computing platforms and see clouds as a means to supporting a ecosystem of clients, developers and other key stakeholders.

  8. Overview of MPLNET Version 3 Cloud Detection

    Science.gov (United States)

    Lewis, Jasper R.; Campbell, James; Welton, Ellsworth J.; Stewart, Sebastian A.; Haftings, Phillip

    2016-01-01

    The National Aeronautics and Space Administration Micro Pulse Lidar Network, version 3, cloud detection algorithm is described and differences relative to the previous version are highlighted. Clouds are identified from normalized level 1 signal profiles using two complementary methods. The first method considers vertical signal derivatives for detecting low-level clouds. The second method, which detects high-level clouds like cirrus, is based on signal uncertainties necessitated by the relatively low signal-to-noise ratio exhibited in the upper troposphere by eye-safe network instruments, especially during daytime. Furthermore, a multitemporal averaging scheme is used to improve cloud detection under conditions of a weak signal-to-noise ratio. Diurnal and seasonal cycles of cloud occurrence frequency based on one year of measurements at the Goddard Space Flight Center (Greenbelt, Maryland) site are compared for the new and previous versions. The largest differences, and perceived improvement, in detection occurs for high clouds (above 5 km, above MSL), which increase in occurrence by over 5%. There is also an increase in the detection of multilayered cloud profiles from 9% to 19%. Macrophysical properties and estimates of cloud optical depth are presented for a transparent cirrus dataset. However, the limit to which the cirrus cloud optical depth could be reliably estimated occurs between 0.5 and 0.8. A comparison using collocated CALIPSO measurements at the Goddard Space Flight Center and Singapore Micro Pulse Lidar Network (MPLNET) sites indicates improvements in cloud occurrence frequencies and layer heights.

  9. Implementing and developing cloud computing applications

    CERN Document Server

    Sarna, David E Y

    2010-01-01

    From small start-ups to major corporations, companies of all sizes have embraced cloud computing for the scalability, reliability, and cost benefits it can provide. It has even been said that cloud computing may have a greater effect on our lives than the PC and dot-com revolutions combined.Filled with comparative charts and decision trees, Implementing and Developing Cloud Computing Applications explains exactly what it takes to build robust and highly scalable cloud computing applications in any organization. Covering the major commercial offerings available, it provides authoritative guidan

  10. Cloud Chamber

    DEFF Research Database (Denmark)

    Gfader, Verina

    Cloud Chamber takes its roots in a performance project, titled The Guests 做东, devised by Verina Gfader for the 11th Shanghai Biennale, ‘Why Not Ask Again: Arguments, Counter-arguments, and Stories’. Departing from the inclusion of the biennale audience to write a future folk tale, Cloud Chamber......: fiction and translation and translation through time; post literacy; world picturing-world typing; and cartographic entanglements and expressions of subjectivity; through the lens a social imaginary of worlding or cosmological quest. Art at its core? Contributions by Nikos Papastergiadis, Rebecca Carson...

  11. Molecular clouds in Orion and Monoceros

    International Nuclear Information System (INIS)

    Maddalena, R.J.

    1986-01-01

    About one-eighth of a well-sampled 850 deg 2 region of Orion and Monoceros, extending from the Taurus dark cloud complex to the CMa OB 1 association, shows emission at the frequency of the J = 1 → 0 transition of CO coming from either local clouds (d 8 from the galactic plane or from more distant objects located within a few degrees of the plane and well outside the solar circle. Local giant molecular clouds associated with Orion A and B have enhanced temperatures and densities near their western edges possibly due to compression of molecular gas by a high pressure region created by the cumulative effects of ∼10 supernovae that occurred in the Orion OB association. Another giant molecular cloud found to be associated with Mon R2 may be related to the Orion clouds. Two filamentary clouds (one possible 200 pc long but only 3-10 pc wide) were found that may represent a new class of object; magnetic fields probably play a role in confining these filaments. An expanding ring of clouds concentric with the H II region S 264 and its ionizing 08 star λ Ori was also investigated, and a possible evolutionary sequence for the ring is given in detail: the clouds probably constitute fragments of the original cloud from which λ Ori formed, the gas pressure of the H II region and the rocket effect having disrupted the cloud and accelerated the fragments to their present velocities

  12. Telemedicine Based on Mobile Devices and Mobile Cloud Computing

    OpenAIRE

    Lidong Wang; Cheryl Ann Alexander

    2014-01-01

    Mobile devices such as smartphones and tablets support kinds of mobile computing and services. They can access to the cloud or offload the computation-intensive part to the cloud computing resources. Mobile cloud computing (MCC) integrates the cloud computing into the mobile environment, which extends mobile devices’ battery lifetime, improves their data storage capacity and processing power, and improves their reliability and information security. In this paper, the applications of smartphon...

  13. Cloud computing.

    Science.gov (United States)

    Wink, Diane M

    2012-01-01

    In this bimonthly series, the author examines how nurse educators can use Internet and Web-based technologies such as search, communication, and collaborative writing tools; social networking and social bookmarking sites; virtual worlds; and Web-based teaching and learning programs. This article describes how cloud computing can be used in nursing education.

  14. Cloud Computing

    Indian Academy of Sciences (India)

    IAS Admin

    2014-03-01

    Mar 1, 2014 ... There are several types of services available on a cloud. We describe .... CPU speed has been doubling every 18 months at constant cost. Besides this ... Plain text (e.g., email) may be read by anyone who is able to access it.

  15. Atmospheric cloud physics laboratory project study

    Science.gov (United States)

    Schultz, W. E.; Stephen, L. A.; Usher, L. H.

    1976-01-01

    Engineering studies were performed for the Zero-G Cloud Physics Experiment liquid cooling and air pressure control systems. A total of four concepts for the liquid cooling system was evaluated, two of which were found to closely approach the systems requirements. Thermal insulation requirements, system hardware, and control sensor locations were established. The reservoir sizes and initial temperatures were defined as well as system power requirements. In the study of the pressure control system, fluid analyses by the Atmospheric Cloud Physics Laboratory were performed to determine flow characteristics of various orifice sizes, vacuum pump adequacy, and control systems performance. System parameters predicted in these analyses as a function of time include the following for various orifice sizes: (1) chamber and vacuum pump mass flow rates, (2) the number of valve openings or closures, (3) the maximum cloud chamber pressure deviation from the allowable, and (4) cloud chamber and accumulator pressure.

  16. Cloud detection for MIPAS using singular vector decomposition

    Directory of Open Access Journals (Sweden)

    J. Hurley

    2009-09-01

    Full Text Available Satellite-borne high-spectral-resolution limb sounders, such as the Michelson Interferometer for Passive Atmospheric Sounding (MIPAS onboard ENVISAT, provide information on clouds, especially optically thin clouds, which have been difficult to observe in the past. The aim of this work is to develop, implement and test a reliable cloud detection method for infrared spectra measured by MIPAS.

    Current MIPAS cloud detection methods used operationally have been developed to detect cloud effective filling more than 30% of the measurement field-of-view (FOV, under geometric and optical considerations – and hence are limited to detecting fairly thick cloud, or large physical extents of thin cloud. In order to resolve thin clouds, a new detection method using Singular Vector Decomposition (SVD is formulated and tested. This new SVD detection method has been applied to a year's worth of MIPAS data, and qualitatively appears to be more sensitive to thin cloud than the current operational method.

  17. PEDAGOGICAL ASPECTS OF CLOUD COMPUTING

    Directory of Open Access Journals (Sweden)

    N. Morze

    2011-05-01

    Full Text Available Recent progress in computer science in the field of redundancy and protection has led to the sharing of data in many different repositories. Modern infrastructure has made cloud computing safe and reliable, and advancement of such computations radically changes the understanding of the use of resources and services. The materials in this article are connected with the definition of pedagogical possibilities of using cloud computing to provide education on the basis of competence-based approach and monitoring of learners (students.

  18. Cloud management and security

    CERN Document Server

    Abbadi, Imad M

    2014-01-01

    Written by an expert with over 15 years' experience in the field, this book establishes the foundations of Cloud computing, building an in-depth and diverse understanding of the technologies behind Cloud computing. In this book, the author begins with an introduction to Cloud computing, presenting fundamental concepts such as analyzing Cloud definitions, Cloud evolution, Cloud services, Cloud deployment types and highlighting the main challenges. Following on from the introduction, the book is divided into three parts: Cloud management, Cloud security, and practical examples. Part one presents the main components constituting the Cloud and federated Cloud infrastructure(e.g., interactions and deployment), discusses management platforms (resources and services), identifies and analyzes the main properties of the Cloud infrastructure, and presents Cloud automated management services: virtual and application resource management services. Part two analyzes the problem of establishing trustworthy Cloud, discuss...

  19. Comparison of Monthly Mean Cloud Fraction and Cloud Optical depth Determined from Surface Cloud Radar, TOVS, AVHRR, and MODIS over Barrow, Alaska

    Science.gov (United States)

    Uttal, Taneil; Frisch, Shelby; Wang, Xuan-Ji; Key, Jeff; Schweiger, Axel; Sun-Mack, Sunny; Minnis, Patrick

    2005-01-01

    A one year comparison is made of mean monthly values of cloud fraction and cloud optical depth over Barrow, Alaska (71 deg 19.378 min North, 156 deg 36.934 min West) between 35 GHz radar-based retrievals, the TOVS Pathfinder Path-P product, the AVHRR APP-X product, and a MODIS based cloud retrieval product from the CERES-Team. The data sets represent largely disparate spatial and temporal scales, however, in this paper, the focus is to provide a preliminary analysis of how the mean monthly values derived from these different data sets compare, and determine how they can best be used separately, and in combination to provide reliable estimates of long-term trends of changing cloud properties. The radar and satellite data sets described here incorporate Arctic specific modifications that account for cloud detection challenges specific to the Arctic environment. The year 2000 was chosen for this initial comparison because the cloud radar data was particularly continuous and reliable that year, and all of the satellite retrievals of interest were also available for the year 2000. Cloud fraction was chosen as a comparison variable as accurate detection of cloud is the primary product that is necessary for any other cloud property retrievals. Cloud optical depth was additionally selected as it is likely the single cloud property that is most closely correlated to cloud influences on surface radiation budgets.

  20. Cloud time

    CERN Document Server

    Lockwood, Dean

    2012-01-01

    The ‘Cloud’, hailed as a new digital commons, a utopia of collaborative expression and constant connection, actually constitutes a strategy of vitalist post-hegemonic power, which moves to dominate immanently and intensively, organizing our affective political involvements, instituting new modes of enclosure, and, crucially, colonizing the future through a new temporality of control. The virtual is often claimed as a realm of invention through which capitalism might be cracked, but it is precisely here that power now thrives. Cloud time, in service of security and profit, assumes all is knowable. We bear witness to the collapse of both past and future virtuals into a present dedicated to the exploitation of the spectres of both.

  1. Fragmentation of rotating protostellar clouds

    International Nuclear Information System (INIS)

    Tohline, J.E.

    1980-01-01

    We examine, with a three-dimensional hydrodynamic computer code, the behavior of rotating, isothermal gas clouds as they collapse from Jeans unstable configurations, in order to determine whether they are susceptible to fragmentation during the initial dynamic collapse phase of their evolution. We find that a gas cloud will not fragment unless (a) it begins collapsing from a radius much smaller than the Jeans radius (i.e., the cloud initially encloses many Jeans masses) and (b) irregularities in the cloud's initial structure (specifically, density inhomogeneities) enclose more than one Jeans mass of material. Gas pressure smooths out features that are not initially Jeans unstable while rotation plays no direct role in damping inhomogeneities. Instead of fragmenting, most of our models collapse to a ring configuration (as has been observed by other investigators in two-dimensional, axisymmetric models). The rings appear to be less susceptible to gragmentation from arbitrary perturbations in their structure than has previously been indicated in other work. Because our models, which include the effects of gas pressure, do not readily fragment during a phase of dynamic collapse, we suggest that gas clouds in the galactic disk undergo fragmentation only during quasi-equilibrium phases of their evolution

  2. Cloud computing for comparative genomics

    Directory of Open Access Journals (Sweden)

    Pivovarov Rimma

    2010-05-01

    Full Text Available Abstract Background Large comparative genomics studies and tools are becoming increasingly more compute-expensive as the number of available genome sequences continues to rise. The capacity and cost of local computing infrastructures are likely to become prohibitive with the increase, especially as the breadth of questions continues to rise. Alternative computing architectures, in particular cloud computing environments, may help alleviate this increasing pressure and enable fast, large-scale, and cost-effective comparative genomics strategies going forward. To test this, we redesigned a typical comparative genomics algorithm, the reciprocal smallest distance algorithm (RSD, to run within Amazon's Elastic Computing Cloud (EC2. We then employed the RSD-cloud for ortholog calculations across a wide selection of fully sequenced genomes. Results We ran more than 300,000 RSD-cloud processes within the EC2. These jobs were farmed simultaneously to 100 high capacity compute nodes using the Amazon Web Service Elastic Map Reduce and included a wide mix of large and small genomes. The total computation time took just under 70 hours and cost a total of $6,302 USD. Conclusions The effort to transform existing comparative genomics algorithms from local compute infrastructures is not trivial. However, the speed and flexibility of cloud computing environments provides a substantial boost with manageable cost. The procedure designed to transform the RSD algorithm into a cloud-ready application is readily adaptable to similar comparative genomics problems.

  3. Cloud computing for comparative genomics.

    Science.gov (United States)

    Wall, Dennis P; Kudtarkar, Parul; Fusaro, Vincent A; Pivovarov, Rimma; Patil, Prasad; Tonellato, Peter J

    2010-05-18

    Large comparative genomics studies and tools are becoming increasingly more compute-expensive as the number of available genome sequences continues to rise. The capacity and cost of local computing infrastructures are likely to become prohibitive with the increase, especially as the breadth of questions continues to rise. Alternative computing architectures, in particular cloud computing environments, may help alleviate this increasing pressure and enable fast, large-scale, and cost-effective comparative genomics strategies going forward. To test this, we redesigned a typical comparative genomics algorithm, the reciprocal smallest distance algorithm (RSD), to run within Amazon's Elastic Computing Cloud (EC2). We then employed the RSD-cloud for ortholog calculations across a wide selection of fully sequenced genomes. We ran more than 300,000 RSD-cloud processes within the EC2. These jobs were farmed simultaneously to 100 high capacity compute nodes using the Amazon Web Service Elastic Map Reduce and included a wide mix of large and small genomes. The total computation time took just under 70 hours and cost a total of $6,302 USD. The effort to transform existing comparative genomics algorithms from local compute infrastructures is not trivial. However, the speed and flexibility of cloud computing environments provides a substantial boost with manageable cost. The procedure designed to transform the RSD algorithm into a cloud-ready application is readily adaptable to similar comparative genomics problems.

  4. Essentials of cloud computing

    CERN Document Server

    Chandrasekaran, K

    2014-01-01

    ForewordPrefaceComputing ParadigmsLearning ObjectivesPreambleHigh-Performance ComputingParallel ComputingDistributed ComputingCluster ComputingGrid ComputingCloud ComputingBiocomputingMobile ComputingQuantum ComputingOptical ComputingNanocomputingNetwork ComputingSummaryReview PointsReview QuestionsFurther ReadingCloud Computing FundamentalsLearning ObjectivesPreambleMotivation for Cloud ComputingThe Need for Cloud ComputingDefining Cloud ComputingNIST Definition of Cloud ComputingCloud Computing Is a ServiceCloud Computing Is a Platform5-4-3 Principles of Cloud computingFive Essential Charact

  5. Scientific Services on the Cloud

    Science.gov (United States)

    Chapman, David; Joshi, Karuna P.; Yesha, Yelena; Halem, Milt; Yesha, Yaacov; Nguyen, Phuong

    Scientific Computing was one of the first every applications for parallel and distributed computation. To this date, scientific applications remain some of the most compute intensive, and have inspired creation of petaflop compute infrastructure such as the Oak Ridge Jaguar and Los Alamos RoadRunner. Large dedicated hardware infrastructure has become both a blessing and a curse to the scientific community. Scientists are interested in cloud computing for much the same reason as businesses and other professionals. The hardware is provided, maintained, and administrated by a third party. Software abstraction and virtualization provide reliability, and fault tolerance. Graduated fees allow for multi-scale prototyping and execution. Cloud computing resources are only a few clicks away, and by far the easiest high performance distributed platform to gain access to. There may still be dedicated infrastructure for ultra-scale science, but the cloud can easily play a major part of the scientific computing initiative.

  6. Sequential cloud-point extraction for toxicological screening analysis of medicaments in human plasma by high pressure liquid chromatography with diode array detector.

    Science.gov (United States)

    Madej, Katarzyna; Persona, Karolina; Wandas, Monika; Gomółka, Ewa

    2013-10-18

    A complex extraction system with the use of cloud-point extraction technique (CPE) was developed for sequential isolation of basic and acidic/neutral medicaments from human plasma/serum, screened by HPLC/DAD method. Eight model drugs (paracetamol, promazine, chlorpromazine, amitriptyline, salicyclic acid, opipramol, alprazolam and carbamazepine) were chosen for the study of optimal CPE conditions. The CPE technique consists in partition of an aqueous sample with addition of a surfactant into two phases: micelle-rich phase with the isolated compounds and water phase containing a surfactant below the critical micellar concentration, mainly under influence of temperature change. The proposed extraction system consists of two chief steps: isolation of basic compounds (from pH 12) and then isolation of acidic/neutral compounds (from pH 6) using surfactant Triton X-114 as the extraction medium. Extraction recovery varied from 25.2 to 107.9% with intra-day and inter-day precision (RSD %) ranged 0.88-1087 and 5.32-17.96, respectively. The limits of detection for the studied medicaments at λ 254nm corresponded to therapeutic or low toxic plasma concentration levels. Usefulness of the proposed CPE-HPLC/DAD method for toxicological drug screening was tested via its application to analysis of two serum samples taken from patients suspected of drug overdosing. Published by Elsevier B.V.

  7. Software reliability

    CERN Document Server

    Bendell, A

    1986-01-01

    Software Reliability reviews some fundamental issues of software reliability as well as the techniques, models, and metrics used to predict the reliability of software. Topics covered include fault avoidance, fault removal, and fault tolerance, along with statistical methods for the objective assessment of predictive accuracy. Development cost models and life-cycle cost models are also discussed. This book is divided into eight sections and begins with a chapter on adaptive modeling used to predict software reliability, followed by a discussion on failure rate in software reliability growth mo

  8. Cloud Computing, Tieto Cloud Server Model

    OpenAIRE

    Suikkanen, Saara

    2013-01-01

    The purpose of this study is to find out what is cloud computing. To be able to make wise decisions when moving to cloud or considering it, companies need to understand what cloud is consists of. Which model suits best to they company, what should be taken into account before moving to cloud, what is the cloud broker role and also SWOT analysis of cloud? To be able to answer customer requirements and business demands, IT companies should develop and produce new service models. IT house T...

  9. Blue skies for CLOUD

    CERN Multimedia

    2006-01-01

    Through the recently approved CLOUD experiment, CERN will soon be contributing to climate research. Tests are being performed on the first prototype of CLOUD, an experiment designed to assess cosmic radiation influence on cloud formation.

  10. The Geospatial Data Cloud: An Implementation of Applying Cloud Computing in Geosciences

    Directory of Open Access Journals (Sweden)

    Xuezhi Wang

    2014-11-01

    Full Text Available The rapid growth in the volume of remote sensing data and its increasing computational requirements bring huge challenges for researchers as traditional systems cannot adequately satisfy the huge demand for service. Cloud computing has the advantage of high scalability and reliability, which can provide firm technical support. This paper proposes a highly scalable geospatial cloud platform named the Geospatial Data Cloud, which is constructed based on cloud computing. The architecture of the platform is first introduced, and then two subsystems, the cloud-based data management platform and the cloud-based data processing platform, are described.  ––– This paper was presented at the First Scientific Data Conference on Scientific Research, Big Data, and Data Science, organized by CODATA-China and held in Beijing on 24-25 February, 2014.

  11. Cloud computing development in Armenia

    Directory of Open Access Journals (Sweden)

    Vazgen Ghazaryan

    2014-10-01

    Full Text Available Purpose – The purpose of the research is to clarify benefits and risks in regards with data protection, cost; business can have by the use of this new technologies for the implementation and management of organization’s information systems.Design/methodology/approach – Qualitative case study of the results obtained via interviews. Three research questions were raised: Q1: How can company benefit from using Cloud Computing compared to other solutions?; Q2: What are possible issues that occur with Cloud Computing?; Q3: How would Cloud Computing change an organizations’ IT infrastructure?Findings – The calculations provided in the interview section prove the financial advantages, even though the precise degree of flexibility and performance has not been assessed. Cloud Computing offers great scalability. Another benefit that Cloud Computing offers, in addition to better performance and flexibility, is reliable and simple backup data storage, physically distributed and so almost invulnerable to damage. Although the advantages of Cloud Computing more than compensate for the difficulties associated with it, the latter must be carefully considered. Since the cloud architecture is relatively new, so far the best guarantee against all risks it entails, from a single company's perspective, is a well-formulated service-level agreement, where the terms of service and the shared responsibility and security roles between the client and the provider are defined.Research limitations/implications – study was carried out on the bases of two companies, which gives deeper view, but for more widely applicable results, a wider analysis is necessary.Practical implications:Originality/Value – novelty of the research depends on the fact that existing approaches on this problem mainly focus on technical side of computing.Research type: case study

  12. Moving towards Cloud Security

    OpenAIRE

    Edit Szilvia Rubóczki; Zoltán Rajnai

    2015-01-01

    Cloud computing hosts and delivers many different services via Internet. There are a lot of reasons why people opt for using cloud resources. Cloud development is increasing fast while a lot of related services drop behind, for example the mass awareness of cloud security. However the new generation upload videos and pictures without reason to a cloud storage, but only few know about data privacy, data management and the proprietary of stored data in the cloud. In an enterprise environment th...

  13. A cost modelling system for cloud computing

    OpenAIRE

    Ajeh, Daniel; Ellman, Jeremy; Keogh, Shelagh

    2014-01-01

    An advance in technology unlocks new opportunities for organizations to increase their productivity, efficiency and process automation while reducing the cost of doing business as well. The emergence of cloud computing addresses these prospects through the provision of agile systems that are scalable, flexible and reliable as well as cost effective. Cloud computing has made hosting and deployment of computing resources cheaper and easier with no up-front charges but pay per-use flexible payme...

  14. Use Trust Management Framework to Achieve Effective Security Mechanisms in Cloud Environment

    OpenAIRE

    Hicham Toumi; Bouchra Marzak; Amal Talea; Ahmed Eddaoui; Mohamed Talea

    2017-01-01

    Cloud Computing is an Internet based Computing where virtual shared servers provide software, infrastructure, platform and other resources to the customer on pay-as-you-use basis. Cloud Computing is increasingly becoming popular as many enterprise applications and data are moving into cloud platforms. However, with the enormous use of Cloud, the probability of occurring intrusion also increases. There is a major need of bringing security, transparency and reliability in cloud model for client...

  15. Cloud-Top Entrainment in Stratocumulus Clouds

    Science.gov (United States)

    Mellado, Juan Pedro

    2017-01-01

    Cloud entrainment, the mixing between cloudy and clear air at the boundary of clouds, constitutes one paradigm for the relevance of small scales in the Earth system: By regulating cloud lifetimes, meter- and submeter-scale processes at cloud boundaries can influence planetary-scale properties. Understanding cloud entrainment is difficult given the complexity and diversity of the associated phenomena, which include turbulence entrainment within a stratified medium, convective instabilities driven by radiative and evaporative cooling, shear instabilities, and cloud microphysics. Obtaining accurate data at the required small scales is also challenging, for both simulations and measurements. During the past few decades, however, high-resolution simulations and measurements have greatly advanced our understanding of the main mechanisms controlling cloud entrainment. This article reviews some of these advances, focusing on stratocumulus clouds, and indicates remaining challenges.

  16. Cloud Infrastructure & Applications - CloudIA

    Science.gov (United States)

    Sulistio, Anthony; Reich, Christoph; Doelitzscher, Frank

    The idea behind Cloud Computing is to deliver Infrastructure-as-a-Services and Software-as-a-Service over the Internet on an easy pay-per-use business model. To harness the potentials of Cloud Computing for e-Learning and research purposes, and to small- and medium-sized enterprises, the Hochschule Furtwangen University establishes a new project, called Cloud Infrastructure & Applications (CloudIA). The CloudIA project is a market-oriented cloud infrastructure that leverages different virtualization technologies, by supporting Service-Level Agreements for various service offerings. This paper describes the CloudIA project in details and mentions our early experiences in building a private cloud using an existing infrastructure.

  17. Multi-parameter brain tissue microsensor and interface systems: calibration, reliability and user experiences of pressure and temperature sensors in the setting of neurointensive care.

    Science.gov (United States)

    Childs, Charmaine; Wang, Li; Neoh, Boon Kwee; Goh, Hok Liok; Zu, Mya Myint; Aung, Phyo Wai; Yeo, Tseng Tsai

    2014-10-01

    The objective was to investigate sensor measurement uncertainty for intracerebral probes inserted during neurosurgery and remaining in situ during neurocritical care. This describes a prospective observational study of two sensor types and including performance of the complete sensor-bedside monitoring and readout system. Sensors from 16 patients with severe traumatic brain injury (TBI) were obtained at the time of removal from the brain. When tested, 40% of sensors achieved the manufacturer temperature specification of 0.1 °C. Pressure sensors calibration differed from the manufacturers at all test pressures in 8/20 sensors. The largest pressure measurement error was in the intraparenchymal triple sensor. Measurement uncertainty is not influenced by duration in situ. User experiences reveal problems with sensor 'handling', alarms and firmware. Rigorous investigation of the performance of intracerebral sensors in the laboratory and at the bedside has established measurement uncertainty in the 'real world' setting of neurocritical care.

  18. Human reliability

    International Nuclear Information System (INIS)

    Embrey, D.E.

    1987-01-01

    Concepts and techniques of human reliability have been developed and are used mostly in probabilistic risk assessment. For this, the major application of human reliability assessment has been to identify the human errors which have a significant effect on the overall safety of the system and to quantify the probability of their occurrence. Some of the major issues within human reliability studies are reviewed and it is shown how these are applied to the assessment of human failures in systems. This is done under the following headings; models of human performance used in human reliability assessment, the nature of human error, classification of errors in man-machine systems, practical aspects, human reliability modelling in complex situations, quantification and examination of human reliability, judgement based approaches, holistic techniques and decision analytic approaches. (UK)

  19. Reliability Calculations

    DEFF Research Database (Denmark)

    Petersen, Kurt Erling

    1986-01-01

    Risk and reliability analysis is increasingly being used in evaluations of plant safety and plant reliability. The analysis can be performed either during the design process or during the operation time, with the purpose to improve the safety or the reliability. Due to plant complexity and safety...... and availability requirements, sophisticated tools, which are flexible and efficient, are needed. Such tools have been developed in the last 20 years and they have to be continuously refined to meet the growing requirements. Two different areas of application were analysed. In structural reliability probabilistic...... approaches have been introduced in some cases for the calculation of the reliability of structures or components. A new computer program has been developed based upon numerical integration in several variables. In systems reliability Monte Carlo simulation programs are used especially in analysis of very...

  20. Cloud computing for data-intensive applications

    CERN Document Server

    Li, Xiaolin

    2014-01-01

    This book presents a range of cloud computing platforms for data-intensive scientific applications. It covers systems that deliver infrastructure as a service, including: HPC as a service; virtual networks as a service; scalable and reliable storage; algorithms that manage vast cloud resources and applications runtime; and programming models that enable pragmatic programming and implementation toolkits for eScience applications. Many scientific applications in clouds are also introduced, such as bioinformatics, biology, weather forecasting and social networks. Most chapters include case studie

  1. Silicon Photonics Cloud (SiCloud)

    DEFF Research Database (Denmark)

    DeVore, P. T. S.; Jiang, Y.; Lynch, M.

    2015-01-01

    Silicon Photonics Cloud (SiCloud.org) is the first silicon photonics interactive web tool. Here we report new features of this tool including mode propagation parameters and mode distribution galleries for user specified waveguide dimensions and wavelengths.......Silicon Photonics Cloud (SiCloud.org) is the first silicon photonics interactive web tool. Here we report new features of this tool including mode propagation parameters and mode distribution galleries for user specified waveguide dimensions and wavelengths....

  2. Safety and reliability criteria

    International Nuclear Information System (INIS)

    O'Neil, R.

    1978-01-01

    Nuclear power plants and, in particular, reactor pressure boundary components have unique reliability requirements, in that usually no significant redundancy is possible, and a single failure can give rise to possible widespread core damage and fission product release. Reliability may be required for availability or safety reasons, but in the case of the pressure boundary and certain other systems safety may dominate. Possible Safety and Reliability (S and R) criteria are proposed which would produce acceptable reactor design. Without some S and R requirement the designer has no way of knowing how far he must go in analysing his system or component, or whether his proposed solution is likely to gain acceptance. The paper shows how reliability targets for given components and systems can be individually considered against the derived S and R criteria at the design and construction stage. Since in the case of nuclear pressure boundary components there is often very little direct experience on which to base reliability studies, relevant non-nuclear experience is examined. (author)

  3. Validity and reliability of the Malay version of the Hill-Bone compliance to high blood pressure therapy scale for use in primary healthcare settings in Malaysia: A cross-sectional study.

    Science.gov (United States)

    Cheong, A T; Tong, S F; Sazlina, S G

    2015-01-01

    Hill-Bone compliance to high blood pressure therapy scale (HBTS) is one of the useful scales in primary care settings. It has been tested in America, Africa and Turkey with variable validity and reliability. The aim of this paper was to determine the validity and reliability of the Malay version of HBTS (HBTS-M) for the Malaysian population. HBTS comprises three subscales assessing compliance to medication, appointment and salt intake. The content validity of HBTS to the local population was agreed through consensus of expert panel. The 14 items used in the HBTS were adapted to reflect the local situations. It was translated into Malay and then back-translated into English. The translated version was piloted in 30 participants. This was followed by structural and predictive validity, and internal consistency testing in 262 patients with hypertension, who were on antihypertensive agent(s) for at least 1 year in two primary healthcare clinics in Kuala Lumpur, Malaysia. Exploratory factor analyses and the correlation between HBTS-M total score and blood pressure were performed. The Cronbach's alpha was calculated accordingly. Factor analysis revealed a three-component structure represented by two components on medication adherence and one on salt intake adherence. The Kaiser-Meyer-Olkin statistic was 0.764. The variance explained by each factors were 23.6%, 10.4% and 9.8%, respectively. However, the internal consistency for each component was suboptimal with Cronbach's alpha of 0.64, 0.55 and 0.29, respectively. Although there were two components representing medication adherence, the theoretical concepts underlying each concept cannot be differentiated. In addition, there was no correlation between the HBTS-M total score and blood pressure. HBTS-M did not conform to the structural and predictive validity of the original scale. Its reliability on assessing medication and salt intake adherence would most probably to be suboptimal in the Malaysian primary care setting.

  4. Jupiter's Multi-level Clouds

    Science.gov (United States)

    1997-01-01

    Clouds and hazes at various altitudes within the dynamic Jovian atmosphere are revealed by multi-color imaging taken by the Near-Infrared Mapping Spectrometer (NIMS) onboard the Galileo spacecraft. These images were taken during the second orbit (G2) on September 5, 1996 from an early-morning vantage point 2.1 million kilometers (1.3 million miles) above Jupiter. They show the planet's appearance as viewed at various near-infrared wavelengths, with distinct differences due primarily to variations in the altitudes and opacities of the cloud systems. The top left and right images, taken at 1.61 microns and 2.73 microns respectively, show relatively clear views of the deep atmosphere, with clouds down to a level about three times the atmospheric pressure at the Earth's surface.By contrast, the middle image in top row, taken at 2.17 microns, shows only the highest altitude clouds and hazes. This wavelength is severely affected by the absorption of light by hydrogen gas, the main constituent of Jupiter's atmosphere. Therefore, only the Great Red Spot, the highest equatorial clouds, a small feature at mid-northern latitudes, and thin, high photochemical polar hazes can be seen. In the lower left image, at 3.01 microns, deeper clouds can be seen dimly against gaseous ammonia and methane absorption. In the lower middle image, at 4.99 microns, the light observed is the planet's own indigenous heat from the deep, warm atmosphere.The false color image (lower right) succinctly shows various cloud and haze levels seen in the Jovian atmosphere. This image indicates the temperature and altitude at which the light being observed is produced. Thermally-rich red areas denote high temperatures from photons in the deep atmosphere leaking through minimal cloud cover; green denotes cool temperatures of the tropospheric clouds; blue denotes cold of the upper troposphere and lower stratosphere. The polar regions appear purplish, because small-particle hazes allow leakage and reflectivity

  5. Using MODIS Cloud Regimes to Sort Diagnostic Signals of Aerosol-Cloud-Precipitation Interactions.

    Science.gov (United States)

    Oreopoulos, Lazaros; Cho, Nayeong; Lee, Dongmin

    2017-05-27

    Coincident multi-year measurements of aerosol, cloud, precipitation and radiation at near-global scales are analyzed to diagnose their apparent relationships as suggestive of interactions previously proposed based on theoretical, observational, and model constructs. Specifically, we examine whether differences in aerosol loading in separate observations go along with consistently different precipitation, cloud properties, and cloud radiative effects. Our analysis uses a cloud regime (CR) framework to dissect and sort the results. The CRs come from the Moderate Resolution Imaging Spectroradiometer (MODIS) sensor and are defined as distinct groups of cloud systems with similar co-variations of cloud top pressure and cloud optical thickness. Aerosol optical depth used as proxy for aerosol loading comes from two sources, MODIS observations, and the MERRA-2 re-analysis, and its variability is defined with respect to local seasonal climatologies. The choice of aerosol dataset impacts our results substantially. We also find that the responses of the marine and continental component of a CR are frequently quite disparate. Overall, CRs dominated by warm clouds tend to exhibit less ambiguous signals, but also have more uncertainty with regard to precipitation changes. Finally, we find weak, but occasionally systematic co-variations of select meteorological indicators and aerosol, which serves as a sober reminder that ascribing changes in cloud and cloud-affected variables solely to aerosol variations is precarious.

  6. Observational constraints on Arctic boundary-layer clouds, surface moisture and sensible heat fluxes

    Science.gov (United States)

    Wu, D. L.; Boisvert, L.; Klaus, D.; Dethloff, K.; Ganeshan, M.

    2016-12-01

    The dry, cold environment and dynamic surface variations make the Arctic a unique but difficult region for observations, especially in the atmospheric boundary layer (ABL). Spaceborne platforms have been the key vantage point to capture basin-scale changes during the recent Arctic warming. Using the AIRS temperature, moisture and surface data, we found that the Arctic surface moisture flux (SMF) had increased by 7% during 2003-2013 (18 W/m2 equivalent in latent heat), mostly in spring and fall near the Arctic coastal seas where large sea ice reduction and sea surface temperature (SST) increase were observed. The increase in Arctic SMF correlated well with the increases in total atmospheric column water vapor and low-level clouds, when compared to CALIPSO cloud observations. It has been challenging for climate models to reliably determine Arctic cloud radiative forcing (CRF). Using the regional climate model HIRHAM5 and assuming a more efficient Bergeron-Findeisen process with generalized subgrid-scale variability for total water content, we were able to produce a cloud distribution that is more consistent with the CloudSat/CALIPSO observations. More importantly, the modified schemes decrease (increase) the cloud water (ice) content in mixed-phase clouds, which help to improve the modeled CRF and energy budget at the surface, because of the dominant role of the liquid water in CRF. Yet, the coupling between Arctic low clouds and the surface is complex and has strong impacts on ABL. Studying GPS/COSMIC radio occultation (RO) refractivity profiles in the Arctic coldest and driest months, we successfully derived ABL inversion height and surface-based inversion (SBI) frequency, and they were anti-correlated over the Arctic Ocean. For the late summer and early fall season, we further analyzed Japanese R/V Mirai ship measurements and found that the open-ocean surface sensible heat flux (SSHF) can explain 10 % of the ABL height variability, whereas mechanisms such as cloud

  7. Investigation into Mobile Learning Framework in Cloud Computing Platform

    OpenAIRE

    Wei, Guo; Joan, Lu

    2014-01-01

    Abstract—Cloud computing infrastructure is increasingly\\ud used for distributed applications. Mobile learning\\ud applications deployed in the cloud are a new research\\ud direction. The applications require specific development\\ud approaches for effective and reliable communication. This\\ud paper proposes an interdisciplinary approach for design and\\ud development of mobile applications in the cloud. The\\ud approach includes front service toolkit and backend service\\ud toolkit. The front servi...

  8. Reliability Engineering

    CERN Document Server

    Lazzaroni, Massimo

    2012-01-01

    This book gives a practical guide for designers and users in Information and Communication Technology context. In particular, in the first Section, the definition of the fundamental terms according to the international standards are given. Then, some theoretical concepts and reliability models are presented in Chapters 2 and 3: the aim is to evaluate performance for components and systems and reliability growth. Chapter 4, by introducing the laboratory tests, puts in evidence the reliability concept from the experimental point of view. In ICT context, the failure rate for a given system can be

  9. Reliability training

    Science.gov (United States)

    Lalli, Vincent R. (Editor); Malec, Henry A. (Editor); Dillard, Richard B.; Wong, Kam L.; Barber, Frank J.; Barina, Frank J.

    1992-01-01

    Discussed here is failure physics, the study of how products, hardware, software, and systems fail and what can be done about it. The intent is to impart useful information, to extend the limits of production capability, and to assist in achieving low cost reliable products. A review of reliability for the years 1940 to 2000 is given. Next, a review of mathematics is given as well as a description of what elements contribute to product failures. Basic reliability theory and the disciplines that allow us to control and eliminate failures are elucidated.

  10. The CLOUD experiment

    CERN Multimedia

    Maximilien Brice

    2006-01-01

    The Cosmics Leaving Outdoor Droplets (CLOUD) experiment as shown by Jasper Kirkby (spokesperson). Kirkby shows a sketch to illustrate the possible link between galactic cosmic rays and cloud formations. The CLOUD experiment uses beams from the PS accelerator at CERN to simulate the effect of cosmic rays on cloud formations in the Earth's atmosphere. It is thought that cosmic ray intensity is linked to the amount of low cloud cover due to the formation of aerosols, which induce condensation.

  11. BUSINESS INTELLIGENCE IN CLOUD

    OpenAIRE

    Celina M. Olszak

    2014-01-01

    . The paper reviews and critiques current research on Business Intelligence (BI) in cloud. This review highlights that organizations face various challenges using BI cloud. The research objectives for this study are a conceptualization of the BI cloud issue, as well as an investigation of some benefits and risks from BI cloud. The study was based mainly on a critical analysis of literature and some reports on BI cloud using. The results of this research can be used by IT and business leaders ...

  12. Cloud Robotics Platforms

    Directory of Open Access Journals (Sweden)

    Busra Koken

    2015-01-01

    Full Text Available Cloud robotics is a rapidly evolving field that allows robots to offload computation-intensive and storage-intensive jobs into the cloud. Robots are limited in terms of computational capacity, memory and storage. Cloud provides unlimited computation power, memory, storage and especially collaboration opportunity. Cloud-enabled robots are divided into two categories as standalone and networked robots. This article surveys cloud robotic platforms, standalone and networked robotic works such as grasping, simultaneous localization and mapping (SLAM and monitoring.

  13. Cloud Processed CCN Suppress Stratus Cloud Drizzle

    Science.gov (United States)

    Hudson, J. G.; Noble, S. R., Jr.

    2017-12-01

    Conversion of sulfur dioxide to sulfate within cloud droplets increases the sizes and decreases the critical supersaturation, Sc, of cloud residual particles that had nucleated the droplets. Since other particles remain at the same sizes and Sc a size and Sc gap is often observed. Hudson et al. (2015) showed higher cloud droplet concentrations (Nc) in stratus clouds associated with bimodal high-resolution CCN spectra from the DRI CCN spectrometer compared to clouds associated with unimodal CCN spectra (not cloud processed). Here we show that CCN spectral shape (bimodal or unimodal) affects all aspects of stratus cloud microphysics and drizzle. Panel A shows mean differential cloud droplet spectra that have been divided according to traditional slopes, k, of the 131 measured CCN spectra in the Marine Stratus/Stratocumulus Experiment (MASE) off the Central California coast. K is generally high within the supersaturation, S, range of stratus clouds (< 0.5%). Because cloud processing decreases Sc of some particles, it reduces k. Panel A shows higher concentrations of small cloud droplets apparently grown on lower k CCN than clouds grown on higher k CCN. At small droplet sizes the concentrations follow the k order of the legend, black, red, green, blue (lowest to highest k). Above 13 µm diameter the lines cross and the hierarchy reverses so that blue (highest k) has the highest concentrations followed by green, red and black (lowest k). This reversed hierarchy continues into the drizzle size range (panel B) where the most drizzle drops, Nd, are in clouds grown on the least cloud-processed CCN (blue), while clouds grown on the most processed CCN (black) have the lowest Nd. Suppression of stratus cloud drizzle by cloud processing is an additional 2nd indirect aerosol effect (IAE) that along with the enhancement of 1st IAE by higher Nc (panel A) are above and beyond original IAE. However, further similar analysis is needed in other cloud regimes to determine if MASE was

  14. Reliability calculations

    International Nuclear Information System (INIS)

    Petersen, K.E.

    1986-03-01

    Risk and reliability analysis is increasingly being used in evaluations of plant safety and plant reliability. The analysis can be performed either during the design process or during the operation time, with the purpose to improve the safety or the reliability. Due to plant complexity and safety and availability requirements, sophisticated tools, which are flexible and efficient, are needed. Such tools have been developed in the last 20 years and they have to be continuously refined to meet the growing requirements. Two different areas of application were analysed. In structural reliability probabilistic approaches have been introduced in some cases for the calculation of the reliability of structures or components. A new computer program has been developed based upon numerical integration in several variables. In systems reliability Monte Carlo simulation programs are used especially in analysis of very complex systems. In order to increase the applicability of the programs variance reduction techniques can be applied to speed up the calculation process. Variance reduction techniques have been studied and procedures for implementation of importance sampling are suggested. (author)

  15. Relationship between cloud radiative forcing, cloud fraction and cloud albedo, and new surface-based approach for determining cloud albedo

    OpenAIRE

    Y. Liu; W. Wu; M. P. Jensen; T. Toto

    2011-01-01

    This paper focuses on three interconnected topics: (1) quantitative relationship between surface shortwave cloud radiative forcing, cloud fraction, and cloud albedo; (2) surfaced-based approach for measuring cloud albedo; (3) multiscale (diurnal, annual and inter-annual) variations and covariations of surface shortwave cloud radiative forcing, cloud fraction, and cloud albedo. An analytical expression is first derived to quantify the relationship between cloud radiative forcing, cloud fractio...

  16. Systems reliability/structural reliability

    International Nuclear Information System (INIS)

    Green, A.E.

    1980-01-01

    The question of reliability technology using quantified techniques is considered for systems and structures. Systems reliability analysis has progressed to a viable and proven methodology whereas this has yet to be fully achieved for large scale structures. Structural loading variants over the half-time of the plant are considered to be more difficult to analyse than for systems, even though a relatively crude model may be a necessary starting point. Various reliability characteristics and environmental conditions are considered which enter this problem. The rare event situation is briefly mentioned together with aspects of proof testing and normal and upset loading conditions. (orig.)

  17. Cloud CCN feedback

    International Nuclear Information System (INIS)

    Hudson, J.G.

    1992-01-01

    Cloud microphysics affects cloud albedo, precipitation efficiency, and the extent of cloud feedback in response to global warming. Compared to other cloud parameters, microphysics is unique in its large range of variability and in the fact that much of the variability is anthropogenic. Probably the most important determinant of cloud microphysics is the spectrum of cloud condensation nuclei (CCN), which displays considerable variability and has a large anthropogenic component. When analyzed in combination, three field-observation projects display the interrelationship between CCN and cloud microphysics. CCN were measured with the Desert Research Institute (DRI) instantaneous CCN spectrometer. Cloud microphysical measurements were obtained with the National Center for Atmospheric Research Lockheed Electra. Since CCN and cloud microphysics each affect the other, a positive feedback mechanism can result

  18. Nitrided steel with increased reliability for steam turbine blades of low pressure cylinders; Vysokoazotistaya stal` s povishennoj nadezhdnostni dlya lopatok ha tsilindrov niskogo davleniya parnikh turbin

    Energy Technology Data Exchange (ETDEWEB)

    Andreev, Ch; Lengarski, P [Bylgarska Akademiya na Naukite, Sofia (Bulgaria). Inst. po Metaloznanie i Tekhnologiya na Metalite

    1996-12-31

    A new type of steel has been developed, containing 0.11-0.20% N and less than 0.05% C, the sum of both components being within the range 0.16-0.26%. The metal has an austenite-martensite structure with 10-30% austenite content. Samples obtained by counter-pressure casting have been investigated with respect to the influence of the thermal treatment on mechanical properties. The best properties are obtained by hardening with heating at 1050 °C and cooling at 550 °C: yield strength R₀.₂ ≥ 850 MPa, relative elongation A ≥ 15%, relative reduction of area Z ≥ 50%, impact toughness KCU ≥ 588 kJ/m² at a critical brittleness temperature below -40 °C. These properties are combined with high corrosion and wear resistance and make the steel suitable for steam turbine blades. 5 refs., 2 figs., 4 tabs.

  19. Building Resilient Cloud Over Unreliable Commodity Infrastructure

    OpenAIRE

    Kedia, Piyus; Bansal, Sorav; Deshpande, Deepak; Iyer, Sreekanth

    2012-01-01

    Cloud Computing has emerged as a successful computing paradigm for efficiently utilizing managed compute infrastructure such as high speed rack-mounted servers, connected with high speed networking, and reliable storage. Usually such infrastructure is dedicated, physically secured and has reliable power and networking infrastructure. However, much of our idle compute capacity is present in unmanaged infrastructure like idle desktops, lab machines, physically distant server machines, and lapto...

  20. Hybrid cloud for dummies

    CERN Document Server

    Hurwitz, Judith; Halper, Fern; Kirsch, Dan

    2012-01-01

    Understand the cloud and implement a cloud strategy for your business Cloud computing enables companies to save money by leasing storage space and accessing technology services through the Internet instead of buying and maintaining equipment and support services. Because it has its own unique set of challenges, cloud computing requires careful explanation. This easy-to-follow guide shows IT managers and support staff just what cloud computing is, how to deliver and manage cloud computing services, how to choose a service provider, and how to go about implementation. It also covers security and

  1. Secure cloud computing

    CERN Document Server

    Jajodia, Sushil; Samarati, Pierangela; Singhal, Anoop; Swarup, Vipin; Wang, Cliff

    2014-01-01

    This book presents a range of cloud computing security challenges and promising solution paths. The first two chapters focus on practical considerations of cloud computing. In Chapter 1, Chandramouli, Iorga, and Chokani describe the evolution of cloud computing and the current state of practice, followed by the challenges of cryptographic key management in the cloud. In Chapter 2, Chen and Sion present a dollar cost model of cloud computing and explore the economic viability of cloud computing with and without security mechanisms involving cryptographic mechanisms. The next two chapters addres

  2. Clouds of Venus

    Energy Technology Data Exchange (ETDEWEB)

    Knollenberg, R G [Particle Measuring Systems, Inc., 1855 South 57th Court, Boulder, Colorado 80301, U.S.A.; Hansen, J [National Aeronautics and Space Administration, New York (USA). Goddard Inst. for Space Studies; Ragent, B [National Aeronautics and Space Administration, Moffett Field, Calif. (USA). Ames Research Center; Martonchik, J [Jet Propulsion Lab., Pasadena, Calif. (USA); Tomasko, M [Arizona Univ., Tucson (USA)

    1977-05-01

    The current state of knowledge of the Venusian clouds is reviewed. The visible clouds of Venus are shown to be quite similar to low level terrestrial hazes of strong anthropogenic influence. Possible nucleation and particle growth mechanisms are presented. The Pioneer Venus experiments that emphasize cloud measurements are described and their expected findings are discussed in detail. The results of these experiments should define the cloud particle composition, microphysics, thermal and radiative heat budget, rough dynamical features and horizontal and vertical variations in these and other parameters. This information should be sufficient to initialize cloud models which can be used to explain the cloud formation, decay, and particle life cycle.

  3. Overview of the CERES Edition-4 Multilayer Cloud Property Datasets

    Science.gov (United States)

    Chang, F. L.; Minnis, P.; Sun-Mack, S.; Chen, Y.; Smith, R. A.; Brown, R. R.

    2014-12-01

    Knowledge of the cloud vertical distribution is important for understanding the role of clouds on earth's radiation budget and climate change. Since high-level cirrus clouds with low emission temperatures and small optical depths can provide a positive feedback to a climate system and low-level stratus clouds with high emission temperatures and large optical depths can provide a negative feedback effect, the retrieval of multilayer cloud properties using satellite observations, like Terra and Aqua MODIS, is critically important for a variety of cloud and climate applications. For the objective of the Clouds and the Earth's Radiant Energy System (CERES), new algorithms have been developed using Terra and Aqua MODIS data to allow separate retrievals of cirrus and stratus cloud properties when the two dominant cloud types are simultaneously present in a multilayer system. In this paper, we will present an overview of the new CERES Edition-4 multilayer cloud property datasets derived from Terra as well as Aqua. Assessment of the new CERES multilayer cloud datasets will include high-level cirrus and low-level stratus cloud heights, pressures, and temperatures as well as their optical depths, emissivities, and microphysical properties.

  4. Human reliability

    International Nuclear Information System (INIS)

    Bubb, H.

    1992-01-01

    This book resulted from the activity of Task Force 4.2 - 'Human Reliability'. This group was established on February 27th, 1986, at the plenary meeting of the Technical Reliability Committee of VDI, within the framework of the joint committee of VDI on industrial systems technology - GIS. It is composed of representatives of industry, representatives of research institutes, of technical control boards and universities, whose job it is to study how man fits into the technical side of the world of work and to optimize this interaction. In a total of 17 sessions, information from the part of ergonomy dealing with human reliability in using technical systems at work was exchanged, and different methods for its evaluation were examined and analyzed. The outcome of this work was systematized and compiled in this book. (orig.)

  5. Microelectronics Reliability

    Science.gov (United States)

    2017-01-17

    [Figure captions: inverters connected in a chain; typical graph showing frequency versus square root of ...] The work aimed at developing an experimental reliability-estimating methodology that could both illuminate the lifetime reliability of advanced devices, circuits and ... or the FIT of the device; in other words, an accurate estimate of the device lifetime was found, and thus the reliability can be conveniently ...

  6. Radiative properties of clouds

    International Nuclear Information System (INIS)

    Twomey, S.

    1993-01-01

    The climatic effects of condensation nuclei in the formation of cloud droplets and the subsequent role of the cloud droplets as contributors to the planetary short-wave albedo is emphasized. Microphysical properties of clouds, which can be greatly modified by the degree of mixing with cloud-free air from outside, are discussed. The effect of clouds on visible radiation is assessed through multiple scattering of the radiation. Cloudwater or ice absorbs more with increasing wavelength in the near-infrared region, with water vapor providing the stronger absorption over narrower wavelength bands. Cloud thermal infrared absorption can be solely related to liquid water content at least for shallow clouds and clouds in the early development state. Three-dimensional general circulation models have been used to study the climatic effect of clouds. It was found for such studies (which did not consider variations in cloud albedo) that the cooling effects due to the increase in planetary short-wave albedo from clouds were offset by heating effects due to thermal infrared absorption by the cloud. Two permanent direct effects of increased pollution are discussed in this chapter: (a) an increase of absorption in the visible and near infrared because of increased amounts of elemental carbon, which gives rise to a warming effect climatically, and (b) an increased optical thickness of clouds due to increasing cloud droplet number concentration caused by increasing cloud condensation nuclei number concentration, which gives rise to a cooling effect climatically. An increase in cloud albedo from 0.7 to 0.87 produces an appreciable climatic perturbation of cooling up to 2.5 K at the ground, using a hemispheric general circulation model. Effects of pollution on cloud thermal infrared absorption are negligible

  7. Relations between a novel, reliable, and rapid index of arterial compliance (PP-HDI) and well-established indices of arterial blood pressure (ABP) in a sample of hypertensive elderly subjects.

    Science.gov (United States)

    Bergamini, L; Finelli, M E; Bendini, C; Ferrari, E; Veschi, M; Neviani, F; Manni, B; Pelosi, A; Rioli, G; Neri, M

    2009-01-01

    Hypertension is a risk factor for long-lasting arterial wall remodelling leading to stiffness. The rapid method of measuring pulse pressure (PP) with the Hypertension Diagnostic Instruments (HDI) tool, called PP-HDI, overcomes some of the problems arising with more time-consuming methods, like ambulatory blood pressure monitoring (ABPM), and gives information about the elasticity of the arterial walls. We studied the relationship between PP-HDI, large artery compliance (LA-C), small artery compliance (SA-C) and a few well-established indices of arterial blood pressure (ABP) in a sample of 75 hypertensive subjects aged 65 years and over. Significant correlations between LA-C and heart rate (HR), PP-ABPM and PP-HDI were found. SA-C relates to HR and to systolic blood pressure (SBP) measured in lying and standing positions. Applying a stepwise regression analysis, we found that LA-C variance stems from PP-HDI and HR, while SA-C variance stems from SBP in the lying position. Receiver operating characteristic (ROC) curves for thresholds of PP showed that PP-HDI reached levels of sensitivity/specificity similar to PP-ABPM. In conclusion, surveillance of ABP through hemodynamic indices, in particular SBP, is essential; nevertheless, the advantage of this control is not known in an elderly population where organ damage is already evident. PP necessarily requires an instrumental measurement. PP-HDI is similar in reliability to PP-ABPM, but is more rapid and readily applicable in an elderly population.
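
    The ROC comparison described above reduces to sweeping a cutoff over the index and computing sensitivity and specificity against a reference classification at each threshold. A minimal sketch (the pulse-pressure values, labels, and thresholds are invented for illustration, not study data):

        import numpy as np

        def roc_points(scores, labels, thresholds):
            """(threshold, sensitivity, specificity) triples; score >= t is positive."""
            scores = np.asarray(scores)
            labels = np.asarray(labels, dtype=bool)
            out = []
            for t in thresholds:
                pred = scores >= t
                sens = np.mean(pred[labels])    # true-positive rate
                spec = np.mean(~pred[~labels])  # true-negative rate
                out.append((t, sens, spec))
            return out

        # Illustrative pulse-pressure values (mmHg) and reference labels
        pp = [48, 55, 60, 62, 70, 75, 80, 85, 90, 95]
        label = [0, 0, 0, 1, 0, 1, 1, 1, 1, 1]  # 1 = condition present
        for t, sens, spec in roc_points(pp, label, [60, 70, 80]):
            print(f"threshold {t} mmHg: sensitivity {sens:.2f}, specificity {spec:.2f}")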

  8. Design Private Cloud of Oil and Gas SCADA System

    Directory of Open Access Journals (Sweden)

    Liu Miao

    2014-05-01

    Full Text Available SCADA (Supervisory Control and Data Acquisition) is a computer control system based on supervision. SCADA systems are very important to oil and gas pipeline engineering. Cloud computing is fundamentally altering expectations for how and when computing, storage and networking resources should be allocated, managed and consumed. In order to increase the resource utilization, reliability and availability of an oil and gas pipeline SCADA system, a SCADA system based on cloud computing is proposed in this paper. The paper introduces the system framework of a SCADA system based on cloud computing and the implementation details of the private cloud platform for the SCADA system.

  9. Inter-rater reliability of the Pressure Ulcer Scale for Healing (PUSH) in patients with chronic leg ulcers

    Directory of Open Access Journals (Sweden)

    Vera Lúcia Conceição de Gouveia Santos

    2007-06-01

    Full Text Available This study aimed to evaluate the inter-rater reliability of the Pressure Ulcer Scale for Healing (PUSH), in its version adapted to the Portuguese language, in patients with chronic leg ulcers. The Kappa index was used for the concordance analysis. After the ethical requirements were fulfilled, 41 patients with ulcers were examined; 49% of the ulcers were located on the right leg and 36% were of venous etiology. The Kappa indices (0.97 to 1.00) obtained in the comparison between the observations of clinical nurses and stomal therapists (gold standard), for all sub-scales and for the total score, confirmed the tool's inter-rater reliability with statistical significance (p < 0.001), the agreement between observers being very good to total.
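
    The Kappa agreement index reported above can be computed directly from two raters' category assignments. A minimal sketch of the unweighted (Cohen's) form; the ratings below are invented for illustration, not study data:

        from collections import Counter

        def cohens_kappa(rater_a, rater_b):
            """Cohen's kappa: chance-corrected agreement between two raters."""
            assert len(rater_a) == len(rater_b)
            n = len(rater_a)
            observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
            freq_a, freq_b = Counter(rater_a), Counter(rater_b)
            expected = sum(freq_a[c] * freq_b[c] for c in freq_a) / n**2
            return (observed - expected) / (1 - expected)

        # Illustrative PUSH sub-scale scores from two raters (not study data)
        nurse  = [3, 2, 4, 1, 0, 3, 2, 4, 1, 2]
        expert = [3, 2, 4, 1, 0, 3, 2, 4, 2, 2]
        print(f"kappa = {cohens_kappa(nurse, expert):.2f}")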

  10. Moving towards Cloud Security

    Directory of Open Access Journals (Sweden)

    Edit Szilvia Rubóczki

    2015-01-01

    Full Text Available Cloud computing hosts and delivers many different services via the Internet. There are many reasons why people opt for using cloud resources. Cloud development is increasing fast, while many related services lag behind, for example mass awareness of cloud security. The new generation uploads videos and pictures to cloud storage without a second thought, but only a few know about data privacy, data management and the ownership of data stored in the cloud. In an enterprise environment, users have to know the rules of cloud usage, yet they often have little knowledge of traditional IT security. It is important to measure the level of their knowledge and to evolve the training system so as to develop security awareness. The article argues for the importance of new metrics and algorithms for measuring the security awareness of corporate users and employees, so as to include the requirements of emerging cloud security.

  11. Cloud Computing for radiologists.

    Science.gov (United States)

    Kharat, Amit T; Safvi, Amjad; Thind, Ss; Singh, Amarjit

    2012-07-01

    Cloud computing is a concept wherein a computer grid is created using the Internet with the sole purpose of utilizing shared resources such as computer software, hardware, on a pay-per-use model. Using Cloud computing, radiology users can efficiently manage multimodality imaging units by using the latest software and hardware without paying huge upfront costs. Cloud computing systems usually work on public, private, hybrid, or community models. Using the various components of a Cloud, such as applications, client, infrastructure, storage, services, and processing power, Cloud computing can help imaging units rapidly scale and descale operations and avoid huge spending on maintenance of costly applications and storage. Cloud computing allows flexibility in imaging. It sets free radiology from the confines of a hospital and creates a virtual mobile office. The downsides to Cloud computing involve security and privacy issues which need to be addressed to ensure the success of Cloud computing in the future.

  12. Cloud Computing for radiologists

    International Nuclear Information System (INIS)

    Kharat, Amit T; Safvi, Amjad; Thind, SS; Singh, Amarjit

    2012-01-01

    Cloud computing is a concept wherein a computer grid is created using the Internet with the sole purpose of utilizing shared resources such as computer software, hardware, on a pay-per-use model. Using Cloud computing, radiology users can efficiently manage multimodality imaging units by using the latest software and hardware without paying huge upfront costs. Cloud computing systems usually work on public, private, hybrid, or community models. Using the various components of a Cloud, such as applications, client, infrastructure, storage, services, and processing power, Cloud computing can help imaging units rapidly scale and descale operations and avoid huge spending on maintenance of costly applications and storage. Cloud computing allows flexibility in imaging. It sets free radiology from the confines of a hospital and creates a virtual mobile office. The downsides to Cloud computing involve security and privacy issues which need to be addressed to ensure the success of Cloud computing in the future

  13. Cloud computing for radiologists

    Directory of Open Access Journals (Sweden)

    Amit T Kharat

    2012-01-01

    Full Text Available Cloud computing is a concept wherein a computer grid is created using the Internet with the sole purpose of utilizing shared resources such as computer software, hardware, on a pay-per-use model. Using Cloud computing, radiology users can efficiently manage multimodality imaging units by using the latest software and hardware without paying huge upfront costs. Cloud computing systems usually work on public, private, hybrid, or community models. Using the various components of a Cloud, such as applications, client, infrastructure, storage, services, and processing power, Cloud computing can help imaging units rapidly scale and descale operations and avoid huge spending on maintenance of costly applications and storage. Cloud computing allows flexibility in imaging. It sets free radiology from the confines of a hospital and creates a virtual mobile office. The downsides to Cloud computing involve security and privacy issues which need to be addressed to ensure the success of Cloud computing in the future.

  14. Marine cloud brightening

    OpenAIRE

    Latham, John; Bower, Keith; Choularton, Tom; Coe, Hugh; Connolly, Paul; Cooper, Gary; Craft, Tim; Foster, Jack; Gadian, Alan; Galbraith, Lee; Iacovides, Hector; Johnston, David; Launder, Brian; Leslie, Brian; Meyer, John

    2012-01-01

    The idea behind the marine cloud-brightening (MCB) geoengineering technique is that seeding marine stratocumulus clouds with copious quantities of roughly monodisperse sub-micrometre sea water particles might significantly enhance the cloud droplet number concentration, and thereby the cloud albedo and possibly longevity. This would produce a cooling, which general circulation model (GCM) computations suggest could—subject to satisfactory resolution of technical and scientific problems identi...

  15. Cloud computing strategies

    CERN Document Server

    Chorafas, Dimitris N

    2011-01-01

    A guide to managing cloud projects, Cloud Computing Strategies provides the understanding required to evaluate the technology and determine how it can be best applied to improve business and enhance your overall corporate strategy. Based on extensive research, it examines the opportunities and challenges that loom in the cloud. It explains exactly what cloud computing is, what it has to offer, and calls attention to the important issues management needs to consider before passing the point of no return regarding financial commitments.

  16. Towards Indonesian Cloud Campus

    OpenAIRE

    Thamrin, Taqwan; Lukman, Iing; Wahyuningsih, Dina Ika

    2013-01-01

    Nowadays, Cloud Computing is the most discussed term in business and academic environments. A cloud campus has many benefits, such as access to file storage, e-mail, databases, educational resources, research applications and tools anywhere, on demand, for faculty, administrators, staff, students and other users in a university. Furthermore, a cloud campus reduces a university's IT complexity and cost. This paper discusses the implementation of the Indonesian cloud campus and its various opportunities and benefits...

  17. Cloud services in organization

    OpenAIRE

    FUXA, Jan

    2013-01-01

    The work deals with the definition of the term cloud computing; cloud computing models, types, advantages and disadvantages; and a comparison of SaaS solutions such as Google Apps and Office 365 in the area of electronic communications. It examines the use of cloud computing in corporate practice, covering both good and bad practice. The following section describes a methodology for choosing the appropriate cloud service for an organization. Another part deals with analyzing the possibilities of SaaS i...

  18. Orchestrating Your Cloud Orchestra

    OpenAIRE

    Hindle, Abram

    2015-01-01

    Cloud computing potentially ushers in a new era of computer music performance with exceptionally large computer music instruments consisting of 10s to 100s of virtual machines which we propose to call a 'cloud-orchestra'. Cloud computing allows for the rapid provisioning of resources, but to deploy such a complicated and interconnected network of software synthesizers in the cloud requires a lot of manual work, system administration knowledge, and developer/operator skills. This is a barrier ...

  19. Cloud security mechanisms

    OpenAIRE

    2014-01-01

    Cloud computing has brought great benefits in cost and flexibility for provisioning services. The greatest challenge of cloud computing remains however the question of security. The current standard tools in access control mechanisms and cryptography can only partly solve the security challenges of cloud infrastructures. In the recent years of research in security and cryptography, novel mechanisms, protocols and algorithms have emerged that offer new ways to create secure services atop cloud...

  20. Cloud computing for radiologists

    OpenAIRE

    Amit T Kharat; Amjad Safvi; S S Thind; Amarjit Singh

    2012-01-01

    Cloud computing is a concept wherein a computer grid is created using the Internet with the sole purpose of utilizing shared resources such as computer software, hardware, on a pay-per-use model. Using Cloud computing, radiology users can efficiently manage multimodality imaging units by using the latest software and hardware without paying huge upfront costs. Cloud computing systems usually work on public, private, hybrid, or community models. Using the various components of a Cloud, such as...

  1. Cloud Robotics Model

    OpenAIRE

    Mester, Gyula

    2015-01-01

    Cloud Robotics was born from the merger of service robotics and cloud technologies. It allows robots to benefit from the powerful computational, storage, and communications resources of modern data centres. Cloud robotics allows robots to take advantage of the rapid increase in data transfer rates to offload tasks without hard real time requirements. Cloud Robotics has rapidly gained momentum with initiatives by companies such as Google, Willow Garage and Gostai as well as more than a dozen a...

  2. Genomics With Cloud Computing

    OpenAIRE

    Sukhamrit Kaur; Sandeep Kaur

    2015-01-01

    Abstract Genomics is the study of the genome, which produces large amounts of data requiring substantial storage and computational power. These issues are addressed by cloud computing, which provides various cloud platforms for genomics. These platforms offer many services to users, such as easy access to data, easy sharing and transfer, storage of hundreds of terabytes, and greater computational power. Some cloud platforms are Google Genomics, DNAnexus and Globus Genomics. Various features of cloud computin...

  3. Bipolar H II regions produced by cloud-cloud collisions

    Science.gov (United States)

    Whitworth, Anthony; Lomax, Oliver; Balfour, Scott; Mège, Pierre; Zavagno, Annie; Deharveng, Lise

    2018-05-01

    We suggest that bipolar H II regions may be the aftermath of collisions between clouds. Such a collision will produce a shock-compressed layer, and a star cluster can then condense out of the dense gas near the center of the layer. If the clouds are sufficiently massive, the star cluster is likely to contain at least one massive star, which emits ionizing radiation, and excites an H II region, which then expands, sweeping up the surrounding neutral gas. Once most of the matter in the clouds has accreted onto the layer, expansion of the H II region meets little resistance in directions perpendicular to the midplane of the layer, and so it expands rapidly to produce two lobes of ionized gas, one on each side of the layer. Conversely, in directions parallel to the midplane of the layer, expansion of the H II region stalls due to the ram pressure of the gas that continues to fall towards the star cluster from the outer parts of the layer; a ring of dense neutral gas builds up around the waist of the bipolar H II region, and may spawn a second generation of star formation. We present a dimensionless model for the flow of ionized gas in a bipolar H II region created according to the above scenario, and predict the characteristics of the resulting free-free continuum and recombination-line emission. This dimensionless model can be scaled to the physical parameters of any particular system. Our intention is that these predictions will be useful in testing the scenario outlined above, and thereby providing indirect support for the role of cloud-cloud collisions in triggering star formation.

  4. The sensitivities of in cloud and cloud top phase distributions to primary ice formation in ICON-LEM

    Science.gov (United States)

    Beydoun, H.; Karrer, M.; Tonttila, J.; Hoose, C.

    2017-12-01

    Mixed phase clouds remain a leading source of uncertainty in our attempt to quantify cloud-climate and aerosol-cloud climate interactions. Nevertheless, recent advances in parametrizing the primary ice formation process, high resolution cloud modelling, and retrievals of cloud phase distributions from satellite data offer an excellent opportunity to conduct closure studies on the sensitivity of the cloud phase to microphysical and dynamical processes. Particularly, the reliability of satellite data to resolve the phase at the top of the cloud provides a promising benchmark to compare model output to. We run large eddy simulations with the new ICOsahedral Non-hydrostatic atmosphere model (ICON) to place bounds on the sensitivity of in cloud and cloud top phase to the primary ice formation process. State of the art primary ice formation parametrizations in the form of the cumulative ice active site density ns are implemented in idealized deep convective cloud simulations. We exploit the ability of ICON-LEM to switch between a two moment microphysics scheme and the newly developed Predicted Particle Properties (P3) scheme by running our simulations in both configurations for comparison. To quantify the sensitivity of cloud phase to primary ice formation, cloud ice content is evaluated against order of magnitude changes in ns at variable convective strengths. Furthermore, we assess differences between in cloud and cloud top phase distributions as well as the potential impact of updraft velocity on the suppression of the Wegener-Bergeron-Findeisen process. The study aims to evaluate our practical understanding of primary ice formation in the context of predicting the structure and evolution of mixed phase clouds.
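
    The cumulative ice-active-site density n_s referenced above maps a particle's surface area to a frozen fraction via f = 1 - exp(-n_s(T)·A), the standard INAS formulation. A minimal sketch (the n_s values and the particle size are invented for illustration and are not the parametrizations used in the study):

        import numpy as np

        def frozen_fraction(ns_per_m2, surface_area_m2):
            """Frozen fraction from an ice-nucleation active site (INAS)
            density: f = 1 - exp(-n_s(T) * A) for particles of surface area A."""
            return 1.0 - np.exp(-ns_per_m2 * surface_area_m2)

        # Illustrative n_s rising as temperature drops (made-up values, m^-2)
        A = 4.0 * np.pi * (0.5e-6) ** 2  # surface area of a 0.5 um radius sphere
        for T_c, ns in [(-15.0, 1e9), (-20.0, 1e10), (-25.0, 1e11)]:
            print(f"T = {T_c:5.1f} C: frozen fraction = {frozen_fraction(ns, A):.3f}")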

  5. Intra-session test-retest reliability of magnitude and structure of center of pressure from the Nintendo Wii Balance Board™ for a visually impaired and normally sighted population.

    Science.gov (United States)

    Jeter, Pamela E; Wang, Jiangxia; Gu, Jialiang; Barry, Michael P; Roach, Crystal; Corson, Marilyn; Yang, Lindsay; Dagnelie, Gislin

    2015-02-01

    Individuals with visual impairment (VI) have irreparable damage to one of the input streams contributing to postural stability. Here, we evaluated the intra-session test-retest reliability of the Wii Balance Board (WBB) for measuring Center of Pressure (COP) magnitude and structure, i.e. approximate entropy (ApEn), in fourteen legally blind participants and 21 participants with corrected-to-normal vision. Participants completed a validated balance protocol which included four sensory conditions: double-leg standing on a firm surface with eyes open (EO-firm); a firm surface with eyes closed (EC-firm); a foam surface with EO (EO-foam); and a foam surface with EC (EC-foam). Participants performed the full balance protocol twice during the session, separated by a period of 15 min, to determine the intraclass correlation coefficient (ICC). Absolute reliability was determined by the standard error of measurement (SEM). The minimal difference (MD) was estimated to determine clinical significance for future studies. COP measures were derived from data sent by the WBB to a laptop via Bluetooth. COP scores increased with the difficulty of the sensory condition, indicating WBB sensitivity (all p < 0.05) ... balance impairment among VI persons. Copyright © 2014 Elsevier B.V. All rights reserved.
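
    The absolute-reliability statistics named above follow directly from the ICC and the between-subject spread: SEM = SD·sqrt(1 - ICC), and the minimal difference MD = 1.96·SEM·sqrt(2). A minimal numerical sketch (the SD and ICC values are illustrative, not the study's):

        import math

        def sem(sd, icc):
            """Standard error of measurement from between-subject SD and ICC."""
            return sd * math.sqrt(1.0 - icc)

        def minimal_difference(sd, icc, z=1.96):
            """Smallest change exceeding measurement error at 95% confidence."""
            return z * sem(sd, icc) * math.sqrt(2.0)

        # Illustrative COP path-length stats: SD = 12.0 cm across subjects, ICC = 0.85
        print(f"SEM = {sem(12.0, 0.85):.2f} cm")
        print(f"MD  = {minimal_difference(12.0, 0.85):.2f} cm")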

  6. Chargeback for cloud services.

    NARCIS (Netherlands)

    Baars, T.; Khadka, R.; Stefanov, H.; Jansen, S.; Batenburg, R.; Heusden, E. van

    2014-01-01

    With pay-per-use pricing models, elastic scaling of resources, and the use of shared virtualized infrastructures, cloud computing offers more efficient use of capital and agility. To leverage the advantages of cloud computing, organizations have to introduce cloud-specific chargeback practices.

  7. On CLOUD nine

    CERN Multimedia

    2009-01-01

    The team from the CLOUD experiment - the world’s first experiment using a high-energy particle accelerator to study the climate - were on cloud nine after the arrival of their new three-metre diameter cloud chamber. This marks the end of three years’ R&D and design, and the start of preparations for data taking later this year.

  8. Cloud Computing Explained

    Science.gov (United States)

    Metz, Rosalyn

    2010-01-01

    While many talk about the cloud, few actually understand it. Three organizations' definitions come to the forefront when defining the cloud: Gartner, Forrester, and the National Institutes of Standards and Technology (NIST). Although both Gartner and Forrester provide definitions of cloud computing, the NIST definition is concise and uses…

  9. Greening the Cloud

    NARCIS (Netherlands)

    van den Hoed, Robert; Hoekstra, Eric; Procaccianti, G.; Lago, P.; Grosso, Paola; Taal, Arie; Grosskop, Kay; van Bergen, Esther

    The cloud has become an essential part of our daily lives. We use it to store our documents (Dropbox), to stream our music and films (Spotify and Netflix) and, without giving it any thought, we use it to work on documents in the cloud (Google Docs). The cloud forms a massive storage and processing

  10. Security in the cloud.

    Science.gov (United States)

    Degaspari, John

    2011-08-01

    As more provider organizations look to the cloud computing model, they face a host of security-related questions. What are the appropriate applications for the cloud, what is the best cloud model, and what do they need to know to choose the best vendor? Hospital CIOs and security experts weigh in.

  11. Software Product Lines for Multi-Cloud Microservices-Based Applications

    OpenAIRE

    Sousa , Gustavo; Rudametkin , Walter; Duchien , Laurence

    2016-01-01

    International audience; Multi-cloud computing is the use of resources and services from multiple independent cloud providers. It is used to avoid vendor lock-in, comply with location regulations, and optimize reliability, performance and costs. Microservices is an architectural style becoming increasingly used in cloud computing as it allows for better resources usage. However, building multi-cloud systems is a very complex and time consuming task, which calls for automation and supporting to...

  12. ResourceGate: A New Solution for Cloud Computing Resource Allocation

    OpenAIRE

    Abdullah A. Sheikh

    2012-01-01

    Cloud computing has come to be a focus of educational and business communities. Their concerns include the need to improve the Quality of Service (QoS) provided, as well as attributes such as reliability, performance and cost reduction. Cloud computing provides many benefits in terms of low cost and accessibility of data. Ensuring these benefits is considered to be the major factor in the cloud computing environment. This paper surveys recent research related to cloud computing resource al...

  13. The Case Of The Elusive Electron Cloud

    CERN Multimedia

    2001-01-01

    Fig. 1 Electron cloud following a controlled beam bump. 'Elementary my dear Watson, you see this footprint proves it was the butler in the foyer with the butcher's knife.' Sir Arthur Conan Doyle's Sherlock Holmes may at first appear a long way from particle physics, but first appearances are often deceiving... The mysteries behind the 'Electron Cloud Effect', a dangerous electron multiplication phenomenon which could possibly limit the LHC's performance, have recently been under a detective level investigation that is yielding data that would make even the valiant Holmes balk. The electron cloud, a group of free floating electrons in the collider, is caused by electron multiplication on the vacuum chamber wall and was first observed in 1976. The cloud that develops is a serious problem because it can lead to beam growth, increased gas release from the collider surface, and a supplementary heat load to the LHC cryogenic system. The phenomenon has been observed since 1999 in the SPS where unexpected pressure...

  14. Electron clouds in high energy hadron accelerators

    Energy Technology Data Exchange (ETDEWEB)

    Petrov, Fedor

    2013-08-29

    The formation of electron clouds in accelerators operating with positrons and positively charged ions is a well-known problem. Depending on the parameters of the beam, the electron cloud manifests itself differently. In this thesis the electron cloud phenomenon is studied for the CERN Super Proton Synchrotron (SPS) and Large Hadron Collider (LHC) conditions, and for the heavy-ion synchrotron SIS-100 as a part of the FAIR complex in Darmstadt, Germany. Under the FAIR conditions extensive use of slow extraction will be made. After acceleration the beam will be debunched and continuously extracted to the experimental area. During this process, residual gas electrons can accumulate in the electric field of the beam. If this accumulation is not prevented, then at some point the beam can become unstable. Under the SPS and LHC conditions the beam is always bunched. The accumulation of the electron cloud happens due to secondary electron emission. At the time this thesis was written, the electron cloud was known to limit the maximum intensity of the two machines. During operation with 25 ns bunch spacing, the electron cloud was causing significant beam quality deterioration. At moderate intensities below the instability threshold, the electron cloud was responsible for the bunch energy loss. In the framework of this thesis it was found that the instability thresholds of coasting beams with similar space charge tune shifts, emittances and energies are identical. First-of-their-kind simulations of the effect of Coulomb collisions on electron cloud density in coasting beams were performed. It was found that for any hadron coasting beam one can choose vacuum conditions that will limit the accumulation of the electron cloud below the instability threshold. We call such conditions the "good" vacuum regime. In application to SIS-100, the design pressure of 10⁻¹² mbar corresponds to the good vacuum regime. The transition to the bad vacuum

  15. Dynamics of magnetic clouds in interplanetary space

    International Nuclear Information System (INIS)

    Yeh, T.

    1987-01-01

    Magnetic clouds observed in interplanetary space may be regarded as extraneous bodies immersed in the magnetized medium of the solar wind. The interface between a magnetic cloud and its surrounding medium separates the internal and external magnetic fields. Polarization currents are induced in the peripheral layer to make the ambient magnetic field tangential. The motion of a magnetic cloud through the interplanetary medium may be partitioned into a translational motion of the magnetic cloud as a whole and an expansive motion of the volume relative to the axis of the magnetic cloud. The translational motion is determined by two kinds of forces, i.e., the gravitational force exerted by the Sun, and the hydromagnetic buoyancy force exerted by the surrounding medium. On the other hand, the expansive motion is determined by the pressure gradient sustaining the gross difference between the internal and external pressures and by the self-induced magnetic force that results from the interaction among the internal currents. The force resulting from the internal and external currents is a part of the hydromagnetic buoyancy force, manifested by a thermal stress caused by the inhomogeneity of the ambient magnetic pressure

  16. Dynamics of magnetic clouds in interplanetary space

    Science.gov (United States)

    Yeh, Tyan

    1987-09-01

    Magnetic clouds observed in interplanetary space may be regarded as extraneous bodies immersed in the magnetized medium of the solar wind. The interface between a magnetic cloud and its surrounding medium separates the internal and external magnetic fields. Polarization currents are induced in the peripheral layer to make the ambient magnetic field tangential. The motion of a magnetic cloud through the interplanetary medium may be partitioned into a translational motion of the magnetic cloud as a whole and an expansive motion of the volume relative to the axis of the magnetic cloud. The translational motion is determined by two kinds of forces, i.e., the gravitational force exerted by the Sun, and the hydromagnetic buoyancy force exerted by the surrounding medium. On the other hand, the expansive motion is determined by the pressure gradient sustaining the gross difference between the internal and external pressures and by the self-induced magnetic force that results from the interaction among the internal currents. The force resulting from the internal and external currents is a part of the hydromagnetic buoyancy force, manifested by a thermal stress caused by the inhomogeneity of the ambient magnetic pressure.

  17. Redefining reliability

    International Nuclear Information System (INIS)

    Paulson, S.L.

    1995-01-01

    Want to buy some reliability? The question would have been unthinkable in some markets served by the natural gas business even a few years ago, but in the new gas marketplace, industrial, commercial and even some residential customers have the opportunity to choose from among an array of options about the kind of natural gas service they need--and are willing to pay for. The complexities of this brave new world of restructuring and competition have sent the industry scrambling to find ways to educate and inform its customers about the increased responsibility they will have in determining the level of gas reliability they choose. This article discusses the new options and the new responsibilities of customers, the need for continuous education, and MidAmerican Energy Company's experiment in direct marketing of natural gas

  18. Beam Measurements of a CLOUD (Cosmics Leaving OUtdoor Droplets) Chamber

    CERN Document Server

    Kirkby, Jasper

    2001-01-01

    A striking correlation has recently been observed between global cloud cover and the flux of incident cosmic rays. The effect of natural variations in the cosmic ray flux is large, causing estimated changes in the Earth's energy radiation balance that are comparable to those attributed to greenhouse gases from the burning of fossil fuels since the Industrial Revolution. However a direct link between cosmic rays and cloud formation has not been unambiguously established. We therefore propose to experimentally measure cloud (water droplet) formation under controlled conditions in a test beam at CERN with a CLOUD chamber, duplicating the conditions prevailing in the troposphere. These data, which have never been previously obtained, will allow a detailed understanding of the possible effects of cosmic rays on clouds and confirm, or otherwise, a direct link between cosmic rays, global cloud cover and the Earth's climate. The measurements will, in turn, allow more reliable calculations to be made of the residual e...

  19. Route Assessment for Unmanned Aerial Vehicle Based on Cloud Model

    Directory of Open Access Journals (Sweden)

    Xixia Sun

    2014-01-01

    Full Text Available An integrated route assessment approach based on the cloud model is proposed in this paper, in which various sources of uncertainty are preserved and modeled by cloud theory. Firstly, a systematic criteria framework incorporating models for scoring subcriteria is developed. Then, the cloud model is introduced to represent linguistic variables, and the survivability probability histogram of each route is converted into normal clouds by cloud transformation, enabling both randomness and fuzziness in the assessment environment to be managed simultaneously. Finally, a new way to measure the similarity between two normal clouds, satisfying reflexivity, symmetry, transitivity, and overlapping, is proposed. Experimental results demonstrate that the proposed route assessment approach outperforms a fuzzy-logic-based assessment approach with regard to feasibility, reliability, and consistency with human thinking.
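
    In cloud-model theory a normal cloud is characterized by three numbers: expectation Ex, entropy En, and hyper-entropy He; "cloud drops" are generated by two nested normal draws. A minimal forward normal-cloud generator is sketched below (the parameters are illustrative, and the paper's specific similarity measure is not reproduced here):

        import numpy as np

        def normal_cloud_drops(ex, en, he, n, rng=None):
            """Forward normal cloud generator: returns (x, certainty) arrays.

            For each drop, draw En' ~ N(En, He^2), then x ~ N(Ex, En'^2); the
            certainty degree is exp(-(x - Ex)^2 / (2 En'^2)).
            """
            if rng is None:
                rng = np.random.default_rng()
            en_prime = rng.normal(en, he, n)
            x = rng.normal(ex, np.abs(en_prime), n)
            mu = np.exp(-((x - ex) ** 2) / (2.0 * en_prime ** 2))
            return x, mu

        # Illustrative linguistic variable "high survivability" as a normal cloud
        x, mu = normal_cloud_drops(ex=0.8, en=0.05, he=0.01, n=5)
        for xi, mi in zip(x, mu):
            print(f"drop x = {xi:.3f}, certainty = {mi:.3f}")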

  20. CLOUD STORAGE SERVICES

    OpenAIRE

    Yan, Cheng

    2017-01-01

    Cloud computing is a hot topic in recent research and applications, because it is widely used in various fields. Up to now, Google, Microsoft, IBM, Amazon and other well-known companies have proposed their own cloud computing applications and look upon cloud computing as one of the most important strategies for the future. Cloud storage is the lower layer of a cloud computing system, which supports the services of the layers above it. At the same time, it is an effective way to store and manage heavy...

  1. Cloud Computing Quality

    Directory of Open Access Journals (Sweden)

    Anamaria Şiclovan

    2013-02-01

    Full Text Available Cloud computing was, and will continue to be, a new way of providing Internet services and computing. This approach builds on many existing services, such as the Internet, grid computing and Web services. Cloud computing as a system aims to provide on-demand services that are more acceptable in price and infrastructure. It is precisely the transition from the computer to a service offered to consumers as a product delivered online. This paper describes the quality of cloud computing services, analyzing the advantages and characteristics they offer. It is a theoretical paper. Keywords: Cloud computing, QoS, quality of cloud computing

  2. The Magellanic clouds

    International Nuclear Information System (INIS)

    1989-01-01

    As the two galaxies nearest to our own, the Magellanic Clouds hold a special place in studies of the extragalactic distance scale, of stellar evolution and the structure of galaxies. In recent years, results from the South African Astronomical Observatory (SAAO) and elsewhere have shown that it is possible to begin understanding the three dimensional structure of the Clouds. Studies of Magellanic Cloud Cepheids have continued, both to investigate the three-dimensional structure of the Clouds and to learn more about Cepheids and their use as extragalactic distance indicators. Other research undertaken at SAAO includes studies on Nova LMC 1988 no 2 and red variables in the Magellanic Clouds

  3. Cloud Computing Bible

    CERN Document Server

    Sosinsky, Barrie

    2010-01-01

    The complete reference guide to the hot technology of cloud computing. Its potential for lowering IT costs makes cloud computing a major force for both IT vendors and users; it is expected to gain momentum rapidly with the launch of Office Web Apps later this year. Because cloud computing involves various technologies, protocols, platforms, and infrastructure elements, this comprehensive reference is just what you need if you'll be using or implementing cloud computing. Cloud computing offers significant cost savings by eliminating upfront expenses for hardware and software; its growing popularit

  4. CLOUD COMPUTING SECURITY

    Directory of Open Access Journals (Sweden)

    Ştefan IOVAN

    2016-05-01

    Full Text Available Cloud computing represents software applications offered as a service online, as well as the software and hardware components in the data center. In the case of services offered broadly to any type of client, we are dealing with a public cloud. In the other case, in which a cloud is exclusively available to one organization and not to the open public, it is considered a private cloud [1]. There is also a third type, called hybrid, in which a user or an organization might use services available in both the public and the private cloud. Among the main challenges of cloud computing are building trust and ensuring information privacy in every aspect of the service offered by cloud computing. The variety of existing standards, just like the lack of clarity in sustainability certification, is no real help in building trust. Question marks also appear regarding the efficiency of traditional security means when applied in the cloud domain. Besides the economic and technological advantages offered by the cloud, there are also some advantages in the security area if information is migrated to the cloud. Shared resources available in the cloud include monitoring, use of "best practices" and technology for an advanced security level, above all in the solutions adopted by the majority of medium and small businesses, big companies and even some governmental organizations [2].

  5. Interaction of clouds with the hot interstellar medium (HIM) and cosmic rays

    International Nuclear Information System (INIS)

    Voelk, H.J.

    1983-01-01

    The modification, by cosmic rays, of the interaction of interstellar clouds with the ambient HIM is considered. Small clouds should still evaporate and thereby exclude cosmic rays if they do so without cosmic rays. The possible mass accretion of massive clouds is reduced by the pressure of the compressed cosmic rays. The consequences for diffuse galactic gamma-ray emission are discussed. (orig.)

  6. Star formation in evolving molecular clouds

    Science.gov (United States)

    Völschow, M.; Banerjee, R.; Körtgen, B.

    2017-09-01

    Molecular clouds are the principal stellar nurseries of our universe; they thus remain a focus of both observational and theoretical studies. From observations, some of the key properties of molecular clouds are well known but many questions regarding their evolution and star formation activity remain open. While numerical simulations feature a large number and complexity of involved physical processes, this plethora of effects may hide the fundamentals that determine the evolution of molecular clouds and enable the formation of stars. Purely analytical models, on the other hand, tend to suffer from rough approximations or a lack of completeness, limiting their predictive power. In this paper, we present a model that incorporates central concepts of astrophysics as well as reliable results from recent simulations of molecular clouds and their evolutionary paths. Based on that, we construct a self-consistent semi-analytical framework that describes the formation, evolution, and star formation activity of molecular clouds, including a number of feedback effects to account for the complex processes inside those objects. The final equation system is solved numerically but at much lower computational expense than, for example, hydrodynamical descriptions of comparable systems. The model presented in this paper agrees well with a broad range of observational results, showing that molecular cloud evolution can be understood as an interplay between accretion, global collapse, star formation, and stellar feedback.

  7. Exploring clouds, weather, climate, and modeling using bilingual content and activities from the Windows to the Universe program and the Center for Multiscale Modeling of Atmospheric Processes

    Science.gov (United States)

    Foster, S. Q.; Johnson, R. M.; Randall, D.; Denning, S.; Russell, R.; Gardiner, L.; Hatheway, B.; Genyuk, J.; Bergman, J.

    2008-12-01

    The need for improving the representation of cloud processes in climate models has been one of the most important limitations of the reliability of climate-change simulations. Now in its third year, the National Science Foundation-funded Center for Multi-scale Modeling of Atmospheric Processes (CMMAP) at Colorado State University is addressing this problem through a revolutionary new approach to representing cloud processes on their native scales, including the cloud-scale interaction processes that are active in cloud systems. CMMAP has set ambitious education and human-resource goals to share basic information about the atmosphere, clouds, weather, climate, and modeling with diverse K-12 and public audiences through its affiliation with the Windows to the Universe (W2U) program at University Corporation for Atmospheric Research (UCAR). W2U web pages are written at three levels in English and Spanish. This information targets learners at all levels, educators, and families who seek to understand and share resources and information about the nature of weather and the climate system, and career role models from related research fields. This resource can also be helpful to educators who are building bridges in the classroom between the sciences, the arts, and literacy. Visitors to the W2U's CMMAP web portal can access a beautiful new clouds image gallery; information about each cloud type and the atmospheric processes that produce them; a Clouds in Art interactive; collections of weather-themed poetry, art, and myths; links to games and puzzles for children; and extensive classroom-ready resources and activities for K-12 teachers. Biographies of CMMAP scientists and graduate students are featured. Basic science concepts important to understanding the atmosphere, such as condensation, atmospheric pressure, lapse rate, and more have been developed, as well as 'microworlds' that enable students to interact with experimental tools while building fundamental knowledge

  8. Intercomparison of model simulations of mixed-phase clouds observed during the ARM Mixed-Phase Arctic Cloud Experiment. Part II: Multi-layered cloud

    Energy Technology Data Exchange (ETDEWEB)

    Morrison, H; McCoy, R B; Klein, S A; Xie, S; Luo, Y; Avramov, A; Chen, M; Cole, J; Falk, M; Foster, M; Genio, A D; Harrington, J; Hoose, C; Khairoutdinov, M; Larson, V; Liu, X; McFarquhar, G; Poellot, M; Shipway, B; Shupe, M; Sud, Y; Turner, D; Veron, D; Walker, G; Wang, Z; Wolf, A; Xu, K; Yang, F; Zhang, G

    2008-02-27

    Results are presented from an intercomparison of single-column and cloud-resolving model simulations of a deep, multi-layered, mixed-phase cloud system observed during the ARM Mixed-Phase Arctic Cloud Experiment. This cloud system was associated with strong surface turbulent sensible and latent heat fluxes as cold air flowed over the open Arctic Ocean, combined with a low pressure system that supplied moisture at mid-level. The simulations, performed by 13 single-column and 4 cloud-resolving models, generally overestimate the liquid water path and strongly underestimate the ice water path, although there is a large spread among the models. This finding is in contrast with results for the single-layer, low-level mixed-phase stratocumulus case in Part I of this study, as well as previous studies of shallow mixed-phase Arctic clouds, that showed an underprediction of liquid water path. The overestimate of liquid water path and underestimate of ice water path occur primarily when deeper mixed-phase clouds extending into the mid-troposphere were observed. These results suggest important differences in the ability of models to simulate Arctic mixed-phase clouds that are deep and multi-layered versus shallow and single-layered. In general, models with a more sophisticated, two-moment treatment of the cloud microphysics produce a somewhat smaller liquid water path that is closer to observations. The cloud-resolving models tend to produce a larger cloud fraction than the single-column models. The liquid water path and especially the cloud fraction have a large impact on the cloud radiative forcing at the surface, which is dominated by the longwave flux for this case.

  9. Searchable Encryption in Cloud Storage

    OpenAIRE

    Ren-Junn Hwang; Chung-Chien Lu; Jain-Shing Wu

    2014-01-01

    Cloud outsourced storage is one of the important services in cloud computing. Cloud users upload data to cloud servers to reduce the cost of managing data and maintaining hardware and software. To ensure data confidentiality, users can encrypt their files before uploading them to a cloud system. However, retrieving the target file exactly from among the encrypted files is difficult for the cloud server. This study proposes a protocol for performing multikeyword searches over encrypted cloud data by applying ...
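
    A common baseline for keyword search over encrypted data (not necessarily the multi-keyword protocol proposed in this study) is a keyed-hash index: the client derives deterministic trapdoors for keywords, so the server can match tokens without ever seeing plaintext. A minimal sketch:

        import hmac, hashlib

        def trapdoor(key: bytes, keyword: str) -> bytes:
            """Deterministic search token for a keyword under a secret key."""
            return hmac.new(key, keyword.lower().encode(), hashlib.sha256).digest()

        def build_index(key: bytes, docs: dict[str, list[str]]) -> dict[bytes, set[str]]:
            """Map each keyword trapdoor to the ids of documents containing it."""
            index: dict[bytes, set[str]] = {}
            for doc_id, words in docs.items():
                for w in words:
                    index.setdefault(trapdoor(key, w), set()).add(doc_id)
            return index

        key = b"client-secret-key"      # never leaves the client
        docs = {"f1": ["cloud", "storage"], "f2": ["cloud", "security"]}
        index = build_index(key, docs)  # uploaded alongside the encrypted files

        # Multi-keyword (conjunctive) query: intersect the posting sets.
        hits = index.get(trapdoor(key, "cloud"), set()) & \
               index.get(trapdoor(key, "security"), set())
        print(hits)                     # {'f2'}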

  10. Contributions of Different Cloud Types to Feedbacks and Rapid Adjustments in CMIP5*

    Energy Technology Data Exchange (ETDEWEB)

    Zelinka, Mark D. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States). Program for Climate Model Diagnosis and Intercomparison; Klein, Stephen A. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States). Program for Climate Model Diagnosis and Intercomparison; Taylor, Karl E. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States). Program for Climate Model Diagnosis and Intercomparison; Andrews, Timothy [Met Office Hadley Center, Exeter (United Kingdom); Webb, Mark J. [Met Office Hadley Center, Exeter (United Kingdom); Gregory, Jonathan M. [Univ. of Reading, Exeter (United Kingdom). National Center for Atmospheric Science; Forster, Piers M. [Univ. of Leeds (United Kingdom)

    2013-07-01

    Using five climate model simulations of the response to an abrupt quadrupling of CO2, the authors perform the first simultaneous model intercomparison of cloud feedbacks and rapid radiative adjustments with cloud masking effects removed, partitioned among changes in cloud types and gross cloud properties. After CO2 quadrupling, clouds exhibit a rapid reduction in fractional coverage, cloud-top pressure, and optical depth, with each contributing equally to a 1.1 W m-2 net cloud radiative adjustment, primarily from shortwave radiation. Rapid reductions in midlevel clouds and optically thick clouds are important in reducing planetary albedo in every model. As the planet warms, clouds become fewer, higher, and thicker, and the global mean net cloud feedback is positive in all but one model and results primarily from increased trapping of longwave radiation. As was true for earlier models, high cloud changes are the largest contributor to intermodel spread in longwave and shortwave cloud feedbacks, but low cloud changes are the largest contributor to the mean and spread in net cloud feedback. The importance of the negative optical depth feedback relative to the amount feedback at high latitudes is even more marked than in earlier models. Furthermore, the authors show that the negative longwave cloud adjustment inferred in previous studies is primarily caused by a 1.3 W m-2 cloud masking of CO2 forcing. Properly accounting for cloud masking increases the net cloud feedback by 0.3 W m-2 K-1, whereas accounting for rapid adjustments reduces the ensemble mean net cloud feedback by 0.14 W m-2 K-1 through a combination of smaller positive cloud amount and altitude feedbacks and larger negative optical depth feedbacks.

  11. Do Clouds Compute? A Framework for Estimating the Value of Cloud Computing

    Science.gov (United States)

    Klems, Markus; Nimis, Jens; Tai, Stefan

    On-demand provisioning of scalable and reliable compute services, along with a cost model that charges consumers based on actual service usage, has been an objective in distributed computing research and industry for a while. Cloud Computing promises to deliver on this objective: consumers are able to rent infrastructure in the Cloud as needed, deploy applications and store data, and access them via Web protocols on a pay-per-use basis. The acceptance of Cloud Computing, however, depends on the ability of Cloud Computing providers and consumers to implement a model for business value co-creation. Therefore, a systematic approach to measuring the costs and benefits of Cloud Computing is needed. In this paper, we discuss the need for valuation of Cloud Computing, identify key components, and structure these components in a framework. The framework assists decision makers in estimating Cloud Computing costs and in comparing these costs to those of conventional IT solutions. We demonstrate by means of representative use cases how our framework can be applied to real world scenarios.
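
    As a rough illustration of the kind of cost comparison such a framework supports, the sketch below contrasts pay-per-use cloud costs with a fixed-infrastructure alternative; every figure is hypothetical and stands in for inputs the framework would estimate.

```python
def conventional_cost(capex: float, opex_per_month: float, months: int) -> float:
    """Fixed infrastructure: up-front hardware plus steady operations."""
    return capex + opex_per_month * months

def cloud_cost(usage_hours_per_month: float, rate_per_hour: float,
               months: int) -> float:
    """Pay-per-use: cost scales with actual service usage."""
    return usage_hours_per_month * rate_per_hour * months

# Hypothetical figures: $50k of servers + $1k/month vs 400 h/month at $2/h.
for months in (12, 24, 36):
    conv = conventional_cost(50_000, 1_000, months)
    cld = cloud_cost(400, 2.0, months)
    print(f"{months:>2} months: conventional ${conv:,.0f} vs cloud ${cld:,.0f}")
```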

  12. Enterprise Cloud Adoption - Cloud Maturity Assessment Model

    OpenAIRE

    Conway, Gerry; Doherty, Eileen; Carcary, Marian; Crowley, Catherine

    2017-01-01

    The introduction and use of cloud computing by an organization has the promise of significant benefits that include reduced costs, improved services, and a pay-per-use model. Organizations that successfully harness these benefits will potentially have a distinct competitive edge, due to their increased agility and flexibility to rapidly respond to an ever changing and complex business environment. However, as cloud technology is a relatively new ph...

  13. Star clouds of Magellan

    International Nuclear Information System (INIS)

    Tucker, W.

    1981-01-01

    The Magellanic Clouds are two irregular galaxies belonging to the Local Group, to which the Milky Way also belongs. By studying the Clouds, astronomers hope to gain insight into the origin and composition of the Milky Way. The overall structure and dynamics of the Clouds are clearest when studied in the radio region of the spectrum. One benefit of directly observing stellar luminosities in the Clouds has been the discovery of the period-luminosity relation. The Clouds are also a splendid laboratory for studying stellar evolution. It is believed that both Clouds may be in a very early stage in the development of a regular, symmetric galaxy. This raises a paradox, because some of the stars in the star clusters of the Clouds are as old as the oldest stars in our galaxy; an explanation for this is given. The low velocity of the Clouds with respect to the center of the Milky Way shows they must be bound to it by gravity. Theories are given on how the Magellanic Clouds became associated with the Galaxy. According to current ideas, the Clouds' orbits will decay and they will spiral into the Galaxy.

  14. Cloud Computing Governance Lifecycle

    Directory of Open Access Journals (Sweden)

    Soňa Karkošková

    2016-06-01

    Externally provisioned cloud services enable flexible and on-demand sourcing of IT resources. Cloud computing introduces new challenges, such as the need for business process redefinition, the establishment of specialized governance and management, new organizational structures and relationships with external providers, and the management of new types of risk arising from dependency on external providers. There is a general consensus that cloud computing brings many benefits in addition to challenges, but it is unclear how to achieve them. Cloud computing governance helps to create business value by obtaining benefits from the use of cloud computing services while optimizing investment and risk. The challenge organizations face in governing cloud services is how to design and implement cloud computing governance so as to gain the expected benefits. This paper aims to provide guidance on the implementation activities of the proposed Cloud computing governance lifecycle from the cloud consumer perspective. The proposed model is based on the SOA Governance Framework and consists of a lifecycle for the implementation and continuous improvement of the cloud computing governance model.

  15. Explicit prediction of ice clouds in general circulation models

    Science.gov (United States)

    Kohler, Martin

    1999-11-01

    Although clouds play extremely important roles in the radiation budget and hydrological cycle of the Earth, there are large quantitative uncertainties in our understanding of their generation, maintenance and decay mechanisms, representing major obstacles in the development of reliable prognostic cloud water schemes for General Circulation Models (GCMs). Recognizing their relative neglect in the past, both observationally and theoretically, this work places special focus on ice clouds. A recent version of the UCLA - University of Utah Cloud Resolving Model (CRM) that includes interactive radiation is used to perform idealized experiments to study ice cloud maintenance and decay mechanisms under various conditions in terms of: (1) background static stability, (2) background relative humidity, (3) rate of cloud ice addition over a fixed initial time-period and (4) radiation: daytime, nighttime and no-radiation. Radiation is found to have major effects on the lifetime of layer clouds. Optically thick ice clouds decay significantly more slowly than expected from pure microphysical crystal fall-out (τ_cld = 0.9-1.4 h, as opposed to τ_micro = 0.5-0.7 h with no air motion). This is explained by the upward turbulent fluxes of water induced by IR destabilization, which partially balance the downward transport of water by snowfall. Solar radiation further slows the ice-water decay by destruction of the inversion above cloud-top and the resulting upward transport of water. Optically thin ice clouds, on the other hand, may exhibit even longer lifetimes (>1 day) in the presence of radiational cooling. The resulting saturation mixing ratio reduction provides for a constant cloud ice source. These CRM results are used to develop a prognostic cloud water scheme for the UCLA-GCM. The framework is based on the bulk water phase model of Ose (1993). The model predicts cloud liquid water and cloud ice separately, and is extended to split the ice phase into suspended cloud ice (predicted

  16. THE CALIFORNIA MOLECULAR CLOUD

    International Nuclear Information System (INIS)

    Lada, Charles J.; Lombardi, Marco; Alves, Joao F.

    2009-01-01

    We present an analysis of wide-field infrared extinction maps of a region in Perseus just north of the Taurus-Auriga dark cloud complex. From this analysis we have identified a massive, nearby, but previously unrecognized, giant molecular cloud (GMC). Both a uniform foreground star density and measurements of the cloud's velocity field from CO observations indicate that this cloud is likely a coherent structure at a single distance. From comparison of foreground star counts with Galactic models, we derive a distance of 450 ± 23 pc to the cloud. At this distance the cloud extends over roughly 80 pc and has a mass of ∼10^5 M_sun, rivaling the Orion (A) molecular cloud as the largest and most massive GMC in the solar neighborhood. Although surprisingly similar in mass and size to the more famous Orion molecular cloud (OMC), the newly recognized cloud displays significantly less star formation activity, with more than an order of magnitude fewer young stellar objects than found in the OMC, suggesting that both the level of star formation and perhaps the star formation rate in this cloud are an order of magnitude or more lower than in the OMC. Analysis of extinction maps of both clouds shows that the new cloud contains only 10% the amount of high extinction (A_K > 1.0 mag) material as is found in the OMC. This, in turn, suggests that the level of star formation activity and perhaps the star formation rate in these two clouds may be directly proportional to the total amount of high extinction material and presumably high density gas within them, and that there might be a density threshold for star formation on the order of n(H_2) ∼ a few x 10^4 cm^-3.

  17. Information content of OCO-2 oxygen A-band channels for retrieving marine liquid cloud properties

    Science.gov (United States)

    Richardson, Mark; Stephens, Graeme L.

    2018-03-01

    Information content analysis is used to select channels for a marine liquid cloud retrieval using the high-spectral-resolution oxygen A-band instrument on NASA's Orbiting Carbon Observatory-2 (OCO-2). Desired retrieval properties are cloud optical depth, cloud-top pressure and cloud pressure thickness, which is the geometric thickness expressed in hectopascals. Based on information content criteria we select a micro-window of 75 of the 853 functioning OCO-2 channels spanning 763.5-764.6 nm and perform a series of synthetic retrievals with perturbed initial conditions. We estimate posterior errors from the sample standard deviations and obtain ±0.75 in optical depth and ±12.9 hPa in both cloud-top pressure and cloud pressure thickness, although removing the 10% of samples with the highest χ² reduces the posterior error in cloud-top pressure to ±2.9 hPa and in cloud pressure thickness to ±2.5 hPa. The application of this retrieval to real OCO-2 measurements is briefly discussed, along with its limitations; the greatest caution is urged regarding the assumption of a single homogeneous cloud layer, which is often, but not always, a reasonable approximation for marine boundary layer clouds.
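
    The channel-selection step can be illustrated with a greedy information-content loop in the style of Rodgers' optimal estimation. This is a generic sketch, not the authors' code; the Jacobian, measurement error variances and prior covariance below are random placeholders.

```python
import numpy as np

def sequential_channel_selection(K, se_var, Sa, n_select):
    """Greedy channel pick by Shannon information content (Rodgers-style).
    K: (m, n) Jacobian; se_var: (m,) measurement error variances;
    Sa: (n, n) prior covariance of the retrieved state."""
    selected, S = [], Sa.copy()
    for _ in range(n_select):
        best, best_gain = None, -np.inf
        for i in range(K.shape[0]):
            if i in selected:
                continue
            k = K[i:i + 1, :]
            # H = 0.5 ln(1 + k S k^T / sigma_i^2): entropy reduction from channel i
            gain = 0.5 * np.log(1.0 + (k @ S @ k.T)[0, 0] / se_var[i])
            if gain > best_gain:
                best, best_gain = i, gain
        k = K[best:best + 1, :]
        # Rank-one posterior covariance update after adding the chosen channel
        S = S - (S @ k.T @ k @ S) / (se_var[best] + (k @ S @ k.T)[0, 0])
        selected.append(best)
    return selected, S

rng = np.random.default_rng(0)
K = rng.normal(size=(853, 3))               # 853 channels x 3 retrieved quantities
Sa = np.diag([4.0, 100.0, 100.0])           # prior variances: tau, CTP, thickness
channels, S_post = sequential_channel_selection(K, np.full(853, 1e-2), Sa, 75)
print(channels[:5], np.sqrt(np.diag(S_post)))   # chosen channels, posterior sigmas
```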

  18. Comparison of cloud top heights derived from FY-2 meteorological satellites with heights derived from ground-based millimeter wavelength cloud radar

    Science.gov (United States)

    Wang, Zhe; Wang, Zhenhui; Cao, Xiaozhong; Tao, Fa

    2018-01-01

    Clouds are currently observed by both ground-based and satellite remote sensing techniques. Each technique has its own strengths and weaknesses depending on the observation method, instrument performance and the methods used for retrieval. It is important to study synergistic cloud measurements to improve the reliability of the observations and to verify the different techniques. The FY-2 geostationary orbiting meteorological satellites continuously observe the sky over China. Their cloud top temperature product can be processed to retrieve the cloud top height (CTH). The ground-based millimeter wavelength cloud radar can acquire information about the vertical structure of clouds, such as the cloud base height (CBH), CTH and cloud thickness, and can continuously monitor changes in the vertical profiles of clouds. The CTHs were retrieved using both cloud top temperature data from the FY-2 satellites and the cloud radar reflectivity data for the same time period (June 2015 to May 2016), and the resulting datasets were compared in order to evaluate the accuracy of CTH retrievals using FY-2 satellites. The results show that the concordance rate of cloud detection between the two datasets was 78.1%. Higher consistencies were obtained for thicker clouds with larger echo intensity and for more continuous clouds. The average difference in CTH between the two techniques was 1.46 km. The difference in CTH between low- and mid-level clouds was less than that for high-level clouds. The attenuation threshold of the cloud radar for rainfall was 0.2 mm/min; rainfall intensities below this threshold had no effect on the CTH. The satellite CTH can be used to compensate for the attenuation error in the cloud radar data.
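
    The headline statistics of such a comparison (the concordance rate of cloud detection and the mean CTH difference) reduce to a few lines of array arithmetic. A minimal sketch, assuming matched time series with NaN marking "no cloud detected"; the toy numbers are illustrative only.

```python
import numpy as np

def compare_cth(cth_sat, cth_radar):
    """Summarize agreement between satellite and radar cloud top heights (km).
    NaN marks times where an instrument detected no cloud."""
    sat_cloud = ~np.isnan(cth_sat)
    radar_cloud = ~np.isnan(cth_radar)
    concordance = float(np.mean(sat_cloud == radar_cloud))  # both cloudy or both clear
    both = sat_cloud & radar_cloud
    diff = cth_sat[both] - cth_radar[both]
    return {"concordance_rate": concordance,
            "mean_abs_diff_km": float(np.mean(np.abs(diff))),
            "bias_km": float(np.mean(diff))}

# Toy matched hourly series; NaN = no cloud detected.
sat = np.array([8.2, np.nan, 5.1, 10.4, np.nan])
radar = np.array([7.0, np.nan, 4.8, 9.1, 6.0])
print(compare_cth(sat, radar))
```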

  19. Jovian cloud structure from 5-µm images

    Science.gov (United States)

    Ortiz, J. L.; Moreno, F.; Molina, A.; Roos-Serote, M.; Orton, G. S.

    1999-09-01

    Most radiative transfer studies place the cloud clearings responsible for the 5-µm bright areas at pressure levels greater than 1.5 bar, whereas the low-albedo clouds are placed at lower pressure levels, in the so-called ammonia cloud. If this picture is correct, and assuming that the strong vertical shear of the zonal wind detected by the Galileo Entry Probe exists at all latitudes in Jupiter, the bright areas at 5 µm should drift faster than the dark clouds, which is not observed. At the Galileo Probe entry latitude this can be explained by a wave, but this is not a likely explanation for all regions where the anticorrelation between 5-µm brightness and red-nIR reflectivity is observed. Therefore, either the vertical zonal wind shears are not global or cloud clearings and dark clouds are located at the same pressure level. We have developed a multiple scattering radiative transfer code to model the limb-darkening of several jovian features derived from IRTF 4.8-µm images, in order to retrieve information on the cloud levels. The limb darkening coefficients range from 1.4 at hot spots to 0.58 at the Equatorial Region. We also find that reflected light is dominant over thermal emission in the Equatorial Region, as already pointed out by other investigators. Preliminary results from our code tend to favor the idea that the ammonia cloud is a very high-albedo cloud with little influence on the contrast seen in the red and nIR, and that a deeper cloud at P > 1.5 bar can be responsible for the cloud clearings and for the low-albedo features simultaneously. This research was supported by the Comision Interministerial de Ciencia y Tecnologia under contract ESP96-0623.

  20. New photoionization models of intergalactic clouds

    Science.gov (United States)

    Donahue, Megan; Shull, J. M.

    1991-01-01

    New photoionization models of optically thin low-density intergalactic gas at constant pressure, photoionized by QSOs, are presented. All ion stages of H, He, C, N, O, Si, and Fe, plus H2, are modeled, and the column density ratios of clouds at specified values of the ionization parameter n_gamma/n_H and cloud metallicity are predicted. If Ly-alpha clouds are much cooler than the previously assumed value, 30,000 K, the ionization parameter must be very low, even with the cooling contribution of a trace component of molecules. If the clouds cool below 6000 K, their final equilibrium must be below 3000 K, owing to the lack of a stable phase between 6000 and 3000 K. If it is assumed that the clouds are being irradiated by an EUV power-law continuum typical of QSOs, with J_0 = 10^-21 erg s^-1 cm^-2 Hz^-1, typical cloud thicknesses along the line of sight are derived that are much smaller than would be expected from shocks, thermal instabilities, or gravitational collapse.

  1. Encyclopedia of cloud computing

    CERN Document Server

    Bojanova, Irena

    2016-01-01

    The Encyclopedia of Cloud Computing provides IT professionals, educators, researchers and students with a compendium of cloud computing knowledge. Authored by a spectrum of subject matter experts in industry and academia, this unique publication, in a single volume, covers a wide range of cloud computing topics, including technological trends and developments, research opportunities, best practices, standards, and cloud adoption. Providing multiple perspectives, it also addresses questions that stakeholders might have in the context of development, operation, management, and use of clouds. Furthermore, it examines cloud computing's impact now and in the future. The encyclopedia presents 56 chapters logically organized into 10 sections. Each chapter covers a major topic/area with cross-references to other chapters and contains tables, illustrations, side-bars as appropriate. Furthermore, each chapter presents its summary at the beginning and backend material, references and additional resources for further i...

  2. An Examination of the Nature of Global MODIS Cloud Regimes

    Science.gov (United States)

    Oreopoulos, Lazaros; Cho, Nayeong; Lee, Dongmin; Kato, Seiji; Huffman, George J.

    2014-01-01

    We introduce global cloud regimes (previously also referred to as "weather states") derived from cloud retrievals that use measurements by the Moderate Resolution Imaging Spectroradiometer (MODIS) instrument aboard the Aqua and Terra satellites. The regimes are obtained by applying clustering analysis on joint histograms of retrieved cloud top pressure and cloud optical thickness. By employing a compositing approach on data sets from satellites and other sources, we examine regime structural and thermodynamical characteristics. We establish that the MODIS cloud regimes tend to form in distinct dynamical and thermodynamical environments and have diverse profiles of cloud fraction and water content. When compositing radiative fluxes from the Clouds and the Earth's Radiant Energy System instrument and surface precipitation from the Global Precipitation Climatology Project, we find that regimes with a radiative warming effect on the atmosphere also produce the largest implied latent heat. Taken as a whole, the results of the study corroborate the usefulness of the cloud regime concept, reaffirm the fundamental nature of the regimes as appropriate building blocks for cloud system classification, clarify their association with standard cloud types, and underscore their distinct radiative and hydrological signatures.
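
    The regime derivation described above is essentially k-means clustering of joint histograms. A minimal sketch with synthetic stand-in data, assuming ISCCP-style 7 cloud-top-pressure x 6 optical-thickness bins (the actual MODIS bin definitions and cluster count may differ):

```python
import numpy as np
from sklearn.cluster import KMeans

# Each sample is one grid box/day: a joint histogram of cloud top pressure
# (7 bins) x cloud optical thickness (6 bins), flattened to 42 values that
# sum to the grid box cloud fraction. Synthetic stand-in data here.
rng = np.random.default_rng(0)
histograms = rng.dirichlet(np.ones(42), size=10_000)

kmeans = KMeans(n_clusters=8, n_init=10, random_state=0).fit(histograms)
labels = kmeans.labels_                                  # regime of each sample
centroids = kmeans.cluster_centers_.reshape(-1, 7, 6)    # mean CTP-TAU histograms
print(np.bincount(labels) / labels.size)                 # regime frequencies of occurrence
```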

  3. Parameterizing Size Distribution in Ice Clouds

    Energy Technology Data Exchange (ETDEWEB)

    DeSlover, Daniel; Mitchell, David L.

    2009-09-25

    An outstanding problem that contributes considerable uncertainty to Global Climate Model (GCM) predictions of future climate is the characterization of ice particle sizes in cirrus clouds. Recent parameterizations of ice cloud effective diameter differ by a factor of three, which, for overcast conditions, often translates to changes in outgoing longwave radiation (OLR) of 55 W m-2 or more. Much of this uncertainty in cirrus particle sizes is related to the problem of ice particle shattering during in situ sampling of the ice particle size distribution (PSD). Ice particles often shatter into many smaller ice fragments upon collision with the rim of the probe inlet tube. These small ice artifacts are counted as real ice crystals, resulting in anomalously high concentrations of small ice crystals (D < 100 µm) and underestimates of the mean and effective size of the PSD. Half of the cirrus cloud optical depth calculated from these in situ measurements can be due to this shattering phenomenon. Another challenge is the determination of ice and liquid water amounts in mixed phase clouds. Mixed phase clouds in the Arctic contain mostly liquid water, and the presence of ice is important for determining their lifecycle. Colder high clouds between -20 and -36 °C may also be mixed phase, but in this case their condensate is mostly ice with low levels of liquid water. Rather than affecting their lifecycle, the presence of liquid dramatically affects the cloud optical properties, which affects cloud-climate feedback processes in GCMs. This project has made advancements in solving both of these problems. Regarding the first problem, PSD in ice clouds are uncertain due to the inability to reliably measure the concentrations of the smallest crystals (D < 100 µm), known as the “small mode”. Rather than using in situ probe measurements aboard aircraft, we employed a treatment of ice

  4. Considerations for Cloud Security Operations

    OpenAIRE

    Cusick, James

    2016-01-01

    Information Security in Cloud Computing environments is explored. Cloud Computing is presented, security needs are discussed, and mitigation approaches are listed. Topics covered include Information Security, Cloud Computing, Private Cloud, Public Cloud, SaaS, PaaS, IaaS, ISO 27001, OWASP, Secure SDLC.

  5. Evaluating statistical cloud schemes

    OpenAIRE

    Grützun, Verena; Quaas, Johannes; Morcrette , Cyril J.; Ament, Felix

    2015-01-01

    Statistical cloud schemes with prognostic probability distribution functions have become more important in atmospheric modeling, especially since they are in principle scale adaptive and capture cloud physics in more detail. While in theory the schemes have a great potential, their accuracy is still questionable. High-resolution three-dimensional observational data of water vapor and cloud water, which could be used for testing them, are missing. We explore the potential of ground-based re...

  6. Cloud Computing Governance Lifecycle

    OpenAIRE

    Soňa Karkošková; George Feuerlicht

    2016-01-01

    Externally provisioned cloud services enable flexible and on-demand sourcing of IT resources. Cloud computing introduces new challenges, such as the need for business process redefinition, the establishment of specialized governance and management, new organizational structures and relationships with external providers, and the management of new types of risk arising from dependency on external providers. There is a general consensus that cloud computing in addition to challenges brings many benefits but it is uncle...

  7. Security in cloud computing

    OpenAIRE

    Moreno Martín, Oriol

    2016-01-01

    Security in Cloud Computing is becoming a challenge for next generation Data Centers. This project will focus on investigating new security strategies for Cloud Computing systems. Cloud Computing is a recent paradigm to deliver services over the Internet. Businesses grow drastically because of it. Researchers focus their work on it. The rapid access to flexible and low cost IT resources in an on-demand fashion allows users to avoid planning ahead for provisioning, and enterprises to save money ...

  8. System reliability of corroding pipelines

    International Nuclear Information System (INIS)

    Zhou Wenxing

    2010-01-01

    A methodology is presented in this paper to evaluate the time-dependent system reliability of a pipeline segment that contains multiple active corrosion defects and is subjected to stochastic internal pressure loading. The pipeline segment is modeled as a series system with three distinctive failure modes due to corrosion, namely small leak, large leak and rupture. The internal pressure is characterized as a simple discrete stochastic process that consists of a sequence of independent and identically distributed random variables each acting over a period of one year. The magnitude of a given sequence follows the annual maximum pressure distribution. The methodology is illustrated through a hypothetical example. Furthermore, the impact of the spatial variability of the pressure loading and pipe resistances associated with different defects on the system reliability is investigated. The analysis results suggest that the spatial variability of pipe properties has a negligible impact on the system reliability. On the other hand, the spatial variability of the internal pressure, initial defect sizes and defect growth rates can have a significant impact on the system reliability.
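
    The paper's formulation lends itself to a Monte Carlo sketch: each trial draws a sequence of iid annual-maximum pressures and a resistance that degrades as a corrosion defect grows, and the segment fails the first year pressure exceeds resistance. All distributions and parameters below are illustrative, not those of the study, and the three failure modes are collapsed into a single limit state for brevity.

```python
import numpy as np

def pipeline_failure_probability(n_sims=100_000, years=20, seed=42):
    rng = np.random.default_rng(seed)
    # Annual-maximum internal pressure (MPa), iid across years: Gumbel.
    pressure = rng.gumbel(loc=10.0, scale=0.6, size=(n_sims, years))
    # Initial burst resistance (MPa) and its linear loss as the defect grows.
    resistance0 = rng.normal(15.0, 1.0, size=(n_sims, 1))
    growth = rng.lognormal(mean=-2.0, sigma=0.5, size=(n_sims, 1))  # MPa/year
    resistance = resistance0 - growth * np.arange(1, years + 1)
    # Series-system view: failure in the first year pressure exceeds resistance.
    failed_by = np.cumsum(pressure > resistance, axis=1) > 0
    return failed_by.mean(axis=0)   # cumulative failure probability per year

pf = pipeline_failure_probability()
for yr in (5, 10, 20):
    print(f"P(failure within {yr:2d} years) = {pf[yr - 1]:.4f}")
```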

  9. A piezo-bar pressure probe

    Science.gov (United States)

    Friend, W. H.; Murphy, C. L.; Shanfield, I.

    1967-01-01

    The piezo-bar pressure probe measures the impact velocity or pressure of a moving debris cloud. It measures pressures up to 200,000 psi, and peak pressures may be recorded with a total pulse duration between 5 and 65 µs.

  10. An Introduction To Reliability

    International Nuclear Information System (INIS)

    Park, Kyoung Su

    1993-08-01

    This book introduces reliability, covering the definition of reliability, reliability requirements, the system life cycle and reliability, and reliability and failure rates (including reliability characteristics, chance failures, failure rates that change over time, failure modes, and replacement). It also covers reliability in engineering design, reliability testing under failure-rate assumptions, the plotting of reliability data, prediction of system reliability, system maintenance, and failure topics including failure relays and the analysis of system safety.

  11. CLOUD TECHNOLOGY IN EDUCATION

    Directory of Open Access Journals (Sweden)

    Alexander N. Dukkardt

    2014-01-01

    This article is devoted to a review of the main features of cloud computing that can be used in education. Particular attention is paid to those learning and support tasks that can be greatly improved by the use of cloud services. Several ways to implement this approach are proposed, based on widely accepted models of providing cloud services. Nevertheless, the authors have not ignored the currently existing problems of cloud technologies, identifying the most dangerous risks and their impact on the core business processes of the university.

  12. Cloud Computing: An Overview

    Science.gov (United States)

    Qian, Ling; Luo, Zhiguo; Du, Yujian; Guo, Leitao

    In order to support the maximum number of users and elastic services with the minimum resources, Internet service providers invented cloud computing. Within a few years, the emerging cloud computing has become the hottest technology. From the publication of core papers by Google starting in 2003, to the commercialization of Amazon EC2 in 2006, and to the service offering of AT&T Synaptic Hosting, cloud computing has evolved from internal IT systems to a public service, from a cost-saving tool to a revenue generator, and from ISPs to telecom. This paper introduces the concept, history, pros and cons of cloud computing, as well as the value chain and standardization efforts.

  13. Genomics With Cloud Computing

    Directory of Open Access Journals (Sweden)

    Sukhamrit Kaur

    2015-04-01

    Genomics is the study of the genome, which produces large amounts of data requiring large storage and computational power. These issues are addressed by cloud computing, which provides various cloud platforms for genomics. These platforms offer many services to users, such as easy access to data, easy sharing and transfer, storage in the hundreds of terabytes, and more computational power. Some cloud platforms are Google Genomics, DNAnexus and Globus Genomics. Features that cloud computing brings to genomics include easy access to and sharing of data, security of data, and lower cost for resources, but there are also some demerits, such as the large time needed to transfer data and limited network bandwidth.

  14. Review of Cloud Computing and existing Frameworks for Cloud adoption

    OpenAIRE

    Chang, Victor; Walters, Robert John; Wills, Gary

    2014-01-01

    This paper presents a selected review for Cloud Computing and explains the benefits and risks of adopting Cloud Computing in a business environment. Although all the risks identified may be associated with two major Cloud adoption challenges, a framework is required to support organisations as they begin to use Cloud and minimise risks of Cloud adoption. Eleven Cloud Computing frameworks are investigated and a comparison of their strengths and limitations is made; the result of the comparison...

  15. +Cloud: An Agent-Based Cloud Computing Platform

    OpenAIRE

    González, Roberto; Hernández de la Iglesia, Daniel; de la Prieta Pintado, Fernando; Gil González, Ana Belén

    2017-01-01

    Cloud computing is revolutionizing the services provided through the Internet, and is continually adapting itself in order to maintain the quality of its services. This study presents the platform +Cloud, which proposes a cloud environment for storing information and files by following the cloud paradigm. This study also presents Warehouse 3.0, a cloud-based application that has been developed to validate the services provided by +Cloud.

  16. Oscillation of large air bubble cloud

    Energy Technology Data Exchange (ETDEWEB)

    Bae, Y.Y.; Kim, H.Y.; Park, J.K. [Korea Atomic Energy Research Inst., Daejeon (Korea, Republic of)

    2001-07-01

    The behavior of a large air bubble cloud, which is generated by the air discharged from a perforated sparger, is analyzed by solving Rayleigh-Plesset equation, energy equations and energy balance equation. The equations are solved by Runge-Kutta integration and MacCormack finite difference method. Initial conditions such as driving pressure, air volume, and void fraction strongly affect the bubble pressure amplitude and oscillation frequency. The pool temperature has a strong effect on the oscillation frequency and a negligible effect on the pressure amplitude. The polytropic constant during the compression and expansion processes of individual bubbles ranges from 1.0 to 1.4, which may be attributed to the fact that small bubbles oscillated in frequencies different from their resonance. The temperature of the bubble cloud rapidly approaches the ambient temperature, as is expected from the polytropic constants being between 1.0 and 1.4. (authors)
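
    A single-bubble version of the governing equation can be integrated with an off-the-shelf Runge-Kutta solver, in the spirit of (though much simpler than) the coupled system solved in the paper. Viscosity, surface tension and the energy equations are neglected in this sketch, and all parameter values are hypothetical.

```python
import numpy as np
from scipy.integrate import solve_ivp

rho = 998.0          # water density, kg/m^3
p_inf = 101_325.0    # ambient pressure, Pa
R0 = 0.05            # initial bubble radius, m
p0 = 3.0 * p_inf     # initial (driving) bubble pressure, Pa -- hypothetical
gamma = 1.2          # polytropic constant; the paper reports 1.0-1.4

def rayleigh_plesset(t, y):
    """R*Rddot + 1.5*Rdot^2 = (p_bubble - p_inf)/rho, with a polytropic gas law;
    viscosity and surface tension are neglected in this sketch."""
    R, Rdot = y
    p_bubble = p0 * (R0 / R) ** (3.0 * gamma)
    Rddot = (p_bubble - p_inf) / (rho * R) - 1.5 * Rdot**2 / R
    return [Rdot, Rddot]

# Default RK45 integration over 0.1 s of oscillation.
sol = solve_ivp(rayleigh_plesset, (0.0, 0.1), [R0, 0.0], max_step=1e-5, rtol=1e-8)
print(f"radius oscillates between {sol.y[0].min():.4f} and {sol.y[0].max():.4f} m")
```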

  17. Oscillation of large air bubble cloud

    International Nuclear Information System (INIS)

    Bae, Y.Y.; Kim, H.Y.; Park, J.K.

    2001-01-01

    The behavior of a large air bubble cloud, which is generated by the air discharged from a perforated sparger, is analyzed by solving Rayleigh-Plesset equation, energy equations and energy balance equation. The equations are solved by Runge-Kutta integration and MacCormack finite difference method. Initial conditions such as driving pressure, air volume, and void fraction strongly affect the bubble pressure amplitude and oscillation frequency. The pool temperature has a strong effect on the oscillation frequency and a negligible effect on the pressure amplitude. The polytropic constant during the compression and expansion processes of individual bubbles ranges from 1.0 to 1.4, which may be attributed to the fact that small bubbles oscillated in frequencies different from their resonance. The temperature of the bubble cloud rapidly approaches the ambient temperature, as is expected from the polytropic constants being between 1.0 and 1.4. (authors)

  18. Lost in Cloud

    Science.gov (United States)

    Maluf, David A.; Shetye, Sandeep D.; Chilukuri, Sri; Sturken, Ian

    2012-01-01

    Cloud computing can reduce cost significantly because businesses can share computing resources. In recent years, Small and Medium Businesses (SMB) have used the Cloud effectively for cost saving and for sharing IT expenses. With the success of SMBs, many perceive that larger enterprises ought to move into the Cloud environment as well. Government agencies' stove-piped environments are being considered as candidates for potential use of the Cloud, either as an enterprise entity or as pockets of small communities. Cloud Computing is the delivery of computing as a service rather than as a product, whereby shared resources, software, and information are provided to computers and other devices as a utility over a network. Underneath the offered services there exists a modern infrastructure, the cost of which is often spread across its services or its investors. As NASA is an enterprise-class organization, a shift has been occurring, as at other enterprises, toward perceiving its IT services as candidates for Cloud services. This paper discusses market trends in cloud computing from an enterprise angle and then addresses the topic of Cloud Computing for NASA in two possible forms. First, in the form of a public Cloud to support it as an enterprise, as well as to share it with the commercial sector and the public at large. Second, as a private Cloud wherein the infrastructure is operated solely for NASA, whether managed internally or by a third party and hosted internally or externally. The paper addresses the strengths and weaknesses of both paradigms of public and private Clouds, in both internally and externally operated settings. The content of the paper is from a NASA perspective but is applicable to any large enterprise with thousands of employees and contractors.

  19. Partial storage optimization and load control strategy of cloud data centers.

    Science.gov (United States)

    Al Nuaimi, Klaithem; Mohamed, Nader; Al Nuaimi, Mariam; Al-Jaroodi, Jameela

    2015-01-01

    We present a novel approach to solve cloud storage issues and provide a fast load balancing algorithm. Our approach is based on partitioning and concurrent dual-direction download of files from multiple cloud nodes. Partitions of the files, rather than the full files, are saved on the cloud, which provides a good optimization of cloud storage usage. Only partial replication is used in this algorithm to ensure the reliability and availability of the data. Our focus is to improve performance and optimize storage usage by providing Data as a Service (DaaS) on the cloud. This algorithm solves the problem of having to fully replicate large data sets, which uses up a lot of precious space on the cloud nodes. Reducing the space needed will help in reducing the cost of providing such space. Moreover, performance is also increased, since multiple cloud servers will collaborate to provide the data to the cloud clients in a faster manner.
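
    The concurrent dual-direction idea can be sketched with two workers that claim partition indices from opposite ends of the list until they meet in the middle. The `fetch` callable standing in for a per-node download is hypothetical, as is the in-memory "cloud" used for the demo; the paper's actual scheduling and replication logic is more involved.

```python
import threading

def dual_direction_download(partition_ids, fetch):
    """Reassemble a file from its partitions, pulling from both ends at once.
    `fetch(pid)` stands in for a download from whichever node holds `pid`."""
    results = {}
    lock = threading.Lock()
    lo, hi = 0, len(partition_ids) - 1

    def worker(step):
        nonlocal lo, hi
        while True:
            with lock:                      # claim the next index at our end
                if lo > hi:
                    return
                idx = lo if step == 1 else hi
                if step == 1:
                    lo += 1
                else:
                    hi -= 1
            results[idx] = fetch(partition_ids[idx])

    threads = [threading.Thread(target=worker, args=(s,)) for s in (1, -1)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return b"".join(results[i] for i in range(len(partition_ids)))

# Demo against an in-memory "cloud" of six 4-byte partitions.
store = {f"p{i}": bytes([65 + i]) * 4 for i in range(6)}
data = dual_direction_download(list(store), store.__getitem__)
print(data)  # b'AAAABBBBCCCCDDDDEEEEFFFF'
```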

  20. Partial Storage Optimization and Load Control Strategy of Cloud Data Centers

    Science.gov (United States)

    2015-01-01

    We present a novel approach to solve cloud storage issues and provide a fast load balancing algorithm. Our approach is based on partitioning and concurrent dual-direction download of files from multiple cloud nodes. Partitions of the files, rather than the full files, are saved on the cloud, which provides a good optimization of cloud storage usage. Only partial replication is used in this algorithm to ensure the reliability and availability of the data. Our focus is to improve performance and optimize storage usage by providing Data as a Service (DaaS) on the cloud. This algorithm solves the problem of having to fully replicate large data sets, which uses up a lot of precious space on the cloud nodes. Reducing the space needed will help in reducing the cost of providing such space. Moreover, performance is also increased, since multiple cloud servers will collaborate to provide the data to the cloud clients in a faster manner. PMID:25973444

  1. Characterization of Mixed-Phase Clouds in the Laboratory

    Science.gov (United States)

    Foster, T. C.; Hallett, J.

    2005-12-01

    A technique was developed in which a mixed-phase cloud of controllable ice and water content is created. First a freezer filled with a water droplet cloud becomes supercooled. Then, in an isolated small volume of the freezer, an adjustable adiabatic expansion locally nucleates ice. Finally the two regions of the cloud are vigorously stirred together, producing a mixed-phase cloud throughout the chamber. At this point the water droplets evaporate and the crystals grow at a slow measurable rate, until a fully glaciated cloud results. Experiments were carried out at temperatures near -20 °C in a standard top-opening chest freezer. A cloud of supercooled water droplets several micrometers in diameter was produced by a commercial ultrasonic nebulizer. Ice was nucleated using the discharge of an empty compressed air pistol pumped to different initial pressures. In that process, high-pressure room-temperature air in the pistol expands adiabatically, cooling the air enough to nucleate water droplets, which then freeze homogeneously if sufficiently cold. The freezer was partitioned with thick movable walls of foam material to isolate the ice cloud in a small volume of the freezer before mixing occurs. Clouds of supercooled water droplets or of ice particles are readily produced and examined in collimated white light beams. They look similar visually in some cases, although large crystals with flat reflecting surfaces clearly differ due to the flashes of reflected light. When the pistol is discharged into the supercooled water cloud, it displays a distinct hazy bluish "plume." But discharge into the ice particle cloud leaves no such plume: that discharge only mixes the particles present. This discharge is a test of glaciation in our initially mixed freezer cloud. A visible plume indicates that supercooled water remains in the cloud, and no plume indicates the cloud is entirely ice at a high concentration. Our first unsuccessful experiments were done with the freezer

  2. Mobile Cloud Computing for Telemedicine Solutions

    Directory of Open Access Journals (Sweden)

    Mihaela GHEORGHE

    2014-01-01

    Mobile Cloud Computing is a significant technology which combines the emerging domains of mobile computing and cloud computing, and which has led to the development of one of the most challenging and innovative trends in the IT industry. It is still at an early stage of development, but its main characteristics, advantages and range of services, which are provided by an internet-based cluster system, have a strong impact on the process of developing telemedicine solutions for overcoming the wide challenges the medical system is confronting. Mobile Cloud integrates cloud computing into the mobile environment and has the advantage of overcoming obstacles related to performance (e.g. battery life, storage, and bandwidth), environment (e.g. heterogeneity, scalability, availability) and security (e.g. reliability and privacy) which are commonly present at the mobile computing level. In this paper, I present a comprehensive overview of mobile cloud computing, including definitions, services and the use of this technology for developing telemedicine applications.

  3. Research on cloud computing solutions

    OpenAIRE

    Liudvikas Kaklauskas; Vaida Zdanytė

    2015-01-01

    Cloud computing can be defined as a new style of computing in which dynamically scalable and often virtualized resources are provided as services over the Internet. Advantages of the cloud computing technology include cost savings, high availability, and easy scalability. Voas and Zhang described six phases of computing paradigms, from dummy terminals/mainframes, to PCs, to network computing, to grid and cloud computing. There are four types of cloud computing: public cloud, private cloud, ...

  4. VMware vCloud security

    CERN Document Server

    Sarkar, Prasenjit

    2013-01-01

    VMware vCloud Security provides the reader with in-depth knowledge and practical exercises sufficient to implement a secured private cloud using VMware vCloud Director and vCloud Networking and Security. This book is primarily for technical professionals with system administration and security administration skills and significant VMware vCloud experience who want to learn about advanced concepts of vCloud security and compliance.

  5. Security Architecture of Cloud Computing

    OpenAIRE

    V.KRISHNA REDDY; Dr. L.S.S.REDDY

    2011-01-01

    Cloud Computing offers services over the internet with dynamically scalable resources. Cloud Computing services provide benefits to users in terms of cost and ease of use. Cloud Computing services need to address security during the transmission of sensitive data and critical applications to shared and public cloud environments. Cloud environments are scaling up to meet large data processing and storage needs. Cloud computing environments have various advantages as well as disadvantages o...

  6. Security in hybrid cloud computing

    OpenAIRE

    Koudelka, Ondřej

    2016-01-01

    This bachelor thesis deals with the area of hybrid cloud computing, specifically with its security. The major aim of the thesis is to analyze and compare the chosen hybrid cloud providers. As a minor aim, this thesis compares the security challenges of hybrid cloud with those of other deployment models. In order to accomplish these aims, this thesis defines the terms cloud computing and hybrid cloud computing in its theoretical part. Furthermore, the security challenges for cloud computing a...

  7. Protection of electronic health records (EHRs) in cloud.

    Science.gov (United States)

    Alabdulatif, Abdulatif; Khalil, Ibrahim; Mai, Vu

    2013-01-01

    EHR technology has come into widespread use and has attracted attention in healthcare institutions as well as in research. Cloud services are used to build efficient EHR systems and obtain the greatest benefits of EHR implementation. Many issues relating to building an ideal EHR system in the cloud, especially the tradeoff between flexibility and security, have recently surfaced. The privacy of patient records in cloud platforms is still a point of contention. In this research, we are going to improve the management of access control by restricting participants' access through the use of distinct encrypted parameters for each participant in the cloud-based database. Also, we implement and improve an existing secure index search algorithm to enhance the efficiency of information control and flow through a cloud-based EHR system. At the final stage, we contribute to the design of reliable, flexible and secure access control, enabling quick access to EHR information.
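
    One way to realize "distinct encrypted parameters for each participant" is simply a separate symmetric key per role, with each record field encrypted under the key of the role allowed to read it. The sketch below uses the third-party cryptography package's Fernet recipe and is an illustration under that assumption, not the authors' scheme; the roles and field split are hypothetical.

```python
from cryptography.fernet import Fernet  # third-party: pip install cryptography

# One symmetric key per participant role; the split of fields is illustrative.
keys = {"doctor": Fernet.generate_key(), "billing": Fernet.generate_key()}

record = {
    "clinical_notes": ("doctor", b"BP 140/90, adjust medication"),
    "invoice_total": ("billing", b"420.00 USD"),
}

# Store every field encrypted under the key of the role allowed to read it.
stored = {field: (role, Fernet(keys[role]).encrypt(plain))
          for field, (role, plain) in record.items()}

def read_field(field, role):
    owner, token = stored[field]
    if owner != role:
        raise PermissionError(f"{role} may not read {field}")
    return Fernet(keys[role]).decrypt(token)

print(read_field("clinical_notes", "doctor"))   # decrypts successfully
# read_field("clinical_notes", "billing") raises PermissionError, and the
# billing key could not decrypt the token in any case.
```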

  8. [Treatment of cloud radiative effects in general circulation models]

    International Nuclear Information System (INIS)

    Wang, W.C.

    1993-01-01

    This is a renewal proposal for an on-going project of the Department of Energy (DOE)/Atmospheric Radiation Measurement (ARM) Program. The objective of the ARM Program is to improve the treatment of radiation-cloud interactions in GCMs so that reliable predictions of the timing and magnitude of greenhouse gas-induced global warming and regional responses can be made. The ARM Program supports two research areas: (I) the modeling and analysis of data related to the parameterization of clouds and radiation in general circulation models (GCMs); and (II) the development of advanced instrumentation for both mapping the three-dimensional structure of the atmosphere and high accuracy/precision radiometric observations. The present project conducts research in area (I) and focuses on the GCM treatment of cloud life cycle, optical properties, and vertical overlapping. The project has two tasks: (1) Development and Refinement of GCM Radiation-Cloud Treatment Using ARM Data; and (2) Validation of GCM Radiation-Cloud Treatment.

  9. Cloud Computing : Goals, Issues, SOA, Integrated Technologies and Future -scope

    Directory of Open Access Journals (Sweden)

    V. Madhu Viswanatham

    2016-10-01

    The expansion of networking infrastructure has provided a novel way to store and access resources in a reliable, convenient and affordable means of technology called the Cloud. The cloud has become so popular and established its dominance in many recent world innovations, and it has highly influenced the trend of Business Process Management with the advantage of shared resources. The ability to remain disaster tolerant, on-demand scalability, flexible deployment and cost effectiveness have led future technologies such as the Internet of Things to adopt the cloud as their data and processing center. However, along with the implementation of cloud-based technologies, we must also address the issues involved in their realization. This paper is a review of the advancements, scope and issues involved in realizing secure cloud-powered environments.

  10. A cosmic ray-climate link and cloud observations

    Directory of Open Access Journals (Sweden)

    Dunne Eimear M.

    2012-11-01

    Despite over 35 years of constant satellite-based measurements of cloud, reliable evidence of a long-hypothesized link between changes in solar activity and Earth’s cloud cover remains elusive. This work examines evidence of a cosmic ray-cloud link from a range of sources, including satellite-based cloud measurements and long-term ground-based climatological measurements. The satellite-based studies can be divided into two categories: (1) monthly to decadal timescale analysis and (2) daily timescale epoch-superpositional (composite) analysis. The latter analyses frequently focus on sudden high-magnitude reductions in the cosmic ray flux known as Forbush decrease events. At present, two long-term independent global satellite cloud datasets are available (ISCCP and MODIS). Although the differences between them are considerable, neither shows evidence of a solar-cloud link at either long or short timescales. Furthermore, reports of observed correlations between solar activity and cloud over the 1983-1995 period are attributed to the chance agreement between solar changes and artificially induced cloud trends. It is possible that the satellite cloud datasets and analysis methods may simply be too insensitive to detect a small solar signal. Evidence from ground-based studies suggests that some weak but statistically significant cosmic ray-cloud relationships may exist at regional scales, involving mechanisms related to the global electric circuit. However, a poor understanding of these mechanisms and their effects on cloud makes the net impacts of such links uncertain. Regardless of this, it is clear that there is no robust evidence of a widespread link between the cosmic ray flux and clouds.
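
    The epoch-superpositional (composite) analysis mentioned above averages a time series in a fixed window around each event, so that a weak but repeatable signal emerges from the noise. A minimal sketch on synthetic daily cloud-cover anomalies; all numbers and the imposed "Forbush-like" dip are illustrative.

```python
import numpy as np

def superposed_epoch(series, event_indices, window=10):
    """Average `series` in a +/-window around each event day; events too
    close to the edges of the record are skipped."""
    segments = [series[i - window:i + window + 1]
                for i in event_indices
                if window <= i < len(series) - window]
    lags = np.arange(-window, window + 1)
    return lags, np.mean(segments, axis=0)

# Synthetic daily anomalies with a weak dip imposed after each event day.
rng = np.random.default_rng(1)
cloud = rng.normal(0.0, 1.0, 5000)
events = rng.integers(50, 4950, size=40)
for e in events:
    cloud[e:e + 4] -= 0.3            # the buried post-event signal

lags, composite = superposed_epoch(cloud, events)
print(composite[lags == 0], composite[lags == 2])   # dip of roughly -0.3 (plus noise)
```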

  11. Microwave Atmospheric-Pressure Sensor

    Science.gov (United States)

    Flower, D. A.; Peckham, G. E.; Bradford, W. J.

    1986-01-01

    Report describes tests of microwave pressure sounder (MPS) for use in satellite measurements of atmospheric pressure. MPS is multifrequency radar operating between 25 and 80 GHz. Determines signal absorption over vertical path through atmosphere by measuring strength of echoes from ocean surface. MPS operates with cloud cover, and is suitable for use on current meteorological satellites.

  12. Towards Information Security Metrics Framework for Cloud Computing

    OpenAIRE

    Muhammad Imran Tariq

    2012-01-01

    Cloud computing has recently emerged as a new computing paradigm which basically aims to provide customized, reliable, dynamic services over the internet. Cost and security are influential issues in deploying cloud computing in large enterprises. Privacy and security are very important issues in terms of user trust and legal compliance. Information Security (IS) metrics are the best tool for measuring the efficiency, performance, effectiveness and impact of security constraints. It is very hard...

  13. Design Private Cloud of Oil and Gas SCADA System

    OpenAIRE

    Liu Miao; Mancang Yuan; Guodong Li

    2014-01-01

    SCADA (Supervisory Control and Data Acquisition) systems are supervisory computer control systems. SCADA systems are very important to oil and gas pipeline engineering. Cloud computing is fundamentally altering expectations for how and when computing, storage and networking resources should be allocated, managed and consumed. In order to increase the resource utilization, reliability and availability of the oil and gas pipeline SCADA system, a SCADA system based on cloud computing is propos...

  14. Cloud security in vogelvlucht

    NARCIS (Netherlands)

    Pieters, Wolter

    2011-01-01

    Cloud computing is currently the big hype in IT, and although many of its aspects are not new, the concept does create the need for new forms of security. At the same time, however, the very idea of cloud computing offers an opportunity to think this through: what is the role of information security in a

  15. CLOUD SERVICES IN EDUCATION

    Directory of Open Access Journals (Sweden)

    Z.S. Seydametova

    2011-05-01

    We present the online services based on cloud computing that Google provides to educational institutions. We describe our own experience of implementing Google Apps Education Edition in the educational process. We also analyzed and compared the experience of other universities in using cloud technologies.

  16. Cloud MicroAtlas

    Indian Academy of Sciences (India)

    We begin by outlining the life cycle of a tall cloud, and then briefly discuss cloud systems. We choose one aspect of this life cycle, namely, the rapid growth of water droplets in ice-free clouds, to then discuss in greater detail. Taking a single vortex to be a building block of turbulence, we demonstrate one mechanism by which ...

  17. Greening the cloud

    NARCIS (Netherlands)

    van den Hoed, Robert; Hoekstra, Eric; Procaccianti, Giuseppe; Lago, Patricia; Grosso, Paolo; Taal, Arie; Grosskop, Kay; van Bergen, Esther

    The cloud has become an essential part of our daily lives. We use it to store our documents (Dropbox), to stream our music and films (Spotify and Netflix) and without giving it any thought, we use it to work on documents in the cloud (Google Docs).

  18. Learning in the Clouds?

    Science.gov (United States)

    Butin, Dan W.

    2013-01-01

    Engaged learning--the type that happens outside textbooks and beyond the four walls of the classroom--moves beyond right and wrong answers to grappling with the uncertainties and contradictions of a complex world. iPhones back up to the "cloud." GoogleDocs is all about "cloud computing." Facebook is as ubiquitous as the sky…

  19. Kernel structures for Clouds

    Science.gov (United States)

    Spafford, Eugene H.; Mckendry, Martin S.

    1986-01-01

    An overview of the internal structure of the Clouds kernel was presented. An indication of how these structures will interact in the prototype Clouds implementation is given. Many specific details have yet to be determined and await experimentation with an actual working system.

  20. Designing Modern Applications for Virtual Private Cloud

    OpenAIRE

    Dunets, Roman

    2016-01-01

    Delivering enterprise applications that can operate with high availability, excellent performance, and strong security has always been challenging. An important aspect of delivering applications in a distributed environment is application portability and the packaging of dependencies. This work attempts to address these challenges using state-of-the-art tools and application delivery methods in a cloud. The goal of the project was to create and set up a sustainable and reliable service architecture fo...

  1. Distributed Cloud Storage Using Network Coding

    OpenAIRE

    Sipos, Marton A.; Fitzek, Frank; Roetter, Daniel Enrique Lucani; Pedersen, Morten Videbæk

    2014-01-01

    Distributed storage is usually considered within a cloud provider to ensure availability and reliability of the data. However, the user is still directly dependent on the quality of a single system. It also entrusts the service provider with large amounts of private data, which may be accessed by a successful attack on that cloud system or even be inspected by government agencies in some countries. This paper advocates a general framework for network coding enabled distributed storage over multi...
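
    The record is truncated, but the storage idea behind network coding can be shown in its simplest GF(2) form: two data blocks are stored as three coded blocks on three providers, so any two providers suffice for recovery and no single provider holds the full file. Real schemes use random linear codes over larger fields; this XOR toy is only for intuition, and all names are hypothetical.

```python
from itertools import combinations

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

# Two data blocks stored as three coded blocks across three clouds.
A = b"weather "
B = b"balloons"
clouds = {"cloud1": ("A", A), "cloud2": ("B", B), "cloud3": ("A^B", xor(A, B))}

def recover(available):
    """Rebuild A+B from any two coded blocks."""
    got = dict(clouds[c] for c in available)
    if "A" in got and "B" in got:
        return got["A"] + got["B"]
    parity = got["A^B"]
    other_label, other = next((l, v) for l, v in got.items() if l != "A^B")
    missing = xor(parity, other)               # (A^B) ^ A = B, and vice versa
    return (other + missing) if other_label == "A" else (missing + other)

for pair in combinations(clouds, 2):
    assert recover(pair) == A + B              # any two clouds reconstruct
print("file recovered from any 2 of 3 providers")
```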

  2. Insights from a Regime Decomposition Approach on CERES and CloudSat-inferred Cloud Radiative Effects

    Science.gov (United States)

    Oreopoulos, L.; Cho, N.; Lee, D.

    2015-12-01

    Our knowledge of the Cloud Radiative Effect (CRE) not only at the Top-of-the-Atmosphere (TOA), but also (with the help of some modeling) at the surface (SFC) and within the atmospheric column (ATM) has been steadily growing in recent years. Not only do we have global values for these CREs, but we can now also plot global maps of their geographical distribution. The next step in our effort to advance our knowledge of CRE is to systematically assess the contributions of prevailing cloud systems to the global values. The presentation addresses this issue directly. We identify the world's prevailing cloud systems, which we call "Cloud Regimes" (CRs) via clustering analysis of MODIS (Aqua-Terra) daily joint histograms of Cloud Top Pressure and Cloud Optical Thickness (TAU) at 1 degree scales. We then composite CERES diurnal values of CRE (TOA, SFC, ATM) separately for each CR by averaging these values for each CR occurrence, and thus find the contribution of each CR to the global value of CRE. But we can do more. We can actually decompose vertical profiles of inferred instantaneous CRE from CloudSat/CALIPSO (2B-FLXHR-LIDAR product) by averaging over Aqua CR occurrences (since A-Train formation flying allows collocation). Such an analysis greatly enhances our understanding of the radiative importance of prevailing cloud mixtures at different atmospheric levels. We can, for example, in addition to examining whether the CERES findings on which CRs contribute to radiative cooling and warming of the atmospheric column are consistent with CloudSat, also gain insight on why and where exactly this happens from the shape of the full instantaneous CRE vertical profiles.

  3. Cloud computing basics

    CERN Document Server

    Srinivasan, S

    2014-01-01

    Cloud Computing Basics covers the main aspects of this fast-moving technology so that both practitioners and students will be able to understand cloud computing. The author highlights the key aspects of this technology that a potential user might want to investigate before deciding to adopt this service. This book explains how cloud services can be used to augment existing services such as storage, backup and recovery. It addresses the details of how cloud security works and what users must be prepared for when they move their data to the cloud. The book also discusses how businesses could prepare for compliance with laws as well as industry standards such as the Payment Card Industry.

  4. Solar variability and clouds

    CERN Document Server

    Kirkby, Jasper

    2000-01-01

    Satellite observations have revealed a surprising imprint of the 11- year solar cycle on global low cloud cover. The cloud data suggest a correlation with the intensity of Galactic cosmic rays. If this apparent connection between cosmic rays and clouds is real, variations of the cosmic ray flux caused by long-term changes in the solar wind could have a significant influence on the global energy radiation budget and the climate. However a direct link between cosmic rays and clouds has not been unambiguously established and, moreover, the microphysical mechanism is poorly understood. New experiments are being planned to find out whether cosmic rays can affect cloud formation, and if so how. (37 refs).

  5. BlueSky Cloud Framework: An E-Learning Framework Embracing Cloud Computing

    Science.gov (United States)

    Dong, Bo; Zheng, Qinghua; Qiao, Mu; Shu, Jian; Yang, Jie

    Currently, E-Learning has grown into a widely accepted way of learning. With the huge growth in users, services, education contents and resources, E-Learning systems are facing the challenges of optimizing resource allocation, dealing with dynamic concurrency demands, handling rapid storage growth requirements and controlling costs. In this paper, an E-Learning framework based on cloud computing is presented, namely the BlueSky cloud framework. In particular, the architecture and core components of the BlueSky cloud framework are introduced. In the BlueSky cloud framework, physical machines are virtualized and allocated on demand for E-Learning systems. Moreover, the BlueSky cloud framework combines traditional middleware functions (such as load balancing and data caching) to serve E-Learning systems as a general architecture. It delivers reliable, scalable and cost-efficient services to E-Learning systems, and E-Learning organizations can establish systems through these services in a simple way. The BlueSky cloud framework addresses the challenges faced by E-Learning and improves the performance, availability and scalability of E-Learning systems.

  6. Stratocumulus Cloud Top Radiative Cooling and Cloud Base Updraft Speeds

    Science.gov (United States)

    Kazil, J.; Feingold, G.; Balsells, J.; Klinger, C.

    2017-12-01

    Cloud top radiative cooling is a primary driver of turbulence in the stratocumulus-topped marine boundary layer. A functional relationship between cloud top cooling and cloud base updraft speeds may therefore exist. A correlation of cloud top radiative cooling and cloud base updraft speeds has recently been identified empirically, providing a basis for satellite retrieval of cloud base updraft speeds. Such retrievals may enable analysis of aerosol-cloud interactions using satellite observations: updraft speeds at cloud base co-determine supersaturation and therefore the activation of cloud condensation nuclei, which in turn co-determine cloud properties and precipitation formation. We use large eddy simulation and an off-line radiative transfer model to explore the relationship between cloud top radiative cooling and cloud base updraft speeds in a marine stratocumulus cloud over the course of the diurnal cycle. We find that during daytime, at low cloud water path (CWP < 50 g m-2), cloud top cooling and cloud base updraft speeds are correlated, in agreement with the reported empirical relationship. During the night, in the absence of short-wave heating, CWP builds up (CWP > 50 g m-2) and long-wave emissions from cloud top saturate, while cloud base heating increases. In combination, cloud top cooling and cloud base updrafts become weakly anti-correlated. A functional relationship between cloud top cooling and cloud base updraft speed can hence be expected for stratocumulus clouds with a sufficiently low CWP and sub-saturated long-wave emissions, in particular during daytime. At higher CWPs, in particular at night, the relationship breaks down due to saturation of long-wave emissions from cloud top.
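
    The diagnostic implied above is a correlation conditioned on cloud water path. A toy sketch with synthetic placeholder data (not LES output); the 50 g m-2 split is the value quoted in the abstract:

    ```python
    # Conditional correlation of cloud top cooling vs. cloud base updraft speed.
    import numpy as np

    rng = np.random.default_rng(0)
    cwp = rng.uniform(10.0, 150.0, 1000)              # cloud water path [g m-2]
    cooling = rng.normal(60.0, 10.0, 1000)            # cloud top cooling [W m-2]
    updraft = np.where(cwp < 50.0,
                       0.01 * cooling + rng.normal(0.0, 0.05, 1000),  # correlated (thin, daytime-like)
                       rng.normal(0.6, 0.1, 1000))                    # decoupled (thick, nighttime-like)

    for name, sel in [("CWP < 50", cwp < 50.0), ("CWP >= 50", cwp >= 50.0)]:
        r = np.corrcoef(cooling[sel], updraft[sel])[0, 1]
        print(f"{name} g m-2: r = {r:.2f}")
    ```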

  7. Cloud Computing and Risk: A look at the EU and the application of the Data Protection Directive to cloud computing

    OpenAIRE

    Victoria Ostrzenski

    2013-01-01

    The use of cloud services for the management of records presents many challenges, both in terms of the particulars of data security as well as the need to sustain and ensure the greater reliability, authenticity, and accuracy of records. To properly grapple with these concerns requires the development of more specifically applicable and effective binding legislation; an important first step is the examination and identification of the risks specific to cloud computing coupled with an evaluation...

  8. Formation of Massive Molecular Cloud Cores by Cloud-cloud Collision

    OpenAIRE

    Inoue, Tsuyoshi; Fukui, Yasuo

    2013-01-01

    Recent observations of molecular clouds around rich massive star clusters including NGC3603, Westerlund 2, and M20 revealed that the formation of massive stars could be triggered by a cloud-cloud collision. By using three-dimensional, isothermal, magnetohydrodynamics simulations with the effect of self-gravity, we demonstrate that massive, gravitationally unstable, molecular cloud cores are formed behind the strong shock waves induced by the cloud-cloud collision. We find that the massive mol...

  9. Multi-Spectral Cloud Retrievals from the MODerate resolution Imaging Spectroradiometer (MODIS)

    Science.gov (United States)

    Platnick, Steven

    2004-01-01

    MODIS observations from the NASA EOS Terra spacecraft (1030 local time equatorial sun-synchronous crossing), launched in December 1999, have provided a unique set of Earth observation data. With the launch of the NASA EOS Aqua spacecraft (1330 local time crossing) in May 2002, two MODIS daytime (sunlit) and nighttime observations are now available in a 24-hour period, allowing some measure of diurnal variability. A comprehensive set of remote sensing algorithms for cloud masking and the retrieval of cloud physical and optical properties has been developed by members of the MODIS atmosphere science team. The archived products from these algorithms have applications in climate modeling, climate change studies, and numerical weather prediction, as well as fundamental atmospheric research. In addition to an extensive cloud mask, products include cloud-top properties (temperature, pressure, effective emissivity), cloud thermodynamic phase, cloud optical and microphysical parameters (optical thickness, effective particle radius, water path), as well as derived statistics. An overview of the instrument and cloud algorithms will be presented along with various examples, including an initial analysis of several operational global gridded (Level-3) cloud products from the two platforms. Statistics of cloud optical and microphysical properties as a function of latitude for land and ocean regions will be shown. Current algorithm research efforts will also be discussed.

  10. Classification of Clouds in Satellite Imagery Using Adaptive Fuzzy Sparse Representation

    Directory of Open Access Journals (Sweden)

    Wei Jin

    2016-12-01

    Full Text Available Automatic cloud detection and classification using satellite cloud imagery have various meteorological applications such as weather forecasting and climate monitoring. Cloud pattern analysis has recently become a research hotspot. Since satellites sense clouds remotely from space, and different cloud types often overlap and convert into each other, there is inherent fuzziness and uncertainty in satellite cloud imagery. Satellite observations are susceptible to noise, while traditional cloud classification methods are sensitive to noise and outliers, so it is hard for them to achieve reliable results. To deal with these problems, a satellite cloud classification method using adaptive fuzzy sparse representation-based classification (AFSRC) is proposed. Firstly, by defining adaptive parameters related to attenuation rate and critical membership, an improved fuzzy membership is introduced to accommodate the fuzziness and uncertainty of satellite cloud imagery; secondly, by effective combination of the improved fuzzy membership function and sparse representation-based classification (SRC), atoms in the training dictionary are optimized; finally, an adaptive fuzzy sparse representation classifier for cloud classification is proposed. Experiment results on FY-2G satellite cloud images show that the proposed method not only improves the accuracy of cloud classification, but also has strong stability and adaptability with high computational efficiency.
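
    For orientation, the sparse-representation baseline that AFSRC extends can be sketched as follows: greedily code a test sample in a dictionary of labeled training atoms, then assign the class whose atoms reconstruct it with the smallest residual. The adaptive fuzzy-membership weighting that distinguishes AFSRC is omitted here, and all names are illustrative.

    ```python
    # Plain sparse-representation-based classification (SRC), simplified sketch.
    import numpy as np

    def src_classify(D, labels, y, n_nonzero=10):
        """D: (features, atoms) column-normalized dictionary; labels: class id per atom."""
        labels = np.asarray(labels)
        residual, support = y.copy(), []
        for _ in range(n_nonzero):                       # orthogonal matching pursuit
            support.append(int(np.argmax(np.abs(D.T @ residual))))
            coef, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
            residual = y - D[:, support] @ coef
        best, best_err = None, np.inf
        for c in np.unique(labels[support]):             # class-wise reconstruction residuals
            idx = [s for s in support if labels[s] == c]
            coef, *_ = np.linalg.lstsq(D[:, idx], y, rcond=None)
            err = np.linalg.norm(y - D[:, idx] @ coef)
            if err < best_err:
                best, best_err = c, err
        return best
    ```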

  11. Classification of Clouds in Satellite Imagery Using Adaptive Fuzzy Sparse Representation

    Science.gov (United States)

    Jin, Wei; Gong, Fei; Zeng, Xingbin; Fu, Randi

    2016-01-01

    Automatic cloud detection and classification using satellite cloud imagery have various meteorological applications such as weather forecasting and climate monitoring. Cloud pattern analysis has recently become a research hotspot. Since satellites sense clouds remotely from space, and different cloud types often overlap and convert into each other, there is inherent fuzziness and uncertainty in satellite cloud imagery. Satellite observations are susceptible to noise, while traditional cloud classification methods are sensitive to noise and outliers, so it is hard for them to achieve reliable results. To deal with these problems, a satellite cloud classification method using adaptive fuzzy sparse representation-based classification (AFSRC) is proposed. Firstly, by defining adaptive parameters related to attenuation rate and critical membership, an improved fuzzy membership is introduced to accommodate the fuzziness and uncertainty of satellite cloud imagery; secondly, by effective combination of the improved fuzzy membership function and sparse representation-based classification (SRC), atoms in the training dictionary are optimized; finally, an adaptive fuzzy sparse representation classifier for cloud classification is proposed. Experiment results on FY-2G satellite cloud images show that the proposed method not only improves the accuracy of cloud classification, but also has strong stability and adaptability with high computational efficiency. PMID:27999261

  12. Cloud retrievals from satellite data using optimal estimation: evaluation and application to ATSR

    Directory of Open Access Journals (Sweden)

    C. A. Poulsen

    2012-08-01

    Full Text Available Clouds play an important role in balancing the Earth's radiation budget. Hence, it is vital that cloud climatologies are produced that quantify cloud macro- and microphysical parameters and the associated uncertainty. In this paper, we present the algorithm ORAC (Oxford-RAL retrieval of Aerosol and Cloud), which is based on fitting a physically consistent cloud model to satellite observations simultaneously from the visible to the mid-infrared, thereby ensuring that the resulting cloud properties provide a good representation of both the short-wave and long-wave radiative effects of the observed cloud. The advantages of the optimal estimation method are that it enables rigorous error propagation and the inclusion of all measurements and any a priori information and associated errors in a rigorous mathematical framework. The algorithm provides a measure of the consistency between the retrieval's representation of the cloud and the satellite radiances. The cloud parameters retrieved are cloud top pressure, cloud optical depth, cloud effective radius, cloud fraction and cloud phase.

    The algorithm can be applied to most visible/infrared satellite instruments. In this paper, we demonstrate its applicability to the Along-Track Scanning Radiometers ATSR-2 and AATSR. Examples of applying the algorithm to ATSR-2 flight data are presented and the sensitivity of the retrievals assessed; in particular, the algorithm is evaluated for a number of simulated single-layer and multi-layer conditions. The algorithm was found to perform well for single-layer cloud except when the cloud was very thin, i.e., less than one optical depth. For multi-layer cloud, the algorithm was robust except when the upper ice cloud layer was less than five optical depths. In these cases the retrieved cloud top pressure and cloud effective radius become a weighted average of the two layers. The sum of the optical depths of multi-layer cloud is retrieved well until the cloud becomes thick.
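
    The optimal estimation machinery referred to above minimizes a cost function J(x) = (y - F(x))^T S_y^{-1} (y - F(x)) + (x - x_a)^T S_a^{-1} (x - x_a), where x is the state vector (e.g., cloud top pressure, optical depth, effective radius), y the measured radiances, x_a the a priori, and S_y, S_a the error covariances. A generic Gauss-Newton iteration of the kind such retrievals use is sketched below; the forward model and Jacobian are arbitrary stand-ins, not the ORAC radiative transfer code.

    ```python
    # Generic optimal-estimation retrieval loop (Gauss-Newton), illustrative only.
    import numpy as np

    def retrieve(F, jacobian, y, xa, Sa, Sy, n_iter=10):
        Sa_inv, Sy_inv = np.linalg.inv(Sa), np.linalg.inv(Sy)
        x = xa.copy()
        for _ in range(n_iter):
            K = jacobian(x)                                # Jacobian dF/dx at current state
            A = Sa_inv + K.T @ Sy_inv @ K                  # inverse posterior covariance
            b = K.T @ Sy_inv @ (y - F(x)) - Sa_inv @ (x - xa)
            x = x + np.linalg.solve(A, b)                  # Gauss-Newton step
        S_hat = np.linalg.inv(A)                           # retrieval error covariance
        return x, S_hat                                    # state estimate and its uncertainty
    ```

    The returned covariance S_hat is what makes the error propagation "rigorous" in the sense the abstract describes: every retrieved parameter comes with an uncertainty consistent with the measurement and a priori errors.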

  13. Aerosol microphysical and radiative effects on continental cloud ensembles

    Science.gov (United States)

    Wang, Yuan; Vogel, Jonathan M.; Lin, Yun; Pan, Bowen; Hu, Jiaxi; Liu, Yangang; Dong, Xiquan; Jiang, Jonathan H.; Yung, Yuk L.; Zhang, Renyi

    2018-02-01

    Aerosol-cloud-radiation interactions represent one of the largest uncertainties in the current climate assessment. Much of the complexity arises from the non-monotonic responses of clouds, precipitation and radiative fluxes to aerosol perturbations under various meteorological conditions. In this study, an aerosol-aware WRF model is used to investigate the microphysical and radiative effects of aerosols in three weather systems during the March 2000 Cloud Intensive Observational Period campaign at the US Southern Great Plains. Three simulated cloud ensembles include a low-pressure deep convective cloud system, a collection of less-precipitating stratus and shallow cumulus, and a cold frontal passage. The WRF simulations are evaluated by several ground-based measurements. The microphysical properties of cloud hydrometeors, such as their mass and number concentrations, generally show monotonic trends as a function of cloud condensation nuclei concentrations. Aerosol radiative effects do not influence the trends of cloud microphysics, except for the stratus and shallow cumulus cases where aerosol semi-direct effects are identified. The precipitation changes by aerosols vary with the cloud types and their evolving stages, with a prominent aerosol invigoration effect and associated enhanced precipitation from the convective sources. The simulated aerosol direct effect suppresses precipitation in all three cases but does not overturn the aerosol indirect effect. Cloud fraction exhibits much smaller sensitivity (typically less than 2%) to aerosol perturbations, and the responses vary with aerosol concentrations and cloud regimes. The surface shortwave radiation shows a monotonic decrease by increasing aerosols, while the magnitude of the decrease depends on the cloud type.

  14. The collision of a strong shock with a gas cloud: a model for Cassiopeia A

    International Nuclear Information System (INIS)

    Sgro, A.G.

    1975-01-01

    The result of the collision of the shock with the cloud is a shock traveling around the cloud, a shock transmitted into the cloud, and a shock reflected from the cloud. By equating the cooling time of the post-transmitted-shock gas to the time required for the transmitted shock to travel the length of the cloud, a critical cloud density n_c′ is defined. For clouds with density greater than n_c′, the post-transmitted-shock gas cools rapidly and then emits the lines of the lower ionization stages of its constituent elements. The structure of such a cloud and its expected appearance to an observer are discussed and compared with the quasi-stationary condensations of Cas A. Conversely, clouds with density less than n_c′ remain hot for several thousand years, and are sources of X-radiation whose temperatures are much less than that of the intercloud gas. After the transmitted shock passes, the cloud pressure is greater than the pressure in the surrounding gas, causing the cloud to expand and the emission to decrease from its value just after the collision. A model in which the soft X-radiation of Cas A is due to a collection of such clouds is discussed. The faint emission patches to the north of Cas A are interpreted as preshocked clouds which will probably become quasi-stationary condensations after being hit by the shock.

  15. Making and Breaking Clouds

    Science.gov (United States)

    Kohler, Susanna

    2017-10-01

    Molecular clouds, which you're likely familiar with from stunning popular astronomy imagery, lead complicated, tumultuous lives. A recent study has now found that these features must be rapidly built and destroyed. Star-Forming Collapse: [A Hubble view of a molecular cloud, roughly two light-years long, that has broken off of the Carina Nebula. NASA/ESA, N. Smith (University of California, Berkeley)/The Hubble Heritage Team (STScI/AURA)] Molecular gas can be found throughout our galaxy in the form of eminently photogenic clouds (as featured throughout this post). Dense, cold molecular gas makes up more than 20% of the Milky Way's total gas mass, and gravitational instabilities within these clouds lead them to collapse under their own weight, resulting in the formation of our galaxy's stars. How does this collapse occur? The simplest explanation is that the clouds simply collapse in free fall, with no source of support to counter their contraction. But if all the molecular gas we observe collapsed on free-fall timescales, star formation in our galaxy would churn along at a rate that's at least an order of magnitude higher than the observed 1-2 solar masses per year in the Milky Way. Destruction by Feedback: Astronomers have theorized that there may be some mechanism that supports these clouds against gravity, slowing their collapse. But both theoretical studies and observations of the clouds have ruled out most of these potential mechanisms, and mounting evidence supports the original interpretation that molecular clouds are simply gravitationally collapsing. [A sub-mm image from ESO's APEX telescope of part of the Taurus molecular cloud, roughly ten light-years long, superimposed on a visible-light image of the region. ESO/APEX (MPIfR/ESO/OSO)/A. Hacar et al./Digitized Sky Survey 2. Acknowledgment: Davide De Martin] If this is indeed the case, then one explanation for our low observed star formation rate could be that molecular clouds are rapidly destroyed by feedback from the very stars

  16. Cloud Computing: An Overview

    Directory of Open Access Journals (Sweden)

    Libor Sarga

    2012-10-01

    Full Text Available As cloud computing is gaining acclaim as a cost-effective alternative to acquiring processing resources for corporations, scientific applications and individuals, various challenges are rapidly coming to the fore. While academia struggles to procure a concise definition, corporations are more interested in competitive advantages it may generate and individuals view it as a way of speeding up data access times or a convenient backup solution. Properties of the cloud architecture largely preclude usage of existing practices while achieving end-users’ and companies’ compliance requires considering multiple infrastructural as well as commercial factors, such as sustainability in case of cloud-side interruptions, identity management and off-site corporate data handling policies. The article overviews recent attempts at formal definitions of cloud computing, summarizes and critically evaluates proposed delimitations, and specifies challenges associated with its further proliferation. Based on the conclusions, future directions in the field of cloud computing are also briefly hypothesized to include deeper focus on community clouds and bolstering innovative cloud-enabled platforms and devices such as tablets, smart phones, as well as entertainment applications.

  17. Cloud Computing Law

    CERN Document Server

    Millard, Christopher

    2013-01-01

    This book is about the legal implications of cloud computing. In essence, ‘the cloud’ is a way of delivering computing resources as a utility service via the internet. It is evolving very rapidly with substantial investments being made in infrastructure, platforms and applications, all delivered ‘as a service’. The demand for cloud resources is enormous, driven by such developments as the deployment on a vast scale of mobile apps and the rapid emergence of ‘Big Data’. Part I of this book explains what cloud computing is and how it works. Part II analyses contractual relationships between cloud service providers and their customers, as well as the complex roles of intermediaries. Drawing on primary research conducted by the Cloud Legal Project at Queen Mary University of London, cloud contracts are analysed in detail, including the appropriateness and enforceability of ‘take it or leave it’ terms of service, as well as the scope for negotiating cloud deals. Specific arrangements for public sect...

  18. Community Cloud Computing

    Science.gov (United States)

    Marinos, Alexandros; Briscoe, Gerard

    Cloud Computing is rising fast, with its data centres growing at an unprecedented rate. However, this has come with concerns over privacy, efficiency at the expense of resilience, and environmental sustainability, because of the dependence on Cloud vendors such as Google, Amazon and Microsoft. Our response is an alternative model for the Cloud conceptualisation, providing a paradigm for Clouds in the community, utilising networked personal computers for liberation from the centralised vendor model. Community Cloud Computing (C3) offers an alternative architecture, created by combining the Cloud with paradigms from Grid Computing, principles from Digital Ecosystems, and sustainability from Green Computing, while remaining true to the original vision of the Internet. It is more technically challenging than Cloud Computing, having to deal with distributed computing issues, including heterogeneous nodes, varying quality of service, and additional security constraints. However, these are not insurmountable challenges, and with the need to retain control over our digital lives and the potential environmental consequences, it is a challenge we must pursue.

  19. Frontiers of reliability

    CERN Document Server

    Basu, Asit P; Basu, Sujit K

    1998-01-01

    This volume presents recent results in reliability theory by leading experts in the world. It will prove valuable for researchers and users of reliability theory. It consists of refereed invited papers on a broad spectrum of topics in reliability. The subjects covered include Bayesian reliability, Bayesian reliability modeling, confounding in a series system, DF tests, Edgeworth approximation to reliability, estimation under random censoring, fault tree reduction for reliability, inference about changes in hazard rates, information theory and reliability, mixture experiment, mixture of Weibul

  20. What do satellite backscatter ultraviolet and visible spectrometers see over snow and ice? A study of clouds and ozone using the A-train

    Directory of Open Access Journals (Sweden)

    A. P. Vasilkov

    2010-05-01

    Full Text Available In this paper, we examine how clouds over snow and ice affect ozone absorption and how these effects may be accounted for in satellite retrieval algorithms. Over snow and ice, the Aura Ozone Monitoring Instrument (OMI) Raman cloud pressure algorithm derives an effective scene pressure. When this scene pressure differs appreciably from the surface pressure, the difference is assumed to be caused by a cloud that is shielding atmospheric absorption and scattering below cloud-top from satellite view. A pressure difference of 100 hPa is used as a crude threshold for the detection of clouds that significantly shield tropospheric ozone absorption. Combining the OMI effective scene pressure and the Aqua MODerate-resolution Imaging Spectroradiometer (MODIS) cloud top pressure, we can distinguish between shielding and non-shielding clouds.

    To evaluate this approach, we performed radiative transfer simulations under various observing conditions. Using cloud vertical extinction profiles from the CloudSat Cloud Profiling Radar (CPR), we find that clouds over a bright surface can produce significant shielding (i.e., a reduction in the sensitivity of the top-of-the-atmosphere radiance to ozone absorption below the clouds). The amount of shielding provided by clouds depends upon the geometry (solar and satellite zenith angles) and the surface albedo, as well as cloud optical thickness. We also use CloudSat observations to qualitatively evaluate our approach. The CloudSat, Aqua, and Aura satellites fly in an afternoon polar orbit constellation with ground overpass times within 15 min of each other.

    The current Total Ozone Mapping Spectrometer (TOMS) total column ozone algorithm (which has also been applied to OMI) assumes no clouds over snow and ice. This assumption leads to errors in the retrieved ozone column. We show that the use of OMI effective scene pressures over snow and ice reduces these errors and leads to a more homogeneous spatial
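
    The shielding detection described above reduces to a simple pressure-difference threshold. A minimal sketch, with the 100 hPa value taken from the abstract and the function name hypothetical:

    ```python
    # Flag scenes where a cloud shields below-cloud ozone absorption from view.
    def is_shielding_cloud(scene_pressure_hpa: float,
                           surface_pressure_hpa: float,
                           threshold_hpa: float = 100.0) -> bool:
        """True when the OMI effective scene pressure sits well above the surface."""
        return (surface_pressure_hpa - scene_pressure_hpa) > threshold_hpa

    print(is_shielding_cloud(600.0, 850.0))   # True: cloud top well above an 850 hPa surface
    print(is_shielding_cloud(820.0, 850.0))   # False: scene pressure close to the surface
    ```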

  1. Diffuse interstellar clouds

    International Nuclear Information System (INIS)

    Black, J.H.

    1987-01-01

    The author defines and discusses the nature of diffuse interstellar clouds. He discusses how they contribute to the general extinction of starlight. The atomic and molecular species that have been identified in the ultraviolet, visible, and near infrared regions of the spectrum of a diffuse cloud are presented. The author illustrates some of the practical considerations that affect absorption line observations of interstellar atoms and molecules. Various aspects of the theoretical description of diffuse clouds required for a full interpretation of the observations are discussed

  2. Cloud Computing Security

    OpenAIRE

    Ngongang, Guy

    2011-01-01

    This project aimed to show that it is possible to use a network intrusion detection system in the cloud. Security in the cloud is a concern nowadays, and security professionals are still devising means to make cloud computing more secure. First of all, ESX 4.0, vCenter Server and vCenter Lab Manager were successfully installed on server hardware to build the platform. This allowed the creation and deployment of many virtual servers. Those servers have operating systems and a...

  3. Aerosols, clouds and radiation

    Energy Technology Data Exchange (ETDEWEB)

    Twomey, S [University of Arizona, Tucson, AZ (USA). Inst. of Atmospheric Physics

    1991-01-01

    Most of the so-called 'CO₂ effect' is, in fact, an 'H₂O effect' brought into play by the climate modeler's assumption that planetary average temperature dictates water-vapor concentration (following Clapeyron-Clausius). That assumption ignores the removal process, which cloud physicists know to be influenced by the aerosol, since the latter primarily controls cloud droplet number and size. Droplet number and size are also influential for shortwave (solar) energy. The reflectance of many thin to moderately thick clouds changes when nuclei concentrations change, making shortwave albedo susceptible to aerosol influence.
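
    The susceptibility implied by this argument can be made concrete: for a non-absorbing cloud whose optical thickness varies as the cube root of droplet number at fixed liquid water, Twomey's analysis gives dA/dlnN ≈ A(1 - A)/3, which is largest for clouds of intermediate albedo. A worked example with illustrative numbers:

    ```python
    # Twomey albedo susceptibility, dA/dlnN ≈ A(1-A)/3 (non-absorbing cloud,
    # fixed liquid water path). Numbers below are illustrative only.
    def albedo_susceptibility(albedo: float) -> float:
        return albedo * (1.0 - albedo) / 3.0

    A = 0.5                                      # moderately thick cloud
    d_ln_n = 0.3                                 # ~30% increase in droplet number
    print(f"dA ≈ {albedo_susceptibility(A) * d_ln_n:.3f}")   # ≈ 0.025
    ```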

  4. Effect of radiation pressure in the cores of globular clusters

    Energy Technology Data Exchange (ETDEWEB)

    Angeletti, L; Capuzzo-Dolcetta, R; Giannone

    1981-10-01

    The possible effects of the presence of a dust cloud in the cores of globular clusters were investigated. Two cluster models were considered together with various models of clouds. The problem of radiation transfer was solved under some simplifying assumptions. Owing to a differential absorption of the star light in the cloud, radiation pressure turned out to be inward-directed in some cloud models. This fact may lead to a confinement of some dust in the central regions of globular clusters.

  5. Adaptive Network Coded Clouds: High Speed Downloads and Cost-Effective Version Control

    DEFF Research Database (Denmark)

    Sipos, Marton A.; Heide, Janus; Roetter, Daniel Enrique Lucani

    2018-01-01

    Although cloud systems provide a reliable and flexible storage solution, the use of a single cloud service constitutes a single point of failure, which can compromise data availability, download speed, and security. To address these challenges, we advocate the use of multiple cloud storage providers simultaneously, using network coding as the key enabling technology. Our goal is to study two challenges of network coded storage systems. First, the efficient update of the number of coded fragments per cloud in a system aggregating multiple clouds in order to boost the download speed of files. We developed a novel scheme using recoding with limited packets to trade off storage space, reliability, and data retrieval speed. Implementation and measurements with commercial cloud providers show that up to 9x less network use is needed compared to other network coding schemes, while maintaining similar...

  6. Synergetic cloud fraction determination for SCIAMACHY using MERIS

    Directory of Open Access Journals (Sweden)

    C. Schlundt

    2011-02-01

    Full Text Available Since clouds play an essential role in the Earth's climate system, it is important to understand cloud characteristics as well as their distribution on a global scale using satellite observations. The main scientific objective of SCIAMACHY (SCanning Imaging Absorption spectroMeter for Atmospheric CHartographY), onboard the ENVISAT satellite, is the retrieval of vertical columns of trace gases.

    On the one hand, SCIAMACHY has to be sensitive to small variations in trace gas concentrations, which means the ground pixel size has to be large enough. On the other hand, such a large pixel size leads to the problem that SCIAMACHY spectra are often contaminated by clouds. SCIAMACHY spectral measurements are not well suited to deriving a reliable sub-pixel cloud fraction that can be used as an input parameter for subsequent retrievals of cloud properties or vertical trace gas columns. Therefore, we use MERIS/ENVISAT spectral measurements, with their high spatial resolution, as sub-pixel information for the determination of the MerIs Cloud fRaction fOr Sciamachy (MICROS). Since MERIS covers an even broader swath width than SCIAMACHY, no problems in the spatial and temporal collocation of measurements occur. This enables the derivation of a SCIAMACHY cloud fraction with an accuracy much higher than that of other current cloud fractions that are based on SCIAMACHY's PMD (Polarization Measurement Device) data.

    We present our newly developed MICROS algorithm, based on the threshold approach, as well as a qualitative validation of our results with MERIS satellite images for different locations, especially with respect to bright surfaces such as snow/ice and sand. In addition, the SCIAMACHY cloud fractions derived from MICROS are intercompared with other current SCIAMACHY cloud fractions based on different approaches, demonstrating a considerable improvement in geometric cloud fraction determination using the MICROS algorithm.
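
    A threshold-based sub-pixel cloud fraction of the kind MICROS computes can be sketched as the fraction of collocated MERIS pixels inside one SCIAMACHY ground pixel whose reflectance exceeds a threshold; bright surfaces such as snow/ice or sand are precisely the case where a single fixed threshold breaks down. The threshold value and names below are illustrative, not the published algorithm constants.

    ```python
    # Sub-pixel cloud fraction from high-resolution imager pixels (sketch).
    import numpy as np

    def sub_pixel_cloud_fraction(meris_reflectance: np.ndarray,
                                 threshold: float = 0.3) -> float:
        """meris_reflectance: all MERIS pixels collocated with one SCIAMACHY pixel."""
        return float(np.mean(meris_reflectance > threshold))

    footprint = np.array([0.05, 0.08, 0.45, 0.52, 0.07, 0.61])   # toy reflectances
    print(sub_pixel_cloud_fraction(footprint))                   # 0.5
    ```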

  7. Pressurized-water reactors

    International Nuclear Information System (INIS)

    Bush, S.H.

    1983-03-01

    An overview of the pressurized-water reactor (PWR) pressure boundary problems is presented. Specifically exempted will be discussions of problems with pumps, valves and steam generators on the basis that they will be covered in other papers. Pressure boundary reliability is examined in the context of real or perceived problems occurring over the past 5 to 6 years since the last IAEA Reliability Symposium. Issues explicitly covered will include the status of the pressurized thermal-shock problem, reliability of inservice inspections with emphasis on examination of the region immediately under the reactor pressure vessel (RPV) cladding, history of piping failures with emphasis on failure modes and mechanisms. Since nondestructive examination is the topic of one session, discussion will be limited to results rather than techniques

  8. COMPARATIVE STUDY OF CLOUD COMPUTING AND MOBILE CLOUD COMPUTING

    OpenAIRE

    Nidhi Rajak*, Diwakar Shukla

    2018-01-01

    The present era is that of Information and Communication Technology (ICT), and much research is under way on Cloud Computing and Mobile Cloud Computing, covering security issues, data management, load balancing and so on. Cloud computing provides services to the end user over the Internet, and its primary objectives are resource sharing and pooling among end users. Mobile Cloud Computing is a combination of Cloud Computing and Mobile Computing. Here, data is stored in...

  9. Molecular clouds near supernova remnants

    International Nuclear Information System (INIS)

    Wootten, H.A.

    1978-01-01

    The physical properties of molecular clouds near supernova remnants were investigated. Various properties of the structure and kinematics of these clouds are used to establish their physical association with well-known remnants. An infrared survey of the most massive clouds revealed embedded objects, probably stars whose formation was induced by the supernova blast wave. In order to understand the relationship between these and other molecular clouds, a control group of clouds was also observed. Excitation models for dense regions of all the clouds are constructed to evaluate molecular abundances in these regions. Those clouds that have embedded stars have lower molecular abundances than the clouds that do not. A cloud near the W28 supernova remnant also has low abundances. Molecular abundances are used to measure an important parameter, the electron density, which is not directly observable. In some clouds extensive deuterium fractionation is observed which confirms electron density measurements in those clouds. Where large deuterium fractionation is observed, the ionization rate in the cloud interior can also be measured. The electron density and ionization rate in the cloud near W28 are higher than in most clouds. The molecular abundances and electron densities are functions of the chemical and dynamical state of evolution of the cloud. Those clouds with lowest abundances are probably the youngest clouds. As low-abundance clouds, some clouds near supernova remnants may have been recently swept from the local interstellar material. Supernova remnants provide sites for star formation in ambient clouds by compressing them, and they sweep new clouds from more diffuse local matter.

  10. Are Cloud Environments Ready for Scientific Applications?

    Science.gov (United States)

    Mehrotra, P.; Shackleford, K.

    2011-12-01

    Cloud computing environments are becoming widely available both in the commercial and government sectors. They provide flexibility to rapidly provision resources in order to meet dynamic and changing computational needs without the customers incurring capital expenses and/or requiring technical expertise. Clouds also provide reliable access to resources even though the end-user may not have in-house expertise for acquiring or operating such resources. Consolidation and pooling in a cloud environment allow organizations to achieve economies of scale in provisioning or procuring computing resources and services. Because of these and other benefits, many businesses and organizations are migrating their business applications (e.g., websites, social media, and business processes) to cloud environments, as evidenced by the commercial success of offerings such as the Amazon EC2. In this paper, we focus on the feasibility of utilizing cloud environments for scientific workloads and workflows particularly of interest to NASA scientists and engineers. There is a wide spectrum of such technical computations. These applications range from small workstation-level computations to mid-range computing requiring small clusters to high-performance simulations requiring supercomputing systems with high bandwidth/low latency interconnects. Data-centric applications manage and manipulate large data sets such as satellite observational data and/or data previously produced by high-fidelity modeling and simulation computations. Most of the applications are run in batch mode with static resource requirements. However, there do exist situations that have dynamic demands, particularly ones with public-facing interfaces providing information to the general public, collaborators and partners, as well as to internal NASA users. In the last few months we have been studying the suitability of cloud environments for NASA's technical and scientific workloads. We have ported several applications to

  11. Taxonomy of cloud computing services

    NARCIS (Netherlands)

    Hoefer, C.N.; Karagiannis, Georgios

    2010-01-01

    Cloud computing is a highly discussed topic, and many big players of the software industry are entering the development of cloud services. Several companies want to explore the possibilities and benefits of cloud computing, but with the amount of cloud computing services increasing quickly, the need

  12. Cloud Computing (1/2)

    CERN Multimedia

    CERN. Geneva

    2012-01-01

    Cloud computing, the recent years' buzzword for distributed computing, continues to attract and keep the interest of both the computing and business worlds. These lectures aim at explaining "What is Cloud Computing?", identifying and analyzing its characteristics, models, and applications. The lectures will explore different "Cloud definitions" given by different authors and use them to introduce the particular concepts. The main cloud models (SaaS, PaaS, IaaS), cloud types (public, private, hybrid), cloud standards and security concerns will be presented. The borders between Cloud Computing and Grid Computing, Server Virtualization, and Utility Computing will be discussed and analyzed.

  13. Cloud Computing (2/2)

    CERN Multimedia

    CERN. Geneva

    2012-01-01

    Cloud computing, the recent years' buzzword for distributed computing, continues to attract and keep the interest of both the computing and business worlds. These lectures aim at explaining "What is Cloud Computing?", identifying and analyzing its characteristics, models, and applications. The lectures will explore different "Cloud definitions" given by different authors and use them to introduce the particular concepts. The main cloud models (SaaS, PaaS, IaaS), cloud types (public, private, hybrid), cloud standards and security concerns will be presented. The borders between Cloud Computing and Grid Computing, Server Virtualization, and Utility Computing will be discussed and analyzed.

  14. IBM SmartCloud essentials

    CERN Document Server

    Schouten, Edwin

    2013-01-01

    A practical, user-friendly guide that provides an introduction to cloud computing using IBM SmartCloud, along with a thorough understanding of resource management in a cloud environment.This book is great for anyone who wants to get a grasp of what cloud computing is and what IBM SmartCloud has to offer. If you are an IT specialist, IT architect, system administrator, or a developer who wants to thoroughly understand the cloud computing resource model, this book is ideal for you. No prior knowledge of cloud computing is expected.

  15. Cloud flexibility using DIRAC interware

    International Nuclear Information System (INIS)

    Albor, Víctor Fernandez; Miguelez, Marcos Seco; Silva, Juan Jose Saborido; Pena, Tomas Fernandez; Muñoz, Victor Mendez; Diaz, Ricardo Graciani

    2014-01-01

    Communities in different locations are running their computing jobs on dedicated infrastructures without the need to worry about software, hardware or even the site where their programs are going to be executed. Nevertheless, this usually implies that they are restricted to using certain types or versions of an Operating System, because either their software needs a definite version of a system library or a specific platform is required by the collaboration to which they belong. In this scenario, if a data center wants to serve software to incompatible communities, it has to split its physical resources among those communities. This splitting will inevitably lead to an underuse of resources, because the data centers are bound to have periods where one or more of their subclusters are idle. It is in this situation that Cloud Computing provides the flexibility and reduction in computational cost that data centers are searching for. This paper describes a set of realistic tests that we ran on one such implementation. The tests comprise software from three different HEP communities (Auger, LHCb and QCD phenomenologists) and the Parsec Benchmark Suite running on one or more of three Linux flavors (SL5, Ubuntu 10.04 and Fedora 13). The implemented infrastructure has, at the cloud level, CloudStack, which manages the virtual machines (VMs) and the hosts on which they run, and, at the user level, the DIRAC framework along with a VM extension that submits, monitors and keeps track of the user jobs and also requests CloudStack to start or stop the necessary VMs. In this infrastructure, the community software is distributed via CernVM-FS, which has been proven to be a reliable and scalable software distribution system. With the resulting infrastructure, users are allowed to send their jobs transparently to the Data Center. The main purpose of this system is the creation of a flexible, multiplatform cluster with a scalable method for software distribution for

  16. Cloud flexibility using DIRAC interware

    Science.gov (United States)

    Fernandez Albor, Víctor; Seco Miguelez, Marcos; Fernandez Pena, Tomas; Mendez Muñoz, Victor; Saborido Silva, Juan Jose; Graciani Diaz, Ricardo

    2014-06-01

    Communities in different locations are running their computing jobs on dedicated infrastructures without the need to worry about software, hardware or even the site where their programs are going to be executed. Nevertheless, this usually implies that they are restricted to using certain types or versions of an Operating System, because either their software needs a definite version of a system library or a specific platform is required by the collaboration to which they belong. In this scenario, if a data center wants to serve software to incompatible communities, it has to split its physical resources among those communities. This splitting will inevitably lead to an underuse of resources, because the data centers are bound to have periods where one or more of their subclusters are idle. It is in this situation that Cloud Computing provides the flexibility and reduction in computational cost that data centers are searching for. This paper describes a set of realistic tests that we ran on one such implementation. The tests comprise software from three different HEP communities (Auger, LHCb and QCD phenomenologists) and the Parsec Benchmark Suite running on one or more of three Linux flavors (SL5, Ubuntu 10.04 and Fedora 13). The implemented infrastructure has, at the cloud level, CloudStack, which manages the virtual machines (VMs) and the hosts on which they run, and, at the user level, the DIRAC framework along with a VM extension that submits, monitors and keeps track of the user jobs and also requests CloudStack to start or stop the necessary VMs. In this infrastructure, the community software is distributed via CernVM-FS, which has been proven to be a reliable and scalable software distribution system. With the resulting infrastructure, users are allowed to send their jobs transparently to the Data Center. The main purpose of this system is the creation of a flexible, multiplatform cluster with a scalable method for software distribution for several
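
    For orientation, submitting a user job through the DIRAC API looks roughly like the sketch below. It assumes a configured DIRAC client and a valid proxy; the Job/Dirac interfaces shown are part of the public DIRAC API, but the executable name and CPU-time value are examples.

    ```python
    # Minimal DIRAC job submission (sketch; requires a configured DIRAC client).
    from DIRAC.Core.Base import Script
    Script.parseCommandLine(ignoreErrors=True)     # initialize the DIRAC environment

    from DIRAC.Interfaces.API.Dirac import Dirac
    from DIRAC.Interfaces.API.Job import Job

    job = Job()
    job.setName("parsec-benchmark")                # example job name
    job.setExecutable("run_parsec.sh")             # example payload script
    job.setCPUTime(3600)                           # requested CPU time in seconds

    result = Dirac().submitJob(job)                # DIRAC matches the job to a VM slot
    print(result)                                  # S_OK/S_ERROR dictionary with the job ID
    ```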

  17. Distributed and cloud computing from parallel processing to the Internet of Things

    CERN Document Server

    Hwang, Kai; Fox, Geoffrey C

    2012-01-01

    Distributed and Cloud Computing, named a 2012 Outstanding Academic Title by the American Library Association's Choice publication, explains how to create high-performance, scalable, reliable systems, exposing the design principles, architecture, and innovative applications of parallel, distributed, and cloud computing systems. Starting with an overview of modern distributed models, the book provides comprehensive coverage of distributed and cloud computing, including: Facilitating management, debugging, migration, and disaster recovery through virtualization Clustered systems for resear

  18. Cloud MicroAtlas∗

    Indian Academy of Sciences (India)

    ∗Any resemblance to the title of David Mitchell's book is purely intentional! The most comprehensive reference we know of on the subject of cloud microphysics is the book ....

  19. Experimental project - Cloud chamber

    International Nuclear Information System (INIS)

    Nour, Elena; Quinchard, Gregory; Soudon, Paul

    2015-01-01

    This document reports an academic experimental project dealing with the general concepts of radioactivity and their application to the cloud chamber experiment. The author first recalls the history of the design and development of the cloud chamber, gives some definitions and characteristics of cosmic radiation, and describes the principle and physics of a cloud chamber. The second part is theoretical, and addresses the particles involved, the origins of electrons, and issues related to the transfer of energy (Bremsstrahlung effect, Bragg peak). The third part reports the experimental work: the assessment of a cloud droplet radius, the identification of a trace for each particle type (alphas and electrons), and the study of deviation by a magnetic field.

  20. Green symbiotic cloud communications

    CERN Document Server

    Mustafa, H D; Desai, Uday B; Baveja, Brij Mohan

    2017-01-01

    This book intends to change the perception of modern day telecommunications. Communication systems, usually perceived as “dumb pipes”, carrying information / data from one point to another, are evolved into intelligently communicating smart systems. The book introduces a new field of cloud communications. The concept, theory, and architecture of this new field of cloud communications are discussed. The book lays down nine design postulates that form the basis of the development of a first of its kind cloud communication paradigm entitled Green Symbiotic Cloud Communications or GSCC. The proposed design postulates are formulated in a generic way to form the backbone for development of systems and technologies of the future. The book can be used to develop courses that serve as an essential part of graduate curriculum in computer science and electrical engineering. Such courses can be independent or part of high-level research courses. The book will also be of interest to a wide range of readers including b...

  1. Entangled Cloud Storage

    DEFF Research Database (Denmark)

    Ateniese, Giuseppe; Dagdelen, Özgür; Damgård, Ivan Bjerre

    2012-01-01

    Entangled cloud storage enables a set of clients {P_i} to “entangle” their files {f_i} into a single clew c to be stored by a (potentially malicious) cloud provider S. The entanglement makes it impossible to modify or delete a significant part of the clew without affecting all files in c. A clew keeps the files in it private but still lets each client P_i recover his own data by interacting with S; no cooperation from other clients is needed. At the same time, the cloud provider is discouraged from altering or overwriting any significant part of c, as this would imply that none of the clients can recover their files. We provide theoretical foundations for entangled cloud storage, introducing the notion of an entangled encoding scheme that guarantees strong security requirements capturing the properties above. We also give a concrete construction based on privacy-preserving polynomial interpolation...
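
    To give intuition for the polynomial-interpolation idea, the toy sketch below (not the paper's construction, and with no cryptographic hardening) entangles one field element per client into the coefficients of the unique interpolating polynomial: tampering with any coefficient corrupts every client's point.

    ```python
    # Toy "entanglement" by Lagrange interpolation over a prime field (Python 3.8+).
    P = 2_147_483_647                       # field modulus (a Mersenne prime)

    def poly_mul_linear(poly, a):
        """Multiply poly(x) by (x - a) mod P; coefficients stored low degree first."""
        out = [0] * (len(poly) + 1)
        for k, c in enumerate(poly):
            out[k] = (out[k] - a * c) % P
            out[k + 1] = (out[k + 1] + c) % P
        return out

    def interpolate(points):
        """Coefficients of the unique polynomial through the given (x, y) points."""
        coeffs = [0] * len(points)
        for i, (xi, yi) in enumerate(points):
            basis, denom = [1], 1
            for j, (xj, _) in enumerate(points):
                if j != i:
                    basis = poly_mul_linear(basis, xj)
                    denom = (denom * (xi - xj)) % P
            scale = (yi * pow(denom, -1, P)) % P
            for k, b in enumerate(basis):
                coeffs[k] = (coeffs[k] + scale * b) % P
        return coeffs

    def evaluate(coeffs, x):
        return sum(c * pow(x, k, P) for k, c in enumerate(coeffs)) % P

    files = {1: 111, 2: 222, 3: 333}                # client id -> data block
    clew = interpolate(list(files.items()))         # the stored "clew"
    assert all(evaluate(clew, x) == f for x, f in files.items())

    clew[0] = (clew[0] + 1) % P                     # provider tampers with one coefficient
    print([evaluate(clew, x) for x in files])       # every client's block is now corrupted
    ```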

  2. Satellite Cloud and Radiative Property Processing and Distribution System on the NASA Langley ASDC OpenStack and OpenShift Cloud Platform

    Science.gov (United States)

    Nguyen, L.; Chee, T.; Palikonda, R.; Smith, W. L., Jr.; Bedka, K. M.; Spangenberg, D.; Vakhnin, A.; Lutz, N. E.; Walter, J.; Kusterer, J.

    2017-12-01

    Cloud Computing offers new opportunities for large-scale scientific data producers to utilize Infrastructure-as-a-Service (IaaS) and Platform-as-a-Service (PaaS) IT resources to process and deliver data products in an operational environment where timely delivery, reliability, and availability are critical. The NASA Langley Research Center Atmospheric Science Data Center (ASDC) is building and testing a private and public facing cloud for users in the Science Directorate to utilize as an everyday production environment. The NASA SatCORPS (Satellite ClOud and Radiation Property Retrieval System) team processes and derives near real-time (NRT) global cloud products from operational geostationary (GEO) satellite imager datasets. To deliver these products, we will utilize the public facing cloud and OpenShift to deploy a load-balanced webserver for data storage, access, and dissemination. The OpenStack private cloud will host data ingest and computational capabilities for SatCORPS processing. This paper will discuss the SatCORPS migration towards, and usage of, the ASDC Cloud Services in an operational environment. Detailed lessons learned from use of prior cloud providers, specifically the Amazon Web Services (AWS) GovCloud and the Government Cloud administered by the Langley Managed Cloud Environment (LMCE) will also be discussed.

  3. Marine Cloud Brightening

    Energy Technology Data Exchange (ETDEWEB)

    Latham, John; Bower, Keith; Choularton, Tom; Coe, H.; Connolly, P.; Cooper, Gary; Craft, Tim; Foster, Jack; Gadian, Alan; Galbraith, Lee; Iacovides, Hector; Johnston, David; Launder, Brian; Leslie, Brian; Meyer, John; Neukermans, Armand; Ormond, Bob; Parkes, Ben; Rasch, Philip J.; Rush, John; Salter, Stephen; Stevenson, Tom; Wang, Hailong; Wang, Qin; Wood, Robert

    2012-09-07

    The idea behind the marine cloud-brightening (MCB) geoengineering technique is that seeding marine stratocumulus clouds with copious quantities of roughly monodisperse sub-micrometre sea water particles might significantly enhance the cloud droplet number concentration, and thereby the cloud albedo and possibly longevity. This would produce a cooling, which general circulation model (GCM) computations suggest could - subject to satisfactory resolution of technical and scientific problems identified herein - have the capacity to balance global warming up to the carbon dioxide-doubling point. We describe herein an account of our recent research on a number of critical issues associated with MCB. This involves (i) GCM studies, which are our primary tools for evaluating globally the effectiveness of MCB, and assessing its climate impacts on rainfall amounts and distribution, and also polar sea-ice cover and thickness; (ii) high-resolution modelling of the effects of seeding on marine stratocumulus, which are required to understand the complex array of interacting processes involved in cloud brightening; (iii) microphysical modelling sensitivity studies, examining the influence of seeding amount, seedparticle salt-mass, air-mass characteristics, updraught speed and other parameters on cloud-albedo change; (iv) sea water spray-production techniques; (v) computational fluid dynamics studies of possible large-scale periodicities in Flettner rotors; and (vi) the planning of a three-stage limited-area field research experiment, with the primary objectives of technology testing and determining to what extent, if any, cloud albedo might be enhanced by seeding marine stratocumulus clouds on a spatial scale of around 100 km. We stress that there would be no justification for deployment of MCB unless it was clearly established that no significant adverse consequences would result. There would also need to be an international agreement firmly in favour of such action.

  4. CLOUD COMPUTING SECURITY ISSUES

    OpenAIRE

    Florin OGIGAU-NEAMTIU

    2012-01-01

    The term “cloud computing” has been in the spotlight of IT specialists in recent years because of its potential to transform this industry. The promised benefits have led companies to invest great sums of money in researching and developing this domain, and great steps have been made towards implementing this technology. Managers have traditionally viewed IT as difficult and expensive, and the promise of cloud computing leads many to think that IT will now be easy and cheap. The reality ...

  5. Marine cloud brightening.

    Science.gov (United States)

    Latham, John; Bower, Keith; Choularton, Tom; Coe, Hugh; Connolly, Paul; Cooper, Gary; Craft, Tim; Foster, Jack; Gadian, Alan; Galbraith, Lee; Iacovides, Hector; Johnston, David; Launder, Brian; Leslie, Brian; Meyer, John; Neukermans, Armand; Ormond, Bob; Parkes, Ben; Rasch, Phillip; Rush, John; Salter, Stephen; Stevenson, Tom; Wang, Hailong; Wang, Qin; Wood, Rob

    2012-09-13

    The idea behind the marine cloud-brightening (MCB) geoengineering technique is that seeding marine stratocumulus clouds with copious quantities of roughly monodisperse sub-micrometre sea water particles might significantly enhance the cloud droplet number concentration, and thereby the cloud albedo and possibly longevity. This would produce a cooling, which general circulation model (GCM) computations suggest could-subject to satisfactory resolution of technical and scientific problems identified herein-have the capacity to balance global warming up to the carbon dioxide-doubling point. We describe herein an account of our recent research on a number of critical issues associated with MCB. This involves (i) GCM studies, which are our primary tools for evaluating globally the effectiveness of MCB, and assessing its climate impacts on rainfall amounts and distribution, and also polar sea-ice cover and thickness; (ii) high-resolution modelling of the effects of seeding on marine stratocumulus, which are required to understand the complex array of interacting processes involved in cloud brightening; (iii) microphysical modelling sensitivity studies, examining the influence of seeding amount, seed-particle salt-mass, air-mass characteristics, updraught speed and other parameters on cloud-albedo change; (iv) sea water spray-production techniques; (v) computational fluid dynamics studies of possible large-scale periodicities in Flettner rotors; and (vi) the planning of a three-stage limited-area field research experiment, with the primary objectives of technology testing and determining to what extent, if any, cloud albedo might be enhanced by seeding marine stratocumulus clouds on a spatial scale of around 100×100 km. We stress that there would be no justification for deployment of MCB unless it was clearly established that no significant adverse consequences would result. There would also need to be an international agreement firmly in favour of such action.

  6. Cloud benchmarking for performance

    OpenAIRE

    Varghese, Blesson; Akgun, Ozgur; Miguel, Ian; Thai, Long; Barker, Adam

    2014-01-01

    How can applications be deployed on the cloud to achieve maximum performance? This question has become significant and challenging with the availability of a wide variety of Virtual Machines (VMs) with different performance capabilities in the cloud. The above question is addressed by proposing a six step benchmarking methodology in which a user provides a set of four weights that indicate how important each of the following groups: memory, processor, computa...
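
    The weighted ranking step implied by this methodology can be sketched as a normalized weighted sum of per-group benchmark scores for each VM type. The group names, scores, and weights below are illustrative placeholders (the abstract's own list of groups is truncated).

    ```python
    # Rank cloud VM types by user-weighted benchmark groups (sketch).
    def rank_vms(vms, weights):
        """vms: {name: {group: score in [0, 1]}}; weights: {group: importance}."""
        total = sum(weights.values())
        scored = {name: sum(weights[g] * s for g, s in groups.items()) / total
                  for name, groups in vms.items()}
        return sorted(scored.items(), key=lambda kv: kv[1], reverse=True)

    vms = {"vm.small": {"memory": 0.4, "processor": 0.3, "compute": 0.5, "storage": 0.6},
           "vm.large": {"memory": 0.9, "processor": 0.8, "compute": 0.7, "storage": 0.5}}
    weights = {"memory": 3, "processor": 5, "compute": 4, "storage": 1}
    print(rank_vms(vms, weights))            # best match for this workload first
    ```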

  7. Toward Cloud Computing Evolution

    OpenAIRE

    Susanto, Heru; Almunawar, Mohammad Nabil; Kang, Chen Chin

    2012-01-01

    Information Technology (IT) shaped the success of organizations, giving them a solid foundation that increases both their level of efficiency and their productivity. The computing industry is witnessing a paradigm shift in the way computing is performed worldwide. There is a growing awareness among consumers and enterprises of accessing their IT resources extensively through a "utility" model known as "cloud computing." Cloud computing was initially rooted in distributed grid-based computing. ...

  8. On Cloud-based Oversubscription

    OpenAIRE

    Householder, Rachel; Arnold, Scott; Green, Robert

    2014-01-01

    Rising trends in the number of customers turning to the cloud for their computing needs has made effective resource allocation imperative for cloud service providers. In order to maximize profits and reduce waste, providers have started to explore the role of oversubscribing cloud resources. However, the benefits of cloud-based oversubscription are not without inherent risks. This paper attempts to unveil the incentives, risks, and techniques behind oversubscription in a cloud infrastructure....

  9. SOME CONSIDERATIONS ON CLOUD ACCOUNTING

    OpenAIRE

    Doina Pacurari; Elena Nechita

    2013-01-01

    Cloud technologies have developed intensively during the last years. Cloud computing allows the customers to interact with their data and applications at any time, from any location, while the providers host these resources. A client company may choose to run in the cloud a part of its business (sales by agents, payroll, etc.), or even the entire business. The company can get access to a large category of cloud-based software, including accounting software. Cloud solutions are especially reco...

  10. CLOUD COMPUTING TECHNOLOGY TRENDS

    Directory of Open Access Journals (Sweden)

    Cristian IVANUS

    2014-05-01

    Full Text Available Cloud computing has been a tremendous innovation, through which applications became available online, accessible through an Internet connection and using any computing device (computer, smartphone or tablet). According to one of the most recent studies, conducted in 2012 by Everest Group and Cloud Connect, 57% of companies said they already use SaaS applications (Software as a Service), and 38% reported using standard PaaS tools (Platform as a Service). However, in most cases, the users of these solutions highlighted the fact that one of the main obstacles in the development of this technology is that, in the cloud, the application is not available without an Internet connection. The new challenge for cloud systems has now become offline use, specifically accessing SaaS applications without being connected to the Internet. This topic is directly related to user productivity within companies, as productivity growth is one of the key promises of the transformation brought by cloud computing applications. The aim of this paper is the presentation of some important aspects related to the offline cloud system and regulatory trends in the European Union (EU).

  11. System Reliability Engineering

    International Nuclear Information System (INIS)

    Lim, Tae Jin

    2005-02-01

    This book covers reliability engineering: quality and reliability; reliability data; the importance of reliability engineering; reliability measures; the Poisson process (goodness-of-fit tests and the Poisson arrival model); reliability estimation (e.g. under the exponential distribution); reliability of systems; availability; preventive maintenance (replacement policies, minimal-repair policies, shock models, spares, group maintenance, and periodic inspection); analysis of common-cause failures; and models of repair effectiveness.
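
    As a worked illustration of two of the standard results covered in such texts, the following computes mission reliability under an exponential failure model, R(t) = exp(−λt), and steady-state availability, A = MTBF/(MTBF + MTTR); the numbers are illustrative only.

      import math

      failure_rate = 1e-4        # lambda, failures per hour
      mission_time = 1000.0      # hours
      reliability = math.exp(-failure_rate * mission_time)   # ~0.905

      mtbf = 1.0 / failure_rate  # mean time between failures, hours
      mttr = 8.0                 # mean time to repair, hours
      availability = mtbf / (mtbf + mttr)                    # ~0.9992

      print(f"R({mission_time:.0f} h) = {reliability:.3f}, A = {availability:.4f}")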

  12. Enhancing Security by System-Level Virtualization in Cloud Computing Environments

    Science.gov (United States)

    Sun, Dawei; Chang, Guiran; Tan, Chunguang; Wang, Xingwei

    Many trends are opening up the era of cloud computing, which will reshape the IT industry. Virtualization techniques have become an indispensable ingredient of almost all cloud computing systems. Through virtual environments, a cloud provider is able to run the variety of operating systems needed by each cloud user. Virtualization can improve the reliability, security, and availability of applications through consolidation, isolation, and fault tolerance. In addition, it is possible to balance workloads by using live migration techniques. In this paper, the definition of cloud computing is given, and then the service and deployment models are introduced. Security issues and challenges in the implementation of cloud computing are analyzed. Moreover, a system-level virtualization case is established to enhance the security of cloud computing environments.

  13. Research on Quantum Authentication Methods for the Secure Access Control Among Three Elements of Cloud Computing

    Science.gov (United States)

    Dong, Yumin; Xiao, Shufen; Ma, Hongyang; Chen, Libo

    2016-12-01

    Cloud computing and big data have become the driving engines of current information technology (IT) as a result of its rapid development. However, security protection has become increasingly important for cloud computing and big data, and is a problem that must be solved for cloud computing to develop further. The theft of identity authentication information remains a serious threat to the security of cloud computing. Attackers intrude into cloud computing services through identity authentication information, thereby threatening the security of data from multiple perspectives. Therefore, this study proposes a model for cloud computing protection and management based on quantum authentication, introduces the principle of quantum authentication, and deduces the quantum authentication process. In theory, quantum authentication technology can be applied in cloud computing for security protection. Quantum authentication information cannot be cloned; thus, the method is more secure and reliable than classical ones.

  14. Factors Influencing the Adoption of Cloud Storage by Information Technology Decision Makers

    Science.gov (United States)

    Wheelock, Michael D.

    2013-01-01

    This dissertation uses a survey methodology to determine the factors behind the decision to adopt cloud storage. The dependent variable in the study is the intent to adopt cloud storage. Four independent variables are utilized: need, security, cost-effectiveness, and reliability. The survey includes a pilot test, a field test, and statistical…

  15. Errors resulting from assuming opaque Lambertian clouds in TOMS ozone retrieval

    International Nuclear Information System (INIS)

    Liu, X.; Newchurch, M.J.; Loughman, R.; Bhartia, P.K.

    2004-01-01

    Accurate remote sensing retrieval of atmospheric constituents over cloudy areas is very challenging because of insufficient knowledge of cloud parameters. Cloud treatments are highly idealized in most retrieval algorithms. Using a radiative transfer model treating clouds as scattering media, we investigate the effects of assuming opaque Lambertian clouds and employing a Partial Cloud Model (PCM) on Total Ozone Mapping Spectrometer (TOMS) ozone retrievals, especially for tropical high-reflectivity clouds. The assumption of angularly independent cloud reflection works well because the Ozone Retrieval Errors (OREs) are within 1.5% of the total ozone (i.e., within TOMS retrieval precision) when Cloud Optical Depth (COD) ≥ 20. Because of Intra-Cloud Ozone Absorption ENhancement (ICOAEN), assuming opaque clouds can introduce large OREs even for optically thick clouds. For a water cloud of COD 40 spanning 2-12 km with 20.8 Dobson Units (DU) of ozone homogeneously distributed in the cloud, the ORE is 17.8 DU in the nadir view. The ICOAEN effect depends greatly on solar zenith angle, view zenith angle, and intra-cloud ozone amount and distribution. The TOMS PCM performs well because negative errors from underestimation of the cloud fraction partly cancel other positive errors. At COD ≤ 5, the TOMS algorithm retrieves approximately the correct total ozone because of compensating errors. With increasing COD up to 20-40, the overall positive ORE increases and is finally dominated by the ICOAEN effect. The ICOAEN effect is typically 5-13 DU on average over the Atlantic and Africa and 1-7 DU over the Pacific for tropical high-altitude (cloud top pressure ≤ 300 hPa) and high-reflectivity (reflectivity ≥ 80%) clouds. Knowledge of TOMS ozone retrieval errors has important implications for remote sensing of ozone/trace gases from other satellite instruments.

  16. The Invigoration of Deep Convective Clouds Over the Atlantic: Aerosol Effect, Meteorology or Retrieval Artifact?

    Science.gov (United States)

    Koren, Ilan; Feingold, Graham; Remer, Lorraine A.

    2010-01-01

    Associations between cloud properties and aerosol loading are frequently observed in products derived from satellite measurements. These observed trends between clouds and aerosol optical depth suggest aerosol modification of cloud dynamics, yet there are uncertainties involved in satellite retrievals that have the potential to lead to incorrect conclusions. Two of the most challenging problems are addressed here: the potential for retrieved aerosol optical depth to be cloud-contaminated, and as a result, artificially correlated with cloud parameters; and the potential for correlations between aerosol and cloud parameters to be erroneously considered to be causal. Here these issues are tackled directly by studying the effects of the aerosol on convective clouds in the tropical Atlantic Ocean using satellite remote sensing, a chemical transport model, and a reanalysis of meteorological fields. Results show that there is a robust positive correlation between cloud fraction or cloud top height and the aerosol optical depth, regardless of whether a stringent filtering of aerosol measurements in the vicinity of clouds is applied, or not. These same positive correlations emerge when replacing the observed aerosol field with that derived from a chemical transport model. Model-reanalysis data is used to address the causality question by providing meteorological context for the satellite observations. A correlation exercise between the full suite of meteorological fields derived from model reanalysis and satellite-derived cloud fields shows that observed cloud top height and cloud fraction correlate best with model pressure updraft velocity and relative humidity. Observed aerosol optical depth does correlate with meteorological parameters but usually different parameters from those that correlate with observed cloud fields. The result is a near-orthogonal influence of aerosol and meteorological fields on cloud top height and cloud fraction. The results strengthen the case

  17. Analysis of co-located MODIS and CALIPSO observations near clouds

    Directory of Open Access Journals (Sweden)

    T. Várnai

    2012-02-01

    Full Text Available This paper aims to help synergistic studies that combine data from different satellites to gain new insights into two critical yet poorly understood aspects of anthropogenic climate change: aerosol-cloud interactions and aerosol radiative effects. In particular, the paper examines the way cloud information from the MODIS (MODerate resolution Imaging Spectroradiometer) imager can refine our perceptions, based on CALIOP (Cloud-Aerosol Lidar with Orthogonal Polarization) lidar measurements, of the systematic aerosol changes that occur near clouds.

    The statistical analysis of a yearlong dataset of co-located global maritime observations from the Aqua and CALIPSO (Cloud-Aerosol Lidar and Infrared Pathfinder Satellite Observation) satellites reveals that MODIS's multispectral imaging ability can greatly help the interpretation of CALIOP observations. The results show that imagers on Aqua and CALIPSO yield very similar pictures, and that the discrepancies – due mainly to wind drift and differences in view angle – do not significantly hinder aerosol measurements near clouds. By detecting clouds outside the CALIOP track, MODIS reveals that clouds are usually closer to clear areas than CALIOP data alone would suggest. The paper finds statistical relationships between the distances to clouds in MODIS and CALIOP data, and proposes a rescaling approach to statistically account for the impact of clouds outside the CALIOP track even when MODIS cannot reliably detect low clouds, for example at night or over sea ice. Finally, the results show that the typical distance to clouds depends on both cloud coverage and cloud type, and accordingly varies with location and season. In maritime areas perceived as cloud-free, the global median distance to clouds below 3 km altitude is in the 4–5 km range.

  18. Cloud networking understanding cloud-based data center networks

    CERN Document Server

    Lee, Gary

    2014-01-01

    Cloud Networking: Understanding Cloud-Based Data Center Networks explains the evolution of established networking technologies into distributed, cloud-based networks. Starting with an overview of cloud technologies, the book explains how cloud data center networks leverage distributed systems for network virtualization, storage networking, and software-defined networking. The author offers an insider's perspective on key components that make a cloud network possible, such as switch fabric technology and data center networking standards. The final chapters look ahead to developments in architectures

  19. Fast cloud parameter retrievals of MIPAS/Envisat

    Directory of Open Access Journals (Sweden)

    R. Spang

    2012-08-01

    and tropospheric clouds similar to that of space- and ground-based lidars, with a tendency toward higher cloud top heights and consequently higher sensitivity for some of the MIPAS detection methods. For the high cloud amount (HCA, pressure levels below 440 hPa), on global scales the sensitivity of MIPAS is significantly greater than that of passive nadir viewers. This means that the high cloud fraction will be underestimated in the ISCCP dataset compared to the amount of high clouds deduced by MIPAS. Good correspondence in the seasonal variability and geographical distribution of cloud occurrence and in zonal means of cloud top height is found in a detailed comparison with a climatology of subvisible cirrus clouds from the Stratospheric Aerosol and Gas Experiment II (SAGE II) limb sounder. Overall, validation with various sensors shows the need to consider differences in sensitivity, and especially the viewing geometries and field-of-view size, to make the datasets comparable (e.g. applying integration along the limb path through nadir cloud fields). The simulation of the limb-path integration will be an important issue for comparisons with cloud-resolving global circulation or chemical transport models.

  20. Marine cloud brightening

    Science.gov (United States)

    Latham, John; Bower, Keith; Choularton, Tom; Coe, Hugh; Connolly, Paul; Cooper, Gary; Craft, Tim; Foster, Jack; Gadian, Alan; Galbraith, Lee; Iacovides, Hector; Johnston, David; Launder, Brian; Leslie, Brian; Meyer, John; Neukermans, Armand; Ormond, Bob; Parkes, Ben; Rasch, Phillip; Rush, John; Salter, Stephen; Stevenson, Tom; Wang, Hailong; Wang, Qin; Wood, Rob

    2012-01-01

    The idea behind the marine cloud-brightening (MCB) geoengineering technique is that seeding marine stratocumulus clouds with copious quantities of roughly monodisperse sub-micrometre sea water particles might significantly enhance the cloud droplet number concentration, and thereby the cloud albedo and possibly longevity. This would produce a cooling, which general circulation model (GCM) computations suggest could—subject to satisfactory resolution of technical and scientific problems identified herein—have the capacity to balance global warming up to the carbon dioxide-doubling point. We describe herein an account of our recent research on a number of critical issues associated with MCB. This involves (i) GCM studies, which are our primary tools for evaluating globally the effectiveness of MCB, and assessing its climate impacts on rainfall amounts and distribution, and also polar sea-ice cover and thickness; (ii) high-resolution modelling of the effects of seeding on marine stratocumulus, which are required to understand the complex array of interacting processes involved in cloud brightening; (iii) microphysical modelling sensitivity studies, examining the influence of seeding amount, seed-particle salt-mass, air-mass characteristics, updraught speed and other parameters on cloud–albedo change; (iv) sea water spray-production techniques; (v) computational fluid dynamics studies of possible large-scale periodicities in Flettner rotors; and (vi) the planning of a three-stage limited-area field research experiment, with the primary objectives of technology testing and determining to what extent, if any, cloud albedo might be enhanced by seeding marine stratocumulus clouds on a spatial scale of around 100×100 km. We stress that there would be no justification for deployment of MCB unless it was clearly established that no significant adverse consequences would result. There would also need to be an international agreement firmly in favour of such action.

  1. USGEO DMWG Cloud Computing Recommendations

    Science.gov (United States)

    de la Beaujardiere, J.; McInerney, M.; Frame, M. T.; Summers, C.

    2017-12-01

    The US Group on Earth Observations (USGEO) Data Management Working Group (DMWG) has been developing Cloud Computing Recommendations for Earth Observations. This inter-agency report is currently in draft form; DMWG hopes to have released the report as a public Request for Information (RFI) by the time of AGU. The recommendations are geared toward organizations that have already decided to use the Cloud for some of their activities (i.e., the focus is not on "why you should use the Cloud," but rather "If you plan to use the Cloud, consider these suggestions.") The report comprises Introductory Material, including Definitions, Potential Cloud Benefits, and Potential Cloud Disadvantages, followed by Recommendations in several areas: Assessing When to Use the Cloud, Transferring Data to the Cloud, Data and Metadata Contents, Developing Applications in the Cloud, Cost Minimization, Security Considerations, Monitoring and Metrics, Agency Support, and Earth Observations-specific recommendations. This talk will summarize the recommendations and invite comment on the RFI.

  2. Cloud GIS Based Watershed Management

    Science.gov (United States)

    Bediroğlu, G.; Colak, H. E.

    2017-11-01

    In this study, we generated a Cloud GIS based watershed management system using a Cloud Computing architecture. Cloud GIS is used as SaaS (Software as a Service) and DaaS (Data as a Service). We applied GIS analysis in the cloud to test the SaaS side, and deployed GIS datasets in the cloud to test the DaaS side. We used a hybrid cloud computing model, drawing on ready-made web-based mapping services hosted in the cloud (world topology, satellite imagery). We uploaded the data to the system after creating geodatabases including hydrology (rivers, lakes), soil maps, climate maps, rain maps, geology, and land use. The watershed of the study area was determined in the cloud using the ready-hosted topology maps. After uploading all the datasets to the system, we applied various GIS analyses and queries. Results show that Cloud GIS technology brings speed and efficiency to watershed management studies. Besides this, the system can easily be implemented for similar land analysis and management studies.

  3. Two Models of Magnetic Support for Photoevaporated Molecular Clouds

    International Nuclear Information System (INIS)

    Ryutov, D; Kane, J; Mizuta, A; Pound, M; Remington, B

    2004-01-01

    The thermal pressure inside molecular clouds is insufficient for maintaining the pressure balance at an ablation front at the cloud surface illuminated by nearby UV stars. Most probably, the required stiffness is provided by the magnetic pressure. After surveying existing models of this type, we concentrate on two of them: the model of a quasi-homogeneous magnetic field and the recently proposed model of a "magnetostatic turbulence". We discuss observational consequences of the two models, in particular the structure and the strength of the magnetic field inside the cloud and in the ionized outflow. We comment on the possible role of reconnection events and their observational signatures. We mention laboratory experiments where the most significant features of the models can be tested.

  4. Cloud type comparisons of AIRS, CloudSat, and CALIPSO cloud height and amount

    Directory of Open Access Journals (Sweden)

    B. H. Kahn

    2008-03-01

    Full Text Available The precision of the two-layer cloud height fields derived from the Atmospheric Infrared Sounder (AIRS) is explored and quantified for a five-day set of observations. Coincident profiles of vertical cloud structure by CloudSat, a 94 GHz profiling radar, and the Cloud-Aerosol Lidar and Infrared Pathfinder Satellite Observation (CALIPSO), are compared to AIRS for a wide range of cloud types. Bias and variability in cloud height differences are shown to have dependence on cloud type, height, and amount, as well as whether CloudSat or CALIPSO is used as the comparison standard. The CloudSat-AIRS biases and variability range from −4.3 to 0.5±1.2–3.6 km for all cloud types. Likewise, the CALIPSO-AIRS biases range from 0.6–3.0±1.2–3.6 km (−5.8 to −0.2±0.5–2.7 km) for clouds ≥7 km (<7 km). The upper layer of AIRS has the greatest sensitivity to Altocumulus, Altostratus, Cirrus, Cumulonimbus, and Nimbostratus, whereas the lower layer has the greatest sensitivity to Cumulus and Stratocumulus. Although the bias and variability generally decrease with increasing cloud amount, the ability of AIRS to constrain cloud occurrence, height, and amount is demonstrated across all cloud types for many geophysical conditions. In particular, skill is demonstrated for thin Cirrus, as well as some Cumulus and Stratocumulus, cloud types that infrared sounders typically struggle to quantify. Furthermore, some improvements in the AIRS Version 5 operational retrieval algorithm are demonstrated. However, limitations in AIRS cloud retrievals are also revealed, including the existence of spurious Cirrus near the tropopause and low cloud layers within Cumulonimbus and Nimbostratus clouds. Likely causes of spurious clouds are identified and the potential for further improvement is discussed.

  5. Can Clouds replace Grids? Will Clouds replace Grids?

    International Nuclear Information System (INIS)

    Shiers, J D

    2010-01-01

    The world's largest scientific machine - comprising dual 27km circular proton accelerators cooled to 1.9 K and located some 100m underground - currently relies on major production Grid infrastructures for the offline computing needs of the 4 main experiments that will take data at this facility. After many years of sometimes difficult preparation the computing service has been declared 'open' and ready to meet the challenges that will come shortly when the machine restarts in 2009. But the service is not without its problems: reliability - as seen by the experiments, as opposed to that measured by the official tools - still needs to be significantly improved. Prolonged downtimes or degradations of major services or even complete sites are still too common and the operational and coordination effort to keep the overall service running is probably not sustainable at this level. Recently 'Cloud Computing' - in terms of pay-per-use fabric provisioning - has emerged as a potentially viable alternative but with rather different strengths and no doubt weaknesses too. Based on the concrete needs of the LHC experiments - where the total data volume that will be acquired over the full lifetime of the project, including the additional data copies that are required by the Computing Models of the experiments, approaches 1 Exabyte - we analyze the pros and cons of Grids versus Clouds. This analysis covers not only technical issues - such as those related to demanding database and data management needs - but also sociological aspects, which cannot be ignored, neither in terms of funding nor in the wider context of the essential but often overlooked role of science in society, education and economy.

  6. Can Clouds replace Grids? Will Clouds replace Grids?

    Energy Technology Data Exchange (ETDEWEB)

    Shiers, J D, E-mail: Jamie.Shiers@cern.c [CERN, 1211 Geneva 23 (Switzerland)

    2010-04-01

    The world's largest scientific machine - comprising dual 27km circular proton accelerators cooled to 1.9 K and located some 100m underground - currently relies on major production Grid infrastructures for the offline computing needs of the 4 main experiments that will take data at this facility. After many years of sometimes difficult preparation the computing service has been declared 'open' and ready to meet the challenges that will come shortly when the machine restarts in 2009. But the service is not without its problems: reliability - as seen by the experiments, as opposed to that measured by the official tools - still needs to be significantly improved. Prolonged downtimes or degradations of major services or even complete sites are still too common and the operational and coordination effort to keep the overall service running is probably not sustainable at this level. Recently 'Cloud Computing' - in terms of pay-per-use fabric provisioning - has emerged as a potentially viable alternative but with rather different strengths and no doubt weaknesses too. Based on the concrete needs of the LHC experiments - where the total data volume that will be acquired over the full lifetime of the project, including the additional data copies that are required by the Computing Models of the experiments, approaches 1 Exabyte - we analyze the pros and cons of Grids versus Clouds. This analysis covers not only technical issues - such as those related to demanding database and data management needs - but also sociological aspects, which cannot be ignored, neither in terms of funding nor in the wider context of the essential but often overlooked role of science in society, education and economy.

  7. Can Clouds replace Grids? Will Clouds replace Grids?

    Science.gov (United States)

    Shiers, J. D.

    2010-04-01

    The world's largest scientific machine - comprising dual 27km circular proton accelerators cooled to 1.9 K and located some 100m underground - currently relies on major production Grid infrastructures for the offline computing needs of the 4 main experiments that will take data at this facility. After many years of sometimes difficult preparation the computing service has been declared "open" and ready to meet the challenges that will come shortly when the machine restarts in 2009. But the service is not without its problems: reliability - as seen by the experiments, as opposed to that measured by the official tools - still needs to be significantly improved. Prolonged downtimes or degradations of major services or even complete sites are still too common and the operational and coordination effort to keep the overall service running is probably not sustainable at this level. Recently "Cloud Computing" - in terms of pay-per-use fabric provisioning - has emerged as a potentially viable alternative but with rather different strengths and no doubt weaknesses too. Based on the concrete needs of the LHC experiments - where the total data volume that will be acquired over the full lifetime of the project, including the additional data copies that are required by the Computing Models of the experiments, approaches 1 Exabyte - we analyze the pros and cons of Grids versus Clouds. This analysis covers not only technical issues - such as those related to demanding database and data management needs - but also sociological aspects, which cannot be ignored, neither in terms of funding nor in the wider context of the essential but often overlooked role of science in society, education and economy.

  8. Counting the clouds

    International Nuclear Information System (INIS)

    Randall, David A

    2005-01-01

    Cloud processes are very important for the global circulation of the atmosphere. It is now possible, though very expensive, to simulate the global circulation of the atmosphere using a model with resolution fine enough to explicitly represent the larger individual clouds. An impressive preliminary calculation of this type has already been performed by Japanese scientists, using the Earth Simulator. Within the next few years, such global cloud-resolving models (GCRMs) will be applied to weather prediction, and later they will be used in climate-change simulations. The tremendous advantage of GCRMs, relative to conventional lower-resolution global models, is that GCRMs can avoid many of the questionable 'parameterizations' used to represent cloud effects in lower-resolution global models. Although cloud microphysics, turbulence, and radiation must still be parameterized in GCRMs, the high resolution of a GCRM simplifies these problems considerably, relative to conventional models. The United States currently has no project to develop a GCRM, although we have both the computer power and the expertise to do it. A research program aimed at development and applications of GCRMs is outlined.

  9. Trust management in cloud services

    CERN Document Server

    Noor, Talal H; Bouguettaya, Athman

    2014-01-01

    This book describes the design and implementation of Cloud Armor, a novel approach for credibility-based trust management and automatic discovery of cloud services in distributed and highly dynamic environments. This book also helps cloud users understand the difficulties of establishing trust in cloud computing and the best criteria for selecting a cloud service. The techniques have been validated by a prototype system implementation and experimental studies using a collection of real-world trust feedback on cloud services. The authors present the design and implementation of a novel pro

  10. Scale analysis of convective clouds

    Directory of Open Access Journals (Sweden)

    Micha Gryschka

    2008-12-01

    Full Text Available The size distribution of cumulus clouds due to shallow and deep convection is analyzed using satellite pictures, LES model results, and data from the German rain radar network. The size distributions found can be described by simple power laws, as has also been proposed for other cloud data in the literature. As the precipitation observed at ground stations is ultimately determined by the number of clouds in an area and by the sizes and rain rates of the individual clouds, cloud size distributions might be used for developing empirical precipitation forecasts or for validating results from the cloud-resolving models being introduced into routine weather forecasting.
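
    For illustration, a power law N(s) ∝ s^(−b) is commonly fitted by least squares in log–log space; the sketch below uses synthetic bin counts, not the paper's data.

      import numpy as np

      sizes = np.array([1, 2, 4, 8, 16, 32])             # cloud-size bins (km)
      counts = np.array([5200, 1900, 700, 260, 95, 35])  # clouds per bin (synthetic)

      slope, intercept = np.polyfit(np.log(sizes), np.log(counts), 1)
      print(f"power-law exponent b = {-slope:.2f}")      # N(s) = e^intercept * s^slope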

  11. Reviews on Security Issues and Challenges in Cloud Computing

    Science.gov (United States)

    An, Y. Z.; Zaaba, Z. F.; Samsudin, N. F.

    2016-11-01

    Cloud computing is an Internet-based computing service, provided by a third party, allowing the sharing of resources and data among devices. It is widely used in many organizations nowadays and is becoming more popular because it changes how the Information Technology (IT) of an organization is organized and managed. It provides many benefits, such as simplicity and lower costs, almost unlimited storage, minimal maintenance, easy utilization, backup and recovery, continuous availability, quality of service, automated software integration, scalability, flexibility and reliability, easy access to information, elasticity, quick deployment, and a lower barrier to entry. While the use of cloud computing services increases in this new era, their security becomes a challenge. Cloud computing must be safe and secure enough to ensure the privacy of its users. This paper first lays out the architecture of cloud computing, then discusses the most common security issues of using the cloud and some solutions to them, since security is one of the most critical aspects of cloud computing due to the sensitivity of users' data.

  12. Effective ASCII-HEX steganography for secure cloud

    International Nuclear Information System (INIS)

    Afghan, S.

    2015-01-01

    There are many reasons for the popularity of cloud computing; some of the most important are backup and recovery, cost effectiveness, nearly limitless storage, automatic software amalgamation, easy access to information, and many more. A pay-as-you-go model is followed to provide everything as a service. Data are secured using the standard security policies available at the cloud end. In spite of its many benefits, as mentioned above, cloud computing also has some security issues. Provider as well as customer must provide and collect data in a secure manner. Both of these issues, plus the efficient transmission of data over the cloud, are critical and need to be resolved. Sensitive data need security during their travel time over the network before being processed or stored by the customer. Security for the customer's data at the provider end can be provided by using current security algorithms, which are not known by the customer. There is a reliability problem due to the existence of multiple boundaries in cloud resource access. ASCII and HEX security with steganography is used to propose an algorithm that stores the encrypted data/cipher text in an image file, which is then sent to the cloud end. This is done by using the CDM (Common Deployment Model). In future work, an algorithm should be proposed and implemented for the security of virtual images in cloud computing. (author)
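
    The record does not give the authors' exact embedding scheme, so the sketch below shows only one generic reading of the idea: hex-encode the ciphertext (the ASCII-HEX form) and hide its bits in the least-significant bits of the cover image's pixel bytes. All names and data are hypothetical.

      # Generic LSB steganography over an ASCII-HEX payload; not the
      # paper's algorithm, just an illustration of the concept.
      def embed(pixels: bytearray, ciphertext: bytes) -> bytearray:
          payload = ciphertext.hex().encode("ascii")       # ASCII-HEX form
          bits = "".join(f"{b:08b}" for b in payload)
          if len(bits) > len(pixels):
              raise ValueError("cover image too small for payload")
          for i, bit in enumerate(bits):
              pixels[i] = (pixels[i] & 0xFE) | int(bit)    # overwrite the LSB
          return pixels

      def extract(pixels: bytearray, n_bytes: int) -> bytes:
          bits = "".join(str(p & 1) for p in pixels[: n_bytes * 16])
          hex_str = bytes(int(bits[i:i + 8], 2) for i in range(0, len(bits), 8))
          return bytes.fromhex(hex_str.decode("ascii"))

      cover = bytearray(range(256)) * 8        # stand-in for image pixel bytes
      secret = b"\x13\x37\xca\xfe"             # stand-in for ciphertext
      assert extract(embed(cover, secret), len(secret)) == secret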

  13. Assessment of Global Cloud Datasets from Satellites: Project and Database Initiated by the GEWEX Radiation Panel

    Science.gov (United States)

    Stubenrauch, C. J.; Rossow, W. B.; Kinne, S.; Ackerman, S.; Cesana, G.; Chepfer, H.; Getzewich, B.; Di Girolamo, L.; Guignard, A.; Heidinger, A.; et al.

    2012-01-01

    Clouds cover about 70% of the Earth's surface and play a dominant role in the energy and water cycle of our planet. Only satellite observations provide a continuous survey of the state of the atmosphere over the whole globe and across the wide range of spatial and temporal scales that comprise weather and climate variability. Satellite cloud data records now exceed more than 25 years in length. However, climatologies compiled from different satellite datasets can exhibit systematic biases. Questions therefore arise as to the accuracy and limitations of the various sensors. The Global Energy and Water cycle Experiment (GEWEX) Cloud Assessment, initiated in 2005 by the GEWEX Radiation Panel, provided the first coordinated intercomparison of publically available, standard global cloud products (gridded, monthly statistics) retrieved from measurements of multi-spectral imagers (some with multiangle view and polarization capabilities), IR sounders and lidar. Cloud properties under study include cloud amount, cloud height (in terms of pressure, temperature or altitude), cloud radiative properties (optical depth or emissivity), cloud thermodynamic phase and bulk microphysical properties (effective particle size and water path). Differences in average cloud properties, especially in the amount of high-level clouds, are mostly explained by the inherent instrument measurement capability for detecting and/or identifying optically thin cirrus, especially when overlying low-level clouds. The study of long-term variations with these datasets requires consideration of many factors. A monthly, gridded database, in common format, facilitates further assessments, climate studies and the evaluation of climate models.

  14. Moving HammerCloud to CERN's private cloud

    CERN Document Server

    Barrand, Quentin

    2013-01-01

    HammerCloud is a testing framework for the Worldwide LHC Computing Grid. Because it is currently deployed on about 20 hand-managed machines, it was desirable to move it to the Agile Infrastructure, CERN's OpenStack-based private cloud.

  15. Modeling of Cloud/Radiation Processes for Cirrus Cloud Formation

    National Research Council Canada - National Science Library

    Liou, K

    1997-01-01

    This technical report includes five reprints and pre-prints of papers associated with the modeling of cirrus cloud and radiation processes as well as remote sensing of cloud optical and microphysical...

  16. CLON: Overlay Networks and Gossip Protocols for Cloud Environments

    Science.gov (United States)

    Matos, Miguel; Sousa, António; Pereira, José; Oliveira, Rui; Deliot, Eric; Murray, Paul

    Although epidemic or gossip-based multicast is a robust and scalable approach to reliable data dissemination, its inherent redundancy results in high resource consumption on both links and nodes. This problem is aggravated in settings that have costlier or resource-constrained links, as happens in Cloud Computing infrastructures composed of several interconnected data centers across the globe.

  17. The Advance of Computing from the Ground to the Cloud

    Science.gov (United States)

    Breeding, Marshall

    2009-01-01

    A trend toward the abstraction of computing platforms that has been developing in the broader IT arena over the last few years is just beginning to make inroads into the library technology scene. Cloud computing offers for libraries many interesting possibilities that may help reduce technology costs and increase capacity, reliability, and…

  18. AMSAA Reliability Growth Guide

    National Research Council Canada - National Science Library

    Broemm, William

    2000-01-01

    ... has developed reliability growth methodology for all phases of the process, from planning to tracking to projection. The report presents this methodology and associated reliability growth concepts.

  19. Stability of interstellar clouds containing magnetic fields

    International Nuclear Information System (INIS)

    Langer, W. D. (Bell Laboratories, Crawford Hill Laboratory, Holmdel, NJ)

    1978-01-01

    The stability of interstellar clouds against gravitational collapse and fragmentation in the presence of magnetic fields is investigated. A magnetic field can provide pressure support against collapse if it is strongly coupled to the neutral gas; this coupling is mediated by ion-neutral collisions in the gas. The time scale for the growth of perturbations in the gas is found to be a sensitive function of the fractional ion abundance of the gas. For a relatively large fractional ion abundance, corresponding to strong coupling, the collapse of the gas is retarded. Star formation is inhibited in dense clouds, and the collapse time for diffuse clouds can exceed the limit on their lifetime set by disruptive processes. For a small fractional ion abundance, the magnetic fields do not inhibit collapse, and the distribution of the masses of collapsing fragments is likely to be quite different in regions of differing ion abundance. The solutions also predict the existence of large-scale density waves corresponding to two gravitational-magnetoacoustic modes. The conditions which best support these modes correspond to those found in the giant molecular clouds.

  20. Tharsis Limb Cloud

    Science.gov (United States)

    2005-01-01

    [Figure (removed from this record): annotated image of the Tharsis limb cloud.] 7 September 2005. This composite of red and blue Mars Global Surveyor (MGS) Mars Orbiter Camera (MOC) daily global images acquired on 6 July 2005 shows an isolated water ice cloud extending more than 30 kilometers (more than 18 miles) above the martian surface. Clouds such as this are common in late spring over the terrain located southwest of the Arsia Mons volcano. Arsia Mons is the dark, oval feature near the limb, just to the left of the 'T' in the 'Tharsis Montes' label. The dark, nearly circular feature above the 'S' in 'Tharsis' is the volcano, Pavonis Mons, and the other dark circular feature, above and to the right of 's' in 'Montes,' is Ascraeus Mons. Illumination is from the left/lower left. Season: Northern Autumn/Southern Spring

  1. Transition to the Cloud

    DEFF Research Database (Denmark)

    Hedman, Jonas; Xiao, Xiao

    2016-01-01

    The rise of cloud computing has dramatically changed the way software companies provide and distribute their IT products and related services over the last decades. Today, most software is bought off-the-shelf and distributed over the Internet. This transition is greatly influencing how software...... companies operate. In this paper, we present a case study of an ERP vendor for SMBs (small and medium-size businesses) making a transition towards a cloud-based business model. Through the theoretical lens of ecosystems, we are able to analyze the evolution of the vendor and its business network as a whole......, and find that the relationship between the vendor and its Value-Added Resellers (VARs) is greatly affected. We conclude by presenting critical issues and challenges for managing such a cloud transition....

  2. Cloud Computing: A study of cloud architecture and its patterns

    OpenAIRE

    Mandeep Handa; Shriya Sharma

    2015-01-01

    Cloud computing is a general term for anything that involves delivering hosted services over the Internet. It is a paradigm shift following the shift from mainframe to client–server in the early 1980s. Cloud computing can be defined as accessing third-party software and services on the web and paying per usage. It facilitates scalability and virtualized resources over the Internet as a service, providing a cost-effective and scalable solution to customers. Cloud computing has...

  3. Ash cloud aviation advisories

    Energy Technology Data Exchange (ETDEWEB)

    Sullivan, T.J.; Ellis, J.S. [Lawrence Livermore National Lab., CA (United States); Schalk, W.W.; Nasstrom, J.S. [EG and G, Inc., Pleasanton, CA (United States)

    1992-06-25

    During the recent (12–22 June 1991) Mount Pinatubo volcano eruptions, the US Air Force Global Weather Central (AFGWC) requested assistance of the US Department of Energy's Atmospheric Release Advisory Capability (ARAC) in creating volcanic ash cloud aviation advisories for the region of the Philippine Islands. Through application of its three-dimensional material transport and diffusion models using AFGWC meteorological analysis and forecast wind fields, ARAC developed extensive analysis and 12-hourly forecast ash cloud position advisories extending to 48 hours for a period of five days. The advisories consisted of "relative" ash cloud concentrations in ten layers (surface–5,000 feet, 5,000–10,000 feet and every 10,000 feet to 90,000 feet). The ash was represented as a log-normal size distribution of 10–200 μm diameter solid particles. Size-dependent "ashfall" was simulated over time as the eruption clouds dispersed. Except for an internal experimental attempt to model one of the Mount Redoubt, Alaska, eruptions (12/89), ARAC had no prior experience in modeling volcanic eruption ash hazards. For the cataclysmic eruption of 15–16 June, the complex three-dimensional atmospheric structure of the region produced dramatically divergent ash cloud patterns. The large eruptions (>7–10 km) produced ash plume clouds with strong westward transport over the South China Sea, Southeast Asia, India and beyond. The low-level eruptions (<7 km) and quasi-steady-state venting produced a plume which generally dispersed to the north and east throughout the support period. Modeling the sequence of eruptions presented a unique challenge. Although the initial approach proved viable, further refinement is necessary and possible. A distinct need exists to quantify eruptions consistently such that "relative" ash concentrations relate to specific aviation hazard categories.
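
    The size-dependent ashfall idea can be sketched by sampling a log-normal particle-size distribution over the stated 10–200 μm range and estimating each particle's Stokes settling velocity; all parameter values below are illustrative, not ARAC's.

      import numpy as np

      rng = np.random.default_rng(0)
      d = np.clip(rng.lognormal(mean=np.log(50e-6), sigma=0.6, size=10_000),
                  10e-6, 200e-6)                 # particle diameters (m)

      rho_p, rho_a = 2500.0, 1.0                 # particle/air density (kg/m3)
      mu, g = 1.8e-5, 9.81                       # air viscosity (Pa s), gravity (m/s2)

      v_settle = (rho_p - rho_a) * g * d**2 / (18 * mu)   # Stokes law (m/s)
      print(f"median fall speed: {np.median(v_settle):.2f} m/s")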

  4. Cloud Collaboration: Cloud-Based Instruction for Business Writing Class

    Science.gov (United States)

    Lin, Charlie; Yu, Wei-Chieh Wayne; Wang, Jenny

    2014-01-01

    Cloud computing technologies, such as Google Docs, Adobe Creative Cloud, Dropbox, and Microsoft Windows Live, are increasingly appreciated as next-generation digital learning tools. Cloud computing technologies encourage students' active engagement, collaboration, and participation in their learning, facilitate group work, and support…

  5. Cloud blueprints for integrating and managing cloud federations

    NARCIS (Netherlands)

    Papazoglou, M.; Heisel, M.

    2012-01-01

    Contemporary cloud technologies face insurmountable obstacles. They follow a pull-based, producer-centric trajectory to development where cloud consumers have to ‘squeeze and bolt’ applications onto cloud APIs. They also introduce a monolithic SaaS/PaaS/IaaS stack where a one-size-fits-all mentality

  6. Opaque cloud detection

    Science.gov (United States)

    Roskovensky, John K [Albuquerque, NM]

    2009-01-20

    A method of detecting clouds in a digital image comprising, for an area of the digital image, determining a reflectance value in at least three discrete electromagnetic spectrum bands, computing a first ratio of one reflectance value minus another reflectance value and the same two values added together, computing a second ratio of one reflectance value and another reflectance value, choosing one of the reflectance values, and concluding that an opaque cloud exists in the area if the results of each of the two computing steps and the choosing step fall within three corresponding predetermined ranges.
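
    A direct transliteration of the claimed test is sketched below; the specific bands and threshold ranges are placeholders, since the record does not state the patent's actual values.

      def opaque_cloud(r_a: float, r_b: float, r_c: float) -> bool:
          """r_a, r_b, r_c: reflectances of one area in three spectral bands."""
          normalized_diff = (r_a - r_b) / (r_a + r_b)   # first ratio
          simple_ratio = r_c / r_a                      # second ratio
          brightness = r_c                              # the chosen reflectance
          return (-0.1 < normalized_diff < 0.1 and      # placeholder range 1
                  0.8 < simple_ratio < 1.2 and          # placeholder range 2
                  brightness > 0.4)                     # placeholder range 3

      print(opaque_cloud(0.55, 0.52, 0.58))   # True: bright, spectrally flat area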

  7. Storm and cloud dynamics

    CERN Document Server

    Cotton, William R

    1992-01-01

    This book focuses on the dynamics of clouds and of precipitating mesoscale meteorological systems. Clouds and precipitating mesoscale systems represent some of the most important and scientifically exciting weather systems in the world. These are the systems that produce torrential rains, severe winds including downbursts and tornadoes, hail, thunder and lightning, and major snow storms. Forecasting such storms represents a major challenge since they are too small to be adequately resolved by conventional observing networks and numerical prediction models.

  8. Detailed Information Security in Cloud Computing

    OpenAIRE

    Pavel Valerievich Ivonin

    2013-01-01

    The object of research in this article is public cloud technology and the structure and security systems of clouds. Problems of information security in clouds are considered, and the elements of the security system in public clouds are described.

  9. Cloud Based Applications and Platforms (Presentation)

    Energy Technology Data Exchange (ETDEWEB)

    Brodt-Giles, D.

    2014-05-15

    Presentation to the Cloud Computing East 2014 Conference, highlighting our cloud computing strategy, describing the platforms on the cloud (including Smartgrid.gov), and defining our process for implementing cloud-based applications.

  10. Relationships among cloud occurrence frequency, overlap, and effective thickness derived from CALIPSO and CloudSat merged cloud vertical profiles

    Science.gov (United States)

    Kato, Seiji; Sun-Mack, Sunny; Miller, Walter F.; Rose, Fred G.; Chen, Yan; Minnis, Patrick; Wielicki, Bruce A.

    2010-01-01

    A cloud frequency of occurrence matrix is generated using merged cloud vertical profiles derived from the satellite-borne Cloud-Aerosol Lidar with Orthogonal Polarization (CALIOP) and cloud profiling radar. The matrix contains vertical profiles of cloud occurrence frequency as a function of the uppermost cloud top. It is shown that the cloud fraction and uppermost cloud top vertical profiles can be related by a cloud overlap matrix when the correlation length of cloud occurrence, which is interpreted as an effective cloud thickness, is introduced. The underlying assumption in establishing the above relation is that cloud overlap approaches random overlap with increasing distance separating cloud layers and that the probability of deviating from random overlap decreases exponentially with distance. One month of Cloud-Aerosol Lidar and Infrared Pathfinder Satellite Observation (CALIPSO) and CloudSat data (July 2006) support these assumptions, although the correlation length sometimes increases with separation distance when the cloud top height is large. The data also show that the correlation length depends on cloud top height and that the maximum occurs when the cloud top height is 8 to 10 km. The cloud correlation length is equivalent to the decorrelation distance introduced by Hogan and Illingworth (2000) when the cloud fractions of both layers in a two-cloud-layer system are the same. The simple relationships derived in this study can be used to estimate the top-of-atmosphere irradiance difference caused by cloud fraction, uppermost cloud top, and cloud thickness vertical profile differences.
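
    The exponential-decay assumption can be written compactly: with overlap parameter α = exp(−Δz/L), where Δz is the layer separation and L the correlation length, the combined cover of two layers is α·C_max + (1−α)·C_rand, the form introduced by Hogan and Illingworth (2000). A minimal sketch with illustrative numbers:

      import math

      def combined_cover(c1, c2, dz_km, corr_len_km):
          alpha = math.exp(-dz_km / corr_len_km)   # overlap parameter
          c_max = max(c1, c2)                      # maximum overlap
          c_rand = c1 + c2 - c1 * c2               # random overlap
          return alpha * c_max + (1 - alpha) * c_rand

      for dz in (0.5, 2.0, 8.0):                   # layer separations (km)
          print(dz, round(combined_cover(0.4, 0.3, dz, corr_len_km=2.0), 3))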

  11. A reliability simulation language for reliability analysis

    International Nuclear Information System (INIS)

    Deans, N.D.; Miller, A.J.; Mann, D.P.

    1986-01-01

    This paper describes work being undertaken to develop a Reliability Description Language (RDL) that will enable reliability analysts to describe complex reliability problems in a simple, clear, and unambiguous way. Component and system features can be stated in a formal manner and subsequently used, along with control statements, to form a structured program. The program can be compiled and executed on a general-purpose computer or a special-purpose simulator. (DG)
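
    The RDL syntax itself is not shown in this record; the generic Monte Carlo sketch below (with hypothetical component data) illustrates the kind of structured problem such a language would describe and then simulate: two redundant pumps in series with a controller.

      import random

      def component_up(fail_prob):                 # one trial of one component
          return random.random() > fail_prob

      def system_up():
          pumps = component_up(0.05) or component_up(0.05)   # parallel pair
          controller = component_up(0.01)
          return pumps and controller                        # series combination

      trials = 100_000
      reliability = sum(system_up() for _ in range(trials)) / trials
      print(f"estimated system reliability: {reliability:.4f}")   # ~0.9875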

  12. Securing virtual and cloud environments

    CSIR Research Space (South Africa)

    Carroll, M

    2012-01-01

    Full Text Available targets such as reduced costs, scalability, flexibility, capacity utilisation, higher efficiencies and mobility. Many of these benefits are achieved through the utilisation of technologies such as cloud computing and virtualisation. In many instances cloud...

  13. Beam Induced Pressure Rise at RHIC

    CERN Document Server

    Zhang, S Y; Bai, Mei; Blaskiewicz, Michael; Cameron, Peter; Drees, Angelika; Fischer, Wolfram; Gullotta, Justin; He, Ping; Hseuh Hsiao Chaun; Huang, Haixin; Iriso, Ubaldo; Lee, Roger C; Litvinenko, Vladimir N; MacKay, William W; Nicoletti, Tony; Oerter, Brian; Peggs, Steve; Pilat, Fulvia Caterina; Ptitsyn, Vadim; Roser, Thomas; Satogata, Todd; Smart, Loralie; Snydstrup, Louis; Thieberger, Peter; Trbojevic, Dejan; Wang, Lanfa; Wei, Jie; Zeno, Keith

    2005-01-01

    Beam-induced pressure rise in the RHIC warm sections is currently one of the machine's intensity and luminosity limits. This pressure rise is mainly due to electron-cloud effects. The RHIC warm-section electron cloud is associated with longer bunch spacings than in other machines, and is distributed non-uniformly around the ring. In addition to the countermeasures for a normal electron cloud, such as NEG-coated pipe, solenoids, beam scrubbing, bunch gaps, and larger bunch spacing, other studies and beam tests toward understanding and counteracting the RHIC warm electron cloud are of interest. These include ion-desorption studies and the test of anti-grazing ridges. For high bunch intensities and the shortest bunch spacings, pressure rises at certain locations in the cryogenic region have been observed during the past two runs. Beam studies are planned for the current 2005 run and the results will be reported.

  14. FAME-C: cloud property retrieval using synergistic AATSR and MERIS observations

    Directory of Open Access Journals (Sweden)

    C. K. Carbajal Henken

    2014-11-01

    Full Text Available A newly developed daytime cloud property retrieval algorithm, FAME-C (Freie Universität Berlin AATSR MERIS Cloud), is presented. Synergistic observations from the Advanced Along-Track Scanning Radiometer (AATSR) and the Medium Resolution Imaging Spectrometer (MERIS), both mounted on the polar-orbiting Environmental Satellite (Envisat), are used for cloud screening. For cloudy pixels two main steps are carried out in a sequential form. First, a cloud optical and microphysical property retrieval is performed using an AATSR near-infrared and visible channel. Cloud phase, cloud optical thickness, and effective radius are retrieved, and subsequently cloud water path is computed. Second, two cloud top height products are retrieved based on independent techniques. For cloud top temperature, measurements in the AATSR infrared channels are used, while for cloud top pressure, measurements in the MERIS oxygen-A absorption channel are used. Results from the cloud optical and microphysical property retrieval serve as input for the two cloud top height retrievals. Introduced here are the AATSR and MERIS forward models and auxiliary data needed in FAME-C. Also, the optimal estimation method, which provides uncertainty estimates of the retrieved property on a pixel basis, is presented. Within the frame of the European Space Agency (ESA) Climate Change Initiative (CCI) project, the first global cloud property retrievals have been conducted for the years 2007–2009. For this time period, verification efforts are presented, comparing, for four selected regions around the globe, FAME-C cloud optical and microphysical properties to cloud optical and microphysical properties derived from measurements of the Moderate Resolution Imaging Spectroradiometer (MODIS) on the Terra satellite. The results show a reasonable agreement between the cloud optical and microphysical property retrievals. Biases are generally smallest for marine stratocumulus clouds: −0.28, 0.41 μm and
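
    The optimal estimation machinery behind retrievals of this kind can be sketched in its linear (Rodgers) form; every matrix below is a toy value, not FAME-C's.

      import numpy as np

      x_a = np.array([10.0, 8.0])          # prior state (e.g. optical depth, r_eff)
      S_a = np.diag([25.0, 16.0])          # prior covariance
      y = np.array([0.52, 0.31, 0.44])     # measured reflectances
      y_a = np.array([0.50, 0.30, 0.45])   # forward model F(x_a)
      K = np.array([[0.020, 0.010],        # Jacobian dF/dx at x_a
                    [0.010, 0.020],
                    [0.015, 0.005]])
      S_e = np.diag([1e-4, 1e-4, 1e-4])    # measurement-noise covariance

      G = S_a @ K.T @ np.linalg.inv(K @ S_a @ K.T + S_e)   # gain matrix
      x_hat = x_a + G @ (y - y_a)                          # retrieved state
      S_hat = np.linalg.inv(K.T @ np.linalg.inv(S_e) @ K + np.linalg.inv(S_a))
      print(x_hat, np.sqrt(np.diag(S_hat)))  # state and 1-sigma uncertainties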

  15. Electron Cloud Parameterization Studies in the LHC

    CERN Document Server

    Dominguez, O; Baglin, V; Bregliozzi, G; Jimenez, J M; Metral, E; Rumolo, G; Schulte, D; Zimmermann, F

    2011-01-01

    During LHC beam commissioning with 150, 75 and 50-ns bunch spacing, important electron-cloud effects, like pressure rise, cryogenic heat load, beam instabilities or emittance growth, were observed. The main strategy to combat the LHC electron cloud, defined about ten years ago, relies on the surface conditioning arising from the chamber-surface bombardment with cloud electrons. In a standard model, the conditioning state of the beam-pipe surface is characterized by three parameters: (1) most importantly, the secondary emission yield δ_max; (2) the incident electron energy at which the yield is maximum, ε_max; and (3) the probability of elastic reflection of low-energy primary electrons hitting the chamber wall, R. Since at the LHC no in-situ secondary-yield measurements are available, we compare the relative local pressure-rise measurements taken for different beam configurations against simulations in which surface parameters are scanned. This benchmarking of measurements and simulations is used to infer the s...
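
    For reference, one commonly used parameterization of the true-secondary yield curve (e.g. the Furman–Pivi form) shows how δ_max and ε_max enter such conditioning models; the parameter values below are illustrative, not measured LHC surface values.

      import math

      def sey(eps_eV, delta_max=1.7, eps_max=230.0, s=1.35):
          """True-secondary emission yield versus primary electron energy (eV)."""
          x = eps_eV / eps_max
          return delta_max * s * x / (s - 1 + x**s)

      for e in (50, 230, 1000):
          print(f"delta({e} eV) = {sey(e):.2f}")   # peaks at eps_max by construction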

  16. Detection of hydrogen sulfide above the clouds in Uranus's atmosphere

    Science.gov (United States)

    Irwin, Patrick G. J.; Toledo, Daniel; Garland, Ryan; Teanby, Nicholas A.; Fletcher, Leigh N.; Orton, Glenn A.; Bézard, Bruno

    2018-04-01

    Visible-to-near-infrared observations indicate that the cloud top of the main cloud deck on Uranus lies at a pressure level of between 1.2 bar and 3 bar. However, its composition has never been unambiguously identified, although it is widely assumed to be composed primarily of either ammonia or hydrogen sulfide (H2S) ice. Here, we present evidence of a clear detection of gaseous H2S above this cloud deck in the wavelength region 1.57-1.59 μm with a mole fraction of 0.4-0.8 ppm at the cloud top. Its detection constrains the deep bulk sulfur/nitrogen abundance to exceed unity (>4.4-5.0 times the solar value) in Uranus's bulk atmosphere, and places a lower limit on the mole fraction of H2S below the observed cloud of (1.0–2.5) × 10⁻⁵. The detection of gaseous H2S at these pressure levels adds to the weight of evidence that the principal constituent of the 1.2-3-bar cloud is likely to be H2S ice.

  17. Detection of hydrogen sulfide above the clouds in Uranus's atmosphere

    Science.gov (United States)

    Irwin, Patrick G. J.; Toledo, Daniel; Garland, Ryan; Teanby, Nicholas A.; Fletcher, Leigh N.; Orton, Glenn A.; Bézard, Bruno

    2018-05-01

    Visible-to-near-infrared observations indicate that the cloud top of the main cloud deck on Uranus lies at a pressure level of between 1.2 bar and 3 bar. However, its composition has never been unambiguously identified, although it is widely assumed to be composed primarily of either ammonia or hydrogen sulfide (H2S) ice. Here, we present evidence of a clear detection of gaseous H2S above this cloud deck in the wavelength region 1.57-1.59 μm with a mole fraction of 0.4-0.8 ppm at the cloud top. Its detection constrains the deep bulk sulfur/nitrogen abundance to exceed unity (>4.4-5.0 times the solar value) in Uranus's bulk atmosphere, and places a lower limit on the mole fraction of H2S below the observed cloud of (1.0–2.5) × 10⁻⁵. The detection of gaseous H2S at these pressure levels adds to the weight of evidence that the principal constituent of the 1.2-3-bar cloud is likely to be H2S ice.

  18. Efficient Resource Management in Cloud Computing

    OpenAIRE

    Rushikesh Shingade; Amit Patil; Shivam Suryawanshi; M. Venkatesan

    2015-01-01

    Cloud computing is one of the most widely used technologies for providing cloud services to users, who are charged for the services they receive. Given the very large number of resources involved, evaluating and optimizing the performance of cloud resource-management policies is difficult. Different simulation toolkits are available for simulating and modelling the cloud computing environment, such as GridSim, CloudAnalyst, CloudSim, GreenCloud, CloudAuction, etc. In the proposed Efficient Resource Manage...

  19. Cluster analysis of midlatitude oceanic cloud regimes: mean properties and temperature sensitivity

    Directory of Open Access Journals (Sweden)

    N. D. Gordon

    2010-07-01

    temperature advection in the warm and cold subsets to have near-median values in three layers of the troposphere. Across all of the seven clusters, we find that cloud fraction is smaller and cloud optical thickness is mostly larger for the warm subset. Cloud-top pressure is higher for the three low-level cloud regimes and lower for the cirrus regime. The net upwelling radiation flux at the top of the atmosphere is larger for the warm subset in every cluster except cirrus, and larger when averaged over all clusters. This implies that the direct response of midlatitude oceanic clouds to increasing temperature acts as a negative feedback on the climate system. Note that the cloud response to atmospheric dynamical changes produced by global warming, which we do not consider in this study, may differ, and the total cloud feedback may be positive.

  20. Cloud computing basics for librarians.

    Science.gov (United States)

    Hoy, Matthew B

    2012-01-01

    "Cloud computing" is the name for the recent trend of moving software and computing resources to an online, shared-service model. This article briefly defines cloud computing, discusses different models, explores the advantages and disadvantages, and describes some of the ways cloud computing can be used in libraries. Examples of cloud services are included at the end of the article. Copyright © Taylor & Francis Group, LLC

  1. Cloud Computing Security: A Survey

    OpenAIRE

    Khalil, Issa; Khreishah, Abdallah; Azeem, Muhammad

    2014-01-01

    Cloud computing is an emerging technology paradigm that migrates current technological and computing concepts into utility-like solutions similar to electricity and water systems. Clouds bring a wide range of benefits, including configurable computing resources, economic savings, and service flexibility. However, security and privacy concerns have been shown to be the primary obstacles to wide adoption of clouds. The new concepts that clouds introduce, such as multi-tenancy, resource sharing a...

  2. Database security in the cloud

    OpenAIRE

    Sakhi, Imal

    2012-01-01

    The aim of the thesis is to get an overview of the database services available in the cloud computing environment, investigate the security risks associated with them, and propose possible countermeasures to minimize those risks. The thesis also analyzes two cloud database service providers, namely Amazon RDS and Xeround. These two providers were chosen because they are currently among the leading cloud database providers, and both provide relational cloud databases, which makes ...

  3. QUALITY ASSURANCE FOR CLOUD COMPUTING

    OpenAIRE

    Sumaira Aslam; Hina Shahid

    2016-01-01

    Cloud computing is the greatest and latest thing. Marketers at many big companies use cloud computing terms in their marketing campaigns to seem impressive so that they can win clients and customers. Overall, cloud computing is a philosophy and design concept, and it is at once much more complicated and much simpler. The basic thing that cloud computing does is separate the applications from the operating systems, from the software, and from the hardware that runs eve...

  4. Security Dynamics of Cloud Computing

    OpenAIRE

    Khan, Khaled M.

    2009-01-01

    This paper explores various dimensions of cloud computing security. It argues that the security concerns of cloud computing need to be addressed from the perspective of each individual stakeholder. The security focus of cloud computing is essentially different in terms of its characteristics and business model. The conventional way of viewing and addressing security, such as ‘bolting-in’ on top of cloud computing, may not work well. The paper attempts to portray the security spectrum necessary for...

  5. Green Cloud on the Horizon

    Science.gov (United States)

    Ali, Mufajjul

    This paper proposes a Green Cloud model for mobile Cloud computing. The proposed model leverages the current trends of IaaS (Infrastructure as a Service), PaaS (Platform as a Service) and SaaS (Software as a Service), and looks at a new paradigm called "Network as a Service" (NaaS). The Green Cloud model proposes various telco revenue-generating streams and services, with CaaS (Cloud as a Service) for the near future.

  6. Reusability Framework for Cloud Computing

    OpenAIRE

    Singh, Sukhpal; Singh, Rishideep

    2012-01-01

    Cloud based development is a challenging task for several software engineering projects, especially those which need development with reusability. The present era of cloud computing is enabling new professional models for software development. Cloud computing is expected to be the coming trend in computing because of its speed of application deployment, shorter time to market, and lower cost of operation. Until a Cloud Computing Reusability Model is considered a fundamen...

  7. Integrated Model to Assess Cloud Deployment Effectiveness When Developing an IT-strategy

    Science.gov (United States)

    Razumnikov, S.; Prankevich, D.

    2016-04-01

    Developing an IT-strategy for cloud deployment is a complex issue, since even the stage of its formation necessitates revealing which applications will best meet the requirements of a company's business strategy, evaluating the reliability and safety of cloud providers, and analyzing staff satisfaction. A system of criteria, as well as an integrated model to assess cloud deployment effectiveness, is offered. The model makes it possible to identify which applications at the company's disposal, as well as which new tools to be deployed, are reliable and safe enough for implementation in the cloud environment. Data on practical use of the procedure to assess cloud deployment effectiveness by a provider of telecommunication services is presented. The model was used to calculate the values of integral indexes of the services being assessed; those meeting the criteria and matching the company's business strategy were then selected.
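
    The record does not give the formula behind its "integral indexes"; a common construction for such multi-criteria scores is a normalized weighted sum, sketched below with hypothetical criteria, weights, and threshold:

        # Hypothetical sketch of an integral effectiveness index: a weighted sum
        # of per-criterion scores in [0, 1]. The paper's actual criteria and
        # weights are not given in this record.
        criteria_weights = {"reliability": 0.4, "security": 0.4, "staff_fit": 0.2}

        def integral_index(scores: dict) -> float:
            """Weighted sum of criterion scores; weights sum to 1."""
            return sum(criteria_weights[c] * scores[c] for c in criteria_weights)

        app_scores = {"reliability": 0.9, "security": 0.7, "staff_fit": 0.8}
        THRESHOLD = 0.75  # hypothetical cut-off for "suitable for the cloud"
        print(integral_index(app_scores), integral_index(app_scores) >= THRESHOLD)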

  8. Improving Estimates of Cloud Radiative Forcing over Greenland

    Science.gov (United States)

    Wang, W.; Zender, C. S.

    2014-12-01

    Multiple driving mechanisms conspire to increase melt extent and the frequency of extreme melt events in the Arctic: changing heat transport, shortwave radiation (SW), and longwave radiation (LW). Cloud Radiative Forcing (CRF) of Greenland's surface is amplified by a dry atmosphere and by albedo feedback, making its contribution to surface melt even more variable in time and space. Unfortunately, accurate cloud observations, and thus CRF estimates, are hindered by Greenland's remoteness, harsh conditions, and the low contrast between surface and cloud reflectance. In this study, cloud observations from satellites and reanalyses are ingested into and evaluated within a column radiative transfer model. An improved CRF dataset is obtained by correcting systematic discrepancies identified in sensitivity experiments. First, we compare the surface radiation budgets from the Column Radiation Model (CRM) driven by different cloud datasets with surface observations from the Greenland Climate Network (GC-Net). In clear skies, CRM-estimated surface radiation driven by water vapor profiles from both AIRS and MODIS during May-Sept 2010-2012 is similar, stable, and reliable. For example, although the AIRS water vapor path exceeds MODIS by 1.4 kg/m2 on a daily average, the overall absolute difference in downwelling SW is small, and CRM estimates are within a 20 W/m2 range of GC-Net downwelling SW. After calibrating the CRM in clear skies, the remaining differences between CRM and observed surface radiation are primarily attributable to differences in cloud observations. We estimate CRF using cloud products from MODIS and from MERRA. The SW radiative forcing of thin clouds is mainly controlled by cloud water path (CWP). As CWP increases from near 0 to 200 g/m2, the net surface SW drops from over 100 W/m2 to 30 W/m2 almost linearly, beyond which it becomes relatively insensitive to CWP. The LW is dominated by cloud height: for clouds at all altitudes, the lower the clouds, the greater the LW forcing. By applying
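
    The CWP dependence quoted above (a near-linear drop of net surface SW from over 100 W/m2 to 30 W/m2 as CWP grows to 200 g/m2, then saturation) can be captured by a simple piecewise-linear curve; the function below reproduces that shape for illustration and is not the authors' fitted parameterization:

        # Illustrative piecewise-linear fit to the CWP dependence described in
        # the abstract (approximate shape only; not the paper's actual model).
        def net_surface_sw(cwp_g_m2: float) -> float:
            """Net surface shortwave (W/m2) as a function of cloud water path (g/m2)."""
            if cwp_g_m2 <= 200.0:
                # Linear drop from ~100 W/m2 (near-zero CWP) to ~30 W/m2 at 200 g/m2.
                return 100.0 - (70.0 / 200.0) * cwp_g_m2
            return 30.0  # relatively insensitive to CWP beyond ~200 g/m2

        for cwp in (0, 100, 200, 400):
            print(cwp, round(net_surface_sw(cwp), 1))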

  9. iCloud standard guide

    CERN Document Server

    Alfi, Fauzan

    2013-01-01

    An easy-to-use guide, filled with tutorials that will teach you how to set up and use iCloud, and profit from all of its marvellous features. This book is for anyone with basic knowledge of computers and mobile operations. Prior knowledge of cloud computing or iCloud is not expected.

  10. Coherent Radiation of Electron Cloud

    International Nuclear Information System (INIS)

    Heifets, S.

    2004-01-01

    The electron cloud in positron storage rings is pinched when a bunch passes by. For short bunches, the radiation due to the acceleration of electrons in the cloud is coherent. Detection of such radiation can be used to measure the density of the cloud. An estimate of the power and the time structure of the radiated signal is given in this paper.
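
    The scaling behind such an estimate is not spelled out in this record, but the standard result for emission from N electrons (amplitudes, not powers, add when the motion is coherent) can be stated as, in LaTeX notation:

        P_{\text{tot}}(\lambda) = P_1(\lambda)\,\left[ N + N(N-1)\, f(\lambda) \right]

    where P_1 is the single-electron power and f(\lambda) is the form factor of the cloud's charge distribution: f \to 1 for wavelengths long compared to the pinched cloud (coherent, power \propto N^2) and f \to 0 for short wavelengths (incoherent, power \propto N). This is a textbook relation, not a formula quoted from the paper.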

  11. Understanding and Monitoring Cloud Services

    NARCIS (Netherlands)

    Drago, Idilio

    2013-01-01

    Cloud services have changed the way computing power is delivered to customers. The advantages of the cloud model have quickly produced powerful providers. However, this success has not come without problems. Cloud providers have been linked to major failures, including outages and performance

  12. Research on cloud computing solutions

    Directory of Open Access Journals (Sweden)

    Liudvikas Kaklauskas

    2015-07-01

    Cloud computing can be defined as a new style of computing in which dynamically scalable and often virtualized resources are provided as services over the Internet. Advantages of cloud computing technology include cost savings, high availability, and easy scalability. Voas and Zhang outlined six phases of computing paradigms, from dummy terminals/mainframes, to PCs, to networked computing, to grid and cloud computing. There are four types of cloud computing: public cloud, private cloud, hybrid cloud and community cloud. The most common and well-known deployment model is the public cloud. A private cloud is suited for sensitive data, where the customer depends on a certain degree of security. According to the different types of services offered, cloud computing can be considered to consist of three layers (service models): IaaS (infrastructure as a service), PaaS (platform as a service), and SaaS (software as a service). The main cloud computing solutions are web applications, data hosting, virtualization, database clusters and terminal services. The advantage of cloud computing is the ability to virtualize and share resources among different applications for better server utilization; without a clustering solution, a service may fail the moment the server crashes. DOI: 10.15181/csat.v2i2.914

  13. The Basics of Cloud Computing

    Science.gov (United States)

    Kaestner, Rich

    2012-01-01

    Most school business officials have heard the term "cloud computing" bandied about and may have some idea of what the term means. In fact, they likely already leverage a cloud-computing solution somewhere within their district. But what does cloud computing really mean? This brief article puts a bit of definition behind the term and helps one…

  14. Towards trustworthy health platform cloud

    NARCIS (Netherlands)

    Deng, M.; Nalin, M.; Petkovic, M.; Baroni, I.; Marco, A.; Jonker, W.; Petkovic, M.

    2012-01-01

    To address today’s major concerns of health service providers regarding security, resilience and data protection when moving on the cloud, we propose an approach to build a trustworthy healthcare platform cloud, based on a trustworthy cloud infrastructure. This paper first highlights the main

  15. A View from the Clouds

    Science.gov (United States)

    Chudnov, Daniel

    2010-01-01

    Cloud computing is definitely a thing now, but it's not new and it's not even novel. Back when people were first learning about the Internet in the 1990s, every diagram that one saw showing how the Internet worked had a big cloud in the middle. That cloud represented the diverse links, routers, gateways, and protocols that passed traffic around in…

  16. Trusting Privacy in the Cloud

    NARCIS (Netherlands)

    Prüfer, J.O.

    2014-01-01

    Cloud computing technologies have the potential to increase innovation and economic growth considerably. But many users worry that data in the cloud can be accessed by others, thereby damaging the data owner. Consequently, they use cloud technologies below the efficient level. I design an

  17. Securing the Cloud Cloud Computer Security Techniques and Tactics

    CERN Document Server

    Winkler, Vic (JR)

    2011-01-01

    As companies turn to cloud computing technology to streamline and save money, security is a fundamental concern. Loss of certain control and lack of trust make this transition difficult unless you know how to handle it. Securing the Cloud discusses making the move to the cloud while securing your piece of it! The cloud offers flexibility, adaptability, scalability, and, in the case of security, resilience. This book details the strengths and weaknesses of securing your company's information with different cloud approaches. Attacks can focus on your infrastructure, communications network, data, o

  18. AceCloud: Molecular Dynamics Simulations in the Cloud.

    Science.gov (United States)

    Harvey, M J; De Fabritiis, G

    2015-05-26

    We present AceCloud, an on-demand service for molecular dynamics simulations. AceCloud is designed to facilitate the secure execution of large ensembles of simulations on an external cloud computing service (currently Amazon Web Services). The AceCloud client, integrated into the ACEMD molecular dynamics package, provides an easy-to-use interface that abstracts all aspects of interaction with the cloud services. This gives the user the experience that all simulations are running on their local machine, minimizing the learning curve typically associated with the transition to using high performance computing services.
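
    AceCloud's client API is not shown in this record. As a purely hypothetical illustration of the underlying pattern (one call that launches an ensemble of simulations on AWS while hiding the cloud interaction), here is a boto3 sketch; the helper name, AMI ID, and instance type are invented:

        # Hypothetical sketch only: AceCloud's real client is integrated into
        # ACEMD and its API is not given here. This shows the generic pattern of
        # launching an ensemble of MD runs on EC2 with boto3.
        import boto3

        def launch_ensemble(n_replicas: int, ami_id: str = "ami-0123456789abcdef0"):
            """Start one EC2 instance per replica (invented helper, not AceCloud API)."""
            ec2 = boto3.client("ec2", region_name="us-east-1")
            response = ec2.run_instances(
                ImageId=ami_id,              # image with the MD engine pre-installed
                InstanceType="g4dn.xlarge",  # GPU instance suited to MD workloads
                MinCount=n_replicas,
                MaxCount=n_replicas,
            )
            return [inst["InstanceId"] for inst in response["Instances"]]

        # ids = launch_ensemble(10)  # e.g., a 10-member simulation ensemble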

  19. VMware private cloud computing with vCloud director

    CERN Document Server

    Gallagher, Simon

    2013-01-01

    It's All About Delivering Service with vCloud Director. Empowered by virtualization, companies are not just moving into the cloud, they're moving into private clouds for greater security, flexibility, and cost savings. However, this move involves more than just infrastructure. It also represents a different business model and a new way to provide services. In this detailed book, VMware vExpert Simon Gallagher makes sense of private cloud computing for IT administrators. From basic cloud theory and strategies for adoption to practical implementation, he covers all the issues. You'll lea

  20. Evaluation of Satellite-Based Upper Troposphere Cloud Top Height Retrievals in Multilayer Cloud Conditions During TC4

    Science.gov (United States)

    Chang, Fu-Lung; Minnis, Patrick; Ayers, J. Kirk; McGill, Matthew J.; Palikonda, Rabindra; Spangenberg, Douglas A.; Smith, William L., Jr.; Yost, Christopher R.

    2010-01-01

    Upper troposphere cloud top heights (CTHs), restricted to cloud top pressures (CTPs) less than 500 hPa, inferred using four satellite retrieval methods applied to Twelfth Geostationary Operational Environmental Satellite (GOES-12) data are evaluated using measurements during the July-August 2007 Tropical Composition, Cloud and Climate Coupling Experiment (TC4). The four methods are the single-layer CO2-absorption technique (SCO2AT), a modified CO2-absorption technique (MCO2AT) developed for improving both single-layered and multilayered cloud retrievals, a standard version of the Visible Infrared Solar-infrared Split-window Technique (old VISST), and a new version of VISST (new VISST) recently developed to improve cloud property retrievals. They are evaluated by comparing with ER-2 aircraft-based Cloud Physics Lidar (CPL) data taken during 9 days having extensive upper troposphere cirrus, anvil, and convective clouds. Compared to the 89% coverage by upper tropospheric clouds detected by the CPL, the SCO2AT, MCO2AT, old VISST, and new VISST retrieved CTPs less than 500 hPa in 76, 76, 69, and 74% of the matched pixels, respectively. Most of the differences are due to subvisible and optically thin cirrus clouds occurring near the tropopause that were detected only by the CPL. The mean upper tropospheric CTHs for the 9 days are 14.2 (+/- 2.1) km from the CPL and 10.7 (+/- 2.1), 12.1 (+/- 1.6), 9.7 (+/- 2.9), and 11.4 (+/- 2.8) km from the SCO2AT, MCO2AT, old VISST, and new VISST, respectively. Compared to the CPL, the MCO2AT CTHs had the smallest mean biases for semitransparent high clouds in both single-layered and multilayered situations, whereas the new VISST CTHs had the smallest mean biases when upper clouds were opaque and optically thick. The biases for all techniques increased with increasing numbers of cloud layers. The transparency of the upper layer clouds tends to increase with the number of cloud layers.
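
    The CO2-absorption (CO2-slicing) techniques compared above rest on a standard radiance-ratio relation from the general literature, reproduced here for context (it is not stated in this record). The ratio of the cloud signal in two nearby CO2-band channels cancels the effective cloud amount and leaves a function of cloud-top pressure P_c alone, in LaTeX notation:

        \frac{I(\nu_1) - I_{\text{clr}}(\nu_1)}{I(\nu_2) - I_{\text{clr}}(\nu_2)}
          = \frac{\int_{P_c}^{P_s} \tau(\nu_1, p)\, \partial B[\nu_1, T(p)]/\partial p \; dp}
                 {\int_{P_c}^{P_s} \tau(\nu_2, p)\, \partial B[\nu_2, T(p)]/\partial p \; dp}

    where I is the measured radiance, I_clr the clear-sky radiance, \tau the transmittance above pressure p, B the Planck radiance, and P_s the surface pressure; the retrieval finds the P_c that best satisfies the measured left-hand side.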