WorldWideScience

Sample records for cloud models constrained

  1. Constraining the models' response of tropical low clouds to SST forcings using CALIPSO observations

    Science.gov (United States)

    Cesana, G.; Del Genio, A. D.; Ackerman, A. S.; Brient, F.; Fridlind, A. M.; Kelley, M.; Elsaesser, G.

    2017-12-01

    Low-cloud response to a warmer climate is still identified as the largest source of uncertainty in the latest generation of climate models. To date there is no consensus among the models on whether tropical low cloudiness would increase or decrease in a warmer climate. In addition, it has been shown that - depending on their climate sensitivity - the models predict either deeper or shallower low clouds. Recently, several relationships between inter-model characteristics of the present-day climate and future climate changes have been highlighted. These so-called emergent constraints aim to target relevant model improvements and to constrain models' projections based on current climate observations. Here we propose to use - for the first time - 10 years of CALIPSO cloud statistics to assess the ability of the models to represent the vertical structure of tropical low clouds for abnormally warm SST. We use a simulator approach to compare observations and simulations and focus on the low-layered clouds (i.e. z fraction. Vertically, the clouds deepen, namely by decreasing the cloud fraction in the lowest levels and increasing it around the top of the boundary layer. This feature coincides with an increase of the high-level cloud fraction (z > 6.5 km). Although the models' spread is large, the multi-model mean captures the observed variations, but with a smaller amplitude. We then employ the GISS model to investigate how changes in cloud parameterizations affect the response of low clouds to warmer SSTs on the one hand, and how they affect the variations of the model's cloud profiles with respect to environmental parameters on the other. Finally, we use CALIPSO observations to constrain the model by determining i) what set of parameters allows reproducing the observed relationships and ii) what the consequences are for the cloud feedbacks. These results point toward process-oriented constraints of low-cloud responses to surface warming and environmental

  2. Modeling Optical and Radiative Properties of Clouds Constrained with CARDEX Observations

    Science.gov (United States)

    Mishra, S. K.; Praveen, P. S.; Ramanathan, V.

    2013-12-01

    Carbonaceous aerosols (CA) have important effects on climate by directly absorbing solar radiation and indirectly changing cloud properties. These particles tend to be a complex mixture of graphitic carbon and organic compounds. The graphitic component, called elemental carbon (EC), is characterized by significant absorption of solar radiation. Recent studies showed that organic carbon (OC) aerosols absorb strongly in the near-UV region; this fraction is known as brown carbon (BrC). The indirect effect of CA can occur in two ways: first, by changing the thermal structure of the atmosphere, which in turn affects the dynamical processes governing the cloud life cycle; second, by acting as cloud condensation nuclei (CCN) that can change cloud radiative properties. In this work, cloud optical properties have been numerically estimated by accounting for CARDEX (Cloud Aerosol Radiative Forcing Dynamics Experiment) observed cloud parameters and the physico-chemical and optical properties of aerosols. The aerosol inclusions in the cloud drop have been treated as a core-shell structure, with an EC core and a shell comprising ammonium sulfate, ammonium nitrate, sea salt and organic carbon (organic acids, OA, and brown carbon, BrC). The EC/OC ratio of the inclusion particles has been constrained based on observations. Moderate and heavy pollution events have been classified based on the aerosol number and BC concentration. The cloud drop co-albedo at 550 nm was found to be nearly identical for pure EC sphere inclusions and core-shell inclusions with all non-absorbing organics in the shell. However, the co-albedo was found to increase for drops having all BrC in the shell. The co-albedo of a cloud drop was found to be maximal when all aerosol was present as interstitial, compared to 50% and 0% of inclusions existing as interstitial aerosols. The co-albedo was found to be ~9.87e-4 for the drop with 100% of inclusions existing as interstitial aerosols externally mixed with micron size mineral dust with 2

  3. Lidar Penetration Depth Observations for Constraining Cloud Longwave Feedbacks

    Science.gov (United States)

    Vaillant de Guelis, T.; Chepfer, H.; Noel, V.; Guzman, R.; Winker, D. M.; Kay, J. E.; Bonazzola, M.

    2017-12-01

    The satellite-borne active remote sensors CALIPSO (Cloud-Aerosol Lidar and Infrared Pathfinder Satellite Observations) [Winker et al., 2010] and CloudSat [Stephens et al., 2002] provide direct measurements of the cloud vertical distribution with very high vertical resolution. The penetration depth of the lidar laser, Z_Opaque, is directly linked to the LongWave (LW) Cloud Radiative Effect (CRE) at Top Of Atmosphere (TOA) [Vaillant de Guélis et al., in review]. In addition, this measurement is extremely stable in time, making it an excellent observational candidate to verify and constrain the cloud LW feedback mechanism [Chepfer et al., 2014]. In this work, we present a method to decompose the variations of the LW CRE at TOA using cloud properties observed by lidar [GOCCP v3.0; Guzman et al., 2017]. We decompose these variations into contributions due to changes in five cloud properties: opaque cloud cover, opaque cloud altitude, thin cloud cover, thin cloud altitude, and thin cloud emissivity [Vaillant de Guélis et al., in review]. We apply this method to the observed CRE variations of the 2008-2015 CALIPSO record and, in climate models, to LMDZ6 and CESM simulations of the CRE variations over the same period and of the CRE difference between a warm climate and the current climate. In the climate model simulations, the same cloud properties as those observed by CALIOP are extracted from the CFMIP Observation Simulator Package (COSP) [Bodas-Salcedo et al., 2011] lidar simulator [Chepfer et al., 2008], which mimics the observations that would be performed by the lidar on board the CALIPSO satellite. This method, when applied to multi-model simulations of current and future climate, could reveal the altitude of the cloud opacity level observed by lidar as a strong constraint on the cloud LW feedback, since the altitude feedback mechanism is physically explainable and the altitude of cloud opacity is accurately observed by lidar.
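
    The five-term decomposition described in this record is linear to first order; a minimal sketch, with purely illustrative partial derivatives and property changes (none taken from the paper):

```python
# First-order decomposition of a LW CRE change into contributions from
# five lidar-observed cloud properties. All partial derivatives and
# deltas below are illustrative placeholders, not values from the study.
partials = {  # dCRE/dX, W/m^2 per unit of each property
    "opaque_cover": 80.0,      # per unit cover fraction
    "opaque_altitude": 2.0,    # per km
    "thin_cover": 30.0,
    "thin_altitude": 1.0,
    "thin_emissivity": 25.0,
}
deltas = {  # change in each property between two periods
    "opaque_cover": 0.01,
    "opaque_altitude": 0.2,
    "thin_cover": -0.005,
    "thin_altitude": 0.1,
    "thin_emissivity": 0.02,
}

contributions = {k: partials[k] * deltas[k] for k in partials}
delta_cre = sum(contributions.values())
for name, value in contributions.items():
    print(f"{name:16s} {value:+.3f} W/m^2")
print(f"total dCRE = {delta_cre:+.3f} W/m^2")
```

    Attributing the total CRE change to named cloud properties in this way is what lets the opaque-cloud-altitude term be isolated as the candidate constraint on the LW feedback.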

  4. Managing Deadline-constrained Bag-of-Tasks Jobs on Hybrid Clouds

    OpenAIRE

    Wang, Bo; Song, Ying; Sun, Yuzhong; Liu, Jun

    2016-01-01

    Outsourcing jobs to a public cloud is a cost-effective way to satisfy peak resource demand when the local cloud has insufficient resources. In this paper, we study managing deadline-constrained bag-of-tasks jobs on hybrid clouds. We present a binary nonlinear programming (BNP) problem to model the hybrid cloud management, where the utilization of physical machines (PMs) in the local cloud/cluster is maximized when the local resources are enough to satisfy the d...
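
    The BNP formulation itself is cut off in this abstract; as an illustration only, a greedy heuristic for the same deadline-constrained bag-of-tasks setting (not the authors' algorithm) might look like:

```python
# Greedy sketch of deadline-constrained bag-of-tasks scheduling on a
# hybrid cloud: fill local machines first, outsource the overflow to the
# public cloud. The paper formulates this as a binary nonlinear program;
# this greedy heuristic only illustrates the problem setting.
def schedule(tasks, local_slots, deadline):
    """tasks: list of runtimes; each local slot runs tasks sequentially."""
    local = [0.0] * local_slots   # accumulated runtime per local slot
    outsourced = []
    for runtime in sorted(tasks, reverse=True):  # longest tasks first
        slot = min(range(local_slots), key=lambda i: local[i])
        if local[slot] + runtime <= deadline:
            local[slot] += runtime        # fits locally before deadline
        else:
            outsourced.append(runtime)    # send to the public cloud
    return local, outsourced

local, outsourced = schedule([5, 3, 3, 2, 8], local_slots=2, deadline=8.0)
print(local, outsourced)
```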

  5. Challenges in constraining anthropogenic aerosol effects on cloud radiative forcing using present-day spatiotemporal variability.

    Science.gov (United States)

    Ghan, Steven; Wang, Minghuai; Zhang, Shipeng; Ferrachat, Sylvaine; Gettelman, Andrew; Griesfeller, Jan; Kipling, Zak; Lohmann, Ulrike; Morrison, Hugh; Neubauer, David; Partridge, Daniel G; Stier, Philip; Takemura, Toshihiko; Wang, Hailong; Zhang, Kai

    2016-05-24

    A large number of processes are involved in the chain from emissions of aerosol precursor gases and primary particles to impacts on cloud radiative forcing. Those processes are manifest in a number of relationships that can be expressed as factors dlnX/dlnY driving aerosol effects on cloud radiative forcing. These factors include the relationships between cloud condensation nuclei (CCN) concentration and emissions, droplet number and CCN concentration, cloud fraction and droplet number, cloud optical depth and droplet number, and cloud radiative forcing and cloud optical depth. The relationship between cloud optical depth and droplet number can be further decomposed into the sum of two terms involving the relationship of droplet effective radius and cloud liquid water path with droplet number. These relationships can be constrained using observations of recent spatial and temporal variability of these quantities. However, we are most interested in the radiative forcing since the preindustrial era. Because few relevant measurements are available from that era, relationships from recent variability have been assumed to be applicable to the preindustrial to present-day change. Our analysis of Aerosol Comparisons between Observations and Models (AeroCom) model simulations suggests that estimates of relationships from recent variability are poor constraints on relationships from anthropogenic change for some terms, with even the sign of some relationships differing in many regions. Proxies connecting recent spatial/temporal variability to anthropogenic change, or sustained measurements in regions where emissions have changed, are needed to constrain estimates of anthropogenic aerosol impacts on cloud radiative forcing.
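
    The chain of dlnX/dlnY factors described in this record multiplies out to an overall emissions-to-forcing sensitivity; a minimal sketch with hypothetical factor values (none taken from AeroCom output):

```python
# Hypothetical log-log sensitivity factors along the chain from emissions
# to cloud radiative forcing; the values are illustrative only.
factors = {
    "dlnCCN/dlnEmissions": 0.6,
    "dlnNd/dlnCCN": 0.5,
    "dlnTau/dlnNd": 0.3,   # cloud optical depth vs. droplet number
    "dlnCRF/dlnTau": 0.8,  # cloud radiative forcing vs. optical depth
}

def chain_sensitivity(factors):
    """Product of the factors: the net dlnCRF/dlnEmissions."""
    total = 1.0
    for value in factors.values():
        total *= value
    return total

total = chain_sensitivity(factors)
print(f"dlnCRF/dlnEmissions = {total:.3f}")
```

    The record's point is that a factor estimated from recent variability may differ, even in sign, from the corresponding preindustrial-to-present-day factor, so the product can be badly biased.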

  6. CloudSat-Constrained Cloud Ice Water Path and Cloud Top Height Retrievals from MHS 157 and 183.3 GHz Radiances

    Science.gov (United States)

    Gong, J.; Wu, D. L.

    2014-01-01

    Ice water path (IWP) and cloud top height (ht) are two of the key variables determining cloud radiative and thermodynamic properties in climate models. Large uncertainty remains among IWP measurements from satellite sensors, in large part due to the assumptions made for cloud microphysics in these retrievals. In this study, we develop a fast algorithm to retrieve IWP from the 157, 183.3+/-3 and 190.3 GHz radiances of the Microwave Humidity Sounder (MHS) such that the MHS cloud ice retrieval is consistent with CloudSat IWP measurements. This retrieval is obtained by constraining empirical forward models between collocated and coincident measurements of CloudSat IWP and MHS cloud-induced radiance depression (Tcir) at these channels. The empirical forward model is represented by a lookup table (LUT) of Tcir-IWP relationships as a function of ht and the frequency channel. With ht simultaneously retrieved, the IWP is found to be more accurate. The useful range of the MHS IWP retrieval is between 0.5 and 10 kg/sq m, and agrees well with CloudSat in terms of the normalized probability density function (PDF). Compared to the empirical model, current operational radiative transfer models (RTMs) still have significant uncertainties in characterizing the observed Tcir-IWP relationships. Therefore, the empirical LUT method developed here remains an effective approach to retrieving ice cloud properties from MHS-like microwave channels.
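
    A LUT-based retrieval of this kind amounts to interpolation in a Tcir-IWP table indexed by cloud top height; the sketch below uses a made-up table, not the CloudSat-constrained one:

```python
import numpy as np

# Illustrative lookup table mapping cloud-induced radiance depression
# Tcir (K) to ice water path IWP (kg/m^2) for a few cloud-top heights.
# All numbers are hypothetical, not the actual MHS/CloudSat LUT.
ht_grid = np.array([6.0, 10.0, 14.0])            # cloud top height, km
tcir_grid = np.array([0.0, 20.0, 40.0, 60.0])    # radiance depression, K
iwp_table = np.array([                           # IWP(ht, Tcir), kg/m^2
    [0.0, 0.8, 2.5, 6.0],
    [0.0, 0.5, 1.8, 4.5],
    [0.0, 0.3, 1.2, 3.0],
])

def retrieve_iwp(ht_km, tcir_k):
    """Bilinear interpolation in the (ht, Tcir) lookup table."""
    # first interpolate along Tcir within each ht row, then along ht
    profile = np.array([np.interp(tcir_k, tcir_grid, row)
                        for row in iwp_table])
    return float(np.interp(ht_km, ht_grid, profile))

print(retrieve_iwp(8.0, 30.0))
```

    Retrieving ht jointly, as the study does, selects the right row of the table and is what makes the IWP estimate more accurate.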

  7. Cloud Computing, Tieto Cloud Server Model

    OpenAIRE

    Suikkanen, Saara

    2013-01-01

    The purpose of this study is to find out what cloud computing is. To be able to make wise decisions when moving to the cloud or considering it, companies need to understand what the cloud consists of: which model suits their company best, what should be taken into account before moving to the cloud, what the cloud broker's role is, and what a SWOT analysis of the cloud looks like. To be able to answer customer requirements and business demands, IT companies should develop and produce new service models. IT house T...

  8. Constrained CPn models

    International Nuclear Information System (INIS)

    Latorre, J.I.; Luetken, C.A.

    1988-11-01

    We construct a large new class of two-dimensional sigma models with Kaehler target spaces which are algebraic manifolds realized as complete intersections in weighted CPn spaces. They are N=2 superconformally symmetric, and particular choices of constraints give Calabi-Yau target spaces which are nontrivial string vacua. (orig.)

  9. Constraining the instantaneous aerosol influence on cloud albedo

    Energy Technology Data Exchange (ETDEWEB)

    Gryspeerdt, Edward; Quaas, Johannes; Ferrachat, Sylvaine; Gettelman, Andrew; Ghan, Steven; Lohmann, Ulrike; Morrison, Hugh; Neubauer, David; Partridge, Daniel G.; Stier, Philip; Takemura, Toshihiko; Wang, Hailong; Wang, Minghuai; Zhang, Kai

    2017-04-26

    Much of the uncertainty in estimates of the anthropogenic forcing of climate change comes from uncertainties in the instantaneous effect of aerosols on cloud albedo, known as the Twomey effect or the radiative forcing from aerosol–cloud interactions (RFaci), a component of the total or effective radiative forcing. Because aerosols serving as cloud condensation nuclei can have a strong influence on the cloud droplet number concentration (Nd), previous studies have used the sensitivity of the Nd to aerosol properties as a constraint on the strength of the RFaci. However, recent studies have suggested that relationships between aerosol and cloud properties in the present-day climate may not be suitable for determining the sensitivity of the Nd to anthropogenic aerosol perturbations. Using an ensemble of global aerosol–climate models, this study demonstrates how joint histograms between Nd and aerosol properties can account for many of the issues raised by previous studies. It shows that if the anthropogenic contribution to the aerosol is known, the RFaci can be diagnosed to within 20% of its actual value. The accuracy of different aerosol proxies for diagnosing the RFaci is investigated, confirming that using the aerosol optical depth significantly underestimates the strength of the aerosol–cloud interactions in satellite data.

  10. Constraining the instantaneous aerosol influence on cloud albedo.

    Science.gov (United States)

    Gryspeerdt, Edward; Quaas, Johannes; Ferrachat, Sylvaine; Gettelman, Andrew; Ghan, Steven; Lohmann, Ulrike; Morrison, Hugh; Neubauer, David; Partridge, Daniel G; Stier, Philip; Takemura, Toshihiko; Wang, Hailong; Wang, Minghuai; Zhang, Kai

    2017-05-09

    Much of the uncertainty in estimates of the anthropogenic forcing of climate change comes from uncertainties in the instantaneous effect of aerosols on cloud albedo, known as the Twomey effect or the radiative forcing from aerosol-cloud interactions (RFaci), a component of the total or effective radiative forcing. Because aerosols serving as cloud condensation nuclei can have a strong influence on the cloud droplet number concentration (Nd), previous studies have used the sensitivity of the Nd to aerosol properties as a constraint on the strength of the RFaci. However, recent studies have suggested that relationships between aerosol and cloud properties in the present-day climate may not be suitable for determining the sensitivity of the Nd to anthropogenic aerosol perturbations. Using an ensemble of global aerosol-climate models, this study demonstrates how joint histograms between Nd and aerosol properties can account for many of the issues raised by previous studies. It shows that if the anthropogenic contribution to the aerosol is known, the RFaci can be diagnosed to within 20% of its actual value. The accuracy of different aerosol proxies for diagnosing the RFaci is investigated, confirming that using the aerosol optical depth significantly underestimates the strength of the aerosol-cloud interactions in satellite data.

  11. Models of bright storm clouds and related dark ovals in Saturn's Storm Alley as constrained by 2008 Cassini/VIMS spectra

    Science.gov (United States)

    Sromovsky, L. A.; Baines, K. H.; Fry, P. M.

    2018-03-01

    A 5° latitude band on Saturn centered near planetocentric latitude 36°S is known as "Storm Alley" because it has been for several extended periods a site of frequent lightning activity and associated thunderstorms, first identified by Porco et al. (2005). The thunderstorms appeared as bright clouds at short and long continuum wavelengths, and over a period of a week or so transformed into dark ovals (Dyudina et al., 2007). The ovals were found to be dark over a wide spectral range, which led Baines et al. (2009) to suggest the possibility that a broadband absorber such as soot produced by lightning could play a significant role in darkening the clouds relative to their surroundings. Here we show that an alternative explanation, which is that the clouds are less reflective because of reduced optical depth, provides an excellent fit to near infrared spectra of similar features obtained by the Cassini Visual and Infrared Mapping Spectrometer (VIMS) in 2008, and leads to a plausible scenario for cloud evolution. We find that the background clouds and the oval clouds are both dominated by the optical properties of a ubiquitous upper cloud layer, which has the same particle size in both regions, but about half the optical depth and physical thickness in the dark oval regions. The dark oval regions are also marked by enhanced emissions in the 5-μm window region, a result of lower optical depth of the deep cloud layer near 3.1-3.8 bar, presumably composed of ammonium hydrosulfide (NH4SH). The bright storm clouds completely block this deep thermal emission with a thick layer of ammonia (NH3) clouds extending from the middle of the main visible cloud layer probably as deep as the 1.7-bar NH3 condensation level. Other condensates might also be present at higher pressures, but are obscured by the NH3 cloud. The strong 3-μm spectral absorption that was displayed by Saturn's Great Storm of 2010-2011 (Sromovsky et al., 2013) is weaker in these storms because the contrast is

  12. How accurately can the instantaneous aerosol effect on cloud albedo be constrained?

    Science.gov (United States)

    Gryspeerdt, E.; Quaas, J.; Ferrachat, S.; Gettelman, A.; Ghan, S. J.; Lohmann, U.; Neubauer, D.; Morrison, H.; Partridge, D.; Stier, P.; Takemura, T.; Wang, H.; Wang, M.; Zhang, K.

    2017-12-01

    Aerosol-cloud interactions are the most uncertain component of the anthropogenic radiative forcing, with a significant fraction of this uncertainty coming from uncertainty in the radiative forcing due to instantaneous changes in cloud albedo (the RFaci). Aerosols can have a strong influence on the cloud droplet number concentration (CDNC), so previous studies have used the sensitivity of CDNC to aerosol properties as a method of estimating the RFaci. However, recent studies have suggested that this sensitivity is unsuitable as a constraint on the RFaci, as it differs in the present day and pre-industrial atmosphere. This would place significant limits on our ability to constrain the RFaci from satellite observations. In this study, a selection of global aerosol-climate models are used to investigate the suitability of various aerosol proxies and methods for calculating the RFaci from present day data. A linear-regression based sensitivity of CDNC to aerosol perturbations can lead to large errors in the diagnosed RFaci, as can the use of the aerosol optical depth (AOD) as an aerosol proxy. However, we show that if suitable choices of aerosol proxy are made and the anthropogenic aerosol contribution is known, it is possible to diagnose the anthropogenic change in CDNC, and so the RFaci, using present day aerosol-cloud relationships.
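
    The linear-regression-based sensitivity discussed in this record is a slope fit in log-log space between CDNC and an aerosol proxy; a sketch on synthetic data (all numbers illustrative, not model output):

```python
import numpy as np

# Sketch of the regression-based sensitivity S = dln(CDNC)/dln(aerosol):
# fit a slope in log-log space from synthetic "present-day" data.
rng = np.random.default_rng(0)
aerosol = 10 ** rng.uniform(1, 3, 500)   # hypothetical CCN proxy, cm^-3
true_slope = 0.4                          # assumed underlying sensitivity
cdnc = 5.0 * aerosol ** true_slope * np.exp(rng.normal(0, 0.1, 500))

# least-squares fit of ln(CDNC) against ln(aerosol)
slope, intercept = np.polyfit(np.log(aerosol), np.log(cdnc), 1)
print(f"diagnosed dln(CDNC)/dln(aerosol) = {slope:.3f}")
```

    The study's caution is that this slope, even when well estimated from present-day data, need not equal the sensitivity to the anthropogenic perturbation unless the aerosol proxy is chosen carefully.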

  13. Analysis of multi cloud storage applications for resource constrained mobile devices

    Directory of Open Access Journals (Sweden)

    Rajeev Kumar Bedi

    2016-09-01

    Full Text Available Cloud storage, which can be a surrogate for all physical hardware storage devices, is a term which gives a reflection of an enormous advancement in engineering (Hung et al., 2012. However, there are many issues that need to be handled when accessing cloud storage on resource constrained mobile devices due to inherent limitations of mobile devices as limited storage capacity, processing power and battery backup (Yeo et al., 2014. There are many multi cloud storage applications available, which handle issues faced by single cloud storage applications. In this paper, we are providing analysis of different multi cloud storage applications developed for resource constrained mobile devices to check their performance on the basis of parameters as battery consumption, CPU usage, data usage and time consumed by using mobile phone device Sony Xperia ZL (smart phone on WiFi network. Lastly, conclusion and open research challenges in these multi cloud storage apps are discussed.

  14. Enterprise Cloud Adoption - Cloud Maturity Assessment Model

    OpenAIRE

    Conway, Gerry; Doherty, Eileen; Carcary, Marian; Crowley, Catherine

    2017-01-01

    The introduction and use of cloud computing by an organization has the promise of significant benefits that include reduced costs, improved services, and a pay-per-use model. Organizations that successfully harness these benefits will potentially have a distinct competitive edge, due to their increased agility and flexibility to rapidly respond to an ever changing and complex business environment. However, as cloud technology is a relatively new ph...

  15. Constraining Aerosol-Cloud-Precipitation Interactions of Orographic Mixed-Phase Clouds with Trajectory Budgets

    Science.gov (United States)

    Glassmeier, F.; Lohmann, U.

    2016-12-01

    Orographic precipitation is prone to strong aerosol-cloud-precipitation interactions because the time for precipitation development is limited to the ascending section of the mountain flow. At the same time, cloud microphysical development is constrained by the strong dynamical forcing of the orography. In this contribution, we discuss how changes in the amount and composition of droplet- and ice-forming aerosols influence precipitation in idealized simulations of stratiform orographic mixed-phase clouds. We find that aerosol perturbations trigger compensating responses of different precipitation formation pathways. The effect of aerosols is thus buffered. We explain this buffering by the requirement to fulfill aerosol-independent dynamical constraints. For our simulations, we use the regional atmospheric model COSMO-ART-M7 in a 2D setup with a bell-shaped mountain. The model is coupled to a 2-moment warm and cold cloud microphysics scheme. Activation and freezing rates are parameterized based on prescribed aerosol fields that are varied in number, size and composition. Our analysis is based on the budget of droplet water along trajectories of cloud parcels. The budget equates condensation as the source term with precipitation formation from autoconversion, accretion, riming and the Wegener-Bergeron-Findeisen process as sink terms. Condensation, and consequently precipitation formation, is determined by dynamics and largely independent of the aerosol conditions. An aerosol-induced change in the number of droplets or crystals perturbs the droplet budget by affecting precipitation formation processes. We observe that this perturbation triggers adjustments in liquid and ice water content that re-equilibrate the budget. As an example, an increase in crystal number triggers a stronger glaciation of the cloud and redistributes precipitation formation from collision-coalescence to riming and from riming to vapor deposition. We theoretically confirm the dominant effect of water
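
    The trajectory budget equates one condensation source to four precipitation-forming sinks; its closure can be checked directly, as in this toy example with made-up rates:

```python
# Toy droplet-water budget along a cloud parcel trajectory, balancing the
# condensation source against the precipitation-forming sinks named in
# the record. All rates are illustrative (kg/kg per unit time).
condensation = 1.0e-3
sinks = {
    "autoconversion": 2.0e-4,
    "accretion": 4.0e-4,
    "riming": 3.0e-4,
    "WBF": 1.0e-4,   # Wegener-Bergeron-Findeisen process
}
residual = condensation - sum(sinks.values())
print(f"budget residual: {residual:.1e}")  # ~0 when the budget closes
```

    The buffering argument is that if the source is fixed by dynamics, an aerosol perturbation can only redistribute mass among these sinks, not change their total.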

  16. Mathematical Modeling of Constrained Hamiltonian Systems

    NARCIS (Netherlands)

    Schaft, A.J. van der; Maschke, B.M.

    1995-01-01

    Network modelling of unconstrained energy conserving physical systems leads to an intrinsic generalized Hamiltonian formulation of the dynamics. Constrained energy conserving physical systems are directly modelled as implicit Hamiltonian systems with regard to a generalized Dirac structure on the

  17. Modeling the microstructural evolution during constrained sintering

    DEFF Research Database (Denmark)

    Bjørk, Rasmus; Frandsen, Henrik Lund; Tikare, V.

    A numerical model able to simulate solid state constrained sintering of a powder compact is presented. The model couples an existing kinetic Monte Carlo (kMC) model for free sintering with a finite element (FE) method for calculating stresses on a microstructural level. The microstructural response...... to the stress field as well as the FE calculation of the stress field from the microstructural evolution is discussed. The sintering behavior of two powder compacts constrained by a rigid substrate is simulated and compared to free sintering of the same samples. Constrained sintering result in a larger number...

  18. Evaluating and constraining ice cloud parameterizations in CAM5 using aircraft measurements from the SPARTICUS campaign

    Directory of Open Access Journals (Sweden)

    K. Zhang

    2013-05-01

    Full Text Available This study uses aircraft measurements of relative humidity and ice crystal size distribution collected during the SPARTICUS (Small PARTicles In CirrUS field campaign to evaluate and constrain ice cloud parameterizations in the Community Atmosphere Model version 5. About 200 h of data were collected during the campaign between January and June 2010, providing the longest aircraft measurements available so far for cirrus clouds in the midlatitudes. The probability density function (PDF of ice crystal number concentration (Ni derived from the high-frequency (1 Hz measurements features a strong dependence on ambient temperature. As temperature decreases from −35 °C to −62 °C, the peak in the PDF shifts from 10–20 L−1 to 200–1000 L−1, while Ni shows a factor of 6–7 increase. Model simulations are performed with two different ice nucleation schemes for pure ice-phase clouds. One of the schemes can reproduce a clear increase of Ni with decreasing temperature by using either an observation-based ice nuclei spectrum or a classical-theory-based spectrum with a relatively low (5–10% maximum freezing ratio for dust aerosols. The simulation with the other scheme, which assumes a high maximum freezing ratio (100%, shows much weaker temperature dependence of Ni. Simulations are also performed to test empirical parameters related to water vapor deposition and the autoconversion of ice crystals to snow. Results show that a value between 0.05 and 0.1 for the water vapor deposition coefficient, and 250 μm for the critical diameter that distinguishes ice crystals from snow, can produce good agreement between model simulation and the SPARTICUS measurements in terms of Ni and effective radius. The climate impact of perturbing these parameters is also discussed.

  19. Cloud Robotics Model

    OpenAIRE

    Mester, Gyula

    2015-01-01

    Cloud Robotics was born from the merger of service robotics and cloud technologies. It allows robots to benefit from the powerful computational, storage, and communications resources of modern data centres. Cloud robotics allows robots to take advantage of the rapid increase in data transfer rates to offload tasks without hard real time requirements. Cloud Robotics has rapidly gained momentum with initiatives by companies such as Google, Willow Garage and Gostai as well as more than a dozen a...

  20. Worldwide data sets constrain the water vapor uptake coefficient in cloud formation.

    Science.gov (United States)

    Raatikainen, Tomi; Nenes, Athanasios; Seinfeld, John H; Morales, Ricardo; Moore, Richard H; Lathem, Terry L; Lance, Sara; Padró, Luz T; Lin, Jack J; Cerully, Kate M; Bougiatioti, Aikaterini; Cozic, Julie; Ruehl, Christopher R; Chuang, Patrick Y; Anderson, Bruce E; Flagan, Richard C; Jonsson, Haflidi; Mihalopoulos, Nikos; Smith, James N

    2013-03-05

    Cloud droplet formation depends on the condensation of water vapor on ambient aerosols, the rate of which is strongly affected by the kinetics of water uptake as expressed by the condensation (or mass accommodation) coefficient, αc. Estimates of αc for droplet growth from activation of ambient particles vary considerably and represent a critical source of uncertainty in estimates of global cloud droplet distributions and the aerosol indirect forcing of climate. We present an analysis of 10 globally relevant data sets of cloud condensation nuclei to constrain the value of αc for ambient aerosol. We find that rapid activation kinetics (αc > 0.1) is uniformly prevalent. This finding resolves a long-standing issue in cloud physics, as the uncertainty in water vapor accommodation on droplets is considerably less than previously thought.
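
    The role of the accommodation coefficient can be illustrated with a simple flux-matching (transition-regime) correction to the vapor diffusivity; the exact functional form and the numbers below are assumptions of this sketch, not taken from the paper:

```python
# Illustrative kinetic correction to vapor diffusivity in droplet growth,
# using a simple flux-matching interpolation
#     D_eff = D / (1 + 4*D / (alpha_c * v_bar * r)),
# which shows why alpha_c matters most for small, freshly activated
# droplets. Treat the exact form as an assumption of this sketch.
D = 2.5e-5       # water vapor diffusivity in air, m^2/s (approx.)
V_BAR = 590.0    # mean thermal speed of water vapor molecules, m/s (approx.)

def d_eff(alpha_c, r):
    """Effective diffusivity for a droplet of radius r (m)."""
    return D / (1.0 + 4.0 * D / (alpha_c * V_BAR * r))

for alpha_c in (0.04, 0.1, 1.0):
    ratio = d_eff(alpha_c, r=1e-6) / D   # 1-micron droplet
    print(f"alpha_c={alpha_c:4.2f}: growth-rate factor {ratio:.2f}")
```

    The finding that alpha_c > 0.1 is uniformly prevalent means the growth-rate factor stays near its upper range, so the kinetic uncertainty in droplet growth is small.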

  1. A model for optimal constrained adaptive testing

    NARCIS (Netherlands)

    van der Linden, Willem J.; Reese, Lynda M.

    2001-01-01

    A model for constrained computerized adaptive testing is proposed in which the information on the test at the ability estimate is maximized subject to a large variety of possible constraints on the contents of the test. At each item-selection step, a full test is first assembled to have maximum

  2. A model for optimal constrained adaptive testing

    NARCIS (Netherlands)

    van der Linden, Willem J.; Reese, Lynda M.

    1997-01-01

    A model for constrained computerized adaptive testing is proposed in which the information in the test at the ability estimate is maximized subject to a large variety of possible constraints on the contents of the test. At each item-selection step, a full test is first assembled to have maximum
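
    Both records describe maximizing test information at the ability estimate subject to content constraints; the authors' method assembles a full shadow test by 0-1 programming, but a greedy stand-in (with invented 2PL items) illustrates the constrained selection step:

```python
import math

# Greedy sketch of constrained item selection: pick the item with maximum
# Fisher information at the current ability estimate, subject to simple
# content-area quotas. The items and quotas below are invented.
def fisher_info(theta, a, b):
    """2PL item information at ability theta."""
    p = 1.0 / (1.0 + math.exp(-a * (theta - b)))
    return a * a * p * (1.0 - p)

items = [  # (discrimination a, difficulty b, content area)
    (1.2, 0.0, "algebra"), (0.8, 1.0, "algebra"),
    (1.5, -0.5, "geometry"), (1.0, 0.2, "geometry"),
]
quota = {"algebra": 1, "geometry": 1}

def select(theta, items, quota, n):
    chosen, used = [], {k: 0 for k in quota}
    pool = list(items)
    for _ in range(n):
        feasible = [it for it in pool if used[it[2]] < quota[it[2]]]
        best = max(feasible, key=lambda it: fisher_info(theta, it[0], it[1]))
        chosen.append(best)
        used[best[2]] += 1
        pool.remove(best)
    return chosen

picked = select(theta=0.0, items=items, quota=quota, n=2)
print([it[2] for it in picked])
```

    The full shadow-test approach instead re-solves for an entire feasible test at every step, which guarantees the constraints can still be met at the end.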

  3. Models of Flux Tubes from Constrained Relaxation

    Indian Academy of Sciences (India)

    J. Astrophys. Astr. (2000) 21, 299-302. Models of Flux Tubes from Constrained Relaxation. A. Mangalam & V. Krishan, Indian Institute of Astrophysics, Koramangala, Bangalore 560 034, India. e-mail: mangalam@iiap.ernet.in, vinod@iiap.ernet.in. Abstract. We study the relaxation of a compressible plasma to ...

  4. Modeling of Cloud/Radiation Processes for Cirrus Cloud Formation

    National Research Council Canada - National Science Library

    Liou, K

    1997-01-01

    This technical report includes five reprints and pre-prints of papers associated with the modeling of cirrus cloud and radiation processes as well as remote sensing of cloud optical and microphysical...

  5. Terrestrial Sagnac delay constraining modified gravity models

    Science.gov (United States)

    Karimov, R. Kh.; Izmailov, R. N.; Potapov, A. A.; Nandi, K. K.

    2018-04-01

    Modified gravity theories include f(R)-gravity models that are usually constrained by the cosmological evolutionary scenario. However, it has recently been shown that they can also be constrained by the signatures of the accretion disk around constant-Ricci-curvature Kerr-f(R0) stellar-sized black holes. Our aim here is to use another experimental fact, viz., the terrestrial Sagnac delay, to constrain the parameters of specific f(R)-gravity prescriptions. We shall assume that a Kerr-f(R0) solution asymptotically describes Earth's weak gravity near its surface. In this spacetime, we shall study oppositely directed light beams from a source/observer moving on non-geodesic and geodesic circular trajectories and calculate the time gap when the beams re-unite. We obtain the exact time gap, called the Sagnac delay, in both cases and expand it to show how the flat-space value is corrected by the Ricci curvature, the mass and the spin of the gravitating source. Under the assumption that the magnitudes of the corrections are of the order of the residual uncertainties in the delay measurement, we derive the allowed intervals for the Ricci curvature. We conclude that the terrestrial Sagnac delay can be used to constrain the parameters of specific f(R) prescriptions. Despite using the weak-field gravity near Earth's surface, it turns out that the model parameter ranges still remain the same as those obtained from the strong-field accretion disk phenomenon.
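
    The flat-space value that the curvature terms correct is the classical Sagnac delay dt = 4*A*Omega/c^2; a quick numerical check for an equatorial circuit around Earth:

```python
import math

# Flat-space terrestrial Sagnac delay for counter-propagating beams
# around an equatorial circuit: dt = 4*A*Omega/c^2 with A = pi*R^2.
# The record's point is that curvature, mass and spin corrections
# modify this leading-order value.
OMEGA = 7.292e-5      # Earth's angular speed, rad/s
R = 6.378e6           # Earth's equatorial radius, m
C = 2.998e8           # speed of light, m/s

def sagnac_delay(radius, omega):
    area = math.pi * radius ** 2
    return 4.0 * area * omega / C ** 2

dt = sagnac_delay(R, OMEGA)
print(f"Sagnac delay ~ {dt * 1e9:.0f} ns")
```

    The corrections the paper derives are assumed to be at most of the order of the residual measurement uncertainty in this ~415 ns delay, which is what bounds the Ricci curvature.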

  6. A constrained supersymmetric left-right model

    Energy Technology Data Exchange (ETDEWEB)

    Hirsch, Martin [AHEP Group, Instituto de Física Corpuscular - C.S.I.C./Universitat de València, Edificio de Institutos de Paterna, Apartado 22085, E-46071 València (Spain); Krauss, Manuel E. [Bethe Center for Theoretical Physics & Physikalisches Institut der Universität Bonn, Nussallee 12, 53115 Bonn (Germany); Institut für Theoretische Physik und Astronomie, Universität Würzburg,Emil-Hilb-Weg 22, 97074 Wuerzburg (Germany); Opferkuch, Toby [Bethe Center for Theoretical Physics & Physikalisches Institut der Universität Bonn, Nussallee 12, 53115 Bonn (Germany); Porod, Werner [Institut für Theoretische Physik und Astronomie, Universität Würzburg,Emil-Hilb-Weg 22, 97074 Wuerzburg (Germany); Staub, Florian [Theory Division, CERN,1211 Geneva 23 (Switzerland)

    2016-03-02

    We present a supersymmetric left-right model which predicts gauge coupling unification close to the string scale and extra vector bosons at the TeV scale. The subtleties in constructing a model which is in agreement with the measured quark masses and mixing for such a low left-right breaking scale are discussed. It is shown that in the constrained version of this model radiative breaking of the gauge symmetries is possible and a SM-like Higgs is obtained. Additional CP-even scalars of a similar mass or even much lighter are possible. The expected mass hierarchies for the supersymmetric states differ clearly from those of the constrained MSSM. In particular, the lightest down-type squark, which is a mixture of the sbottom and extra vector-like states, is always lighter than the stop. We also comment on the model’s capability to explain current anomalies observed at the LHC.

  7. Deadline-constrained workflow scheduling algorithms for Infrastructure as a Service Clouds

    NARCIS (Netherlands)

    Abrishami, S.; Naghibzadeh, M.; Epema, D.H.J.

    2013-01-01

    The advent of Cloud computing as a new model of service provisioning in distributed systems encourages researchers to investigate its benefits and drawbacks on executing scientific applications such as workflows. One of the most challenging problems in Clouds is workflow scheduling, i.e., the

  8. Business model elements impacting cloud computing adoption

    DEFF Research Database (Denmark)

    Bogataj, Kristina; Pucihar, Andreja; Sudzina, Frantisek

    The paper presents a proposed research framework for identification of business model elements impacting Cloud Computing Adoption. We provide a definition of main Cloud Computing characteristics, discuss previous findings on factors impacting Cloud Computing Adoption, and investigate technology a...

  9. Cosmogenic photons strongly constrain UHECR source models

    Directory of Open Access Journals (Sweden)

    van Vliet Arjen

    2017-01-01

    Full Text Available With the newest version of our Monte Carlo code for ultra-high-energy cosmic ray (UHECR propagation, CRPropa 3, the flux of neutrinos and photons due to interactions of UHECRs with extragalactic background light can be predicted. Together with the recently updated data for the isotropic diffuse gamma-ray background (IGRB by Fermi LAT, it is now possible to severely constrain UHECR source models. The evolution of the UHECR sources especially plays an important role in the determination of the expected secondary photon spectrum. Pure proton UHECR models are already strongly constrained, primarily by the highest energy bins of Fermi LAT’s IGRB, as long as their number density is not strongly peaked at recent times.

  10. Online constrained model-based reinforcement learning

    CSIR Research Space (South Africa)

    Van Niekerk, B

    2017-08-01

Full Text Available (title-page excerpt) Online Constrained Model-based Reinforcement Learning; Benjamin van Niekerk, School of Computer Science, University of the Witwatersrand, South Africa; Andreas Damianou, Amazon.com, Cambridge, UK; Benjamin Rosman, Council for Scientific and Industrial Research, and School... MULTIPLE SHOOTING. Using direct multiple shooting (Bock and Plitt, 1984), problem (1) can be transformed into a structured nonlinear program (NLP). First, the time horizon [t0, t0 + T] is partitioned into N equal subintervals [tk, tk+1] for k = 0...
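
The multiple-shooting construction quoted in the excerpt can be sketched as follows. This is a minimal illustration, not the authors' code: the dynamics function `f` and the forward-Euler integrator are placeholder assumptions standing in for the real system model and a proper ODE solver.

```python
import numpy as np

def multiple_shooting_nodes(t0, T, N):
    """Partition [t0, t0+T] into N equal subintervals [t_k, t_{k+1}]."""
    return np.linspace(t0, t0 + T, N + 1)

def integrate(f, s, u, t, t_next, steps=100):
    """Forward-Euler integration of ds/dt = f(s, u) over one subinterval."""
    dt = (t_next - t) / steps
    for _ in range(steps):
        s = s + dt * f(s, u)
    return s

def continuity_residuals(f, nodes, states, controls):
    """Matching conditions s_{k+1} - Phi(s_k, u_k) = 0, one per subinterval.
    Stacked with the cost, these become the constraints of the structured NLP."""
    res = []
    for k in range(len(nodes) - 1):
        pred = integrate(f, states[k], controls[k], nodes[k], nodes[k + 1])
        res.append(states[k + 1] - pred)
    return np.array(res)
```

An NLP solver would then treat the node states and controls as decision variables and drive these residuals to zero.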

  11. Constraining supergravity models from gluino production

    International Nuclear Information System (INIS)

    Barbieri, R.; Gamberini, G.; Giudice, G.F.; Ridolfi, G.

    1988-01-01

The branching ratios for gluino decays g̃ → q q̄ Χ and g̃ → g Χ into a stable undetected neutralino are computed as functions of the relevant parameters of the underlying supergravity theory. A simple way of constraining supergravity models from gluino production emerges. The effectiveness of hadronic versus e+e− colliders in the search for supersymmetry can be directly compared. (orig.)

  12. Synergistic multi-sensor and multi-frequency retrieval of cloud ice water path constrained by CloudSat collocations

    International Nuclear Information System (INIS)

    Islam, Tanvir; Srivastava, Prashant K.

    2015-01-01

The cloud ice water path (IWP) is one of the major parameters with a strong influence on Earth's radiation budget. Satellite sensors are recognized as valuable tools for measuring the IWP on a global scale. However, active sensors such as the Cloud Profiling Radar (CPR) onboard the CloudSat satellite have a better capability to measure the ice water content profile, and thus its vertical integral, the IWP, than any passive microwave (MW) or infrared (IR) sensor. In this study, we investigate the retrieval of IWP from MW and IR sensors, including the AMSU-A, MHS, and HIRS instruments onboard the N19 satellite, such that the retrieval is consistent with the CloudSat IWP estimates. This is achieved through collocations between the passive satellite measurements and CloudSat scenes. The potential benefit of synergistic multi-sensor, multi-frequency retrieval is investigated. Two modeling approaches are explored for the IWP retrieval: a generalized linear model (GLM) and a neural network (NN). The investigation has been carried out over both ocean and land surface types. The MW/IR synergy is found to retrieve IWP more accurately than the individual AMSU-A, MHS, or HIRS measurements. Both the GLM and NN approaches are able to exploit the synergistic retrievals. - Highlights: • MW/IR synergy is investigated for IWP retrieval. • The IWP retrieval is modeled using CloudSat collocations. • Two modeling approaches are explored – GLM and ANN. • MW/IR synergy performs better than the MW- or IR-only retrieval
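
The synergy idea can be illustrated with a toy least-squares fit. This is a hedged sketch only: the synthetic arrays below stand in for real CloudSat-collocated AMSU-A/MHS/HIRS predictors, and the identity-link Gaussian GLM is the simplest member of the GLM family the abstract mentions, not the authors' actual retrieval.

```python
import numpy as np

def fit_glm(X, y):
    """Ordinary least-squares fit of a linear (identity-link Gaussian) model."""
    X1 = np.column_stack([np.ones(len(X)), X])   # prepend an intercept column
    coef, *_ = np.linalg.lstsq(X1, y, rcond=None)
    return coef

def rmse(X, y, coef):
    """Root-mean-square error of the fitted model on (X, y)."""
    X1 = np.column_stack([np.ones(len(X)), X])
    return float(np.sqrt(np.mean((X1 @ coef - y) ** 2)))

# Synthetic stand-in for collocated training data (hypothetical numbers):
rng = np.random.default_rng(0)
mw = rng.normal(size=(500, 1))   # e.g. a microwave brightness-temperature predictor
ir = rng.normal(size=(500, 1))   # e.g. an infrared channel predictor
iwp = 2.0 * mw[:, 0] + 3.0 * ir[:, 0] + 0.1 * rng.normal(size=500)

both = np.hstack([mw, ir])       # the MW/IR "synergy" feature set
coef_syn = fit_glm(both, iwp)
```

When the target genuinely depends on both predictor families, the combined fit outperforms either single-sensor fit, which mirrors the abstract's finding.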

  13. Scheduling Multilevel Deadline-Constrained Scientific Workflows on Clouds Based on Cost Optimization

    Directory of Open Access Journals (Sweden)

    Maciej Malawski

    2015-01-01

Full Text Available This paper presents a cost optimization model for scheduling scientific workflows on IaaS clouds such as Amazon EC2 or RackSpace. We assume multiple IaaS clouds with heterogeneous virtual machine instances, with a limited number of instances per cloud and hourly billing. Input and output data are stored on a cloud object store such as Amazon S3. Applications are scientific workflows modeled as DAGs, as in the Pegasus Workflow Management System. We assume that tasks in the workflows are grouped into levels of identical tasks. Our model is specified using mathematical programming languages (AMPL and CMPL) and allows us to minimize the cost of workflow execution under deadline constraints. We present results obtained using our model and the benchmark workflows representing real scientific applications in a variety of domains. The data used for evaluation come from synthetic workflows and from general-purpose cloud benchmarks, as well as from data measured in our own experiments with Montage, an astronomical application, executed on the Amazon EC2 cloud. We indicate how this model can be used for scenarios that require resource planning for scientific workflows and their ensembles.
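
The paper solves this as a mathematical program in AMPL/CMPL. As a much simpler stand-in, the greedy sketch below illustrates the cost-versus-deadline trade-off with hourly billing; the instance names, speeds, prices, and per-level workloads are invented, and a greedy level-by-level choice is generally not optimal.

```python
import math

def cheapest_schedule(level_work, instances, deadline):
    """Toy level-by-level assignment: for each workflow level pick the cheapest
    instance type that still lets the remaining levels meet the deadline on the
    fastest type. Runtime = work / speed; cost = ceil(runtime hours) * price."""
    max_speed = max(i["speed"] for i in instances)
    plan, elapsed, cost = [], 0.0, 0.0
    for idx, w in enumerate(level_work):
        # lower bound on time needed for all remaining levels
        remaining_min = sum(v / max_speed for v in level_work[idx + 1:])
        feasible = [i for i in instances
                    if elapsed + w / i["speed"] + remaining_min <= deadline]
        if not feasible:
            return None  # deadline cannot be met at all
        choice = min(feasible,
                     key=lambda i: math.ceil(w / i["speed"]) * i["price"])
        elapsed += w / choice["speed"]
        cost += math.ceil(w / choice["speed"]) * choice["price"]
        plan.append(choice["name"])
    return plan, cost
```

Tightening the deadline forces the scheduler onto faster, more expensive instances, which is exactly the trade-off the AMPL/CMPL model optimizes globally.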

  14. Personal lifelong user model clouds

    DEFF Research Database (Denmark)

    Dolog, Peter; Kay, Judy; Kummerfeld, Bob

This paper explores an architecture for very long term user modelling, based upon personal user model clouds. These ensure that the individual's applications can access their model whenever it is needed. At the same time, the user can control the use of their user model. So, they can ensure it is accessed only when and where they wish, by applications that they wish. We consider the challenges of representing user models so that they can be reused by multiple applications. We indicate potential synergies between distributed and centralised user modelling architectures, proposing an architecture which combines both. Finally we discuss implications of our approach for consistency and freshness of the user model information.

  15. Spectral shifting strongly constrains molecular cloud disruption by radiation pressure on dust

    Science.gov (United States)

    Reissl, Stefan; Klessen, Ralf S.; Mac Low, Mordecai-Mark; Pellegrini, Eric W.

    2018-03-01

Aim. We aim to test the hypothesis that radiation pressure from young star clusters acting on dust is the dominant feedback agent disrupting the largest star-forming molecular clouds and thus regulating the star-formation process. Methods: We performed multi-frequency, 3D, radiative transfer calculations including both scattering and absorption and re-emission to longer wavelengths for model clouds with masses of 10⁴-10⁷ M⊙, containing embedded clusters with star formation efficiencies of 0.009-91%, and varying maximum grain sizes up to 200 μm. We calculated the ratio between radiative and gravitational forces to determine whether radiation pressure can disrupt clouds. Results: We find that radiation pressure acting on dust almost never disrupts star-forming clouds. Ultraviolet and optical photons from young stars to which the cloud is optically thick do not scatter much. Instead, they quickly get absorbed and re-emitted by the dust at thermal wavelengths. As the cloud is typically optically thin to far-infrared radiation, it promptly escapes, depositing little momentum in the cloud. The resulting spectrum is more narrowly peaked than the corresponding Planck function, and exhibits an extended tail at longer wavelengths. As the opacity drops significantly across the sub-mm and mm wavelength regime, the resulting radiative force is even smaller than for the corresponding single-temperature blackbody. We find that the force from radiation pressure falls below the strength of gravitational attraction by an order of magnitude or more for either Milky Way or moderate starburst conditions. Only for unrealistically large maximum grain sizes and star formation efficiencies far exceeding 50% do we find that the strength of radiation pressure can exceed gravity. Conclusions: We conclude that radiation pressure acting on dust does not disrupt star-forming molecular clouds in any Local Group galaxies. Radiation pressure thus appears unlikely to regulate the star
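
The central quantity here is the ratio of radiative to gravitational force. A naive single-absorption estimate of that ratio can be sketched as below; note that this crude upper bound ignores the reprocessing to far-infrared wavelengths that the paper shows drastically weakens the coupling, so it can exceed unity even where the full radiative-transfer result is an order of magnitude below gravity. The light-to-mass ratio `psi` and the characterization of self-gravity as G·M_cluster·M_gas/R² are order-of-magnitude assumptions for illustration.

```python
# Single-scattering estimate of radiation pressure vs. self-gravity for a
# spherical cloud hosting a young cluster. SI units throughout.
G = 6.674e-11          # gravitational constant [m^3 kg^-1 s^-2]
C = 2.998e8            # speed of light [m s^-1]
M_SUN = 1.989e30       # solar mass [kg]
L_SUN = 3.828e26       # solar luminosity [W]
PC = 3.086e16          # parsec [m]

def force_ratio(m_cloud_msun, sfe, r_pc, psi=1000.0):
    """Ratio F_rad / F_grav in the single-absorption limit.
    psi: assumed light-to-mass ratio of a young cluster [L_sun / M_sun].
    F_rad = L / c;  F_grav ~ G * M_cluster * M_gas / R^2."""
    m_star = sfe * m_cloud_msun * M_SUN
    m_gas = (1.0 - sfe) * m_cloud_msun * M_SUN
    lum = psi * (m_star / M_SUN) * L_SUN
    f_rad = lum / C
    f_grav = G * m_star * m_gas / (r_pc * PC) ** 2
    return f_rad / f_grav
```

The ratio grows with star formation efficiency, consistent with the abstract's finding that only efficiencies far above 50% let radiation pressure win even in favorable limits.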

  16. A Categorisation of Cloud Computing Business Models

    OpenAIRE

    Chang, Victor; Bacigalupo, David; Wills, Gary; De Roure, David

    2010-01-01

    This paper reviews current cloud computing business models and presents proposals on how organisations can achieve sustainability by adopting appropriate models. We classify cloud computing business models into eight types: (1) Service Provider and Service Orientation; (2) Support and Services Contracts; (3) In-House Private Clouds; (4) All-In-One Enterprise Cloud; (5) One-Stop Resources and Services; (6) Government funding; (7) Venture Capitals; and (8) Entertainment and Social Networking. U...

  17. Cloud forcing: A modeling perspective

    International Nuclear Information System (INIS)

    Potter, G.L.; Mobely, R.L.; Drach, R.S.; Corsetti, T.G.; Williams, D.N.; Slingo, J.M.

    1990-11-01

    Radiation fields from a perpetual July integration of a T106 version of the ECMWF operational model are used as surrogate observations of the radiation budget at the top of the atmosphere to illustrate various difficulties that modellers might face when trying to reconcile cloud radiation forcings derived from satellite observations with model-generated ones. Differences between the so-called Methods 1 and 2 of Cess and Potter (1987) and a variant Method 3 are addressed. Method 1 is shown to be the least robust of all methods, due to potential uncertainties related to persistent cloudiness, length of the period over which clear-sky conditions are looked for, biases in retrieved clear-sky quantities due to an insufficient sampling of the diurnal cycle. We advocate the use of Method 2 as the only unambiguous one to produce consistent radiative diagnostics for intercomparing model results. Impact of the three methods on the derived sensitivities and cloud feedbacks following an imposed change in sea surface temperature (used as a surrogate climate change) is discussed. 17 refs., 12 figs., 1 tab

  18. Reflected stochastic differential equation models for constrained animal movement

    Science.gov (United States)

    Hanks, Ephraim M.; Johnson, Devin S.; Hooten, Mevin B.

    2017-01-01

Movement for many animal species is constrained in space by barriers such as rivers, shorelines, or impassable cliffs. We develop an approach for modeling animal movement constrained in space by considering a class of constrained stochastic processes, reflected stochastic differential equations. Our approach generalizes existing methods for modeling unconstrained animal movement. We present methods for simulation and inference based on augmenting the constrained movement path with a latent unconstrained path and illustrate this augmentation with a simulation example and an analysis of telemetry data from a Steller sea lion (Eumetopias jubatus) in southeast Alaska.
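
A reflected SDE of the kind described can be simulated with an Euler-Maruyama step followed by a reflection back into the allowed region. This is a generic one-dimensional sketch with mirror reflection at two barriers, not the authors' inference method; the drift function and barrier values are placeholders.

```python
import numpy as np

def reflect(x, lo, hi):
    """Mirror a proposed position back into [lo, hi], repeating until inside."""
    while x < lo or x > hi:
        x = 2 * lo - x if x < lo else 2 * hi - x
    return x

def simulate_reflected_sde(x0, mu, sigma, lo, hi, dt, n, rng):
    """Euler-Maruyama simulation of dX = mu(X) dt + sigma dW,
    reflected at the barriers lo and hi (e.g. a shoreline)."""
    xs = [x0]
    for _ in range(n):
        z = rng.normal()
        prop = xs[-1] + mu(xs[-1]) * dt + sigma * np.sqrt(dt) * z
        xs.append(reflect(prop, lo, hi))
    return np.array(xs)
```

The unreflected proposals `prop` play the role of the latent unconstrained path in the paper's augmentation scheme.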

  19. Constrained optimization via simulation models for new product innovation

    Science.gov (United States)

    Pujowidianto, Nugroho A.

    2017-11-01

We consider the problem of constrained optimization where the decision makers aim to optimize the primary performance measure while constraining the secondary performance measures. This paper provides a brief overview of stochastically constrained optimization via discrete-event simulation. Most review papers tend to be methodology-based. This review attempts to be problem-based, as decision makers may have already decided on the problem formulation. We consider constrained optimization models as there are usually constraints on secondary performance measures as trade-offs in new product development. It starts by laying out different possible methods and the reasons for using constrained optimization via simulation models. It is then followed by a review of different simulation optimization approaches to addressing constrained optimization depending on the number of decision variables, the type of constraints, and the risk preferences of the decision makers in handling uncertainties.

  20. Modeling Incoherent Electron Cloud Effects

    International Nuclear Information System (INIS)

    Vay, Jean-Luc; Benedetto, E.; Fischer, W.; Franchetti, G.; Ohmi, K.; Schulte, D.; Sonnad, K.; Tomas, R.; Vay, J.-L.; Zimmermann, F.; Rumolo, G.; Pivi, M.; Raubenheimer, T.

    2007-01-01

    Incoherent electron effects could seriously limit the beam lifetime in proton or ion storage rings, such as LHC, SPS, or RHIC, or blow up the vertical emittance of positron beams, e.g., at the B factories or in linear-collider damping rings. Different approaches to modeling these effects each have their own merits and drawbacks. We describe several simulation codes which simplify the descriptions of the beam-electron interaction and of the accelerator structure in various different ways, and present results for a toy model of the SPS. In addition, we present evidence that for positron beams the interplay of incoherent electron-cloud effects and synchrotron radiation can lead to a significant increase in vertical equilibrium emittance. The magnitude of a few incoherent e+e- scattering processes is also estimated. Options for future code development are reviewed

  1. Seismic waveform modeling over cloud

    Science.gov (United States)

    Luo, Cong; Friederich, Wolfgang

    2016-04-01

With fast-growing computational technologies, numerical simulation of seismic wave propagation has achieved huge successes. Obtaining synthetic waveforms through numerical simulation receives an increasing amount of attention from seismologists. However, computational seismology is a data-intensive research field, and the numerical packages usually come with a steep learning curve. Users are expected to master a considerable amount of computer knowledge and data-processing skills. Training users to use the numerical packages and to correctly access and utilize the computational resources is a troublesome task. In addition, access to HPC is a common difficulty for many users. To solve these problems, a cloud-based solution dedicated to shallow seismic waveform modeling has been developed with state-of-the-art web technologies. It is a web platform integrating both software and hardware in a multilayer architecture: a well-designed SQL database serves as the data layer, while HPC and a dedicated pipeline for it form the business layer. Through this platform, users no longer need to compile and manipulate various packages on a local machine within a local network to perform a simulation. By providing professional access to the computational code through its interfaces and delivering our computational resources over the cloud, the platform lets users customize simulations at expert level, then submit and run jobs through it.

  2. Security model for VM in cloud

    Science.gov (United States)

    Kanaparti, Venkataramana; Naveen K., R.; Rajani, S.; Padmvathamma, M.; Anitha, C.

    2013-03-01

Cloud computing is a new approach that emerged to meet the ever-increasing demand for computing resources and to reduce the operational costs and capital expenditure of IT services. As this new way of computation allows data and applications to be stored away from the corporate server, it brings more security issues, such as virtualization security, distributed computing, application security, identity management, access control and authentication. Even though virtualization forms the basis for cloud computing, it poses many threats in securing the cloud. As most security threats lie at the virtualization layer in the cloud, we propose a new Security Model for Virtual Machine in Cloud (SMVC), in which every process is authenticated by a Trusted Agent (TA) in the hypervisor as well as in the VM. Our proposed model is designed to withstand attacks by unauthorized processes that pose a threat to applications related to data mining, OLAP systems, and image processing, which require huge resources in the cloud deployed on one or more VMs.

  3. Security Management Model in Cloud Computing Environment

    OpenAIRE

    Ahmadpanah, Seyed Hossein

    2016-01-01

In the cloud computing environment, as the number of cloud virtual machines (VMs) grows, virtual machine security and management face a giant challenge. In order to address the security issues of a cloud computing virtualization environment, this paper presents an efficient and dynamic VM security management model based on state migration and scheduling, and studies the virtual machine security architecture, based on AHP (Analytic Hierarchy Process) virtual machine de...

  4. MODEL FOR SEMANTICALLY RICH POINT CLOUD DATA

    Directory of Open Access Journals (Sweden)

    F. Poux

    2017-10-01

Full Text Available This paper proposes an interoperable model for managing high-dimensional point clouds while integrating semantics. Point clouds from sensors are a direct source of information physically describing the 3D state of the recorded environment. As such, they are an exhaustive representation of the real world at every scale: 3D reality-based spatial data. Their generation is increasingly fast, but processing routines and data models lack the knowledge to reason from information extraction rather than interpretation. The enhanced smart point cloud model developed here allows intelligence to be brought to point clouds via 3 connected meta-models, while linking available knowledge and classification procedures that permit semantic injection. Interoperability drives the model's adaptation to potentially many applications through specialized domain ontologies. A first prototype is implemented in Python and a PostgreSQL database and allows semantic and spatial concepts to be combined for basic hybrid queries on different point clouds.

  5. Model for Semantically Rich Point Cloud Data

    Science.gov (United States)

    Poux, F.; Neuville, R.; Hallot, P.; Billen, R.

    2017-10-01

This paper proposes an interoperable model for managing high-dimensional point clouds while integrating semantics. Point clouds from sensors are a direct source of information physically describing the 3D state of the recorded environment. As such, they are an exhaustive representation of the real world at every scale: 3D reality-based spatial data. Their generation is increasingly fast, but processing routines and data models lack the knowledge to reason from information extraction rather than interpretation. The enhanced smart point cloud model developed here allows intelligence to be brought to point clouds via 3 connected meta-models, while linking available knowledge and classification procedures that permit semantic injection. Interoperability drives the model's adaptation to potentially many applications through specialized domain ontologies. A first prototype is implemented in Python and a PostgreSQL database and allows semantic and spatial concepts to be combined for basic hybrid queries on different point clouds.
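
A hybrid semantic-plus-spatial query of the kind described can be sketched in a few lines. This is a purely illustrative in-memory stand-in for the Python/PostgreSQL prototype; the point records, the `sem` class labels, and the box coordinates are invented.

```python
def hybrid_query(points, bbox, wanted_class):
    """Return points inside an axis-aligned box AND carrying a given
    semantic class (the 'hybrid' combination of spatial and semantic filters)."""
    (xmin, ymin, zmin), (xmax, ymax, zmax) = bbox
    return [p for p in points
            if p["sem"] == wanted_class
            and xmin <= p["xyz"][0] <= xmax
            and ymin <= p["xyz"][1] <= ymax
            and zmin <= p["xyz"][2] <= zmax]

# Toy point cloud with per-point semantics attached (hypothetical data):
cloud = [
    {"xyz": (0.2, 0.1, 1.5), "sem": "wall"},
    {"xyz": (0.3, 0.4, 0.0), "sem": "floor"},
    {"xyz": (5.0, 5.0, 1.0), "sem": "wall"},
]
```

In the actual prototype the same filter would be expressed as a SQL query over spatially indexed tables rather than a Python list comprehension.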

  6. Multi-scale Modeling of Arctic Clouds

    Science.gov (United States)

    Hillman, B. R.; Roesler, E. L.; Dexheimer, D.

    2017-12-01

The presence and properties of clouds are critically important to the radiative budget in the Arctic, but clouds are notoriously difficult to represent in global climate models (GCMs). The challenge stems partly from a disconnect in the scales at which these models are formulated and the scale of the physical processes important to the formation of clouds (e.g., convection and turbulence). Because of this, these processes are parameterized in large-scale models. Over the past decades, new approaches have been explored in which a cloud system resolving model (CSRM), or in the extreme a large eddy simulation (LES), is embedded into each gridcell of a traditional GCM to replace the cloud and convective parameterizations and to explicitly simulate more of these important processes. This approach is attractive in that it allows for more explicit simulation of small-scale processes while also allowing for interaction between the small and large-scale processes. The goal of this study is to quantify the performance of this framework in simulating Arctic clouds relative to a traditional global model, and to explore the limitations of such a framework using coordinated high-resolution (eddy-resolving) simulations. Simulations from the global model are compared with satellite retrievals of cloud fraction partitioned by cloud phase from CALIPSO, and limited-area LES simulations are compared with ground-based and tethered-balloon measurements from the ARM Barrow and Oliktok Point measurement facilities.

  7. Evaluating and Improving Cloud Processes in the Multi-Scale Modeling Framework

    Energy Technology Data Exchange (ETDEWEB)

    Ackerman, Thomas P. [Univ. of Washington, Seattle, WA (United States)

    2015-03-01

The research performed under this grant was intended to improve the embedded cloud model in the Multi-scale Modeling Framework (MMF) for convective clouds by using a 2-moment microphysics scheme rather than the single-moment scheme used in all the MMF runs to date. The technical report and associated documents describe the results of testing the cloud-resolving model with fixed boundary conditions and the evaluation of model results with data. The overarching conclusion is that such model evaluations are problematic because errors in the forcing fields control the results so strongly that variations in parameterization values cannot be usefully constrained.

  8. Slow logarithmic relaxation in models with hierarchically constrained dynamics

    OpenAIRE

    Brey, J. J.; Prados, A.

    2000-01-01

    A general kind of models with hierarchically constrained dynamics is shown to exhibit logarithmic anomalous relaxation, similarly to a variety of complex strongly interacting materials. The logarithmic behavior describes most of the decay of the response function.

  9. Modeling microwave/electron-cloud interaction

    International Nuclear Information System (INIS)

    Mattes, M; Sorolla, E; Zimmermann, F

    2013-01-01

Starting from the separate codes BI-RME and ECLOUD or PyECLOUD, we are developing a novel joint simulation tool, which models the combined effect of a charged particle beam and of microwaves on an electron cloud. Possible applications include the degradation of microwave transmission in telecommunication satellites by electron clouds; the microwave-transmission techniques being used in particle accelerators for the purpose of electron-cloud diagnostics; the microwave emission by the electron cloud itself in the presence of a magnetic field; and the possible suppression of electron-cloud formation in an accelerator by injecting microwaves of suitable amplitude and frequency. A few early simulation results are presented. (author)

  10. Constrained KP models as integrable matrix hierarchies

    International Nuclear Information System (INIS)

    Aratyn, H.; Ferreira, L.A.; Gomes, J.F.; Zimerman, A.H.

    1997-01-01

We formulate the constrained KP hierarchy (denoted by cKP_{K+1,M}) as an affine ŝl(M+K+1) matrix integrable hierarchy generalizing the Drinfeld–Sokolov hierarchy. Using an algebraic approach, including the graded structure of the generalized Drinfeld–Sokolov hierarchy, we are able to find several new universal results valid for the cKP hierarchy. In particular, our method yields a closed expression for the second bracket obtained through Dirac reduction of any untwisted affine Kac–Moody current algebra. An explicit example is given for the case ŝl(M+K+1), for which a closed expression for the general recursion operator is also obtained. We show how isospectral flows are characterized and grouped according to the semisimple non-regular element E of sl(M+K+1) and the content of the center of the kernel of E. © 1997 American Institute of Physics

  11. Constraining the Dust Opacity Law in Three Small and Isolated Molecular Clouds

    Energy Technology Data Exchange (ETDEWEB)

    Webb, K. A.; Thanjavur, K. [Department of Physics and Astronomy, 3800 Finnerty Road, University of Victoria, Victoria, BC, V8P 5C2 (Canada); Di Francesco, J. [National Research Council of Canada, Herzberg Institute of Astrophysics, 5071 West Saanich Road, Victoria, BC, V9E 2E7 (Canada); Sadavoy, S. [Harvard-Smithsonian Center for Astrophysics, 60 Garden Street, Cambridge, MA 02138 (United States); Launhardt, R.; Vicente, J. Abreu; Kainulainen, J. [Max-Planck-Institut für Astronomy, Königstuhl 17, D-69117, Heidelberg (Germany); Shirley, Y. [Steward Observatory, 933 North Cherry Avenue, Tucson, AZ 85721 (United States); Stutz, A., E-mail: kawebb@uvic.ca [Departmento de Astronomìa, Facultad Ciencias Físicas y Matemáticas, Universidad de Concepción, Av. Esteban Iturra s/n Barro Universitario, Casilla 160-C, Concepción (Chile)

    2017-11-01

Density profiles of isolated cores derived from thermal dust continuum emission rely on models of dust properties, such as mass opacity, that are poorly constrained. With complementary measures from near-infrared extinction maps, we can assess the reliability of commonly used dust models. In this work, we compare Herschel-derived maps of the optical depth with equivalent maps derived from CFHT WIRCAM near-infrared observations for three isolated cores: CB 68, L 429, and L 1552. We assess the dust opacities provided from four models: OH1a, OH5a, Orm1, and Orm4. Although the consistency of the models differs between the three sources, the results suggest that the optical properties of dust in the envelopes of the cores are best described by either silicate and bare graphite grains (e.g., Orm1) or carbonaceous grains with some coagulation and either thin or no ice mantles (e.g., OH5a). None of the models, however, individually produced the most consistent optical depth maps for every source. The results suggest that either the dust in the cores is not well-described by any one dust property model, the application of the dust models cannot be extended beyond the very center of the cores, or more complex SED fitting functions are necessary.

  12. Constrained bayesian inference of project performance models

    OpenAIRE

    Sunmola, Funlade

    2013-01-01

Project performance models play an important role in the management of project success. When used for monitoring projects, they can offer predictive ability, such as indications of possible delivery problems. Approaches for monitoring project performance rely on available project information, including restrictions imposed on the project, particularly the constraints of cost, quality, scope and time. We study in this paper a Bayesian inference methodology for project performance modelling in ...

  13. The simplified models approach to constraining supersymmetry

    Energy Technology Data Exchange (ETDEWEB)

    Perez, Genessis [Institut fuer Theoretische Physik, Karlsruher Institut fuer Technologie (KIT), Wolfgang-Gaede-Str. 1, 76131 Karlsruhe (Germany); Kulkarni, Suchita [Laboratoire de Physique Subatomique et de Cosmologie, Universite Grenoble Alpes, CNRS IN2P3, 53 Avenue des Martyrs, 38026 Grenoble (France)

    2015-07-01

The interpretation of the experimental results at the LHC is model-dependent, which implies that the searches provide limited constraints on scenarios such as supersymmetry (SUSY). The Simplified Models Spectra (SMS) framework used by the ATLAS and CMS collaborations is useful to overcome this limitation. The SMS framework involves a small number of parameters (all the properties are reduced to the mass spectrum, the production cross section and the branching ratio) and hence is more generic than presenting results in terms of soft parameters. In our work, the SMS framework was used to test the Natural SUSY (NSUSY) scenario. To accomplish this task, two automated tools (SModelS and Fastlim) were used to decompose the NSUSY parameter space in terms of simplified models and to confront the theoretical predictions against the experimental results. The achievements of both tools, as well as their strengths and limitations, are presented here for the NSUSY scenario.

  14. Constraining composite Higgs models using LHC data

    Science.gov (United States)

    Banerjee, Avik; Bhattacharyya, Gautam; Kumar, Nilanjana; Ray, Tirtha Sankar

    2018-03-01

We systematically study the modifications in the couplings of the Higgs boson, when identified as a pseudo Nambu-Goldstone boson of a strong sector, in the light of LHC Run 1 and Run 2 data. For the minimal coset SO(5)/SO(4) of the strong sector, we focus on scenarios where the standard model left- and right-handed fermions (specifically, the top and bottom quarks) are either in the 5 or in the symmetric 14 representation of SO(5). Going beyond the minimal 5L-5R representation, to what we call here the `extended' models, we observe that it is possible to construct more than one invariant in the Yukawa sector. In such models, the Yukawa couplings of the 125 GeV Higgs boson undergo nontrivial modifications. The pattern of such modifications can be encoded in a generic phenomenological Lagrangian which applies to a wide class of such models. We show that the presence of more than one Yukawa invariant allows the gauge and Yukawa coupling modifiers to be decorrelated in the `extended' models, and this decorrelation leads to a relaxation of the bound on the compositeness scale (f ≥ 640 GeV at 95% CL, as compared to f ≥ 1 TeV for the minimal 5L-5R representation model). We also study the Yukawa coupling modifications in the context of the next-to-minimal strong sector coset SO(6)/SO(5) for fermion embedding up to representations of dimension 20. While quantifying our observations, we have performed a detailed χ² fit using the ATLAS and CMS combined Run 1 and available Run 2 data.

  15. Dark matter, constrained minimal supersymmetric standard model, and lattice QCD.

    Science.gov (United States)

    Giedt, Joel; Thomas, Anthony W; Young, Ross D

    2009-11-13

    Recent lattice measurements have given accurate estimates of the quark condensates in the proton. We use these results to significantly improve the dark matter predictions in benchmark models within the constrained minimal supersymmetric standard model. The predicted spin-independent cross sections are at least an order of magnitude smaller than previously suggested and our results have significant consequences for dark matter searches.

  16. Provable Data Possession of Resource-constrained Mobile Devices in Cloud Computing

    OpenAIRE

    Jian Yang; Haihang Wang; Jian Wang; Chengxiang Tan; Dingguo Yu

    2011-01-01

    Benefiting from cloud storage services, users can avoid the cost of buying expensive storage and application servers, as well as of deploying and maintaining applications. Meanwhile, they lose physical control of their data. Effective methods are therefore needed to verify the correctness of the data stored at cloud servers, which is the research issue addressed by Provable Data Possession (PDP). The most important features in PDP are: 1) supporting for public, unlimited numbers of times of verificat...

  17. Southeast Atlantic Cloud Properties in a Multivariate Statistical Model - How Relevant is Air Mass History for Local Cloud Properties?

    Science.gov (United States)

    Fuchs, Julia; Cermak, Jan; Andersen, Hendrik

    2017-04-01

    This study aims at untangling the impacts of external dynamics and local conditions on cloud properties in the Southeast Atlantic (SEA) by combining satellite and reanalysis data using multivariate statistics. The understanding of clouds and their determinants at different scales is important for constraining the Earth's radiative budget, and thus prominent in climate-system research. In this study, SEA stratocumulus cloud properties are observed not only as the result of local environmental conditions but also as affected by external dynamics and spatial origins of air masses entering the study area. In order to assess to what extent cloud properties are impacted by aerosol concentration, air mass history, and meteorology, a multivariate approach is conducted using satellite observations of aerosol and cloud properties (MODIS, SEVIRI), information on aerosol species composition (MACC) and meteorological context (ERA-Interim reanalysis). To account for the often-neglected but important role of air mass origin, information on air mass history based on HYSPLIT modeling is included in the statistical model. This multivariate approach is intended to lead to a better understanding of the physical processes behind observed stratocumulus cloud properties in the SEA.

  18. Learning a constrained conditional random field for enhanced segmentation of fallen trees in ALS point clouds

    Science.gov (United States)

    Polewski, Przemyslaw; Yao, Wei; Heurich, Marco; Krzystek, Peter; Stilla, Uwe

    2018-06-01

    In this study, we present a method for improving the quality of automatic single fallen tree stem segmentation in ALS data by applying a specialized constrained conditional random field (CRF). The entire processing pipeline is composed of two steps. First, short stem segments of equal length are detected and a subset of them is selected for further processing, while in the second step the chosen segments are merged to form entire trees. The first step is accomplished using the specialized CRF defined on the space of segment labelings, capable of finding segment candidates which are easier to merge subsequently. To achieve this, the CRF considers not only the features of every candidate individually, but incorporates pairwise spatial interactions between adjacent segments into the model. In particular, pairwise interactions include a collinearity/angular deviation probability which is learned from training data as well as the ratio of spatial overlap, whereas unary potentials encode a learned probabilistic model of the laser point distribution around each segment. Each of these components enters the CRF energy with its own balance factor. To process previously unseen data, we first calculate the subset of segments for merging on a grid of balance factors by minimizing the CRF energy. Then, we perform the merging and rank the balance configurations according to the quality of their resulting merged trees, obtained from a learned tree appearance model. The final result is derived from the top-ranked configuration. We tested our approach on 5 plots from the Bavarian Forest National Park using reference data acquired in a field inventory. Compared to our previous segment selection method without pairwise interactions, an increase in detection correctness and completeness of up to 7 and 9 percentage points, respectively, was observed.
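
    The segment-selection step described above can be sketched as a small CRF-style energy minimization over binary segment labels (keep/discard). The potentials, weights, and brute-force minimizer below are illustrative assumptions for a tiny problem, not the paper's learned model:

    ```python
    from itertools import product

    def crf_energy(labels, unary, pairs, w_unary=1.0, w_pair=1.0):
        """Energy of a binary labeling: unary potentials per segment plus
        pairwise potentials on adjacent segment pairs (illustrative form)."""
        e = w_unary * sum(unary[i][y] for i, y in enumerate(labels))
        for (i, j), cost in pairs.items():
            # pairwise term is only active when both segments are kept
            if labels[i] == 1 and labels[j] == 1:
                e += w_pair * cost
        return e

    def min_energy_labeling(unary, pairs, **weights):
        """Exhaustive minimizer; real CRF inference would use graph cuts etc."""
        n = len(unary)
        return min(product([0, 1], repeat=n),
                   key=lambda y: crf_energy(y, unary, pairs, **weights))

    # toy problem: 3 stem-segment candidates
    unary = [(0.5, 0.1), (0.5, 0.2), (0.5, 0.9)]  # cost of labeling i as 0 / 1
    pairs = {(0, 1): -0.3, (1, 2): 0.4}           # negative = collinear bonus
    best = min_energy_labeling(unary, pairs)      # -> (1, 1, 0)
    ```

    The balance factors of the paper correspond to `w_unary`/`w_pair`, swept over a grid before the merging step.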

  19. Physics constrained nonlinear regression models for time series

    International Nuclear Information System (INIS)

    Majda, Andrew J; Harlim, John

    2013-01-01

    A central issue in contemporary science is the development of data driven statistical nonlinear dynamical models for time series of partial observations of nature or a complex physical model. It has been established recently that ad hoc quadratic multi-level regression (MLR) models can have finite-time blow up of statistical solutions and/or pathological behaviour of their invariant measure. Here a new class of physics constrained multi-level quadratic regression models are introduced, analysed and applied to build reduced stochastic models from data of nonlinear systems. These models have the advantages of incorporating memory effects in time as well as the nonlinear noise from energy conserving nonlinear interactions. The mathematical guidelines for the performance and behaviour of these physics constrained MLR models as well as filtering algorithms for their implementation are developed here. Data driven applications of these new multi-level nonlinear regression models are developed for test models involving a nonlinear oscillator with memory effects and the difficult test case of the truncated Burgers–Hopf model. These new physics constrained quadratic MLR models are proposed here as process models for Bayesian estimation through Markov chain Monte Carlo algorithms of low frequency behaviour in complex physical data. (paper)
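
    Schematically, and with notation simplified relative to the paper, a physics-constrained quadratic regression model takes the form

    ```latex
    \frac{du}{dt} = (L + D)\,u \;+\; B(u,u) \;+\; F(t) \;+\; \sigma\,\dot{W}(t),
    \qquad u \cdot B(u,u) = 0,
    ```

    where $L$ is skew-symmetric, $D$ is negative-definite damping, $F$ is forcing, $\dot{W}$ is white noise, and the constraint $u \cdot B(u,u) = 0$ is the energy-conserving property of the quadratic nonlinearity that rules out the finite-time blow-up of ad hoc MLR models.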

  20. OPTIMIZED PARTICLE SWARM OPTIMIZATION BASED DEADLINE CONSTRAINED TASK SCHEDULING IN HYBRID CLOUD

    Directory of Open Access Journals (Sweden)

    Dhananjay Kumar

    2016-01-01

    Cloud Computing is a dominant way of sharing computing resources that can be configured and provisioned easily. Task scheduling in hybrid cloud is a challenge, as it suffers from producing the best QoS (Quality of Service) when there is a high demand. In this paper, a new resource allocation algorithm is proposed to find the best external cloud provider when the intermediate provider’s resources aren’t enough to satisfy the customer’s demand. The proposed algorithm, called Optimized Particle Swarm Optimization (OPSO), combines two metaheuristic algorithms, namely Particle Swarm Optimization and Ant Colony Optimization (ACO). These metaheuristic algorithms are used for optimization in the search space of the required solution, to find the best resource from the pool of resources and to obtain maximum profit even when the number of tasks submitted for execution is very high. This optimization is performed to allocate job requests to internal and external cloud providers to obtain maximum profit. It helps to improve system performance by improving CPU utilization and handling multiple requests at the same time. The simulation results show that OPSO yields 0.1%–5% more profit to the intermediate cloud provider compared with standard PSO and ACO algorithms, and it also increases CPU utilization by 0.1%.
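
    As a rough illustration of the PSO half of such a scheduler, the sketch below minimizes a toy allocation cost with a standard global-best particle swarm. The cost function, bounds and hyper-parameters are invented for the example and are not taken from the paper:

    ```python
    import random

    def pso(cost, dim, n_particles=20, iters=100, w=0.7, c1=1.5, c2=1.5,
            lo=0.0, hi=10.0, seed=42):
        """Minimal particle swarm optimizer (global-best topology)."""
        rng = random.Random(seed)
        xs = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
        vs = [[0.0] * dim for _ in range(n_particles)]
        pbest = [x[:] for x in xs]
        pbest_f = [cost(x) for x in xs]
        g = min(range(n_particles), key=lambda i: pbest_f[i])
        gbest, gbest_f = pbest[g][:], pbest_f[g]
        for _ in range(iters):
            for i, x in enumerate(xs):
                for d in range(dim):
                    r1, r2 = rng.random(), rng.random()
                    vs[i][d] = (w * vs[i][d]
                                + c1 * r1 * (pbest[i][d] - x[d])
                                + c2 * r2 * (gbest[d] - x[d]))
                    x[d] = min(hi, max(lo, x[d] + vs[i][d]))  # keep in bounds
                f = cost(x)
                if f < pbest_f[i]:
                    pbest[i], pbest_f[i] = x[:], f
                    if f < gbest_f:
                        gbest, gbest_f = x[:], f
        return gbest, gbest_f

    # Toy scheduling cost: squared deviation of allocated capacity from demands
    demands = [3.0, 7.0, 5.0]
    best, best_cost = pso(lambda x: sum((a - d) ** 2
                                        for a, d in zip(x, demands)), dim=3)
    ```

    OPSO would additionally refine candidate allocations with an ACO pheromone step; that hybridization is not reproduced here.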

  1. New photoionization models of intergalactic clouds

    Science.gov (United States)

    Donahue, Megan; Shull, J. M.

    1991-01-01

    New photoionization models of optically thin, low-density intergalactic gas at constant pressure, photoionized by QSOs, are presented. All ion stages of H, He, C, N, O, Si, and Fe, plus H2, are modeled, and the column density ratios of clouds at specified values of the ionization parameter n_gamma/n_H and cloud metallicity are predicted. If Ly-alpha clouds are much cooler than the previously assumed value, 30,000 K, the ionization parameter must be very low, even with the cooling contribution of a trace component of molecules. If the clouds cool below 6000 K, their final equilibrium must be below 3000 K, owing to the lack of a stable phase between 6000 and 3000 K. If it is assumed that the clouds are being irradiated by an EUV power-law continuum typical of QSOs, with J_0 = 10^-21 erg s^-1 cm^-2 Hz^-1, the derived cloud thicknesses along the line of sight are much smaller than would be expected from shocks, thermal instabilities, or gravitational collapse.

  2. Reconciliation of the cloud computing model with US federal electronic health record regulations.

    Science.gov (United States)

    Schweitzer, Eugene J

    2012-01-01

    Cloud computing refers to subscription-based, fee-for-service utilization of computer hardware and software over the Internet. The model is gaining acceptance for business information technology (IT) applications because it allows capacity and functionality to increase on the fly without major investment in infrastructure, personnel or licensing fees. Large IT investments can be converted to a series of smaller operating expenses. Cloud architectures could potentially be superior to traditional electronic health record (EHR) designs in terms of economy, efficiency and utility. A central issue for EHR developers in the US is that these systems are constrained by federal regulatory legislation and oversight. These laws focus on security and privacy, which are well-recognized challenges for cloud computing systems in general. EHRs built with the cloud computing model can achieve acceptable privacy and security through business associate contracts with cloud providers that specify compliance requirements, performance metrics and liability sharing.

  3. Reconciliation of the cloud computing model with US federal electronic health record regulations

    Science.gov (United States)

    2011-01-01

    Cloud computing refers to subscription-based, fee-for-service utilization of computer hardware and software over the Internet. The model is gaining acceptance for business information technology (IT) applications because it allows capacity and functionality to increase on the fly without major investment in infrastructure, personnel or licensing fees. Large IT investments can be converted to a series of smaller operating expenses. Cloud architectures could potentially be superior to traditional electronic health record (EHR) designs in terms of economy, efficiency and utility. A central issue for EHR developers in the US is that these systems are constrained by federal regulatory legislation and oversight. These laws focus on security and privacy, which are well-recognized challenges for cloud computing systems in general. EHRs built with the cloud computing model can achieve acceptable privacy and security through business associate contracts with cloud providers that specify compliance requirements, performance metrics and liability sharing. PMID:21727204

  4. Comparison of cloud optical depth and cloud mask applying BRDF model-based background surface reflectance

    Science.gov (United States)

    Kim, H. W.; Yeom, J. M.; Woo, S. H.

    2017-12-01

    Over thin cloud regions, a satellite simultaneously detects the reflectance from thin clouds and the land surface. Since the mixed reflectance is not the exact cloud information, the background surface reflectance should be eliminated to accurately distinguish thin clouds such as cirrus. In previous research, Kim et al. (2017) developed a cloud masking algorithm using the Geostationary Ocean Color Imager (GOCI), one of the main instruments on the Communication, Ocean and Meteorological Satellite (COMS). Although GOCI has only 8 spectral channels covering visible and near-infrared wavelengths, the cloud masking result is quantitatively reasonable when compared with the MODIS cloud mask (Collection 6 MYD35). In particular, we noticed that this cloud masking algorithm is well suited to thin cloud detection, as shown by validation against Cloud-Aerosol Lidar and Infrared Pathfinder Satellite Observation (CALIPSO) data, because the method concentrates on eliminating background surface effects from the top-of-atmosphere (TOA) reflectance. Applying the difference between the TOA reflectance and the bi-directional reflectance distribution function (BRDF) model-based background surface reflectance, both thick and thin cloud areas can be discriminated without the infrared channels that are mostly used for detecting clouds. Moreover, when the cloud mask result was used as input for simulating the BRDF model, and the optimized BRDF model-based surface reflectance was used for the optimized cloud masking, the probability of detection (POD) was higher than the POD of the original cloud mask. In this study, we examine the correlation between cloud optical depth (COD) and the cloud mask result. Cloud optical depth mostly depends on the cloud thickness, the characteristics of the contents, and the size of cloud contents. COD ranges from less than 0.1 for thin clouds to over 1000 for huge cumulus due to scattering by droplets. With

  5. A Review of Cloud Business Models and Sustainability

    OpenAIRE

    Chang, Victor; Wills, Gary; De Roure, David

    2010-01-01

    This paper reviews current cloud computing business models and presents proposals on how organisations can achieve sustainability by adopting appropriate models. Using the Jericho Forum's Cloud Cube Model (CCM), we classify cloud computing business models into eight types: (1) Service Provider and Service Orientation; (2) Support and Services Contracts; (3) In-House Private Clouds; (4) All-In-One Enterprise Cloud; (5) One-Stop Resources and Services; (6) Government Funding; (7) Venture Capita...

  6. Macroscopic modelization of the cloud elasticity*

    Directory of Open Access Journals (Sweden)

    Etancelin J.-M.

    2013-12-01

    In order to achieve its promise of providing information technologies (IT) on demand, cloud computing needs to rely on a mathematical model capable of directing IT resources on and off according to a demand pattern, to provide true elasticity. This article provides a first method to reach this goal using a “fluid type” partial differential equations model. On the one hand, it examines the question of service time optimization for the simultaneous satisfaction of the cloud consumer and provider. On the other hand, it models a way to deliver resources according to the real-time capacity of the cloud, which depends on parameters such as burst requests and application timeouts. All these questions are illustrated via an implicit finite volume scheme.
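
    The abstract does not reproduce the "fluid type" model itself. As a generic illustration of the finite-volume machinery it mentions, here is a minimal explicit upwind step for a 1-D transport equation of a "load density"; the paper uses an implicit scheme, so everything below is an assumption made purely for illustration:

    ```python
    def upwind_step(rho, v, dx, dt):
        """One explicit upwind finite-volume step for
        d(rho)/dt + d(v*rho)/dx = 0, assuming v > 0.
        rho holds cell-averaged values; inflow boundary flux is zero."""
        flux = [v * r for r in rho]               # flux leaving each cell
        new = rho[:]
        for i in range(len(rho)):
            f_in = flux[i - 1] if i > 0 else 0.0  # inflow from the left
            new[i] = rho[i] - dt / dx * (flux[i] - f_in)
        return new

    # a unit load pulse advects to the right (CFL number v*dt/dx = 0.5)
    rho1 = upwind_step([1.0, 0.0, 0.0], v=1.0, dx=1.0, dt=0.5)  # -> [0.5, 0.5, 0.0]
    ```

    An implicit variant would instead solve a linear system per step, allowing larger time steps.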

  7. Cloud condensation nuclei in Western Colorado: Observations and model predictions

    Science.gov (United States)

    Ward, Daniel Stewart

    Variations in the warm cloud-active portion of atmospheric aerosols, or cloud condensation nuclei (CCN), have been shown to impact cloud droplet number concentration and subsequently cloud and precipitation processes. This issue carries special significance in western Colorado, where a significant portion of the region's water resources is supplied by precipitation from winter-season, orographic clouds, which are particularly sensitive to variations in CCN. Temporal and spatial variations in CCN in western Colorado were investigated using a combination of observations and a new method for modeling CCN. As part of the Inhibition of Snowfall by Pollution Aerosols (ISPA-III) field campaign, total particle and CCN number concentration were measured for a 24-day period in Mesa Verde National Park, climatologically upwind of the San Juan Mountains. These data were combined with CCN observations from Storm Peak Lab (SPL) in northwestern Colorado and from the King Air platform, flying north to south along the Western Slope. Altogether, the sampled aerosols were characteristic of a rural continental environment, and the cloud-active portion varied slowly in time and little in space. Estimates of the κ hygroscopicity parameter indicated consistently low aerosol hygroscopicity, typical of organic aerosol species. The modeling approach included the addition of prognostic CCN to the Regional Atmospheric Modeling System (RAMS). The RAMS droplet activation scheme was altered using parcel model simulations to include variations in aerosol hygroscopicity, represented by κ. Analysis of the parcel model output and a supplemental sensitivity study showed that model CCN will be sensitive to changes in aerosol hygroscopicity, but only for conditions of low supersaturation or small particle sizes. Aerosol number, size distribution median radius, and hygroscopicity (represented by the κ parameter) in RAMS were constrained by nudging to forecasts of these quantities from the Weather

  8. Modelling cloud data for prototype manufacturing

    NARCIS (Netherlands)

    Liu, G.H.; Wong, Y.S.; Zhang, Y.F.; Loh, H.T.

    2003-01-01

    In this paper, the authors have developed a novel method to integrate reverse engineering (RE) and rapid prototyping (RP). Unorganised cloud data are directly sliced and modelled with two-dimensional (2D) cross-sections. Based on such a 2D CAD model, the data points are directly converted into RP

  9. Frequency Constrained ShiftCP Modeling of Neuroimaging Data

    DEFF Research Database (Denmark)

    Mørup, Morten; Hansen, Lars Kai; Madsen, Kristoffer H.

    2011-01-01

    The shift-invariant multi-linear model based on the CandeComp/PARAFAC (CP) model, denoted ShiftCP, has proven useful for the modeling of latency changes in trial-based neuroimaging data [17]. In order to facilitate component interpretation, we presently extend the ShiftCP model such that the extracted components can be constrained to pertain to predefined frequency ranges such as alpha, beta and gamma activity. To infer the number of components in the model, we propose to apply automatic relevance determination by imposing priors that define the range of variation of each component of the ShiftCP model.
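
    Schematically, and with indices simplified relative to [17], the ShiftCP model approximates the signal of channel $n$ at time $t$ as a sum of $D$ component time courses with per-channel latency shifts:

    ```latex
    x_{n}(t) \;\approx\; \sum_{d=1}^{D} a_{nd}\, s_d\!\left(t - \tau_{nd}\right) + \varepsilon_{n}(t),
    ```

    where $a_{nd}$ are loadings, $s_d$ the component waveforms, $\tau_{nd}$ the estimated shifts, and $\varepsilon_n$ noise. The frequency constraint of this paper restricts the spectral support of each $s_d$.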

  10. Coupled fvGCM-GCE Modeling System, 3D Cloud-Resolving Model and Cloud Library

    Science.gov (United States)

    Tao, Wei-Kuo

    2005-01-01

    Recent GEWEX Cloud System Study (GCSS) model comparison projects have indicated that cloud-resolving models (CRMs) agree with observations better than traditional single-column models in simulating various types of clouds and cloud systems from different geographic locations. Current and future NASA satellite programs can provide cloud, precipitation, aerosol and other data at very fine spatial and temporal scales. It requires a coupled global circulation model (GCM) and cloud-scale model (termed a super-parameterization or multi-scale modeling framework, MMF) to use these satellite data to improve the understanding of the physical processes that are responsible for the variation in global and regional climate and hydrological systems. The use of a GCM will enable global coverage, and the use of a CRM will allow for better and more sophisticated physical parameterization. NASA satellite and field campaign cloud-related datasets can provide initial conditions as well as validation for both the MMF and CRMs. A seed fund is available at NASA Goddard to build an MMF based on the 2D Goddard Cumulus Ensemble (GCE) model and the Goddard finite volume general circulation model (fvGCM). A prototype MMF is being developed and production runs will be conducted at the beginning of 2005. In this talk, I will present: (1) a brief review of the GCE model and its applications to precipitation processes, (2) the Goddard MMF and the major differences between the two existing MMFs (CSU MMF and Goddard MMF), (3) a cloud library generated by the Goddard MMF and the 3D GCE model, and (4) a brief discussion on developing a global cloud simulator with the GCE model.

  11. Modeling constrained sintering of bi-layered tubular structures

    DEFF Research Database (Denmark)

    Tadesse Molla, Tesfaye; Kothanda Ramachandran, Dhavanesan; Ni, De Wei

    2015-01-01

    Constrained sintering of tubular bi-layered structures is being used in the development of various technologies. Densification mismatch between the layers making up the tubular bi-layer can generate stresses, which may create processing defects. An analytical model is presented to describe the densi... and thermo-mechanical analysis. Results from the analytical model are found to agree well with finite element simulations as well as measurements from a sintering experiment.

  12. Chemical equilibrium models of interstellar gas clouds

    International Nuclear Information System (INIS)

    Freeman, A.

    1982-10-01

    This thesis contains work which helps towards our understanding of the chemical processes and astrophysical conditions in interstellar clouds, across the whole range of cloud types. The object of the exercise is to construct a mathematical model representing a large system of two-body chemical reactions in order to deduce astrophysical parameters and predict molecular abundances and chemical pathways. Comparison with observations shows that this type of model is valid but also indicates that our knowledge of some chemical reactions is incomplete. (author)

  13. Constraining new physics models with isotope shift spectroscopy

    Science.gov (United States)

    Frugiuele, Claudia; Fuchs, Elina; Perez, Gilad; Schlaffer, Matthias

    2017-07-01

    Isotope shifts of transition frequencies in atoms constrain generic long- and intermediate-range interactions. We focus on new physics scenarios that can be most strongly constrained by King linearity violation, such as models with B−L vector bosons, the Higgs portal, and chameleon models. With the anticipated precision, King linearity violation has the potential to set the strongest laboratory bounds on these models in some regions of parameter space. Furthermore, we show that this method can probe the couplings relevant for the protophobic interpretation of the recently reported Be anomaly. We extend the formalism to include an arbitrary number of transitions and isotope pairs and fit the new physics coupling to the currently available isotope shift measurements.
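
    For orientation, King linearity refers to the standard-form linear relation between the modified isotope shifts of two transitions $i$ and $j$ across isotope pairs $(A, A')$; the notation below is generic and may differ in detail from the paper's conventions:

    ```latex
    \frac{\delta\nu_i^{AA'}}{\mu_{AA'}} \;=\; K_i \;+\; F_i\,\frac{\delta\nu_j^{AA'}}{\mu_{AA'}},
    \qquad \mu_{AA'} = \frac{1}{m_A} - \frac{1}{m_{A'}},
    ```

    where $K_i$ and $F_i$ are the mass- and field-shift constants. A new boson coupling to electrons and neutrons adds an isotope-dependent term that breaks this linearity, which is what the fits constrain.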

  14. Dark matter scenarios in a constrained model with Dirac gauginos

    CERN Document Server

    Goodsell, Mark D.; Müller, Tobias; Porod, Werner; Staub, Florian

    2015-01-01

    We perform the first analysis of Dark Matter scenarios in a constrained model with Dirac Gauginos. The model under investigation is the Constrained Minimal Dirac Gaugino Supersymmetric Standard model (CMDGSSM) where the Majorana mass terms of gauginos vanish. However, $R$-symmetry is broken in the Higgs sector by an explicit and/or effective $B_\mu$-term. This causes a mass splitting between Dirac states in the fermion sector, and the neutralinos, which provide the dark matter candidate, become pseudo-Dirac states. We discuss two scenarios: the universal case with all scalar masses unified at the GUT scale, and the case with non-universal Higgs soft-terms. We identify different regions in the parameter space which fulfil all constraints from the dark matter abundance, the limits from SUSY and direct dark matter searches, and the Higgs mass. Most of these points can be tested with the next generation of direct dark matter detection experiments.

  15. Determination of clouds in MSG data for the validation of clouds in a regional climate model

    OpenAIRE

    Huckle, Roger

    2009-01-01

    Regional climate models (e.g. CLM) can help to assess the influence of anthropogenic climate change on the different regions of the earth. Validation of these models is very important. Satellite data are of great benefit, as data are available on a global scale and at high temporal resolution. In this thesis, a cloud detection and object-based cloud classification for Meteosat Second Generation (MSG) was developed and used to validate CLM clouds. Results sometimes show too many clouds in the CLM.

  16. Parameterization of clouds and radiation in climate models

    Energy Technology Data Exchange (ETDEWEB)

    Roeckner, E. [Max Planck Institute for Meterology, Hamburg (Germany)

    1995-09-01

    Clouds are a very important, yet poorly modeled element in the climate system. There are many potential cloud feedbacks, including those related to cloud cover, height, water content, phase change, and droplet concentration and size distribution. As a prerequisite to studying the cloud feedback issue, this research reports on the simulation and validation of cloud radiative forcing under present climate conditions using the ECHAM general circulation model and ERBE top-of-atmosphere radiative fluxes.

  17. An Efficient Interactive Model for On-Demand Sensing-As-A-Services of Sensor-Cloud

    Directory of Open Access Journals (Sweden)

    Thanh Dinh

    2016-06-01

    This paper proposes an efficient interactive model for the sensor-cloud, enabling it to efficiently provide on-demand sensing services for multiple applications with different requirements at the same time. The interactive model is designed for both the cloud and sensor nodes to optimize the resource consumption of physical sensors, as well as the bandwidth consumption of sensing traffic. In the model, the sensor-cloud plays a key role in aggregating application requests to minimize the workloads required of constrained physical nodes while guaranteeing that the requirements of all applications are satisfied. Physical sensor nodes perform their sensing under the guidance of the sensor-cloud. Based on the interactions with the sensor-cloud, physical sensor nodes adapt their scheduling accordingly to minimize their energy consumption. Comprehensive experimental results show that our proposed system achieves a significant improvement in terms of the energy consumption of physical sensors, the bandwidth consumption from the sink node to the sensor-cloud, the packet delivery latency, reliability and scalability, compared to current approaches. Based on the obtained results, we discuss the economic benefits and how the proposed system enables a win-win model in the sensor-cloud.
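
    One way to picture the request-aggregation idea: if several applications ask for periodic samples from the same sensor, one physical sample can serve every application whose requested time coincides. The concrete policy below (merging periodic timestamps over a horizon) is a toy assumption, not the paper's algorithm:

    ```python
    def aggregate_schedule(periods, horizon):
        """Merge periodic sensing requests into one physical sampling schedule:
        a single sample at time t serves every application with t in its grid."""
        times = set()
        for p in periods:
            times.update(range(0, horizon + 1, p))
        return sorted(times)

    # two applications requesting periods 2 and 3 over a 12-unit horizon
    sched = aggregate_schedule([2, 3], horizon=12)
    # naive per-application sampling would take one sample per request
    naive = sum(len(range(0, 13, p)) for p in [2, 3])   # 12 samples
    # aggregated: [0, 2, 3, 4, 6, 8, 9, 10, 12] -> 9 samples instead of 12
    ```

    The energy saving grows with the overlap between application grids; the paper's model additionally adapts node scheduling to the aggregated plan.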

  18. Trust Model to Enhance Security and Interoperability of Cloud Environment

    Science.gov (United States)

    Li, Wenjuan; Ping, Lingdi

    Trust is one of the most important means to improve security and enable interoperability of current heterogeneous, independent cloud platforms. This paper first analyzes several trust models used in large, distributed environments and then introduces a novel cloud trust model to solve security issues in cross-cloud environments, in which cloud customers can choose different providers' services and resources in heterogeneous domains can cooperate. The model is domain-based. It divides one cloud provider's resource nodes into the same domain and sets a trust agent. It distinguishes two different roles, cloud customer and cloud server, and designs different strategies for them. In our model, trust recommendation is treated as one type of cloud service, just like computation or storage. The model achieves both identity authentication and behavior authentication. The results of emulation experiments show that the proposed model can efficiently and safely construct trust relationships in cross-cloud environments.

  19. Constrained convex minimization via model-based excessive gap

    OpenAIRE

    Tran Dinh, Quoc; Cevher, Volkan

    2014-01-01

    We introduce a model-based excessive gap technique to analyze first-order primal-dual methods for constrained convex minimization. As a result, we construct new primal-dual methods with optimal convergence rates on the objective residual and the primal feasibility gap of their iterates separately. Through a dual smoothing and prox-function selection strategy, our framework subsumes the augmented Lagrangian and alternating methods as special cases, where our rates apply.

  20. Business Process as a Service Model Based Business and IT Cloud Alignment as a Cloud Offering

    OpenAIRE

    Robert Woitsch; Wilfrid Utz

    2015-01-01

    Cloud computing has proved to offer flexible IT solutions. Although large enterprises may benefit from this technology, SMEs are falling behind in cloud usage due to missing IT competence and hence lose the ability to efficiently adapt their IT to their business needs. This paper introduces the project idea of the H2020 project CloudSocket, elaborating the idea of Business Processes as a Service (BPaaS), where concept models and semantics are applied to align business processes with Cloud deplo...

  1. Toward Cognitively Constrained Models of Language Processing: A Review

    Directory of Open Access Journals (Sweden)

    Margreet Vogelzang

    2017-09-01

    Language processing is not an isolated capacity, but is embedded in other aspects of our cognition. However, it is still largely unexplored to what extent and how language processing interacts with general cognitive resources. This question can be investigated with cognitively constrained computational models, which simulate the cognitive processes involved in language processing. The theoretical claims implemented in cognitive models interact with general architectural constraints such as memory limitations. In this way, such a model generates new predictions that can be tested in experiments, thus generating new data that can give rise to new theoretical insights. This theory-model-experiment cycle is a promising method for investigating aspects of language processing that are difficult to investigate with more traditional experimental techniques. This review specifically examines the language processing models of Lewis and Vasishth (2005), Reitter et al. (2011), and Van Rij et al. (2010), all implemented in the cognitive architecture Adaptive Control of Thought—Rational (Anderson et al., 2004). These models are all limited by the assumptions about cognitive capacities provided by the cognitive architecture, but use different linguistic approaches. Because of this, their comparison provides insight into the extent to which assumptions about general cognitive resources influence concretely implemented models of linguistic competence. For example, the sheer speed and accuracy of human language processing is a current challenge in the field of cognitive modeling, as it does not seem to adhere to the same memory and processing capacities that have been found in other cognitive processes. Architecture-based cognitive models of language processing may be able to make explicit which language-specific resources are needed to acquire and process natural language. The review sheds light on cognitively constrained models of language processing from two angles: we

  2. Thin Cloud Detection Method by Linear Combination Model of Cloud Image

    Science.gov (United States)

    Liu, L.; Li, J.; Wang, Y.; Xiao, Y.; Zhang, W.; Zhang, S.

    2018-04-01

    Existing cloud detection methods in photogrammetry often extract image features from remote sensing images directly and then use them to classify images as cloud or non-cloud. But when clouds are thin and small, these methods are inaccurate. In this paper, a linear combination model of cloud images is proposed; by using this model, the underlying surface information of remote sensing images can be removed, so the cloud detection result becomes more accurate. First, the automatic cloud detection program in this paper uses the linear combination model to split the cloud information and surface information in transparent cloud images, then uses different image features to recognize the cloud parts. In consideration of computational efficiency, an AdaBoost classifier was introduced to combine the different features into a cloud classifier. An AdaBoost classifier can select the most effective features from many candidate features, so the calculation time is largely reduced. Finally, we selected a cloud detection method based on a tree structure and a multiple-feature detection method using an SVM classifier to compare with the proposed method; the experimental data show that the proposed cloud detection program has high accuracy and fast calculation speed.
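
    The core idea, TOA reflectance minus modelled background surface reflectance, can be sketched as a residual threshold. The threshold value and toy reflectances below are assumptions for illustration; the actual method feeds such residual features into an AdaBoost classifier rather than a fixed cutoff:

    ```python
    def thin_cloud_mask(toa, background, threshold=0.05):
        """Flag pixels whose TOA reflectance exceeds the modelled background
        surface reflectance by more than `threshold` (residual ~ cloud signal)."""
        return [[1 if t - b > threshold else 0 for t, b in zip(row_t, row_b)]
                for row_t, row_b in zip(toa, background)]

    # toy 2x2 scene: TOA reflectance vs. BRDF-modelled surface reflectance
    toa = [[0.30, 0.12],
           [0.11, 0.45]]
    bg  = [[0.10, 0.11],
           [0.10, 0.10]]
    mask = thin_cloud_mask(toa, bg)   # -> [[1, 0], [0, 1]]
    ```

    A bright pixel over a bright surface yields a small residual and stays unmasked, which is why removing the background helps specifically with thin clouds.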

  3. A Few Expanding Integrable Models, Hamiltonian Structures and Constrained Flows

    International Nuclear Information System (INIS)

    Zhang Yufeng

    2011-01-01

    Two kinds of higher-dimensional Lie algebras and their loop algebras are introduced, from which a few expanding integrable models are obtained, including the coupling integrable couplings of the Broer-Kaup (BK) hierarchy, the dispersive long wave (DLW) hierarchy, and the TB hierarchy. From the reductions of the coupling integrable couplings, the corresponding coupled integrable couplings of the BK equation, the DLW equation, and the TB equation are obtained, respectively. In particular, the coupling integrable coupling of the TB equation reduces to a few integrable couplings of the well-known mKdV equation. The Hamiltonian structures of the coupling integrable couplings of the three kinds of soliton hierarchies are worked out, respectively, by employing the variational identity. Finally, we decompose the BK hierarchy of evolution equations into x-constrained flows and t_n-constrained flows, whose adjoint representations and Lax pairs are given. (general)

  4. Constraining mass-diameter relations from hydrometeor images and cloud radar reflectivities in tropical continental and oceanic convective anvils

    Science.gov (United States)

    Fontaine, E.; Schwarzenboeck, A.; Delanoë, J.; Wobrock, W.; Leroy, D.; Dupuy, R.; Gourbeyre, C.; Protat, A.

    2014-10-01

    In this study the density of ice hydrometeors in tropical clouds is derived from a combined analysis of particle images from 2-D array probes and associated reflectivities measured with a Doppler cloud radar on the same research aircraft. Usually, the mass-diameter m(D) relationship is formulated as a power law with two unknown coefficients (pre-factor, exponent) that need to be constrained from complementary information on hydrometeors, since absolute ice density measurement methods do not apply. Here, an extended theoretical study of numerous hydrometeor shapes simulated in 3-D and arbitrarily projected onto a 2-D plane first allowed us to constrain the exponent β of the m(D) relationship from the exponent σ of the surface-diameter S(D) relationship, which is likewise written as a power law. Since S(D) can always be determined for real data from 2-D optical array probes or other particle imagers, the evolution of the m(D) exponent can be calculated. Subsequently, the pre-factor α of m(D) is constrained from theoretical simulations of the radar reflectivities matching the measured reflectivities along the aircraft trajectory. The study was performed as part of the Megha-Tropiques satellite project, where two types of mesoscale convective systems (MCS) were investigated: (i) above the African continent and (ii) above the Indian Ocean. For the two data sets, two parameterizations are derived to calculate the vertical variability of the m(D) coefficients α and β as a function of temperature. The m(D) relationships originally calculated (with the T-matrix method) and subsequently parameterized in this study are compared to other methods (from the literature) of calculating m(D) in tropical convection. The significant benefit of using variable m(D) relations instead of a single m(D) relationship is demonstrated by the impact of all these m(D) relations on Z-CWC (Condensed Water Content) and Z-CWC-T fitted parameterizations.
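The two-step constraint described above (exponent from the projected-area power law, pre-factor from matching a measured reflectivity) can be illustrated with synthetic data. The numbers, the direct use of σ as β, and the Rayleigh-type reflectivity proxy (Z scaling with the sum of squared masses) are illustrative assumptions, not the paper's T-matrix computation.

```python
import numpy as np

def fit_power_law(x, y):
    """Least-squares fit of y = a * x**b in log-log space."""
    b, loga = np.polyfit(np.log(x), np.log(y), 1)
    return np.exp(loga), b

# --- synthetic hydrometeor population (illustrative values only) ---
rng = np.random.default_rng(0)
D = rng.uniform(0.1, 5.0, 1000)           # diameters, hypothetical units
sigma_true = 1.8
S = 0.4 * D**sigma_true                   # projected area S(D) from images

# Step 1: exponent of S(D); here beta is simply set to sigma as a
# stand-in for the paper's theoretical sigma -> beta mapping.
_, sigma = fit_power_law(D, S)
beta = sigma

# Step 2: constrain the pre-factor alpha from a measured reflectivity.
# Toy Rayleigh-type proxy: Z ~ sum of squared masses, so Z scales as alpha**2.
alpha_guess = 1.0
Z_measured = 250.0                        # hypothetical radar value
Z_model = np.sum((alpha_guess * D**beta) ** 2)
alpha = alpha_guess * np.sqrt(Z_measured / Z_model)
mass = alpha * D**beta                    # constrained m(D)
```

Because the proxy reflectivity scales as α², a single rescaling of the guessed pre-factor matches the measurement exactly; the real retrieval repeats this matching along the flight track.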

  5. Hard x-ray Morphological and Spectral Studies of the Galactic Center Molecular Cloud SGR B2: Constraining Past SGR A* Flaring Activity

    DEFF Research Database (Denmark)

    Zhang, Shuo; Hailey, Charles J.; Mori, Kaya

    2015-01-01

    In 2013, NuSTAR observed the Sgr B2 region and for the first time resolved its hard X-ray emission on subarcminute scales. Two prominent features are detected above 10 keV: a newly emerging cloud, G0.66-0.13, and the central 90″-radius region containing two compact cores, Sgr B2(M) and Sgr B2(N......), surrounded by diffuse emission. It is inconclusive whether the remaining level of Sgr B2 emission is still decreasing or has reached a constant background level. A decreasing X-ray emission can be best explained by the X-ray reflection nebula scenario, in which the cloud reprocesses a past giant outburst from...... Sgr A*. In the X-ray reflection nebula (XRN) scenario, the 3-79 keV Sgr B2 spectrum allows us to self-consistently test the XRN model using both the Fe Kα line and the continuum emission. The peak luminosity of the past Sgr A* outburst is constrained to L_3-79 keV ∼ 5 × 10^38 erg s^-1. A newly...

  6. FIRST PRISMATIC BUILDING MODEL RECONSTRUCTION FROM TOMOSAR POINT CLOUDS

    Directory of Open Access Journals (Sweden)

    Y. Sun

    2016-06-01

    Full Text Available This paper demonstrates for the first time the potential of explicitly modelling the individual roof surfaces to reconstruct 3-D prismatic building models using spaceborne tomographic synthetic aperture radar (TomoSAR) point clouds. The proposed approach is modular and works as follows: it first extracts the buildings via DSM generation and cutting off the ground terrain. The DSM is smoothed using the BM3D denoising method proposed in (Dabov et al., 2007) and a gradient map of the smoothed DSM is generated based on height jumps. Watershed segmentation is then adopted to oversegment the DSM into different regions. Subsequently, height- and polygon-complexity-constrained merging is employed to refine (i.e., to reduce) the retrieved number of roof segments. A coarse outline of each roof segment is then reconstructed and later refined using a quadtree-based regularization plus a zig-zag line simplification scheme. Finally, a height is associated with each refined roof segment to obtain the 3-D prismatic model of the building. The proposed approach is illustrated and validated over a large building (convention center) in the city of Las Vegas using TomoSAR point clouds generated from a stack of 25 images with the Tomo-GENESIS software developed at DLR.

  7. Observational Constraints for Modeling Diffuse Molecular Clouds

    Science.gov (United States)

    Federman, S. R.

    2014-02-01

    Ground-based and space-borne observations of diffuse molecular clouds suggest a number of areas where further improvements to modeling efforts are warranted. I will highlight those that have the widest applicability. The range in CO fractionation caused by selective isotope photodissociation, in particular the large 12C16O/13C16O ratios observed toward stars in Ophiuchus, is not reproduced well by current models. Our ongoing laboratory measurements of oscillator strengths and predissociation rates for Rydberg transitions in CO isotopologues may help clarify the situation. The CH+ abundance continues to draw attention. Small-scale structure seen toward ζ Per may provide additional constraints on the possible synthesis routes. The connection between results from optical transitions and those from radio and sub-millimeter wave transitions requires further effort. A study of OH+ and OH toward background stars reveals that these species favor different environments. This brings into focus the need to model each cloud along the line of sight separately, and to allow the physical conditions to vary within an individual cloud, in order to gain further insight into the chemistry. Now that an extensive set of data on molecular excitation is available, the models should seek to reproduce these data to place further constraints on the modeling results.

  8. Petri net modeling of encrypted information flow in federated cloud

    Science.gov (United States)

    Khushk, Abdul Rauf; Li, Xiaozhong

    2017-08-01

    Solutions proposed and developed for cost-effective cloud systems typically combine secure private clouds with less secure public clouds. The need to locate applications within different clouds poses a security risk to the information flow of the entire system. This study addresses the risk by assigning security levels from a given lattice to the entities of a federated cloud system. A dynamic, flow-sensitive security model featuring Bell-LaPadula procedures is explored that tracks and authenticates secure information flow in federated clouds. Additionally, a Petri net model is considered as a case study to represent the proposed system and further validate its performance.
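A minimal sketch of the lattice-based "no read up / no write down" rules behind a Bell-LaPadula check, assuming a simple totally ordered set of levels (the level names are hypothetical, not taken from the paper):

```python
# Minimal Bell-LaPadula-style check over a totally ordered lattice of
# security levels (labels are illustrative only).
LEVELS = {"public": 0, "internal": 1, "confidential": 2, "secret": 3}

def can_read(subject_level, object_level):
    """Simple security property ('no read up'):
    a subject may read only objects at or below its own level."""
    return LEVELS[subject_level] >= LEVELS[object_level]

def can_write(subject_level, object_level):
    """*-property ('no write down'):
    a subject may write only to objects at or above its own level."""
    return LEVELS[subject_level] <= LEVELS[object_level]
```

In the federated-cloud setting, a task running in a private cloud at "confidential" could read "internal" data from a public cloud but would be barred from writing results back into a "public" store, which is the kind of flow the tracking model must detect.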

  9. Shallow layer modelling of dense gas clouds

    Energy Technology Data Exchange (ETDEWEB)

    Ott, S.; Nielsen, M.

    1996-11-01

    The motivation for making shallow layer models is that they can deal with the dynamics of gravity-driven flow in complex terrain at a modest computational cost compared to 3-D codes. The main disadvantage is that the air-cloud interactions still have to be added 'by hand', whereas 3-D models inherit the correct dynamics from the fundamental equations. The properties of the inviscid shallow water equations are discussed, focusing on the existence and uniqueness of solutions. It is demonstrated that breaking waves and fronts pose severe problems, which can only be overcome if the hydrostatic approximation is given up and internal friction is added to the model. A set of layer-integrated equations is derived starting from the Navier-Stokes equations. The various steps in the derivation are accompanied by plausibility arguments; these form the scientific basis of the model. The principle of least action is introduced as a means of generating consistent models, and as a tool for making discrete equations for numerical models that automatically obey conservation laws. A numerical model called SLAM (Shallow LAyer Model) is presented. SLAM has some distinct features compared to other shallow layer models: a Lagrangian, moving grid; an explicit account of the turbulent kinetic energy budget; an entrainment rate estimated from the local turbulent kinetic energy; non-hydrostatic pressure; and numerical methods that respect conservation laws even on coarse grids. Thorney Island trial 8 is used as a reference case for model tuning. The model reproduces the doughnut shape of the cloud and yields concentrations in reasonable agreement with observations, even when a small number of cells (e.g. 16) is used. It is concluded that lateral exchange of matter within the cloud caused by shear is important, and that the model should be improved on this point. (au) 16 ills., 38 refs.
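The layer-integrated equations themselves are not reproduced in this record. As a hedged sketch, a standard single-layer shallow-water system with entrainment, of the kind such dense-gas models start from (not necessarily SLAM's exact equations, which add turbulent-energy, friction, and non-hydrostatic pressure terms), reads:

```latex
\frac{\partial h}{\partial t} + \nabla \cdot (h\,\mathbf{u}) = w_e, \qquad
\frac{\partial (h\,\mathbf{u})}{\partial t}
  + \nabla \cdot (h\,\mathbf{u}\otimes\mathbf{u})
  + \nabla\!\left(\tfrac{1}{2}\, g' h^{2}\right)
  = -\,g' h\, \nabla z_b, \qquad
g' = g\,\frac{\rho_c - \rho_a}{\rho_a},
```

where h is the layer depth, u the layer-averaged velocity, w_e the entrainment velocity (in SLAM estimated from the local turbulent kinetic energy), z_b the terrain height, and g' the reduced gravity of the dense cloud relative to ambient air.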

  10. Constraining viscous dark energy models with the latest cosmological data

    Science.gov (United States)

    Wang, Deng; Yan, Yang-Jie; Meng, Xin-He

    2017-10-01

    Based on the assumption that dark energy possessing bulk viscosity permeates the universe homogeneously and isotropically, we propose three new viscous dark energy (VDE) models to characterize the accelerating universe. By constraining these three models with the latest cosmological observations, we find that they deviate only very slightly from the standard cosmological model and can effectively alleviate the current H_0 tension between the local observation by the Hubble Space Telescope and the global measurement by the Planck satellite. Interestingly, we conclude that a spatially flat universe in our VDE model with cosmic curvature is still supported by current data, and that the scale-invariant primordial power spectrum is strongly excluded, at least at the 5.5σ confidence level, in the three VDE models, consistent with the Planck result. We also give the 95% upper limits of the typical bulk viscosity parameter η in the three VDE scenarios.

  11. Constraining viscous dark energy models with the latest cosmological data

    Energy Technology Data Exchange (ETDEWEB)

    Wang, Deng [Nankai University, Theoretical Physics Division, Chern Institute of Mathematics, Tianjin (China)]; Yan, Yang-Jie; Meng, Xin-He [Nankai University, Department of Physics, Tianjin (China)]

    2017-10-15

    Based on the assumption that dark energy possessing bulk viscosity permeates the universe homogeneously and isotropically, we propose three new viscous dark energy (VDE) models to characterize the accelerating universe. By constraining these three models with the latest cosmological observations, we find that they deviate only very slightly from the standard cosmological model and can effectively alleviate the current H_0 tension between the local observation by the Hubble Space Telescope and the global measurement by the Planck satellite. Interestingly, we conclude that a spatially flat universe in our VDE model with cosmic curvature is still supported by current data, and that the scale-invariant primordial power spectrum is strongly excluded, at least at the 5.5σ confidence level, in the three VDE models, consistent with the Planck result. We also give the 95% upper limits of the typical bulk viscosity parameter η in the three VDE scenarios. (orig.)

  12. Mesoscale meteorological model based on radioactive explosion cloud simulation

    International Nuclear Information System (INIS)

    Zheng Yi; Zhang Yan; Ying Chuntong

    2008-01-01

    In order to simulate the movement and concentration distribution of radioactive clouds from nuclear explosions and dirty bombs, the mesoscale meteorological model RAMS was used. Particle size, the distribution of activity over particle sizes, and gravitational fallout in the cloud were considered. The results show that the model can simulate the 'mushroom' cloud of an explosion. A three-dimensional flow field and a radioactive concentration field were obtained. (authors)

  13. Modeling and Security in Cloud Ecosystems

    Directory of Open Access Journals (Sweden)

    Eduardo B. Fernandez

    2016-04-01

    Full Text Available Clouds do not work in isolation but interact with other clouds and with a variety of systems, developed either by the same provider or by external entities, with the purpose of interacting with them, thus forming an ecosystem. A software ecosystem is a collection of software systems that have been developed to coexist and evolve together. The stakeholders of such a system need a variety of models to give them a perspective of the possibilities of the system, to evaluate specific quality attributes, and to extend the system. A powerful representation when building or using software ecosystems is the use of architectural models, which describe the structural aspects of such a system. These models have value for security and compliance, are useful for building new systems, and can be used to define service contracts, find where quality factors can be monitored, and plan further expansion. We have described a cloud ecosystem in the form of a pattern diagram in which its components are patterns and reference architectures. A pattern is an encapsulated solution to a recurrent problem. We have recently expanded these models to cover fog systems and containers. Fog Computing is a highly virtualized platform that provides compute, storage, and networking services between end devices and Cloud Computing data centers; a Software Container provides an execution environment for applications sharing a host operating system, binaries, and libraries with other containers. We intend to use this architecture to answer a variety of questions about the security of this system, as well as a reference to design interacting combinations of heterogeneous components. We defined a metamodel to relate security concepts, which is being expanded.

  14. Cloud Shade by Dynamic Logistic Modeling

    Czech Academy of Sciences Publication Activity Database

    Brabec, Marek; Badescu, V.; Paulescu, M.

    2014-01-01

    Roč. 41, č. 6 (2014), s. 1174-1188 ISSN 0266-4763 R&D Projects: GA MŠk LD12009 Grant - others:European Cooperation in Science and Technology(XE) COST ES1002 Institutional support: RVO:67985807 Keywords : clouds * random process * sunshine number * Markovian logistic regression model Subject RIV: BB - Applied Statistics, Operational Research Impact factor: 0.417, year: 2014

  15. Prediction of cloud droplet number in a general circulation model

    Energy Technology Data Exchange (ETDEWEB)

    Ghan, S.J.; Leung, L.R. [Pacific Northwest National Lab., Richland, WA (United States)]

    1996-04-01

    We have applied the Colorado State University Regional Atmospheric Modeling System (RAMS) bulk cloud microphysics parameterization to the treatment of stratiform clouds in the National Center for Atmospheric Research Community Climate Model (CCM2). RAMS predicts the mass concentrations of cloud water, cloud ice, rain, and snow, and the number concentration of ice. We have introduced the droplet number conservation equation to predict droplet number and its dependence on aerosols.
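The droplet number conservation equation is not spelled out in the abstract. A schematic budget of the kind used in such bulk schemes (symbols and sink terms here are generic, not necessarily CCM2's exact formulation) is:

```latex
\frac{\partial N_d}{\partial t} + \nabla \cdot (\mathbf{u}\, N_d)
  = S_{\mathrm{act}}(N_a, w)
  - L_{\mathrm{auto}} - L_{\mathrm{accr}} - L_{\mathrm{evap}},
```

where the activation source S_act depends on the aerosol number concentration N_a and the updraft speed w, and the L terms are sinks from autoconversion, accretion, and evaporation; the aerosol dependence of droplet number enters through the activation term.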

  16. A stochastic cloud model for cloud and ozone retrievals from UV measurements

    International Nuclear Information System (INIS)

    Efremenko, Dmitry S.; Schüssler, Olena; Doicu, Adrian; Loyola, Diego

    2016-01-01

    The new generation of satellite instruments provides measurements in and around the oxygen A-band on a global basis and with a relatively high spatial resolution. These data are commonly used for the determination of cloud properties. A stochastic cloud model and a radiative transfer model, previously developed by the authors, are used as the forward model component in retrievals of cloud parameters and ozone total and partial columns. The cloud retrieval algorithm combines local and global optimization routines, and yields a retrieval accuracy of about 1% at a fast computational speed. Retrieved parameters are the cloud optical thickness and the cloud-top height. It was found that the use of the independent pixel approximation instead of the stochastic cloud model leads to large errors in the retrieved cloud parameters, as well as in the retrieved height-resolved ozone partial columns. The latter can be reduced by using the stochastic cloud model to compute the optimal value of the regularization parameter in the framework of Tikhonov regularization. - Highlights: • A stochastic radiative transfer model for retrieving clouds/ozone is designed. • Errors of the independent pixel approximation (IPA) for the O3 total column are small. • The error of the IPA for ozone profile retrieval may become large. • The use of the stochastic model reduces the error of ozone profile retrieval.
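Tikhonov regularization, the retrieval framework named in the abstract, replaces an ill-conditioned least-squares inversion with a damped one. A minimal sketch with a toy ill-conditioned forward operator (a stand-in for the actual radiative transfer operator mapping an ozone profile to radiances):

```python
import numpy as np

def tikhonov_solve(A, y, lam):
    """Regularized least squares: argmin ||A x - y||^2 + lam * ||x||^2,
    solved via the normal equations (A^T A + lam I) x = A^T y."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ y)

# Toy ill-conditioned forward model: a polynomial (Vandermonde) operator.
rng = np.random.default_rng(1)
A = np.vander(np.linspace(0.0, 1.0, 40), 8, increasing=True)
x_true = rng.normal(size=8)
y = A @ x_true + 1e-3 * rng.normal(size=40)   # noisy measurements

x_reg = tikhonov_solve(A, y, lam=1e-6)
```

Increasing the regularization parameter shrinks the solution norm at the cost of a larger data residual; picking the optimal trade-off is exactly where the abstract says the stochastic cloud model helps.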

  17. Volcano and ship tracks indicate excessive aerosol-induced cloud water increases in a climate model.

    Science.gov (United States)

    Toll, Velle; Christensen, Matthew; Gassó, Santiago; Bellouin, Nicolas

    2017-12-28

    Aerosol-cloud interaction is the most uncertain mechanism of anthropogenic radiative forcing of Earth's climate, and aerosol-induced cloud water changes are particularly poorly constrained in climate models. By combining satellite retrievals of volcano and ship tracks in stratocumulus clouds, we compile a unique observational dataset and confirm that liquid water path (LWP) responses to aerosols are bidirectional, and on average the increases in LWP are closely compensated by the decreases. Moreover, the meteorological parameters controlling the LWP responses are strikingly similar between the volcano and ship tracks. In stark contrast to observations, there are substantial unidirectional increases in LWP in the Hadley Centre climate model, because the model accounts only for the decreased precipitation efficiency and not for the enhanced entrainment drying. If the LWP increases in the model were compensated by the decreases as the observations suggest, its indirect aerosol radiative forcing in stratocumulus regions would decrease by 45%.

  18. An inexact fuzzy-chance-constrained air quality management model.

    Science.gov (United States)

    Xu, Ye; Huang, Guohe; Qin, Xiaosheng

    2010-07-01

    Regional air pollution is a major concern for almost every country because it not only directly relates to economic development, but also poses significant threats to the environment and public health. In this study, an inexact fuzzy-chance-constrained air quality management model (IFAMM) was developed for regional air quality management under uncertainty. IFAMM was formulated by integrating interval linear programming (ILP) within a fuzzy-chance-constrained programming (FCCP) framework and can deal with uncertainties expressed as both possibilistic distributions and discrete intervals in air quality management systems. Moreover, the constraints with fuzzy variables can be satisfied at different confidence levels, such that various solutions with different risk and cost considerations can be obtained. The developed model was applied to a hypothetical case of regional air quality management. Six abatement technologies and sulfur dioxide (SO2) emission trading under uncertainty were taken into consideration. The results demonstrated that IFAMM can help decision-makers generate cost-effective air quality management patterns, gain in-depth insight into the effects of the uncertainties, and analyze tradeoffs between system economy and reliability. The results also implied that the trading scheme can achieve a lower total abatement cost than a nontrading one.
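One common way a fuzzy chance constraint is made crisp is a possibility-measure transformation for triangular fuzzy numbers: the requirement Pos{a·x ≤ b} ≥ λ becomes a deterministic bound on a·x. This sketch illustrates that transformation only; the paper's actual FCCP formulation may differ.

```python
def pos_leq(t, b):
    """Possibility that t <= b for a triangular fuzzy number b = (b1, b2, b3)
    with support [b1, b3] and peak at b2."""
    b1, b2, b3 = b
    if t <= b2:
        return 1.0
    if t >= b3:
        return 0.0
    return (b3 - t) / (b3 - b2)

def crisp_bound(b, lam):
    """Deterministic equivalent of Pos{a.x <= b} >= lam:
    a.x <= b3 - lam * (b3 - b2). Higher confidence lam => tighter bound."""
    b1, b2, b3 = b
    return b3 - lam * (b3 - b2)
```

For example, with a fuzzy emission cap b = (80, 100, 120) tons, requiring 90% confidence tightens the allowed loading to 102 tons, and 95% confidence tightens it further to 101; this is the risk-versus-cost dial the abstract describes.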

  19. A stratiform cloud parameterization for general circulation models

    International Nuclear Information System (INIS)

    Ghan, S.J.; Leung, L.R.; Chuang, C.C.; Penner, J.E.; McCaa, J.

    1994-01-01

    The crude treatment of clouds in general circulation models (GCMs) is widely recognized as a major limitation in applying these models to predictions of global climate change. The purpose of this project is to develop for GCMs a stratiform cloud parameterization that expresses clouds in terms of bulk microphysical properties and their subgrid variability. Various cloud variables and their interactions are summarized. Precipitating cloud species are distinguished from non-precipitating species, and the liquid phase is distinguished from the ice phase. The size of the non-precipitating cloud particles (which influences both the cloud radiative properties and the conversion of non-precipitating cloud species to precipitating species) is determined by predicting both the mass and number concentrations of each species.

  20. Security Issues Model on Cloud Computing: A Case of Malaysia

    OpenAIRE

    Komeil Raisian; Jamaiah Yahaya

    2015-01-01

    With the development of cloud computing, many people's views of infrastructure architectures, software distribution, and improvement models have changed significantly. Cloud computing is associated with a pioneering deployment architecture that builds on grid computing, utility computing, and autonomic computing. The fast transition towards it has increased worries regarding a critical issue for the effective transition of cloud computing. From the security v...

  1. Variability in modeled cloud feedback tied to differences in the climatological spatial pattern of clouds

    Science.gov (United States)

    Siler, Nicholas; Po-Chedley, Stephen; Bretherton, Christopher S.

    2018-02-01

    Despite the increasing sophistication of climate models, the amount of surface warming expected from a doubling of atmospheric CO2 (equilibrium climate sensitivity) remains stubbornly uncertain, in part because of differences in how models simulate the change in global albedo due to clouds (the shortwave cloud feedback). Here, model differences in the shortwave cloud feedback are found to be closely related to the spatial pattern of the cloud contribution to albedo (α) in simulations of the current climate: high-feedback models exhibit lower (higher) α in regions of warm (cool) sea-surface temperatures, and therefore predict a larger reduction in global-mean α as temperatures rise and warm regions expand. The spatial pattern of α is found to be strongly predictive (r = 0.84) of a model's global cloud feedback, with satellite observations indicating a most-likely value of 0.58 ± 0.31 W m^-2 K^-1 (90% confidence). This estimate is higher than the model-average cloud feedback of 0.43 W m^-2 K^-1, with half the range of uncertainty. The observational constraint on climate sensitivity is weaker but still significant, suggesting a likely value of 3.68 ± 1.30 K (90% confidence), which also favors the upper range of model estimates. These results suggest that uncertainty in model estimates of the global cloud feedback may be substantially reduced by ensuring a realistic distribution of clouds between regions of warm and cool SSTs in simulations of the current climate.

  2. Treatment of cloud radiative effects in general circulation models

    Energy Technology Data Exchange (ETDEWEB)

    Wang, W.C.; Dudek, M.P.; Liang, X.Z.; Ding, M. [State Univ. of New York, Albany, NY (United States)] [and others]

    1996-04-01

    We participate in the Atmospheric Radiation Measurement (ARM) program with two objectives: (1) to improve the general circulation model (GCM) cloud/radiation treatment, with a focus on cloud vertical overlap and layer cloud optical properties, and (2) to study the effects of cloud/radiation-climate interaction on GCM climate simulations. This report summarizes the project progress since the Fourth ARM Science Team meeting, February 28-March 4, 1994, in Charleston, South Carolina.

  3. Model of E-Cloud Instability in the Fermilab Recycler

    Energy Technology Data Exchange (ETDEWEB)

    Balbekov, V. [Fermi National Accelerator Lab. (FNAL), Batavia, IL (United States)]

    2015-06-24

    A simple model of the electron cloud is developed in this paper to explain the e-cloud instability of a bunched proton beam in the Fermilab Recycler. The cloud is represented as an immobile snake in a strong vertical magnetic field. The instability is treated as an amplification of the bunch injection errors from the batch head to its tail. The nonlinearity of the e-cloud field is taken into account. Results of calculations are compared with experimental data, demonstrating good correlation.

  4. Model Based Business and IT Cloud Alignment as a Cloud Offering

    OpenAIRE

    Robert Woitsch; Wilfrid Utz

    2015-01-01

    Cloud computing has proved to offer flexible IT solutions. Although large enterprises may benefit from this technology by educating their IT departments, SMEs are falling dramatically behind in cloud usage and hence lose the ability to adapt their IT efficiently to their business needs. This paper introduces the idea of the H2020 project CloudSocket, elaborating the concept of Business Processes as a Service, where concept models and semantics are applied to align business pro...

  5. A modeling perspective on cloud radiative forcing

    International Nuclear Information System (INIS)

    Potter, G.L.; Corsetti, L.; Slingo, J.M.

    1993-02-01

    Radiation fields from a perpetual-July integration of a T106 version of the ECMWF operational model are used to identify the most appropriate way to diagnose cloud radiative forcing in a general circulation model for the purposes of intercomparison between models. Differences between Methods I and II of Cess and Potter (1987) and a variant method are addressed. Method I is shown to be the least robust of all the methods, due to potential uncertainties related to persistent cloudiness, the length of the sampling period, and biases in retrieved clear-sky quantities caused by insufficient sampling of the diurnal cycle. Method II is proposed as an unambiguous way to produce consistent radiative diagnostics for intercomparing model results. The impact of the three methods on the derived sensitivities and cloud feedbacks following an imposed change in sea surface temperature is discussed. The sensitivity of the results to horizontal resolution is considered using the diagnostics from parallel integrations with a T21 version of the model.

  6. A model of the microphysical evolution of a cloud

    International Nuclear Information System (INIS)

    Zinn, J.

    1994-01-01

    The earth's weather and climate are strongly influenced by phenomena associated with clouds. Therefore, a general circulation model (GCM) that models the evolution of weather and climate must include an accurate physical model of the clouds. This paper describes efforts to develop a suitable cloud model. It concentrates on the microphysical processes that determine the evolution of droplet and ice crystal size distributions, precipitation rates, total and condensed water content, and radiative extinction coefficients.

  7. MODELING DUST IN THE MAGELLANIC CLOUDS

    Energy Technology Data Exchange (ETDEWEB)

    Zonca, Alberto; Casu, Silvia; Mulas, Giacomo; Aresu, Giambattista [INAF—Osservatorio Astronomico di Cagliari, Via della Scienza 5, I-09047 Selargius (Italy)]; Cecchi-Pestellini, Cesare, E-mail: azonca@oa-cagliari.inaf.it, E-mail: silvia@oa-cagliari.inaf.it, E-mail: gmulas@oa-cagliari.inaf.it, E-mail: garesu@oa-cagliari.inaf.it, E-mail: cecchi-pestellini@astropa.inaf.it [INAF—Osservatorio Astronomico di Palermo, P.za Parlamento 1, I-90134 Palermo (Italy)]

    2015-09-01

    We model the extinction profiles observed in the Small and Large Magellanic Clouds with a synthetic population of dust grains consisting of core-mantle particles and a collection of free-flying polycyclic aromatic hydrocarbons (PAHs). All the different flavors of the extinction curves observed in the Magellanic Clouds (MCs) can be described by the present model, which has previously been applied (successfully) to a large sample of diffuse and translucent lines of sight in the Milky Way. We find that in the MCs the extinction produced by classical grains is generally larger than the absorption by PAHs. Within this model, the nonlinear far-UV rise is accounted for by PAHs, whose presence in turn is always associated with a gap in the size distribution of classical particles. This hints either at a physical connection between (e.g., a common cause for) PAHs and the absence of middle-sized dust particles, or at the need for an additional component in the model that can account for the nonlinear far-UV rise without contributing to the UV bump at ∼217 nm, such as, e.g., nanodiamonds.

  8. MODELING DUST IN THE MAGELLANIC CLOUDS

    International Nuclear Information System (INIS)

    Zonca, Alberto; Casu, Silvia; Mulas, Giacomo; Aresu, Giambattista; Cecchi-Pestellini, Cesare

    2015-01-01

    We model the extinction profiles observed in the Small and Large Magellanic Clouds with a synthetic population of dust grains consisting of core-mantle particles and a collection of free-flying polycyclic aromatic hydrocarbons (PAHs). All the different flavors of the extinction curves observed in the Magellanic Clouds (MCs) can be described by the present model, which has previously been applied (successfully) to a large sample of diffuse and translucent lines of sight in the Milky Way. We find that in the MCs the extinction produced by classical grains is generally larger than the absorption by PAHs. Within this model, the nonlinear far-UV rise is accounted for by PAHs, whose presence in turn is always associated with a gap in the size distribution of classical particles. This hints either at a physical connection between (e.g., a common cause for) PAHs and the absence of middle-sized dust particles, or at the need for an additional component in the model that can account for the nonlinear far-UV rise without contributing to the UV bump at ∼217 nm, such as, e.g., nanodiamonds.

  9. Analysis of albedo versus cloud fraction relationships in liquid water clouds using heuristic models and large eddy simulation

    Science.gov (United States)

    Feingold, Graham; Balsells, Joseph; Glassmeier, Franziska; Yamaguchi, Takanobu; Kazil, Jan; McComiskey, Allison

    2017-07-01

    The relationship between the albedo of a cloudy scene A and cloud fraction fc is studied with the aid of heuristic models of stratocumulus and cumulus clouds. Existing work has shown that scene albedo increases monotonically with increasing cloud fraction but that the relationship varies from linear to superlinear. The reasons for these differences in functional dependence are traced to the relationship between cloud deepening and cloud widening. When clouds deepen with no significant increase in fc (e.g., in solid stratocumulus), the relationship between A and fc is linear. When clouds widen as they deepen, as in cumulus cloud fields, the relationship is superlinear. A simple heuristic model of a cumulus cloud field with a power law size distribution shows that the superlinear A-fc behavior is traced out either through random variation in cloud size distribution parameters or as the cloud field oscillates between a relative abundance of small clouds (steep slopes on a log-log plot) and a relative abundance of large clouds (flat slopes). Oscillations of this kind manifest in large eddy simulation of trade wind cumulus where the slope and intercept of the power law fit to the cloud size distribution are highly correlated. Further analysis of the large eddy model-generated cloud fields suggests that cumulus clouds grow larger and deeper as their underlying plumes aggregate; this is followed by breakup of large plumes and a tendency to smaller clouds. The cloud and thermal size distributions oscillate back and forth approximately in unison.
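The linear-versus-superlinear distinction described above can be reproduced with a toy scene-albedo calculation: if per-cloud albedo is fixed (deepening without widening), A is exactly linear in fc; if clouds brighten as they widen, a field of larger clouds is brighter at the same cloud fraction. All constants and the per-cloud albedo function here are illustrative assumptions, not the paper's heuristic model.

```python
import numpy as np

A_CLEAR = 0.07          # clear-sky albedo (illustrative)
DOMAIN = 1.0e4          # domain area, arbitrary units

def cloud_albedo(L, deepen=True):
    """Toy per-cloud albedo. If clouds deepen as they widen, albedo
    grows with horizontal size L; otherwise it is fixed."""
    return 0.8 * L / (L + 2.0) if deepen else 0.5

def scene_albedo(sizes, deepen=True):
    """Cloud fraction and area-weighted scene albedo for square clouds
    of side L drawn from `sizes`."""
    areas = sizes**2
    fc = areas.sum() / DOMAIN                                   # cloud fraction
    a_cloud = (areas * cloud_albedo(sizes, deepen)).sum() / DOMAIN
    return fc, A_CLEAR * (1.0 - fc) + a_cloud

# Heavy-tailed (power-law-like) cloud size distribution, echoing the
# cumulus heuristic model in the abstract.
rng = np.random.default_rng(2)
sizes = 1.0 / rng.uniform(0.05, 1.0, 200)
fc, A = scene_albedo(sizes)
```

Comparing a field of many small clouds with a field of fewer large clouds at identical fc makes the superlinearity mechanism explicit: the large-cloud field is deeper, hence brighter, so A grows faster than fc as the size distribution shifts toward large clouds.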

  10. A Reputation-Based Identity Management Model for Cloud Computing

    Directory of Open Access Journals (Sweden)

    Lifa Wu

    2015-01-01

    Full Text Available In the field of cloud computing, most research on identity management has concentrated on protecting user data. However, users typically leave a trail when they access cloud services, and the resulting user traceability can potentially lead to the leakage of sensitive user information. Meanwhile, malicious users can do harm to cloud providers through the use of pseudonyms. To solve these problems, we introduce a reputation mechanism and design a reputation-based identity management model for cloud computing. In the model, pseudonyms are generated based on a reputation signature so as to guarantee the untraceability of pseudonyms, and a mechanism that calculates user reputation is proposed, which helps cloud service providers to identify malicious users. Analysis verifies that the model can ensure that users access cloud services anonymously and that cloud providers assess the credibility of users effectively without violating user privacy.

  11. A Model for Comparing Free Cloud Platforms

    Directory of Open Access Journals (Sweden)

    Radu LIXANDROIU

    2014-01-01

    Full Text Available VMware, VirtualBox, Virtual PC and other popular desktop virtualization applications are used by only a small fraction of IT users. This article attempts to build a comparison model for choosing the best cloud platform. Many virtualization applications such as VMware (VMware Player), Oracle VirtualBox and Microsoft Virtual PC are free for home users. The main goal of virtualization software is to allow users to run multiple operating systems simultaneously in one virtual environment, using one desktop computer.

  12. Maximizing entropy of image models for 2-D constrained coding

    DEFF Research Database (Denmark)

    Forchhammer, Søren; Danieli, Matteo; Burini, Nino

    2010-01-01

    This paper considers estimating and maximizing the entropy of two-dimensional (2-D) fields with application to 2-D constrained coding. We consider Markov random fields (MRF), which have a non-causal description, and the special case of Pickard random fields (PRF). The PRF are 2-D causal finite context models, which define stationary probability distributions on finite rectangles and thus allow for calculation of the entropy. We consider two binary constraints: we revisit the hard square constraint, given by forbidding neighboring 1s, and provide novel results for the constraint that no uniform 2 × 2 square contains all 0s or all 1s. The maximum values of the entropy for the constraints are estimated, and binary PRF satisfying the constraint are characterized and optimized w.r.t. the entropy. The maximum binary PRF entropy is 0.839 bits/symbol for the no uniform squares constraint. The entropy...
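    For the hard square constraint mentioned above (no neighboring 1s), the 2-D capacity can be estimated numerically with a standard transfer-matrix computation over strips of increasing width; this is a generic illustration of the technique, not the paper's PRF optimization (which targets a different constraint and yields 0.839 bits/symbol):

    ```python
    import numpy as np

    def strip_eig(n):
        """Largest eigenvalue of the row-to-row transfer matrix for the
        hard square constraint (no two adjacent 1s horizontally or
        vertically) on a strip of width n with free boundaries."""
        rows = [r for r in range(1 << n) if r & (r >> 1) == 0]  # no adjacent 1s within a row
        T = np.array([[1.0 if a & b == 0 else 0.0 for b in rows] for a in rows])
        return float(np.max(np.linalg.eigvalsh(T)))  # T is symmetric

    # The ratio of consecutive strip eigenvalues converges quickly to
    # 2**capacity; its log2 estimates the 2-D entropy in bits/symbol.
    est = np.log2(strip_eig(10) / strip_eig(9))
    print(est)  # close to the known hard-square capacity of ~0.5879
    ```

    The number of admissible rows grows only like a Fibonacci number, so widths of 10 or more are cheap to handle exactly.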

  13. Gluon field strength correlation functions within a constrained instanton model

    International Nuclear Information System (INIS)

    Dorokhov, A.E.; Esaibegyan, S.V.; Maximov, A.E.; Mikhailov, S.V.

    2000-01-01

    We suggest a constrained instanton (CI) solution in the physical QCD vacuum which is described by large-scale vacuum field fluctuations. This solution decays exponentially at large distances. It is stable only if the interaction of the instanton with the background vacuum field is small and additional constraints are introduced. The CI solution is explicitly constructed in the ansatz form, and the two-point vacuum correlator of the gluon field strengths is calculated in the framework of the effective instanton vacuum model. At small distances the results are qualitatively similar to the single instanton case; in particular, the D₁ invariant structure is small, which is in agreement with the lattice calculations. (orig.)

  14. Bilevel Fuzzy Chance Constrained Hospital Outpatient Appointment Scheduling Model

    Directory of Open Access Journals (Sweden)

    Xiaoyang Zhou

    2016-01-01

    Full Text Available Hospital outpatient departments operate by selling fixed period appointments for different treatments. The challenge being faced is to improve profit by determining the mix of full time and part time doctors and allocating appointments (which involves scheduling a combination of doctors, patients, and treatments to a time period in a department optimally. In this paper, a bilevel fuzzy chance constrained model is developed to solve the hospital outpatient appointment scheduling problem based on revenue management. In the model, the hospital, the leader in the hierarchy, decides the mix of the hired full time and part time doctors to maximize the total profit; each department, the follower in the hierarchy, makes the decision of the appointment scheduling to maximize its own profit while simultaneously minimizing surplus capacity. Doctor wage and demand are considered as fuzzy variables to better describe the real-life situation. Then we use chance operator to handle the model with fuzzy parameters and equivalently transform the appointment scheduling model into a crisp model. Moreover, interactive algorithm based on satisfaction is employed to convert the bilevel programming into a single level programming, in order to make it solvable. Finally, the numerical experiments were executed to demonstrate the efficiency and effectiveness of the proposed approaches.

  15. Sampling from stochastic reservoir models constrained by production data

    Energy Technology Data Exchange (ETDEWEB)

    Hegstad, Bjoern Kaare

    1997-12-31

    When a petroleum reservoir is evaluated, it is important to forecast future production of oil and gas and to assess forecast uncertainty. This is done by defining a stochastic model for the reservoir characteristics, generating realizations from this model and applying a fluid flow simulator to the realizations. The reservoir characteristics define the geometry of the reservoir, initial saturation, petrophysical properties etc. This thesis discusses how to generate realizations constrained by production data, that is to say, the realizations should reproduce the observed production history of the petroleum reservoir within the uncertainty of these data. The topics discussed are: (1) Theoretical framework, (2) History matching, forecasting and forecasting uncertainty, (3) A three-dimensional test case, (4) Modelling transmissibility multipliers by Markov random fields, (5) Up scaling, (6) The link between model parameters, well observations and production history in a simple test case, (7) Sampling the posterior using optimization in a hierarchical model, (8) A comparison of Rejection Sampling and Metropolis-Hastings algorithm, (9) Stochastic simulation and conditioning by annealing in reservoir description, and (10) Uncertainty assessment in history matching and forecasting. 139 refs., 85 figs., 1 tab.
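    Topic (8) above compares Rejection Sampling with the Metropolis-Hastings algorithm; a minimal random-walk Metropolis-Hastings sampler, run here on a trivial 1-D standard-normal target purely for illustration rather than a reservoir posterior, looks like this:

    ```python
    import numpy as np

    def metropolis_hastings(logp, x0, n, step=0.5, seed=0):
        """Random-walk Metropolis-Hastings: draws n (correlated) samples
        from a density known only up to a constant through its
        log-density logp, using a symmetric Gaussian proposal."""
        rng = np.random.default_rng(seed)
        x, out = x0, np.empty(n)
        for i in range(n):
            prop = x + step * rng.normal()               # symmetric proposal
            if np.log(rng.uniform()) < logp(prop) - logp(x):
                x = prop                                 # accept; else keep x
            out[i] = x
        return out

    samples = metropolis_hastings(lambda v: -0.5 * v * v, 0.0, 20000)
    print(samples.mean(), samples.std())  # near 0 and 1 for a standard-normal target
    ```

    In the conditioning problem of the thesis, `logp` would be the (unnormalized) log-posterior combining the prior reservoir model with the production-data likelihood.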

  16. Impact of cloud microphysics on cloud-radiation interactions in the CSU general circulation model

    Energy Technology Data Exchange (ETDEWEB)

    Fowler, L.D.; Randall, D.A.

    1995-04-01

    Our ability to study and quantify the impact of cloud-radiation interactions in studying global scale climate variations strongly relies upon the ability of general circulation models (GCMs) to simulate the coupling between the spatial and temporal variations of the model-generated cloudiness and atmospheric moisture budget components. In particular, the ability of GCMs to reproduce the geographical distribution of the sources and sinks of the planetary radiation balance depends upon their representation of the formation and dissipation of cloudiness in conjunction with cloud microphysics processes, and the fractional amount and optical characteristics of cloudiness in conjunction with the mass of condensate stored in the atmosphere. A cloud microphysics package which encompasses five prognostic variables for the mass of water vapor, cloud water, cloud ice, rain, and snow has been implemented in the Colorado State University General Circulation Model (CSU GCM) to simulate large-scale condensation processes. Convection interacts with the large-scale environment through the detrainment of cloud water and cloud ice at the top of cumulus towers. The cloud infrared emissivity and cloud optical depth of the model-generated cloudiness are interactive and depend upon the mass of cloud water and cloud ice suspended in the atmosphere. The global atmospheric moisture budget and planetary radiation budget of the CSU GCM obtained from a perpetual January simulation are discussed. Geographical distributions of the atmospheric moisture species are presented. Global maps of the top-of-atmosphere outgoing longwave radiation and planetary albedo are compared against Earth Radiation Budget Experiment (ERBE) satellite data.

  17. Deployment Models: Towards Eliminating Security Concerns From Cloud Computing

    OpenAIRE

    Zhao, Gansen; Chunming, Rong; Jaatun, Martin Gilje; Sandnes, Frode Eika

    2010-01-01

    Cloud computing has become a popular choice as an alternative to investing in new IT systems. When making decisions on adopting cloud computing related solutions, security has always been a major concern. This article summarizes security concerns in cloud computing and proposes five service deployment models to ease these concerns. The proposed models provide different security related features to address different requirements and scenarios and can serve as reference models for deployment. D...

  18. Cloud Computing Adoption Business Model Factors: Does Enterprise Size Matter?

    OpenAIRE

    Bogataj Habjan, Kristina; Pucihar, Andreja

    2017-01-01

    This paper presents the results of research investigating the impact of business model factors on cloud computing adoption. The introduced research model consists of 40 cloud computing business model factors, grouped into eight factor groups. Their impact and importance for cloud computing adoption were investigated among enterprises in Slovenia. Furthermore, differences in opinion according to enterprise size were investigated. Research results show no statistically significant impacts of in...

  19. A stratiform cloud parameterization for General Circulation Models

    International Nuclear Information System (INIS)

    Ghan, S.J.; Leung, L.R.; Chuang, C.C.; Penner, J.E.; McCaa, J.

    1994-01-01

    The crude treatment of clouds in General Circulation Models (GCMs) is widely recognized as a major limitation in the application of these models to predictions of global climate change. The purpose of this project is to develop a parameterization for stratiform clouds in GCMs that expresses stratiform clouds in terms of bulk microphysical properties and their subgrid variability. In this parameterization, precipitating cloud species are distinguished from non-precipitating species, and the liquid phase is distinguished from the ice phase. The size of the non-precipitating cloud particles (which influences both the cloud radiative properties and the conversion of non-precipitating cloud species to precipitating species) is determined by predicting both the mass and number concentrations of each species

  20. Elastic Model Transitions Using Quadratic Inequality Constrained Least Squares

    Science.gov (United States)

    Orr, Jeb S.

    2012-01-01

    A technique is presented for initializing multiple discrete finite element model (FEM) mode sets for certain types of flight dynamics formulations that rely on superposition of orthogonal modes for modeling the elastic response. Such approaches are commonly used for modeling launch vehicle dynamics, and challenges arise due to the rapidly time-varying nature of the rigid-body and elastic characteristics. By way of an energy argument, a quadratic inequality constrained least squares (LSQI) algorithm is employed to effect a smooth transition from one set of FEM eigenvectors to another, with no requirement that the models be of similar dimension or that the eigenvectors be correlated in any particular way. The physically unrealistic and controversial method of eigenvector interpolation is completely avoided, and the discrete solution approximates that of the continuously varying system. The real-time computational burden is shown to be negligible due to convenient features of the solution method. Simulation results are presented, and applications to staging and other discontinuous mass changes are discussed.
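    A common formulation of the quadratic inequality constrained least squares problem is: minimize ||Ax − b|| subject to ||x|| ≤ α. The sketch below solves it by bisecting on a Tikhonov regularization parameter, a standard LSQI approach that may differ from the paper's exact algorithm:

    ```python
    import numpy as np

    def lsqi(A, b, alpha):
        """Least squares with a quadratic inequality constraint:
        minimize ||A x - b|| subject to ||x|| <= alpha.
        If the unconstrained solution satisfies the bound it is returned;
        otherwise a Tikhonov parameter lam with ||x(lam)|| = alpha is
        located by bisection, since ||x(lam)|| decreases in lam."""
        x = np.linalg.lstsq(A, b, rcond=None)[0]
        if np.linalg.norm(x) <= alpha:
            return x                              # constraint inactive
        n = A.shape[1]
        x_of = lambda lam: np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ b)
        lo, hi = 0.0, 1.0
        while np.linalg.norm(x_of(hi)) > alpha:   # bracket the root
            hi *= 2.0
        for _ in range(100):                      # bisect on ||x(lam)|| = alpha
            mid = 0.5 * (lo + hi)
            lo, hi = (mid, hi) if np.linalg.norm(x_of(mid)) > alpha else (lo, mid)
        return x_of(hi)

    rng = np.random.default_rng(0)
    A = rng.normal(size=(10, 3))
    b = rng.normal(size=10)
    x = lsqi(A, b, 0.1)
    print(np.linalg.norm(x))  # <= 0.1, on the boundary if the constraint is active
    ```

    Forming the normal equations is adequate for a sketch; a production implementation would prefer a GSVD- or QR-based solver for conditioning.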

  1. Constraining statistical-model parameters using fusion and spallation reactions

    Directory of Open Access Journals (Sweden)

    Charity Robert J.

    2011-10-01

    Full Text Available The de-excitation of compound nuclei has been successfully described for several decades by means of statistical models. However, such models involve a large number of free parameters and ingredients that are often underconstrained by experimental data. We show how the degeneracy of the model ingredients can be partially lifted by studying different entrance channels for de-excitation, which populate different regions of the parameter space of the compound nucleus. Fusion reactions, in particular, play an important role in this strategy because they fix three out of four of the compound-nucleus parameters (mass, charge and total excitation energy. The present work focuses on fission and intermediate-mass-fragment emission cross sections. We prove how equivalent parameter sets for fusion-fission reactions can be resolved using another entrance channel, namely spallation reactions. Intermediate-mass-fragment emission can be constrained in a similar way. An interpretation of the best-fit IMF barriers in terms of the Wigner energies of the nascent fragments is discussed.

  2. Route Assessment for Unmanned Aerial Vehicle Based on Cloud Model

    Directory of Open Access Journals (Sweden)

    Xixia Sun

    2014-01-01

    Full Text Available An integrated route assessment approach based on cloud model is proposed in this paper, where various sources of uncertainties are well kept and modeled by cloud theory. Firstly, a systemic criteria framework incorporating models for scoring subcriteria is developed. Then, the cloud model is introduced to represent linguistic variables, and survivability probability histogram of each route is converted into normal clouds by cloud transformation, enabling both randomness and fuzziness in the assessment environment to be managed simultaneously. Finally, a new way to measure the similarity between two normal clouds satisfying reflexivity, symmetry, transitivity, and overlapping is proposed. Experimental results demonstrate that the proposed route assessment approach outperforms fuzzy logic based assessment approach with regard to feasibility, reliability, and consistency with human thinking.
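    The normal clouds used here come from Li's cloud model, which characterizes a qualitative concept by an expectation Ex, entropy En, and hyper-entropy He; a minimal forward normal cloud generator (a textbook sketch, not the paper's implementation) is:

    ```python
    import numpy as np

    def forward_normal_cloud(Ex, En, He, n, seed=None):
        """Forward normal cloud generator from Li's cloud model: draws n
        'cloud drops' (x, mu) from expectation Ex, entropy En and
        hyper-entropy He, capturing randomness and fuzziness together."""
        rng = np.random.default_rng(seed)
        En_prime = rng.normal(En, He, n)                   # per-drop entropy sample
        x = rng.normal(Ex, np.abs(En_prime))               # drop positions
        mu = np.exp(-(x - Ex) ** 2 / (2 * En_prime ** 2))  # membership degrees
        return x, mu

    x, mu = forward_normal_cloud(Ex=0.0, En=1.0, He=0.1, n=10000, seed=42)
    print(x.mean(), x.std(), mu.max())
    ```

    Converting a survivability histogram into normal clouds (the paper's cloud transformation) then amounts to fitting (Ex, En, He) triples to the histogram, after which similarity between two clouds can be measured on the fitted parameters.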

  3. A Constraint Model for Constrained Hidden Markov Models

    DEFF Research Database (Denmark)

    Christiansen, Henning; Have, Christian Theil; Lassen, Ole Torp

    2009-01-01

    A Hidden Markov Model (HMM) is a common statistical model which is widely used for analysis of biological sequence data and other sequential phenomena. In the present paper we extend HMMs with constraints and show how the familiar Viterbi algorithm can be generalized, based on constraint solving ...
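    One simple way to combine Viterbi decoding with constraints is to forbid certain states at certain positions; the sketch below is a generic illustration of that idea, not the constraint-solving formalism of the paper:

    ```python
    import numpy as np

    def viterbi(obs, pi, A, B, allowed=None):
        """Viterbi decoding with optional per-position state constraints:
        allowed[t] is a set of permitted states at position t (None = no
        constraint). pi: initial probs, A[i, j]: transition i -> j,
        B[s, o]: probability of emitting symbol o in state s."""
        pi, A, B = np.asarray(pi), np.asarray(A), np.asarray(B)
        n_states, T = len(pi), len(obs)
        logd = np.full((T, n_states), -np.inf)  # best log-prob ending in state s
        back = np.zeros((T, n_states), dtype=int)
        ok = lambda t, s: allowed is None or allowed[t] is None or s in allowed[t]
        for s in range(n_states):
            if ok(0, s):
                logd[0, s] = np.log(pi[s]) + np.log(B[s, obs[0]])
        for t in range(1, T):
            for s in range(n_states):
                if ok(t, s):
                    scores = logd[t - 1] + np.log(A[:, s])
                    back[t, s] = int(np.argmax(scores))
                    logd[t, s] = scores[back[t, s]] + np.log(B[s, obs[t]])
        path = [int(np.argmax(logd[-1]))]
        for t in range(T - 1, 0, -1):
            path.append(int(back[t, path[-1]]))
        return path[::-1]

    pi = [0.6, 0.4]
    A = [[0.7, 0.3], [0.4, 0.6]]
    B = [[0.9, 0.1], [0.2, 0.8]]
    obs = [0, 0, 1, 1]
    print(viterbi(obs, pi, A, B))                                   # -> [0, 0, 1, 1]
    print(viterbi(obs, pi, A, B, allowed=[None, None, {0}, None]))  # -> [0, 0, 0, 1]
    ```

    Pruning disallowed states inside the dynamic program preserves optimality over the constrained path set, which is the essence of generalizing Viterbi to constrained HMMs.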

  4. Explicit prediction of ice clouds in general circulation models

    Science.gov (United States)

    Kohler, Martin

    1999-11-01

    Although clouds play extremely important roles in the radiation budget and hydrological cycle of the Earth, there are large quantitative uncertainties in our understanding of their generation, maintenance and decay mechanisms, representing major obstacles in the development of reliable prognostic cloud water schemes for General Circulation Models (GCMs). Recognizing their relative neglect in the past, both observationally and theoretically, this work places special focus on ice clouds. A recent version of the UCLA - University of Utah Cloud Resolving Model (CRM) that includes interactive radiation is used to perform idealized experiments to study ice cloud maintenance and decay mechanisms under various conditions in terms of: (1) background static stability, (2) background relative humidity, (3) rate of cloud ice addition over a fixed initial time-period and (4) radiation: daytime, nighttime and no-radiation. Radiation is found to have major effects on the life-time of layer-clouds. Optically thick ice clouds decay significantly slower than expected from pure microphysical crystal fall-out (τ_cld = 0.9–1.4 h, as opposed to the no-motion τ_micro = 0.5–0.7 h). This is explained by the upward turbulent fluxes of water induced by IR destabilization, which partially balance the downward transport of water by snowfall. Solar radiation further slows the ice-water decay by destruction of the inversion above cloud-top and the resulting upward transport of water. Optically thin ice clouds, on the other hand, may exhibit even longer life-times (>1 day) in the presence of radiational cooling. The resulting saturation mixing ratio reduction provides for a constant cloud ice source. These CRM results are used to develop a prognostic cloud water scheme for the UCLA-GCM. The framework is based on the bulk water phase model of Ose (1993). The model predicts cloud liquid water and cloud ice separately, and is extended to split the ice phase into suspended cloud ice (predicted

  5. Architecting the cloud design decisions for cloud computing service models (SaaS, PaaS, and IaaS)

    CERN Document Server

    Kavis, Michael J

    2014-01-01

    An expert guide to selecting the right cloud service model for your business Cloud computing is all the rage, allowing for the delivery of computing and storage capacity to a diverse community of end-recipients. However, before you can decide on a cloud model, you need to determine what the ideal cloud service model is for your business. Helping you cut through all the haze, Architecting the Cloud is vendor neutral and guides you in making one of the most critical technology decisions that you will face: selecting the right cloud service model(s) based on a combination of both business and tec

  6. A fast infrared radiative transfer model for overlapping clouds

    International Nuclear Information System (INIS)

    Niu Jianguo; Yang Ping; Huang Hunglung; Davies, James E.; Li Jun; Baum, Bryan A.; Hu, Yong X.

    2007-01-01

    A fast infrared radiative transfer model (FIRTM2) appropriate for application to both single-layered and overlapping cloud situations is developed for simulating the outgoing infrared spectral radiance at the top of the atmosphere (TOA). In FIRTM2 a pre-computed library of cloud reflectance and transmittance values is employed to account for one or two cloud layers, whereas the background atmospheric optical thickness due to gaseous absorption can be computed from a clear-sky radiative transfer model. FIRTM2 is applicable to three atmospheric conditions: (1) clear-sky, (2) single-layered ice or water cloud, and (3) two simultaneous cloud layers in a column (e.g., ice cloud overlying water cloud). Moreover, FIRTM2 outputs the derivatives (i.e., Jacobians) of the TOA brightness temperature with respect to cloud optical thickness and effective particle size. Sensitivity analyses have been carried out to assess the performance of FIRTM2 for two spectral regions, namely the longwave (LW) band (587.3–1179.5 cm⁻¹) and the short-to-medium wave (SMW) band (1180.1–2228.9 cm⁻¹). The assessment is carried out in terms of brightness temperature differences (BTD) between FIRTM2 and the well-known discrete ordinates radiative transfer model (DISORT), henceforth referred to as BTD (F-D). The BTD (F-D) values for single-layered clouds are generally less than 0.8 K. For the case of two cloud layers (specifically ice cloud over water cloud), the BTD (F-D) values are also generally less than 0.8 K, except for the SMW band for the case of a very high altitude (>15 km) cloud comprised of small ice particles. Note that for clear-sky atmospheres, FIRTM2 reduces to the clear-sky radiative transfer model that is incorporated into FIRTM2, and the errors in this case are essentially those of the clear-sky radiative transfer model.

  7. Dark matter in a constrained E6 inspired SUSY model

    International Nuclear Information System (INIS)

    Athron, P.; Harries, D.; Nevzorov, R.; Williams, A.G.

    2016-01-01

    We investigate dark matter in a constrained E₆ inspired supersymmetric model with an exact custodial symmetry and compare with the CMSSM. The breakdown of E₆ leads to an additional U(1)_N symmetry and a discrete matter parity. The custodial and matter symmetries imply there are two stable dark matter candidates, though one may be extremely light and contribute negligibly to the relic density. We demonstrate that a predominantly Higgsino, or mixed bino-Higgsino, neutralino can account for all of the relic abundance of dark matter, while fitting a 125 GeV SM-like Higgs and evading LHC limits on new states. However we show that the recent LUX 2016 limit on direct detection places severe constraints on the mixed bino-Higgsino scenarios that explain all of the dark matter. Nonetheless we still reveal interesting scenarios where the gluino, neutralino and chargino are light and discoverable at the LHC, but the full relic abundance is not accounted for. At the same time we also show that there is a huge volume of parameter space, with a predominantly Higgsino dark matter candidate that explains all the relic abundance, that will be discoverable with XENON1T. Finally we demonstrate that for the E₆ inspired model the exotic leptoquarks could still be light and within range of future LHC searches.

  8. Constrained variability of modeled T:ET ratio across biomes

    Science.gov (United States)

    Fatichi, Simone; Pappas, Christoforos

    2017-07-01

    A large variability (35-90%) in the ratio of transpiration to total evapotranspiration (referred to here as T:ET) across biomes or even at the global scale has been documented by a number of studies carried out with different methodologies. Previous empirical results also suggest that T:ET does not covary with mean precipitation and has a positive dependence on leaf area index (LAI). Here we use a mechanistic ecohydrological model, with a refined process-based description of evaporation from the soil surface, to investigate the variability of T:ET across biomes. Numerical results reveal a more constrained range and higher mean of T:ET (70 ± 9%, mean ± standard deviation) when compared to observation-based estimates. T:ET is confirmed to be independent from mean precipitation, while it is found to be correlated with LAI seasonally but uncorrelated across multiple sites. Larger LAI increases evaporation from interception but diminishes ground evaporation, with the two effects largely compensating each other. These results offer mechanistic model-based evidence to the ongoing research about the patterns of T:ET and the factors influencing its magnitude across biomes.

  9. Fast optimization of statistical potentials for structurally constrained phylogenetic models

    Directory of Open Access Journals (Sweden)

    Rodrigue Nicolas

    2009-09-01

    Full Text Available Abstract Background Statistical approaches for protein design are relevant in the field of molecular evolutionary studies. In recent years, new, so-called structurally constrained (SC models of protein-coding sequence evolution have been proposed, which use statistical potentials to assess sequence-structure compatibility. In a previous work, we defined a statistical framework for optimizing knowledge-based potentials especially suited to SC models. Our method used the maximum likelihood principle and provided what we call the joint potentials. However, the method required numerical estimations by the use of computationally heavy Markov Chain Monte Carlo sampling algorithms. Results Here, we develop an alternative optimization procedure, based on a leave-one-out argument coupled to fast gradient descent algorithms. We assess that the leave-one-out potential yields very similar results to the joint approach developed previously, both in terms of the resulting potential parameters, and by Bayes factor evaluation in a phylogenetic context. On the other hand, the leave-one-out approach results in a considerable computational benefit (up to a 1,000 fold decrease in computational time for the optimization procedure. Conclusion Due to its computational speed, the optimization method we propose offers an attractive alternative for the design and empirical evaluation of alternative forms of potentials, using large data sets and high-dimensional parameterizations.

  10. Clumpy molecular clouds: A dynamic model self-consistently regulated by T Tauri star formation

    International Nuclear Information System (INIS)

    Norman, C.; Silk, J.

    1980-01-01

    A new model is proposed which can account for the longevity, energetics, and dynamical structure of dark molecular clouds. It seems clear that the kinetic and gravitational energy in macroscopic cloud motions cannot account for the energetics of many molecular clouds. A stellar energy source must evidently be tapped, and infrared observations indicate that one cannot utilize massive stars in dark clouds. Recent observations of a high space density of T Tauri stars in some dark clouds provide the basis for our assertion that high-velocity winds from these low-mass pre--main-sequence stars provide a continuous dynamic input into molecular clouds. The T Tauri winds sweep up shells of gas, the intersections or collisions of which form dense clumps embedded in a more rarefied interclump medium. Observations constrain the clumps to be ram-pressure confined, but at the relatively low Mach numbers, continuous leakage occurs. This mass input into the interclump medium leads to the existence of two phases: a dense, cold phase (clumps of density ≈10⁴–10⁵ cm⁻³ and temperature ≈10 K) and a warm, more diffuse, interclump medium (ICM, of density ≈10³–10⁴ cm⁻³ and temperature ≈30 K). Clump collisions lead to coalescence, and the evolution of the mass spectrum of clumps is studied

  11. Implementation of a Novel Educational Modeling Approach for Cloud Computing

    Directory of Open Access Journals (Sweden)

    Sara Ouahabi

    2014-12-01

    Full Text Available The Cloud model is cost-effective because customers pay for their actual usage without upfront costs, and scalable because it can be used more or less depending on the customers' needs. Due to its advantages, the Cloud has been increasingly adopted in many areas, such as banking, e-commerce, the retail industry, and academia. In education, the cloud is used to manage the large volume of educational resources produced across many universities. Keeping content interoperable across an inter-university Cloud is not always easy. Diffusion of pedagogical content on the Cloud by different E-Learning institutions leads to heterogeneous content, which influences the quality of teaching that universities offer to teachers and learners. For this reason comes the idea of using IMS-LD coupled with metadata in the cloud. This paper presents the implementation of our previous educational modeling by combining a J2EE application with the Reload editor to model heterogeneous content in the cloud. The new approach that we followed focuses on keeping interoperability between Educational Cloud content for teachers and learners, and facilitates the identification, reuse, sharing, and adaptation of teaching and learning resources in the Cloud.

  12. The Impact of Aerosols on Cloud and Precipitation Processes: Cloud-Resolving Model Simulations

    Science.gov (United States)

    Tao, Wei-Kuo; Li, Xiaowen; Khain, Alexander; Matsui, Toshihisa; Lang, Stephen; Simpson, Joanne

    2008-01-01

    Please see Tao et al. (2007) for a more detailed description of aerosol impacts on precipitation. Recently, a detailed spectral-bin microphysical scheme was implemented into the Goddard Cumulus Ensemble (GCE) model. Atmospheric aerosols are also described using number density size-distribution functions. A spectral-bin microphysical model is very expensive from a computational point of view and has only been implemented into the 2D version of the GCE at the present time. The model is tested by studying the evolution of deep tropical clouds in the west Pacific warm pool region and summertime convection over a mid-latitude continent with different concentrations of CCN: a low "clean" concentration and a high "dirty" concentration. The impact of atmospheric aerosol concentration on cloud and precipitation will be investigated.

  13. Dispersion modeling by kinematic simulation: Cloud dispersion model

    International Nuclear Information System (INIS)

    Fung, J C H; Perkins, R J

    2008-01-01

    A new technique has been developed to compute mean and fluctuating concentrations in complex turbulent flows (tidal currents near a coast and the deep ocean). An initial distribution of material is discretized into many small clouds which are advected by a combination of the mean flow and large-scale turbulence. The turbulence can be simulated either by kinematic simulation (KS) or direct numerical simulation. The clouds also diffuse relative to their centroids; the statistics for this are obtained from a separate calculation of the growth of individual clouds in small-scale turbulence, generated by KS. The ensemble of discrete clouds is periodically re-discretized, to limit the size of the small clouds and prevent overlapping. The model is illustrated with simulations of dispersion in uniform flow, and the results are compared with analytic, steady-state solutions. The aim of this study is to understand how pollutants disperse in a turbulent flow through a numerical simulation of fluid particle motion in a random flow field generated by Fourier modes. Although this homogeneous turbulence is a rather 'simple' flow, it represents a building block toward understanding pollutant dispersion in more complex flows. The results presented here are preliminary in nature, but we expect that similar qualitative results should be observed in a genuine turbulent flow.
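    A kinematic simulation velocity field is a sum of incompressible Fourier modes with random wavevectors and phases; the sketch below (mode amplitudes chosen arbitrarily, not a calibrated energy spectrum) builds such a 2-D field and advects a small cloud of particles in it:

    ```python
    import numpy as np

    def ks_velocity_field(n_modes=16, L=1.0, seed=0):
        """2-D incompressible random velocity field built from Fourier
        modes (kinematic simulation). Amplitudes are illustrative
        choices, not a calibrated turbulence spectrum."""
        rng = np.random.default_rng(seed)
        k = rng.normal(size=(n_modes, 2))
        k = k / np.linalg.norm(k, axis=1, keepdims=True)                # random directions
        k = k * (2 * np.pi / L) * rng.integers(1, 8, n_modes)[:, None]  # discrete magnitudes
        phase = rng.uniform(0.0, 2.0 * np.pi, n_modes)
        amp = rng.normal(size=n_modes) / n_modes
        # choosing a_n perpendicular to k_n makes each mode divergence-free
        a = amp[:, None] * np.stack([-k[:, 1], k[:, 0]], axis=1) \
            / np.linalg.norm(k, axis=1, keepdims=True)

        def u(x):  # x: (N, 2) positions -> (N, 2) velocities
            return np.cos(x @ k.T + phase) @ a
        return u

    # advect a small cloud of particles with forward Euler
    u = ks_velocity_field()
    x = np.random.default_rng(1).normal(0.0, 0.01, (100, 2))
    dt = 0.01
    for _ in range(200):
        x = x + dt * u(x)
    print(x.mean(axis=0), x.std(axis=0))  # centroid drift and spread about it
    ```

    Because each mode's amplitude vector is orthogonal to its wavevector, the synthesized field is exactly divergence-free, which is the key property KS borrows from real turbulence.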

  14. A comparison of food crispness based on the cloud model.

    Science.gov (United States)

    Wang, Minghui; Sun, Yonghai; Hou, Jumin; Wang, Xia; Bai, Xue; Wu, Chunhui; Yu, Libo; Yang, Jie

    2018-02-01

    The cloud model is a typical model which transforms a qualitative concept into a quantitative description. The cloud model has seldom been used in texture studies before. The purpose of this study was to apply the cloud model to food crispness comparison. The acoustic signals of carrots, white radishes, potatoes, Fuji apples, and crystal pears were recorded during compression, and three time-domain signal characteristics were extracted, including sound intensity, maximum short-time frame energy, and waveform index. The three signal characteristics and the cloud model were used to compare the crispness of the samples mentioned above. The crispness based on the Ex value of the cloud model, in descending order, was carrot > potato > white radish > Fuji apple > crystal pear. To verify the results of the acoustic signals, mechanical measurement and sensory evaluation were conducted. The results of the two verification experiments confirmed the feasibility of the cloud model. The microstructures of the five samples were also analyzed, and the microstructure parameters were negatively related with crispness. The cloud model method can be used for crispness comparison of different kinds of foods. The method is more accurate than traditional methods such as mechanical measurement and sensory evaluation. The cloud model method can also be applied extensively to other texture studies. © 2017 Wiley Periodicals, Inc.

  15. Investigating multiple solutions in the constrained minimal supersymmetric standard model

    Energy Technology Data Exchange (ETDEWEB)

    Allanach, B.C. [DAMTP, CMS, University of Cambridge,Wilberforce Road, Cambridge, CB3 0HA (United Kingdom); George, Damien P. [DAMTP, CMS, University of Cambridge,Wilberforce Road, Cambridge, CB3 0HA (United Kingdom); Cavendish Laboratory, University of Cambridge,JJ Thomson Avenue, Cambridge, CB3 0HE (United Kingdom); Nachman, Benjamin [SLAC, Stanford University,2575 Sand Hill Rd, Menlo Park, CA 94025 (United States)

    2014-02-07

    Recent work has shown that the Constrained Minimal Supersymmetric Standard Model (CMSSM) can possess several distinct solutions for certain values of its parameters. The extra solutions were not previously found by public supersymmetric spectrum generators because fixed point iteration (the algorithm used by the generators) is unstable in the neighbourhood of these solutions. The existence of the additional solutions calls into question the robustness of exclusion limits derived from collider experiments and cosmological observations upon the CMSSM, because limits were only placed on one of the solutions. Here, we map the CMSSM by exploring its multi-dimensional parameter space using the shooting method, which is not subject to the stability issues which can plague fixed point iteration. We are able to find multiple solutions where in all previous literature only one was found. The multiple solutions are of two distinct classes. One class, close to the border of bad electroweak symmetry breaking, is disfavoured by LEP2 searches for neutralinos and charginos. The other class has sparticles that are heavy enough to evade the LEP2 bounds. Chargino masses may differ by up to around 10% between the different solutions, whereas other sparticle masses differ at the sub-percent level. The prediction for the dark matter relic density can vary by a hundred percent or more between the different solutions, so analyses employing the dark matter constraint are incomplete without their inclusion.
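    The algorithmic point — fixed-point iteration cannot converge to solutions near which it is unstable, while a bracketing (shooting-style) search recovers them — can be seen in a one-dimensional toy problem. This is purely illustrative; the CMSSM computation itself involves renormalisation-group boundary conditions, not this map:

```python
# Toy map g with two fixed points: x = 0 (attracting) and x = 1 (repelling).
# Fixed-point iteration can only land on the attracting one; root-finding on
# f(x) = g(x) - x finds both.
g = lambda x: x ** 2
f = lambda x: g(x) - x

def fixed_point(x0, n=100):
    x = x0
    for _ in range(n):
        x = g(x)
    return x

def bisect(a, b, tol=1e-12):
    """Bracketing root finder, the 1-D analogue of a shooting search."""
    fa = f(a)
    while b - a > tol:
        m = 0.5 * (a + b)
        if fa * f(m) <= 0:
            b = m
        else:
            a, fa = m, f(m)
    return 0.5 * (a + b)

# Iteration started just below the repelling fixed point falls away to 0 ...
print(fixed_point(0.999))
# ... while bracketed root-finding recovers both solutions.
roots = [bisect(-0.5, 0.5), bisect(0.5, 1.5)]
print(roots)
```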

  16. Business process modeling in the cloud

    OpenAIRE

    Yarahmadi, Aziz

    2014-01-01

    In this study, I have defined the first steps of creating a methodological framework to implement a cloud business application. The term 'cloud' here refers to applying the processing power of a network of computing tools to business solutions in order to move on from legacy systems. I have introduced the hardware and software requirements of cloud computing in business and the procedure by which the business needs will be found, analyzed and recorded as a decision making system. But first we...

  17. Defining Generic Architecture for Cloud Infrastructure as a Service model

    NARCIS (Netherlands)

    Demchenko, Y.; de Laat, C.

    2011-01-01

    Infrastructure as a Service (IaaS) is one of the provisioning models for Clouds as defined in the NIST Clouds definition. Although widely used, current IaaS implementations and solutions don't have a common, well-defined architecture model. The paper attempts to define a generic architecture for

  18. Defining generic architecture for Cloud IaaS provisioning model

    NARCIS (Netherlands)

    Demchenko, Y.; de Laat, C.; Mavrin, A.; Leymann, F.; Ivanov, I.; van Sinderen, M.; Shishkov, B.

    2011-01-01

    Infrastructure as a Service (IaaS) is one of the provisioning models for Clouds as defined in the NIST Clouds definition. Although widely used, current IaaS implementations and solutions don't have a common, well-defined architecture model. The paper attempts to define a generic architecture for

  19. Establishing a Cloud Computing Success Model for Hospitals in Taiwan

    Science.gov (United States)

    Lian, Jiunn-Woei

    2017-01-01

    The purpose of this study is to understand the critical quality-related factors that affect cloud computing success of hospitals in Taiwan. In this study, private cloud computing is the major research target. The chief information officers participated in a questionnaire survey. The results indicate that the integration of trust into the information systems success model will have acceptable explanatory power to understand cloud computing success in the hospital. Moreover, information quality and system quality directly affect cloud computing satisfaction, whereas service quality indirectly affects the satisfaction through trust. In other words, trust serves as the mediator between service quality and satisfaction. This cloud computing success model will help hospitals evaluate or achieve success after adopting private cloud computing health care services. PMID:28112020

  1. Establishing a Cloud Computing Success Model for Hospitals in Taiwan

    Directory of Open Access Journals (Sweden)

    Jiunn-Woei Lian PhD

    2017-01-01

    Full Text Available The purpose of this study is to understand the critical quality-related factors that affect cloud computing success of hospitals in Taiwan. In this study, private cloud computing is the major research target. The chief information officers participated in a questionnaire survey. The results indicate that the integration of trust into the information systems success model will have acceptable explanatory power to understand cloud computing success in the hospital. Moreover, information quality and system quality directly affect cloud computing satisfaction, whereas service quality indirectly affects the satisfaction through trust. In other words, trust serves as the mediator between service quality and satisfaction. This cloud computing success model will help hospitals evaluate or achieve success after adopting private cloud computing health care services.

  2. Modeling of clouds and radiation for development of parameterizations for general circulation models

    International Nuclear Information System (INIS)

    Westphal, D.; Toon, B.; Jensen, E.; Kinne, S.; Ackerman, A.; Bergstrom, R.; Walker, A.

    1994-01-01

    Atmospheric Radiation Measurement (ARM) Program research at NASA Ames Research Center (ARC) includes radiative transfer modeling, cirrus cloud microphysics, and stratus cloud modeling. These efforts are designed to provide the basis for improving cloud and radiation parameterizations in our main effort: mesoscale cloud modeling. The range of non-convective cloud models used by the ARM modeling community can be crudely categorized based on the number of predicted hydrometeors, such as cloud water, ice water, rain, snow, graupel, etc. The simplest model has no predicted hydrometeors and diagnoses the presence of clouds based on the predicted relative humidity. The vast majority of cloud models have two or more predictive bulk hydrometeors and are termed either bulk water (BW) or size-resolving (SR) schemes. This study compares the various cloud models within the same dynamical framework, and compares results with observations rather than climate statistics

  3. Establishing a Cloud Computing Success Model for Hospitals in Taiwan

    OpenAIRE

    Lian, Jiunn-Woei

    2017-01-01

    The purpose of this study is to understand the critical quality-related factors that affect cloud computing success of hospitals in Taiwan. In this study, private cloud computing is the major research target. The chief information officers participated in a questionnaire survey. The results indicate that the integration of trust into the information systems success model will have acceptable explanatory power to understand cloud computing success in the hospital. Moreover, information quality...

  4. A Review on Broker Based Cloud Service Model

    Directory of Open Access Journals (Sweden)

    Nagarajan Rajganesh

    2016-09-01

    Full Text Available Cloud computing emerged as a utility-oriented computing paradigm that facilitates resource sharing under a pay-as-you-go model. Nowadays, cloud offerings are not limited to a fixed range of services, and anything can be shared as a service through the Internet. In this work, a detailed literature survey of cloud service discovery and composition has been conducted. A proposed architecture with the inclusion of a cloud broker is presented in our work. It highlights the importance of suitable service selection and ranking in fulfilling the customer's service requirements. The proposed cloud broker employs techniques such as reasoning and decision-making capabilities for improved cloud service selection and composition.

  5. A HARDCORE model for constraining an exoplanet's core size

    Science.gov (United States)

    Suissa, Gabrielle; Chen, Jingjing; Kipping, David

    2018-05-01

    The interior structure of an exoplanet is hidden from direct view yet likely plays a crucial role in influencing the habitability of Earth analogues. Inferences of the interior structure are impeded by a fundamental degeneracy that exists between any model comprising more than two layers and observations constraining just two bulk parameters: mass and radius. In this work, we show that although the inverse problem is indeed degenerate, there exist two boundary conditions that enable one to infer the minimum and maximum core radius fraction, CRFmin and CRFmax. These bounds hold true even for planets with light volatile envelopes, but require that the planet be fully differentiated and contain no layers denser than iron. With both bounds in hand, a marginal CRF can also be inferred by sampling in between. After validating on the Earth, we apply our method to Kepler-36b and measure CRFmin = (0.50 ± 0.07), CRFmax = (0.78 ± 0.02), and CRFmarg = (0.64 ± 0.11), broadly consistent with the Earth's true CRF value of 0.55. We apply our method to a suite of hypothetical measurements of synthetic planets to serve as a sensitivity analysis. We find that CRFmin and CRFmax have recovered uncertainties proportional to the relative error on the planetary density, but CRFmarg saturates to between 0.03 and 0.16 once (Δρ/ρ) drops below 1-2 per cent. This implies that mass and radius alone cannot provide any better constraints on internal composition once bulk density constraints hit around a per cent, providing a clear target for observers.
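    The flavour of such boundary-condition bounds can be illustrated with an incompressible two-layer toy model. This is a simplification for illustration only, not the HARDCORE model itself (which handles compression and volatile envelopes); the layer densities are assumed, uncompressed values:

```python
import numpy as np

def crf_two_layer(rho_bulk, rho_core=8.0, rho_mantle=3.3):
    """Core radius fraction for an incompressible two-layer planet.

    Solves rho_bulk = rho_core * CRF**3 + rho_mantle * (1 - CRF**3)
    for CRF, given assumed (illustrative) iron and silicate densities
    in g/cm^3. Real interior models must account for compression.
    """
    frac = (rho_bulk - rho_mantle) / (rho_core - rho_mantle)
    return np.clip(frac, 0.0, 1.0) ** (1.0 / 3.0)

# Earth's bulk density is 5.51 g/cm^3; the incompressible toy overestimates
# the true CRF of 0.55 because it ignores compression at depth.
print(f"toy Earth CRF = {crf_two_layer(5.51):.2f}")
```

    The mismatch with Earth's true value shows why the paper's bounds, rather than any single two-layer fit, are the robust quantities to extract from mass and radius alone.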

  6. Fuzzy chance constrained linear programming model for scrap charge optimization in steel production

    DEFF Research Database (Denmark)

    Rong, Aiying; Lahdelma, Risto

    2008-01-01

    the uncertainty based on fuzzy set theory and constrain the failure risk based on a possibility measure. Consequently, the scrap charge optimization problem is modeled as a fuzzy chance constrained linear programming problem. Since the constraints of the model mainly address the specification of the product...

  7. Cloud model construct for transaction-based cooperative systems ...

    African Journals Online (AJOL)

    Cloud model construct for transaction-based cooperative systems. ... procure cutting edge Information Technology infrastructure are some of the problems faced ... Results also reveal that credit cooperatives will benefit from the model by taking ...

  8. The emerging role of cloud computing in molecular modelling.

    Science.gov (United States)

    Ebejer, Jean-Paul; Fulle, Simone; Morris, Garrett M; Finn, Paul W

    2013-07-01

    There is a growing recognition of the importance of cloud computing for large-scale and data-intensive applications. The distinguishing features of cloud computing and their relationship to other distributed computing paradigms are described, as are the strengths and weaknesses of the approach. We review the use made to date of cloud computing for molecular modelling projects and the availability of front ends for molecular modelling applications. Although the use of cloud computing technologies for molecular modelling is still in its infancy, we demonstrate its potential by presenting several case studies. Rapid growth can be expected as more applications become available and costs continue to fall; cloud computing can make a major contribution not just in terms of the availability of on-demand computing power, but could also spur innovation in the development of novel approaches that utilize that capacity in more effective ways. Copyright © 2013 Elsevier Inc. All rights reserved.

  9. Using Deep Learning Model for Meteorological Satellite Cloud Image Prediction

    Science.gov (United States)

    Su, X.

    2017-12-01

    A satellite cloud image contains much weather information, such as precipitation information. Short-term cloud movement forecasting is important for precipitation forecasting and is the primary means of typhoon monitoring. Traditional methods mostly use cloud feature matching and linear extrapolation to predict cloud movement, so nonstationary processes during the movement of the cloud, such as inversion and deformation, are basically not considered. Predicting cloud movement promptly and accurately remains a hard task. As deep learning models perform well in learning spatiotemporal features, we can regard cloud image prediction as a spatiotemporal sequence forecasting problem and introduce a deep learning model to solve it. In this research, we use a variant of the Gated Recurrent Unit (GRU) that has convolutional structures to handle spatiotemporal features and build an end-to-end model to solve this forecasting problem. In this model, both the input and output are spatiotemporal sequences. Compared to the Convolutional LSTM (ConvLSTM) model, this model has fewer parameters. We apply this model to GOES satellite data, and the model performs well.
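    A ConvGRU-style cell of the kind described — GRU gating with convolutions in place of dense products — can be sketched in NumPy. This is an illustrative toy, not the paper's architecture; kernel sizes, channel counts, and the tiny frame size are arbitrary assumptions:

```python
import numpy as np

def conv2d_same(x, w):
    """Naive 'same'-padded 2-D convolution (cross-correlation form):
    x has shape (Cin, H, W), w has shape (Cout, Cin, kH, kW)."""
    cout, cin, kh, kw = w.shape
    _, h, wd = x.shape
    xp = np.pad(x, ((0, 0), (kh // 2, kh // 2), (kw // 2, kw // 2)))
    out = np.zeros((cout, h, wd))
    for o in range(cout):
        for i in range(cin):
            for di in range(kh):
                for dj in range(kw):
                    out[o] += w[o, i, di, dj] * xp[i, di:di + h, dj:dj + wd]
    return out

sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

def convgru_step(x, h, Wz, Uz, Wr, Ur, Wh, Uh):
    """One ConvGRU update: the usual GRU equations with convolutions."""
    z = sigmoid(conv2d_same(x, Wz) + conv2d_same(h, Uz))   # update gate
    r = sigmoid(conv2d_same(x, Wr) + conv2d_same(h, Ur))   # reset gate
    h_tilde = np.tanh(conv2d_same(x, Wh) + conv2d_same(r * h, Uh))
    return (1.0 - z) * h + z * h_tilde

# Tiny demo: 1-channel 16x16 "cloud image" frames, 4 hidden channels.
rng = np.random.default_rng(2)
C_in, C_hid, H, W = 1, 4, 16, 16
shapes = [(C_hid, C_in, 3, 3), (C_hid, C_hid, 3, 3)] * 3
Wz, Uz, Wr, Ur, Wh, Uh = (0.1 * rng.normal(size=s) for s in shapes)
h = np.zeros((C_hid, H, W))
for _ in range(5):                      # run over a 5-frame input sequence
    frame = rng.normal(size=(C_in, H, W))
    h = convgru_step(frame, h, Wz, Uz, Wr, Ur, Wh, Uh)
print(h.shape, float(np.abs(h).max()))
```

    In an end-to-end forecaster, stacks of such cells encode the observed frame sequence and decode the predicted one; sharing convolutional kernels across the grid is what keeps the parameter count below a ConvLSTM of the same width.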

  10. Evaluation of a stratiform cloud parameterization for general circulation models

    Energy Technology Data Exchange (ETDEWEB)

    Ghan, S.J.; Leung, L.R. [Pacific Northwest National Lab., Richland, WA (United States); McCaa, J. [Univ. of Washington, Seattle, WA (United States)

    1996-04-01

    To evaluate the relative importance of horizontal advection of cloud versus cloud formation within the grid cell of a single column model (SCM), we have performed a series of simulations with our SCM driven by a fixed vertical velocity and various rates of horizontal advection.

  11. Toward cognitively constrained models of language processing : A review

    NARCIS (Netherlands)

    Vogelzang, Margreet; Mills, Anne C.; Reitter, David; van Rij, Jacolien; Hendriks, Petra; van Rijn, Hedderik

    2017-01-01

    Language processing is not an isolated capacity, but is embedded in other aspects of our cognition. However, it is still largely unexplored to what extent and how language processing interacts with general cognitive resources. This question can be investigated with cognitively constrained

  12. Modeling Exoplanetary Haze and Cloud Effects for Transmission Spectroscopy in the TRAPPIST-1 System

    Science.gov (United States)

    Moran, Sarah E.; Horst, Sarah M.; Lewis, Nikole K.; Batalha, Natasha E.; de Wit, Julien

    2018-01-01

    We present theoretical transmission spectra of the planets TRAPPIST-1d, e, f, and g using a version of the CaltecH Inverse ModEling and Retrieval Algorithms (CHIMERA) atmospheric modeling code. We use particle size, aerosol production rate, and aerosol composition inputs from recent laboratory experiments relevant to the TRAPPIST-1 system to constrain cloud and haze behavior and their effects on transmission spectra. We explore these cloud and haze cases for a variety of theoretical atmospheric compositions, including hydrogen-, nitrogen-, and carbon dioxide-dominated atmospheres. We then demonstrate that physically motivated, laboratory-supported clouds and hazes can obscure spectral features at wavelengths and resolutions relevant to instruments on the Hubble Space Telescope and the upcoming James Webb Space Telescope. Lastly, with laboratory-based constraints on haze production rates for terrestrial exoplanets, we constrain possible bulk atmospheric compositions of the TRAPPIST-1 planets based on current observations. We show that continued collection of optical data, beyond the supported wavelength range of the James Webb Space Telescope, is necessary to explore the full effect of hazes on transmission spectra of exoplanetary atmospheres like those of the TRAPPIST-1 system.

  13. NASA 3D Models: CloudSat

    Data.gov (United States)

    National Aeronautics and Space Administration — Launched in April 2006, CloudSat monitors the state of the Earth’s atmosphere and weather with a sophisticated radar system. The instrument, jointly developed with...

  14. [Treatment of cloud radiative effects in general circulation models

    International Nuclear Information System (INIS)

    Wang, W.C.

    1993-01-01

    This is a renewal proposal for an on-going project of the Department of Energy (DOE)/Atmospheric Radiation Measurement (ARM) Program. The objective of the ARM Program is to improve the treatment of radiation-cloud interactions in GCMs so that reliable predictions of the timing and magnitude of greenhouse gas-induced global warming and regional responses can be made. The ARM Program supports two research areas: (I) the modeling and analysis of data related to the parameterization of clouds and radiation in general circulation models (GCMs); and (II) the development of advanced instrumentation both for mapping the three-dimensional structure of the atmosphere and for high accuracy/precision radiometric observations. The present project conducts research in area (I) and focuses on the GCM treatment of cloud life cycle, optical properties, and vertical overlap. The project has two tasks: (1) Development and Refinement of GCM Radiation-Cloud Treatment Using ARM Data; and (2) Validation of GCM Radiation-Cloud Treatment

  15. A cost modelling system for cloud computing

    OpenAIRE

    Ajeh, Daniel; Ellman, Jeremy; Keogh, Shelagh

    2014-01-01

    An advance in technology unlocks new opportunities for organizations to increase their productivity, efficiency and process automation while reducing the cost of doing business as well. The emergence of cloud computing addresses these prospects through the provision of agile systems that are scalable, flexible and reliable as well as cost effective. Cloud computing has made hosting and deployment of computing resources cheaper and easier with no up-front charges but pay per-use flexible payme...

  16. Cloud Impacts on Pavement Temperature in Energy Balance Models

    Science.gov (United States)

    Walker, C. L.

    2013-12-01

    Forecast systems provide decision support for end-users ranging from the solar energy industry to municipalities concerned with road safety. Pavement temperature is an important variable when considering vehicle response to various weather conditions. A complex, yet direct relationship exists between tire and pavement temperatures. Literature has shown that as tire temperature increases, friction decreases, which affects vehicle performance. Many forecast systems suffer from inaccurate radiation forecasts resulting in part from the inability to model different types of clouds and their influence on radiation. This research focused on forecast improvement by determining how cloud type impacts the amount of shortwave radiation reaching the surface and subsequent pavement temperatures. The study region was the Great Plains, where surface solar radiation data were obtained from the High Plains Regional Climate Center's Automated Weather Data Network stations. Road pavement temperature data were obtained from the Meteorological Assimilation Data Ingest System. Cloud properties and radiative transfer quantities were obtained from the Clouds and Earth's Radiant Energy System mission via Aqua and Terra Moderate Resolution Imaging Spectroradiometer satellite products. An additional cloud data set was incorporated from the Naval Research Laboratory Cloud Classification algorithm. Statistical analyses using a modified nearest neighbor approach were first performed, relating shortwave radiation variability to road pavement temperature fluctuations. Then statistical associations were determined between the shortwave radiation and cloud property data sets. Preliminary results suggest that substantial pavement forecasting improvement is possible with the inclusion of cloud-specific information. Future model sensitivity testing seeks to quantify the magnitude of forecast improvement.

  17. Prognostic cloud water in the Los Alamos general circulation model

    International Nuclear Information System (INIS)

    Kristjansson, J.E.; Kao, C.Y.J.

    1993-01-01

    Most of today's general circulation models (GCMs) have a greatly simplified treatment of condensation and clouds. Recent observational studies of the earth's radiation budget have suggested cloud-related feedback mechanisms to be of tremendous importance for the issue of global change. Thus, there has arisen an urgent need for improvements in the treatment of clouds in GCMs, especially as the clouds relate to radiation. In the present paper, we investigate the effects of introducing prognostic cloud water into the Los Alamos GCM. The cloud water field, produced by both stratiform and convective condensation, is subject to 3-dimensional advection and vertical diffusion. The cloud water enters the radiation calculations through the longwave emissivity calculations. Results from several sensitivity simulations show that realistic cloud water and precipitation fields can be obtained with the applied method. Comparisons with observations show that the most realistic results are obtained when more sophisticated schemes for moist convection are introduced at the same time. The model's cold bias is reduced and the zonal winds become stronger, due to more realistic tropical convection

  18. Cloud Computing Adoption Model for Universities to Increase ICT Proficiency

    Directory of Open Access Journals (Sweden)

    Safiya Okai

    2014-08-01

    Full Text Available Universities around the world, especially those in developing countries, are faced with the problem of delivering the level of information and communications technology (ICT) needed to facilitate the teaching, learning, research, and development activities ideal in a typical university, in line with advancements in technology and the growing dependence on IT. This is mainly due to the high cost involved in providing and maintaining the needed hardware and software. A technology such as cloud computing, which delivers on-demand provisioning of IT resources on a pay-per-use basis, can be used to address this problem. Cloud computing promises better delivery of IT services as well as availability whenever and wherever needed, at reduced costs, with users paying only for what they consume through the services of cloud service providers. The cloud technology reduces complexity while increasing the speed and quality of IT services provided; however, despite these benefits, the challenges that come with its adoption have left many sectors, especially higher education, skeptical about committing to this technology. This article identifies the reasons for the slow rate of adoption of cloud computing at the university level, discusses the challenges faced, and proposes a cloud computing adoption model that contains strategic guidelines to overcome the major challenges identified and a roadmap for the successful adoption of cloud computing by universities. The model was tested in one of the universities and found to be both useful and appropriate for adopting cloud computing at the university level.

  19. Constraining the interacting dark energy models from weak gravity conjecture and recent observations

    International Nuclear Information System (INIS)

    Chen Ximing; Wang Bin; Pan Nana; Gong Yungui

    2011-01-01

    We examine the effectiveness of the weak gravity conjecture in constraining the dark energy by comparing with observations. For general dark energy models with plausible phenomenological interactions between dark sectors, we find that although the weak gravity conjecture can constrain the dark energy, the constraint is looser than that from the observations.

  20. A Condensation–coalescence Cloud Model for Exoplanetary Atmospheres: Formulation and Test Applications to Terrestrial and Jovian Clouds

    Energy Technology Data Exchange (ETDEWEB)

    Ohno, Kazumasa; Okuzumi, Satoshi [Department of Earth and Planetary Sciences, Tokyo Institute of Technology, Meguro, Tokyo 152-8551 (Japan)

    2017-02-01

    A number of transiting exoplanets have featureless transmission spectra that might suggest the presence of clouds at high altitudes. A realistic cloud model is necessary to understand the atmospheric conditions under which such high-altitude clouds can form. In this study, we present a new cloud model that takes into account the microphysics of both condensation and coalescence. Our model provides the vertical profiles of the size and density of cloud and rain particles in an updraft for a given set of physical parameters, including the updraft velocity and the number density of cloud condensation nuclei (CCNs). We test our model by comparing with observations of trade-wind cumuli on Earth and ammonia ice clouds in Jupiter. For trade-wind cumuli, the model including both condensation and coalescence gives predictions that are consistent with observations, while the model including only condensation overestimates the mass density of cloud droplets by up to an order of magnitude. For Jovian ammonia clouds, the condensation–coalescence model simultaneously reproduces the effective particle radius, cloud optical thickness, and cloud geometric thickness inferred from Voyager observations if the updraft velocity and CCN number density are taken to be consistent with the results of moist convection simulations and Galileo probe measurements, respectively. These results suggest that the coalescence of condensate particles is important not only in terrestrial water clouds but also in Jovian ice clouds. Our model will be useful to understand how the dynamics, compositions, and nucleation processes in exoplanetary atmospheres affect the vertical extent and optical thickness of exoplanetary clouds via cloud microphysics.

  1. A Condensation–coalescence Cloud Model for Exoplanetary Atmospheres: Formulation and Test Applications to Terrestrial and Jovian Clouds

    International Nuclear Information System (INIS)

    Ohno, Kazumasa; Okuzumi, Satoshi

    2017-01-01

    A number of transiting exoplanets have featureless transmission spectra that might suggest the presence of clouds at high altitudes. A realistic cloud model is necessary to understand the atmospheric conditions under which such high-altitude clouds can form. In this study, we present a new cloud model that takes into account the microphysics of both condensation and coalescence. Our model provides the vertical profiles of the size and density of cloud and rain particles in an updraft for a given set of physical parameters, including the updraft velocity and the number density of cloud condensation nuclei (CCNs). We test our model by comparing with observations of trade-wind cumuli on Earth and ammonia ice clouds in Jupiter. For trade-wind cumuli, the model including both condensation and coalescence gives predictions that are consistent with observations, while the model including only condensation overestimates the mass density of cloud droplets by up to an order of magnitude. For Jovian ammonia clouds, the condensation–coalescence model simultaneously reproduces the effective particle radius, cloud optical thickness, and cloud geometric thickness inferred from Voyager observations if the updraft velocity and CCN number density are taken to be consistent with the results of moist convection simulations and Galileo probe measurements, respectively. These results suggest that the coalescence of condensate particles is important not only in terrestrial water clouds but also in Jovian ice clouds. Our model will be useful to understand how the dynamics, compositions, and nucleation processes in exoplanetary atmospheres affect the vertical extent and optical thickness of exoplanetary clouds via cloud microphysics.

  2. Scaling predictive modeling in drug development with cloud computing.

    Science.gov (United States)

    Moghadam, Behrooz Torabi; Alvarsson, Jonathan; Holm, Marcus; Eklund, Martin; Carlsson, Lars; Spjuth, Ola

    2015-01-26

    Growing data sets with increased time for analysis is hampering predictive modeling in drug discovery. Model building can be carried out on high-performance computer clusters, but these can be expensive to purchase and maintain. We have evaluated ligand-based modeling on cloud computing resources where computations are parallelized and run on the Amazon Elastic Cloud. We trained models on open data sets of varying sizes for the end points logP and Ames mutagenicity and compare with model building parallelized on a traditional high-performance computing cluster. We show that while high-performance computing results in faster model building, the use of cloud computing resources is feasible for large data sets and scales well within cloud instances. An additional advantage of cloud computing is that the costs of predictive models can be easily quantified, and a choice can be made between speed and economy. The easy access to computational resources with no up-front investments makes cloud computing an attractive alternative for scientists, especially for those without access to a supercomputer, and our study shows that it enables cost-efficient modeling of large data sets on demand within reasonable time.
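    The parallelisation pattern described — many independent model-building jobs dispatched to workers — can be sketched with the standard library. This is a local-thread stand-in for cloud instances, and the toy ridge-regularised polynomial fits are illustrative assumptions, not the paper's ligand-based models:

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

# Synthetic training data standing in for a chemical end point (e.g. logP).
rng = np.random.default_rng(3)
x = np.linspace(-1, 1, 200)
y = np.sin(2 * x) + 0.1 * rng.normal(size=x.size)

def fit_and_score(degree, lam=1e-3):
    """Fit a ridge-regularised polynomial model; return (degree, train RMSE)."""
    A = np.vander(x, degree + 1)
    coef = np.linalg.solve(A.T @ A + lam * np.eye(degree + 1), A.T @ y)
    rmse = float(np.sqrt(np.mean((A @ coef - y) ** 2)))
    return degree, rmse

# Each model configuration is independent, so the whole sweep maps cleanly
# onto a worker pool; on a cloud platform the pool would be a fleet of
# instances rather than local threads.
with ThreadPoolExecutor(max_workers=4) as pool:
    results = dict(pool.map(fit_and_score, range(1, 9)))

best = min(results, key=results.get)
print("RMSE by degree:", {d: round(r, 3) for d, r in results.items()})
print("best degree:", best)
```

    Because each job is stateless, the same sweep scales horizontally: doubling the workers roughly halves wall-clock time until the per-job cost dominates, which is the trade-off between speed and economy the abstract quantifies.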

  3. The effects of aerosols on precipitation and dimensions of subtropical clouds: a sensitivity study using a numerical cloud model

    Directory of Open Access Journals (Sweden)

    A. Teller

    2006-01-01

    Full Text Available Numerical experiments were carried out using the Tel-Aviv University 2-D cloud model to investigate the effects of increased concentrations of Cloud Condensation Nuclei (CCN), giant CCN (GCCN) and Ice Nuclei (IN) on the development of precipitation and cloud structure in mixed-phase sub-tropical convective clouds. In order to differentiate between the contribution of the aerosols and the meteorology, all simulations were conducted with the same meteorological conditions. The results show that under the same meteorological conditions, polluted clouds (with high CCN concentrations) produce less precipitation than clean clouds (with low CCN concentrations), the initiation of precipitation is delayed, and the lifetimes of the clouds are longer. GCCN enhance the total precipitation on the ground in polluted clouds but have no noticeable effect on cleaner clouds. The increased rainfall due to GCCN is mainly a result of the increased graupel mass in the cloud, but it only partially offsets the decrease in rainfall due to pollution (increased CCN). The addition of more effective IN, such as mineral dust particles, reduces the total amount of precipitation on the ground. This reduction is more pronounced in clean clouds than in polluted ones. Polluted clouds reach higher altitudes and are wider than clean clouds, and both produce wider clouds (anvils) when more IN are introduced. Since under the same vertical sounding the polluted clouds produce less rain, more water vapor is left aloft after the rain stops; in our simulations, about 3.5 times more water evaporates from the polluted cloud as compared to the clean cloud. The implication is that much more water vapor is transported from lower levels to the mid troposphere under polluted conditions, something that should be considered in climate models.

  4. Modeled Impact of Cirrus Cloud Increases Along Aircraft Flight Paths

    Science.gov (United States)

    Rind, David; Lonergan, P.; Shah, K.

    1999-01-01

    The potential impact of contrails and alterations in the lifetime of background cirrus due to subsonic airplane water and aerosol emissions has been investigated in a set of experiments using the GISS GCM connected to a q-flux ocean. Cirrus clouds at a height of 12-15 km, with an optical thickness of 0.33, were input to the model on "x" percent of clear-sky occasions along subsonic aircraft flight paths, where x is varied from 0.05% to 6%. Two types of experiments were performed: one with the percentage cirrus cloud increase independent of flight density, as long as a certain minimum density was exceeded; the other with the percentage related to the density of fuel expenditure. The overall climate impact was similar with the two approaches, due to the feedbacks of the climate system. Fifty years were run for eight such experiments, with the following conclusions based on the stable results from years 30-50 for each. The experiments show that adding cirrus to the upper troposphere results in a stabilization of the atmosphere, which leads to some decrease in cloud cover at levels below the insertion altitude. Considering then the total effect on upper-level cloud cover (above 5 km altitude), the equilibrium global mean temperature response shows that altering high-level clouds by 1% changes the global mean temperature by 0.43°C. The response is highly linear (linear correlation coefficient of 0.996) for high cloud cover changes between 0.1% and 5%. The effect is amplified in the Northern Hemisphere, more so with greater cloud cover change. The temperature effect maximizes around 10 km (at greater than 4.0°C warming with a 4.8% increase in upper-level clouds), again more so with greater warming. The high cloud cover change shows the flight path influence most clearly with the smallest warming magnitudes; with greater warming, the model feedbacks introduce a strong tropical response. Similarly, the surface temperature response is dominated by the feedbacks, and shows
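
    The reported linearity can be illustrated with a trivial calculation. This is a hypothetical sketch using only the regression coefficient quoted in the abstract, not the GISS GCM itself:

```python
# Illustrative sketch: the abstract reports a near-linear equilibrium response
# of global mean temperature to high-level cloud cover changes (~0.43 C per 1%
# change, linear correlation 0.996). The function below is a hypothetical
# linear fit based on that coefficient, valid only in the reported 0.1-5% range.

SENSITIVITY_C_PER_PERCENT = 0.43  # regression slope quoted in the abstract

def delta_temperature(delta_high_cloud_percent: float) -> float:
    """Equilibrium global-mean warming (C) for a given % change in high cloud."""
    return SENSITIVITY_C_PER_PERCENT * delta_high_cloud_percent

if __name__ == "__main__":
    for dc in (0.1, 1.0, 4.8):
        print(f"{dc:4.1f}% high-cloud increase -> {delta_temperature(dc):.3f} C warming")
```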

  5. Climate Model Evaluation using New Datasets from the Clouds and the Earth's Radiant Energy System (CERES)

    Science.gov (United States)

    Loeb, Norman G.; Wielicki, Bruce A.; Doelling, David R.

    2008-01-01

    There are some in the science community who believe that the response of the climate system to anthropogenic radiative forcing is unpredictable and we should therefore call off the quest. The key limitation in climate predictability is associated with cloud feedback. Narrowing the uncertainty in cloud feedback (and therefore climate sensitivity) requires optimal use of the best available observations to evaluate and improve climate model processes and constrain climate model simulations over longer time scales. The Clouds and the Earth's Radiant Energy System (CERES) is a satellite-based program that provides global cloud, aerosol and radiative flux observations for improving our understanding of cloud-aerosol-radiation feedbacks in the Earth's climate system. CERES is the successor to the Earth Radiation Budget Experiment (ERBE), which has widely been used to evaluate climate models both at short time scales (e.g., process studies) and at decadal time scales. A CERES instrument flew on the TRMM satellite and captured the dramatic 1998 El Niño, and four other CERES instruments are currently flying aboard the Terra and Aqua platforms. Plans are underway to fly the remaining copy of CERES on the upcoming NPP spacecraft (mid-2010 launch date). Every aspect of CERES represents a significant improvement over ERBE. While both CERES and ERBE measure broadband radiation, CERES calibration is a factor of 2 better than ERBE. In order to improve the characterization of clouds and aerosols within a CERES footprint, we use coincident higher-resolution imager observations (VIRS, MODIS or VIIRS) to provide a consistent cloud-aerosol-radiation dataset at climate accuracy. Improved radiative fluxes are obtained by using new CERES-derived Angular Distribution Models (ADMs) for converting measured radiances to fluxes. CERES radiative fluxes are a factor of 2 more accurate than ERBE overall, but the improvement by cloud type and at high latitudes can be as high as a factor of 5.

  6. The DINA model as a constrained general diagnostic model: Two variants of a model equivalency.

    Science.gov (United States)

    von Davier, Matthias

    2014-02-01

    The 'deterministic-input noisy-AND' (DINA) model is one of the more frequently applied diagnostic classification models for binary observed responses and binary latent variables. The purpose of this paper is to show that the model is equivalent to a special case of a more general compensatory family of diagnostic models. Two equivalencies are presented. Both project the original DINA skill space and design Q-matrix using mappings into a transformed skill space as well as a transformed Q-matrix space. Both variants of the equivalency produce a compensatory model that is mathematically equivalent to the (conjunctive) DINA model. This equivalency holds for all DINA models with any type of Q-matrix, not only for trivial (simple-structure) cases. The two versions of the equivalency presented in this paper are not implied by the recently suggested log-linear cognitive diagnosis model or the generalized DINA approach. The equivalencies presented here exist independent of these recently derived models since they solely require a linear - compensatory - general diagnostic model without any skill interaction terms. Whenever it can be shown that one model can be viewed as a special case of another more general one, conclusions derived from any particular model-based estimates are drawn into question. It is widely known that multidimensional models can often be specified in multiple ways while the model-based probabilities of observed variables stay the same. This paper goes beyond this type of equivalency by showing that a conjunctive diagnostic classification model can be expressed as a constrained special case of a general compensatory diagnostic modelling framework. © 2013 The British Psychological Society.
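
    For readers unfamiliar with the DINA model discussed above, its conjunctive ('AND') item response function can be sketched as follows. This is the standard textbook formulation with illustrative parameter values; the paper's compensatory reparameterisation is not reproduced here:

```python
# Hedged sketch of the standard DINA item response function: an examinee
# answers item j correctly with probability (1 - slip) if they master every
# skill required by the Q-matrix row q, and with guessing probability otherwise.

def dina_prob(alpha, q, slip, guess):
    """P(X_j = 1 | alpha) under the DINA model.

    alpha: binary skill-mastery vector of the examinee
    q:     binary Q-matrix row for item j (skills the item requires)
    slip, guess: item slip and guessing parameters
    """
    # eta = 1 iff every required skill is mastered (the conjunctive 'AND' gate)
    eta = all(a >= qk for a, qk in zip(alpha, q))
    return (1.0 - slip) if eta else guess

# A master of both required skills versus a partial master:
p_master = dina_prob([1, 1], [1, 1], slip=0.1, guess=0.2)      # 1 - slip
p_nonmaster = dina_prob([1, 0], [1, 1], slip=0.1, guess=0.2)   # guess
```

Note the conjunctive behavior that the paper's equivalency result re-expresses in a compensatory framework: missing even one required skill drops the success probability all the way to the guessing parameter.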

  7. A Multilateral Negotiation Model for Cloud Service Market

    Science.gov (United States)

    Yoo, Dongjin; Sim, Kwang Mong

    Trading cloud services between consumers and providers is a complicated issue in cloud computing. Since a consumer can negotiate with multiple providers to acquire the same service, and each provider can receive many requests from multiple consumers, a multilateral negotiation model for the cloud market is necessary to facilitate the trading of cloud services among multiple consumers and providers. The contribution of this work is the proposal of a business model supporting multilateral price negotiation for trading cloud services. The design of the proposed system for the cloud service market includes a many-to-many negotiation protocol and a price-determining factor derived from service-level features. Two negotiation strategies are implemented for the cloud service market: 1) the MDA (Market Driven Agent) strategy; and 2) an adaptive concession-making strategy that responds to changes in bargaining position. Empirical results show that although the MDA achieved better performance in some cases than the adaptive concession-making strategy, the adaptive strategy, unlike the MDA, does not assume that an agent has information about the number of competitors (e.g., a consumer agent adopting the adaptive concession-making strategy need not know the number of consumer agents competing for the same service).
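
    The general shape of a time-dependent concession strategy of the kind such negotiating agents use can be sketched as below. The function name, the polynomial form and the parameter values are illustrative assumptions, not the paper's MDA or adaptive formulas:

```python
# Hypothetical sketch of a time-dependent concession function: the agent
# concedes from its initial price toward its reserve price as the deadline
# approaches, with the concession curve shaped by beta (an "eagerness"
# exponent). This is a generic textbook form, not the paper's exact strategy.

def proposed_price(t: float, deadline: float, ip: float, rp: float, beta: float) -> float:
    """Offer at time t: move from initial price ip toward reserve price rp
    by the fraction (t/deadline)**beta."""
    frac = min(t / deadline, 1.0) ** beta
    return ip + (rp - ip) * frac
```

With beta < 1 the agent is conciliatory (concedes early); with beta > 1 it is tough (concedes only near the deadline). An adaptive strategy would adjust beta as its bargaining position changes, without needing to know the number of competitors.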

  8. Prognostic cloud water in the Los Alamos general circulation model

    International Nuclear Information System (INIS)

    Kristjansson, J.E.; Kao, C.Y.J.

    1994-01-01

    Most of today's general circulation models (GCMs) have a greatly simplified treatment of condensation and clouds. Recent observational studies of the earth's radiation budget have suggested cloud-related feedback mechanisms to be of tremendous importance for the issue of global change. Thus, an urgent need for improvements in the treatment of clouds in GCMs has arisen, especially as the clouds relate to radiation. In this paper, we investigate the effects of introducing prognostic cloud water into the Los Alamos GCM. The cloud water field, produced by both stratiform and convective condensation, is subject to 3-dimensional advection and vertical diffusion. The cloud water enters the radiation calculations through the longwave emissivity calculations. Results from several sensitivity simulations show that realistic water and precipitation fields can be obtained with the applied method. Comparisons with observations show that the most realistic results are obtained when more sophisticated schemes for moist convection are introduced at the same time. The model's cold bias is reduced and the zonal winds become stronger because of more realistic tropical convection.

  9. Intercomparison of model simulations of mixed-phase clouds observed during the ARM Mixed-Phase Arctic Cloud Experiment. Part II: Multi-layered cloud

    Energy Technology Data Exchange (ETDEWEB)

    Morrison, H; McCoy, R B; Klein, S A; Xie, S; Luo, Y; Avramov, A; Chen, M; Cole, J; Falk, M; Foster, M; Genio, A D; Harrington, J; Hoose, C; Khairoutdinov, M; Larson, V; Liu, X; McFarquhar, G; Poellot, M; Shipway, B; Shupe, M; Sud, Y; Turner, D; Veron, D; Walker, G; Wang, Z; Wolf, A; Xu, K; Yang, F; Zhang, G

    2008-02-27

    Results are presented from an intercomparison of single-column and cloud-resolving model simulations of a deep, multi-layered, mixed-phase cloud system observed during the ARM Mixed-Phase Arctic Cloud Experiment. This cloud system was associated with strong surface turbulent sensible and latent heat fluxes as cold air flowed over the open Arctic Ocean, combined with a low pressure system that supplied moisture at mid-level. The simulations, performed by 13 single-column and 4 cloud-resolving models, generally overestimate the liquid water path and strongly underestimate the ice water path, although there is a large spread among the models. This finding is in contrast with results for the single-layer, low-level mixed-phase stratocumulus case in Part I of this study, as well as previous studies of shallow mixed-phase Arctic clouds, that showed an underprediction of liquid water path. The overestimate of liquid water path and underestimate of ice water path occur primarily when deeper mixed-phase clouds extending into the mid-troposphere were observed. These results suggest important differences in the ability of models to simulate Arctic mixed-phase clouds that are deep and multi-layered versus shallow and single-layered. In general, models with a more sophisticated, two-moment treatment of the cloud microphysics produce a somewhat smaller liquid water path that is closer to observations. The cloud-resolving models tend to produce a larger cloud fraction than the single-column models. The liquid water path and especially the cloud fraction have a large impact on the cloud radiative forcing at the surface, which is dominated by the longwave flux for this case.

  10. Integrating Cloud-Computing-Specific Model into Aircraft Design

    Science.gov (United States)

    Zhimin, Tian; Qi, Lin; Guangwen, Yang

    Cloud computing is becoming increasingly relevant, as it will enable the companies spreading this technology to open the door to Web 3.0. The new categories of services it introduces will slowly replace many types of computational resources currently in use. In this perspective, grid computing, the basic element for the large-scale supply of cloud services, will play a fundamental role in defining how those services are provided. This paper integrates a cloud-computing-specific model into aircraft design. The work has achieved good results in sharing licenses for large-scale and expensive software, such as CFD (Computational Fluid Dynamics) packages, UG, CATIA, and so on.

  11. Development of a cloud microphysical model and parameterizations to describe the effect of CCN on warm cloud

    Directory of Open Access Journals (Sweden)

    N. Kuba

    2006-01-01

    Full Text Available First, a hybrid cloud microphysical model was developed that incorporates both Lagrangian and Eulerian frameworks to study quantitatively the effect of cloud condensation nuclei (CCN) on the precipitation of warm clouds. A parcel model and a grid model comprise the cloud model. The condensation growth of CCN in each parcel is estimated in a Lagrangian framework. Changes in cloud droplet size distribution arising from condensation and coalescence are calculated on grid points using a two-moment bin method in a semi-Lagrangian framework. Sedimentation and advection are estimated in the Eulerian framework between grid points. Results from the cloud model show that an increase in the number of CCN affects both the amount and the area of precipitation. Additionally, results from the hybrid microphysical model and Kessler's parameterization were compared. Second, new parameterizations were developed that estimate the number and size distribution of cloud droplets given the updraft velocity and the number of CCN. The parameterizations were derived from the results of numerous numerical experiments that used the cloud microphysical parcel model. The only CCN input these parameterizations require is a few values of the CCN spectrum (as given, for example, by a CCN counter). This is more convenient than conventional parameterizations, which need quantities characterizing the full CCN spectrum, such as C and k in the equation N = CS^k, or the breadth, total number and median radius. The new parameterizations' predictions of initial cloud droplet size distribution for the bin method were verified by using the aforesaid hybrid microphysical model. The newly developed parameterizations will save computing time, and can effectively approximate components of cloud microphysics in a non-hydrostatic cloud model. 
The parameterizations are useful not only in the bin method in the regional cloud-resolving model but also both for a two-moment bulk microphysical model and
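
    The conventional CCN activation spectrum that the new parameterizations aim to replace is the Twomey power law, N = C S^k, which can be sketched as follows (C and k values are illustrative; typical maritime values are often quoted in this range):

```python
# Minimal sketch of the conventional Twomey CCN activation spectrum,
# N = C * S**k: the number of CCN activated at supersaturation S (%).
# C (cm^-3) and k are illustrative fit constants, not values from the paper.

def activated_ccn(supersaturation_pct: float, c: float = 100.0, k: float = 0.7) -> float:
    """Number concentration of activated CCN (cm^-3) at supersaturation S (%)."""
    return c * supersaturation_pct ** k

# At S = 1% the relation returns C by construction; higher supersaturation
# activates more of the CCN population.
n_at_1pct = activated_ccn(1.0)
```

The abstract's point is that fitting C and k (or distribution breadth, total number and median radius) requires processing the whole measured spectrum, whereas the new parameterizations take a few CCN-counter readings directly.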

  12. Numerical simulations of altocumulus with a cloud resolving model

    Energy Technology Data Exchange (ETDEWEB)

    Liu, S.; Krueger, S.K. [Univ. of Utah, Salt Lake City, UT (United States)

    1996-04-01

    Altocumulus and altostratus clouds together cover approximately 22% of the earth's surface. They play an important role in the earth's energy budget through their effect on solar and infrared radiation. However, there has been little investigation of altocumulus clouds by either modelers or observational programs. Starr and Cox (SC) (1985a,b) simulated an altostratus case as part of the same study in which they modeled a thin layer of cirrus. Although this calculation was originally described as representing altostratus, it probably better represents altocumulus stratiformis. In this paper, we simulate altocumulus cloud with a cloud resolving model (CRM). We first briefly describe the CRM. We calculate the same middle-level cloud case as SC to compare our results with theirs. We will look at the role of cloud-scale processes in response to large-scale forcing. We will also discuss radiative effects by simulating diurnal and nocturnal cases. Finally, we discuss the utility of a 1D model by comparing 1D simulations and 2D simulations.

  13. Intelligent Cloud Learning Model for Online Overseas Chinese Education

    Directory of Open Access Journals (Sweden)

    Yidong Chen

    2015-02-01

    Full Text Available With the development of the Chinese economy, overseas Chinese education has received more and more attention. However, overseas Chinese education resources are relatively scarce for historical reasons, which has hindered further development. How to better share Chinese education resources and provide intelligent, personalized information services for overseas students is a key problem to be solved. In recent years, the rise of cloud computing has provided an opportunity to realize an intelligent learning mode. Cloud computing offers advantages by allowing users to use infrastructure, platforms and software. In this paper we propose an intelligent cloud learning model based on cloud computing. The learning model can utilize network resources sufficiently to implement resource sharing according to the personal needs of students, and provides good practicability for online overseas Chinese education.

  14. A PROFICIENT MODEL FOR HIGH END SECURITY IN CLOUD COMPUTING

    Directory of Open Access Journals (Sweden)

    R. Bala Chandar

    2014-01-01

    Full Text Available Cloud computing is an inspiring technology due to its abilities, such as ensuring scalable services and reducing the burden of local hardware and software management associated with computing, while increasing flexibility and scalability. A key trait of cloud services is the remote processing of data. Even though this technology offers many services, there are a few concerns, such as misbehavior with data stored on the server side, the data owner's loss of control over their own data, and the cloud's failure to restrict access to outsourced data as desired by the data owner. To handle these issues, we propose a new model that ensures data correctness for assurance of stored data, distributed accountability for authentication, and efficient access control of outsourced data for authorization. This model strengthens the correctness of data, helps to achieve cloud data integrity, supports data owners in controlling their own data through tracking, and improves the access control of outsourced data.

  15. Aerosol effects on cloud water amounts were successfully simulated by a global cloud-system resolving model.

    Science.gov (United States)

    Sato, Yousuke; Goto, Daisuke; Michibata, Takuro; Suzuki, Kentaroh; Takemura, Toshihiko; Tomita, Hirofumi; Nakajima, Teruyuki

    2018-03-07

    Aerosols affect climate by modifying cloud properties through their role as cloud condensation nuclei or ice nuclei, called aerosol-cloud interactions. In most global climate models (GCMs), the aerosol-cloud interactions are represented by empirical parameterisations, in which the mass of cloud liquid water (LWP) is assumed to increase monotonically with increasing aerosol loading. Recent satellite observations, however, have yielded contradictory results: LWP can decrease with increasing aerosol loading. This difference implies that GCMs overestimate the aerosol effect, but the reasons for the difference are not obvious. Here, we reproduce satellite-observed LWP responses using a global simulation with explicit representations of cloud microphysics, instead of the parameterisations. Our analyses reveal that the decrease in LWP originates from the response of evaporation and condensation processes to aerosol perturbations, which are not represented in GCMs. The explicit representation of cloud microphysics in global scale modelling reduces the uncertainty of climate prediction.

  16. Can nudging be used to quantify model sensitivities in precipitation and cloud forcing?: NUDGING AND MODEL SENSITIVITIES

    Energy Technology Data Exchange (ETDEWEB)

    Lin, Guangxing [Pacific Northwest National Laboratory, Atmospheric Science and Global Change Division, Richland Washington USA; Wan, Hui [Pacific Northwest National Laboratory, Atmospheric Science and Global Change Division, Richland Washington USA; Zhang, Kai [Pacific Northwest National Laboratory, Atmospheric Science and Global Change Division, Richland Washington USA; Qian, Yun [Pacific Northwest National Laboratory, Atmospheric Science and Global Change Division, Richland Washington USA; Ghan, Steven J. [Pacific Northwest National Laboratory, Atmospheric Science and Global Change Division, Richland Washington USA

    2016-07-10

    Efficient simulation strategies are crucial for the development and evaluation of high-resolution climate models. This paper evaluates simulations with constrained meteorology for the quantification of parametric sensitivities in the Community Atmosphere Model version 5 (CAM5). Two parameters are perturbed as illustrative examples: the convection relaxation time scale (TAU), and the threshold relative humidity for the formation of low-level stratiform clouds (rhminl). Results suggest that the fidelity and computational efficiency of the constrained simulations depend strongly on three factors: the detailed implementation of nudging, the mechanism through which the perturbed parameter affects precipitation and cloud, and the magnitude of the parameter perturbation. In the case of a strong perturbation in convection, temperature and/or wind nudging with a 6-hour relaxation time scale leads to non-negligible side effects due to the distorted interactions between resolved dynamics and parameterized convection, while a 1-year free-running simulation can satisfactorily capture the annual mean precipitation sensitivity in terms of both global average and geographical distribution. In the case of a relatively weak perturbation in the large-scale condensation scheme, results from 1-year free-running simulations are strongly affected by noise associated with internal variability, while nudging winds effectively reduces the noise and reasonably reproduces the response of precipitation and cloud forcing to the parameter perturbation. These results indicate that caution is needed when using nudged simulations to assess precipitation and cloud forcing sensitivities to parameter changes in general circulation models. We also demonstrate that ensembles of short simulations are useful for understanding the evolution of model sensitivities.
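
    The nudging technique evaluated here is Newtonian relaxation of the model state toward a reference analysis. A schematic one-variable sketch, with the 6-hour time scale from the abstract but an otherwise hypothetical setup:

```python
# Schematic of nudging (Newtonian relaxation): the model state x is relaxed
# toward a reference value x_ref with time scale tau, i.e.
#   dx/dt = (x_ref - x) / tau.
# The explicit Euler step and the values below are illustrative only.

def nudge_step(x: float, x_ref: float, dt: float, tau: float) -> float:
    """One explicit Euler step of dx/dt = (x_ref - x) / tau."""
    return x + dt * (x_ref - x) / tau

# Relax a 2 K model-minus-analysis temperature anomaly with tau = 6 h:
x = 2.0           # initial anomaly (K)
tau = 6 * 3600.0  # 6-hour relaxation time scale (s)
dt = 600.0        # 10-minute model time step (s)
for _ in range(36):  # integrate one relaxation time (6 h)
    x = nudge_step(x, 0.0, dt, tau)
# after one relaxation time the anomaly has decayed to roughly 2/e
```

The choice of tau is the trade-off the paper examines: a short tau tracks the analysis closely (suppressing internal-variability noise) but can distort the interaction between resolved dynamics and the perturbed parameterization.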

  17. CP properties of symmetry-constrained two-Higgs-doublet models

    CERN Document Server

    Ferreira, P M; Nachtmann, O; Silva, Joao P

    2010-01-01

    The two-Higgs-doublet model can be constrained by imposing Higgs-family symmetries and/or generalized CP symmetries. It is known that there are only six independent classes of such symmetry-constrained models. We study the CP properties of all cases in the bilinear formalism. An exact symmetry implies CP conservation. We show that soft breaking of the symmetry can lead to spontaneous CP violation (CPV) in three of the classes.

  18. Modelling ice microphysics of mixed-phase clouds

    Science.gov (United States)

    Ahola, J.; Raatikainen, T.; Tonttila, J.; Romakkaniemi, S.; Kokkola, H.; Korhonen, H.

    2017-12-01

    The low-level Arctic mixed-phase clouds play a significant role in the Arctic climate due to their ability to absorb and reflect radiation. Since climate change is amplified in polar areas, it is vital to understand mixed-phase cloud processes. From a modelling point of view, this requires a high spatiotemporal resolution to capture turbulence and the relevant microphysical processes, which has proven difficult. To solve this problem, a new ice microphysics description has been developed. The recently published large-eddy simulation cloud model UCLALES-SALSA offers a good base for a feasible solution (Tonttila et al., Geosci. Mod. Dev., 10:169-188, 2017). The model includes aerosol-cloud interactions described with the sectional SALSA module (Kokkola et al., Atmos. Chem. Phys., 8, 2469-2483, 2008), which represents a good compromise between detail and computational expense. Recently, the SALSA module has been upgraded to also include ice microphysics. The dynamical part of the model is based on the well-known UCLA-LES model (Stevens et al., J. Atmos. Sci., 56, 3963-3984, 1999), which can be used to study cloud dynamics on a fine grid. The microphysical description of ice is sectional, and the included processes consist of the formation, growth and removal of ice and snow particles. Ice cloud particles are formed by parameterized homogeneous or heterogeneous nucleation. The growth mechanisms of ice particles and snow include coagulation and condensation of water vapor. Autoconversion from cloud ice particles to snow is parameterized. The removal of ice particles and snow happens by sedimentation and melting. The implementation of ice microphysics is tested by initializing the cloud simulation with atmospheric observations from the Indirect and Semi-Direct Aerosol Campaign (ISDAC). The results are compared to the model results shown in the paper of Ovchinnikov et al. (J. Adv. Model. Earth Syst., 6, 223-248, 2014) and they show a good

  19. Cloud-Resolving Model Simulations of Aerosol-Cloud Interactions Triggered by Strong Aerosol Emissions in the Arctic

    Science.gov (United States)

    Wang, H.; Kravitz, B.; Rasch, P. J.; Morrison, H.; Solomon, A.

    2014-12-01

    Previous process-oriented modeling studies have highlighted the dependence of the effectiveness of cloud brightening by aerosols on cloud regimes in the warm marine boundary layer. Cloud microphysical processes in clouds that contain ice, and hence the mechanisms that drive aerosol-cloud interactions, are more complicated than in warm clouds. Interactions between ice particles and liquid drops add additional levels of complexity to aerosol effects. A cloud-resolving model is used to study aerosol-cloud interactions in the Arctic triggered by strong aerosol emissions, through either geoengineering injection or concentrated sources such as shipping and fires. An updated cloud microphysical scheme with prognostic aerosol and cloud particle numbers is employed. Model simulations are performed in pure supercooled liquid and mixed-phase clouds, separately, with or without an injection of aerosols into either a clean or a more polluted Arctic boundary layer. Vertical mixing and cloud scavenging of particles injected from the surface are still quite efficient in the less turbulent cold environment. Overall, the injection of aerosols into the Arctic boundary layer can delay the collapse of the boundary layer and increase low-cloud albedo. The pure liquid clouds are more susceptible to the increase in aerosol number concentration than the mixed-phase clouds. Rain production processes are more effectively suppressed by aerosol injection, whereas ice precipitation (snow) is affected less; thus the effectiveness of brightening mixed-phase clouds is lower than for liquid-only clouds. Aerosol injection into a clean boundary layer results in a greater cloud albedo increase than injection into a polluted one, consistent with current knowledge about aerosol-cloud interactions. Unlike previous studies investigating warm clouds, the impact of dynamical feedback due to precipitation changes is small. According to these results, which are dependent upon the representation of ice nucleation

  20. Modeling and analysis of rotating plates by using self sensing active constrained layer damping

    Energy Technology Data Exchange (ETDEWEB)

    Xie, Zheng Chao; Wong, Pak Kin; Chong, Ian Ian [Univ. of Macau, Macau (China)

    2012-10-15

    This paper proposes a new finite element model for an active constrained layer damped (CLD) rotating plate with a self-sensing technique. Constrained layer damping can effectively reduce vibration in rotating structures. Unfortunately, most existing research models rotating structures as beams, which is often not the case. It is meaningful to model the rotating part as a plate because of improvements in both accuracy and versatility. At the same time, existing research shows that active constrained layer damping provides a more effective vibration control approach than passive constrained layer damping. Thus, in this work, a single-layer finite element is adopted to model a three-layer active constrained layer damped rotating plate. Unlike previous models, this finite element model treats all three layers as having both shear and extension strains, so all types of damping are taken into account. Also, the constraining layer is made of piezoelectric material to work as both the self-sensing sensor and the actuator. Then, a proportional control strategy is implemented to effectively control the displacement of the tip end of the rotating plate. Additionally, a parametric study is conducted to explore the impact of some design parameters on the structure's modal characteristics.

  1. Modeling and analysis of rotating plates by using self sensing active constrained layer damping

    International Nuclear Information System (INIS)

    Xie, Zheng Chao; Wong, Pak Kin; Chong, Ian Ian

    2012-01-01

    This paper proposes a new finite element model for an active constrained layer damped (CLD) rotating plate with a self-sensing technique. Constrained layer damping can effectively reduce vibration in rotating structures. Unfortunately, most existing research models rotating structures as beams, which is often not the case. It is meaningful to model the rotating part as a plate because of improvements in both accuracy and versatility. At the same time, existing research shows that active constrained layer damping provides a more effective vibration control approach than passive constrained layer damping. Thus, in this work, a single-layer finite element is adopted to model a three-layer active constrained layer damped rotating plate. Unlike previous models, this finite element model treats all three layers as having both shear and extension strains, so all types of damping are taken into account. Also, the constraining layer is made of piezoelectric material to work as both the self-sensing sensor and the actuator. Then, a proportional control strategy is implemented to effectively control the displacement of the tip end of the rotating plate. Additionally, a parametric study is conducted to explore the impact of some design parameters on the structure's modal characteristics.
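
    The proportional control idea in the two records above (actuator effort proportional to the sensed tip displacement) can be illustrated on a toy system. The one-degree-of-freedom oscillator, the gains and the resonant forcing below are hypothetical stand-ins for the finite element model, chosen only to show that the feedback reduces the tip excursion:

```python
# Illustrative sketch: proportional feedback -kp*x on a lightly damped,
# resonantly forced oscillator representing the plate tip. All parameter
# values (wn, zeta, f0, kp) are hypothetical; this is not the paper's FE model.
import math

def tip_peak(kp: float, steps: int = 20000, dt: float = 1e-3,
             wn: float = 10.0, zeta: float = 0.02, f0: float = 1.0) -> float:
    """Peak |x| of x'' + 2*zeta*wn*x' + wn^2*x = f0*sin(wn*t) - kp*x,
    integrated from rest with semi-implicit Euler."""
    x = v = peak = 0.0
    for i in range(steps):
        t = i * dt
        a = f0 * math.sin(wn * t) - 2.0 * zeta * wn * v - wn ** 2 * x - kp * x
        v += dt * a
        x += dt * v  # semi-implicit Euler keeps the oscillator stable
        peak = max(peak, abs(x))
    return peak

# Feedback stiffens the structure, detuning it from the resonant forcing and
# shrinking the tip displacement.
uncontrolled = tip_peak(kp=0.0)
controlled = tip_peak(kp=50.0)
```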

  2. Two Models of Magnetic Support for Photoevaporated Molecular Clouds

    International Nuclear Information System (INIS)

    Ryutov, D; Kane, J; Mizuta, A; Pound, M; Remington, B

    2004-01-01

    The thermal pressure inside molecular clouds is insufficient for maintaining the pressure balance at an ablation front at the cloud surface illuminated by nearby UV stars. Most probably, the required stiffness is provided by the magnetic pressure. After surveying existing models of this type, we concentrate on two of them: the model of a quasi-homogeneous magnetic field and the recently proposed model of "magnetostatic turbulence". We discuss observational consequences of the two models, in particular, the structure and the strength of the magnetic field inside the cloud and in the ionized outflow. We comment on the possible role of reconnection events and their observational signatures. We mention laboratory experiments where the most significant features of the models can be tested.

  3. Complementarity of flux- and biometric-based data to constrain parameters in a terrestrial carbon model

    Directory of Open Access Journals (Sweden)

    Zhenggang Du

    2015-03-01

    Full Text Available To improve models for accurate projections, data assimilation, an emerging statistical approach to combine models with data, has recently been developed to probe initial conditions, parameters, data content, response functions and model uncertainties. Quantifying how much information is contained in different data streams is essential to predict future states of ecosystems and the climate. This study uses a data assimilation approach to examine the information content of flux- and biometric-based data used to constrain parameters in a terrestrial carbon (C) model, which includes canopy photosynthesis and vegetation-soil C transfer submodels. Three assimilation experiments were constructed with either net ecosystem exchange (NEE) data only, biometric data only [including foliage and woody biomass, litterfall, soil organic C (SOC) and soil respiration], or both NEE and biometric data to constrain model parameters by probabilistic inversion. The results showed that NEE data mainly constrained parameters associated with gross primary production (GPP) and ecosystem respiration (RE) but were almost invalid for C transfer coefficients, while biometric data were more effective in constraining C transfer coefficients than other parameters. NEE and biometric data constrained about 26% (6) and 30% (7) of a total of 23 parameters, respectively, but their combined application constrained about 61% (14) of all parameters. The complementarity of NEE and biometric data was obvious in constraining most of the parameters. The poor constraint by only NEE or biometric data was probably attributable to either the lack of long-term C dynamic data or errors from measurements. Overall, our results suggest that flux- and biometric-based data, containing different processes in ecosystem C dynamics, have different capacities to constrain parameters related to photosynthesis and C transfer coefficients, respectively. Multiple data sources could also
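
    The complementarity argument can be illustrated with a toy probabilistic inversion: two synthetic "data streams" each constrain a different parameter, and assimilating both constrains both. Everything here (the two-parameter Gaussian model, the sampler settings) is an illustrative assumption, not the paper's terrestrial carbon model:

```python
# Toy sketch of probabilistic inversion with complementary data streams,
# using a bare-bones Metropolis sampler. Stream 0 plays the role of NEE-like
# data constraining parameter 0; stream 1 plays the role of biometric-like
# data constraining parameter 1. Purely illustrative.
import math
import random

def log_post(theta, datasets):
    """Gaussian log-likelihood; stream i observes parameter i with noise sigma."""
    return sum(-((theta[i] - obs) ** 2) / (2.0 * sigma ** 2)
               for i, (obs, sigma) in datasets.items())

def metropolis(datasets, n=6000, step=0.25, seed=1):
    rng = random.Random(seed)
    theta = [0.0, 0.0]
    lp = log_post(theta, datasets)
    chain = []
    for _ in range(n):
        prop = [t + rng.gauss(0.0, step) for t in theta]
        lp_prop = log_post(prop, datasets)
        if math.log(rng.random() + 1e-300) < lp_prop - lp:  # Metropolis accept
            theta, lp = prop, lp_prop
        chain.append(list(theta))
    return chain

# Assimilating both streams constrains both parameters jointly:
chain = metropolis({0: (1.0, 0.1), 1: (2.0, 0.2)})
tail = chain[len(chain) // 2:]          # discard burn-in
mean0 = sum(t[0] for t in tail) / len(tail)
mean1 = sum(t[1] for t in tail) / len(tail)
```

Dropping either entry from the `datasets` dictionary leaves the corresponding parameter at its (flat) prior, which is the toy analogue of NEE data being "almost invalid" for the C transfer coefficients.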

  4. Modeling, Design, and Implementation of a Cloud Workflow Engine Based on Aneka

    OpenAIRE

    Zhou, Jiantao; Sun, Chaoxin; Fu, Weina; Liu, Jing; Jia, Lei; Tan, Hongyan

    2014-01-01

    This paper presents a Petri net-based model for cloud workflow which plays a key role in industry. Three kinds of parallelisms in cloud workflow are characterized and modeled. Based on the analysis of the modeling, a cloud workflow engine is designed and implemented in Aneka cloud environment. The experimental results validate the effectiveness of our approach of modeling, design, and implementation of cloud workflow.

  5. A Developed Artificial Bee Colony Algorithm Based on Cloud Model

    Directory of Open Access Journals (Sweden)

    Ye Jin

    2018-04-01

Full Text Available The Artificial Bee Colony (ABC) algorithm is a bionic intelligent optimization method. The cloud model is a kind of uncertainty conversion model between a qualitative concept T̃, presented in natural language, and its quantitative expression, which integrates probability theory and fuzzy mathematics. A developed ABC algorithm based on the cloud model is proposed to enhance the accuracy of the basic ABC algorithm and avoid getting trapped in local optima by introducing a new selection mechanism, replacing the onlooker bees' search formula and changing the scout bees' updating formula. Experiments on CEC15 show that the new algorithm has a faster convergence speed and higher accuracy than the basic ABC and some cloud model based ABC variants.
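The cloud-model modification can be illustrated with a minimal sketch: a bare-bones ABC loop on the sphere function in which exhausted food sources are reseeded by normal-cloud drops around the current best solution instead of by uniform restarts. The Ex/En/He values and the collapsed employed/onlooker phase are illustrative assumptions, not the CEC15 setup of the paper.

```python
import random

random.seed(1)

def sphere(x):
    """Benchmark objective: global minimum 0 at the origin."""
    return sum(v * v for v in x)

def cloud_drop(ex, en, he):
    """Normal cloud generator: one drop around expectation ex,
    with entropy en and hyper-entropy he."""
    enn = abs(random.gauss(en, he))
    return random.gauss(ex, enn)

DIM, NFOOD, LIMIT = 2, 10, 20
foods = [[random.uniform(-5.0, 5.0) for _ in range(DIM)] for _ in range(NFOOD)]
fits = [sphere(f) for f in foods]
trials = [0] * NFOOD
best = min(foods, key=sphere)[:]

for _ in range(200):
    # employed/onlooker phases collapsed into one greedy neighbour search
    for i in range(NFOOD):
        j = random.randrange(DIM)
        partner = random.randrange(NFOOD)
        cand = foods[i][:]
        cand[j] += random.uniform(-1.0, 1.0) * (foods[i][j] - foods[partner][j])
        if sphere(cand) < fits[i]:
            foods[i], fits[i], trials[i] = cand, sphere(cand), 0
        else:
            trials[i] += 1
    # scout phase: reseed exhausted sources with cloud drops around the best
    for i in range(NFOOD):
        if trials[i] > LIMIT:
            foods[i] = [cloud_drop(b, 0.5, 0.05) for b in best]
            fits[i], trials[i] = sphere(foods[i]), 0
    cur = min(foods, key=sphere)
    if sphere(cur) < sphere(best):
        best = cur[:]
```

The cloud-based scouts keep some diversity (via En and He) while biasing restarts toward promising regions, which is the intended trade-off against premature convergence.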

  6. Clouds, Wind and the Biogeography of Central American Cloud Forests: Remote Sensing, Atmospheric Modeling, and Walking in the Jungle

    Science.gov (United States)

    Lawton, R.; Nair, U. S.

    2011-12-01

Cloud forests stand at the core of the complex of montane ecosystems that provide the backbone of the multinational Mesoamerican Biological Corridor, which seeks to protect a biodiversity conservation "hotspot" of global significance in an area of rapidly changing land use. Although cloud forests are generally defined by frequent and prolonged immersion in cloud, workers differ in their feelings about "frequent" and "prolonged", and quantitative assessments are rare. Here we focus on the dry season, in which cloud and mist from orographic cloud play a critical role in forest water relations, and discuss remote sensing of orographic clouds, and regional and atmospheric modeling at several scales, to quantitatively examine the distribution of the atmospheric conditions that characterize cloud forests. Remote sensing using data from GOES reveals diurnal and longer-scale patterns in the distribution of dry season orographic clouds in Central America at both regional and local scales. Data from MODIS, used to calculate the base height of orographic cloud banks, reveal not only the geographic distribution of cloud forest sites, but also striking regional variation in the frequency of montane immersion in orographic cloud. At a more local scale, wind is known to have striking effects on forest structure and species distribution in tropical montane ecosystems, both as a general mechanical stress and as the major agent of ecological disturbance. High-resolution regional atmospheric modeling using CSU RAMS in the Monteverde cloud forests of Costa Rica provides quantitative information on the spatial distribution of canopy-level winds and insight into the spatial structure and local dynamics of cloud forest communities. This information will be useful not only in local conservation planning and the design of the Mesoamerican Biological Corridor, but also in assessments of the sensitivity of cloud forests to global and regional climate changes.

  7. Fingerprints of a riming event on cloud radar Doppler spectra: observations and modeling

    Directory of Open Access Journals (Sweden)

    H. Kalesse

    2016-03-01

Full Text Available Radar Doppler spectra measurements are exploited to study a riming event in which precipitating ice from a seeder cloud sediments through a supercooled liquid water (SLW) layer. The focus is on the "golden sample" case study for this type of analysis, based on observations collected during the deployment of the Atmospheric Radiation Measurement Program's (ARM) mobile facility AMF2 at Hyytiälä, Finland, during the Biogenic Aerosols – Effects on Clouds and Climate (BAECC) field campaign. The presented analysis of the height evolution of the radar Doppler spectra is a state-of-the-art retrieval with profiling cloud radars in SLW layers that goes beyond the traditional use of spectral moments. Dynamical effects are considered by following the particle population evolution along slanted tracks that are caused by horizontal advection of the cloud under wind shear conditions. In the SLW layer, the identified liquid peak is used as an air motion tracer to correct the Doppler spectra for vertical air motion, and the ice peak is used to study the radar profiles of rimed particles. A 1-D steady-state bin microphysical model is constrained using the SLW and air motion profiles and cloud-top radar observations. The observed radar moment profiles of the rimed snow can be simulated reasonably well by the model, but not without making several assumptions about the ice particle concentration and the relative role of deposition and aggregation. This suggests that in situ observations of key ice properties are needed to complement the profiling radar observations before process-oriented studies can effectively evaluate ice microphysical parameterizations.
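The air-motion correction via the liquid peak can be sketched on a synthetic two-peak Doppler spectrum: liquid droplets fall at roughly 0 m/s, so the liquid peak's apparent velocity is the vertical air motion, and shifting the velocity axis by that amount leaves the ice peak at its true fall speed. The Gaussian peak shapes, amplitudes, and bin spacing below are invented for illustration.

```python
import math

def gauss(v, mu, sig):
    return math.exp(-0.5 * ((v - mu) / sig) ** 2)

# velocity bins (m/s, positive downward) and a toy two-peak spectrum
velocities = [-2.0 + 0.1 * i for i in range(60)]
air_motion = 0.4   # hypothetical downdraft contaminating both peaks
spectrum = [gauss(v, air_motion, 0.1)                 # liquid peak (~0 fall speed)
            + 0.6 * gauss(v, air_motion + 1.2, 0.3)   # rimed-ice peak
            for v in velocities]

# locate strict local maxima; the slowest-falling one is the liquid peak
peaks = [i for i in range(1, len(spectrum) - 1)
         if spectrum[i - 1] < spectrum[i] > spectrum[i + 1]]
v_air = velocities[min(peaks, key=lambda i: velocities[i])]

# shift the axis so the spectrum reflects hydrometeor fall speeds only
corrected = [v - v_air for v in velocities]
ice_fall_speed = corrected[max(peaks, key=lambda i: velocities[i])]
```

Repeating this per range gate yields the air-motion-corrected profiles of the rimed-particle fall speeds used to constrain the bin model.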

  8. Top ten models constrained by b → sγ

    Energy Technology Data Exchange (ETDEWEB)

    Hewett, J.L. [Stanford Univ., CA (United States)

    1994-12-01

The radiative decay b → sγ is examined in the Standard Model and in nine classes of models which contain physics beyond the Standard Model. The constraints which may be placed on these models from the recent results of the CLEO Collaboration on both inclusive and exclusive radiative B decays are summarized. Reasonable bounds are found for the parameters in some cases.

  9. A fuzzy neural network model to forecast the percent cloud coverage and cloud top temperature maps

    Directory of Open Access Journals (Sweden)

    Y. Tulunay

    2008-12-01

Full Text Available Atmospheric processes are highly nonlinear. A small group at METU in Ankara has been working on a fuzzy data-driven generic model of nonlinear processes. The model developed is called the Middle East Technical University Fuzzy Neural Network Model (METU-FNN-M). The METU-FNN-M consists of a Fuzzy Inference System (METU-FIS), a data-driven Neural Network module (METU-FNN) with one hidden layer and several neurons, and a mapping module, which employs the Bezier Surface Mapping technique. In this paper, the percent cloud coverage (%CC) and cloud top temperatures (CTT) are forecast one month ahead of time at 96 grid locations. The probable influence of cosmic rays and sunspot numbers on cloudiness is considered by using the METU-FNN-M.

  10. The Impact of Aerosols on Cloud and Precipitation Processes: Cloud-Resolving Model Simulations

    Science.gov (United States)

    Tao, Wei-Kuo; Li, Xiaowen; Khain, Alexander; Matsui, Toshihisa; Lang, Stephen; Simpson, Joanne

    2012-01-01

Recently, a detailed spectral-bin microphysical scheme was implemented into the Goddard Cumulus Ensemble (GCE) model. Atmospheric aerosols are also described using number density size-distribution functions. A spectral-bin microphysical model is very expensive from a computational point of view and has only been implemented into the 2D version of the GCE at the present time. The model is tested by studying the evolution of deep tropical clouds in the west Pacific warm pool region and summertime convection over a mid-latitude continent with different concentrations of CCN: a low "clean" concentration and a high "dirty" concentration. The impact of atmospheric aerosol concentration on cloud and precipitation will be investigated.

  11. A Scalable Cloud Library Empowering Big Data Management, Diagnosis, and Visualization of Cloud-Resolving Models

    Science.gov (United States)

    Zhou, S.; Tao, W. K.; Li, X.; Matsui, T.; Sun, X. H.; Yang, X.

    2015-12-01

A cloud-resolving model (CRM) is an atmospheric numerical model that can numerically resolve clouds and cloud systems at 0.25~5 km horizontal grid spacings. The main advantage of the CRM is that it can allow explicit interactive processes between microphysics, radiation, turbulence, surface, and aerosols without subgrid cloud fraction, overlapping and convective parameterization. Because of their fine resolution and complex physical processes, it is challenging for the CRM community to i) visualize/inter-compare CRM simulations, ii) diagnose key processes for cloud-precipitation formation and intensity, and iii) evaluate against NASA's field campaign data and L1/L2 satellite data products, due to the large data volume (~10 TB) and the complexity of CRM physical processes. We have been building the Super Cloud Library (SCL) upon a Hadoop framework, capable of CRM database management, distribution, visualization, subsetting, and evaluation in a scalable way. The current SCL capability includes: (1) an SCL data model that enables various CRM simulation outputs in NetCDF, including the NASA-Unified Weather Research and Forecasting (NU-WRF) and Goddard Cumulus Ensemble (GCE) model, to be accessed and processed by Hadoop; (2) a parallel NetCDF-to-CSV converter that supports NU-WRF and GCE model outputs; (3) a technique that visualizes Hadoop-resident data with IDL; (4) a technique that subsets Hadoop-resident data, compliant with the SCL data model, with HIVE or Impala via HUE's Web interface; (5) a prototype that enables a Hadoop MapReduce application to dynamically access and process data residing in a parallel file system, PVFS2 or CephFS, where high performance computing (HPC) simulation outputs such as NU-WRF's and GCE's are located. We are testing Apache Spark to speed up SCL data processing and analysis. With the SCL capabilities, SCL users can conduct large-domain on-demand tasks without downloading voluminous CRM datasets and various observations from NASA Field Campaigns and Satellite data to a
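The subsetting step can be illustrated in miniature: after the NetCDF-to-CSV conversion, a HIVE/Impala-style selection reduces to a row filter. The column names and threshold below are invented stand-ins for real CRM output.

```python
import csv
import io

# toy stand-in for a NetCDF-to-CSV converted CRM output file
rows = "time,level,cloud_water\n0,1,0.00\n0,2,0.15\n1,1,0.02\n1,2,0.30\n"
reader = csv.DictReader(io.StringIO(rows))

# the subset a query like
#   SELECT * FROM gce WHERE cloud_water > 0.1
# would express, done here as a plain row filter
subset = [r for r in reader if float(r["cloud_water"]) > 0.1]
```

In the SCL this filter is pushed down to Hadoop, so only the matching records, not the multi-terabyte archive, reach the user.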

  12. IMAGE TO POINT CLOUD METHOD OF 3D-MODELING

    Directory of Open Access Journals (Sweden)

    A. G. Chibunichev

    2012-07-01

Full Text Available This article describes a method of constructing 3D models of objects (buildings, monuments) based on digital images and a point cloud obtained by a terrestrial laser scanner. The first step is the automated determination of the exterior orientation parameters of a digital image, which requires finding corresponding points between the image and the point cloud. Before the correspondence search, a quasi-image of the point cloud is generated. The SIFT algorithm is then applied to the quasi-image and the real image to find corresponding points, from which the exterior orientation parameters of the image are calculated. The second step is the construction of the vector object model. Vectorization is performed interactively by an operator using a single image. Spatial coordinates of the model are calculated automatically from the cloud points. In addition, automatic edge detection with interactive editing is available. Edge detection is performed on the point cloud and on the image, with subsequent identification of the correct edges. Experimental studies of the method have demonstrated its efficiency in the case of building facade modeling.
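The quasi-image idea can be sketched with a pinhole projection: each 3D point is mapped to a pixel and the nearest point per pixel is kept, yielding a synthetic image of the point cloud to which a feature detector such as SIFT could be applied. The focal length, principal point, and the flat test "facade" are illustrative assumptions.

```python
# pinhole projection: u = f*X/Z + cx, v = f*Y/Z + cy  (units arbitrary)
f, cx, cy, W, H = 100.0, 32.0, 32.0, 64, 64

# a flat "facade" of scanner points 5 units in front of the camera
points = [(x * 0.1, y * 0.1, 5.0)
          for x in range(-20, 21) for y in range(-20, 21)]

depth = [[float("inf")] * W for _ in range(H)]  # depth-buffered quasi-image
for X, Y, Z in points:
    u = int(round(f * X / Z + cx))
    v = int(round(f * Y / Z + cy))
    if 0 <= u < W and 0 <= v < H and Z < depth[v][u]:
        depth[v][u] = Z   # keep the nearest point per pixel

filled = sum(1 for row in depth for d in row if d != float("inf"))
```

A real quasi-image would encode intensity or shaded depth rather than raw Z, but the projection and per-pixel nearest-point logic are the same.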

  13. The Route to Raindrop Formation in a Shallow Cumulus Cloud Simulated by a Lagrangian Cloud Model

    Science.gov (United States)

    Noh, Yign; Hoffmann, Fabian; Raasch, Siegfried

    2017-11-01

The mechanism of raindrop formation in a shallow cumulus cloud is investigated using a Lagrangian cloud model (LCM). The analysis is focused on how and under which conditions a cloud droplet grows to a raindrop by tracking the history of individual Lagrangian droplets. It is found that the rapid collisional growth, leading to raindrop formation, is triggered when single droplets with a radius of 20 μm appear in the region near the cloud top, characterized by a large liquid water content, strong turbulence, large mean droplet size, a broad drop size distribution (DSD), and high supersaturations. Raindrop formation easily occurs when turbulence-induced collision enhancement (TICE) is considered, with or without any extra broadening of the DSD by another mechanism (such as entrainment and mixing). In contrast, when TICE is not considered, raindrop formation is severely delayed if no other broadening mechanism is active. The reason leading to the difference is clarified by additional analysis of idealized box simulations of the collisional growth process for different DSDs in varied turbulent environments. It is found that TICE does not accelerate the timing of the raindrop formation for individual droplets, but it enhances the collisional growth rate significantly afterward. KMA R & D Program (Korea), DFG (Germany).

  14. Longitudinal Control for Mengshi Autonomous Vehicle via Gauss Cloud Model

    Directory of Open Access Journals (Sweden)

    Hongbo Gao

    2017-12-01

Full Text Available Dynamic robustness and stability control is a requirement for the self-driving of an autonomous vehicle. Longitudinal control is a foundational and complex technique that demands a reliable and precise vehicle controller, and it is one of the foundations of the safety and stability of autonomous vehicle control. In this paper, we present a longitudinal control algorithm based on the cloud model for the Mengshi autonomous vehicle to ensure its dynamic stability and tracking performance. The longitudinal control algorithm mainly uses a cloud model generator to control the acceleration of the autonomous vehicle, and thereby its speed. The proposed longitudinal control algorithm based on the cloud model is verified by real experiments in a highway driving scene. The experimental results for acceleration and speed show that the algorithm is valid and stable.
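A minimal sketch of the idea (not the actual Mengshi controller): a one-dimensional normal cloud generator produces the control gain as "cloud drops", so the commanded acceleration varies stochastically around an expected proportional law. The Ex/En/He values, saturation limits, and target speed are invented for illustration.

```python
import math
import random

random.seed(3)

def cloud_drop(ex, en, he):
    """One drop from a normal cloud C(Ex, En, He): a sample x and its
    membership degree in the qualitative concept."""
    enn = abs(random.gauss(en, he))
    x = random.gauss(ex, enn)
    mu = math.exp(-((x - ex) ** 2) / (2 * enn ** 2))
    return x, mu

# toy speed loop: accelerate toward the target speed with a cloud-drop gain
target, speed, dt = 20.0, 0.0, 0.1
for _ in range(300):
    err = target - speed
    gain, _mu = cloud_drop(ex=0.5, en=0.05, he=0.01)
    accel = max(-3.0, min(3.0, gain * err))   # saturate like a real vehicle
    speed += accel * dt
```

The hyper-entropy He adds mild randomness to the gain, which is what distinguishes a cloud-model controller from a crisp proportional law.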

  15. A Local Search Modeling for Constrained Optimum Paths Problems (Extended Abstract)

    Directory of Open Access Journals (Sweden)

    Quang Dung Pham

    2009-10-01

Full Text Available Constrained Optimum Path (COP) problems appear in many real-life applications, especially on communication networks. Some of these problems have been considered and solved by specific techniques which are usually difficult to extend. In this paper, we introduce a novel local search modeling for solving some COPs by local search. The modeling features compositionality, modularity, and reuse, and strengthens the benefits of Constraint-Based Local Search. We also apply the modeling to the edge-disjoint paths problem (EDP). We show that side constraints can easily be added in the model. Computational results show the significance of the approach.

  16. A constrained Rasch model of trace redintegration in serial recall.

    Science.gov (United States)

    Roodenrys, Steven; Miller, Leonie M

    2008-04-01

    The notion that verbal short-term memory tasks, such as serial recall, make use of information in long-term as well as in short-term memory is instantiated in many models of these tasks. Such models incorporate a process in which degraded traces retrieved from a short-term store are reconstructed, or redintegrated (Schweickert, 1993), through the use of information in long-term memory. This article presents a conceptual and mathematical model of this process based on a class of item-response theory models. It is demonstrated that this model provides a better fit to three sets of data than does the multinomial processing tree model of redintegration (Schweickert, 1993) and that a number of conceptual accounts of serial recall can be related to the parameters of the model.
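The item-response backbone can be sketched with the plain Rasch form, where the probability that a trace element is correctly redintegrated depends on the difference between the item's long-term-memory support θ and the element's difficulty b. This is the generic 1PL model, not the constrained variant fitted in the article; all numbers are invented.

```python
import math

def rasch_p(theta, b):
    """1PL (Rasch) probability of reconstructing one trace element."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

def recall_prob(theta, difficulties):
    """Recall succeeds only if every element is redintegrated."""
    p = 1.0
    for b in difficulties:
        p *= rasch_p(theta, b)
    return p

# a well-supported (e.g. high-frequency) item vs. a weakly supported one
high_support = recall_prob(1.5, [0.0, 0.2, -0.3])
low_support = recall_prob(0.2, [0.0, 0.2, -0.3])
```

Fitting θ and b across word sets is what lets the model separate item-level support from element-level difficulty in the serial-recall data.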

  17. Cloud-Scale Numerical Modeling of the Arctic Boundary Layer

    Science.gov (United States)

    Krueger, Steven K.

    1998-01-01

    The interactions between sea ice, open ocean, atmospheric radiation, and clouds over the Arctic Ocean exert a strong influence on global climate. Uncertainties in the formulation of interactive air-sea-ice processes in global climate models (GCMs) result in large differences between the Arctic, and global, climates simulated by different models. Arctic stratus clouds are not well-simulated by GCMs, yet exert a strong influence on the surface energy budget of the Arctic. Leads (channels of open water in sea ice) have significant impacts on the large-scale budgets during the Arctic winter, when they contribute about 50 percent of the surface fluxes over the Arctic Ocean, but cover only 1 to 2 percent of its area. Convective plumes generated by wide leads may penetrate the surface inversion and produce condensate that spreads up to 250 km downwind of the lead, and may significantly affect the longwave radiative fluxes at the surface and thereby the sea ice thickness. The effects of leads and boundary layer clouds must be accurately represented in climate models to allow possible feedbacks between them and the sea ice thickness. The FIRE III Arctic boundary layer clouds field program, in conjunction with the SHEBA ice camp and the ARM North Slope of Alaska and Adjacent Arctic Ocean site, will offer an unprecedented opportunity to greatly improve our ability to parameterize the important effects of leads and boundary layer clouds in GCMs.

  18. ODE constrained mixture modelling: a method for unraveling subpopulation structures and dynamics.

    Directory of Open Access Journals (Sweden)

    Jan Hasenauer

    2014-07-01

Full Text Available Functional cell-to-cell variability is ubiquitous in multicellular organisms as well as bacterial populations. Even genetically identical cells of the same cell type can respond differently to identical stimuli. Methods have been developed to analyse heterogeneous populations, e.g., mixture models and stochastic population models. The available methods are, however, either incapable of simultaneously analysing different experimental conditions or are computationally demanding and difficult to apply. Furthermore, they do not account for biological information available in the literature. To overcome the disadvantages of existing methods, we combine mixture models and ordinary differential equation (ODE) models. The ODE models provide a mechanistic description of the underlying processes while mixture models provide an easy way to capture variability. In a simulation study, we show that the class of ODE constrained mixture models can unravel the subpopulation structure and determine the sources of cell-to-cell variability. In addition, the method provides reliable estimates for kinetic rates and subpopulation characteristics. We use ODE constrained mixture modelling to study NGF-induced Erk1/2 phosphorylation in primary sensory neurones, a process relevant in inflammatory and neuropathic pain. We propose a mechanistic pathway model for this process and reconstruct static and dynamical subpopulation characteristics across experimental conditions. We validate the model predictions experimentally, which verifies the capabilities of ODE constrained mixture models. These results illustrate that ODE constrained mixture models can reveal novel mechanistic insights and possess a high sensitivity.

  19. ODE constrained mixture modelling: a method for unraveling subpopulation structures and dynamics.

    Science.gov (United States)

    Hasenauer, Jan; Hasenauer, Christine; Hucho, Tim; Theis, Fabian J

    2014-07-01

Functional cell-to-cell variability is ubiquitous in multicellular organisms as well as bacterial populations. Even genetically identical cells of the same cell type can respond differently to identical stimuli. Methods have been developed to analyse heterogeneous populations, e.g., mixture models and stochastic population models. The available methods are, however, either incapable of simultaneously analysing different experimental conditions or are computationally demanding and difficult to apply. Furthermore, they do not account for biological information available in the literature. To overcome the disadvantages of existing methods, we combine mixture models and ordinary differential equation (ODE) models. The ODE models provide a mechanistic description of the underlying processes while mixture models provide an easy way to capture variability. In a simulation study, we show that the class of ODE constrained mixture models can unravel the subpopulation structure and determine the sources of cell-to-cell variability. In addition, the method provides reliable estimates for kinetic rates and subpopulation characteristics. We use ODE constrained mixture modelling to study NGF-induced Erk1/2 phosphorylation in primary sensory neurones, a process relevant in inflammatory and neuropathic pain. We propose a mechanistic pathway model for this process and reconstruct static and dynamical subpopulation characteristics across experimental conditions. We validate the model predictions experimentally, which verifies the capabilities of ODE constrained mixture models. These results illustrate that ODE constrained mixture models can reveal novel mechanistic insights and possess a high sensitivity.
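The combination can be sketched with a toy two-subpopulation example: each subpopulation's mean response comes from an ODE solved with its own kinetic rate, and a Gaussian mixture over those means captures the cell-to-cell spread. The decay ODE, rates, weights, and noise level are all illustrative assumptions, not the Erk1/2 pathway model of the paper.

```python
import math

def ode_state(k, x0=1.0, t=2.0, dt=0.01):
    """Forward-Euler solution of the subpopulation ODE dx/dt = -k*x at time t."""
    x = x0
    for _ in range(int(t / dt)):
        x += dt * (-k * x)
    return x

# two subpopulations with different kinetic rates; the ODE fixes each
# mixture component's mean, a Gaussian captures the within-population spread
rates, weights, sigma = (0.2, 1.5), (0.6, 0.4), 0.05
means = [ode_state(k) for k in rates]

def mixture_pdf(y):
    return sum(w * math.exp(-0.5 * ((y - m) / sigma) ** 2)
               / (sigma * math.sqrt(2.0 * math.pi))
               for w, m in zip(weights, means))
```

Fitting the rates, weights, and noise jointly to single-cell measurements (e.g. by maximum likelihood over `mixture_pdf`) is what makes the mixture "ODE constrained": the component means are not free parameters but outputs of the mechanistic model.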

  20. The ARM Cloud Radar Simulator for Global Climate Models: Bridging Field Data and Climate Models

    Energy Technology Data Exchange (ETDEWEB)

    Zhang, Yuying [Lawrence Livermore National Laboratory, Livermore, California; Xie, Shaocheng [Lawrence Livermore National Laboratory, Livermore, California; Klein, Stephen A. [Lawrence Livermore National Laboratory, Livermore, California; Marchand, Roger [University of Washington, Seattle, Washington; Kollias, Pavlos [Stony Brook University, Stony Brook, New York; Clothiaux, Eugene E. [The Pennsylvania State University, University Park, Pennsylvania; Lin, Wuyin [Brookhaven National Laboratory, Upton, New York; Johnson, Karen [Brookhaven National Laboratory, Upton, New York; Swales, Dustin [CIRES and NOAA/Earth System Research Laboratory, Boulder, Colorado; Bodas-Salcedo, Alejandro [Met Office Hadley Centre, Exeter, United Kingdom; Tang, Shuaiqi [Lawrence Livermore National Laboratory, Livermore, California; Haynes, John M. [Cooperative Institute for Research in the Atmosphere/Colorado State University, Fort Collins, Colorado; Collis, Scott [Argonne National Laboratory, Argonne, Illinois; Jensen, Michael [Brookhaven National Laboratory, Upton, New York; Bharadwaj, Nitin [Pacific Northwest National Laboratory, Richland, Washington; Hardin, Joseph [Pacific Northwest National Laboratory, Richland, Washington; Isom, Bradley [Pacific Northwest National Laboratory, Richland, Washington

    2018-01-01

Clouds play an important role in Earth's radiation budget and hydrological cycle. However, current global climate models (GCMs) have had difficulties in accurately simulating clouds and precipitation. To improve the representation of clouds in climate models, it is crucial to identify where simulated clouds differ from real-world observations of them. This can be difficult, since significant differences exist between how a climate model represents clouds and what instruments observe, both in terms of spatial scale and the properties of the hydrometeors which are either modeled or observed. To address these issues and minimize the impacts of instrument limitations, the concept of instrument "simulators", which convert model variables into pseudo-instrument observations, has evolved with the goal of improving and facilitating the comparison of modeled clouds with observations. Many simulators have been (and continue to be) developed for a variety of instruments and purposes. A community satellite simulator package, the Cloud Feedback Model Intercomparison Project (CFMIP) Observation Simulator Package (COSP; Bodas-Salcedo et al. 2011), contains several independent satellite simulators and is being widely used in the global climate modeling community to exploit satellite observations for model cloud evaluation (e.g., Klein et al. 2013; Zhang et al. 2010). This article introduces a ground-based cloud radar simulator developed by the U.S. Department of Energy (DOE) Atmospheric Radiation Measurement (ARM) program for comparing climate model clouds with ARM observations from its vertically pointing 35-GHz radars. As compared to CloudSat radar observations, ARM radar measurements occur with higher temporal resolution and finer vertical resolution. This enables users to investigate more fully the detailed vertical structures within clouds, resolve thin clouds, and quantify the diurnal variability of clouds. Particularly, ARM radars are sensitive to low-level clouds, which are

  1. Longitudinal Control for Mengshi Autonomous Vehicle via Cloud Model

    Science.gov (United States)

    Gao, H. B.; Zhang, X. Y.; Li, D. Y.; Liu, Y. C.

    2018-03-01

Dynamic robustness and stability control is a requirement for the self-driving of an autonomous vehicle. The longitudinal control of an autonomous vehicle is a key technique which has drawn the attention of industry and academia. In this paper, we present a longitudinal control algorithm based on the cloud model for the Mengshi autonomous vehicle to ensure its dynamic stability and tracking performance. An experiment was conducted to test the implementation of the longitudinal control algorithm. Empirical results show that if the longitudinal control algorithm based on the Gauss cloud model is applied to calculate the acceleration, and the vehicles drive at different speeds, a stable longitudinal control effect is achieved.

  2. Mechanisms of diurnal precipitation over the US Great Plains: a cloud resolving model perspective

    Science.gov (United States)

    Lee, Myong-In; Choi, Ildae; Tao, Wei-Kuo; Schubert, Siegfried D.; Kang, In-Sik

    2010-02-01

The mechanisms of summertime diurnal precipitation in the US Great Plains were examined with the two-dimensional (2D) Goddard Cumulus Ensemble (GCE) cloud-resolving model (CRM). The model was constrained by the observed large-scale background state and surface flux derived from the Department of Energy (DOE) Atmospheric Radiation Measurement (ARM) Program's Intensive Observing Period (IOP) data at the Southern Great Plains (SGP). The model, when continuously-forced by realistic surface flux and large-scale advection, simulates reasonably well the temporal evolution of the observed rainfall episodes, particularly for the strongly forced precipitation events. However, the model exhibits a deficiency for the weakly forced events driven by diurnal convection. Additional tests were run with the GCE model in order to discriminate between the mechanisms that determine daytime and nighttime convection. In these tests, the model was constrained with the same repeating diurnal variation in the large-scale advection and/or surface flux. The results indicate that it is primarily the surface heat and moisture flux that is responsible for the development of deep convection in the afternoon, whereas the large-scale upward motion and associated moisture advection play an important role in preconditioning nocturnal convection. In the nighttime, high clouds are continuously built up through their interaction and feedback with long-wave radiation, eventually initiating deep convection from the boundary layer. Without these upper-level destabilization processes, the model tends to produce only daytime convection in response to boundary layer heating. This study suggests that the correct simulation of the diurnal variation in precipitation requires that the free-atmospheric destabilization mechanisms resolved in the CRM simulation must be adequately parameterized in current general circulation models (GCMs), many of which are overly sensitive to the parameterized boundary layer heating.

  3. Mechanisms of Diurnal Precipitation over the United States Great Plains: A Cloud-Resolving Model Simulation

    Science.gov (United States)

    Lee, M.-I.; Choi, I.; Tao, W.-K.; Schubert, S. D.; Kang, I.-K.

    2010-01-01

The mechanisms of summertime diurnal precipitation in the US Great Plains were examined with the two-dimensional (2D) Goddard Cumulus Ensemble (GCE) cloud-resolving model (CRM). The model was constrained by the observed large-scale background state and surface flux derived from the Department of Energy (DOE) Atmospheric Radiation Measurement (ARM) Program's Intensive Observing Period (IOP) data at the Southern Great Plains (SGP). The model, when continuously-forced by realistic surface flux and large-scale advection, simulates reasonably well the temporal evolution of the observed rainfall episodes, particularly for the strongly forced precipitation events. However, the model exhibits a deficiency for the weakly forced events driven by diurnal convection. Additional tests were run with the GCE model in order to discriminate between the mechanisms that determine daytime and nighttime convection. In these tests, the model was constrained with the same repeating diurnal variation in the large-scale advection and/or surface flux. The results indicate that it is primarily the surface heat and moisture flux that is responsible for the development of deep convection in the afternoon, whereas the large-scale upward motion and associated moisture advection play an important role in preconditioning nocturnal convection. In the nighttime, high clouds are continuously built up through their interaction and feedback with long-wave radiation, eventually initiating deep convection from the boundary layer. Without these upper-level destabilization processes, the model tends to produce only daytime convection in response to boundary layer heating.
This study suggests that the correct simulation of the diurnal variation in precipitation requires that the free-atmospheric destabilization mechanisms resolved in the CRM simulation must be adequately parameterized in current general circulation models (GCMs), many of which are overly sensitive to the parameterized boundary layer heating.

  4. Constraining new physics with collider measurements of Standard Model signatures

    Energy Technology Data Exchange (ETDEWEB)

    Butterworth, Jonathan M. [Department of Physics and Astronomy, University College London,Gower St., London, WC1E 6BT (United Kingdom); Grellscheid, David [IPPP, Department of Physics, Durham University,Durham, DH1 3LE (United Kingdom); Krämer, Michael; Sarrazin, Björn [Institute for Theoretical Particle Physics and Cosmology, RWTH Aachen University,Sommerfeldstr. 16, 52056 Aachen (Germany); Yallup, David [Department of Physics and Astronomy, University College London,Gower St., London, WC1E 6BT (United Kingdom)

    2017-03-14

A new method providing general consistency constraints for Beyond-the-Standard-Model (BSM) theories, using measurements at particle colliders, is presented. The method, ‘Constraints On New Theories Using Rivet’, CONTUR, exploits the fact that particle-level differential measurements made in fiducial regions of phase-space have a high degree of model-independence. These measurements can therefore be compared to BSM physics implemented in Monte Carlo generators in a very generic way, allowing a wider array of final states to be considered than is typically the case. The CONTUR approach should be seen as complementary to the discovery potential of direct searches, being designed to eliminate inconsistent BSM proposals in a context where many (but perhaps not all) measurements are consistent with the Standard Model. We demonstrate, using a competitive simplified dark matter model, the power of this approach. The CONTUR method is highly scalable to other models and future measurements.
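The core of the approach can be sketched as a bin-by-bin comparison: the measurement agrees with the Standard Model, so a BSM scenario that would add events on top of the SM prediction is excluded when the resulting chi-square grows too large. All yields, uncertainties, and the 3.84 threshold (roughly 95% CL for one degree of freedom) are illustrative, not a CONTUR implementation.

```python
# fiducial differential measurement (4 bins), its uncertainty, the SM
# prediction, and a hypothetical BSM signal yield in the same bins
data = [100.0, 80.0, 40.0, 10.0]
data_err = [10.0, 9.0, 6.5, 3.2]
sm = [98.0, 82.0, 41.0, 9.0]
bsm_signal = [5.0, 12.0, 30.0, 25.0]

def chi2(pred):
    return sum(((d - p) / e) ** 2 for d, p, e in zip(data, pred, data_err))

chi2_sm = chi2(sm)
chi2_bsm = chi2([s + b for s, b in zip(sm, bsm_signal)])

# the measurement excludes this BSM point if adding the signal makes the
# fit significantly worse than the SM-only description
excluded = (chi2_bsm - chi2_sm) > 3.84
```

Scanning this test over the BSM model's parameter space yields exclusion contours from measurements that were never designed as searches.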

  5. Model Infrastruktur dan Manajemen Platform Server Berbasis Cloud Computing

    Directory of Open Access Journals (Sweden)

    Mulki Indana Zulfa

    2017-11-01

Full Text Available Cloud computing is a new and still rapidly growing technology. It makes the Internet the main medium for managing data and applications remotely. Cloud computing allows users to run an application without having to think about the underlying infrastructure and platforms; other technical aspects, such as memory, storage, and backup and restore, can be handled very easily. This research models the infrastructure and platform management of the computer network of the Faculty of Engineering, Jenderal Soedirman University. The first stage of the research is a literature study to identify implementation models from previous work. The results are then combined with a new approach to the existing resources and implemented directly on the existing server network. The results show that the implemented cloud computing technology is able to replace the existing network platform.

  6. BUSINESS MODELLING AND DATABASE DESIGN IN CLOUD COMPUTING

    Directory of Open Access Journals (Sweden)

    Mihai-Constantin AVORNICULUI

    2015-04-01

Full Text Available Electronic commerce has grown steadily year after year over the last decade; few areas register comparable growth. It covers computerized data exchange, but also electronic messaging, linear data banks and electronic transfer of payments. Cloud computing, a relatively new concept and term, is a model for accessing, via the Internet, distributed pools of configurable computing resources on demand, which can be made available quickly with minimal management effort and minimal interaction between client and provider. Behind a cloud-based electronic commerce system there is a database containing the information needed for the system's transactions. Business modelling brings many benefits, and it makes the design of the database used by cloud-based electronic commerce systems considerably easier.

  7. Model-as-a-service (MaaS) using the cloud service innovation platform (CSIP)

    Science.gov (United States)

Cloud infrastructures for modelling activities such as data processing, performing environmental simulations, or conducting model calibrations/optimizations provide a cost-effective alternative to traditional high-performance computing approaches. Cloud-based modelling examples emerged into the more...

  8. Inference with constrained hidden Markov models in PRISM

    DEFF Research Database (Denmark)

    Christiansen, Henning; Have, Christian Theil; Lassen, Ole Torp

    2010-01-01

A Hidden Markov Model (HMM) is a common statistical model which is widely used for the analysis of biological sequence data and other sequential phenomena. In the present paper we show how HMMs can be extended with side-constraints, and present constraint solving techniques for efficient inference. Constraints such as all_different are integrated. We experimentally validate our approach on the biologically motivated problem of global pairwise alignment.
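The idea of decoding under a side-constraint can be illustrated with a brute-force sketch: enumerate candidate state paths, prune those violating the constraint, and keep the most probable survivor. This is only for intuition; the paper's PRISM implementation uses proper constraint solving rather than enumeration, and all names below are illustrative.

```python
from itertools import product

def constrained_best_path(obs, states, start_p, trans_p, emit_p, ok):
    """Most probable HMM state path subject to a global side-constraint
    `ok(path)`. Brute force, so feasible only for tiny instances."""
    best_path, best_prob = None, 0.0
    for path in product(states, repeat=len(obs)):
        if not ok(path):
            continue  # prune paths violating the side-constraint
        p = start_p[path[0]] * emit_p[path[0]][obs[0]]
        for t in range(1, len(obs)):
            p *= trans_p[path[t - 1]][path[t]] * emit_p[path[t]][obs[t]]
        if p > best_prob:
            best_prob, best_path = p, list(path)
    return best_path, best_prob
```

With `ok = lambda path: len(set(path)) == len(path)` this enforces an all_different-style constraint on the decoded path.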

  9. Traffic modelling for Big Data backed telecom cloud

    OpenAIRE

    Via Baraldés, Anna

    2016-01-01

The objective of this project is to provide traffic models based on the characteristics of new services. Specifically, we focus on modelling the traffic between origin-destination node pairs (also known as OD pairs) in a telecom network. Two use cases are distinguished: i) traffic generation in the context of simulation, and ii) traffic modelling for prediction in the context of big-data backed telecom cloud systems. To this end, several machine learning and statistical models and techniques are studi...

  10. A Diagnostic PDF Cloud Scheme to Improve Subtropical Low Clouds in NCAR Community Atmosphere Model (CAM5)

    Science.gov (United States)

    Qin, Yi; Lin, Yanluan; Xu, Shiming; Ma, Hsi-Yen; Xie, Shaocheng

    2018-02-01

    Low clouds strongly impact the radiation budget of the climate system, but their simulation in most GCMs has remained a challenge, especially over the subtropical stratocumulus region. Assuming a Gaussian distribution for the subgrid-scale total water and liquid water potential temperature, a new statistical cloud scheme is proposed and tested in NCAR Community Atmospheric Model version 5 (CAM5). The subgrid-scale variance is diagnosed from the turbulent and shallow convective processes in CAM5. The approach is able to maintain the consistency between cloud fraction and cloud condensate and thus alleviates the adjustment needed in the default relative humidity-based cloud fraction scheme. Short-term forecast simulations indicate that low cloud fraction and liquid water content, including their diurnal cycle, are improved due to a proper consideration of subgrid-scale variance over the southeastern Pacific Ocean region. Compared with the default cloud scheme, the new approach produced the mean climate reasonably well with improved shortwave cloud forcing (SWCF) due to more reasonable low cloud fraction and liquid water path over regions with predominant low clouds. Meanwhile, the SWCF bias over the tropical land regions is also alleviated. Furthermore, the simulated marine boundary layer clouds with the new approach extend further offshore and agree better with observations. The new approach is able to obtain the top of atmosphere (TOA) radiation balance with a slightly alleviated double ITCZ problem in preliminary coupled simulations. This study implies that a close coupling of cloud processes with other subgrid-scale physical processes is a promising approach to improve cloud simulations.
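The Gaussian assumption makes the diagnostics closed-form: given the grid-mean total water, the saturation value, and the diagnosed subgrid standard deviation, both cloud fraction and mean condensate follow from the same PDF. The function below is a minimal sketch of that general idea with illustrative inputs, not the actual CAM5 implementation.

```python
import math

def gaussian_cloud_fraction(qt_mean, qsat, sigma):
    """Diagnostic cloud fraction and mean condensate for a Gaussian
    subgrid distribution of total water qt ~ N(qt_mean, sigma^2).
    Cloud fraction is the probability that qt exceeds saturation;
    condensate is the mean excess over saturation (kg/kg)."""
    s = (qt_mean - qsat) / (math.sqrt(2.0) * sigma)
    cf = 0.5 * (1.0 + math.erf(s))
    # E[(qt - qsat)^+] for the same Gaussian PDF, so cloud fraction
    # and condensate stay mutually consistent by construction
    ql = (qt_mean - qsat) * cf + sigma / math.sqrt(2.0 * math.pi) * math.exp(-s * s)
    return cf, max(ql, 0.0)
```

At exact grid-mean saturation this gives a cloud fraction of 0.5 with a small positive condensate, which is precisely the consistency between cloud fraction and cloud condensate that a relative-humidity-based scheme has to enforce by adjustment.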

  11. Constraining Stochastic Parametrisation Schemes Using High-Resolution Model Simulations

    Science.gov (United States)

    Christensen, H. M.; Dawson, A.; Palmer, T.

    2017-12-01

    Stochastic parametrisations are used in weather and climate models as a physically motivated way to represent model error due to unresolved processes. Designing new stochastic schemes has been the target of much innovative research over the last decade. While a focus has been on developing physically motivated approaches, many successful stochastic parametrisation schemes are very simple, such as the European Centre for Medium-Range Weather Forecasts (ECMWF) multiplicative scheme `Stochastically Perturbed Parametrisation Tendencies' (SPPT). The SPPT scheme improves the skill of probabilistic weather and seasonal forecasts, and so is widely used. However, little work has focused on assessing the physical basis of the SPPT scheme. We address this matter by using high-resolution model simulations to explicitly measure the `error' in the parametrised tendency that SPPT seeks to represent. The high resolution simulations are first coarse-grained to the desired forecast model resolution before they are used to produce initial conditions and forcing data needed to drive the ECMWF Single Column Model (SCM). By comparing SCM forecast tendencies with the evolution of the high resolution model, we can measure the `error' in the forecast tendencies. In this way, we provide justification for the multiplicative nature of SPPT, and for the temporal and spatial scales of the stochastic perturbations. However, we also identify issues with the SPPT scheme. It is therefore hoped these measurements will improve both holistic and process based approaches to stochastic parametrisation. Figure caption: Instantaneous snapshot of the optimal SPPT stochastic perturbation, derived by comparing high-resolution simulations with a low resolution forecast model.
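The multiplicative scheme itself is simple to sketch: scale the total parametrised tendency by (1 + r), with r drawn from a smoothly evolving AR(1) random process. The single-point sketch below uses illustrative parameter values, not the ECMWF settings, which use spatially correlated patterns at several scales.

```python
import numpy as np

def sppt_tendency(tend, n_steps, dt=1.0, tau=6.0, sigma=0.5, seed=0):
    """Multiplicative SPPT-style perturbation at a single point:
    perturbed tendency = (1 + r) * tend, with r an AR(1) process of
    standard deviation sigma and decorrelation time tau (illustrative
    values; dt and tau in the same time units)."""
    rng = np.random.default_rng(seed)
    phi = np.exp(-dt / tau)  # AR(1) autocorrelation over one step
    r, out = 0.0, []
    for _ in range(n_steps):
        r = phi * r + sigma * np.sqrt(1.0 - phi**2) * rng.standard_normal()
        out.append((1.0 + np.clip(r, -1.0, 1.0)) * tend)  # clip keeps the sign
    return np.array(out)
```

The clipping of r to [-1, 1] means the perturbation can shrink a tendency to zero but never reverse its sign, which is one of the physical assumptions of a multiplicative scheme that coarse-grained high-resolution simulations can test.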

  12. CloudLM: a Cloud-based Language Model for Machine Translation

    Directory of Open Access Journals (Sweden)

    Ferrández-Tordera Jorge

    2016-04-01

Full Text Available Language models (LMs) are an essential element in statistical approaches to natural language processing for tasks such as speech recognition and machine translation (MT). The advent of big data has led to the availability of massive amounts of data to build LMs, and in fact, for the most prominent languages, using current techniques and hardware, it is not feasible to train LMs on all the data available nowadays. At the same time, it has been shown that the more data used for an LM, the better the performance, e.g. for MT, without any indication yet of reaching a plateau. This paper presents CloudLM, an open-source cloud-based LM intended for MT, which allows distributed LMs to be queried. CloudLM relies on Apache Solr and provides the functionality of state-of-the-art language modelling (it builds upon KenLM), while allowing massive LMs to be queried (as the use of local memory is drastically reduced), at the expense of slower decoding speed.

  13. The ARM-GCSS Intercomparison Study of Single-Column Models and Cloud System Models

    International Nuclear Information System (INIS)

    Cederwall, R.T.; Rodriques, D.J.; Krueger, S.K.; Randall, D.A.

    1999-01-01

The Single-Column Model (SCM) Working Group (WG) and the Cloud Working Group (CWG) in the Atmospheric Radiation Measurement (ARM) Program have begun a collaboration with the GEWEX Cloud System Study (GCSS) working groups. The forcing data sets derived from the special ARM radiosonde measurements made during the SCM Intensive Observation Periods (IOPs), the wealth of cloud and related data sets collected by the ARM Program, and the ARM infrastructure support of the SCM WG are of great value to GCSS. In return, GCSS brings the efforts of an international group of cloud system modelers to bear on ARM data sets and ARM-related scientific questions. The first major activity of the ARM-GCSS collaboration is a model intercomparison study involving SCMs and cloud system models (CSMs), also known as cloud-resolving or cloud-ensemble models. The SCM methodologies developed in the ARM Program have matured to the point where an intercomparison will help identify the strengths and weaknesses of the various approaches. CSM simulations will bring much additional information about clouds for evaluating the cloud parameterizations used in SCMs. CSMs and SCMs have been compared successfully in previous GCSS intercomparison studies for tropical conditions. The ARM Southern Great Plains (SGP) site offers an opportunity for GCSS to test these models in continental, mid-latitude conditions. The Summer 1997 SCM IOP has been chosen because it provides a wide range of summertime weather events that will be a challenging test of these models.

  14. Constrained Optimization Approaches to Estimation of Structural Models

    DEFF Research Database (Denmark)

    Iskhakov, Fedor; Rust, John; Schjerning, Bertel

    2015-01-01

    We revisit the comparison of mathematical programming with equilibrium constraints (MPEC) and nested fixed point (NFXP) algorithms for estimating structural dynamic models by Su and Judd (SJ, 2012). They used an inefficient version of the nested fixed point algorithm that relies on successive app...

  15. Constrained Optimization Approaches to Estimation of Structural Models

    DEFF Research Database (Denmark)

    Iskhakov, Fedor; Jinhyuk, Lee; Rust, John

    2016-01-01

    We revisit the comparison of mathematical programming with equilibrium constraints (MPEC) and nested fixed point (NFXP) algorithms for estimating structural dynamic models by Su and Judd (SJ, 2012). Their implementation of the nested fixed point algorithm used successive approximations to solve t...

  16. Modeling Power-Constrained Optimal Backlight Dimming for Color Displays

    DEFF Research Database (Denmark)

    Burini, Nino; Nadernejad, Ehsan; Korhonen, Jari

    2013-01-01

    In this paper, we present a framework for modeling color liquid crystal displays (LCDs) having local light-emitting diode (LED) backlight with dimming capability. The proposed framework includes critical aspects like leakage, clipping, light diffusion and human perception of luminance and allows...

  17. A marked correlation function for constraining modified gravity models

    Science.gov (United States)

    White, Martin

    2016-11-01

    Future large scale structure surveys will provide increasingly tight constraints on our cosmological model. These surveys will report results on the distance scale and growth rate of perturbations through measurements of Baryon Acoustic Oscillations and Redshift-Space Distortions. It is interesting to ask: what further analyses should become routine, so as to test as-yet-unknown models of cosmic acceleration? Models which aim to explain the accelerated expansion rate of the Universe by modifications to General Relativity often invoke screening mechanisms which can imprint a non-standard density dependence on their predictions. This suggests density-dependent clustering as a `generic' constraint. This paper argues that a density-marked correlation function provides a density-dependent statistic which is easy to compute and report and requires minimal additional infrastructure beyond what is routinely available to such survey analyses. We give one realization of this idea and study it using low order perturbation theory. We encourage groups developing modified gravity theories to see whether such statistics provide discriminatory power for their models.
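The statistic itself requires little beyond a pair count: the mean product of marks over pairs at each separation, normalised by the square of the mean mark. The brute-force sketch below illustrates the estimator on a small point set; a survey analysis would use pair-counting trees, proper edge corrections, and density-dependent marks such as m = (ρ_s/(ρ_s + ρ))^p, as the abstract suggests.

```python
import numpy as np

def marked_correlation(pos, marks, r_edges):
    """Brute-force O(N^2) estimator of the marked correlation function
    M(r): mean product of marks over pairs in each separation bin,
    normalised by the square of the mean mark. Bins with no pairs
    are returned as NaN."""
    mbar2 = marks.mean() ** 2
    nbins = len(r_edges) - 1
    sums, counts = np.zeros(nbins), np.zeros(nbins)
    for i in range(len(pos)):
        d = np.linalg.norm(pos[i + 1:] - pos[i], axis=1)  # pair separations
        idx = np.searchsorted(r_edges, d) - 1             # bin of each pair
        ok = (idx >= 0) & (idx < nbins)
        np.add.at(sums, idx[ok], marks[i] * marks[i + 1:][ok])
        np.add.at(counts, idx[ok], 1)
    return np.where(counts > 0, sums / np.maximum(counts, 1) / mbar2, np.nan)
```

With uniform marks the estimator returns 1 in every occupied bin, so departures of M(r) from unity isolate the density dependence of clustering that screening mechanisms can imprint.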

  18. A marked correlation function for constraining modified gravity models

    Energy Technology Data Exchange (ETDEWEB)

    White, Martin, E-mail: mwhite@berkeley.edu [Department of Physics, University of California, Berkeley, CA 94720 (United States)

    2016-11-01

    Future large scale structure surveys will provide increasingly tight constraints on our cosmological model. These surveys will report results on the distance scale and growth rate of perturbations through measurements of Baryon Acoustic Oscillations and Redshift-Space Distortions. It is interesting to ask: what further analyses should become routine, so as to test as-yet-unknown models of cosmic acceleration? Models which aim to explain the accelerated expansion rate of the Universe by modifications to General Relativity often invoke screening mechanisms which can imprint a non-standard density dependence on their predictions. This suggests density-dependent clustering as a 'generic' constraint. This paper argues that a density-marked correlation function provides a density-dependent statistic which is easy to compute and report and requires minimal additional infrastructure beyond what is routinely available to such survey analyses. We give one realization of this idea and study it using low order perturbation theory. We encourage groups developing modified gravity theories to see whether such statistics provide discriminatory power for their models.

  19. Uncovering the Best Skill Multimap by Constraining the Error Probabilities of the Gain-Loss Model

    Science.gov (United States)

    Anselmi, Pasquale; Robusto, Egidio; Stefanutti, Luca

    2012-01-01

    The Gain-Loss model is a probabilistic skill multimap model for assessing learning processes. In practical applications, more than one skill multimap could be plausible, while none corresponds to the true one. The article investigates whether constraining the error probabilities is a way of uncovering the best skill assignment among a number of…

  20. Models of surface convection and dust clouds in brown dwarfs

    International Nuclear Information System (INIS)

    Freytag, B; Allard, F; Ludwig, H-G; Homeier, D; Steffen, M

    2008-01-01

    The influence of dust grains on the atmospheres of brown dwarfs is visible in observed spectra. To investigate what prevents the dust grains from falling down, or how fresh condensable material is mixed up in the atmosphere to allow new grains to form, we performed 2D radiation-hydrodynamics simulations with CO5BOLD of the upper part of the convection zone and the atmosphere containing the dust cloud layers. We find that unlike in models of Cepheids, the convective overshoot does not play a major role. Instead, the mixing in the dust clouds is controlled by gravity waves.

  1. Cloud Computing Platform for an Online Model Library System

    Directory of Open Access Journals (Sweden)

    Mingang Chen

    2013-01-01

Full Text Available The rapid development of the digital content industry calls for online model libraries. For efficiency, user experience, and reliability, this paper designs a Web 3D model library system based on a cloud computing platform. To handle complex models, which make real-time 3D interaction difficult, we adopt model simplification and adaptive size adjustment to make interaction more efficient. Meanwhile, a cloud-based architecture is developed to ensure the reliability and scalability of the system. The 3D model library system is intended to be accessible to online users with a good interactive experience. The feasibility of the solution has been tested by experiments.

  2. Risk reserve constrained economic dispatch model with wind power penetration

    Energy Technology Data Exchange (ETDEWEB)

    Zhou, W.; Sun, H.; Peng, Y. [Department of Electrical and Electronics Engineering, Dalian University of Technology, Dalian, 116024 (China)

    2010-12-15

    This paper develops a modified economic dispatch (ED) optimization model with wind power penetration. Due to the uncertain nature of wind speed, both overestimation and underestimation of the available wind power are compensated using the up and down spinning reserves. In order to determine both of these two reserve demands, the risk-based up and down spinning reserve constraints are presented considering not only the uncertainty of available wind power, but also the load forecast error and generator outage rates. The predictor-corrector primal-dual interior point method is utilized to solve the proposed ED model. Simulation results of a system with ten conventional generators and one wind farm demonstrate the effectiveness of the proposed method. (authors)
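The interaction between energy and reserve can be illustrated with a merit-order sketch: each unit withholds part of its capacity for up-spinning reserve before energy is dispatched. The paper itself solves a full ED model with risk-based up and down reserve constraints via a predictor-corrector primal-dual interior point method; the proportional reserve split below is purely illustrative.

```python
def dispatch_with_reserve(demand, up_reserve, gens):
    """Merit-order economic dispatch sketch with an up-spinning reserve
    requirement shared proportionally across units (a simplification;
    the paper sizes reserves from wind, load-forecast and outage risk).
    gens: list of (cost_per_MW, p_max). Returns list of (cost, output)."""
    total_cap = sum(p for _, p in gens)
    assert demand + up_reserve <= total_cap, "infeasible: not enough capacity"
    remaining, plan = demand, []
    for cost, p_max in sorted(gens):          # cheapest units first
        # withhold this unit's proportional share of the reserve
        usable = p_max - up_reserve * p_max / total_cap
        p = min(remaining, usable)
        plan.append((cost, p))
        remaining -= p
    assert remaining <= 1e-9, "reserve requirement leaves demand unmet"
    return plan
```

The sketch makes the key trade-off visible: raising the reserve requirement pushes energy onto more expensive units, which is exactly the cost the risk-based reserve constraints in the paper are balancing against wind and outage uncertainty.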

  3. Laboratory and modeling studies of chemistry in dense molecular clouds

    Science.gov (United States)

    Huntress, W. T., Jr.; Prasad, S. S.; Mitchell, G. F.

    1980-01-01

    A chemical evolutionary model with a large number of species and a large chemical library is used to examine the principal chemical processes in interstellar clouds. Simple chemical equilibrium arguments show the potential for synthesis of very complex organic species by ion-molecule radiative association reactions.

  4. Hypersonic: Model Analysis and Checking in the Cloud

    DEFF Research Database (Denmark)

    Acretoaie, Vlad; Störrle, Harald

    2014-01-01

Objective: In this paper we investigate the conceptual and technical feasibility of a new software architecture for modeling tools, where certain advanced features are factored out of the client and moved towards the Cloud. With this approach we plan to address the above mentioned drawbacks of existing...

  5. A new revenue maximization model using customized plans in cloud ...

    African Journals Online (AJOL)

    Cloud computing is emerging as a promising field offering a variety of computing services to end users. These services are offered at different prices using various pricing schemes and techniques. End users will favor the service provider offering the best quality with the lowest price. Therefore, applying a fair pricing model ...

  6. A security model for saas in cloud computing

    International Nuclear Information System (INIS)

    Abbas, R.; Farooq, A.

    2016-01-01

Cloud computing is a type of computing that relies on shared computing resources rather than local servers or personal devices to run applications. It has several service models, such as Software-as-a-Service (SaaS), Platform-as-a-Service (PaaS) and Infrastructure-as-a-Service (IaaS). In the SaaS model, service providers install and activate applications in the cloud, and cloud customers access the software from the cloud, so users do not need to purchase and install particular software on their own machines. When using the SaaS model, users face multiple security issues and problems, such as data security, data breaches, network security, authentication and authorization, data integrity, availability, web application security and backup. Many researchers have worked hard to minimize these security problems, and much has been done to resolve them, but many issues persist and need to be overcome. In this research work, we have developed a security model that improves the security of data according to the end user's wishes. The proposed model's different data security options can help increase data security, through which the trade-off between functionalities can be optimized for private and public data. (author)

  7. Spectral cumulus parameterization based on cloud-resolving model

    Science.gov (United States)

    Baba, Yuya

    2018-02-01

We have developed a spectral cumulus parameterization using a cloud-resolving model. This includes a new parameterization of the entrainment rate, which was derived from an analysis of the cloud properties obtained from the cloud-resolving model simulation and is valid for both shallow and deep convection. The new scheme was examined in a single-column model experiment and compared with the existing parameterization of Gregory (2001, Q J R Meteorol Soc 127:53-72) (GR scheme). The results showed that the GR scheme simulated more shallow and diluted convection than the new scheme. To further validate the physical performance of the parameterizations, Atmospheric Model Intercomparison Project (AMIP) experiments were performed, and the results were compared with reanalysis data. The new scheme performed better than the GR scheme in terms of the mean state and variability of the atmospheric circulation: it reduced the positive precipitation bias in the western Pacific region and the positive bias in outgoing shortwave radiation over the ocean. The new scheme also better simulated features of convectively coupled equatorial waves and the Madden-Julian oscillation. These improvements were found to derive from the modified entrainment-rate parameterization, which suppresses an excessive increase in entrainment and thus an excessive increase in low-level clouds.

  8. Influence of seeing effects on cloud model inversions

    Czech Academy of Sciences Publication Activity Database

    Tziotziou, K.; Heinzel, Petr; Tsiropoula, G.

    2007-01-01

    Roč. 472, č. 1 (2007), s. 287-292 ISSN 0004-6361 Institutional research plan: CEZ:AV0Z10030501 Keywords : cloud model * inversions * seeing effects Subject RIV: BN - Astronomy, Celestial Mechanics, Astrophysics Impact factor: 4.259, year: 2007

  9. Cloud blueprint : A model-driven approach to configuring federated clouds

    NARCIS (Netherlands)

    Papazoglou, M.; Abello, A.; Bellatreche, L.; Benatallah, B.

    2012-01-01

Current cloud solutions are fraught with problems. They introduce a monolithic cloud stack that imposes vendor lock-in and does not permit developers to mix and match services freely from diverse cloud service tiers and configure them dynamically to address application needs. Cloud blueprinting is a

  10. The Explicit-Cloud Parameterized-Pollutant hybrid approach for aerosol-cloud interactions in multiscale modeling framework models: tracer transport results

    International Nuclear Information System (INIS)

    Jr, William I Gustafson; Berg, Larry K; Easter, Richard C; Ghan, Steven J

    2008-01-01

    All estimates of aerosol indirect effects on the global energy balance have either completely neglected the influence of aerosol on convective clouds or treated the influence in a highly parameterized manner. Embedding cloud-resolving models (CRMs) within each grid cell of a global model provides a multiscale modeling framework for treating both the influence of aerosols on convective as well as stratiform clouds and the influence of clouds on the aerosol, but treating the interactions explicitly by simulating all aerosol processes in the CRM is computationally prohibitive. An alternate approach is to use horizontal statistics (e.g., cloud mass flux, cloud fraction, and precipitation) from the CRM simulation to drive a single-column parameterization of cloud effects on the aerosol and then use the aerosol profile to simulate aerosol effects on clouds within the CRM. Here, we present results from the first component of the Explicit-Cloud Parameterized-Pollutant parameterization to be developed, which handles vertical transport of tracers by clouds. A CRM with explicit tracer transport serves as a benchmark. We show that this parameterization, driven by the CRM's cloud mass fluxes, reproduces the CRM tracer transport significantly better than a single-column model that uses a conventional convective cloud parameterization
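The "driven by CRM mass fluxes" idea can be sketched for the compensating-subsidence part of the transport: the prescribed updraft mass flux implies downward environmental motion that advects tracer between levels. The first-order upwind column below is purely illustrative; the actual parameterization also carries updraft air aloft and handles detrainment.

```python
import numpy as np

def subsidence_transport(c, M, rho_dz, dt):
    """One step of the compensating-subsidence part of a mass-flux
    tracer transport in a single column (index k increases upward).
    c: tracer mixing ratio per level; M: convective mass flux
    (kg m-2 s-1) per level; rho_dz: air mass per unit area in each
    layer (kg m-2). First-order upwind, stable for dt*M/rho_dz <= 1."""
    c = np.asarray(c, dtype=float)
    out = c.copy()
    for k in range(len(c) - 1):  # top level held fixed in this toy
        # environment subsides: level k acquires air from level k+1
        out[k] += dt * M[k] * (c[k + 1] - c[k]) / rho_dz[k]
    return out
```

Because the update is a convex combination of neighbouring levels when the mass-flux Courant number is below one, the scheme is monotone, mirroring the benchmark property that a parameterization driven by CRM cloud mass fluxes should not create spurious tracer extrema.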

  11. The Explicit-Cloud Parameterized-Pollutant hybrid approach for aerosol-cloud interactions in multiscale modeling framework models: tracer transport results

    Energy Technology Data Exchange (ETDEWEB)

    Jr, William I Gustafson; Berg, Larry K; Easter, Richard C; Ghan, Steven J [Atmospheric Science and Global Change Division, Pacific Northwest National Laboratory, PO Box 999, MSIN K9-30, Richland, WA (United States)], E-mail: William.Gustafson@pnl.gov

    2008-04-15

    All estimates of aerosol indirect effects on the global energy balance have either completely neglected the influence of aerosol on convective clouds or treated the influence in a highly parameterized manner. Embedding cloud-resolving models (CRMs) within each grid cell of a global model provides a multiscale modeling framework for treating both the influence of aerosols on convective as well as stratiform clouds and the influence of clouds on the aerosol, but treating the interactions explicitly by simulating all aerosol processes in the CRM is computationally prohibitive. An alternate approach is to use horizontal statistics (e.g., cloud mass flux, cloud fraction, and precipitation) from the CRM simulation to drive a single-column parameterization of cloud effects on the aerosol and then use the aerosol profile to simulate aerosol effects on clouds within the CRM. Here, we present results from the first component of the Explicit-Cloud Parameterized-Pollutant parameterization to be developed, which handles vertical transport of tracers by clouds. A CRM with explicit tracer transport serves as a benchmark. We show that this parameterization, driven by the CRM's cloud mass fluxes, reproduces the CRM tracer transport significantly better than a single-column model that uses a conventional convective cloud parameterization.

  12. THE COMPOSITION OF INTERSTELLAR GRAINS TOWARD ζ OPHIUCHI: CONSTRAINING THE ELEMENTAL BUDGET NEAR THE DIFFUSE-DENSE CLOUD TRANSITION

    Energy Technology Data Exchange (ETDEWEB)

    Poteet, Charles A.; Whittet, Douglas C. B. [New York Center for Astrobiology, Department of Physics, Applied Physics and Astronomy, Rensselaer Polytechnic Institute, 110 Eighth Street, Troy, NY 12180 (United States); Draine, Bruce T., E-mail: charles.poteet@gmail.com [Princeton University Observatory, Peyton Hall, Princeton, NJ 08544 (United States)

    2015-03-10

    We investigate the composition of interstellar grains along the line of sight toward ζ Ophiuchi, a well-studied environment near the diffuse-dense cloud transition. A spectral decomposition analysis of the solid-state absorbers is performed using archival spectroscopic observations from the Spitzer Space Telescope and Infrared Space Observatory. We find strong evidence for the presence of sub-micron-sized amorphous silicate grains, principally comprised of olivine-like composition, with no convincing evidence of H{sub 2}O ice mantles. However, tentative evidence for thick H{sub 2}O ice mantles on large (a ≈ 2.8 μm) grains is presented. Solid-state abundances of elemental Mg, Si, Fe, and O are inferred from our analysis and compared to standard reference abundances. We find that nearly all of the Mg and Si atoms along the line of sight reside in amorphous silicate grains, while a substantial fraction of the elemental Fe resides in compounds other than silicates. Moreover, we find that the total abundance of elemental O is largely inconsistent with the adopted reference abundances, indicating that as much as ∼156 ppm of interstellar O is missing along the line of sight. After taking into account additional limits on the abundance of elemental O in other O-bearing solids, we conclude that any missing reservoir of elemental O must reside on large grains that are nearly opaque to infrared radiation.

  13. The effects of the Boussinesq model to the rising of the explosion clouds

    International Nuclear Information System (INIS)

    Li Xiaoli; Zheng Yi

    2010-01-01

The rise of explosion clouds in a normal atmosphere is studied using the Boussinesq model and the incompressible model; the numerical model is based on the assumption that the forces affecting the clouds are gravity and buoyancy. By comparing the evolution of clouds of different densities, we conclude that the Boussinesq model and the incompressible model agree when the cloud's density is large compared with the density of the environment. (authors)

  14. A MODELING METHOD OF FLUTTERING LEAVES BASED ON POINT CLOUD

    OpenAIRE

    J. Tang; Y. Wang; Y. Zhao; Y. Zhao; W. Hao; X. Ning; K. Lv; Z. Shi; M. Zhao

    2017-01-01

Leaves falling gently or fluttering are a common phenomenon in natural scenes. The authenticity of falling leaves plays an important part in the dynamic modeling of natural scenes, and falling-leaf models have wide application in animation and virtual reality. In this paper we propose a novel modeling method for fluttering leaves based on point clouds. According to the shape and weight of the leaves and the wind speed, three basic trajectories of falling leaves are defined, which ar...

  15. Constraining quantum collapse inflationary models with CMB data

    Energy Technology Data Exchange (ETDEWEB)

    Benetti, Micol; Alcaniz, Jailson S. [Departamento de Astronomia, Observatório Nacional, 20921-400, Rio de Janeiro, RJ (Brazil); Landau, Susana J., E-mail: micolbenetti@on.br, E-mail: slandau@df.uba.ar, E-mail: alcaniz@on.br [Departamento de Física, Facultad de Ciencias Exactas y Naturales, Universidad de Buenos Aires and IFIBA, CONICET, Ciudad Universitaria, PabI, Buenos Aires 1428 (Argentina)

    2016-12-01

The hypothesis of the self-induced collapse of the inflaton wave function was proposed as being responsible for the emergence of inhomogeneity and anisotropy at all scales. This proposal was studied within an almost de Sitter space-time approximation for the background, which leads to a perfectly scale-invariant power spectrum, and also for a quasi-de Sitter background, which allows one to distinguish departures from the standard approach due to the inclusion of the collapse hypothesis. In this work we perform a Bayesian model comparison for two different choices of the self-induced collapse in a full quasi-de Sitter expansion scenario. In particular, we analyze the possibility of detecting the imprint of these collapse schemes at low multipoles of the anisotropy temperature power spectrum of the Cosmic Microwave Background (CMB) using the most recent data provided by the Planck Collaboration. Our results show that one of the two collapse schemes analyzed provides the same Bayesian evidence as the minimal standard cosmological model ΛCDM, while the other scenario is weakly disfavoured with respect to the standard cosmology.

  16. A Constrained and Versioned Data Model for TEAM Data

    Science.gov (United States)

    Andelman, S.; Baru, C.; Chandra, S.; Fegraus, E.; Lin, K.

    2009-04-01

    The objective of the Tropical Ecology Assessment and Monitoring Network (www.teamnetwork.org) is "To generate real time data for monitoring long-term trends in tropical biodiversity through a global network of TEAM sites (i.e. field stations in tropical forests), providing an early warning system on the status of biodiversity to effectively guide conservation action". To achieve this, the TEAM Network collects data via standardized protocols at TEAM Sites. The standardized TEAM protocols include the Climate, Vegetation and Terrestrial Vertebrate Protocols; some sites also implement additional protocols. There are currently 7 TEAM Sites, with plans to grow the network to 15 by June 30, 2009 and to 50 TEAM Sites by the end of 2010. At each TEAM Site, data are gathered as defined by the protocols and according to a predefined sampling schedule. The TEAM data are organized and stored in a database based on the TEAM spatio-temporal data model. This data model is at the core of the TEAM Information System: it executes spatio-temporal queries and analytical functions on TEAM data, and defines the object data types, relationships and operations that maintain database integrity. The TEAM data model contains object types for observation objects (e.g. bird, butterfly and tree), sampling unit, person, role, protocol and site, and the relationships among these object types. Each observation data record is a set of attribute values of an observation object and is always associated with a sampling unit, an observation timestamp or time interval, a versioned protocol and data collectors. The operations on the TEAM data model can be classified as read, insert and update operations. Following are some typical operations: The operation get(site, protocol, [sampling unit block, sampling unit,] start time, end time) returns all data records using the specified protocol and collected at the specified site, block
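
The read operation described above can be sketched in Python; the record fields and function names here are hypothetical illustrations, not the TEAM Information System's actual API:

```python
from dataclasses import dataclass
from datetime import datetime

# Hypothetical sketch of the read operation described above:
# get(site, protocol, start_time, end_time) returns all records
# collected at a site under a protocol within a time window.
@dataclass
class Record:
    site: str
    protocol: str
    sampling_unit: str
    timestamp: datetime
    values: dict

def get(records, site, protocol, start_time, end_time):
    """Return records for the given site/protocol whose timestamp
    falls in [start_time, end_time]."""
    return [r for r in records
            if r.site == site and r.protocol == protocol
            and start_time <= r.timestamp <= end_time]

records = [
    Record("VB", "Vegetation", "plot-1", datetime(2009, 1, 5), {"dbh_cm": 12.3}),
    Record("VB", "Climate", "station-1", datetime(2009, 1, 6), {"temp_c": 24.1}),
]
hits = get(records, "VB", "Vegetation", datetime(2009, 1, 1), datetime(2009, 2, 1))
print(len(hits))  # -> 1
```

In a production system these records would of course live in a spatio-temporal database rather than a Python list; the sketch only shows the shape of the query.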

  17. A cloud/particle model of the interstellar medium - Galactic spiral structure

    Science.gov (United States)

    Levinson, F. H.; Roberts, W. W., Jr.

    1981-01-01

    A cloud/particle model for gas flow in galaxies is developed that incorporates cloud-cloud collisions and supernovae as dominant local processes. Cloud-cloud collisions are the main means of dissipation. To counter this dissipation and maintain local dispersion, supernova explosions in the medium administer radial snowplow pushes to all nearby clouds. The causal link between these processes is that cloud-cloud collisions will form stars and that these stars will rapidly become supernovae. The cloud/particle model is tested and used to investigate the gas dynamics and spiral structures in galaxies where these assumptions may be reasonable. Particular attention is given to whether large-scale galactic shock waves, which are thought to underlie the regular well-delineated spiral structure in some galaxies, form and persist in a cloud-supernova dominated interstellar medium; this question is answered in the affirmative.

  18. An Experimental Comparison of Similarity Assessment Measures for 3D Models on Constrained Surface Deformation

    Science.gov (United States)

    Quan, Lulin; Yang, Zhixin

    2010-05-01

    To address issues in the area of design customization, this paper describes the specification and application of constrained surface deformation and reports an experimental performance comparison of three prevalent similarity assessment algorithms in the constrained surface deformation domain. Constrained surface deformation is a promising method that supports various downstream applications of customized design. Similarity assessment is regarded as the key technology for inspecting the success of a new design: it measures the difference between the deformed new design and the initial sample model and indicates whether that difference is within the allowed limit. According to our theoretical analysis and pre-experiments, three similarity assessment algorithms are suitable for this domain: the shape-histogram-based method, the skeleton-based method, and the U-system-moment-based method. We analyze their basic functions and implementation methodologies in detail, and run a series of experiments in various situations to test their accuracy and efficiency using precision-recall diagrams. A shoe model is chosen as the industrial example for the experiments. The results show that the shape-histogram-based method gives the best performance of the three. Based on this result, we propose a novel approach that integrates surface constraints and the shape histogram description with an adaptive weighting method, emphasizing the role of the constraints during assessment. Initial, limited experimental results demonstrate that our algorithm outperforms the three existing algorithms. A clear direction for future development is also drawn at the end of the paper.

  19. A Constrained Standard Model: Effects of Fayet-Iliopoulos Terms

    International Nuclear Information System (INIS)

    Barbieri, Riccardo; Hall, Lawrence J.; Nomura, Yasunori

    2001-01-01

    In (1), the one-Higgs-doublet standard model was obtained by an orbifold projection of a 5D supersymmetric theory in an essentially unique way, resulting in a prediction for the Higgs mass m_H = 127 ± 8 GeV and for the compactification scale 1/R = 370 ± 70 GeV. The dominant one-loop contribution to the Higgs potential was found to be finite, while the above uncertainties arose from quadratically divergent brane Z factors and from other higher-loop contributions. In (3), a quadratically divergent Fayet-Iliopoulos term was found at one loop in this theory. We show that the resulting uncertainties in the predictions for the Higgs boson mass and the compactification scale are small, about 25% of the uncertainties quoted above, and hence do not affect the original predictions. However, a tree-level brane Fayet-Iliopoulos term could, if large enough, modify these predictions, especially for 1/R.

  20. TUNNEL POINT CLOUD FILTERING METHOD BASED ON ELLIPTIC CYLINDRICAL MODEL

    Directory of Open Access Journals (Sweden)

    N. Zhu

    2016-06-01

    Full Text Available The large number of bolts and screws attached to the subway shield ring plates, along with the many metal stents and electrical equipment accessories mounted on the tunnel walls, cause laser point cloud data to include many non-tunnel-section points (hereinafter referred to as non-points), thereby affecting the accuracy of modeling and deformation monitoring. This paper proposes a filtering method for the point cloud based on an elliptic cylindrical model. The original laser point cloud data are first projected onto a horizontal plane, and a searching algorithm is given to extract the edge points of both sides, which are then used to fit the tunnel central axis. Along the axis the point cloud is segmented into regions and then fitted as a smooth elliptic cylindrical surface by means of iteration. This processing enables the automatic filtering of the non-points on the inner wall. Two groups of experiments showed consistent results: the method based on the elliptic cylindrical model can effectively filter out the non-points and meet the accuracy requirements for subway deformation monitoring. The method provides a new mode for the periodic monitoring of all-around tunnel section deformation in routine subway operation and maintenance.
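
The filtering idea can be illustrated with a simplified sketch: fit a smooth cross-section model to each slice of points and discard points whose radial residual is large. The paper fits an elliptic cylinder iteratively; for brevity this sketch fits a circle per slice (the algebraic Kasa least-squares fit), and all tolerances and point counts are illustrative:

```python
import numpy as np

# Simplified sketch of the filtering idea: fit a cross-section model to a
# slice of tunnel points, then discard points whose radial residual exceeds
# a tolerance (bolts, brackets, equipment). A circle stands in for the
# paper's ellipse; the iteration refits on inliers only.
def fit_circle(xy):
    """Algebraic (Kasa) least-squares circle fit: x^2+y^2 = a x + b y + c."""
    x, y = xy[:, 0], xy[:, 1]
    A = np.column_stack([x, y, np.ones_like(x)])
    a, b, c = np.linalg.lstsq(A, x**2 + y**2, rcond=None)[0]
    cx, cy = a / 2, b / 2
    return cx, cy, np.sqrt(c + cx**2 + cy**2)

def filter_slice(xy, tol=0.05, n_iter=3):
    keep = np.ones(len(xy), dtype=bool)
    for _ in range(n_iter):          # iterate: refit on inliers only
        cx, cy, r = fit_circle(xy[keep])
        resid = np.abs(np.hypot(xy[:, 0] - cx, xy[:, 1] - cy) - r)
        keep = resid < tol
    return keep

rng = np.random.default_rng(0)
theta = rng.uniform(0, 2 * np.pi, 500)
wall = np.column_stack([3.0 * np.cos(theta), 3.0 * np.sin(theta)])
wall += rng.normal(0, 0.005, wall.shape)   # tunnel lining points (radius 3 m)
bolts = wall[:25] * 0.9                    # protruding non-points
pts = np.vstack([wall, bolts])
keep = filter_slice(pts)
print(keep[:500].mean() > 0.99, keep[500:].any())  # walls kept, bolts dropped
```
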

  1. Cloud-Based Model Calibration Using OpenStudio: Preprint

    Energy Technology Data Exchange (ETDEWEB)

    Hale, E.; Lisell, L.; Goldwasser, D.; Macumber, D.; Dean, J.; Metzger, I.; Parker, A.; Long, N.; Ball, B.; Schott, M.; Weaver, E.; Brackney, L.

    2014-03-01

    OpenStudio is a free, open source Software Development Kit (SDK) and application suite for performing building energy modeling and analysis. The OpenStudio Parametric Analysis Tool has been extended to allow cloud-based simulation of multiple OpenStudio models parametrically related to a baseline model. This paper describes the new cloud-based simulation functionality and presents a model calibration case study. Calibration is initiated by entering actual monthly utility bill data into the baseline model. Multiple parameters are then varied over multiple iterations to reduce the difference between actual energy consumption and model simulation results, as calculated and visualized by billing period and by fuel type. Simulations are performed in parallel using the Amazon Elastic Compute Cloud (EC2) service. This paper highlights the model parameterizations (measures) used for calibration, but the same multi-node computing architecture is available for other purposes, for example, recommending combinations of retrofit energy saving measures using the calibrated model as the new baseline.
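
The calibration loop described above can be sketched as follows. This is not the OpenStudio API; the parameter names, the stub simulation function, and all numbers are stand-ins for cloud-run EnergyPlus simulations and real utility bills:

```python
import itertools

# Illustrative sketch of the calibration loop: vary model parameters,
# simulate, and score each candidate against actual monthly utility
# bills by CV(RMSE), keeping the best-scoring parameter set.
actual_kwh = [820, 760, 700, 640, 600, 650, 720, 730, 660, 640, 700, 790]

def simulate(infiltration_ach, lighting_w_m2):
    # Stand-in for a cloud-run building energy simulation.
    base = [800, 740, 690, 630, 600, 640, 710, 720, 650, 630, 690, 780]
    return [m * (0.9 + 0.1 * infiltration_ach) + 10 * lighting_w_m2 for m in base]

def cvrmse(actual, sim):
    """Coefficient of variation of RMSE over billing periods."""
    n = len(actual)
    rmse = (sum((a - s) ** 2 for a, s in zip(actual, sim)) / n) ** 0.5
    return rmse / (sum(actual) / n)

grid = itertools.product([0.5, 1.0, 1.5], [0.5, 1.0, 2.0])
best = min(grid, key=lambda p: cvrmse(actual_kwh, simulate(*p)))
print(best)  # -> (1.0, 1.0)
```

In the real workflow each grid point is an OpenStudio measure combination dispatched to a cloud worker, so the candidate evaluations run in parallel rather than in this serial loop.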

  2. Intercomparison of model simulations of mixed-phase clouds observed during the ARM Mixed-Phase Arctic Cloud Experiment. Part I: Single layer cloud

    Energy Technology Data Exchange (ETDEWEB)

    Klein, S A; McCoy, R B; Morrison, H; Ackerman, A; Avramov, A; deBoer, G; Chen, M; Cole, J; DelGenio, A; Golaz, J; Hashino, T; Harrington, J; Hoose, C; Khairoutdinov, M; Larson, V; Liu, X; Luo, Y; McFarquhar, G; Menon, S; Neggers, R; Park, S; Poellot, M; von Salzen, K; Schmidt, J; Sednev, I; Shipway, B; Shupe, M; Spangenberg, D; Sud, Y; Turner, D; Veron, D; Falk, M; Foster, M; Fridlind, A; Walker, G; Wang, Z; Wolf, A; Xie, S; Xu, K; Yang, F; Zhang, G

    2008-02-27

    Results are presented from an intercomparison of single-column and cloud-resolving model simulations of a cold-air outbreak mixed-phase stratocumulus cloud observed during the Atmospheric Radiation Measurement (ARM) program's Mixed-Phase Arctic Cloud Experiment. The observed cloud occurred in a well-mixed boundary layer with a cloud top temperature of -15°C. The observed liquid water path of around 160 g m⁻² was about two-thirds of the adiabatic value and much greater than the mass of ice crystal precipitation, which, when integrated from the surface to cloud top, was around 15 g m⁻². The simulations were performed by seventeen single-column models (SCMs) and nine cloud-resolving models (CRMs). While the simulated ice water path is generally consistent with the observed values, the median SCM and CRM liquid water path is a factor of three smaller than observed. Results from a sensitivity study in which models removed ice microphysics indicate that in many models the interaction between liquid- and ice-phase microphysics is responsible for the large model underestimate of liquid water path. Despite this general underestimate, the simulated liquid and ice water paths of several models are consistent with the observed values. Furthermore, there is some evidence that models with more sophisticated microphysics simulate liquid and ice water paths that are in better agreement with the observed values, although considerable scatter is also present. Although no single factor guarantees a good simulation, these results emphasize the need for improvement in the model representation of mixed-phase microphysics. This case study, which has been well observed from both aircraft and ground-based remote sensors, could be a benchmark for model simulations of mixed-phase clouds.

  3. Space Science Cloud: a Virtual Space Science Research Platform Based on Cloud Model

    Science.gov (United States)

    Hu, Xiaoyan; Tong, Jizhou; Zou, Ziming

    Through independent and co-operational science missions, the Strategic Pioneer Program (SPP) on Space Science, the new space science initiative in China approved by CAS and implemented by the National Space Science Center (NSSC), is dedicated to seeking new discoveries and breakthroughs in space science, thus deepening our understanding of the universe and planet Earth. In the framework of this program, in order to support the operations of space science missions and satisfy the demand of related research activities for e-Science, NSSC is developing a virtual space science research platform based on the cloud model, namely the Space Science Cloud (SSC). In order to support mission demonstration, SSC integrates an interactive satellite orbit design tool, a satellite structure and payload layout design tool, a payload observation coverage analysis tool, etc., to help scientists analyze and verify space science mission designs. Another important function of SSC is supporting mission operations, which run through the space satellite data pipelines. Mission operators can acquire and process observation data, then distribute the data products to other systems or issue the data and archives with the services of SSC. In addition, SSC provides useful data, tools and models for space researchers. Several databases in the field of space science are integrated, and an efficient retrieval system is under development. Common tools for data visualization, deep processing (e.g., smoothing and filtering tools), analysis (e.g., an FFT analysis tool and a minimum variance analysis tool) and mining (e.g., a proton event correlation analysis tool) are also integrated to help researchers better utilize the data. The space weather models on SSC include a magnetic storm forecast model, a multi-station middle and upper atmospheric climate model, a solar energetic particle propagation model and so on. All the services mentioned above are based on the e-Science infrastructures of CAS, e.g. cloud storage and

  4. Validation of the Two-Layer Model for Correcting Clear Sky Reflectance Near Clouds

    Science.gov (United States)

    Wen, Guoyong; Marshak, Alexander; Evans, K. Frank; Vamal, Tamas

    2014-01-01

    A two-layer model was developed in our earlier studies to estimate the clear-sky reflectance enhancement near clouds. This simple model accounts for the radiative interaction between boundary layer clouds and the molecular layer above, the major contribution to the reflectance enhancement near clouds at short wavelengths. We use LES/SHDOM-simulated 3D radiation fields to validate the two-layer model for the reflectance enhancement at 0.47 micrometers. We find: (a) the simple model captures the viewing-angle dependence of the reflectance enhancement near clouds, suggesting the physics of the model is correct; and (b) the magnitude of the two-layer modeled enhancement agrees reasonably well with the "truth", with some expected underestimation. We further extend the model to include cloud-surface interaction using the Poisson model for broken clouds. We find that including cloud-surface interaction improves the correction, though it can introduce some overcorrection for large cloud albedo, large cloud optical depth, large cloud fraction, or large cloud aspect ratio. This overcorrection can be reduced by excluding scenes (10 km x 10 km) with large cloud fraction, for which the Poisson model is not designed. Further research is underway to account for the contribution of cloud-aerosol radiative interaction to the enhancement.

  5. Development of a Cloud Resolving Model for Heterogeneous Supercomputers

    Science.gov (United States)

    Sreepathi, S.; Norman, M. R.; Pal, A.; Hannah, W.; Ponder, C.

    2017-12-01

    A cloud resolving climate model is needed to reduce major systematic errors in climate simulations due to structural uncertainty in numerical treatments of convection - such as convective storm systems. This research describes the porting effort to enable SAM (System for Atmosphere Modeling) cloud resolving model on heterogeneous supercomputers using GPUs (Graphical Processing Units). We have isolated a standalone configuration of SAM that is targeted to be integrated into the DOE ACME (Accelerated Climate Modeling for Energy) Earth System model. We have identified key computational kernels from the model and offloaded them to a GPU using the OpenACC programming model. Furthermore, we are investigating various optimization strategies intended to enhance GPU utilization including loop fusion/fission, coalesced data access and loop refactoring to a higher abstraction level. We will present early performance results, lessons learned as well as optimization strategies. The computational platform used in this study is the Summitdev system, an early testbed that is one generation removed from Summit, the next leadership class supercomputer at Oak Ridge National Laboratory. The system contains 54 nodes wherein each node has 2 IBM POWER8 CPUs and 4 NVIDIA Tesla P100 GPUs. This work is part of a larger project, ACME-MMF component of the U.S. Department of Energy(DOE) Exascale Computing Project. The ACME-MMF approach addresses structural uncertainty in cloud processes by replacing traditional parameterizations with cloud resolving "superparameterization" within each grid cell of global climate model. Super-parameterization dramatically increases arithmetic intensity, making the MMF approach an ideal strategy to achieve good performance on emerging exascale computing architectures. The goal of the project is to integrate superparameterization into ACME, and explore its full potential to scientifically and computationally advance climate simulation and prediction.

  6. Maximum entropy production: Can it be used to constrain conceptual hydrological models?

    Science.gov (United States)

    M.C. Westhoff; E. Zehe

    2013-01-01

    In recent years, optimality principles have been proposed to constrain hydrological models. The principle of maximum entropy production (MEP) is one such principle and is the subject of this study. It states that a steady-state system is organized in such a way that entropy production is maximized. Although successful applications have been reported in...

  7. Improved Modeling Approaches for Constrained Sintering of Bi-Layered Porous Structures

    DEFF Research Database (Denmark)

    Tadesse Molla, Tesfaye; Frandsen, Henrik Lund; Esposito, Vincenzo

    2012-01-01

    Shape instabilities during constrained sintering experiments of bi-layer porous and dense cerium gadolinium oxide (CGO) structures have been analyzed. An analytical and a numerical model, based on the continuum theory of sintering, have been implemented to describe the evolution of bow and densificat...

  8. Satellite remote sensing and cloud modeling of St. Anthony, Minnesota storm clouds and dew point depression

    Science.gov (United States)

    Hung, R. J.; Tsao, Y. D.

    1988-01-01

    Rawinsonde data and geosynchronous satellite imagery were used to investigate the life cycles of St. Anthony, Minnesota's severe convective storms. It is found that the fully developed storm clouds, with overshooting cloud tops penetrating above the tropopause, collapsed about three minutes before the touchdown of the tornadoes. Results indicate that the probability of an outbreak of tornadoes causing greater damage increases with higher values of potential energy storage per unit area for overshooting cloud tops penetrating the tropopause. It is also found that clouds with a lower moisture content are less likely to grow into storm clouds than clouds with a higher moisture content.

  9. Monte Carlo Bayesian inference on a statistical model of sub-gridcolumn moisture variability using high-resolution cloud observations. Part 2: Sensitivity tests and results

    Science.gov (United States)

    Norris, Peter M.; da Silva, Arlindo M.

    2018-01-01

    Part 1 of this series presented a Monte Carlo Bayesian method for constraining a complex statistical model of global circulation model (GCM) sub-gridcolumn moisture variability using high-resolution Moderate Resolution Imaging Spectroradiometer (MODIS) cloud data, thereby permitting parameter estimation and cloud data assimilation for large-scale models. This article performs some basic testing of this new approach, verifying that it does indeed reduce mean and standard deviation biases significantly with respect to the assimilated MODIS cloud optical depth, brightness temperature and cloud-top pressure and that it also improves the simulated rotational–Raman scattering cloud optical centroid pressure (OCP) against independent (non-assimilated) retrievals from the Ozone Monitoring Instrument (OMI). Of particular interest, the Monte Carlo method does show skill in the especially difficult case where the background state is clear but cloudy observations exist. In traditional linearized data assimilation methods, a subsaturated background cannot produce clouds via any infinitesimal equilibrium perturbation, but the Monte Carlo approach allows non-gradient-based jumps into regions of non-zero cloud probability. In the example provided, the method is able to restore marine stratocumulus near the Californian coast, where the background state has a clear swath. This article also examines a number of algorithmic and physical sensitivities of the new method and provides guidance for its cost-effective implementation. One obvious difficulty for the method, and other cloud data assimilation methods as well, is the lack of information content in passive-radiometer-retrieved cloud observables on cloud vertical structure, beyond cloud-top pressure and optical thickness, thus necessitating strong dependence on the background vertical moisture structure. It is found that a simple flow-dependent correlation modification from Riishojgaard provides some help in this respect, by
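
The advantage of finite Monte Carlo jumps over infinitesimal linearized perturbations can be illustrated with a toy one-variable example. All quantities and numbers below are illustrative assumptions, not the paper's actual model: a Metropolis sampler started from a subsaturated background still reaches cloudy states that match a cloudy observation.

```python
import math, random

# Toy illustration (assumed numbers, not the paper's setup) of why finite
# jumps help: a subsaturated background can still reach cloudy states that
# match a cloudy observation, which no infinitesimal (linearized)
# perturbation of the background can do.
random.seed(1)
q_sat, q_bg, sigma_bg = 10.0, 9.0, 1.0   # saturation; background mean/spread
tau_obs, sigma_obs = 5.0, 1.0            # observed cloud optical depth

def tau(q):
    """Cloud optical depth: cloud forms only above saturation."""
    return max(0.0, 2.0 * (q - q_sat))

def log_post(q):
    """Background prior plus cloudy-observation likelihood."""
    return (-0.5 * ((q - q_bg) / sigma_bg) ** 2
            - 0.5 * ((tau(q) - tau_obs) / sigma_obs) ** 2)

q, samples = q_bg, []
for _ in range(20000):                   # Metropolis with finite jumps
    prop = q + random.gauss(0.0, 1.0)
    if math.log(random.random()) < log_post(prop) - log_post(q):
        q = prop
    samples.append(q)

cloudy_frac = sum(s > q_sat for s in samples) / len(samples)
print(cloudy_frac > 0.5)  # most posterior mass ends up in the cloudy regime
```

A gradient-based increment from q = 9 sees zero sensitivity of tau to q (the cloud scheme is flat below saturation), so it cannot create cloud; the finite proposals cross the threshold directly.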

  11. Approximate models for broken clouds in stochastic radiative transfer theory

    International Nuclear Information System (INIS)

    Doicu, Adrian; Efremenko, Dmitry S.; Loyola, Diego; Trautmann, Thomas

    2014-01-01

    This paper presents approximate models in stochastic radiative transfer theory. The independent column approximation and its modified version with a solar source computed in a full three-dimensional atmosphere are formulated in a stochastic framework and for arbitrary cloud statistics. The nth-order stochastic models describing the independent column approximations are equivalent to the nth-order stochastic models for the original radiance fields in which the gradient vectors are neglected. Fast approximate models are further derived on the basis of zeroth-order stochastic models and the independent column approximation. The so-called “internal mixing” models assume a combination of the optical properties of the cloud and the clear sky, while the “external mixing” models assume a combination of the radiances corresponding to completely overcast and clear skies. A consistent treatment of internal and external mixing models is provided, and a new parameterization of the closure coefficient in the effective thickness approximation is given. An efficient computation of the closure coefficient for internal mixing models, using a previously derived vector stochastic model as a reference, is also presented. Equipped with appropriate look-up tables for the closure coefficient, these models can easily be integrated into operational trace gas retrieval systems that exploit absorption features in the near-IR solar spectrum. - Highlights: • Independent column approximation in a stochastic setting. • Fast internal and external mixing models for total and diffuse radiances. • Efficient optimization of internal mixing models to match reference models

  12. On meeting capital requirements with a chance-constrained optimization model.

    Science.gov (United States)

    Atta Mills, Ebenezer Fiifi Emire; Yu, Bo; Gu, Lanlan

    2016-01-01

    This paper deals with a capital-to-risk-asset-ratio chance-constrained optimization model in the presence of loans, treasury bills, fixed assets and non-interest-earning assets. To model the dynamics of loans, we introduce a modified CreditMetrics approach. This leads to the development of a deterministic convex counterpart of the capital-to-risk-asset-ratio chance constraint. We analyze our model under the worst-case scenario, i.e. loan default. The theoretical model is analyzed by applying numerical procedures in order to draw valuable insights from a financial outlook. Our results suggest that our capital-to-risk-asset-ratio chance-constrained optimization model guarantees that banks meet the capital requirements of Basel III with a likelihood of 95% irrespective of changes in the future market value of assets.
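
The idea of a deterministic counterpart of a chance constraint can be illustrated under a normality assumption. The numbers and the single-asset setup below are illustrative, not the paper's CreditMetrics-based model:

```python
from statistics import NormalDist

# Illustrative sketch of the deterministic convex counterpart of a
# chance constraint under normality:
#   P(capital / risk_assets >= r_min) >= 0.95
# With risk-weighted assets A ~ N(mu, sigma^2) and fixed capital C,
# this is P(A <= C / r_min) >= 0.95, i.e. C >= r_min * (mu + z_0.95 * sigma).
z = NormalDist().inv_cdf(0.95)   # one-sided 95% quantile, ~1.645
mu, sigma = 100.0, 8.0           # risk-weighted assets: mean and std
r_min = 0.08                     # Basel III-style minimum capital ratio

required_capital = r_min * (mu + z * sigma)
print(round(required_capital, 3))  # -> 9.053
```

The 95% probability guarantee thus becomes an ordinary linear inequality in the decision variables, which is what makes the chance-constrained program tractable.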

  13. A simple dynamic rising nuclear cloud based model of ground radioactive fallout for atmospheric nuclear explosion

    International Nuclear Information System (INIS)

    Zheng Yi

    2008-01-01

    A simple model of ground radioactive fallout from atmospheric nuclear explosions, based on a dynamically rising nuclear cloud, is presented. The model accounts for the deposition of particles and the change of the initial cloud radius with time before cloud stabilization; large-scale relative diffusion theory is used after cloud stabilization. The model is found to be reasonable and dependable in comparison with four U.S. nuclear test cases and with DELFIC model results. (authors)

  14. A Location-Based Interactive Model of Internet of Things and Cloud (IoT-Cloud) for Mobile Cloud Computing Applications.

    Science.gov (United States)

    Dinh, Thanh; Kim, Younghan; Lee, Hyukjoon

    2017-03-01

    This paper presents a location-based interactive model of Internet of Things (IoT) and cloud integration (IoT-cloud) for mobile cloud computing applications, in comparison with the periodic sensing model. In the latter, sensing collections are performed without awareness of sensing demands: sensors are required to report their sensing data periodically regardless of whether or not there is demand for their sensing services, which leads to unnecessary energy loss due to redundant transmission. In the proposed model, IoT-cloud provides sensing services on demand, based on the interest and location of mobile users. By taking advantage of the cloud as a coordinator, the sensing schedule of sensors is controlled by the cloud, which knows when and where mobile users request sensing services. Therefore, when there is no demand, sensors are put into an inactive mode to save energy. Through extensive analysis and experimental results, we show that the location-based model achieves a significant improvement in network lifetime compared to the periodic model.
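
The energy argument can be illustrated with a toy transmission count. The slot structure, sensor count, and demand probability are illustrative assumptions, not the paper's experimental setup:

```python
import random

# Toy comparison of the two sensing models: periodic sensors report in
# every slot; on-demand sensors report only when the cloud forwards a
# user request for their location.
random.seed(7)
slots, n_sensors, demand_prob = 1000, 20, 0.1

periodic_tx = slots * n_sensors          # every sensor, every slot
on_demand_tx = sum(1 for _ in range(slots) for _ in range(n_sensors)
                   if random.random() < demand_prob)

print(periodic_tx, on_demand_tx < periodic_tx)
```

With demand in roughly 10% of sensor-slots, the on-demand model transmits about a tenth as often, which is the mechanism behind the reported network-lifetime improvement.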

  17. Comparison of convective clouds observed by spaceborne W-band radar and simulated by cloud-resolving atmospheric models

    Science.gov (United States)

    Dodson, Jason B.

    Deep convective clouds (DCCs) play an important role in regulating global climate through vertical mass flux, vertical water transport, and radiation. For general circulation models (GCMs) to simulate the global climate realistically, they must simulate DCCs realistically. GCMs have traditionally used cumulus parameterizations (CPs). Much recent research has shown that multiple persistent unrealistic behaviors in GCMs are related to limitations of CPs. Two alternatives to CPs exist: the global cloud-resolving model (GCRM), and the multiscale modeling framework (MMF). Both can directly simulate the coarser features of DCCs because of their multi-kilometer horizontal resolutions, and can simulate large-scale meteorological processes more realistically than GCMs. However, the question of realistic behavior of simulated DCCs remains. How closely do simulated DCCs resemble observed DCCs? In this study I examine the behavior of DCCs in the Nonhydrostatic Icosahedral Atmospheric Model (NICAM) and Superparameterized Community Atmospheric Model (SP-CAM), the latter with both single-moment and double-moment microphysics. I place particular emphasis on the relationship between cloud vertical structure and convective environment. I also emphasize the transition between shallow clouds and mature DCCs. The spatial domains used are the tropical oceans and the contiguous United States (CONUS), the latter of which produces frequent vigorous convection during the summer. CloudSat is used to observe DCCs, and A-Train and reanalysis data are used to represent the large-scale environment in which the clouds form. The CloudSat cloud mask and radar reflectivity profiles for CONUS cumuliform clouds (defined as clouds with a base within the planetary boundary layer) during boreal summer are first averaged and compared. Both NICAM and SP-CAM greatly underestimate the vertical growth of cumuliform clouds. Then they are sorted by three large-scale environmental variables: total precipitable

  18. Monte Carlo Bayesian Inference on a Statistical Model of Sub-gridcolumn Moisture Variability Using High-resolution Cloud Observations. Part II: Sensitivity Tests and Results

    Science.gov (United States)

    da Silva, Arlindo M.; Norris, Peter M.

    2013-01-01

    Part I presented a Monte Carlo Bayesian method for constraining a complex statistical model of GCM sub-gridcolumn moisture variability using high-resolution MODIS cloud data, thereby permitting large-scale model parameter estimation and cloud data assimilation. This part performs some basic testing of this new approach, verifying that it does indeed significantly reduce mean and standard deviation biases with respect to the assimilated MODIS cloud optical depth, brightness temperature and cloud top pressure, and that it also improves the simulated rotational-Raman scattering cloud optical centroid pressure (OCP) against independent (non-assimilated) retrievals from the OMI instrument. Of particular interest, the Monte Carlo method does show skill in the especially difficult case where the background state is clear but cloudy observations exist. In traditional linearized data assimilation methods, a subsaturated background cannot produce clouds via any infinitesimal equilibrium perturbation, but the Monte Carlo approach allows finite jumps into regions of non-zero cloud probability. In the example provided, the method is able to restore marine stratocumulus near the Californian coast where the background state has a clear swath. This paper also examines a number of algorithmic and physical sensitivities of the new method and provides guidance for its cost-effective implementation. One obvious difficulty for the method, and other cloud data assimilation methods as well, is the lack of information content in the cloud observables on cloud vertical structure, beyond cloud top pressure and optical thickness, thus necessitating strong dependence on the background vertical moisture structure. It is found that a simple flow-dependent correlation modification due to Riishojgaard (1998) provides some help in this respect, by better honoring inversion structures in the background state.
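
    The advantage of finite Monte Carlo jumps over infinitesimal linearized increments can be illustrated with a toy sketch (not the paper's actual algorithm): a random-walk Metropolis sampler whose finite proposals can hop from a "clear" mode of a bimodal target into a separated "cloudy" mode, whereas an infinitesimal perturbation from the clear mode cannot cross the near-zero-probability gap. The target density and all names below are invented for illustration.

```python
import math
import random

def metropolis(logp, x0, step, n, seed=0):
    """Random-walk Metropolis sampler with finite uniform proposals."""
    rnd = random.Random(seed)
    x = x0
    lp = logp(x)
    samples = []
    for _ in range(n):
        cand = x + rnd.uniform(-step, step)
        lp_cand = logp(cand)
        # Accept with probability min(1, p(cand)/p(x)); finite jumps let
        # the chain cross low-probability gaps that infinitesimal
        # equilibrium perturbations cannot.
        if math.log(rnd.random() + 1e-300) < lp_cand - lp:
            x, lp = cand, lp_cand
        samples.append(x)
    return samples

def toy_logp(x):
    """Bimodal toy density: a 'clear' mode at 0 and a 'cloudy' mode at 3."""
    return math.log(math.exp(-0.5 * (x / 0.3) ** 2)
                    + math.exp(-0.5 * ((x - 3.0) / 0.3) ** 2) + 1e-300)

# Start in the clear mode; the chain still visits the cloudy mode near x = 3.
chain = metropolis(toy_logp, x0=0.0, step=4.0, n=5000, seed=1)
```

    Despite starting at the clear mode, the chain spends a substantial fraction of its samples in the cloudy mode, which is the qualitative behavior the abstract attributes to the Monte Carlo approach.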

  19. Intercomparison of model simulations of mixed-phase clouds observed during the ARM Mixed-Phase Arctic Cloud Experiment. Part I: Single layer cloud

    Energy Technology Data Exchange (ETDEWEB)

    Klein, Stephen A.; McCoy, Renata B.; Morrison, Hugh; Ackerman, Andrew S.; Avramov, Alexander; de Boer, Gijs; Chen, Mingxuan; Cole, Jason N.S.; Del Genio, Anthony D.; Falk, Michael; Foster, Michael J.; Fridlind, Ann; Golaz, Jean-Christophe; Hashino, Tempei; Harrington, Jerry Y.; Hoose, Corinna; Khairoutdinov, Marat F.; Larson, Vincent E.; Liu, Xiaohong; Luo, Yali; McFarquhar, Greg M.; Menon, Surabi; Neggers, Roel A. J.; Park, Sungsu; Poellot, Michael R.; Schmidt, Jerome M.; Sednev, Igor; Shipway, Ben J.; Shupe, Matthew D.; Spangenberg, Douglas A.; Sud, Yogesh C.; Turner, David D.; Veron, Dana E.; von Salzen, Knut; Walker, Gregory K.; Wang, Zhien; Wolf, Audrey B.; Xie, Shaocheng; Xu, Kuan-Man; Yang, Fanglin; Zhang, Gong

    2009-02-02

    Results are presented from an intercomparison of single-column and cloud-resolving model simulations of a cold-air outbreak mixed-phase stratocumulus cloud observed during the Atmospheric Radiation Measurement (ARM) program's Mixed-Phase Arctic Cloud Experiment. The observed cloud occurred in a well-mixed boundary layer with a cloud top temperature of -15 C. The observed average liquid water path of around 160 g m^-2 was about two-thirds of the adiabatic value and much greater than the average mass of ice crystal precipitation which when integrated from the surface to cloud top was around 15 g m^-2. The simulations were performed by seventeen single-column models (SCMs) and nine cloud-resolving models (CRMs). While the simulated ice water path is generally consistent with the observed values, the median SCM and CRM liquid water path is a factor of three smaller than observed. Results from a sensitivity study in which models removed ice microphysics suggest that in many models the interaction between liquid and ice-phase microphysics is responsible for the large model underestimate of liquid water path. Despite this general underestimate, the simulated liquid and ice water paths of several models are consistent with the observed values. Furthermore, there is evidence that models with more sophisticated microphysics simulate liquid and ice water paths that are in better agreement with the observed values, although considerable scatter is also present. Although no single factor guarantees a good simulation, these results emphasize the need for improvement in the model representation of mixed-phase microphysics.

  20. Using a cloud to replenish parched groundwater modeling efforts.

    Science.gov (United States)

    Hunt, Randall J; Luchette, Joseph; Schreuder, Willem A; Rumbaugh, James O; Doherty, John; Tonkin, Matthew J; Rumbaugh, Douglas B

    2010-01-01

    Groundwater models can be improved by introduction of additional parameter flexibility and simultaneous use of soft-knowledge. However, these sophisticated approaches have high computational requirements. Cloud computing provides unprecedented access to computing power via the Internet to facilitate the use of these techniques. A modeler can create, launch, and terminate "virtual" computers as needed, paying by the hour, and save machine images for future use. Such cost-effective and flexible computing power empowers groundwater modelers to routinely perform model calibration and uncertainty analysis in ways not previously possible.

  2. FINDING CUBOID-BASED BUILDING MODELS IN POINT CLOUDS

    Directory of Open Access Journals (Sweden)

    W. Nguatem

    2012-07-01

    Full Text Available In this paper, we present an automatic approach for the derivation of 3D building models of level-of-detail 1 (LOD 1) from point clouds obtained from (dense) image matching or, for comparison only, from LIDAR. Our approach makes use of the predominance of vertical structures and orthogonal intersections in architectural scenes. After robustly determining the scene's vertical direction based on the 3D points we use it as constraint for a RANSAC-based search for vertical planes in the point cloud. The planes are further analyzed to segment reliable outlines for rectangular surfaces within these planes, which are connected to construct cuboid-based building models. We demonstrate that our approach is robust and effective over a range of real-world input data sets with varying point density, amount of noise, and outliers.
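
    The core RANSAC plane search described above can be sketched as follows. This is a generic, hypothetical illustration using NumPy: it hypothesizes a plane from three random points and scores it by inlier count, omitting the paper's additional constraint that candidate planes be vertical.

```python
import numpy as np

def ransac_plane(points, n_iter=200, tol=0.05, rng=None):
    """RANSAC plane fit: return (normal, d, n_inliers) for n.x + d = 0."""
    rng = np.random.default_rng(rng)
    best_n, best_d, best_count = None, None, -1
    for _ in range(n_iter):
        # Hypothesize a plane from a random minimal sample of 3 points.
        p0, p1, p2 = points[rng.choice(len(points), 3, replace=False)]
        n = np.cross(p1 - p0, p2 - p0)
        norm = np.linalg.norm(n)
        if norm < 1e-12:          # degenerate (collinear) sample
            continue
        n = n / norm
        d = -n @ p0
        # Score the hypothesis by counting points within tol of the plane.
        count = int((np.abs(points @ n + d) < tol).sum())
        if count > best_count:
            best_n, best_d, best_count = n, d, count
    return best_n, best_d, best_count

# Synthetic cloud: 100 points on the plane z = 0 plus 20 outliers above it.
gen = np.random.default_rng(0)
cloud = np.zeros((100, 3))
cloud[:, :2] = gen.uniform(-1.0, 1.0, (100, 2))
outliers = gen.uniform(-1.0, 1.0, (20, 3)) + np.array([0.0, 0.0, 2.0])
normal, d, count = ransac_plane(np.vstack([cloud, outliers]), rng=1)
```

    On this synthetic input the fit recovers the z = 0 plane and rejects the outliers; the paper's version restricts the sampled planes to those containing the estimated vertical direction.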

  3. Comprehensive models of diffuse interstellar clouds : physical conditions and molecular abundances

    NARCIS (Netherlands)

    Dishoeck, van E.F.; Black, J.H.

    1986-01-01

    The limitations of steady state models of interstellar clouds are explored by means of comparison with observational data corresponding to clouds in front of Zeta Per, Zeta Oph, Chi Oph, and Omicron Per. The improved cloud models were constructed to reproduce the observed H and H2(J) column

  4. Towards a government public cloud model: The case of South Africa

    CSIR Research Space (South Africa)

    Mvelase, PS

    2013-06-01

    Full Text Available the government to benefit from other cloud computing advantages. However, modelling a multidimensional social problem as complex as the public cloud for a national government requires time, knowledge and experience from a wide range of specialization disciplines...

  5. Generalized Additive Models for Nowcasting Cloud Shading

    Czech Academy of Sciences Publication Activity Database

    Brabec, Marek; Paulescu, M.; Badescu, V.

    2014-01-01

    Vol. 101, March (2014), pp. 272-282 ISSN 0038-092X R&D Projects: GA MŠk LD12009 Grant - others: European Cooperation in Science and Technology (XE) COST ES1002 Institutional support: RVO:67985807 Keywords: sunshine number * nowcasting * generalized additive model * Markov chain Subject RIV: BB - Applied Statistics, Operational Research Impact factor: 3.469, year: 2014

  6. Cloud Based Metalearning System for Predictive Modeling of Biomedical Data

    Directory of Open Access Journals (Sweden)

    Milan Vukićević

    2014-01-01

    Full Text Available Rapid growth and storage of biomedical data enabled many opportunities for predictive modeling and improvement of healthcare processes. On the other side analysis of such large amounts of data is a difficult and computationally intensive task for most existing data mining algorithms. This problem is addressed by proposing a cloud based system that integrates metalearning framework for ranking and selection of best predictive algorithms for data at hand and open source big data technologies for analysis of biomedical data.

  7. Evaluation of NCMRWF unified model vertical cloud structure with CloudSat over the Indian summer monsoon region

    Science.gov (United States)

    Jayakumar, A.; Mamgain, Ashu; Jisesh, A. S.; Mohandas, Saji; Rakhi, R.; Rajagopal, E. N.

    2016-05-01

    Representation of rainfall distribution and monsoon circulation in the high resolution versions of the NCMRWF Unified model (NCUM-REG) for the short-range forecasting of extreme rainfall events depends strongly on key factors such as vertical cloud distribution, convection and the convection/cloud relationship in the model. Hence it is highly relevant to evaluate the vertical structure of cloud and precipitation of the model over the monsoon environment. In this regard, we utilized the synergy of the capabilities of CloudSat data over a long observational period, by conditioning it on the synoptic situation of the model simulation period. Simulations were run at 4-km grid length with the convective parameterization effectively switched off and on. Since the sample of CloudSat overpasses through the monsoon domain is small, the aforementioned methodology can only qualitatively evaluate the vertical cloud structure for the model simulation period. It is envisaged that the present study will open up the possibility of further improvement in the high resolution version of NCUM in the tropics for Indian summer monsoon associated rainfall events.

  8. Cloud data centers and cost modeling a complete guide to planning, designing and building a cloud data center

    CERN Document Server

    Wu, Caesar

    2015-01-01

    Cloud Data Centers and Cost Modeling establishes a framework for strategic decision-makers to facilitate the development of cloud data centers. Just as building a house requires a clear understanding of the blueprints, architecture, and costs of the project; building a cloud-based data center requires similar knowledge. The authors take a theoretical and practical approach, starting with the key questions to help uncover needs and clarify project scope. They then demonstrate probability tools to test and support decisions, and provide processes that resolve key issues. After laying a foundati

  9. Criticisms and defences of the balance-of-payments constrained growth model: some old, some new

    Directory of Open Access Journals (Sweden)

    John S.L. McCombie

    2011-12-01

    Full Text Available This paper assesses various critiques that have been levelled over the years against Thirlwall’s Law and the balance-of-payments constrained growth model. It starts by assessing the criticisms that the law is largely capturing an identity; that the law of one price renders the model incoherent; and that statistical testing using cross-country data rejects the hypothesis that the actual and the balance-of-payments equilibrium growth rates are the same. It goes on to consider the argument that calculations of the “constant-market-shares” income elasticities of demand for exports demonstrate that the UK (and by implication other advanced countries could not have been balance-of-payments constrained in the early postwar period. Next Krugman’s interpretation of the law (or what he terms the “45-degree rule”, which is at variance with the usual demand-oriented explanation, is examined. The paper next assesses attempts to reconcile the demand and supply side of the model and examines whether or not the balance-of-payments constrained growth model is subject to the fallacy of composition. It concludes that none of these criticisms invalidate the model, which remains a powerful explanation of why growth rates differ.
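
    In its simplest form, Thirlwall's Law states that the balance-of-payments constrained growth rate is y_B = x/π, export volume growth divided by the income elasticity of demand for imports, or equivalently y_B = εz/π with ε the income elasticity of export demand and z world income growth. A minimal numeric sketch with illustrative, hypothetical figures:

```python
def thirlwall_simple(export_growth, import_elasticity):
    """y_B = x / pi: constrained growth from export growth x (% per year)
    and the income elasticity of demand for imports pi."""
    return export_growth / import_elasticity

def thirlwall_extended(export_elasticity, world_growth, import_elasticity):
    """y_B = (epsilon * z) / pi: the equivalent form with export demand
    elasticity epsilon and world income growth z (% per year)."""
    return export_elasticity * world_growth / import_elasticity

# Hypothetical numbers: exports grow 4% a year and the income elasticity
# of imports is 2, so balance-of-payments constrained growth is 2% a year.
y_b = thirlwall_simple(4.0, 2.0)
```

    The same 2% follows from the extended form with, say, ε = 2 and world growth z = 2%, which is the sense in which relative income elasticities govern why growth rates differ.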

  10. Inexact nonlinear improved fuzzy chance-constrained programming model for irrigation water management under uncertainty

    Science.gov (United States)

    Zhang, Chenglong; Zhang, Fan; Guo, Shanshan; Liu, Xiao; Guo, Ping

    2018-01-01

    An inexact nonlinear mλ-measure fuzzy chance-constrained programming (INMFCCP) model is developed for irrigation water allocation under uncertainty. Techniques of inexact quadratic programming (IQP), mλ-measure, and fuzzy chance-constrained programming (FCCP) are integrated into a general optimization framework. The INMFCCP model can deal with not only nonlinearities in the objective function, but also uncertainties presented as discrete intervals in the objective function, variables and left-hand side constraints and fuzziness in the right-hand side constraints. Moreover, this model improves upon the conventional fuzzy chance-constrained programming by introducing a linear combination of possibility measure and necessity measure with varying preference parameters. To demonstrate its applicability, the model is then applied to a case study in the middle reaches of Heihe River Basin, northwest China. An interval regression analysis method is used to obtain interval crop water production functions in the whole growth period under uncertainty. Therefore, more flexible solutions can be generated for optimal irrigation water allocation. The variation of results can be examined by giving different confidence levels and preference parameters. Besides, it can reflect interrelationships among system benefits, preference parameters, confidence levels and the corresponding risk levels. Comparison between interval crop water production functions and deterministic ones based on the developed INMFCCP model indicates that the former is capable of reflecting more complexities and uncertainties in practical application. These results can provide more reliable scientific basis for supporting irrigation water management in arid areas.

  11. Modelling the Intention to Adopt Cloud Computing Services: A Transaction Cost Theory Perspective

    Directory of Open Access Journals (Sweden)

    Ogan Yigitbasioglu

    2014-11-01

    Full Text Available This paper uses transaction cost theory to study cloud computing adoption. A model is developed and tested with data from an Australian survey. According to the results, perceived vendor opportunism and perceived legislative uncertainty around cloud computing were significantly associated with perceived cloud computing security risk. There was also a significant negative relationship between perceived cloud computing security risk and the intention to adopt cloud services. This study also reports on adoption rates of cloud computing in terms of applications, as well as the types of services used.

  12. Modelling and Vibration Control of Beams with Partially Debonded Active Constrained Layer Damping Patch

    Science.gov (United States)

    SUN, D.; TONG, L.

    2002-05-01

    A detailed model for beams with partially debonded active constrained layer damping (ACLD) treatment is presented. In this model, the transverse displacement of the constraining layer is considered to be non-identical to that of the host structure. In the perfect bonding region, the viscoelastic core is modelled to carry both peel and shear stresses, while in the debonding area, it is assumed that no peel and shear stresses are transferred between the host beam and the constraining layer. The adhesive layer between the piezoelectric sensor and the host beam is also considered in this model. In active control, positive position feedback control is employed to control the first mode of the beam. Based on this model, the incompatibility of the transverse displacements of the active constraining layer and the host beam is investigated. The passive and active damping behaviors of the ACLD patch with different thicknesses, locations and lengths are examined. Moreover, the effects of debonding of the damping layer on both passive and active control are examined via a simulation example. The results show that the incompatibility of the transverse displacements is remarkable in the regions near the ends of the ACLD patch, especially for the higher-order vibration modes. It is found that a thinner damping layer may lead to larger shear strain and consequently result in larger passive and active damping. In addition to the thickness of the damping layer, its length and location are also key factors for the hybrid control. The numerical results reveal that edge debonding can lead to a reduction of both passive and active damping, and that the hybrid damping may be more sensitive to debonding of the damping layer than the passive damping.

  13. Characterizing and modeling the free recovery and constrained recovery behavior of a polyurethane shape memory polymer

    International Nuclear Information System (INIS)

    Volk, Brent L; Lagoudas, Dimitris C; Maitland, Duncan J

    2011-01-01

    In this work, tensile tests and one-dimensional constitutive modeling were performed on a high recovery force polyurethane shape memory polymer that is being considered for biomedical applications. The tensile tests investigated the free recovery (zero load) response as well as the constrained displacement recovery (stress recovery) response at extension values up to 25%, and two consecutive cycles were performed during each test. The material was observed to recover 100% of the applied deformation when heated at zero load in the second thermomechanical cycle, and a stress recovery of 1.5–4.2 MPa was observed for the constrained displacement recovery experiments. After the experiments were performed, the Chen and Lagoudas model was used to simulate and predict the experimental results. The material properties used in the constitutive model—namely the coefficients of thermal expansion, shear moduli, and frozen volume fraction—were calibrated from a single 10% extension free recovery experiment. The model was then used to predict the material response for the remaining free recovery and constrained displacement recovery experiments. The model predictions match well with the experimental data

  14. Model to Implement Virtual Computing Labs via Cloud Computing Services

    Directory of Open Access Journals (Sweden)

    Washington Luna Encalada

    2017-07-01

    Full Text Available In recent years, we have seen a significant number of new technological ideas appearing in literature discussing the future of education. For example, E-learning, cloud computing, social networking, virtual laboratories, virtual realities, virtual worlds, massive open online courses (MOOCs), and bring your own device (BYOD) are all new concepts of immersive and global education that have emerged in educational literature. One of the greatest challenges presented to e-learning solutions is the reproduction of the benefits of an educational institution’s physical laboratory. For a university without a computing lab, to obtain hands-on IT training with software, operating systems, networks, servers, storage, and cloud computing similar to that which could be received on a university campus computing lab, it is necessary to use a combination of technological tools. Such teaching tools must promote the transmission of knowledge, encourage interaction and collaboration, and ensure students obtain valuable hands-on experience. That, in turn, allows the universities to focus more on teaching and research activities than on the implementation and configuration of complex physical systems. In this article, we present a model for implementing ecosystems which allow universities to teach practical Information Technology (IT) skills. The model utilizes what is called a “social cloud”, which utilizes all cloud computing services, such as Software as a Service (SaaS), Platform as a Service (PaaS), and Infrastructure as a Service (IaaS). Additionally, it integrates the cloud learning aspects of a MOOC and several aspects of social networking and support. Social clouds have striking benefits such as centrality, ease of use, scalability, and ubiquity, providing a superior learning environment when compared to that of a simple physical lab. The proposed model allows students to foster all the educational pillars such as learning to know, learning to be, learning

  15. Minimal models from W-constrained hierarchies via the Kontsevich-Miwa transform

    CERN Document Server

    Gato-Rivera, Beatriz

    1992-01-01

    A direct relation between the conformal formalism for 2d-quantum gravity and the W-constrained KP hierarchy is found, without the need to invoke intermediate matrix model technology. The Kontsevich-Miwa transform of the KP hierarchy is used to establish an identification between W constraints on the KP tau function and decoupling equations corresponding to Virasoro null vectors. The Kontsevich-Miwa transform maps the $W^{(l)}$-constrained KP hierarchy to the $(p^\\prime,p)$ minimal model, with the tau function being given by the correlator of a product of (dressed) $(l,1)$ (or $(1,l)$) operators, provided the Miwa parameter $n_i$ and the free parameter (an abstract $bc$ spin) present in the constraints are expressed through the ratio $p^\\prime/p$ and the level $l$.
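
    For reference, the standard Miwa parametrization underlying the transform writes the KP times t_k in terms of a set of points z_i with multiplicities given by the Miwa parameters n_i mentioned above (a textbook form, included here as context rather than the paper's specific construction):

```latex
t_k = \frac{1}{k} \sum_i n_i \, z_i^{-k}, \qquad k \ge 1 .
```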

  16. Can climate variability information constrain a hydrological model for an ungauged Costa Rican catchment?

    Science.gov (United States)

    Quesada-Montano, Beatriz; Westerberg, Ida K.; Fuentes-Andino, Diana; Hidalgo-Leon, Hugo; Halldin, Sven

    2017-04-01

    Long-term hydrological data are key to understanding catchment behaviour and for decision making within water management and planning. Given the lack of observed data in many regions worldwide, hydrological models are an alternative for reproducing historical streamflow series. Additional types of information, beyond locally observed discharge, can be used to constrain model parameter uncertainty for ungauged catchments. Climate variability exerts a strong influence on streamflow variability on long and short time scales, in particular in the Central-American region. We therefore explored the use of climate variability knowledge to constrain the simulated discharge uncertainty of a conceptual hydrological model applied to a Costa Rican catchment, assumed to be ungauged. To reduce model uncertainty we first rejected parameter relationships that disagreed with our understanding of the system. We then assessed how well climate-based constraints applied at long-term, inter-annual and intra-annual time scales could constrain model uncertainty. Finally, we compared the climate-based constraints to a constraint on low-flow statistics based on information obtained from global maps. We evaluated our method in terms of the ability of the model to reproduce the observed hydrograph and the active catchment processes in terms of two efficiency measures, a statistical consistency measure, a spread measure and 17 hydrological signatures. We found that climate variability knowledge was useful for reducing model uncertainty, in particular by rejecting unrealistic representations of deep groundwater processes. The constraints based on global maps of low-flow statistics provided more constraining information than those based on climate variability, but the latter rejected slow rainfall-runoff representations that the low-flow statistics did not reject. The use of such knowledge, together with information on low-flow statistics and constraints on parameter relationships showed to be useful to

  17. SPATIAL MOTION OF THE MAGELLANIC CLOUDS: TIDAL MODELS RULED OUT?

    International Nuclear Information System (INIS)

    Ruzicka, Adam; Palous, Jan; Theis, Christian

    2009-01-01

    Recently, Kallivayalil et al. derived new values of the proper motion for the Large and Small Magellanic Clouds (LMC and SMC, respectively). The spatial velocities of both Clouds are unexpectedly higher than their previous values resulting from agreement between the available theoretical models of the Magellanic System and the observations of neutral hydrogen (H I) associated with the LMC and the SMC. Such proper motion estimates are likely to be at odds with the scenarios for creation of the large-scale structures in the Magellanic System suggested so far. We investigated this hypothesis for the pure tidal models, as they were the first ones devised to explain the evolution of the Magellanic System, and the tidal stripping is intrinsically involved in every model assuming the gravitational interaction. The parameter space for the Milky Way (MW)-LMC-SMC interaction was analyzed by a robust search algorithm (genetic algorithm) combined with a fast, restricted N-body model of the interaction. Our method extended the known variety of evolutionary scenarios satisfying the observed kinematics and morphology of the Magellanic large-scale structures. Nevertheless, assuming the tidal interaction, no satisfactory reproduction of the H I data available for the Magellanic Clouds was achieved with the new proper motions. We conclude that for the proper motion data by Kallivayalil et al., within their 1σ errors, the dynamical evolution of the Magellanic System with the currently accepted total mass of the MW cannot be explained in the framework of pure tidal models. The optimal value for the western component of the LMC proper motion was found to be μ_W,LMC ≳ -1.3 mas yr^-1 in the case of tidal models. It corresponds to a reduction of the Kallivayalil et al. value for μ_W,LMC by ∼40% in magnitude.

  18. Final Technical Report for "High-resolution global modeling of the effects of subgrid-scale clouds and turbulence on precipitating cloud systems"

    Energy Technology Data Exchange (ETDEWEB)

    Larson, Vincent [Univ. of Wisconsin, Milwaukee, WI (United States)

    2016-11-25

    The Multiscale Modeling Framework (MMF) embeds a cloud-resolving model in each grid column of a General Circulation Model (GCM). A MMF model does not need to use a deep convective parameterization, and thereby dispenses with the uncertainties in such parameterizations. However, MMF models grossly under-resolve shallow boundary-layer clouds, and hence those clouds may still benefit from parameterization. In this grant, we successfully created a climate model that embeds a cloud parameterization (“CLUBB”) within a MMF model. This involved interfacing CLUBB’s clouds with microphysics and reducing computational cost. We have evaluated the resulting simulated clouds and precipitation with satellite observations. The chief benefit of the project is to provide a MMF model that has an improved representation of clouds and that provides improved simulations of precipitation.

  19. Lightning NOx emissions over the USA constrained by TES ozone observations and the GEOS-Chem model

    Science.gov (United States)

    Jourdain, L.; Kulawik, S. S.; Worden, H. M.; Pickering, K. E.; Worden, J.; Thompson, A. M.

    2010-01-01

    Improved estimates of NOx from lightning sources are required to understand tropospheric NOx and ozone distributions, the oxidising capacity of the troposphere and corresponding feedbacks between chemistry and climate change. In this paper, we report new satellite ozone observations from the Tropospheric Emission Spectrometer (TES) instrument that can be used to test and constrain the parameterization of the lightning source of NOx in global models. Using the National Lightning Detection Network (NLDN) and the Long Range Lightning Detection Network (LRLDN) data as well as the HYSPLIT transport and dispersion model, we show that TES provides direct observations of ozone enhanced layers downwind of convective events over the USA in July 2006. We find that the GEOS-Chem global chemistry-transport model with a parameterization based on cloud top height, scaled regionally and monthly to OTD/LIS (Optical Transient Detector/Lightning Imaging Sensor) climatology, captures the ozone enhancements seen by TES. We show that the model's ability to reproduce the location of the enhancements is due to the fact that this model reproduces the pattern of the convective events occurrence on a daily basis during the summer of 2006 over the USA, even though it does not well represent the relative distribution of lightning intensities. However, this model with a value of 6 Tg N/yr for the lightning source (i.e., with a mean production of 260 moles NO/Flash over the USA in summer) underestimates the intensities of the ozone enhancements seen by TES. By imposing a production of 520 moles NO/Flash for lightning occurring in midlatitudes, which better agrees with the values proposed by the most recent studies, we decrease the bias between TES and GEOS-Chem ozone over the USA in July 2006 by 40%. However, our conclusion on the strength of the lightning source of NOx is limited by the fact that the contribution from the stratosphere is underestimated in the GEOS-Chem simulations.

  20. The application of time series models to cloud field morphology analysis

    Science.gov (United States)

    Chin, Roland T.; Jau, Jack Y. C.; Weinman, James A.

    1987-01-01

    A modeling method for the quantitative description of remotely sensed cloud field images is presented. A two-dimensional texture modeling scheme based on one-dimensional time series procedures is adopted for this purpose. The time series procedure used is the seasonal autoregressive moving average (ARMA) process of Box and Jenkins. Cloud field properties such as directionality, clustering and cloud coverage can be retrieved by this method. It has been demonstrated that a cloud field image can be quantitatively defined by a small set of parameters and synthesized surrogates can be reconstructed from these model parameters. This method enables cloud climatology to be studied quantitatively.
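
    The one-dimensional building block of such a scheme can be sketched with a plain AR(1) process and its Yule-Walker estimate. This toy, with hypothetical names throughout, only illustrates the simulate-and-fit idea, not the full seasonal two-dimensional ARMA texture model of the paper:

```python
import random

def simulate_ar1(phi, n, sigma=1.0, seed=0):
    """Simulate an AR(1) series x_t = phi * x_{t-1} + e_t, e_t ~ N(0, sigma)."""
    rnd = random.Random(seed)
    x = [0.0]
    for _ in range(n - 1):
        x.append(phi * x[-1] + rnd.gauss(0.0, sigma))
    return x

def ar1_estimate(x):
    """Estimate phi by the lag-1 autocorrelation (Yule-Walker for AR(1))."""
    m = sum(x) / len(x)
    num = sum((x[t] - m) * (x[t - 1] - m) for t in range(1, len(x)))
    den = sum((v - m) ** 2 for v in x)
    return num / den

# Fit recovers the generating coefficient from a long simulated series.
series = simulate_ar1(phi=0.7, n=20000, seed=1)
phi_hat = ar1_estimate(series)
```

    Applied row- and column-wise with seasonal terms, such fitted parameters are the "small set of parameters" from which surrogate cloud fields can be resynthesized.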

  1. The cloud-phase feedback in the Super-parameterized Community Earth System Model

    Science.gov (United States)

    Burt, M. A.; Randall, D. A.

    2016-12-01

    Recent comparisons of observations and climate model simulations by I. Tan and colleagues have suggested that the Wegener-Bergeron-Findeisen (WBF) process tends to be too active in climate models, making too much cloud ice, and resulting in an exaggerated negative cloud-phase feedback on climate change. We explore the WBF process and its effect on shortwave cloud forcing in present-day and future climate simulations with the Community Earth System Model, and its super-parameterized counterpart. Results show that SP-CESM has much less cloud ice and a weaker cloud-phase feedback than CESM.

  2. Constraining the Influence of Natural Variability to Improve Estimates of Global Aerosol Indirect Effects in a Nudged Version of the Community Atmosphere Model 5

    Energy Technology Data Exchange (ETDEWEB)

    Kooperman, G. J.; Pritchard, M. S.; Ghan, Steven J.; Wang, Minghuai; Somerville, Richard C.; Russell, Lynn

    2012-12-11

    Natural modes of variability on many timescales influence aerosol particle distributions and cloud properties such that isolating statistically significant differences in cloud radiative forcing due to anthropogenic aerosol perturbations (indirect effects) typically requires integrating over long simulations. For state-of-the-art global climate models (GCM), especially those in which embedded cloud-resolving models replace conventional statistical parameterizations (i.e. multi-scale modeling framework, MMF), the required long integrations can be prohibitively expensive. Here an alternative approach is explored, which implements Newtonian relaxation (nudging) to constrain simulations with both pre-industrial and present-day aerosol emissions toward identical meteorological conditions, thus reducing differences in natural variability and dampening feedback responses in order to isolate radiative forcing. Ten-year GCM simulations with nudging provide a more stable estimate of the global-annual mean aerosol indirect radiative forcing than do conventional free-running simulations. The estimates have mean values and 95% confidence intervals of -1.54 ± 0.02 W/m2 and -1.63 ± 0.17 W/m2 for nudged and free-running simulations, respectively. Nudging also substantially increases the fraction of the world’s area in which a statistically significant aerosol indirect effect can be detected (68% and 25% of the Earth's surface for nudged and free-running simulations, respectively). One-year MMF simulations with and without nudging provide global-annual mean aerosol indirect radiative forcing estimates of -0.80 W/m2 and -0.56 W/m2, respectively. The one-year nudged results compare well with previous estimates from three-year free-running simulations (-0.77 W/m2), which showed the aerosol-cloud relationship to be in better agreement with observations and high-resolution models than in the results obtained with conventional parameterizations.
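
    The Newtonian relaxation described above amounts to adding a tendency term (u_ref - u)/tau to the model equations, pulling the simulated state toward a reference meteorology. A minimal scalar sketch, with an illustrative timescale and reference value:

```python
# Newtonian relaxation (nudging): relax a model state u toward a
# reference value u_ref with timescale tau (all values illustrative).
dt = 600.0          # time step [s]
tau = 6 * 3600.0    # nudging timescale [s]
u_ref = 10.0        # reference wind [m/s]
u = 0.0             # free-running model state [m/s]

for _ in range(int(5 * 86400 / dt)):   # integrate 5 days
    physics_tendency = 0.0             # placeholder for the model physics
    u += dt * (physics_tendency + (u_ref - u) / tau)

print(round(u, 2))  # u has relaxed essentially all the way to u_ref
```

    With identical nudging in the pre-industrial and present-day runs, both simulations see near-identical meteorology, so their difference isolates the aerosol forcing rather than natural variability.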

  3. Abs: a high-level modeling language for cloud-aware programming

    NARCIS (Netherlands)

    N. Bezirgiannis (Nikolaos); F.S. de Boer (Frank)

    2016-01-01

    Cloud technology has become an invaluable tool to the IT business, because of its attractive economic model. Yet, from the programmers’ perspective, the development of cloud applications remains a major challenge. In this paper we introduce a programming language that allows Cloud

  4. Geographical point cloud modelling with the 3D medial axis transform

    NARCIS (Netherlands)

    Peters, R.Y.

    2018-01-01

    A geographical point cloud is a detailed three-dimensional representation of the geometry of our geographic environment.
    Using geographical point cloud modelling, we are able to extract valuable information from geographical point clouds that can be used for applications in asset management,

  5. An Economic Model for Self-tuned Cloud Caching

    OpenAIRE

    Dash, Debabrata; Kantere, Verena; Ailamaki, Anastasia

    2009-01-01

    Cloud computing, the new trend for service infrastructures, requires user multi-tenancy as well as minimal capital expenditure. In a cloud that services large amounts of data that are massively collected and queried, such as scientific data, users typically pay for query services. The cloud supports caching of data in order to provide quality query services. User payments cover query execution costs and maintenance of the cloud infrastructure, and yield cloud profit. The challenge resides in provi...

  6. Cloud radiative effects and changes simulated by the Coupled Model Intercomparison Project Phase 5 models

    Science.gov (United States)

    Shin, Sun-Hee; Kim, Ok-Yeon; Kim, Dongmin; Lee, Myong-In

    2017-07-01

    Using 32 CMIP5 (Coupled Model Intercomparison Project Phase 5) models, this study examines the fidelity of the simulated cloud amounts and their radiative effects (CREs) in the historical run driven by observed external radiative forcing for 1850-2005, and their future changes in the RCP (Representative Concentration Pathway) 4.5 scenario runs for 2006-2100. Validation metrics for the historical run are designed to examine the accuracy of the representation of spatial patterns for the climatological mean, and the annual and interannual variations of clouds and CREs. The models show a large spread in the simulation of cloud amounts, particularly low cloud amount. The observed relationships between cloud amount and the controlling large-scale environment are also reproduced with widely varying skill across models. Based on the validation metrics, four models—ACCESS1.0, ACCESS1.3, HadGEM2-CC, and HadGEM2-ES—are selected as the best models, and the average of these four performs more skillfully than the multimodel ensemble average. All models project global-mean SST warming as greenhouse gases increase, but the magnitude varies across the simulations between 1 and 2 K, which is largely attributable to differences in the change of cloud amount and distribution. The models that simulate more SST warming show a greater increase in the net CRE due to reduced low cloud and increased incoming shortwave radiation, particularly over marine boundary layer regions in the subtropics. The selected best-performing models project a significant reduction in global-mean cloud amount of about -0.99% K-1 and a net radiative warming of 0.46 W m-2 K-1, suggesting a positive cloud feedback to global warming.

  7. Evolution in Cloud Population Statistics of the MJO: From AMIE Field Observations to Global Cloud-Permitting Models

    Energy Technology Data Exchange (ETDEWEB)

    Zhang, Chidong [Univ. of Miami, Coral Gables, FL (United States)

    2016-08-14

    Motivated by the success of the AMIE/DYNAMO field campaign, which collected unprecedented observations of cloud and precipitation from the tropical Indian Ocean in October 2011 – March 2012, this project explored how such observations can be applied to assist the development of global cloud-permitting models through evaluating and correcting model biases in cloud statistics. The main accomplishments of this project fall into four categories: generating observational products for model evaluation, using AMIE/DYNAMO observations to validate global model simulations, using AMIE/DYNAMO observations in numerical studies with cloud-permitting models, and providing leadership in the field. Results from this project provide valuable information for building a seamless bridge between the DOE ASR program’s component on process-level understanding of cloud processes in the tropics and the RGCM focus on global variability and regional extremes. In particular, experience gained from this project would be directly applicable to the evaluation and improvement of ACME, especially as it transitions to a non-hydrostatic variable-resolution model.

  8. Modeling the Cloud to Enhance Capabilities for Crises and Catastrophe Management

    Science.gov (United States)

    2016-11-16

    ...through support by a prior DOD grant, and in this project we focused on how to effectively adapt this for the cloud catastrophe environment. ... the effects of varying cloud resources and the cloud architecture on L, o, and g values, we will be able to formulate realistic analytical models of ... variation in computing and communication costs of test problems due to varying loads in the cloud environment. We used the parallel matrix multiplication ...

  9. Monte Carlo Bayesian inference on a statistical model of sub-gridcolumn moisture variability using high-resolution cloud observations. Part 1: Method

    Science.gov (United States)

    Norris, Peter M.; da Silva, Arlindo M.

    2018-01-01

    A method is presented to constrain a statistical model of sub-gridcolumn moisture variability using high-resolution satellite cloud data. The method can be used for large-scale model parameter estimation or cloud data assimilation. The gridcolumn model includes assumed probability density function (PDF) intra-layer horizontal variability and a copula-based inter-layer correlation model. The observables used in the current study are Moderate Resolution Imaging Spectroradiometer (MODIS) cloud-top pressure, brightness temperature and cloud optical thickness, but the method should be extensible to direct cloudy radiance assimilation for a small number of channels. The algorithm is a form of Bayesian inference with a Markov chain Monte Carlo (MCMC) approach to characterizing the posterior distribution. This approach is especially useful in cases where the background state is clear but cloudy observations exist. In traditional linearized data assimilation methods, a subsaturated background cannot produce clouds via any infinitesimal equilibrium perturbation, but the Monte Carlo approach is not gradient-based and allows jumps into regions of non-zero cloud probability. The current study uses a skewed-triangle distribution for layer moisture. The article also includes a discussion of the Metropolis and multiple-try Metropolis versions of MCMC. PMID:29618847
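
    The Metropolis step discussed in the article can be sketched in a few lines. Here the target is a stand-in standard normal rather than the paper's moisture posterior, and the deliberately poor starting point mimics the clear-background case the authors highlight, where a gradient-free sampler can still jump into regions of non-zero cloud probability:

```python
import numpy as np

rng = np.random.default_rng(1)

def log_target(x):
    # Stand-in log-density (standard normal, up to a constant); the paper's
    # target is the posterior over sub-gridcolumn moisture, not modeled here.
    return -0.5 * x * x

x = 5.0                    # deliberately poor start, far from the mode
samples = []
for _ in range(50000):
    prop = x + rng.normal(scale=1.0)          # symmetric random-walk proposal
    # Metropolis acceptance: accept with probability min(1, p(prop)/p(x))
    if np.log(rng.uniform()) < log_target(prop) - log_target(x):
        x = prop
    samples.append(x)

samples = np.array(samples[5000:])            # discard burn-in
print(round(samples.mean(), 2), round(samples.std(), 2))
```

    Multiple-try Metropolis, also discussed in the article, generalizes the single proposal to a set of candidate proposals per step to improve mixing.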

  10. Monte Carlo Bayesian Inference on a Statistical Model of Sub-Gridcolumn Moisture Variability Using High-Resolution Cloud Observations. Part 1: Method

    Science.gov (United States)

    Norris, Peter M.; Da Silva, Arlindo M.

    2016-01-01

    A method is presented to constrain a statistical model of sub-gridcolumn moisture variability using high-resolution satellite cloud data. The method can be used for large-scale model parameter estimation or cloud data assimilation. The gridcolumn model includes assumed probability density function (PDF) intra-layer horizontal variability and a copula-based inter-layer correlation model. The observables used in the current study are Moderate Resolution Imaging Spectroradiometer (MODIS) cloud-top pressure, brightness temperature and cloud optical thickness, but the method should be extensible to direct cloudy radiance assimilation for a small number of channels. The algorithm is a form of Bayesian inference with a Markov chain Monte Carlo (MCMC) approach to characterizing the posterior distribution. This approach is especially useful in cases where the background state is clear but cloudy observations exist. In traditional linearized data assimilation methods, a subsaturated background cannot produce clouds via any infinitesimal equilibrium perturbation, but the Monte Carlo approach is not gradient-based and allows jumps into regions of non-zero cloud probability. The current study uses a skewed-triangle distribution for layer moisture. The article also includes a discussion of the Metropolis and multiple-try Metropolis versions of MCMC.

  11. Epoch of reionization 21 cm forecasting from MCMC-constrained semi-numerical models

    Science.gov (United States)

    Hassan, Sultan; Davé, Romeel; Finlator, Kristian; Santos, Mario G.

    2017-06-01

    The recent low value of the Planck Collaboration XLVII integrated optical depth to Thomson scattering suggests that reionization occurred fairly suddenly, disfavouring extended reionization scenarios. This will have a significant impact on the 21 cm power spectrum. Using a semi-numerical framework, we improve our model from instantaneous to include time-integrated ionization and recombination effects, and find that this leads to more sudden reionization. It also yields larger H II bubbles that lead to an order of magnitude more 21 cm power on large scales, while suppressing the small-scale ionization power. Local fluctuations in the neutral hydrogen density play the dominant role in boosting the 21 cm power spectrum on large scales, while recombinations are subdominant. We use a Markov chain Monte Carlo approach to constrain our model to observations of the star formation rate functions at z = 6, 7, 8 from Bouwens et al., the Planck Collaboration XLVII optical depth measurements and the Becker & Bolton ionizing emissivity data at z ˜ 5. We then use this constrained model to perform 21 cm forecasting for the Low Frequency Array, Hydrogen Epoch of Reionization Array and Square Kilometre Array in order to determine how well such data can characterize the sources driving reionization. We find that the mock 21 cm power spectrum alone can somewhat constrain the halo mass dependence of ionizing sources, the photon escape fraction and ionizing amplitude, but combining the mock 21 cm data with other current observations enables us to separately constrain all these parameters. Our framework illustrates how future 21 cm data can play a key role in understanding the sources and topology of reionization as observations improve.

  12. Sensitivity of tropical convection in cloud-resolving WRF simulations to model physics and forcing procedures

    Science.gov (United States)

    Endo, S.; Lin, W.; Jackson, R. C.; Collis, S. M.; Vogelmann, A. M.; Wang, D.; Oue, M.; Kollias, P.

    2017-12-01

    Tropical convection is one of the main drivers of the climate system and is recognized as a major source of uncertainty in climate models. High-resolution modeling is performed with a focus on the deep convection cases during the active monsoon period of the TWP-ICE field campaign to explore ways to improve the fidelity of convection-permitting tropical simulations. Cloud-resolving model (CRM) simulations are performed with WRF, modified to apply flexible configurations for LES/CRM simulations. We have enhanced the capability of the forcing module to test different implementations of large-scale vertical advective forcing, including a function for optional use of large-scale thermodynamic profiles and a function for condensate advection. The baseline 3D CRM configurations, following Fridlind et al. (2012), are driven by observationally constrained ARM forcing and tested with diagnosed surface fluxes, fixed sea-surface temperature, and prescribed aerosol size distributions. After the spin-up period, the simulations follow the observed precipitation peaks associated with the passages of precipitation systems. Preliminary analysis shows that the simulation is generally not sensitive to the treatment of the large-scale vertical advection of heat and moisture, while more noticeable changes in the peak precipitation rate are produced when thermodynamic profiles above the boundary layer are nudged to the reference profiles from the forcing dataset. The presentation will explore comparisons with observationally based metrics associated with convective characteristics and examine the model performance with a focus on model physics, doubly-periodic vs. nested configurations, and different forcing procedures/sources. A radar simulator will be used to understand possible uncertainties in radar-based retrievals of convection properties. Fridlind, A. M., et al. (2012), A comparison of TWP-ICE observational data with cloud-resolving model results, J. Geophys. Res., 117, D05204.

  13. Network-constrained Cournot models of liberalized electricity markets: the devil is in the details

    International Nuclear Information System (INIS)

    Neuhoff, Karsten; Barquin, Julian; Vazquez, Miguel; Boots, Maroeska; Rijkers, Fieke A.M.; Ehrenmann, Andreas; Hobbs, Benjamin F.

    2005-01-01

    Numerical models of transmission-constrained electricity markets are used to inform regulatory decisions. How robust are their results? Three research groups used the same data set for the northwest Europe power market as input for their models. Under competitive conditions, the results coincide, but in the Cournot case, the predicted prices differed significantly. The Cournot equilibria are highly sensitive to assumptions about market design (whether timing of generation and transmission decisions is sequential or integrated) and expectations of generators regarding how their decisions affect transmission prices and fringe generation. These sensitivities are qualitatively similar to those predicted by a simple two-node model. (Author)
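
    For context, the unconstrained textbook benchmark these network models extend is the two-firm Cournot game with linear inverse demand P = a - b(q1 + q2) and constant marginal cost c, whose Nash equilibrium has the closed form q* = (a - c)/(3b). Best-response iteration recovers it; the numbers below are illustrative and, unlike the paper's models, include no transmission constraints:

```python
# Symmetric duopoly: price P = a - b*(q1 + q2), marginal cost c.
# Best response to rival output q_other: q = (a - c - b*q_other) / (2*b)
a, b, c = 100.0, 1.0, 10.0

q1 = q2 = 0.0
for _ in range(200):                     # iterate best responses to a fixed point
    q1 = (a - c - b * q2) / (2 * b)
    q2 = (a - c - b * q1) / (2 * b)

analytic = (a - c) / (3 * b)             # Cournot-Nash quantity
print(round(q1, 6), round(q2, 6), analytic)
```

    The paper's point is that once transmission constraints, timing assumptions, and conjectures about transmission prices enter, this uniqueness and robustness disappear: different but equally defensible formulations yield significantly different equilibrium prices.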

  14. Network-constrained Cournot models of liberalized electricity markets. The devil is in the details

    Energy Technology Data Exchange (ETDEWEB)

    Neuhoff, Karsten [Department of Applied Economics, Sidgwick Ave., University of Cambridge, CB3 9DE (United Kingdom); Barquin, Julian; Vazquez, Miguel [Instituto de Investigacion Tecnologica, Universidad Pontificia Comillas, c/Santa Cruz de Marcenado 26-28015 Madrid (Spain); Boots, Maroeska G. [Energy Research Centre of the Netherlands ECN, Badhuisweg 3, 1031 CM Amsterdam (Netherlands); Ehrenmann, Andreas [Judge Institute of Management, University of Cambridge, Trumpington Street, CB2 1AG (United Kingdom); Hobbs, Benjamin F. [Department of Geography and Environmental Engineering, Johns Hopkins University, Baltimore, MD 21218 (United States); Rijkers, Fieke A.M. [Contributed while at ECN, now at Nederlandse Mededingingsautoriteit (NMa), Dte, Postbus 16326, 2500 BH Den Haag (Netherlands)

    2005-05-15

    Numerical models of transmission-constrained electricity markets are used to inform regulatory decisions. How robust are their results? Three research groups used the same data set for the northwest Europe power market as input for their models. Under competitive conditions, the results coincide, but in the Cournot case, the predicted prices differed significantly. The Cournot equilibria are highly sensitive to assumptions about market design (whether timing of generation and transmission decisions is sequential or integrated) and expectations of generators regarding how their decisions affect transmission prices and fringe generation. These sensitivities are qualitatively similar to those predicted by a simple two-node model.

  15. Network-constrained Cournot models of liberalized electricity markets: the devil is in the details

    Energy Technology Data Exchange (ETDEWEB)

    Neuhoff, Karsten [Cambridge Univ., Dept. of Applied Economics, Cambridge (United Kingdom); Barquin, Julian; Vazquez, Miguel [Universidad Pontificia Comillas, Inst. de Investigacion Tecnologica, Madrid (Spain); Boots, Maroeska; Rijkers, Fieke A.M. [Energy Research Centre of the Netherlands ECN, Amsterdam (Netherlands); Ehrenmann, Andreas [Cambridge Univ., Judge Inst. of Management, Cambridge (United Kingdom); Hobbs, Benjamin F. [Johns Hopkins Univ., Dept. of Geography and Environmental Engineering, Baltimore, MD (United States)

    2005-05-01

    Numerical models of transmission-constrained electricity markets are used to inform regulatory decisions. How robust are their results? Three research groups used the same data set for the northwest Europe power market as input for their models. Under competitive conditions, the results coincide, but in the Cournot case, the predicted prices differed significantly. The Cournot equilibria are highly sensitive to assumptions about market design (whether timing of generation and transmission decisions is sequential or integrated) and expectations of generators regarding how their decisions affect transmission prices and fringe generation. These sensitivities are qualitatively similar to those predicted by a simple two-node model. (Author)

  16. Distributed Hydrologic Modeling Apps for Decision Support in the Cloud

    Science.gov (United States)

    Swain, N. R.; Latu, K.; Christiensen, S.; Jones, N.; Nelson, J.

    2013-12-01

    Advances in computation resources and greater availability of water resources data represent an untapped resource for addressing hydrologic uncertainties in water resources decision-making. The current practice of water authorities relies on empirical, lumped hydrologic models to estimate watershed response. These models are not capable of taking advantage of many of the spatial datasets that are now available. Physically-based, distributed hydrologic models are capable of using these data resources and providing better predictions through stochastic analysis. However, there exists a digital divide that discourages many science-minded decision makers from using distributed models. This divide can be spanned using a combination of existing web technologies. The purpose of this presentation is to present a cloud-based environment that will offer hydrologic modeling tools or 'apps' for decision support and the web technologies that have been selected to aid in its implementation. Compared to the more commonly used lumped-parameter models, distributed models, while being more intuitive, are still data intensive, computationally expensive, and difficult to modify for scenario exploration. However, web technologies such as web GIS, web services, and cloud computing have made the data more accessible, provided an inexpensive means of high-performance computing, and created an environment for developing user-friendly apps for distributed modeling. Since many water authorities are primarily interested in scenario exploration exercises with hydrologic models, we are creating a toolkit that facilitates the development of a series of apps for manipulating existing distributed models. There are a number of hurdles that cloud-based hydrologic modeling developers face. One of these is how to work with the geospatial data inherent in this class of models in a web environment. Supporting geospatial data in a website is beyond the capabilities of standard web frameworks and it

  17. Study of tropical clouds feedback to a climate warming as simulated by climate models

    International Nuclear Information System (INIS)

    Brient, Florent

    2012-01-01

    The last IPCC report affirms the predominant role of low cloud-radiative feedbacks in the inter-model spread of climate sensitivity. Understanding the mechanisms that control the behavior of low-level clouds is thus crucial. However, the complexity of coupled ocean-atmosphere models and the large number of processes potentially involved make the analysis of this response difficult. To simplify the analysis and to identify the most critical controls of cloud feedbacks, we analyze the cloud response to climate change simulated by the IPSL-CM5A model in a hierarchy of configurations. A comparison between three model configurations (coupled, atmospheric and aqua-planet) using the same physical parametrizations shows that the cloud response to global warming is dominated by a decrease of low clouds in regimes of moderate subsidence. Using a single-column model forced by weak-subsidence large-scale forcing allows us to reproduce the vertical cloud profile predicted by the 3D model, as well as its response to climate change (if a stochastic forcing is added on vertical velocity). We analyze the sensitivity of this low-cloud response to the external forcing and to uncertain parameters of the physical parameterizations of the atmospheric model. Through a moist static energy (MSE) budget, we highlight several mechanisms: (1) Robust: over weak-subsidence regimes, the Clausius-Clapeyron relationship predicts that a warmer atmosphere leads to an increase of the vertical MSE gradient, resulting in a strengthening of the import of low-MSE air from the free atmosphere into the cloudy boundary layer. The MSE budget links changes of vertical advection and cloud radiative effects. (2) Model-physics dependent: the coupling between the shallow convection, turbulence and cloud schemes allows the intensification of low-MSE transport so that cloud radiative cooling becomes 'less necessary' to balance the energy budget (a robust positive low-cloud radiative feedback for this model). The
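
    The Clausius-Clapeyron argument in mechanism (1) can be checked numerically: with standard constants, saturation vapour pressure grows by roughly 6-7% per kelvin near boundary-layer temperatures. A minimal sketch (the temperature chosen is illustrative):

```python
import math

def e_sat(T):
    """Saturation vapour pressure [Pa] from the Clausius-Clapeyron relation,
    integrated assuming constant latent heat L and vapour gas constant Rv."""
    L, Rv = 2.5e6, 461.5        # J/kg and J/(kg K)
    e0, T0 = 611.0, 273.15      # reference value [Pa] near the triple point
    return e0 * math.exp(L / Rv * (1.0 / T0 - 1.0 / T))

T = 290.0                       # typical subtropical boundary-layer temperature [K]
growth = e_sat(T + 1.0) / e_sat(T) - 1.0
print(round(100 * growth, 1))   # percent increase per kelvin of warming
```

    This fractional increase of moisture-holding capacity with warming is what steepens the vertical MSE gradient in the budget argument above.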

  18. Modeling and querying the uncertainty of network-constrained moving objects based on RFID data

    Science.gov (United States)

    Han, Liang; Xie, Kunqing; Ma, Xiujun; Song, Guojie

    2007-06-01

    The management of network-constrained moving objects is increasingly practical, especially in intelligent transportation systems. In the past, the location information of moving objects on a network was collected by GPS, which is costly and raises problems of frequent updates and privacy. RFID (Radio Frequency IDentification) devices are used more and more widely to collect location information. They are cheaper, require fewer updates, and intrude less on privacy. They detect the id of an object and the time when the moving object passed a node of the network. They do not detect the object's exact movement inside an edge, which leads to a problem of uncertainty. How to model and query the uncertainty of network-constrained moving objects based on RFID data therefore becomes a research issue. In this paper, a model is proposed to describe the uncertainty of network-constrained moving objects. A two-level index is presented to provide efficient access to the network and the movement data. The processing of imprecise time-slice queries and spatio-temporal range queries is studied. The processing includes four steps: spatial filter, spatial refinement, temporal filter and probability calculation. Finally, experiments are conducted on simulated data. In the experiments the performance of the index is studied, the precision and recall of the result set are defined, and how the query arguments affect the precision and recall is also discussed.
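
    A minimal sketch of the edge-interior uncertainty: given reader timestamps at the two ends of an edge and only a speed bound, the feasible positions of the object at an intermediate time form an interval. This is an illustrative model, not necessarily the paper's exact formulation:

```python
def position_interval(L, t1, t2, t, vmax):
    """Feasible positions along an edge of length L for an object detected by
    RFID readers at the edge ends at times t1 and t2, assuming its speed
    stays in [0, vmax] throughout. (Illustrative uncertainty model.)"""
    assert t1 <= t <= t2 and vmax * (t2 - t1) >= L
    lo = max(0.0, L - vmax * (t2 - t))   # must still be able to reach the far end
    hi = min(L, vmax * (t - t1))         # cannot have travelled farther than vmax allows
    return lo, hi

# Edge of 1000 m traversed between t=0 s and t=100 s, speed bound 12 m/s
lo, hi = position_interval(1000.0, 0.0, 100.0, 50.0, 12.0)
print(lo, hi)
```

    A probability for a spatio-temporal range query can then be assigned by placing a distribution (e.g. uniform) over this feasible interval, matching the paper's final probability-calculation step.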

  19. Constrained-path quantum Monte Carlo approach for non-yrast states within the shell model

    Energy Technology Data Exchange (ETDEWEB)

    Bonnard, J. [INFN, Sezione di Padova, Padova (Italy); LPC Caen, ENSICAEN, Universite de Caen, CNRS/IN2P3, Caen (France); Juillet, O. [LPC Caen, ENSICAEN, Universite de Caen, CNRS/IN2P3, Caen (France)

    2016-04-15

    This paper presents an extension of the constrained-path quantum Monte Carlo approach that reconstructs non-yrast states in order to reach the complete spectroscopy of nuclei within the interacting shell model. As in the yrast case studied in a previous work, the formalism involves a variational symmetry-restored wave function assuming two central roles. First, it guides the underlying Brownian motion to improve the efficiency of the sampling. Second, it constrains the stochastic paths according to the phaseless approximation to control the sign or phase problems that usually plague fermionic QMC simulations. Proof-of-principle results in the sd valence space are reported. They demonstrate the ability of the scheme to deliver remarkably accurate binding energies for both even- and odd-mass nuclei irrespective of the interaction considered. (orig.)

  20. Modeling Dzyaloshinskii-Moriya Interaction at Transition Metal Interfaces: Constrained Moment versus Generalized Bloch Theorem

    KAUST Repository

    Dong, Yao-Jun; Belabbes, Abderrezak; Manchon, Aurelien

    2017-01-01

    Dzyaloshinskii-Moriya interaction (DMI) at Pt/Co interfaces is investigated theoretically using two different first-principles methods. The first uses the constrained moment method to build a spin spiral in real space, while the second uses the generalized Bloch theorem approach to construct a spin spiral in reciprocal space. We show that although the two methods produce an overall similar total DMI energy, the dependence of DMI on the spin spiral wavelength is dramatically different. We suggest that long-range magnetic interactions, which determine itinerant magnetism in transition metals, are responsible for this discrepancy. We conclude that the generalized Bloch theorem approach is better suited to modeling DMI in transition-metal systems, where magnetism is delocalized, while the constrained moment approach is mostly applicable to weak or insulating magnets, where magnetism is localized.

  1. Modeling Dzyaloshinskii-Moriya Interaction at Transition Metal Interfaces: Constrained Moment versus Generalized Bloch Theorem

    KAUST Repository

    Dong, Yao-Jun

    2017-10-29

    Dzyaloshinskii-Moriya interaction (DMI) at Pt/Co interfaces is investigated theoretically using two different first-principles methods. The first uses the constrained moment method to build a spin spiral in real space, while the second uses the generalized Bloch theorem approach to construct a spin spiral in reciprocal space. We show that although the two methods produce an overall similar total DMI energy, the dependence of DMI on the spin spiral wavelength is dramatically different. We suggest that long-range magnetic interactions, which determine itinerant magnetism in transition metals, are responsible for this discrepancy. We conclude that the generalized Bloch theorem approach is better suited to modeling DMI in transition-metal systems, where magnetism is delocalized, while the constrained moment approach is mostly applicable to weak or insulating magnets, where magnetism is localized.

  2. Model Predictive Control Based on Kalman Filter for Constrained Hammerstein-Wiener Systems

    Directory of Open Access Journals (Sweden)

    Man Hong

    2013-01-01

    To precisely track the reactor temperature over the entire operating range, a constrained Hammerstein-Wiener model describing nonlinear chemical processes such as the continuous stirred tank reactor (CSTR) is proposed. A predictive control algorithm based on the Kalman filter for constrained Hammerstein-Wiener systems is designed. An output feedback control law for the linear subsystem is derived by state observation. The magnitude of the reaction heat produced and its influence on the output are estimated by the Kalman filter. The observation and estimation results are propagated by a multistep predictive approach. Actual control variables are computed, subject to the constraints of the optimal control problem over a finite horizon, via the receding-horizon scheme. A simulation example of the CSTR shows the effectiveness and feasibility of the proposed algorithm.
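
    The Kalman-filter estimation at the core of such algorithms can be illustrated with a scalar measurement update; the numbers are illustrative and unrelated to the paper's CSTR example:

```python
# Scalar Kalman filter measurement update: prior state x ~ N(m, P),
# measurement z = x + v with noise v ~ N(0, R).
def kalman_update(m, P, z, R):
    K = P / (P + R)          # Kalman gain: how much to trust the measurement
    m_new = m + K * (z - m)  # updated mean, pulled toward the measurement
    P_new = (1 - K) * P      # updated variance, always reduced by the update
    return m_new, P_new

m, P = 0.0, 4.0              # uncertain prior estimate
m, P = kalman_update(m, P, z=2.0, R=1.0)
print(m, P)
```

    In the full algorithm this update runs recursively alongside the multistep prediction, so the unmeasured reaction heat is estimated from the output before the constrained optimization computes the control move.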

  3. Process-model simulations of cloud albedo enhancement by aerosols in the Arctic

    Science.gov (United States)

    Kravitz, Ben; Wang, Hailong; Rasch, Philip J.; Morrison, Hugh; Solomon, Amy B.

    2014-01-01

    A cloud-resolving model is used to simulate the effectiveness of Arctic marine cloud brightening via injection of cloud condensation nuclei (CCN), either through geoengineering or other increased sources of Arctic aerosols. An updated cloud microphysical scheme is employed, with prognostic CCN and cloud particle numbers in both liquid and mixed-phase marine low clouds. Injection of CCN into the marine boundary layer can delay the collapse of the boundary layer and increase low-cloud albedo. Albedo increases are stronger for pure liquid clouds than mixed-phase clouds. Liquid precipitation can be suppressed by CCN injection, whereas ice precipitation (snow) is affected less; thus, the effectiveness of brightening mixed-phase clouds is lower than for liquid-only clouds. CCN injection into a clean regime results in a greater albedo increase than injection into a polluted regime, consistent with current knowledge about aerosol–cloud interactions. Unlike previous studies investigating warm clouds, dynamical changes in circulation owing to precipitation changes are small. According to these results, which are dependent upon the representation of ice nucleation processes in the employed microphysical scheme, Arctic geoengineering is unlikely to be effective as the sole means of altering the global radiation budget but could have substantial local radiative effects. PMID:25404677

  4. Comparison of Cloud Properties from CALIPSO-CloudSat and Geostationary Satellite Data

    Science.gov (United States)

    Nguyen, L.; Minnis, P.; Chang, F.; Winker, D.; Sun-Mack, S.; Spangenberg, D.; Austin, R.

    2007-01-01

    Cloud properties are being derived in near-real time from geostationary satellite imager data for a variety of weather and climate applications and research. Assessment of the uncertainties in each of the derived cloud parameters is essential for confident use of the products. Determination of cloud amount, cloud-top height, and cloud layering is especially important for using these real-time products for applications such as aircraft icing condition diagnosis and numerical weather prediction model assimilation. Furthermore, the distribution of clouds as a function of altitude has become a central component of efforts to evaluate climate model cloud simulations. Validation of those parameters has been difficult except over limited areas where ground-based active sensors, such as cloud radars or lidars, have been available on a regular basis. Retrievals of cloud properties are sensitive to the surface background, time of day, and the clouds themselves. Thus, it is essential to assess the geostationary satellite retrievals over a variety of locations. The availability of cloud radar data from CloudSat and lidar data from CALIPSO makes it possible to perform those assessments over each geostationary domain at 0130 and 1330 LT. In this paper, CloudSat and CALIPSO data are matched with contemporaneous Geostationary Operational Environmental Satellite (GOES), Multi-functional Transport Satellite (MTSAT), and Meteosat-8 data. Unlike comparisons with cloud products derived from A-Train imagers, this study considers comparisons of nadir active sensor data with off-nadir retrievals. These matched data are used to determine the uncertainties in cloud-top heights and cloud amounts derived from the geostationary satellite data using the Clouds and the Earth's Radiant Energy System (CERES) cloud retrieval algorithms. The CERES multi-layer cloud detection method is also evaluated to determine its accuracy and limitations in the off-nadir mode. The results will be useful for

  5. EDITORIAL: Aerosol cloud interactions—a challenge for measurements and modeling at the cutting edge of cloud climate interactions

    Science.gov (United States)

    Spichtinger, Peter; Cziczo, Daniel J.

    2008-04-01

    Research on aerosol properties and cloud characteristics has historically been divided into two separate disciplines within the field of atmospheric science. As such, it has been uncommon for a single researcher, or even a research group, to have considerable expertise in both subject areas. The recent attention paid to global climate change has shown that clouds can have a considerable effect on the Earth's climate and that one of the most uncertain aspects in their formation, persistence, and ultimate dissipation is the role played by aerosols. This highlights the need for researchers in both disciplines to interact more closely than they have in the past. This is the vision behind this focus issue of Environmental Research Letters. Certain interactions between aerosols and clouds are relatively well studied and understood. For example, it is known that an increase in the aerosol concentration will increase the number of droplets in warm clouds, decrease their average size, reduce the rate of precipitation, and extend the cloud lifetime. Other effects are not as well known. For example, persistent ice-supersaturated conditions are observed in the upper troposphere that appear to exceed our understanding of the conditions required for cirrus cloud formation. Further, the interplay of dynamics versus effects purely attributed to aerosols remains highly uncertain. The purpose of this focus issue is to consider the current state of knowledge of aerosol/cloud interactions, to define the contemporary uncertainties, and to outline research foci as we strive to better understand the Earth's climate system. This focus issue brings together laboratory experiments, field data, and model studies. The authors address issues associated with warm liquid water, cold ice, and intermediate-temperature mixed-phase clouds. The topics include the uncertainty associated with the effect of black carbon and organics (aerosol types of anthropogenic interest) on droplet and ice formation.

  6. Modeling Dynamic Contrast-Enhanced MRI Data with a Constrained Local AIF

    DEFF Research Database (Denmark)

    Duan, Chong; Kallehauge, Jesper F.; Pérez-Torres, Carlos J

    2018-01-01

    PURPOSE: This study aims to develop a constrained local arterial input function (cL-AIF) to improve quantitative analysis of dynamic contrast-enhanced (DCE)-magnetic resonance imaging (MRI) data by accounting for the contrast-agent bolus amplitude error in the voxel-specific AIF. PROCEDURES....... RESULTS: When the data model included the cL-AIF, tracer kinetic parameters were correctly estimated from in silico data under contrast-to-noise conditions typical of clinical DCE-MRI experiments. Considering the clinical cervical cancer data, Bayesian model selection was performed for all tumor voxels...

  7. Kovacs effect and fluctuation-dissipation relations in 1D kinetically constrained models

    International Nuclear Information System (INIS)

    Buhot, Arnaud

    2003-01-01

    Strong and fragile glass relaxation behaviours are obtained simply by changing the constraints of the kinetically constrained Ising chain from symmetric to purely asymmetric. We study the out-of-equilibrium dynamics of these two models, focusing on the Kovacs effect and the fluctuation-dissipation (FD) relations. The Kovacs or memory effect, commonly observed in structural glasses, is present for both constraints but enhanced with the asymmetric ones. Most surprisingly, the related FD relations satisfy the FD theorem in both cases. This result strongly differs from the simple quenching procedure, where the asymmetric model presents strong deviations from the FD theorem

  8. A cost-constrained model of strategic service quality emphasis in nursing homes.

    Science.gov (United States)

    Davis, M A; Provan, K G

    1996-02-01

    This study employed structural equation modeling to test the relationships between three aspects of the environmental context of nursing homes (Medicaid dependence, ownership status, and market demand) and two basic strategic orientations: low cost and differentiation based on service quality emphasis. Hypotheses were proposed and tested against data collected from a sample of nursing homes operating in a single state. Because of the overwhelming importance of cost control in the nursing home industry, a cost-constrained strategy perspective was supported. Specifically, while the three contextual variables had no direct effect on service quality emphasis, the entire model was supported when cost control orientation was introduced as a mediating variable.

  9. A Chance-Constrained Economic Dispatch Model in Wind-Thermal-Energy Storage System

    Directory of Open Access Journals (Sweden)

    Yanzhe Hu

    2017-03-01

    Full Text Available As a type of renewable energy, wind energy is integrated into the power system at increasingly high penetration levels. It is challenging for power system operators (PSOs) to cope with the uncertainty and variation of the wind power and its forecasts. A chance-constrained economic dispatch (ED) model for the wind-thermal-energy storage system (WTESS) is developed in this paper. An optimization model with the wind power and the energy storage system (ESS) is first established with consideration of both the economic benefits of the system and reduced wind curtailments. The original wind power generation is processed by the ESS to obtain the final wind power output generation (FWPG). A Gaussian mixture model (GMM) distribution is adopted to characterize the probability density and cumulative distribution functions with an analytical expression. Then, a chance-constrained ED model integrating the wind-energy storage system (W-ESS) is developed by considering both the overestimation costs and the underestimation costs of the system, and it is solved by the sequential linear programming method. Numerical simulations using the wind power data of four wind farms are performed on the developed ED model with the IEEE 30-bus system. It is verified that the developed ED model is effective in integrating the uncertain and variable wind power. The GMM distribution accurately fits the actual distribution of the final wind power output, and the ESS helps effectively decrease the operation costs.
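
The GMM fitting step mentioned in the record can be sketched with plain EM on synthetic data; the two-regime sample and all parameters below are illustrative stand-ins, not the paper's four-wind-farm data:

```python
import numpy as np

def fit_gmm_1d(x, k=2, iters=200):
    """Fit a k-component 1D Gaussian mixture with plain EM
    (deterministic quantile initialization)."""
    mu = np.quantile(x, (np.arange(k) + 0.5) / k)
    var = np.full(k, x.var())
    w = np.full(k, 1.0 / k)
    for _ in range(iters):
        # E-step: responsibility of each component for each sample
        dens = np.exp(-0.5 * (x[:, None] - mu) ** 2 / var) / np.sqrt(2 * np.pi * var)
        r = w * dens
        r /= r.sum(axis=1, keepdims=True)
        # M-step: re-estimate weights, means, variances
        n = r.sum(axis=0)
        w, mu = n / len(x), (r * x[:, None]).sum(axis=0) / n
        var = (r * (x[:, None] - mu) ** 2).sum(axis=0) / n
    return w, mu, var

# Two synthetic wind regimes standing in for a final-wind-power sample
rng = np.random.default_rng(1)
sample = np.concatenate([rng.normal(5, 1, 2000), rng.normal(20, 3, 2000)])
weights, means, variances = fit_gmm_1d(sample)
print(np.sort(means))
```

The analytical form of the fitted mixture is what makes the chance constraints tractable: the cumulative distribution of a GMM is a weighted sum of Gaussian CDFs, which can be inverted numerically when enforcing a probability bound.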

  10. Combining observations and models to reduce uncertainty in the cloud response to global warming

    Science.gov (United States)

    Norris, J. R.; Myers, T.; Chellappan, S.

    2017-12-01

    Currently there is large uncertainty on how subtropical low-level clouds will respond to global warming and whether they will act as a positive feedback or negative feedback. Global climate models substantially agree on what changes in atmospheric structure and circulation will occur with global warming but greatly disagree over how clouds will respond to these changes in structure and circulation. An examination of models with the most realistic simulations of low-level cloudiness indicates that the model cloud response to atmospheric changes associated with global warming is quantitatively similar to the model cloud response to atmospheric changes at interannual time scales. For these models, the cloud response to global warming predicted by multilinear regression using coefficients derived from interannual time scales is quantitatively similar to the cloud response to global warming directly simulated by the model. Since there is a large spread among cloud response coefficients even among models with the most realistic cloud simulations, substitution of coefficients derived from satellite observations reduces the uncertainty range of the low-level cloud feedback. Increased sea surface temperature associated with global warming acts to reduce low-level cloudiness, which is partially offset by increased lower tropospheric stratification that acts to enhance low-level cloudiness. Changes in free-tropospheric relative humidity, subsidence, and horizontal advection have only a small impact on low-level cloud. The net reduction in subtropical low-level cloudiness increases absorption of solar radiation by the climate system, thus resulting in a weak positive feedback.
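
The regression step described in the record (deriving cloud-controlling coefficients at interannual time scales and applying them to warming-induced changes) can be sketched as follows; the data and sensitivities are synthetic, chosen only to mimic the signs reported in the abstract:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 240  # e.g. 20 years of monthly anomalies (synthetic stand-ins)
sst = rng.normal(0, 0.5, n)          # SST anomaly, K
eis = rng.normal(0, 0.4, n)          # stability anomaly, K
# "True" sensitivities assumed for illustration: cloud decreases with SST,
# increases with lower-tropospheric stratification
cloud = -3.0 * sst + 1.5 * eis + rng.normal(0, 0.5, n)  # cloud anomaly, %

# Multilinear regression at interannual time scales: cloud = a*SST + b*EIS
X = np.column_stack([sst, eis])
coef, *_ = np.linalg.lstsq(X, cloud, rcond=None)
a, b = coef

# Apply the interannual coefficients to assumed global-warming changes
d_sst, d_eis = 1.0, 0.3   # illustrative changes per K of warming
d_cloud = a * d_sst + b * d_eis
print(a, b, d_cloud)
```

Substituting observationally derived coefficients for `a` and `b`, as the record describes, is what narrows the feedback uncertainty: the warming-induced cloud change is then anchored to observed, rather than modeled, sensitivities.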

  11. High-Resolution Global Modeling of the Effects of Subgrid-Scale Clouds and Turbulence on Precipitating Cloud Systems

    Energy Technology Data Exchange (ETDEWEB)

    Bogenschutz, Peter [National Center for Atmospheric Research, Boulder, CO (United States); Moeng, Chin-Hoh [National Center for Atmospheric Research, Boulder, CO (United States)

    2015-10-13

    The PIs at the National Center for Atmospheric Research (NCAR), Chin-Hoh Moeng and Peter Bogenschutz, have primarily focused their time on the implementation of the Simplified Higher-Order Turbulence Closure (SHOC; Bogenschutz and Krueger 2013) in the Multi-scale Modeling Framework (MMF) global model and on testing SHOC on deep convective cloud regimes.

  12. A Coupled fvGCM-GCE Modeling System: A 3D Cloud Resolving Model and a Regional Scale Model

    Science.gov (United States)

    Tao, Wei-Kuo

    2005-01-01

    Recent GEWEX Cloud System Study (GCSS) model comparison projects have indicated that cloud-resolving models (CRMs) agree with observations better than traditional single-column models in simulating various types of clouds and cloud systems from different geographic locations. Current and future NASA satellite programs can provide cloud, precipitation, aerosol and other data at very fine spatial and temporal scales. Using these satellite data to improve the understanding of the physical processes that are responsible for the variation in global and regional climate and hydrological systems requires a coupled general circulation model (GCM) and cloud-scale model (termed a super-parameterization or multi-scale modeling framework, MMF). The use of a GCM enables global coverage, and the use of a CRM allows for better and more sophisticated physical parameterizations. NASA satellite and field campaign cloud-related datasets can provide initial conditions as well as validation for both the MMF and CRMs. The Goddard MMF is based on the 2D Goddard Cumulus Ensemble (GCE) model and the Goddard finite-volume general circulation model (fvGCM), and it has started production runs with two years of results (1998 and 1999). Also, at Goddard, we have implemented several Goddard microphysical schemes (2ICE, several 3ICE), Goddard radiation (including explicitly calculated cloud optical properties), and the Goddard Land Information System (LIS, which includes the CLM and NOAH land surface models) into a next-generation regional-scale model, WRF. In this talk, I will present: (1) a brief review of the GCE model and its applications to precipitation processes (microphysical and land processes); (2) the Goddard MMF, the major differences between the two existing MMFs (CSU MMF and Goddard MMF), and preliminary results (the comparison with traditional GCMs); (3) a discussion of the Goddard WRF version (its developments and applications); and (4) the characteristics of the four-dimensional cloud data

  13. Using In Situ Observations and Satellite Retrievals to Constrain Large-Eddy Simulations and Single-Column Simulations: Implications for Boundary-Layer Cloud Parameterization in the NASA GISS GCM

    Science.gov (United States)

    Remillard, J.

    2015-12-01

    Two low-cloud periods from the CAP-MBL deployment of the ARM Mobile Facility at the Azores are selected through a cluster analysis of ISCCP cloud property matrices, so as to represent two low-cloud weather states that the GISS GCM severely underpredicts not only in that region but also globally. The two cases represent (1) shallow cumulus clouds occurring in a cold-air outbreak behind a cold front, and (2) stratocumulus clouds occurring when the region was dominated by a high-pressure system. Observations and MERRA reanalysis are used to derive specifications used for large-eddy simulations (LES) and single-column model (SCM) simulations. The LES captures the major differences in horizontal structure between the two low-cloud fields, but there are unconstrained uncertainties in cloud microphysics and challenges in reproducing W-band Doppler radar moments. The SCM run on the vertical grid used for CMIP-5 runs of the GCM does a poor job of representing the shallow cumulus case and is unable to maintain an overcast deck in the stratocumulus case, providing some clues regarding problems with low-cloud representation in the GCM. SCM sensitivity tests with a finer vertical grid in the boundary layer show substantial improvement in the representation of cloud amount for both cases. GCM simulations with CMIP-5 versus finer vertical gridding in the boundary layer are compared with observations. The adoption of a two-moment cloud microphysics scheme in the GCM is also tested in this framework. The methodology followed in this study, with the process-based examination of different time and space scales in both models and observations, represents a prototype for GCM cloud parameterization improvements.

  14. A distance constrained synaptic plasticity model of C. elegans neuronal network

    Science.gov (United States)

    Badhwar, Rahul; Bagler, Ganesh

    2017-03-01

    Brain research has been driven by enquiry into principles of brain structure organization and its control mechanisms. The neuronal wiring map of C. elegans, the only complete connectome available to date, presents an incredible opportunity to learn basic governing principles that drive the structure and function of its neuronal architecture. Despite its apparently simple nervous system, C. elegans is known to possess complex functions. The nervous system forms an important underlying framework which specifies phenotypic features associated with sensation, movement, conditioning and memory. In this study, with the help of graph-theoretical models, we investigated the C. elegans neuronal network to identify network features that are critical for its control. The 'driver neurons' are associated with important biological functions such as reproduction, signalling processes and anatomical structural development. We created 1D and 2D network models of the C. elegans neuronal system to probe the role of features that confer controllability and small-world nature. The simple 1D ring model is critically poised with respect to the number of feed-forward motifs, neuronal clustering and characteristic path length in response to synaptic rewiring, indicating optimal rewiring. Using the empirically observed distance constraint in the neuronal network as a guiding principle, we created a distance-constrained synaptic plasticity model that simultaneously explains the small-world nature, the saturation of feed-forward motifs and the observed number of driver neurons. The distance-constrained model suggests optimal long-distance synaptic connections as a key feature specifying control of the network.
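
A toy version of a ring network with distance-constrained rewiring can be written with standard-library Python; the network size, decay length, and rewiring probability below are illustrative, not the values used in the study:

```python
import math, random

def ring_lattice(n, half_k):
    # Ring of n nodes, each linked to half_k neighbours on each side
    adj = {i: set() for i in range(n)}
    for i in range(n):
        for j in range(1, half_k + 1):
            adj[i].add((i + j) % n)
            adj[(i + j) % n].add(i)
    return adj

def clustering(adj):
    # Average local clustering coefficient
    total = 0.0
    for v, nbrs in adj.items():
        k = len(nbrs)
        if k < 2:
            continue
        links = sum(1 for a in nbrs for b in nbrs if a < b and b in adj[a])
        total += 2.0 * links / (k * (k - 1))
    return total / len(adj)

def rewire_distance_constrained(adj, n, lam, p, seed=0):
    # Rewire each edge with probability p to a target drawn with
    # probability decaying exponentially with ring distance (the
    # "distance constraint"): nearby targets are strongly preferred.
    rng = random.Random(seed)
    for i in range(n):
        for j in list(adj[i]):
            if j > i and rng.random() < p:
                weights = [math.exp(-min(abs(i - t), n - abs(i - t)) / lam)
                           if t != i and t not in adj[i] else 0.0
                           for t in range(n)]
                r, acc, new = rng.random() * sum(weights), 0.0, i
                for t, w in enumerate(weights):
                    acc += w
                    if acc >= r:
                        new = t
                        break
                adj[i].discard(j); adj[j].discard(i)
                adj[i].add(new); adj[new].add(i)
    return adj

adj = ring_lattice(100, 2)
c_ring = clustering(adj)               # exactly 0.5 for this lattice
adj = rewire_distance_constrained(adj, 100, lam=3.0, p=0.2)
c_rewired = clustering(adj)
print(c_ring, c_rewired)
```

This is a Watts-Strogatz-style construction with the uniform rewiring target replaced by a distance-weighted one; in the study's framing, varying the decay length trades off clustering, path length, and motif counts.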

  15. Observations of Co-variation in Cloud Properties and their Relationships with Atmospheric State

    Science.gov (United States)

    Sinclair, K.; van Diedenhoven, B.; Fridlind, A. M.; Arnold, T. G.; Yorks, J. E.; Heymsfield, G. M.; McFarquhar, G. M.; Um, J.

    2017-12-01

    Radiative properties of upper-tropospheric ice clouds are generally not well represented in global and cloud models. Cloud-top height, cloud thermodynamic phase, cloud optical thickness, cloud water path, particle size and ice crystal shape all serve as observational targets for models to constrain cloud properties. Trends or biases in these cloud properties could have profound effects on the climate since they affect cloud radiative properties. Better understanding of the co-variation between these cloud properties and linkages with atmospheric state variables can lead to better representation of clouds in models by reducing biases in their micro- and macro-physical properties as well as their radiative properties. This will also enhance our general understanding of cloud processes. In this analysis we look at remote sensing, in situ and reanalysis data from the MODIS Airborne Simulator (MAS), Cloud Physics Lidar (CPL), Cloud Radar System (CRS), GEOS-5 reanalysis data and GOES imagery obtained during the Tropical Composition, Cloud and Climate Coupling (TC4) airborne campaign. The MAS, CPL and CRS were mounted on the ER-2 high-altitude aircraft during this campaign. In situ observations of ice size and shape were made aboard the DC-8 and WB-57 aircraft. We explore how thermodynamic phase, ice effective radius, particle shape and radar reflectivity vary with altitude and also investigate how these observed cloud properties vary with cloud type, cloud-top temperature, relative humidity and wind profiles. Observed systematic relationships are supported by physical interpretations of cloud processes and any unexpected differences are examined.

  16. Source model for the Copahue volcano magma plumbing system constrained by InSAR surface deformation observations

    Science.gov (United States)

    Lundgren, P.; Nikkhoo, M.; Samsonov, S. V.; Milillo, P.; Gil-Cruz, F., Sr.; Lazo, J.

    2017-12-01

    Copahue volcano, straddling the edge of the Agrio-Caviahue caldera along the Chile-Argentina border in the southern Andes, has been in unrest since inflation began in late 2011. We constrain Copahue's source models with satellite and airborne interferometric synthetic aperture radar (InSAR) deformation observations. InSAR time series from descending track RADARSAT-2 and COSMO-SkyMed data span the entire inflation period from 2011 to 2016, with their initially high rates of 12 and 15 cm/yr, respectively, slowing only slightly despite ongoing small eruptions through 2016. InSAR ascending and descending track time series for the 2013-2016 time period constrain a two-source compound dislocation model, with a rate of volume increase of 13 × 10^6 m^3/yr. They consist of a shallow, near-vertical, elongated source centered at 2.5 km beneath the summit and a deeper, shallowly plunging source centered at 7 km depth connecting the shallow source to the deeper caldera. The deeper source is located directly beneath the volcano tectonic seismicity, with the lower bounds of the seismicity parallel to the plunge of the deep source. InSAR time series also show normal fault offsets on the NE flank Copahue faults. Coulomb stress change calculations for right-lateral strike slip (RLSS), thrust, and normal receiver faults show positive values in the north caldera for both RLSS and normal faults, suggesting that northward trending seismicity and Copahue fault motion within the caldera are caused by the modeled sources. Together, the InSAR-constrained source model and the seismicity suggest a deep conduit or transfer zone where magma moves from the central caldera to Copahue's upper edifice.

  17. Microphysical variability of vigorous Amazonian deep convection observed by CloudSat, and relevance for cloud-resolving model

    Science.gov (United States)

    Dodson, J. B.; Taylor, P. C.

    2017-12-01

    The number and variety of both satellite cloud observations and cloud simulations are increasing rapidly. This creates a challenge in identifying the best methods for quantifying the physical processes associated with deep convection, and then comparing convective observations with simulations. The use of satellite simulators in conjunction with model output is an increasingly popular method in comparison studies. However, the complexity of deep convective systems renders simplistic comparison metrics hazardous, possibly resulting in misleading or even contradictory conclusions. To investigate this, CloudSat observations of Amazonian deep convective cores (DCCs) and associated anvils are compared and contrasted with output from cloud-resolving models in a manner that both highlights the microphysical properties of observed convection and shows how microphysical parameterizations affect the robustness of such comparisons. First, contoured frequency by altitude diagrams (CFADs) are calculated from the reflectivity fields of DCCs observed by CloudSat. This reveals two distinct modes of hydrometeor variability in the high-level cloud region, one dominated by snow and aggregates and the other by large graupel and hail. Second, output from the superparameterized Community Atmosphere Model (SP-CAM) is processed with the QuickBeam radar simulator to produce CFADs which can be compared with the observed CFADs. Two versions of SP-CAM are used: version 4, with single-moment microphysics which excludes graupel/hail, and version 5, with a double-moment scheme that includes graupel. The change from version 4 to 5 improves the reflectivity CFAD, even without corresponding changes to non-hydrometeor fields such as vertical velocity. However, it does not produce a realistic double hydrometeor mode.
Finally, the influences of microphysics are further tested in the System for Atmospheric Modeling (SAM), which allows for higher control over model parameters than
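
A CFAD as used in the record is just a reflectivity histogram computed per altitude bin and normalized so that each altitude row is a frequency distribution. A minimal sketch on synthetic data; the bin edges and the reflectivity profile are illustrative:

```python
import numpy as np

def cfad(reflectivity, altitudes, z_bins, alt_bins):
    """Contoured frequency by altitude diagram: for each altitude bin,
    the frequency distribution of reflectivity, normalized per altitude."""
    H, _, _ = np.histogram2d(altitudes, reflectivity, bins=[alt_bins, z_bins])
    row_sums = H.sum(axis=1, keepdims=True)
    row_sums[row_sums == 0] = 1.0           # guard empty altitude rows
    return H / row_sums                      # each altitude row sums to 1

# Synthetic profile: reflectivity decreasing with altitude plus scatter
rng = np.random.default_rng(0)
alt = rng.uniform(0, 15, 10000)                      # km
dbz = 40 - 2.5 * alt + rng.normal(0, 5, 10000)       # dBZ
f = cfad(dbz, alt, np.arange(-20, 61, 5), np.arange(0, 16, 1))
print(f.shape)
```

The per-altitude normalization is what makes CFADs comparable between observations and simulator output even when sample counts differ, which is the basis of the comparison described above.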

  18. The collision of a strong shock with a gas cloud: a model for Cassiopeia A

    International Nuclear Information System (INIS)

    Sgro, A.G.

    1975-01-01

    The result of the collision of the shock with the cloud is a shock traveling around the cloud, a shock transmitted into the cloud, and a shock reflected from the cloud. By equating the cooling time of the post-transmitted-shock gas to the time required for the transmitted shock to travel the length of the cloud, a critical cloud density n_c' is defined. For clouds with density greater than n_c', the post-transmitted-shock gas cools rapidly and then emits the lines of the lower ionization stages of its constituent elements. The structure of such a cloud and its expected appearance to an observer are discussed and compared with the quasi-stationary condensations of Cas A. Conversely, clouds with density less than n_c' remain hot for several thousand years and are sources of X-radiation whose temperatures are much less than that of the intercloud gas. After the transmitted shock passes, the cloud pressure is greater than the pressure in the surrounding gas, causing the cloud to expand and the emission to decrease from its value just after the collision. A model in which the soft X-radiation of Cas A is due to a collection of such clouds is discussed. The faint emission patches to the north of Cas A are interpreted as pre-shock clouds which will probably become quasi-stationary condensations after being hit by the shock
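
The definition of the critical density can be written schematically by equating the post-transmitted-shock cooling time to the shock crossing time of the cloud; the symbols below are illustrative, not the paper's notation:

```latex
t_{\mathrm{cool}}(n) \sim \frac{k T_t}{n\,\Lambda(T_t)}, \qquad
t_{\mathrm{cross}} = \frac{L_{\mathrm{cl}}}{v_t}, \qquad
t_{\mathrm{cool}}(n_c') = t_{\mathrm{cross}}
\;\Longrightarrow\;
n_c' \sim \frac{k T_t\, v_t}{\Lambda(T_t)\, L_{\mathrm{cl}}}
```

where T_t and v_t are the post-transmitted-shock temperature and shock speed, Λ is the cooling function, and L_cl is the cloud length; denser clouds (n > n_c') cool within a crossing time, as the abstract states.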

  19. Dynamical insurance models with investment: Constrained singular problems for integrodifferential equations

    Science.gov (United States)

    Belkina, T. A.; Konyukhova, N. B.; Kurochkin, S. V.

    2016-01-01

    Previous and new results are used to compare two mathematical insurance models with identical insurance company strategies in a financial market, namely, when the entire current surplus or its constant fraction is invested in risky assets (stocks), while the rest of the surplus is invested in a risk-free asset (bank account). Model I is the classical Cramér-Lundberg risk model with an exponential claim size distribution. Model II is a modification of the classical risk model (risk process with stochastic premiums) with exponential distributions of claim and premium sizes. For the survival probability of an insurance company over infinite time (as a function of its initial surplus), there arise singular problems for second-order linear integrodifferential equations (IDEs) defined on a semi-infinite interval and having nonintegrable singularities at zero: model I leads to a singular constrained initial value problem for an IDE with a Volterra integral operator, while model II leads to a more complicated nonlocal constrained problem for an IDE with a non-Volterra integral operator. A brief overview of previous results for these two problems depending on several positive parameters is given, and new results are presented. Additional results are concerned with the formulation, analysis, and numerical study of "degenerate" problems for both models, i.e., problems in which some of the IDE parameters vanish; moreover, passages to the limit with respect to the parameters through which we proceed from the original problems to the degenerate ones are singular for small and/or large argument values. Such problems are of mathematical and practical interest in themselves. Along with insurance models without investment, they describe the case of surplus completely invested in risk-free assets, as well as some noninsurance models of surplus dynamics, for example, charity-type models.

  20. A Modeling Method of Fluttering Leaves Based on Point Cloud

    Science.gov (United States)

    Tang, J.; Wang, Y.; Zhao, Y.; Hao, W.; Ning, X.; Lv, K.; Shi, Z.; Zhao, M.

    2017-09-01

    Leaves falling gently or fluttering are a common phenomenon in natural scenes. The authenticity of falling leaves plays an important part in the dynamic modeling of natural scenes, and the falling-leaf model has wide applications in animation and virtual reality. We propose a novel modeling method for fluttering leaves based on point clouds in this paper. According to the shape and weight of the leaves and the wind speed, three basic trajectories of falling leaves are defined: rotation falling, roll falling, and screw-roll falling. At the same time, a parallel algorithm based on OpenMP is implemented to satisfy real-time requirements in practical applications. Experimental results demonstrate that the proposed method is amenable to the incorporation of a variety of desirable effects.

  1. A MODELING METHOD OF FLUTTERING LEAVES BASED ON POINT CLOUD

    Directory of Open Access Journals (Sweden)

    J. Tang

    2017-09-01

    Full Text Available Leaves falling gently or fluttering are a common phenomenon in natural scenes. The authenticity of falling leaves plays an important part in the dynamic modeling of natural scenes, and the falling-leaf model has wide applications in animation and virtual reality. We propose a novel modeling method for fluttering leaves based on point clouds in this paper. According to the shape and weight of the leaves and the wind speed, three basic trajectories of falling leaves are defined: rotation falling, roll falling, and screw-roll falling. At the same time, a parallel algorithm based on OpenMP is implemented to satisfy real-time requirements in practical applications. Experimental results demonstrate that the proposed method is amenable to the incorporation of a variety of desirable effects.
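
The three falling modes named in the abstract (rotation, roll, screw roll) are naturally expressed as parametric trajectories. A sketch of a screw-roll descent; all parameters are illustrative, not taken from the paper:

```python
import numpy as np

def screw_roll_trajectory(t, drop_rate=0.8, radius=0.5, omega=4.0,
                          drift=(0.3, 0.0)):
    """One of the three falling modes, sketched as a parametric curve:
    a helical (screw-roll) descent with a constant wind drift."""
    x = radius * np.cos(omega * t) + drift[0] * t
    y = radius * np.sin(omega * t) + drift[1] * t
    z = -drop_rate * t                      # steady descent
    return np.stack([x, y, z], axis=1)

t = np.linspace(0, 5, 200)
path = screw_roll_trajectory(t)
print(path.shape)
```

Since each leaf's trajectory is independent of the others, the per-leaf loop parallelizes trivially, which is presumably what the OpenMP implementation exploits.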

  2. Lifetime-Aware Cloud Data Centers: Models and Performance Evaluation

    Directory of Open Access Journals (Sweden)

    Luca Chiaraviglio

    2016-06-01

    Full Text Available We present a model to evaluate the server lifetime in cloud data centers (DCs. In particular, when the server power level is decreased, the failure rate tends to be reduced as a consequence of the limited number of components powered on. However, the variation between the different power states triggers a failure rate increase. We therefore consider these two effects in a server lifetime model, subject to an energy-aware management policy. We then evaluate our model in a realistic case study. Our results show that the impact on the server lifetime is far from negligible. As a consequence, we argue that a lifetime-aware approach should be pursued to decide how and when to apply a power state change to a server.
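
The trade-off the model captures, a lower failure rate in the low-power state versus a penalty for every power-state transition, can be illustrated with a toy expected-failures calculation; all rates here are invented for illustration:

```python
# Illustrative sketch: a server's failure intensity is lower in a low-power
# state, but each power-state transition adds a fixed failure-rate penalty,
# so aggressive on/off cycling can shorten lifetime overall.
def expected_failures(hours_full, hours_low, transitions,
                      rate_full=1e-4, rate_low=6e-5, per_transition=5e-4):
    return (hours_full * rate_full + hours_low * rate_low
            + transitions * per_transition)

always_on = expected_failures(8760, 0, 0)            # one year at full power
cycled    = expected_failures(4380, 4380, 730)       # daily on/off cycling
print(always_on, cycled)
```

With these made-up numbers the cycled server accumulates more expected failures than the always-on one, mirroring the paper's point that an energy-aware policy should weigh transition wear against the savings of low-power states.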

  3. A comparative analysis of pricing models for enterprise cloud platforms

    CSIR Research Space (South Africa)

    Mvelase, P

    2013-09-01

    Full Text Available on the realization that it is not economically viable for SMMEs to acquire their own private cloud infrastructure or even subscribe to public cloud services as a single entity. In our VE-enabled cloud enterprise architecture for SMMEs, temporary co...

  4. Security Certification Challenges in a Cloud Computing Delivery Model

    Science.gov (United States)

    2010-04-27

    Relevant security standards, certifications, and guidance: NIST SP 800 series; ISO/IEC 27001 framework; Cloud Security Alliance; Statement of... (slide residue: a table mapping CSA domains / cloud features to ISO 27001 controls, cloud service provider responsibility, government agency responsibility, security gap analysis, and compensating controls)

  5. ARTISTIC VISUALIZATION OF TRAJECTORY DATA USING CLOUD MODEL

    Directory of Open Access Journals (Sweden)

    T. Wu

    2017-09-01

    Full Text Available Rapid advance of location acquisition technologies boosts the generation of trajectory data, which track the traces of moving objects. A trajectory is typically represented by a sequence of timestamped geographical locations. Data visualization is an efficient means to represent distributions and structures of datasets and reveal hidden patterns in the data. In this paper, we explore a cloud model-based method for the generation of stylized renderings of trajectory data. The artistic visualizations of the proposed method are not intended for data mining or similar tasks, but instead show the aesthetic effect of the traces of moving objects in a distorted manner. The techniques used to create the images of traces of moving objects include uncertain lines based on an extended cloud model, stroke-based rendering of geolocations in varying styles, and stylistic shading with aesthetic effects for print or electronic displays, as well as various parameters that can be further personalized. The influence of different parameters, including step size, type of stroke and colour mode, on the aesthetic qualities of the painted images is investigated, and quantitative comparisons using four aesthetic measures are also included in the experiments. The experimental results suggest that the proposed method offers the advantages of uncertainty representation, simplicity and effectiveness, and that it may inspire professional graphic designers and amateur users who are interested in playful and creative exploration of artistic visualization of trajectory data.

  6. Artistic Visualization of Trajectory Data Using Cloud Model

    Science.gov (United States)

    Wu, T.; Zhou, Y.; Zhang, L.

    2017-09-01

    The rapid advance of location acquisition technologies has boosted the generation of trajectory data, which track the traces of moving objects. A trajectory is typically represented by a sequence of timestamped geographical locations. Data visualization is an efficient means of representing the distributions and structures of datasets and revealing hidden patterns in the data. In this paper, we explore a cloud model-based method for generating stylized renderings of trajectory data. The artistic visualizations produced by the proposed method are not intended to support data mining or similar tasks; instead, they show the aesthetic effect of the traces of moving objects in a distorted manner. The techniques used to create the images include uncertain line rendering using an extended cloud model, stroke-based rendering of geolocations in varying styles, and stylistic shading with aesthetic effects for print or electronic display, along with various parameters that can be further personalized. The influence of different parameters on the aesthetic qualities of the painted images is investigated, including step size, stroke type, and colour mode, and quantitative comparisons using four aesthetic measures are also included in the experiments. The experimental results suggest that the proposed method has the advantages of uncertainty modelling, simplicity, and effectiveness, and that it may inspire professional graphic designers and amateur users interested in playful and creative exploration of artistic visualization of trajectory data.
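
    The normal cloud model underlying this kind of rendering is easy to sketch. Below is a minimal forward normal cloud generator (after Li's cloud model), assuming the three standard numerical characteristics Ex (expectation), En (entropy) and He (hyper-entropy); the certainty degree of each drop could, for example, modulate stroke opacity. Function and parameter names are illustrative, not taken from the paper.

```python
import math
import random

def forward_cloud(ex, en, he, n, seed=42):
    """Forward normal cloud generator: each 'cloud drop' x is drawn from
    N(ex, en'^2) where en' ~ N(en, he^2), so He adds second-order
    uncertainty around the entropy En."""
    rng = random.Random(seed)
    drops = []
    for _ in range(n):
        en_prime = rng.gauss(en, he)
        if en_prime == 0.0:          # degenerate sample: drop sits at Ex
            drops.append((ex, 1.0))
            continue
        x = rng.gauss(ex, abs(en_prime))
        # certainty degree of the drop (1 at Ex, decaying outwards)
        mu = math.exp(-(x - ex) ** 2 / (2.0 * en_prime ** 2))
        drops.append((x, mu))
    return drops

drops = forward_cloud(ex=0.0, en=1.0, he=0.1, n=1000)
```

    Each (position, certainty) pair can then drive a stroke's placement and transparency.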

  7. Robust model predictive control for constrained continuous-time nonlinear systems

    Science.gov (United States)

    Sun, Tairen; Pan, Yongping; Zhang, Jun; Yu, Haoyong

    2018-02-01

    In this paper, a robust model predictive control (MPC) scheme is designed for a class of constrained continuous-time nonlinear systems with bounded additive disturbances. The robust MPC consists of a nonlinear feedback control and a continuous-time model-based dual-mode MPC. The nonlinear feedback control guarantees that the actual trajectory remains contained in a tube centred at the nominal trajectory. The dual-mode MPC is designed to ensure asymptotic convergence of the nominal trajectory to zero. This paper extends current results on discrete-time model-based tube MPC and linear system model-based tube MPC to continuous-time nonlinear model-based tube MPC. The feasibility and robustness of the proposed robust MPC are demonstrated by theoretical analysis and by applications to a cart-damper-spring system and a one-link robot manipulator.

  8. Inexact Multistage Stochastic Chance Constrained Programming Model for Water Resources Management under Uncertainties

    Directory of Open Access Journals (Sweden)

    Hong Zhang

    2017-01-01

    In order to formulate water allocation schemes under uncertainty in water resources management systems, an inexact multistage stochastic chance constrained programming (IMSCCP) model is proposed. The model integrates stochastic chance constrained programming, multistage stochastic programming, and inexact stochastic programming within a general optimization framework to handle the uncertainties occurring in both the constraints and the objective. These uncertainties are expressed as probability distributions, intervals with multiply distributed stochastic boundaries, dynamic features of the long-term water allocation plans, and so on. Compared with existing inexact multistage stochastic programming, the IMSCCP can be used to assess more system risks and handle more complicated uncertainties in water resources management systems. The IMSCCP model is applied to a hypothetical case study of water resources management. In order to construct an approximate solution for the model, a hybrid algorithm, which incorporates stochastic simulation, a back-propagation neural network, and a genetic algorithm, is proposed. The results show that the optimal value represents the maximal net system benefit achieved with a given confidence level under chance constraints, and the solutions provide optimal water allocation schemes for multiple users over a multiperiod planning horizon.
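
    The stochastic-simulation component that such hybrid algorithms embed can be illustrated compactly: a candidate allocation is checked against a chance constraint by Monte Carlo sampling of the uncertain coefficients. The demand and supply distributions below are invented for illustration and are not taken from the case study.

```python
import random

def chance_constraint_ok(x, sample_ab, alpha, n=5000, seed=1):
    """Estimate P(a . x <= b) by stochastic simulation and compare it
    with the required confidence level alpha."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n):
        a, b = sample_ab(rng)
        if sum(ai * xi for ai, xi in zip(a, x)) <= b:
            hits += 1
    return hits / n >= alpha

def sample_ab(rng):
    # hypothetical uncertain per-user demand factors and total water supply
    a = [rng.uniform(0.8, 1.2) for _ in range(3)]
    b = rng.uniform(9.0, 11.0)
    return a, b

feasible = chance_constraint_ok([2.0, 2.0, 2.0], sample_ab, alpha=0.9)
```

    In the full hybrid algorithm this check sits inside the genetic algorithm's fitness evaluation, optionally approximated by a trained neural network.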

  9. Cloud Computing Implementation Using the Roadmap for Cloud Computing Adoption (ROCCA) Model in the IT Consultant Industry

    Directory of Open Access Journals (Sweden)

    Panji Arief Perdana

    2017-09-01

    …increase the performance of PT Matrica Consulting Service, based on the characteristics of the cloud, which is flexible and secure to access as long as it is connected to the Internet and maintained properly.

  10. Constrained consequence

    CSIR Research Space (South Africa)

    Britz, K

    2011-09-01

    …their basic properties and relationship. In Section 3 we present a modal instance of these constructions, which also illustrates with an example how to reason abductively with constrained entailment in a causal or action-oriented context. In Section 4 we… of models with the former approach, whereas in Section 3.3 we give an example illustrating ways in which C can be defined with both. Here we employ the following versions of local consequence: Definition 3.4. Given a model M = ⟨W, R, V⟩ and formulas…

  11. Event-triggered decentralized robust model predictive control for constrained large-scale interconnected systems

    Directory of Open Access Journals (Sweden)

    Ling Lu

    2016-12-01

    This paper considers the problem of event-triggered decentralized model predictive control (MPC) for constrained large-scale linear systems subject to additive bounded disturbances. The constraint tightening method is utilized to formulate the MPC optimization problem. The local predictive control law for each subsystem is determined aperiodically by a relevant triggering rule, which allows a considerable reduction of the computational load. Robust feasibility and closed-loop stability are then proved, and it is shown that every subsystem state will be driven into a robust invariant set. Finally, the effectiveness of the proposed approach is illustrated via numerical simulations.
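
    The triggering idea can be sketched for a scalar subsystem: the control law (here a trivial state feedback standing in for the local MPC solve) is only recomputed when the measured state drifts too far from the nominal prediction made at the last trigger. All names and values below are illustrative, not the paper's formulation.

```python
def event_triggered_steps(x0, a, b, k, threshold, disturbances):
    """Toy event-triggered loop for a scalar subsystem x+ = a*x + b*u + w.

    The control u = -k*x is recomputed only when the measured state
    deviates more than `threshold` from the prediction made at the last
    trigger, which is the mechanism that saves computation."""
    x, x_pred = x0, x0
    u = -k * x
    triggers = 0
    for w in disturbances:
        x = a * x + b * u + w            # true subsystem with disturbance
        x_pred = a * x_pred + b * u      # nominal (disturbance-free) prediction
        if abs(x - x_pred) > threshold:  # triggering rule
            u = -k * x                   # "re-solve" the local controller
            x_pred = x                   # re-synchronise the prediction
            triggers += 1
    return x, triggers
```

    With no disturbance the prediction stays exact and the controller is never re-solved; persistent disturbances trigger recomputation.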

  12. Modeling Oil Exploration and Production: Resource-Constrained and Agent-Based Approaches

    International Nuclear Information System (INIS)

    Jakobsson, Kristofer

    2010-05-01

    Energy is essential to the functioning of society, and oil is the single largest commercial energy source. Some analysts have concluded that the peak in global oil production is imminent, while others disagree. Such incompatible views can persist because the issue of 'peak oil' cuts across the established scientific disciplines. The question is: what characterizes the modeling approaches that are available today, and how can they be further developed to improve a trans-disciplinary understanding of oil depletion? The objective of this thesis is to present long-term scenarios of oil production using a resource-constrained model (Paper I) and an agent-based model of the oil exploration process (Paper II). It is also an objective to assess the strengths, limitations, and future development potential of resource-constrained modeling, analytical economic modeling, and agent-based modeling. Resource-constrained models are only suitable when the time frame is measured in decades, but they can give a rough indication of which production scenarios are reasonable given the size of the resource. The models are nevertheless comprehensible and transparent, and they are the only feasible long-term forecasting tools at present. It is certainly possible to distinguish between reasonable scenarios, based on historically observed parameter values, and unreasonable scenarios with parameter values obtained through flawed analogy. The economic subfield of optimal depletion theory is founded on the notion of rational economic agents, with a causal relation between decisions made at the micro level and the macro result. In terms of future improvements, however, the analytical form considerably restricts the versatility of the approach. Agent-based modeling makes it feasible to combine economically motivated agents with a physical environment. An example relating to oil exploration is given in Paper II, where it is shown that the exploratory activities of individual
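
    As a flavour of the resource-constrained approach discussed above, here is the classic Hubbert-style logistic production curve, in which annual output is constrained by an assumed ultimately recoverable resource (URR). The numbers are purely illustrative, not scenarios from the thesis.

```python
import math

def hubbert_production(t, urr, peak_year, steepness):
    """Annual production as the derivative of a logistic curve whose
    asymptote is the ultimately recoverable resource (URR); output peaks
    at peak_year, when half the URR has been extracted."""
    e = math.exp(-steepness * (t - peak_year))
    return urr * steepness * e / (1.0 + e) ** 2

years = list(range(1900, 2101))
prod = [hubbert_production(y, urr=2000.0, peak_year=2010, steepness=0.05)
        for y in years]
```

    Cumulative production over the whole window approaches the URR, which is exactly the resource constraint such models encode.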

  13. Value creation in the cloud: understanding business model factors affecting value of cloud computing

    OpenAIRE

    Morgan, Lorraine; Conboy, Kieran

    2013-01-01

    Despite the rapid emergence of cloud technology, its prevalence and accessibility to all types of organizations, and its potential to profoundly shift competitive landscapes by providing a new platform for creating and delivering business value, empirical research on the business value of cloud computing, and in particular on how service providers create value for their customers, is quite limited. Of what little research exists to date, most focuses on technical issu...

  14. Is ozone model bias driven by errors in cloud predictions? A quantitative assessment using satellite cloud retrievals in WRF-Chem

    Science.gov (United States)

    Ryu, Y. H.; Hodzic, A.; Barré, J.; Descombes, G.; Minnis, P.

    2017-12-01

    Clouds play a key role in radiation and hence in O3 photochemistry by modulating photolysis rates and light-dependent emissions of biogenic volatile organic compounds (BVOCs). It is not well known, however, how much of the bias in O3 predictions is caused by inaccurate cloud predictions. This study quantifies the errors in surface O3 predictions associated with clouds in summertime over the contiguous United States (CONUS) using the Weather Research and Forecasting with Chemistry (WRF-Chem) model. Cloud fields used for photochemistry are corrected based on satellite cloud retrievals in sensitivity simulations. It is found that the WRF-Chem model detects about 60% of clouds in the right locations and generally underpredicts cloud optical depths. The errors in hourly O3 due to the errors in cloud predictions can be up to 60 ppb. On average in summertime over CONUS, errors of 1-6 ppb in 8-h average O3 are found to be attributable to errors in cloud predictions under cloudy-sky conditions. The contribution of changes in photolysis rates due to clouds is found to be larger (roughly 80% on average) than that of light-dependent BVOC emissions. The effects of cloud corrections on O3 are about two times larger in VOC-limited than in NOx-limited regimes, suggesting that the benefits of accurate cloud predictions would be greater in VOC-limited regimes.

  15. The Cloud Feedback Model Intercomparison Project (CFMIP) contribution to CMIP6.

    Science.gov (United States)

    Webb, Mark J.; Andrews, Timothy; Bodas-Salcedo, Alejandro; Bony, Sandrine; Bretherton, Christopher S.; Chadwick, Robin; Chepfer, Helene; Douville, Herve; Good, Peter; Kay, Jennifer E.; et al.

    2017-01-01

    The primary objective of CFMIP is to inform future assessments of cloud feedbacks through improved understanding of cloud-climate feedback mechanisms and better evaluation of cloud processes and cloud feedbacks in climate models. However, the CFMIP approach is also increasingly being used to understand other aspects of climate change, and so a second objective has now been introduced, to improve understanding of circulation, regional-scale precipitation, and non-linear changes. CFMIP is supporting ongoing model inter-comparison activities by coordinating a hierarchy of targeted experiments for CMIP6, along with a set of cloud-related output diagnostics. CFMIP contributes primarily to addressing the CMIP6 questions 'How does the Earth system respond to forcing?' and 'What are the origins and consequences of systematic model biases?' and supports the activities of the WCRP Grand Challenge on Clouds, Circulation and Climate Sensitivity. A compact set of Tier 1 experiments is proposed for CMIP6 to address this question: (1) what are the physical mechanisms underlying the range of cloud feedbacks and cloud adjustments predicted by climate models, and which models have the most credible cloud feedbacks? Additional Tier 2 experiments are proposed to address the following questions. (2) Are cloud feedbacks consistent for climate cooling and warming, and if not, why? (3) How do cloud-radiative effects impact the structure, the strength and the variability of the general atmospheric circulation in present and future climates? (4) How do responses in the climate system due to changes in solar forcing differ from changes due to CO2, and is the response sensitive to the sign of the forcing? (5) To what extent is regional climate change per CO2 doubling state-dependent (non-linear), and why? (6) Are climate feedbacks during the 20th century different to those acting on long-term climate change and climate sensitivity? (7) How do regional climate responses (e.g. in precipitation…

  16. Feasibility Assessment of a Fine-Grained Access Control Model on Resource Constrained Sensors.

    Science.gov (United States)

    Uriarte Itzazelaia, Mikel; Astorga, Jasone; Jacob, Eduardo; Huarte, Maider; Romaña, Pedro

    2018-02-13

    Upcoming smart scenarios enabled by the Internet of Things (IoT) envision smart objects that provide services that can adapt to user behavior or be managed for greater productivity. In such environments, smart things are inexpensive and therefore resource-constrained devices. However, they are also critical components because of the importance of the information they provide. Given this, strong security is a requirement, but not all security mechanisms in general, and access control models in particular, are feasible on such devices. In this paper, we present the feasibility assessment of an access control model that utilizes a hybrid architecture and a policy language providing dynamic fine-grained policy enforcement in the sensors, supported by an efficient message exchange protocol called Hidra. This experimental performance assessment includes a prototype implementation, a performance evaluation model, the measurements, and related discussions, which demonstrate the feasibility and adequacy of the analyzed access control model.

  17. Constraining spatial variations of the fine-structure constant in symmetron models

    Directory of Open Access Journals (Sweden)

    A.M.M. Pinho

    2017-06-01

    We introduce a methodology to test models with spatial variations of the fine-structure constant α, based on the calculation of the angular power spectrum of these measurements. This methodology enables comparisons of observations and theoretical models through their predictions for the statistics of the α variation. Here we apply it to the case of symmetron models. We find no indication of deviations from the standard behavior, with current data providing an upper limit on the strength of the symmetron coupling to gravity (log β² < −0.9) when this is the only free parameter, and unable to constrain the model when the symmetry-breaking scale factor a_SSB is also free to vary.

  18. A multilayer model to simulate rocket exhaust clouds

    Directory of Open Access Journals (Sweden)

    Davidson Martins Moreira

    2011-01-01

    This paper presents the MSDEF model (Modelo Simulador da Dispersão de Efluentes de Foguetes, in Portuguese), which solves the time-dependent advection-diffusion equation by applying the Laplace transform, treating the Atmospheric Boundary Layer as a multilayer system. This solution describes the time evolution of the concentration field emitted from a source during a release of duration t_r, and it takes into account deposition velocity, first-order chemical reaction, gravitational settling, precipitation scavenging, and plume rise. The solution is suitable for describing critical events involving accidental releases of toxic, flammable, or explosive substances. A qualitative evaluation of the model for simulating rocket exhaust clouds is shown.
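
    For intuition, the single-layer, constant-coefficient limit of such a dispersion model has a closed-form Gaussian solution (instantaneous point release, ground reflection via an image source). This is a much-simplified stand-in for the multilayer Laplace-transform solution: it ignores deposition, chemistry, settling, scavenging, and plume rise, and all symbols are illustrative.

```python
import math

def puff_concentration(x, z, t, q, u, kx, kz, h):
    """Concentration at (x, z) and time t from an instantaneous point
    release of mass q at height h, advected by a uniform wind u and
    diffusing with constant eddy diffusivities Kx, Kz; the ground
    reflects material via an image source at height -h."""
    if t <= 0.0:
        return 0.0
    sx = math.sqrt(2.0 * kx * t)
    sz = math.sqrt(2.0 * kz * t)
    gx = math.exp(-(x - u * t) ** 2 / (2.0 * sx ** 2)) / (sx * math.sqrt(2.0 * math.pi))
    gz = (math.exp(-(z - h) ** 2 / (2.0 * sz ** 2))
          + math.exp(-(z + h) ** 2 / (2.0 * sz ** 2))) / (sz * math.sqrt(2.0 * math.pi))
    return q * gx * gz
```

    The concentration maximum travels downwind with the puff centre at x = u*t while the plume spreads as sqrt(t).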

  19. Using cloud models of heartbeats as the entity identifier to secure mobile devices.

    Science.gov (United States)

    Fu, Donglai; Liu, Yanhua

    2017-01-01

    Mobile devices are extensively used to store private and often sensitive information, so it is important to protect them against unauthorised access. Authentication ensures that only authorised users can use mobile devices. However, traditional authentication methods, such as numerical or graphical passwords, are vulnerable to passive attacks; for example, an adversary can steal a password by snooping from a short distance. To avoid these problems, this study presents a biometric approach that uses cloud models of heartbeats as the entity identifier to secure mobile devices. Note that the terms 'cloud model' and 'cloud' here have nothing to do with cloud computing: the cloud model used in this study is the cognitive model. In the proposed method, heartbeats are collected by two ECG electrodes connected to the mobile device. The backward normal cloud generator is used to generate ECG standard cloud models characterising the heartbeat template. When a user tries to access their mobile device, cloud models regenerated from fresh heartbeats are compared with the ECG standard cloud models to determine whether the current user may use the device. The authentication method was evaluated in three respects: accuracy, authentication time, and energy consumption. The proposed method gives a true acceptance rate of 86.04% with a false acceptance rate of 2.73%. One authentication can be completed in about 6 s, and this processing consumes about 2000 mW of power.
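
    The backward normal cloud generator mentioned above admits a compact moment-based form: Ex from the sample mean, En from the first absolute central moment, He from the leftover variance. The sketch below uses the common certainty-degree-free estimator on synthetic data; variable names are illustrative, and the ECG feature extraction itself is not shown.

```python
import math
import random

def backward_cloud(samples):
    """Backward normal cloud generator: recover the numerical
    characteristics (Ex, En, He) from observed samples, e.g. ECG-derived
    features, so fresh heartbeats can be compared with a stored template."""
    n = len(samples)
    ex = sum(samples) / n
    # the first absolute central moment estimates the entropy En
    en = math.sqrt(math.pi / 2.0) * sum(abs(x - ex) for x in samples) / n
    s2 = sum((x - ex) ** 2 for x in samples) / (n - 1)
    # hyper-entropy He: whatever sample variance En^2 does not explain
    he = math.sqrt(max(s2 - en ** 2, 0.0))
    return ex, en, he

rng = random.Random(7)
samples = [rng.gauss(5.0, 1.0) for _ in range(2000)]
ex, en, he = backward_cloud(samples)
```

    For purely Gaussian data (He = 0), the recovered He stays near zero; heavier second-order variability in the heartbeat features shows up as a larger He.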

  20. Modeling the thermodynamics and kinetics of sulfuric acid-dimethylamine-water nanoparticle growth in the CLOUD chamber

    CERN Document Server

    Ahlm, L; Schobesberger, S; Praplan, A P; Kim, J; Tikkanen, O -P; Lawler, M J; Smith, J N; Tröstl, J; Acosta Navarro, J C; Baltensperger, U; Bianchi, F; Donahue, N M; Duplissy, J; Franchin, A; Jokinen, T; Keskinen, H; Kirkby, J; Kürten, A; Laaksonen, A; Lehtipalo, K; Petäjä, T; Riccobono, F; Rissanen, M P; Rondo, L; Schallhart, S; Simon, M; Winkler, P M; Worsnop, D R; Virtanen, A; Riipinen, I

    2016-01-01

    Dimethylamine (DMA) has a stabilizing effect on sulfuric acid (SA) clusters, and the SA and DMA molecules and clusters likely play important roles in both aerosol particle formation and growth in the atmosphere. We use the monodisperse particle growth model for acid-base chemistry in nanoparticle growth (MABNAG) together with direct and indirect observations from the CLOUD4 and CLOUD7 experiments in the Cosmics Leaving OUtdoor Droplets (CLOUD) chamber at CERN to investigate the size and composition evolution of freshly formed particles consisting of SA, DMA, and water as they grow to 20 nm in dry diameter. Hygroscopic growth factors are measured using a nano-hygroscopicity tandem differential mobility analyzer (nano-HTDMA), which, combined with simulations of particle water uptake using the thermodynamic extended aerosol inorganics model (E-AIM), constrain the chemical composition. MABNAG predicts a particle-phase ratio between DMA and SA molecules of 1.1-1.3 for a 2 nm particle and DMA gas-phase mixing ratio...

  1. Constrained parameterisation of photosynthetic capacity causes significant increase of modelled tropical vegetation surface temperature

    Science.gov (United States)

    Kattge, J.; Knorr, W.; Raddatz, T.; Wirth, C.

    2009-04-01

    Photosynthetic capacity is one of the most sensitive parameters of terrestrial biosphere models, and its representation in global-scale simulations has been severely hampered by the lack of systematic analyses using a sufficiently broad database. Due to its coupling to stomatal conductance, changes in the parameterisation of photosynthetic capacity may influence transpiration rates and vegetation surface temperature. Here, we provide a constrained parameterisation of photosynthetic capacity for different plant functional types (PFTs) in the context of the photosynthesis model proposed by Farquhar et al. (1980), based on a comprehensive compilation of leaf photosynthesis rates and leaf nitrogen contents. Mean values of photosynthetic capacity were implemented into the coupled climate-vegetation model ECHAM5/JSBACH, and modelled gross primary production (GPP) is compared to a compilation of independent observations at stand scale. Compared with the current standard parameterisation, the root-mean-squared difference between modelled and observed GPP is substantially reduced for almost all PFTs by the new parameterisation of photosynthetic capacity. We find a systematic depression of nitrogen use efficiency (photosynthetic capacity divided by leaf nitrogen content) on certain tropical soils that are known to be deficient in phosphorus. The photosynthetic capacity of tropical trees derived in this study is substantially lower than the standard estimates currently used in terrestrial biosphere models. This decreases modelled GPP while significantly increasing modelled tropical vegetation surface temperatures, by up to 0.8°C. These results emphasise the importance of a constrained parameterisation of photosynthetic capacity not only for the carbon cycle, but also for the climate system.
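
    The Farquhar et al. (1980) model referred to above computes net assimilation as the minimum of a Rubisco-limited and an RuBP-regeneration-limited rate. The sketch below uses illustrative default kinetic constants, not the constrained PFT parameter values derived in the paper.

```python
def farquhar_net_assimilation(ci, vcmax, jmax, rd=1.0,
                              kc=405.0, ko=278.0, o=210.0, gamma_star=42.75):
    """Leaf net CO2 assimilation (umol m-2 s-1) as min(Ac, Aj) - Rd.

    ci, kc, gamma_star in umol mol-1; ko and o in mmol mol-1; vcmax and
    jmax set the Rubisco-limited and electron-transport-limited rates.
    Values are textbook-style defaults for illustration only."""
    # Rubisco (carboxylation) limited rate
    ac = vcmax * (ci - gamma_star) / (ci + kc * (1.0 + o / ko))
    # RuBP-regeneration (light) limited rate
    aj = (jmax / 4.0) * (ci - gamma_star) / (ci + 2.0 * gamma_star)
    return min(ac, aj) - rd

a = farquhar_net_assimilation(ci=245.0, vcmax=60.0, jmax=120.0)
```

    Because of the min(), raising Vcmax beyond the light-limited regime no longer increases assimilation, which is why the choice of Vcmax per PFT matters so much in the model.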

  2. Modelling operations and security of cloud systems using Z-notation and Chinese Wall security policy

    Science.gov (United States)

    Basu, Srijita; Sengupta, Anirban; Mazumdar, Chandan

    2016-11-01

    Enterprises are increasingly using cloud computing to host their applications. The availability of fast Internet connections and cheap bandwidth is leading greater numbers of people to use cloud-based services. This has the advantages of lower cost and minimal maintenance. However, ensuring the security of user data and proper management of cloud infrastructure remain major areas of concern. Existing techniques are either too complex or fail to properly represent the actual cloud scenario. This article presents a formal cloud model using the constructs of Z-notation. Principles of the Chinese Wall security policy have been applied to design secure cloud-specific operations. The proposed methodology will enable users to safely host their services, as well as process sensitive data, on the cloud.
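
    The core Chinese Wall rule that the article encodes in Z-notation reduces to a simple check: access to a dataset is denied if the subject's history already contains a competing dataset from the same conflict-of-interest class. A minimal sketch with hypothetical class and dataset names:

```python
def chinese_wall_allowed(history, conflict_classes, requested):
    """Chinese Wall policy check: a subject may access `requested` only if
    it has not previously accessed a *different* dataset belonging to the
    same conflict-of-interest class."""
    for cls in conflict_classes:
        if requested in cls:
            for seen in history:
                if seen in cls and seen != requested:
                    return False  # competing dataset already accessed
    return True

# hypothetical conflict-of-interest classes (competing banks, competing oil firms)
classes = [{"bankA", "bankB"}, {"oilX", "oilY"}]
```

    Re-accessing an already-held dataset is always allowed, while crossing to its competitor is not, which is the "wall" the policy erects.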

  3. Potential transformation of trace species including aircraft exhaust in a cloud environment. The 'Chedrom' model

    Energy Technology Data Exchange (ETDEWEB)

    Ozolin, Y.E.; Karol, I.L. [Main Geophysical Observatory, St. Petersburg (Russian Federation); Ramaroson, R. [Office National d'Etudes et de Recherches Aerospatiales (ONERA), 92 - Chatillon (France)

    1997-12-31

    A box model for coupled gaseous and aqueous phases is used for a sensitivity study of the potential transformation of trace gases in a cloud environment. The rate of this transformation decreases with decreasing pH in the droplets, with decreasing photodissociation rates inside the cloud, and with increasing droplet size. Model calculations show the potential formation of H{sub 2}O{sub 2} in the aqueous phase and the transformation of gaseous HNO{sub 3} into NO{sub x} in a cloud. The model is applied to explore the evolution of aircraft exhaust in a plume inside a cloud. (author) 10 refs.

  4. Potential transformation of trace species including aircraft exhaust in a cloud environment. The 'Chedrom' model

    Energy Technology Data Exchange (ETDEWEB)

    Ozolin, Y.E.; Karol, I.L. [Main Geophysical Observatory, St. Petersburg (Russian Federation); Ramaroson, R. [Office National d'Etudes et de Recherches Aerospatiales (ONERA), 92 - Chatillon (France)

    1998-12-31

    A box model for coupled gaseous and aqueous phases is used for a sensitivity study of the potential transformation of trace gases in a cloud environment. The rate of this transformation decreases with decreasing pH in the droplets, with decreasing photodissociation rates inside the cloud, and with increasing droplet size. Model calculations show the potential formation of H{sub 2}O{sub 2} in the aqueous phase and the transformation of gaseous HNO{sub 3} into NO{sub x} in a cloud. The model is applied to explore the evolution of aircraft exhaust in a plume inside a cloud. (author) 10 refs.

  5. Improving Climate Projections by Understanding How Cloud Phase affects Radiation

    Science.gov (United States)

    Cesana, Gregory; Storelvmo, Trude

    2017-01-01

    Whether a cloud is predominantly water or ice strongly influences interactions between clouds and radiation coming down from the Sun or up from the Earth. Being able to simulate cloud phase transitions accurately in climate models based on observational data sets is critical in order to improve confidence in climate projections, because this uncertainty contributes greatly to the overall uncertainty associated with cloud-climate feedbacks. Ultimately, it translates into uncertainties in Earth's sensitivity to higher CO2 levels. While a lot of effort has recently been made toward constraining cloud phase in climate models, more remains to be done to document the radiative properties of clouds according to their phase. Here we discuss the added value of a new satellite data set that advances the field by providing estimates of the cloud radiative effect as a function of cloud phase and the implications for climate projections.

  6. An energy balance model exploration of the impacts of interactions between surface albedo, cloud cover and water vapor on polar amplification

    Science.gov (United States)

    Södergren, A. Helena; McDonald, Adrian J.; Bodeker, Gregory E.

    2017-11-01

    We examine the effects of non-linear interactions between surface albedo, water vapor, and cloud cover (referred to as climate variables) on the amplified warming of the polar regions, using a new energy balance model. Our simulations show that the sum of the contributions to surface temperature change from each variable considered in isolation is smaller than the temperature change from coupled feedback simulations. This non-linearity is strongest when all three climate variables are allowed to interact. Surface albedo appears to be the strongest driver of this non-linear behavior, followed by water vapor and clouds. This is because increases in longwave radiation absorbed by the surface, related to increases in water vapor and clouds, and increases in surface-absorbed shortwave radiation caused by a decrease in surface albedo, amplify each other. Furthermore, our results corroborate previous findings that while increases in cloud cover and water vapor, along with the greenhouse effect itself, warm the polar regions, water vapor also significantly warms equatorial regions, which reduces polar amplification. Changes in surface albedo drive large changes in the absorption of incoming shortwave radiation, thereby enhancing surface warming. Unlike at high latitudes, surface albedo changes at low latitudes are more constrained. Interactions between surface albedo, water vapor, and clouds drive larger temperature increases in the polar regions than at low latitudes, despite the fact that, in response to a forcing, cloud cover increases at high latitudes and decreases at low latitudes, and that water vapor significantly enhances warming at low latitudes.
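
    A zero-dimensional energy balance model makes the non-additivity described above easy to reproduce: because equilibrium temperature scales as ((1 - albedo)/emissivity)^(1/4), perturbing albedo and effective emissivity together warms slightly more than the sum of the separate perturbations. This toy model and its numbers are illustrative, not the study's multi-variable EBM.

```python
SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant, W m-2 K-4

def equilibrium_temperature(s0=1361.0, albedo=0.30, emissivity=0.61):
    """Zero-dimensional balance: S0/4 * (1 - albedo) = emissivity * sigma * T^4.

    `emissivity` is an effective longwave emissivity lumping together the
    greenhouse effect of water vapor and clouds."""
    return ((s0 / 4.0) * (1.0 - albedo) / (emissivity * SIGMA)) ** 0.25

t0 = equilibrium_temperature()
d_albedo = equilibrium_temperature(albedo=0.28) - t0            # surface darkens
d_emis = equilibrium_temperature(emissivity=0.59) - t0          # stronger greenhouse
d_both = equilibrium_temperature(albedo=0.28, emissivity=0.59) - t0
```

    Even in this minimal setting, d_both exceeds d_albedo + d_emis, the same qualitative behavior the coupled feedback simulations exhibit.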

  7. Cloud-turbulence interactions: Sensitivity of a general circulation model to closure assumptions

    International Nuclear Information System (INIS)

    Brinkop, S.; Roeckner, E.

    1993-01-01

    Several approaches to parameterizing the turbulent transport of momentum, heat, water vapour, and cloud water for use in a general circulation model (GCM) have been tested in one-dimensional and three-dimensional model simulations. The schemes differ with respect to their closure assumptions (conventional eddy diffusivity model versus turbulent kinetic energy closure) and their treatment of cloud-turbulence interactions. The basic properties of these parameterizations are first discussed in column simulations of a stratocumulus-topped atmospheric boundary layer (ABL) under a strong subsidence inversion during the KONTROL experiment in the North Sea. It is found that the K-models tend to decouple the cloud layer from the adjacent layers because the turbulent activity is calculated from local variables. The higher-order scheme performs better in this respect because internally generated turbulence can be transported up and down through the action of turbulent diffusion. Thus, the TKE scheme provides not only a better link between the cloud and the sub-cloud layer but also between the cloud and the inversion as a result of cloud-top entrainment. In the stratocumulus case study, where the cloud is confined by a pronounced subsidence inversion, increased entrainment favours cloud dilution through enhanced evaporation of cloud droplets. In the GCM study, however, additional cloud-top entrainment supports cloud formation, because indirect cloud-generating processes are promoted through efficient ventilation of the ABL, such as the enhanced moisture supply by surface evaporation and the increased depth of the ABL. As a result, tropical convection is more vigorous, the hydrological cycle is intensified, the whole troposphere becomes warmer and moister in general, and the cloudiness in the upper part of the ABL is increased. (orig.)

  8. Toward GEOS-6, A Global Cloud System Resolving Atmospheric Model

    Science.gov (United States)

    Putman, William M.

    2010-01-01

    NASA is committed to observing and understanding the weather and climate of our home planet through the use of multi-scale modeling systems and space-based observations. Global climate models have evolved to take advantage of the influx of multi- and many-core computing technologies and the availability of large clusters of multi-core microprocessors. GEOS-6 is a next-generation cloud-system-resolving atmospheric model that will place NASA at the forefront of scientific exploration of our atmosphere and climate. Model simulations with GEOS-6 will produce a realistic representation of our atmosphere at the scale of typical satellite observations, bringing a visual comprehension of model results to a new level among climate enthusiasts. In preparation for GEOS-6, the agency's flagship Earth System Modeling Framework has been enhanced to support cutting-edge high-resolution global climate and weather simulations. Improvements include a cubed-sphere grid that exposes parallelism, a non-hydrostatic finite-volume dynamical core, and algorithms designed for co-processor technologies, among others. GEOS-6 represents a fundamental advancement in the capability of global Earth system models. The ability to directly compare global simulations with the resolution of spaceborne satellite images will lead to algorithm improvements and better utilization of space-based observations within the GEOS data assimilation system.

  9. An Equilibrium Chance-Constrained Multiobjective Programming Model with Birandom Parameters and Its Application to Inventory Problem

    Directory of Open Access Journals (Sweden)

    Zhimiao Tao

    2013-01-01

    An equilibrium chance-constrained multiobjective programming model with birandom parameters is proposed. A linear variant of the model is converted into its crisp equivalent model. A birandom simulation technique is then developed to tackle general birandom objective functions and birandom constraints. By embedding the birandom simulation technique, a modified genetic algorithm is designed to solve the equilibrium chance-constrained multiobjective programming model. We apply the proposed model and algorithm to a real-world inventory problem and show the effectiveness of the model and the solution method.

  10. A supply function model for representing the strategic bidding of the producers in constrained electricity markets

    International Nuclear Information System (INIS)

    Bompard, Ettore; Napoli, Roberto; Lu, Wene; Jiang, Xiuchen

    2010-01-01

    The modeling of the bidding behaviour of producers is a key point in the modeling and simulation of competitive electricity markets. In our paper, the linear supply function model is applied to find the Supply Function Equilibrium (SFE) analytically. A new and efficient approach is also proposed to find SFEs for network-constrained electricity markets by searching for the best slope of the supply function while changing the intercept; the method can be applied to large systems. The proposed approach is applied to the IEEE 118-bus test system, and a comparison between bidding by slope and bidding by intercept is presented with reference to the test system. (author)
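    For intuition, a linear supply function bid q_i = (p - a_i)/b_i clears an inelastic demand D at the price p = (D + Σ a_i/b_i)/(Σ 1/b_i). The sketch below implements only this market-clearing step, not the equilibrium search described in the paper; the bid values are hypothetical.

    ```python
    def clearing_price(intercepts, slopes, demand):
        """Clear an inelastic demand against linear supply bids
        q_i = (p - a_i) / b_i  (a_i: price intercept, b_i: slope)."""
        s_inv = sum(1.0 / b for b in slopes)
        s_ab = sum(a / b for a, b in zip(intercepts, slopes))
        return (demand + s_ab) / s_inv

    # Two identical producers bidding a = 10, b = 2 to meet D = 30:
    p = clearing_price([10.0, 10.0], [2.0, 2.0], 30.0)
    print(p)  # 40.0: each produces (40 - 10) / 2 = 15
    ```

    An SFE search would iterate this clearing step while each producer adjusts its own bid parameters to maximize profit given the others' bids.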

  11. Chance-constrained programming models for capital budgeting with NPV as fuzzy parameters

    Science.gov (United States)

    Huang, Xiaoxia

    2007-01-01

    In an uncertain economic environment, experts' knowledge about the outlays and cash inflows of available projects is characterized by vagueness rather than randomness. Investment outlays and annual net cash flows of a project are usually predicted using experts' knowledge, and fuzzy variables can overcome the difficulties in predicting these parameters. In this paper, the capital budgeting problem with fuzzy investment outlays and fuzzy annual net cash flows is studied based on credibility measure. The net present value (NPV) method is employed, and two fuzzy chance-constrained programming models for the capital budgeting problem are provided. A fuzzy simulation-based genetic algorithm is provided for solving the proposed models. Two numerical examples are also presented to illustrate the modelling idea and the effectiveness of the proposed algorithm.
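    The chance constraint here requires that the NPV meet a target with at least a given confidence. As a rough, hedged sketch — substituting triangular random numbers for the paper's fuzzy variables and probability for credibility measure — the simulation below estimates the chance that a project's NPV is non-negative; all figures are invented.

    ```python
    import random

    random.seed(1)

    def npv(outlay, cash_flows, rate=0.1):
        """Net present value of an initial outlay and annual cash flows."""
        return -outlay + sum(cf / (1 + rate) ** (t + 1)
                             for t, cf in enumerate(cash_flows))

    def chance_npv_positive(n=20000):
        """Estimate Pr(NPV >= 0) when the outlay and five annual cash flows
        are uncertain (triangular numbers stand in for fuzzy variables)."""
        hits = 0
        for _ in range(n):
            outlay = random.triangular(90, 110, 100)
            flows = [random.triangular(25, 40, 32) for _ in range(5)]
            if npv(outlay, flows) >= 0:
                hits += 1
        return hits / n

    print(round(chance_npv_positive(), 2))
    ```

    In the paper, a credibility-based fuzzy simulation replaces this Monte Carlo loop inside a genetic algorithm that selects the project portfolio.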

  12. A Hybrid Method for the Modelling and Optimisation of Constrained Search Problems

    Directory of Open Access Journals (Sweden)

    Sitek Pawel

    2014-08-01

    Full Text Available The paper presents a concept and the outline of the implementation of a hybrid approach to modelling and solving constrained problems. Two environments of mathematical programming (in particular, integer programming) and declarative programming (in particular, constraint logic programming) were integrated. The strengths of integer programming and constraint logic programming, in which constraints are treated in different ways and different methods are implemented, were combined to use the strengths of both. The hybrid method is not worse than either of its components used independently. The proposed approach is particularly important for decision models with an objective function and many discrete decision variables added up in multiple constraints. To validate the proposed approach, two illustrative examples are presented and solved. The first example is the authors’ original model of cost optimisation in the supply chain with multimodal transportation. The second one is the two-echelon variant of the well-known capacitated vehicle routing problem.

  13. The Balance-of-Payments-Constrained Growth Model and the Limits to Export-Led Growth

    Directory of Open Access Journals (Sweden)

    Robert A. Blecker

    2000-12-01

    Full Text Available This paper discusses how A. P. Thirlwall's model of balance-of-payments-constrained growth can be adapted to analyze the idea of a "fallacy of composition" in the export-led growth strategy of many developing countries. The Deaton-Muellbauer model of the Almost Ideal Demand System (AIDS) is used to represent the adding-up constraints on individual countries' exports when they are all trying to export competing products to the same foreign markets (i.e., newly industrializing countries exporting similar types of manufactured goods to the OECD countries). The relevance of the model to the recent financial crises in developing countries and policy alternatives for redirecting development strategies are also discussed.

  14. Efficient non-negative constrained model-based inversion in optoacoustic tomography

    International Nuclear Information System (INIS)

    Ding, Lu; Luís Deán-Ben, X; Lutzweiler, Christian; Razansky, Daniel; Ntziachristos, Vasilis

    2015-01-01

    The inversion accuracy in optoacoustic tomography depends on a number of parameters, including the number of detectors employed, discrete sampling issues, and the imperfectness of the forward model. These parameters result in ambiguities in the reconstructed image. A common ambiguity is the appearance of negative values, which have no physical meaning since optical absorption can only be greater than or equal to zero. We investigate herein algorithms that impose non-negative constraints in model-based optoacoustic inversion. Several state-of-the-art non-negative constrained algorithms are analyzed. Furthermore, an algorithm based on the conjugate gradient method is introduced in this work. We are particularly interested in investigating whether positivity restrictions lead to accurate solutions or drive the appearance of errors and artifacts. It is shown that the computational performance of non-negative constrained inversion is higher for the introduced algorithm than for the other algorithms, while yielding equivalent results. The experimental performance of this inversion procedure is then tested in phantoms and small animals, showing an improvement in image quality and quantitativeness with respect to the unconstrained approach. The study performed validates the use of non-negative constraints for improving image accuracy compared to unconstrained methods, while maintaining computational efficiency. (paper)
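    The essence of the constraint is to solve the least-squares inversion min ||Ax - b||² subject to x ≥ 0. The toy sketch below uses projected gradient descent on a 2×2 system — not the conjugate-gradient algorithm introduced in the paper — to show how the projection clips a component that an unconstrained solve would make negative; the matrix and data are invented.

    ```python
    def nnls_projected_gradient(A, b, iters=5000, lr=0.01):
        """Minimize ||A x - b||^2 subject to x >= 0 by projected gradient
        descent (a simple stand-in for the paper's constrained solvers)."""
        m, n = len(A), len(A[0])
        x = [0.0] * n
        for _ in range(iters):
            r = [sum(A[i][j] * x[j] for j in range(n)) - b[i] for i in range(m)]
            g = [2 * sum(A[i][j] * r[i] for i in range(m)) for j in range(n)]
            # gradient step followed by projection onto the nonnegative orthant
            x = [max(0.0, xj - lr * gj) for xj, gj in zip(x, g)]
        return x

    # The unconstrained solution of this system has x[1] = -0.5;
    # the projection clips it to zero, as physical absorption requires.
    A = [[1.0, 1.0], [1.0, 2.0]]
    b = [1.0, 0.5]
    x = nnls_projected_gradient(A, b)
    print([round(v, 3) for v in x])
    ```

    The constrained minimizer is (0.75, 0), whereas the unconstrained one is (1.5, -0.5) — exactly the kind of negative-value artifact the paper's constraints suppress.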

  15. Improving Mixed-phase Cloud Parameterization in Climate Model with the ACRF Measurements

    Energy Technology Data Exchange (ETDEWEB)

    Wang, Zhien [Univ. of Wyoming, Laramie, WY (United States)

    2016-12-13

    Mixed-phase cloud microphysical and dynamical processes are still poorly understood, and their representation in GCMs is a major source of uncertainty in overall cloud feedback. Thus, improving mixed-phase cloud parameterizations in climate models is critical to reducing climate forecast uncertainties. This study aims at providing improved knowledge of mixed-phase cloud properties from the long-term ACRF observations and improving mixed-phase cloud simulations in the NCAR Community Atmosphere Model version 5 (CAM5). The key accomplishments are: 1) An improved retrieval algorithm was developed to provide liquid droplet concentration for drizzling or mixed-phase stratiform clouds. 2) A new ice concentration retrieval algorithm for stratiform mixed-phase clouds was developed. 3) A strong seasonal aerosol impact on ice generation in Arctic mixed-phase clouds was identified, which is mainly attributed to the high dust occurrence during the spring season. 4) A suite of multi-sensor algorithms was applied to long-term ARM observations at the Barrow site to provide a complete dataset (LWC and effective radius profiles for the liquid phase, and IWC, Dge profiles and ice concentration for the ice phase) to characterize Arctic stratiform mixed-phase clouds. This multi-year dataset provides the information necessary to study related processes, evaluate model stratiform mixed-phase cloud simulations, and improve model stratiform mixed-phase cloud parameterizations. 5) A new in situ data analysis method was developed to quantify liquid mass partition in convective mixed-phase clouds. For the first time, we reliably compared liquid mass partitions in stratiform and convective mixed-phase clouds; because of the different dynamics and the much higher ice concentrations in convective mixed-phase clouds, their temperature dependencies of liquid mass partition differ significantly. 6) Systematic evaluations

  16. Forecasting Lightning Threat using Cloud-resolving Model Simulations

    Science.gov (United States)

    McCaul, E. W., Jr.; Goodman, S. J.; LaCasse, K. M.; Cecil, D. J.

    2009-01-01

    As numerical forecasts capable of resolving individual convective clouds become more common, it is of interest to see if quantitative forecasts of lightning flash rate density are possible, based on fields computed by the numerical model. Previous observational research has shown robust relationships between observed lightning flash rates and inferred updraft and large precipitation ice fields in the mixed-phase regions of storms, suggesting that these relationships might allow simulated fields to serve as proxies for lightning flash rate density. It is shown in this paper that two simple proxy fields do indeed provide reasonable and cost-effective bases for creating time-evolving maps of predicted lightning flash rate density, judging from a series of diverse simulation case study events in North Alabama for which Lightning Mapping Array data provide ground truth. One method is based on the product of upward velocity and the mixing ratio of precipitating ice hydrometeors, modeled as graupel only, in the mixed-phase region of storms at the -15 °C level, while the second method is based on the vertically integrated amounts of ice hydrometeors in each model grid column. Each method can be calibrated by comparing domainwide statistics of the peak values of simulated flash rate proxy fields against domainwide peak total lightning flash rate density data from observations. Tests show that the first method is able to capture much of the temporal variability of the lightning threat, while the second method does a better job of depicting the areal coverage of the threat. A blended solution is designed to retain most of the temporal sensitivity of the first method, while adding the improved spatial coverage of the second. Weather Research and Forecasting Model simulations of selected North Alabama cases show that this model can distinguish the general character and intensity of most convective events, and that the proposed methods show promise as a means of generating
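    The two proxy fields reduce to simple column arithmetic. Below is a minimal sketch of both, computed per grid column from hypothetical values; it illustrates only the form of the proxies, not the calibration against Lightning Mapping Array data.

    ```python
    def flux_proxy(w, q_graupel):
        """Proxy 1: product of updraft speed (m/s) and graupel mixing
        ratio (kg/kg) at the -15 C level of each grid column."""
        return [wi * qi for wi, qi in zip(w, q_graupel)]

    def vii_proxy(ice_profiles, rho, dz):
        """Proxy 2: vertically integrated ice (kg/m^2) in each column,
        assuming for simplicity a constant air density rho (kg/m^3)
        and layer depth dz (m)."""
        return [sum(q * rho * dz for q in col) for col in ice_profiles]

    w = [5.0, 12.0, 20.0]                # updraft at -15 C, per column (made up)
    qg = [1e-4, 8e-4, 2e-3]              # graupel mixing ratio, per column
    print(flux_proxy(w, qg))             # peaks in the strongest column

    cols = [[1e-4, 2e-4], [5e-4, 1e-3]]  # ice mixing ratio profiles
    print(vii_proxy(cols, rho=0.7, dz=500.0))
    ```

    In the paper, each proxy is then linearly calibrated so its domainwide peak matches the observed peak total flash rate density.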

  17. ARM Cloud Radar Simulator Package for Global Climate Models Value-Added Product

    Energy Technology Data Exchange (ETDEWEB)

    Zhang, Yuying [North Carolina State Univ., Raleigh, NC (United States); Xie, Shaocheng [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2017-05-01

    It has been challenging to directly compare U.S. Department of Energy (DOE) Atmospheric Radiation Measurement (ARM) Climate Research Facility ground-based cloud radar measurements with climate model output because of limitations or features of the observing processes and the spatial gap between model grid cells and the single-point measurements. To facilitate the use of ARM radar data in numerical models, an ARM cloud radar simulator was developed to convert model data into pseudo-ARM cloud radar observations that mimic the instrument view of a narrow atmospheric column (as compared to a large global climate model [GCM] grid cell), thus allowing meaningful comparison between model output and ARM cloud observations. The ARM cloud radar simulator value-added product (VAP) was developed based on the CloudSat simulator contained in the community satellite simulator package, the Cloud Feedback Model Intercomparison Project (CFMIP) Observation Simulator Package (COSP) (Bodas-Salcedo et al., 2011), which has been widely used in climate model evaluation with satellite data (Klein et al., 2013; Zhang et al., 2010). The essential part of the CloudSat simulator is the QuickBeam radar simulator, which is used to produce CloudSat-like radar reflectivity but is capable of simulating reflectivity for other radars (Marchand et al., 2009; Haynes et al., 2007). Adapting QuickBeam to the ARM cloud radar simulator within COSP required two primary changes: one was to set the frequency to 35 GHz for the ARM Ka-band cloud radar, as opposed to the 94 GHz used for the CloudSat W-band radar, and the second was to invert the view from the ground to space so as to attenuate the beam correctly. In addition, the ARM cloud radar simulator uses a finer vertical resolution (100 m compared to 500 m for CloudSat) to resolve the more detailed structure of clouds captured by the ARM radars. The ARM simulator has been developed following the COSP workflow (Figure 1) and using the capabilities available in COSP

  18. Model simulations of aerosol effects on clouds and precipitation in comparison with ARM data

    Energy Technology Data Exchange (ETDEWEB)

    Penner, Joyce E. [Univ. of Michigan, Ann Arbor, MI (United States); Zhou, Cheng [Univ. of Michigan, Ann Arbor, MI (United States)

    2017-01-12

    Observation-based studies have shown that the aerosol cloud lifetime effect or the increase of cloud liquid water path (LWP) with increased aerosol loading may have been overestimated in climate models. Here, we simulate shallow warm clouds on 05/27/2011 at the Southern Great Plains (SGP) measurement site established by Department of Energy's Atmospheric Radiation Measurement (ARM) Program using a single column version of a global climate model (Community Atmosphere Model or CAM) and a cloud resolving model (CRM). The LWP simulated by CAM increases substantially with aerosol loading while that in the CRM does not. The increase of LWP in CAM is caused by a large decrease of the autoconversion rate when cloud droplet number increases. In the CRM, the autoconversion rate is also reduced, but this is offset or even outweighed by the increased evaporation of cloud droplets near cloud top, resulting in an overall decrease in LWP. Our results suggest that climate models need to include the dependence of cloud top growth and the evaporation/condensation process on cloud droplet number concentrations.
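    The mechanism hinges on the autoconversion rate falling as droplet number rises. The sketch below evaluates one widely used bulk scheme, the Khairoutdinov-Kogan (2000) autoconversion rate, purely to illustrate that dependence; it is not the formulation used in CAM or the CRM of this study, and the input values are invented.

    ```python
    def autoconversion_kk(qc, nc):
        """Khairoutdinov-Kogan (2000) bulk autoconversion rate (s^-1):
        1350 * qc**2.47 * nc**-1.79, with cloud water qc in kg/kg and
        droplet number nc in cm^-3. Illustrative only."""
        return 1350.0 * qc ** 2.47 * nc ** -1.79

    clean = autoconversion_kk(5e-4, 50.0)     # pristine droplet number
    polluted = autoconversion_kk(5e-4, 500.0) # aerosol-enhanced droplet number
    print(polluted < clean)  # True: more droplets -> slower rain formation
    ```

    With the exponent of -1.79, a tenfold increase in droplet number cuts the rate by roughly a factor of 60 — the suppression that, in CAM, drives the LWP increase the CRM's cloud-top evaporation offsets.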

  19. A Dynamic Model for Load Balancing in Cloud Infrastructure

    Directory of Open Access Journals (Sweden)

    Jitendra Bhagwandas Bhatia

    2015-08-01

    Full Text Available For anyone building a private, public, or hybrid IaaS cloud infrastructure, load balancing of virtual hosts on a limited number of physical nodes becomes a crucial aspect. This paper analyses various challenges faced in optimizing computing resource utilization via load balancing and presents a platform-independent model for load balancing which targets high availability of resources, low SLA (Service Level Agreement) violations, and power savings. To achieve this, incoming requests are monitored for sudden bursts, a prediction model is employed to maintain high availability, and a power-aware algorithm is applied for choosing a suitable physical node for a virtual host. The proposed dynamic load balancing model provides a way to reconcile the conflicting goals of saving power and maintaining high resource availability.
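    One common power-aware placement heuristic — shown here as a hedged sketch, since the paper does not specify its algorithm — is best-fit: pack each virtual host onto the powered-on node with the least spare capacity that still fits, so lightly used nodes can drain and power down. The node data and field names are invented.

    ```python
    def choose_node(nodes, vm_cpu, vm_mem):
        """Pick the powered-on node with the least spare CPU that still fits
        the VM (best-fit packing lets idle nodes be powered down)."""
        fits = [n for n in nodes
                if n["on"] and n["cpu_free"] >= vm_cpu and n["mem_free"] >= vm_mem]
        if not fits:
            return None  # would trigger waking a sleeping node instead
        return min(fits, key=lambda n: n["cpu_free"])

    nodes = [
        {"name": "n1", "on": True,  "cpu_free": 8,  "mem_free": 32},
        {"name": "n2", "on": True,  "cpu_free": 2,  "mem_free": 16},
        {"name": "n3", "on": False, "cpu_free": 16, "mem_free": 64},
    ]
    print(choose_node(nodes, vm_cpu=2, vm_mem=8)["name"])  # n2 (tightest fit)
    ```

    The trade-off the abstract describes appears in the `None` branch: refusing to wake n3 saves power but risks SLA violations under a sudden burst, which is why a burst-prediction model runs alongside the placement rule.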

  20. Aerosol activation and cloud processing in the global aerosol-climate model ECHAM5-HAM

    Directory of Open Access Journals (Sweden)

    G. J. Roelofs

    2006-01-01

    Full Text Available A parameterization for cloud processing is presented that calculates activation of aerosol particles to cloud drops, cloud drop size, and pH-dependent aqueous phase sulfur chemistry. The parameterization is implemented in the global aerosol-climate model ECHAM5-HAM. The cloud processing parameterization uses updraft speed, temperature, and aerosol size and chemical parameters simulated by ECHAM5-HAM to estimate the maximum supersaturation at the cloud base, and subsequently the cloud drop number concentration (CDNC) due to activation. In-cloud sulfate production occurs through oxidation of dissolved SO2 by ozone and hydrogen peroxide. The model simulates realistic distributions for annually averaged CDNC although it is underestimated especially in remote marine regions. On average, CDNC is dominated by cloud droplets growing on particles from the accumulation mode, with smaller contributions from the Aitken and coarse modes. The simulations indicate that in-cloud sulfate production is a potentially important source of accumulation mode sized cloud condensation nuclei, due to chemical growth of activated Aitken particles and to enhanced coalescence of processed particles. The strength of this source depends on the distribution of produced sulfate over the activated modes. This distribution is affected by uncertainties in many parameters that play a direct role in particle activation, such as the updraft velocity, the aerosol chemical composition and the organic solubility, and the simulated CDNC is found to be relatively sensitive to these uncertainties.
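    The link from maximum supersaturation to CDNC is often summarized by the classical Twomey power law, N = C·s^k. The sketch below uses that relation with invented coefficients purely for intuition — it is not the mechanistic activation scheme of ECHAM5-HAM, which resolves aerosol size and composition explicitly.

    ```python
    def activated_cdnc(s_max, c=200.0, k=0.5):
        """Twomey power-law activation: CDNC (cm^-3) = C * s_max**k,
        with maximum supersaturation s_max in percent. The coefficients
        C and k are illustrative, not those of ECHAM5-HAM."""
        return c * s_max ** k

    # A stronger updraft raises s_max at cloud base and activates more drops:
    print(activated_cdnc(0.2))   # weak updraft, lower s_max
    print(activated_cdnc(0.8))   # strong updraft -> higher CDNC
    ```

    The abstract's sensitivity findings map onto this picture: updraft velocity sets s_max, while composition and solubility effectively change C and k.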

  1. Ten Years of Cloud Properties from MODIS: Global Statistics and Use in Climate Model Evaluation

    Science.gov (United States)

    Platnick, Steven E.

    2011-01-01

    The NASA Moderate Resolution Imaging Spectroradiometer (MODIS), launched onboard the Terra and Aqua spacecraft, began Earth observations on February 24, 2000 and June 24, 2002, respectively. Among the algorithms developed and applied to this sensor, a suite of cloud products includes cloud masking/detection, cloud-top properties (temperature, pressure), and optical properties (optical thickness, effective particle radius, water path, and thermodynamic phase). All cloud algorithms underwent numerous changes and enhancements for the latest Collection 5 production version; this process continues with the current Collection 6 development. We will show example MODIS Collection 5 cloud climatologies derived from global spatial and temporal aggregations provided in the archived gridded Level-3 MODIS atmosphere team product (product names MOD08 and MYD08 for MODIS Terra and Aqua, respectively). Data sets in this Level-3 product include scalar statistics as well as 1- and 2-D histograms of many cloud properties, allowing for higher-order information and correlation studies. In addition to these statistics, we will show trends and statistical significance in annual and seasonal means for a variety of the MODIS cloud properties, as well as the time required for detection given assumed trends. To assist in climate model evaluation, we have developed a MODIS cloud simulator with an accompanying netCDF file containing subsetted monthly Level-3 statistical data sets that correspond to the simulator output. Correlations of cloud properties with ENSO offer the potential to evaluate model cloud sensitivity; initial results will be discussed.

  2. RACLOUDS - Model for Clouds Risk Analysis in the Information Assets Context

    Directory of Open Access Journals (Sweden)

    SILVA, P. F.

    2016-06-01

    Full Text Available Cloud computing offers benefits in terms of availability and cost, but transfers the responsibility for information security management to the cloud service provider. Thus the consumer loses control over the security of their information and services. This factor has prevented the migration to cloud computing in many businesses. This paper proposes a model where the cloud consumer can perform risk analysis on providers both before and after contracting the service. The proposed model establishes the responsibilities of three actors: Consumer, Provider, and Security Labs. The inclusion of the Security Labs actor lends more credibility to the risk analysis, making the results more consistent for the consumer.

  3. Baryon magnetic moments in the quark model and pion cloud contributions

    International Nuclear Information System (INIS)

    Sato, Toshiro; Sawada, Shoji

    1981-01-01

    The baryon magnetic moments are studied, paying attention to the effects of the pion cloud surrounding the 'bare' baryon, whose magnetic moment is given by the quark model with broken SU(6) symmetry. The precisely measured nucleon magnetic moments are reproduced by pion cloud contributions from distances larger than 1.4 fm. The effects of the pion cloud on the hyperon magnetic moments are also discussed. It is shown that the pion cloud contributions largely reduce the discrepancies between the quark model predictions and the recent accurate experimental data on the hyperon magnetic moments. (author)

  4. Experimental and Modeling Studies of Interactions of Marine Aerosols and Clouds

    National Research Council Canada - National Science Library

    Kreidenweis, Sonia

    1995-01-01

    The specific objectives of the modeling component are to develop models of the marine boundary layer, including models that predict cloud formation and evolution and the effects of such processes on the marine aerosol (and vice versa...

  5. Blueprint model and language for engineering cloud applications

    OpenAIRE

    Nguyen, D.K.

    2013-01-01

    The research presented in this thesis is positioned within the domain of engineering CSBAs. Its contribution is twofold: (1) a uniform specification language, called the Blueprint Specification Language (BSL), for specifying cloud services across several cloud vendors, and (2) a set of associated techniques, called the Blueprint Manipulation Techniques (BMTs), for publishing, querying, and composing cloud service specifications with the aim of supporting the flexible design and configuration of a CSBA.

  6. The virtual machine (VM) scaler: an infrastructure manager supporting environmental modeling on IaaS clouds

    Science.gov (United States)

    Infrastructure-as-a-service (IaaS) clouds provide a new medium for deployment of environmental modeling applications. Harnessing advancements in virtualization, IaaS clouds can provide dynamic scalable infrastructure to better support scientific modeling computational demands. Providing scientific m...

  7. A business model for a South African government public cloud platform

    CSIR Research Space (South Africa)

    Mvelase, P

    2014-05-01

    Full Text Available of public services is conducted. This paper designs a cloud business model that suits South Africa’s perspective. The idea is to model a government public cloud which does not interfere with the secured business functions of the government but find a...

  8. The Kimball Free-Cloud Model: A Failed Innovation in Chemical Education?

    Science.gov (United States)

    Jensen, William B.

    2014-01-01

    This historical review traces the origins of the Kimball free-cloud model of the chemical bond, otherwise known as the charge-cloud or tangent-sphere model, and the central role it played in attempts to reform the introductory chemical curriculum at both the high school and college levels in the 1960s. It also critically evaluates the limitations…

  9. The Radiative Properties of Small Clouds: Multi-Scale Observations and Modeling

    Energy Technology Data Exchange (ETDEWEB)

    Feingold, Graham [NOAA ESRL; McComiskey, Allison [CIRES, University of Colorado

    2013-09-25

    Warm, liquid clouds and their representation in climate models continue to represent one of the most significant unknowns in climate sensitivity and climate change. Our project combines ARM observations, LES modeling, and satellite imagery to characterize shallow clouds and the role of aerosol in modifying their radiative effects.

  10. Single-Column Model Simulations of Subtropical Marine Boundary-Layer Cloud Transitions Under Weakening Inversions

    NARCIS (Netherlands)

    Neggers, R.A.J.; Ackerman, Andrew S.; Angevine, W. M.; Bazile, Eric; Beau, I.; Blossey, P. N.; Boutle, I. A.; de Bruijn, C.; Cheng, A.; van der Dussen, J.J.; Fletcher, J.; Dal Gesso, S.; Jam, A.; Kawai, H.; Cheedela, S. K.; Larson, V. E.; Lefebvre, Marie Pierre; Lock, A. P.; Meyer, N. R.; de Roode, S.R.; de Rooy, W.C.; Sandu, I.; Xiao, H.; Xu, K. M.

    2017-01-01

    Results are presented of the GASS/EUCLIPSE single-column model intercomparison study on the subtropical marine low-level cloud transition. A central goal is to establish the performance of state-of-the-art boundary-layer schemes for weather and climate models for this cloud regime, using

  11. Final Report for 'Modeling Electron Cloud Diagnostics for High-Intensity Proton Accelerators'

    International Nuclear Information System (INIS)

    Veitzer, Seth A.

    2009-01-01

    Electron clouds in accelerators such as the ILC degrade beam quality and limit operating efficiency. The need to mitigate electron clouds has a direct impact on the design and operation of these accelerators, translating into increased cost and reduced performance. Diagnostic techniques for measuring electron clouds in accelerating cavities are needed to provide an assessment of electron cloud evolution and mitigation. Accurate numerical modeling of these diagnostics is needed to validate the experimental techniques. In this Phase I, we developed detailed numerical models of microwave propagation through electron clouds in accelerating cavities with geometries relevant to existing and future high-intensity proton accelerators such as Project X and the ILC. Our numerical techniques and simulation results from the Phase I showed that there was a high probability of success in measuring both the evolution of electron clouds and the effects of non-uniform electron density distributions in Phase II.

  12. The Importance of Business Model Factors for Cloud Computing Adoption: Role of Previous Experiences

    Directory of Open Access Journals (Sweden)

    Bogataj Habjan Kristina

    2017-08-01

    Full Text Available Background and Purpose: Bringing several opportunities for more effective and efficient IT governance and service exploitation, cloud computing is expected to impact the European and global economies significantly. Market data show that despite many advantages and promised benefits the adoption of cloud computing is not as fast and widespread as foreseen. This situation shows the need for further exploration of the potentials of cloud computing and its implementation on the market. The purpose of this research was to identify individual business model factors with the highest impact on cloud computing adoption. In addition, the aim was to identify the differences in opinion regarding the importance of business model factors on cloud computing adoption according to companies’ previous experiences with cloud computing services.

  13. Forecasting Lightning Threat using Cloud-Resolving Model Simulations

    Science.gov (United States)

    McCaul, Eugene W., Jr.; Goodman, Steven J.; LaCasse, Katherine M.; Cecil, Daniel J.

    2008-01-01

    Two new approaches are proposed and developed for making time- and space-dependent, quantitative short-term forecasts of lightning threat, and a blend of these approaches is devised that capitalizes on the strengths of each. The new methods are distinctive in that they are based entirely on the ice-phase hydrometeor fields generated by regional cloud-resolving numerical simulations, such as those produced by the WRF model. These methods are justified by established observational evidence linking aspects of the precipitating ice hydrometeor fields to total flash rates. The methods are straightforward and easy to implement, and offer an effective near-term alternative to the incorporation of complex and costly cloud electrification schemes into numerical models. One method is based on upward fluxes of precipitating ice hydrometeors in the mixed-phase region at the -15 °C level, while the second method is based on the vertically integrated amounts of ice hydrometeors in each model grid column. Each method can be calibrated by comparing domain-wide statistics of the peak values of simulated flash rate proxy fields against domain-wide peak total lightning flash rate density data from observations. Tests show that the first method is able to capture much of the temporal variability of the lightning threat, while the second method does a better job of depicting the areal coverage of the threat. Our blended solution is designed to retain most of the temporal sensitivity of the first method, while adding the improved spatial coverage of the second. Exploratory tests for selected North Alabama cases show that, because WRF can distinguish the general character of most convective events, our methods show promise as a means of generating quantitatively realistic fields of lightning threat. However, because the models tend to have more difficulty in predicting the instantaneous placement of storms, forecasts of the detailed location of the lightning threat based on single

  14. Above the cloud computing orbital services distributed data model

    Science.gov (United States)

    Straub, Jeremy

    2014-05-01

    Technology miniaturization and system architecture advancements have created an opportunity to significantly lower the cost of many types of space missions by sharing capabilities between multiple spacecraft. Historically, most spacecraft have been atomic entities that (aside from their communications with and tasking by ground controllers) operate in isolation. Several notable examples exist; however, these are purpose-designed systems that collaborate to perform a single goal. The above-the-cloud computing (ATCC) concept aims to create ad-hoc collaboration between service provider and consumer craft. Consumer craft can procure processing, data transmission, storage, imaging, and other capabilities from provider craft. Because of onboard storage limitations, communications link capability limitations, and limited windows of communication, data relevant to or required for various operations may span multiple craft. This paper presents a model for the identification, storage, and accessing of this data. This model includes appropriate identification features for this highly distributed environment. It also deals with business model constraints such as data ownership, retention, and the rights of the storing craft to access, resell, transmit, or discard the data in its possession. The model ensures data integrity and confidentiality (to the extent applicable to a given data item), deals with unique constraints of the orbital environment, and tags data with business model (contractual) obligation data.

  15. The simplest model of a dust cloud in a plasma

    International Nuclear Information System (INIS)

    Ignatov, A.M.

    1998-01-01

    A cloud consisting of a finite number of dust grains in a plasma is considered. It is shown that the absorption of the plasma by the dust grains gives rise to the formation of a plasma flow toward the cloud. The drag force produced by this flow acts upon the dust grains and counterbalances the electrostatic repulsive force. The distribution of the grain density inside the cloud is determined. The characteristic size of the cloud is estimated as r_D^(3/2)/a^(1/2), where r_D is the plasma Debye radius and a is the size of the dust grains.
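    The scaling r_D^(3/2)/a^(1/2) is easy to evaluate numerically. The sketch below plugs in illustrative values (a 100 µm Debye radius and 1 µm grains, chosen for the example, not taken from the paper) to show the resulting cloud size.

    ```python
    import math

    def cloud_size(debye_radius, grain_size):
        """Characteristic cloud size r_D**(3/2) / a**(1/2) from the
        abstract's scaling; both lengths in the same units (here metres)."""
        return debye_radius ** 1.5 / math.sqrt(grain_size)

    # r_D = 100 um, a = 1 um gives a millimetre-scale cloud:
    print(cloud_size(1e-4, 1e-6))  # 1e-3 m
    ```

    Since the exponents sum to one, the result is a length, as a characteristic size must be, and it grows with r_D faster than linearly while shrinking for larger grains.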

  16. Structural model of the Northern Latium volcanic area constrained by MT, gravity and aeromagnetic data

    Directory of Open Access Journals (Sweden)

    P. Gasparini

    1997-06-01

    Full Text Available The results of about 120 magnetotelluric soundings carried out in the Vulsini, Vico and Sabatini volcanic areas were modeled along with Bouguer and aeromagnetic anomalies to reconstruct a model of the structure of the shallow (less than 5 km depth) crust. The interpretations were constrained by the information gathered from the deep boreholes drilled for geothermal exploration. MT and aeromagnetic anomalies allow the depth to the top of the sedimentary basement and the thickness of the volcanic layer to be inferred. Gravity anomalies are strongly affected by variations in the morphology of the top of the sedimentary basement, consisting of a Tertiary flysch, and of the interface with the underlying Mesozoic carbonates. Gravity data have also been used to extrapolate the thickness of the Neogene unit indicated by some boreholes. There is no evidence for other important density and susceptibility heterogeneities or deeper sources of magnetic and/or gravity anomalies in the surveyed area.

  17. Constraining models of f(R) gravity with Planck and WiggleZ power spectrum data

    Science.gov (United States)

    Dossett, Jason; Hu, Bin; Parkinson, David

    2014-03-01

    In order to explain cosmic acceleration without invoking "dark" physics, we consider f(R) modified gravity models, which replace the standard Einstein-Hilbert action in General Relativity with a higher derivative theory. We use data from the WiggleZ Dark Energy survey to probe the formation of structure on large scales which can place tight constraints on these models. We combine the large-scale structure data with measurements of the cosmic microwave background from the Planck surveyor. After parameterizing the modification of the action using the Compton wavelength parameter B0, we constrain this parameter using ISiTGR, assuming an initial non-informative log prior probability distribution of this cross-over scale. We find that the addition of the WiggleZ power spectrum provides the tightest constraints to date on B0, improving on previous bounds by an order of magnitude.

  18. Stock management in hospital pharmacy using chance-constrained model predictive control.

    Science.gov (United States)

    Jurado, I; Maestre, J M; Velarde, P; Ocampo-Martinez, C; Fernández, I; Tejera, B Isla; Prado, J R Del

    2016-05-01

    One of the most important problems in the pharmacy department of a hospital is stock management. The clinical need for drugs must be satisfied with a limited workforce while minimizing the use of economic resources. The complexity of the problem resides in the random nature of drug demand and the multiple constraints that must be taken into account in every decision. In this article, chance-constrained model predictive control is proposed to deal with this problem. The flexibility of model predictive control allows the different objectives and constraints involved in the problem to be taken into account explicitly, while the use of chance constraints provides a trade-off between conservativeness and efficiency. The proposed solution is assessed to study its implementation in two Spanish hospitals. Copyright © 2015 Elsevier Ltd. All rights reserved.
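The key reduction behind chance-constrained control can be illustrated in a few lines. This is a hedged, single-period sketch (not the paper's multi-period MPC formulation): a chance constraint P(demand ≤ stock) ≥ 1 − ε with Gaussian demand reduces to a deterministic quantile condition. The demand statistics below are assumed values:

```python
from statistics import NormalDist

# Single-period chance constraint P(demand <= stock) >= 1 - eps with
# Gaussian demand ~ N(mu, sigma^2) reduces to the deterministic condition
#   stock >= mu + z_{1-eps} * sigma.
# mu, sigma, and eps are illustrative assumptions, not hospital data.
mu, sigma = 100.0, 15.0   # assumed daily demand mean and std (units of drug)
eps = 0.05                # allowed stock-out probability

z = NormalDist().inv_cdf(1 - eps)   # standard normal quantile, ~1.645
min_stock = mu + z * sigma
print(f"minimum stock level: {min_stock:.1f}")
```

The ε parameter is exactly the "trade-off between conservativeness and efficiency" mentioned in the abstract: a smaller ε pushes the quantile, and hence the required safety stock, upward.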

  19. Adaptively Constrained Stochastic Model Predictive Control for the Optimal Dispatch of Microgrid

    Directory of Open Access Journals (Sweden)

    Xiaogang Guo

    2018-01-01

    Full Text Available In this paper, an adaptively constrained stochastic model predictive control (MPC) is proposed to achieve less-conservative coordination between energy storage units and uncertain renewable energy sources (RESs) in a microgrid (MG). Besides the economic objective of MG operation, the limits on state-of-charge (SOC) and discharging/charging power of the energy storage unit are formulated as chance constraints when accommodating uncertainties of RESs, since mild violations of these constraints are allowed during long-term operation. A closed-loop online update strategy is performed to adaptively tighten or relax the constraints according to the deviation of the actual violation probability from the desired level, as well as the current rate of change of that deviation. Numerical studies show that the proposed adaptively constrained stochastic MPC for MG optimal operation is much less conservative than scenario-optimization-based robust MPC, and also converges better to the desired constraint violation level than other online update strategies.

  20. Estimation of convective entrainment properties from a cloud-resolving model simulation during TWP-ICE

    Science.gov (United States)

    Zhang, Guang J.; Wu, Xiaoqing; Zeng, Xiping; Mitovski, Toni

    2016-10-01

    The fractional entrainment rate in convective clouds is an important parameter in current convective parameterization schemes of climate models. In this paper, it is estimated using a 1-km-resolution cloud-resolving model (CRM) simulation of convective clouds from TWP-ICE (the Tropical Warm Pool-International Cloud Experiment). The clouds are divided into different types, characterized by cloud-top heights. The entrainment rates and the moist static energy that is entrained or detrained are determined by analyzing the budget of moist static energy for each cloud type. Results show that the entrained air is a mixture of approximately equal amounts of cloud air and environmental air, and the detrained air is a mixture of ~80 % cloud air and ~20 % air with saturation moist static energy at the environmental temperature. After taking into account the difference in moist static energy between the entrained air and the mean environment, the estimated fractional entrainment rate is much larger than those used in current convective parameterization schemes. A high-resolution (100 m) large-eddy simulation of TWP-ICE convection was also analyzed to support the CRM results. It is shown that the characteristics of entrainment rates estimated using both the high-resolution data and CRM-resolution coarse-grained data are similar. For each cloud category, the entrainment rate is high near cloud base and top, but low in the middle of clouds. The entrainment rates are best fitted to the inverse of the in-cloud vertical velocity by a second-order polynomial.
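The closing statement of the abstract, fitting the entrainment rate to the inverse of vertical velocity with a second-order polynomial, can be sketched as follows. The data points are synthetic assumptions for illustration, not TWP-ICE values:

```python
import numpy as np

# Fit the fractional entrainment rate eps to 1/w with a quadratic,
#   eps ~ c2*(1/w)^2 + c1*(1/w) + c0,
# as the abstract describes. The (w, eps) samples are synthetic.
w   = np.array([1.0, 2.0, 4.0, 6.0, 8.0])     # in-cloud vertical velocity [m/s]
eps = np.array([0.9, 0.5, 0.28, 0.2, 0.16])   # entrainment rate [1/km], assumed

coeffs  = np.polyfit(1.0 / w, eps, deg=2)     # [c2, c1, c0]
eps_fit = np.polyval(coeffs, 1.0 / w)
print("fit coefficients:", coeffs)
```

A fit of this kind gives a compact closure: a parameterization can evaluate the polynomial in 1/w instead of carrying a fixed entrainment constant.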

  1. Constraining the dark energy models with H (z ) data: An approach independent of H0

    Science.gov (United States)

    Anagnostopoulos, Fotios K.; Basilakos, Spyros

    2018-03-01

    We study the performance of the latest H(z) data in constraining the cosmological parameters of different cosmological models, including the Chevallier-Polarski-Linder w0-w1 parametrization. First, we introduce a statistical procedure in which the chi-square estimator is not affected by the value of the Hubble constant. As a result, we find that the H(z) data do not rule out the possibility of either nonflat models or dynamical dark energy cosmological models. However, we verify that the time-varying equation-of-state parameter w(z) is not constrained by the current expansion data. Combining the H(z) and the Type Ia supernova (SNIa) data, we find that the H(z)/SNIa overall statistical analysis provides a substantial improvement of the cosmological constraints with respect to those of the H(z) analysis alone. Moreover, the w0-w1 parameter space provided by the H(z)/SNIa joint analysis is in very good agreement with that of Planck 2015, which confirms that the present analysis with the H(z) and SNIa probes correctly reveals the expansion of the Universe as found by the Planck team. Finally, we generate sets of Monte Carlo realizations in order to quantify the ability of the H(z) data to provide strong constraints on the dark energy model parameters. The Monte Carlo approach shows significant improvement of the constraints when increasing the sample to 100 H(z) measurements. Such a goal can be achieved in the future, especially in light of the next generation of surveys.

  2. Provide a model to improve the performance of intrusion detection systems in the cloud

    OpenAIRE

    Foroogh Sedighi

    2016-01-01

    High availability of tools and service providers in cloud computing, and the fact that cloud computing services are provided over the internet and deal with the public, have caused important challenges for this new computing model. Cloud computing faces problems and challenges such as user privacy, data security, data ownership, availability of services, recovery after failure, performance, scalability, and programmability. So far, many different methods have been presented for detection of intrusion in clou...

  3. A novel cost based model for energy consumption in cloud computing.

    Science.gov (United States)

    Horri, A; Dastghaibyfard, Gh

    2015-01-01

    Cloud data centers consume enormous amounts of electrical energy. To support green cloud computing, providers also need to minimize cloud infrastructure energy consumption while maintaining QoS. In this study, an energy consumption model is proposed for the time-shared policy in the virtualization layer of cloud environments. The cost and energy usage of the time-shared policy were modeled in the CloudSim simulator based upon results obtained from a real system, and the proposed model was then evaluated in different scenarios. In the proposed model, cache interference costs were considered; these costs depend upon the size of the data. The proposed model was implemented in the CloudSim simulator, and the related simulation results indicate that the energy consumption may be considerable and that it can vary with different parameters such as the quantum parameter, data size, and the number of VMs on a host. Measured results validate the model and demonstrate that there is a tradeoff between energy consumption and QoS in the cloud environment.

  4. Clouds and the extratropical circulation response to global warming in a hierarchy of global atmosphere models

    Science.gov (United States)

    Voigt, A.

    2017-12-01

    Climate models project that global warming will lead to substantial changes in extratropical jet streams. Yet, many quantitative aspects of warming-induced jet stream changes remain uncertain, and recent work has indicated an important role of clouds and their radiative interactions. Here, I will investigate how cloud-radiative changes impact the zonal-mean extratropical circulation response under global warming using a hierarchy of global atmosphere models. I will first focus on aquaplanet setups with prescribed sea-surface temperatures (SSTs), which reproduce the model spread found in realistic simulations with interactive SSTs. Simulations with the two CMIP5 models MPI-ESM and IPSL-CM5A and prescribed clouds show that half of the circulation response can be attributed to cloud changes. The rise of tropical high-level clouds and the upward and poleward movement of midlatitude high-level clouds lead to poleward jet shifts. High-latitude low-level cloud changes shift the jet poleward in one model but not in the other. The impact of clouds on the jet operates via the atmospheric radiative forcing that is created by the cloud changes and is qualitatively reproduced in a dry Held-Suarez model, although the latter is too sensitive because of its simplified treatment of diabatic processes. I will then show that the aquaplanet results also hold when the models are used in a realistic setup that includes continents and seasonality. I will further juxtapose these prescribed-SST simulations with interactive-SST simulations and show that atmospheric and surface cloud-radiative interactions impact the poleward jet shift in about equal measure. Finally, I will discuss the cloud impact on regional and seasonal circulation changes.

  5. Global vertical mass transport by clouds - A two-dimensional model study

    International Nuclear Information System (INIS)

    Olofsson, Mats

    1988-05-01

    A two-dimensional global dispersion model, in which vertical transport in the troposphere by convective as well as frontal cloud systems is explicitly treated, is developed from an existing diffusion model. A parameterization scheme for the cloud transport, based on global cloud statistics, is presented. The model has been tested by using Kr-85, Rn-222 and SO2 as tracers. Comparisons have been made with observed distributions of these tracers, but also with model results without the cloud transport, using eddy diffusion as the primary means of vertical transport. The model results indicate that for trace species with a turnover time of days to weeks, the introduction of cloud transport gives much more realistic simulations of their vertical distribution. Layers of increased mixing ratio with height, which can be found in the real atmosphere, are reproduced in our cloud-transport model profiles, but can never be simulated with a pure eddy diffusion model. The horizontal transport in the model, by advection and eddy diffusion, gives a realistic distribution between the hemispheres of the more long-lived tracers (Kr-85). A combination of vertical transport by convective and frontal cloud systems is shown to improve the model simulations, compared to limiting it to convective transport only. The importance of including cumulus clouds in the convective transport scheme, in addition to the efficient transport by cumulonimbus clouds, is discussed. The model results are shown to be more sensitive to the vertical detrainment distribution profile than to the absolute magnitude of the vertical mass transport. The scavenging processes for SO2 are parameterized without the introduction of detailed chemistry. An enhanced removal, due to the increased contact with droplets in the in-cloud lifting process, is introduced in the model. (author)

  6. COLLISIONAL GROOMING MODELS OF THE KUIPER BELT DUST CLOUD

    International Nuclear Information System (INIS)

    Kuchner, Marc J.; Stark, Christopher C.

    2010-01-01

    We modeled the three-dimensional structure of the Kuiper Belt (KB) dust cloud at four different dust production rates, incorporating both planet-dust interactions and grain-grain collisions using the collisional grooming algorithm. Simulated images of a model with a face-on optical depth of ∼10^-4 primarily show an azimuthally symmetric ring at 40-47 AU in submillimeter and infrared wavelengths; this ring is associated with the cold classical KB. For models with lower optical depths (10^-6 and 10^-7), synthetic infrared images show that the ring widens and a gap opens in the ring at the location of Neptune; this feature is caused by trapping of dust grains in Neptune's mean motion resonances. At low optical depths, a secondary ring also appears associated with the hole cleared in the center of the disk by Saturn. Our simulations, which incorporate 25 different grain sizes, illustrate that grain-grain collisions are important in sculpting today's KB dust, and probably other aspects of the solar system dust complex; collisions erase all signs of azimuthal asymmetry from the submillimeter image of the disk at every dust level we considered. The model images switch from being dominated by resonantly trapped small grains ('transport dominated') to being dominated by the birth ring ('collision dominated') when the optical depth reaches a critical value of τ ∼ v/c, where v is the local Keplerian speed.
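The critical optical depth τ ∼ v/c quoted in the abstract is easy to evaluate at the ring's location. The sketch below uses standard physical constants and the 40-47 AU ring radius from the abstract; the choice of 45 AU as a representative radius is an assumption:

```python
import math

# Back-of-envelope evaluation of the critical optical depth tau ~ v/c,
# where v is the local Keplerian speed, at a representative ring radius.
GM_sun = 1.32712e20      # solar gravitational parameter [m^3/s^2]
AU     = 1.495979e11     # astronomical unit [m]
c      = 2.99792458e8    # speed of light [m/s]

r = 45.0 * AU                    # representative radius inside the 40-47 AU ring
v = math.sqrt(GM_sun / r)        # local Keplerian speed, ~4.4 km/s
tau_crit = v / c                 # ~1.5e-5
print(f"Keplerian speed: {v/1e3:.2f} km/s, critical tau ~ {tau_crit:.1e}")
```

This places the critical value between the ∼10^-4 and 10^-6 cases simulated in the paper, consistent with the transition in regime the authors describe.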

  7. Collisional Grooming Models of the Kuiper Belt Dust Cloud

    Science.gov (United States)

    Kuchner, Marc J.; Stark, Christopher C.

    2010-01-01

    We modeled the three-dimensional structure of the Kuiper Belt (KB) dust cloud at four different dust production rates, incorporating both planet-dust interactions and grain-grain collisions using the collisional grooming algorithm. Simulated images of a model with a face-on optical depth of approximately 10^-4 primarily show an azimuthally symmetric ring at 40-47 AU in submillimeter and infrared wavelengths; this ring is associated with the cold classical KB. For models with lower optical depths (10^-6 and 10^-7), synthetic infrared images show that the ring widens and a gap opens in the ring at the location of Neptune; this feature is caused by trapping of dust grains in Neptune's mean motion resonances. At low optical depths, a secondary ring also appears associated with the hole cleared in the center of the disk by Saturn. Our simulations, which incorporate 25 different grain sizes, illustrate that grain-grain collisions are important in sculpting today's KB dust, and probably other aspects of the solar system dust complex; collisions erase all signs of azimuthal asymmetry from the submillimeter image of the disk at every dust level we considered. The model images switch from being dominated by resonantly trapped small grains ("transport dominated") to being dominated by the birth ring ("collision dominated") when the optical depth reaches a critical value of approximately v/c, where v is the local Keplerian speed.

  8. Above the cloud computing: applying cloud computing principles to create an orbital services model

    Science.gov (United States)

    Straub, Jeremy; Mohammad, Atif; Berk, Josh; Nervold, Anders K.

    2013-05-01

    Large satellites and exquisite planetary missions are generally self-contained. They have, onboard, all of the computational, communications and other capabilities required to perform their designated functions. Because of this, the satellite or spacecraft carries hardware that may be utilized only a fraction of the time; however, the full cost of development and launch is still borne by the program. Small satellites do not have this luxury. Due to mass and volume constraints, they cannot afford to carry numerous pieces of barely utilized equipment or large antennas. This paper proposes a cloud-computing model for exposing satellite services in an orbital environment. Under this approach, each satellite with available capabilities broadcasts a service description for each service that it can provide (e.g., general computing capacity, DSP capabilities, specialized sensing capabilities, transmission capabilities, etc.) and its orbital elements. Consumer spacecraft retain a cache of service providers and select one utilizing decision making heuristics (e.g., suitability of performance, opportunity to transmit instructions and receive results - based on the orbits of the two craft). The two craft negotiate service provisioning (e.g., when the service can be available and for how long) based on the operating rules prioritizing use of (and allowing access to) the service on the service provider craft, based on the credentials of the consumer. Service description, negotiation and sample service performance protocols are presented. The required components of each consumer or provider spacecraft are reviewed.
These include fully autonomous control capabilities (for provider craft), a lightweight orbit determination routine (to determine when consumer and provider craft can see each other and, possibly, pointing requirements for craft with directional antennas) and an authentication and resource utilization priority-based access decision making subsystem (for provider craft).

  9. Blueprint model and language for engineering cloud applications

    NARCIS (Netherlands)

    Nguyen, D.K.

    2013-01-01

    The research presented in this thesis is positioned within the domain of engineering CSBAs. Its contribution is twofold: (1) a uniform specification language, called the Blueprint Specification Language (BSL), for specifying cloud services across several cloud vendors and (2) a set of associated

  10. User Utility Oriented Queuing Model for Resource Allocation in Cloud Environment

    Directory of Open Access Journals (Sweden)

    Zhe Zhang

    2015-01-01

    Full Text Available Resource allocation is one of the most important research topics in server systems. In the cloud environment, there are massive hardware resources of different kinds, and many kinds of services usually run on virtual machines of the cloud server. In addition, the cloud environment is commercialized, so economic factors should also be considered. In order to deal with the commercialization and virtualization of the cloud environment, we propose a user-utility-oriented queuing model for task scheduling. Firstly, we model task scheduling in the cloud environment as an M/M/1 queuing system. Secondly, we classify utility into time utility and cost utility and build a linear programming model to maximize the total utility of both. Finally, we propose a utility-oriented algorithm to maximize the total utility. Extensive experiments validate the effectiveness of the proposed model.
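The M/M/1 building block used in the abstract has closed-form steady-state metrics. The sketch below shows the standard M/M/1 results only, not the paper's utility-maximization model; the arrival and service rates are assumed values:

```python
# Standard M/M/1 steady-state metrics for the task-scheduling setting:
# Poisson arrivals at rate lam, exponential service at rate mu, lam < mu.
# lam and mu below are illustrative assumptions.
lam, mu = 8.0, 10.0          # assumed tasks/second

rho = lam / mu               # server utilization (must be < 1 for stability)
L_q = rho**2 / (1 - rho)     # mean number of tasks waiting in queue
W   = 1 / (mu - lam)         # mean time a task spends in the system
print(f"utilization={rho:.2f}, queue length={L_q:.2f}, time in system={W:.2f}s")
```

A time-utility term in the paper's linear program would then be a decreasing function of W, while the cost-utility term depends on the price of the provisioned capacity μ.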

  11. Representation of Arctic mixed-phase clouds and the Wegener-Bergeron-Findeisen process in climate models: Perspectives from a cloud-resolving study

    Science.gov (United States)

    Fan, Jiwen; Ghan, Steven; Ovchinnikov, Mikhail; Liu, Xiaohong; Rasch, Philip J.; Korolev, Alexei

    2011-01-01

    Two types of Arctic mixed-phase clouds observed during the ISDAC and M-PACE field campaigns are simulated using a 3-dimensional cloud-resolving model (CRM) with size-resolved cloud microphysics. The modeled cloud properties agree reasonably well with aircraft measurements and surface-based retrievals. Cloud properties such as the probability density function (PDF) of vertical velocity (w), cloud liquid and ice, regimes of cloud particle growth, including the Wegener-Bergeron-Findeisen (WBF) process, and the relationships among properties/processes in mixed-phase clouds are examined to gain insights for improving their representation in General Circulation Models (GCMs). The PDF of the simulated w is well represented by a Gaussian function, validating, at least for arctic clouds, the subgrid treatment used in GCMs. The PDFs of liquid and ice water contents can be approximated by Gamma functions, and a Gaussian function can describe the total water distribution, but a fixed variance assumption should be avoided in both cases. The CRM results support the assumption frequently used in GCMs that mixed phase clouds maintain water vapor near liquid saturation. Thus, ice continues to grow throughout the stratiform cloud but the WBF process occurs in about 50% of cloud volume where liquid and ice co-exist, predominantly in downdrafts. In updrafts, liquid and ice particles grow simultaneously. The relationship between the ice depositional growth rate and cloud ice strongly depends on the capacitance of ice particles. The simplified size-independent capacitance of ice particles used in GCMs could lead to large deviations in ice depositional growth.

  12. A study of cloud microphysics and precipitation over the Tibetan Plateau by radar observations and cloud-resolving model simulations: Cloud Microphysics over Tibetan Plateau

    Energy Technology Data Exchange (ETDEWEB)

    Gao, Wenhua [State Key Laboratory of Severe Weather, Chinese Academy of Meteorological Sciences, Beijing China; Pacific Northwest National Laboratory, Richland Washington USA; Sui, Chung-Hsiung [Department of Atmospheric Sciences, National Taiwan University, Taipei Taiwan; Fan, Jiwen [Pacific Northwest National Laboratory, Richland Washington USA; Hu, Zhiqun [State Key Laboratory of Severe Weather, Chinese Academy of Meteorological Sciences, Beijing China; Zhong, Lingzhi [State Key Laboratory of Severe Weather, Chinese Academy of Meteorological Sciences, Beijing China

    2016-11-27

    Cloud microphysical properties and precipitation over the Tibetan Plateau (TP) are unique because of the high terrain, clean atmosphere, and sufficient water vapor. With dual-polarization precipitation radar and cloud radar measurements during the Third Tibetan Plateau Atmospheric Scientific Experiment (TIPEX-III), the microphysics and precipitation simulated by the Weather Research and Forecasting model (WRF) with the Chinese Academy of Meteorological Sciences (CAMS) microphysics and other microphysical schemes are investigated through a typical plateau rainfall event on 22 July 2014. Results show that the WRF-CAMS simulation reasonably reproduces the spatial distribution of 24-h accumulated precipitation, but has limitations in simulating the time evolution of precipitation rates. The model-calculated polarimetric radar variables have biases as well, suggesting biases in the modeled hydrometeor types. The raindrop sizes in the convective region are larger than those in the stratiform region, as indicated by the smaller intercept of the raindrop size distribution in the former. The sensitivity experiments show that precipitation processes are sensitive to changes in the warm rain processes of condensation and nucleated droplet size (but less sensitive to the evaporation process). Increasing droplet condensation produces the best area-averaged rain rate during the weak convection period compared with the observation, suggesting a considerable bias in thermodynamics in the baseline simulation. Increasing the initial cloud droplet size reduces the rain rate by half, an effect opposite to that of increasing droplet condensation.

  13. Cloud vector mapping using MODIS 09 Climate Modeling Grid (CMG) for the year 2010 and 2011

    International Nuclear Information System (INIS)

    Jah, Asjad Asif; Farrukh, Yousaf Bin; Ali, Rao Muhammad Saeed

    2013-01-01

    An alternate use for MODIS images was sought by mapping cloud movement directions and dissipation times during the 2010 and 2011 floods. MODIS Level-02 daily CMG (Climate Modeling Grid) land-cover images were downloaded and subsequently rectified and clipped to the study area. These images were then put together to observe the direction of cloud movement and to vectorize the observed paths. Initial findings suggest that cloud cover usually does not persist over the northern humid region of the country and dissipates within less than 24 hours. Additionally, this led to the development of a robust methodology for cloud motion analysis using FOSS and market-leading GIS utilities

  14. A GLOBAL MAGNETIC TOPOLOGY MODEL FOR MAGNETIC CLOUDS. II

    Energy Technology Data Exchange (ETDEWEB)

    Hidalgo, M. A., E-mail: miguel.hidalgo@uah.es [Departamento de Fisica, Universidad de Alcala, Apartado 20, E-28871 Alcala de Henares, Madrid (Spain)

    2013-04-01

    In the present work, we extensively used our analytical approach to the global magnetic field topology of magnetic clouds (MCs), introduced in a previous paper, in order to show its potential and to study its physical consistency. The model assumes toroidal topology with a non-uniform (variable maximum radius) cross-section along the torus. Moreover, it has a non-force-free character and also includes the expansion of the cross-section. As is shown, the model allows us, first, to analyze MC magnetic structures, determining their physical parameters, with a variety of magnetic field shapes, and second, to reconstruct their relative orientation in the interplanetary medium from the observations obtained by several spacecraft. Therefore, multipoint spacecraft observations give the opportunity to infer the structure of this large-scale magnetic flux rope structure in the solar wind. For these tasks, we use data from Helios (A and B), STEREO (A and B), and the Advanced Composition Explorer. We show that the proposed analytical model can explain quite well the topology of several MCs in the interplanetary medium and is a good starting point for understanding the physical mechanisms underlying these phenomena.

  15. Constraining model parameters on remotely sensed evaporation: justification for distribution in ungauged basins?

    Directory of Open Access Journals (Sweden)

    H. C. Winsemius

    2008-12-01

    Full Text Available In this study, land surface related parameter distributions of a conceptual semi-distributed hydrological model are constrained by employing time series of satellite-based evaporation estimates during the dry season as explanatory information. The approach has been applied to the ungauged Luangwa river basin (150 000 km2) in Zambia. The information contained in these evaporation estimates imposes compliance of the model with the largest outgoing water balance term, evaporation, and a spatially and temporally realistic depletion of soil moisture within the dry season. The model results in turn provide a better understanding of the information density of remotely sensed evaporation. Model parameters to which evaporation is sensitive have been spatially distributed on the basis of dominant land cover characteristics. Consequently, their values were conditioned by means of Monte-Carlo sampling and evaluation on satellite evaporation estimates. The results show that behavioural parameter sets for model units with similar land cover are indeed clustered. The clustering reveals hydrologically meaningful signatures in the parameter response surface: wetland-dominated areas (also called dambos) show optimal parameter ranges that reflect vegetation with a relatively small unsaturated zone (due to the shallow rooting depth of the vegetation) which is easily moisture stressed. The forested areas and highlands show parameter ranges that indicate a much deeper root zone which is more drought resistant. Clustering was consequently used to formulate fuzzy membership functions that can be used to constrain parameter realizations in further calibration. Unrealistic parameter ranges, found for instance in the high unsaturated soil zone values in the highlands, may indicate either overestimation of satellite-based evaporation or model structural deficiencies. We believe that in these areas, groundwater uptake into the root zone and lateral movement of

  16. A Nonparametric Shape Prior Constrained Active Contour Model for Segmentation of Coronaries in CTA Images

    Directory of Open Access Journals (Sweden)

    Yin Wang

    2014-01-01

    Full Text Available We present a nonparametric shape constrained algorithm for segmentation of coronary arteries in computed tomography images within the framework of active contours. An adaptive scale selection scheme, based on the global histogram information of the image data, is employed to determine the appropriate window size for each point on the active contour, which improves the performance of the active contour model in low-contrast local image regions. The possible leakage, which cannot be identified by using intensity features alone, is reduced through the application of the proposed shape constraint, where the shape of the circularly sampled intensity profile is used to evaluate the likelihood that the current segmentation represents a vascular structure. Experiments on both synthetic and clinical datasets have demonstrated the efficiency and robustness of the proposed method. The results on clinical datasets have shown that the proposed approach is capable of extracting more detailed coronary vessels with subvoxel accuracy.

  17. Image denoising: Learning the noise model via nonsmooth PDE-constrained optimization

    KAUST Repository

    Reyes, Juan Carlos De los; Schönlieb, Carola-Bibiane

    2013-01-01

    We propose a nonsmooth PDE-constrained optimization approach for the determination of the correct noise model in total variation (TV) image denoising. An optimization problem for the determination of the weights corresponding to different types of noise distributions is stated and existence of an optimal solution is proved. A tailored regularization approach for the approximation of the optimal parameter values is proposed thereafter and its consistency studied. Additionally, the differentiability of the solution operator is proved and an optimality system characterizing the optimal solutions of each regularized problem is derived. The optimal parameter values are numerically computed by using a quasi-Newton method, together with semismooth Newton type algorithms for the solution of the TV-subproblems. © 2013 American Institute of Mathematical Sciences.

  18. A Modified FCM Classifier Constrained by Conditional Random Field Model for Remote Sensing Imagery

    Directory of Open Access Journals (Sweden)

    WANG Shaoyu

    2016-12-01

    Full Text Available Remote sensing imagery has abundant spatial correlation information, but traditional pixel-based clustering algorithms do not take this spatial information into account, so their results are often poor. To address this issue, a modified FCM classifier constrained by a conditional random field model is proposed. The prior classification information of adjacent pixels constrains the classification of the center pixel, thus extracting spatial correlation information. Spectral information and spatial correlation information are considered at the same time when clustering based on a second-order conditional random field. Moreover, the globally optimal inference of a pixel's classified posterior probability can be obtained using loopy belief propagation. The experiments show that the proposed algorithm can effectively maintain the shape features of objects, and its classification accuracy is higher than that of traditional algorithms.

  19. Image denoising: Learning the noise model via nonsmooth PDE-constrained optimization

    KAUST Repository

    Reyes, Juan Carlos De los

    2013-11-01

    We propose a nonsmooth PDE-constrained optimization approach for the determination of the correct noise model in total variation (TV) image denoising. An optimization problem for the determination of the weights corresponding to different types of noise distributions is stated and existence of an optimal solution is proved. A tailored regularization approach for the approximation of the optimal parameter values is proposed thereafter and its consistency studied. Additionally, the differentiability of the solution operator is proved and an optimality system characterizing the optimal solutions of each regularized problem is derived. The optimal parameter values are numerically computed by using a quasi-Newton method, together with semismooth Newton type algorithms for the solution of the TV-subproblems. © 2013 American Institute of Mathematical Sciences.
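
    The learning problem described above can be written, schematically, as a bilevel optimization (notation simplified here; the paper's precise nonsmooth formulation and its regularization differ):

    ```latex
    \min_{\lambda \ge 0} \; \frac{1}{2}\, \| u_\lambda - u_{\mathrm{true}} \|_{L^2}^2
    \qquad \text{s.t.} \qquad
    u_\lambda \in \operatorname*{arg\,min}_{u} \; \mathrm{TV}(u) + \sum_i \lambda_i \, \phi_i(u, f),
    ```

    where f is the noisy image, each \phi_i is the fidelity term of one candidate noise model (e.g. a quadratic term for Gaussian noise, an L^1 term for impulse noise), and the learned weights \lambda_i select the noise model. The tailored regularization and the semismooth Newton machinery are what make the nonsmooth lower-level problem tractable.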

  20. Input-constrained model predictive control via the alternating direction method of multipliers

    DEFF Research Database (Denmark)

    Sokoler, Leo Emil; Frison, Gianluca; Andersen, Martin S.

    2014-01-01

    This paper presents an algorithm, based on the alternating direction method of multipliers, for the convex optimal control problem arising in input-constrained model predictive control. We develop an efficient implementation of the algorithm for the extended linear quadratic control problem (LQCP) with input and input-rate limits. The algorithm alternates between solving an extended LQCP and a highly structured quadratic program. These quadratic programs are solved using a Riccati iteration procedure, and a structure-exploiting interior-point method, respectively. The computational cost per iteration is quadratic in the dimensions of the controlled system, and linear in the length of the prediction horizon. Simulations show that the approach proposed in this paper is more than an order of magnitude faster than several state-of-the-art quadratic programming algorithms, and that the difference in computation...
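
    The alternation the abstract describes can be sketched on a toy box-constrained QP (this is generic ADMM, not the paper's Riccati-based implementation; all numbers are illustrative):

    ```python
    import numpy as np

    def admm_box_qp(H, g, lb, ub, rho=1.0, iters=200):
        """Minimize 0.5 u'Hu + g'u subject to lb <= u <= ub via ADMM.
        The u-step is an unconstrained quadratic solve (the role the Riccati
        recursion plays for the LQCP in the paper); the z-step is a cheap
        projection onto the input box; w is the scaled dual variable."""
        n = len(g)
        u = np.zeros(n); z = np.zeros(n); w = np.zeros(n)
        M = H + rho * np.eye(n)
        for _ in range(iters):
            u = np.linalg.solve(M, rho * (z - w) - g)   # quadratic solve
            z = np.clip(u + w, lb, ub)                  # projection onto box
            w += u - z                                  # dual update
        return z

    H = np.array([[2.0, 0.5], [0.5, 1.0]])
    g = np.array([-2.0, -1.0])
    sol = admm_box_qp(H, g, lb=np.zeros(2), ub=np.full(2, 0.6))
    ```

    For this instance the unconstrained minimizer violates both upper bounds, and the iteration settles on the KKT point at the corner of the box.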

  1. An ensemble Kalman filter for statistical estimation of physics constrained nonlinear regression models

    International Nuclear Information System (INIS)

    Harlim, John; Mahdi, Adam; Majda, Andrew J.

    2014-01-01

    A central issue in contemporary science is the development of nonlinear data driven statistical–dynamical models for time series of noisy partial observations from nature or a complex model. It has been established recently that ad-hoc quadratic multi-level regression models can have finite-time blow-up of statistical solutions and/or pathological behavior of their invariant measure. Recently, a new class of physics constrained nonlinear regression models were developed to ameliorate this pathological behavior. Here a new finite ensemble Kalman filtering algorithm is developed for estimating the state, the linear and nonlinear model coefficients, the model and the observation noise covariances from available partial noisy observations of the state. Several stringent tests and applications of the method are developed here. In the most complex application, the perfect model has 57 degrees of freedom involving a zonal (east–west) jet, two topographic Rossby waves, and 54 nonlinearly interacting Rossby waves; the perfect model has significant non-Gaussian statistics in the zonal jet with blocked and unblocked regimes and a non-Gaussian skewed distribution due to interaction with the other 56 modes. We only observe the zonal jet contaminated by noise and apply the ensemble filter algorithm for estimation. Numerically, we find that a three dimensional nonlinear stochastic model with one level of memory mimics the statistical effect of the other 56 modes on the zonal jet in an accurate fashion, including the skew non-Gaussian distribution and autocorrelation decay. On the other hand, a similar stochastic model with zero memory levels fails to capture the crucial non-Gaussian behavior of the zonal jet from the perfect 57-mode model
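
    The core device, estimating model coefficients by augmenting the state vector with the unknown parameter, can be sketched on a scalar AR(1) model (a toy perturbed-observation EnKF, not the authors' algorithm; all settings are illustrative):

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    a_true, q, r = 0.9, 0.3, 0.1       # true coefficient, process/obs noise std
    T, N = 400, 100                    # time steps, ensemble size

    # synthetic truth and noisy observations of the state only
    x = np.zeros(T)
    for k in range(1, T):
        x[k] = a_true * x[k-1] + q * rng.standard_normal()
    y = x + r * rng.standard_normal(T)

    # augmented ensemble [state; parameter]: the parameter has no dynamics of
    # its own and is updated purely through its sample covariance with the
    # observed state (small jitter keeps the parameter ensemble from collapsing)
    ens = np.vstack([rng.standard_normal(N), 0.5 + 0.2 * rng.standard_normal(N)])
    for k in range(1, T):
        ens[0] = ens[1] * ens[0] + q * rng.standard_normal(N)  # forecast step
        ens[1] += 0.01 * rng.standard_normal(N)                # parameter jitter
        P = np.cov(ens)                                        # 2x2 covariance
        K = P[:, 0] / (P[0, 0] + r**2)                         # Kalman gain
        innov = y[k] + r * rng.standard_normal(N) - ens[0]     # perturbed obs
        ens += K[:, None] * innov                              # analysis step
    a_est = ens[1].mean()
    ```

    The filter recovers the dynamical coefficient from observations of the state alone, which is the same mechanism used at much larger scale for the 57-mode test problem.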

  2. Mechanisms and Model Diversity of Trade-Wind Shallow Cumulus Cloud Feedbacks: A Review

    Science.gov (United States)

    Vial, Jessica; Bony, Sandrine; Stevens, Bjorn; Vogel, Raphaela

    2017-11-01

    Shallow cumulus clouds in the trade-wind regions are at the heart of the long standing uncertainty in climate sensitivity estimates. In current climate models, cloud feedbacks are strongly influenced by cloud-base cloud amount in the trades. Therefore, understanding the key factors controlling cloudiness near cloud-base in shallow convective regimes has emerged as an important topic of investigation. We review physical understanding of these key controlling factors and discuss the value of the different approaches that have been developed so far, based on global and high-resolution model experimentations and process-oriented analyses across a range of models and for observations. The trade-wind cloud feedbacks appear to depend on two important aspects: (1) how cloudiness near cloud-base is controlled by the local interplay between turbulent, convective and radiative processes; (2) how these processes interact with their surrounding environment and are influenced by mesoscale organization. Our synthesis of studies that have explored these aspects suggests that the large diversity of model responses is related to fundamental differences in how the processes controlling trade cumulus operate in models, notably, whether they are parameterized or resolved. In models with parameterized convection, cloudiness near cloud-base is very sensitive to the vigor of convective mixing in response to changes in environmental conditions. This is in contrast with results from high-resolution models, which suggest that cloudiness near cloud-base is nearly invariant with warming and independent of large-scale environmental changes. Uncertainties are difficult to narrow using current observations, as the trade cumulus variability and its relation to large-scale environmental factors strongly depend on the time and/or spatial scales at which the mechanisms are evaluated. 
New opportunities for testing physical understanding of the factors controlling shallow cumulus cloud responses using

  4. Cloud-Resolving Modeling Intercomparison Study of a Squall Line Case from MC3E - Properties of Convective Core

    Science.gov (United States)

    Fan, J.; Han, B.; Varble, A.; Morrison, H.; North, K.; Kollias, P.; Chen, B.; Dong, X.; Giangrande, S. E.; Khain, A.; Lin, Y.; Mansell, E.; Milbrandt, J.; Stenz, R.; Thompson, G.; Wang, Y.

    2016-12-01

    The large spread in CRM model simulations of deep convection and aerosol effects on deep convective clouds (DCCs) makes it difficult to (1) further our understanding of deep convection and (2) define "benchmarks", which limits their use in parameterization development. A constrained model intercomparison study of a mid-latitude mesoscale squall line is performed using the Weather Research & Forecasting (WRF) model at 1-km horizontal grid spacing with eight cloud microphysics schemes to understand the specific processes that lead to the large spreads of simulated convection and precipitation. Various observational data are employed to evaluate the baseline simulations. All simulations tend to produce a wider convective area but a much narrower stratiform area. The magnitudes of the virtual potential temperature drop, pressure rise, and wind speed peak associated with the passage of the gust front are significantly smaller compared with the observations, suggesting simulated cold pools are weaker. Simulations generally overestimate the vertical velocity and radar reflectivity in convective cores compared with the retrievals. The modeled updraft velocity and precipitation have a significant spread across the eight schemes. The spread in updraft velocity reflects contributions from both the low-level pressure perturbation gradient (PPG) and buoyancy. Both PPG and thermal buoyancy are small for simulations of weak convection but large for those of strong convection. Ice-related parameterizations are the major contributor to the spread in updraft velocity, but they are not the reason for the large spread in precipitation. The understanding gained in this study can help focus future observations and parameterization development.

  5. Analytic Closed-Form Solution of a Mixed Layer Model for Stratocumulus Clouds

    Science.gov (United States)

    Akyurek, Bengu Ozge

    Stratocumulus clouds play an important role in climate cooling and are hard to predict using global climate and weather forecast models. Thus, previous studies in the literature use observations and numerical simulation tools, such as large-eddy simulation (LES), to solve the governing equations for the evolution of stratocumulus clouds. In contrast to the previous works, this work provides an analytic closed-form solution to the cloud thickness evolution of stratocumulus clouds in a mixed-layer model framework. With a focus on application over coastal lands, the diurnal cycle of cloud thickness and whether or not clouds dissipate are of particular interest. An analytic solution enables the sensitivity analysis of implicitly interdependent variables and extrema analysis of cloud variables that are hard to achieve using numerical solutions. In this work, the sensitivity of inversion height, cloud-base height, and cloud thickness with respect to initial and boundary conditions, such as Bowen ratio, subsidence, surface temperature, and initial inversion height, are studied. A critical initial cloud thickness value that can be dissipated pre- and post-sunrise is provided. Furthermore, an extrema analysis is provided to obtain the minima and maxima of the inversion height and cloud thickness within 24 h. The proposed solution is validated against LES results under the same initial and boundary conditions. Then, the proposed analytic framework is extended to incorporate multiple vertical columns that are coupled by advection through wind flow. This enables a bridge between the micro-scale and the mesoscale relations. The effect of advection on cloud evolution is studied and a sensitivity analysis is provided.

  6. DATA-CONSTRAINED CORONAL MASS EJECTIONS IN A GLOBAL MAGNETOHYDRODYNAMICS MODEL

    Energy Technology Data Exchange (ETDEWEB)

    Jin, M. [Lockheed Martin Solar and Astrophysics Lab, Palo Alto, CA 94304 (United States); Manchester, W. B.; Van der Holst, B.; Sokolov, I.; Tóth, G.; Gombosi, T. I. [Climate and Space Sciences and Engineering, University of Michigan, Ann Arbor, MI 48109 (United States); Mullinix, R. E.; Taktakishvili, A.; Chulaki, A., E-mail: jinmeng@lmsal.com, E-mail: chipm@umich.edu, E-mail: richard.e.mullinix@nasa.gov, E-mail: Aleksandre.Taktakishvili-1@nasa.gov [Community Coordinated Modeling Center, NASA Goddard Space Flight Center, Greenbelt, MD 20771 (United States)

    2017-01-10

    We present a first-principles-based coronal mass ejection (CME) model suitable for both scientific and operational purposes by combining a global magnetohydrodynamics (MHD) solar wind model with a flux-rope-driven CME model. Realistic CME events are simulated self-consistently with high fidelity and forecasting capability by constraining initial flux rope parameters with observational data from GONG, SOHO /LASCO, and STEREO /COR. We automate this process so that minimum manual intervention is required in specifying the CME initial state. With the newly developed data-driven Eruptive Event Generator using Gibson–Low configuration, we present a method to derive Gibson–Low flux rope parameters through a handful of observational quantities so that the modeled CMEs can propagate with the desired CME speeds near the Sun. A test result with CMEs launched with different Carrington rotation magnetograms is shown. Our study shows a promising result for using the first-principles-based MHD global model as a forecasting tool, which is capable of predicting the CME direction of propagation, arrival time, and ICME magnetic field at 1 au (see the companion paper by Jin et al. 2016a).

  7. Integrated Model to Assess Cloud Deployment Effectiveness When Developing an IT-strategy

    Science.gov (United States)

    Razumnikov, S.; Prankevich, D.

    2016-04-01

    Developing an IT-strategy for cloud deployment is a complex issue: even at the formation stage it requires identifying which applications will best meet the requirements of the company's business strategy, evaluating the reliability and safety of cloud providers, and analyzing staff satisfaction. A system of criteria, as well as an integrated model to assess cloud deployment effectiveness, is offered. The model makes it possible to identify which applications at a company's disposal, as well as new tools to be deployed, are reliable and safe enough for implementation in the cloud environment. Data on the practical use of the procedure to assess cloud deployment effectiveness by a provider of telecommunication services are presented. The model was used to calculate the integral indexes of the assessed services; those meeting the criteria and matching the company's business strategy were then selected.
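
    The "integral index, then threshold" selection step can be sketched as a weighted sum of normalized criterion scores; the criteria names, weights, threshold, and service scores below are invented for the sketch, not taken from the paper:

    ```python
    # Illustrative integral index: weighted sum of normalized criterion scores,
    # with services admitted to the cloud deployment plan when the index
    # clears a threshold.
    criteria_weights = {"reliability": 0.4, "security": 0.35, "staff_satisfaction": 0.25}
    threshold = 0.7

    services = {
        "email":   {"reliability": 0.9, "security": 0.8, "staff_satisfaction": 0.85},
        "billing": {"reliability": 0.6, "security": 0.5, "staff_satisfaction": 0.7},
    }

    def integral_index(scores):
        """Weighted aggregate of a service's criterion scores (all in [0, 1])."""
        return sum(criteria_weights[c] * scores[c] for c in criteria_weights)

    suitable = [s for s, sc in services.items() if integral_index(sc) >= threshold]
    ```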

  8. The role of aerosols in cloud drop parameterizations and its applications in global climate models

    Energy Technology Data Exchange (ETDEWEB)

    Chuang, C.C.; Penner, J.E. [Lawrence Livermore National Lab., CA (United States)

    1996-04-01

    The characteristics of the cloud drop size distribution near cloud base are initially determined by aerosols that serve as cloud condensation nuclei and the updraft velocity. We have developed parameterizations relating cloud drop number concentration to aerosol number and sulfate mass concentrations and used them in a coupled global aerosol/general circulation model (GCM) to estimate the indirect aerosol forcing. The global aerosol model made use of our detailed emissions inventories for the amount of particulate matter from biomass burning sources and from fossil fuel sources as well as emissions inventories of the gas-phase anthropogenic SO{sub 2}. This work is aimed at validating the coupled model with the Atmospheric Radiation Measurement (ARM) Program measurements and assessing the possible magnitude of the aerosol-induced cloud effects on climate.
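
    Sulfate-mass-to-droplet-number parameterizations of this kind are often written as a log-log (power-law) fit. The sketch below uses coefficients of the order that appear in the literature (e.g. Boucher-Lohmann-type fits), but the exact values are illustrative, not those of Chuang and Penner:

    ```python
    import numpy as np

    def droplet_number(sulfate_ug_m3, a=2.21, b=0.41):
        """Empirical sulfate-mass -> droplet-number relation of the form
        N_d = 10**(a + b*log10(m)), with m in ug/m^3 and N_d in cm^-3.
        Coefficients a, b are illustrative placeholders."""
        return 10.0 ** (a + b * np.log10(sulfate_ug_m3))

    n_clean, n_polluted = droplet_number(0.1), droplet_number(5.0)
    ```

    A power-law form means doubling the sulfate mass multiplies the droplet number by a fixed factor (2**b), which is the kind of sublinear response that makes the indirect forcing sensitive to the clean-air baseline.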

  9. An Ensemble Three-Dimensional Constrained Variational Analysis Method to Derive Large-Scale Forcing Data for Single-Column Models

    Science.gov (United States)

    Tang, Shuaiqi

    Atmospheric vertical velocities and advective tendencies are essential as large-scale forcing data to drive single-column models (SCM), cloud-resolving models (CRM) and large-eddy simulations (LES). They cannot be directly measured or easily calculated with great accuracy from field measurements. In the Atmospheric Radiation Measurement (ARM) program, a constrained variational algorithm (1DCVA) has been used to derive large-scale forcing data over a sounding network domain with the aid of flux measurements at the surface and top of the atmosphere (TOA). We extend the 1DCVA algorithm into three dimensions (3DCVA), along with other improvements, to calculate gridded large-scale forcing data. We also introduce an ensemble framework using different background data, error covariance matrices and constraint variables to quantify the uncertainties of the large-scale forcing data. The results of the sensitivity study show that the derived forcing data and SCM-simulated clouds are more sensitive to the background data than to the error covariance matrices and constraint variables, while horizontal moisture advection is relatively sensitive to precipitation, the dominant constraint variable. Using a mid-latitude cyclone case study on March 3rd, 2000 at the ARM Southern Great Plains (SGP) site, we investigate the spatial distribution of diabatic heating sources (Q1) and moisture sinks (Q2), and show that they are consistent with the satellite clouds and the intuitive structure of the mid-latitude cyclone. We also evaluate the Q1 and Q2 in analysis/reanalysis, finding that the regional analysis/reanalysis products all tend to underestimate the sub-grid scale upward transport of moist static energy in the lower troposphere.
With the uncertainties from large-scale forcing data and observation specified, we compare SCM results and observations and find that models have large biases on cloud properties which could not be fully explained by the uncertainty from the large-scale forcing
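
    The essence of a constrained variational analysis, nudging a background state minimally (in the background-error norm) so that it satisfies a budget constraint, has a closed-form linear analogue via the KKT system. A toy sketch (the real 1DCVA/3DCVA constraints are the column heat, moisture, and mass budgets, not this invented one):

    ```python
    import numpy as np

    # Nudge a background state xb minimally in the B-norm so it satisfies a
    # linear "budget" constraint A x = c.  KKT system:
    #   [B^-1  A'] [x ]   [B^-1 xb]
    #   [A     0 ] [mu] = [c      ]
    B = np.diag([1.0, 4.0, 2.0])        # background error covariance
    xb = np.array([1.0, 2.0, 3.0])      # background state (illustrative)
    A = np.array([[1.0, 1.0, 1.0]])     # budget: components must sum...
    c = np.array([5.0])                 # ...to the constrained total

    Binv = np.linalg.inv(B)
    K = np.block([[Binv, A.T], [A, np.zeros((1, 1))]])
    rhs = np.concatenate([Binv @ xb, c])
    x = np.linalg.solve(K, rhs)[:3]
    ```

    Note how the adjustment is distributed in proportion to the background error variances: the least-trusted component (variance 4) absorbs most of the correction, which is exactly the behavior that makes the analysis sensitive to the choice of background data.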

  10. Inverse modeling of cloud-aerosol interactions -- Part 1: Detailed response surface analysis

    NARCIS (Netherlands)

    Partridge, D.G.; Vrugt, J.A.; Tunved, P.; Ekman, A.M.L.; Gorea, D.; Sooroshian, A.

    2011-01-01

    New methodologies are required to probe the sensitivity of parameters describing cloud droplet activation. This paper presents an inverse modeling-based method for exploring cloud-aerosol interactions via response surfaces. The objective function, containing the difference between the measured and

  11. Simulation modeling of cloud computing for smart grid using CloudSim

    Directory of Open Access Journals (Sweden)

    Sandeep Mehmi

    2017-05-01

    Full Text Available In this paper a smart grid cloud has been simulated using CloudSim. Various parameters, like the number of virtual machines (VMs), VM image size, VM RAM, VM bandwidth, and cloudlet length, and their effect on cost and cloudlet completion time under time-shared and space-shared resource allocation policies, have been studied. As the number of cloudlets increased from 68 to 178, a greater number of cloudlets completed their execution, with higher cloudlet completion times under the time-shared allocation policy than under the space-shared allocation policy. A similar trend has been observed when VM bandwidth is increased from 1 Gbps to 10 Gbps and VM RAM is increased from 512 MB to 5120 MB. The cost of processing increased linearly with the number of VMs, VM image size and cloudlet length.
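
    The time-shared vs. space-shared completion-time trend has a simple toy explanation: space-shared runs cloudlets sequentially on a dedicated core (early ones finish fast), while time-shared splits the core among all cloudlets so every one finishes near the end. A much-simplified sketch, not CloudSim's actual scheduler, with invented numbers:

    ```python
    def space_shared_finish_times(lengths, mips):
        """Cloudlets run one after another on a dedicated core."""
        t, out = 0.0, []
        for L in lengths:
            t += L / mips
            out.append(t)
        return out

    def time_shared_finish_times(lengths, mips):
        """All cloudlets share the core equally; for equal lengths they all
        finish together at n * L / mips (simplified model)."""
        n = len(lengths)
        return [n * L / mips for L in lengths]

    lengths = [1000.0] * 4      # four equal cloudlets (length in instructions)
    mips = 500.0                # hypothetical core speed
    space = space_shared_finish_times(lengths, mips)
    shared = time_shared_finish_times(lengths, mips)
    ```

    Total throughput is identical, but the mean completion time is higher under time sharing, matching the trend reported above.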

  12. A Hybrid Verifiable and Delegated Cryptographic Model in Cloud Computing

    Directory of Open Access Journals (Sweden)

    Jaber Ibrahim Naser

    2018-02-01

    Full Text Available Access control is very important in cloud data sharing. Especially in domains like healthcare, it is essential to have access control mechanisms in place for confidentiality and secure data access. Attribute-based encryption has been used for many years to secure data and provide controlled access. In this paper, we propose a framework that supports a circuit- and attribute-based encryption mechanism involving multiple parties: the data owner, data user, cloud server and attribute authority. An important feature of the proposed system is the verifiable delegation of the decryption process to the cloud server. The data owner encrypts data and delegates the decryption process to the cloud. The cloud server performs partial decryption, and the final decrypted data are shared with users according to their privileges. The data owner thus reduces computational complexity by delegating the decryption process to the cloud server. We built a prototype application using the Microsoft .NET platform as a proof of concept. The empirical results revealed that there is controlled access, with multiple user roles and access control rights, for secure and confidential data access in cloud computing.
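
    The delegation idea, letting the cloud do the bulk of the decryption without ever holding the full key, can be illustrated with a toy ElGamal-style key split. This is deliberately tiny and insecure, and it omits the verification and attribute machinery that the paper's construction provides:

    ```python
    # Toy split-key decryption: the secret exponent x is split between the
    # cloud (x_cloud) and the user (x_user); neither share alone decrypts.
    p, g = 467, 2                      # small prime group (demo only, insecure)
    x = 127                            # data owner's secret key
    h = pow(g, x, p)                   # public key

    def encrypt(m, k=66):
        """ElGamal-style encryption of m < p with ephemeral exponent k."""
        return pow(g, k, p), (m * pow(h, k, p)) % p

    x_cloud, x_user = 100, x - 100     # secret split: x_cloud + x_user == x

    def cloud_partial_decrypt(c1):
        return pow(c1, x_cloud, p)     # cloud does the bulk of the exponentiation

    def user_finish(c1, c2, t):
        s = (t * pow(c1, x_user, p)) % p      # s == c1**x mod p
        return (c2 * pow(s, -1, p)) % p       # modular inverse (Python >= 3.8)

    c1, c2 = encrypt(42)
    m = user_finish(c1, c2, cloud_partial_decrypt(c1))
    ```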

  13. Empirical Succession Mapping and Data Assimilation to Constrain Demographic Processes in an Ecosystem Model

    Science.gov (United States)

    Kelly, R.; Andrews, T.; Dietze, M.

    2015-12-01

    Shifts in ecological communities in response to environmental change have implications for biodiversity, ecosystem function, and feedbacks to global climate change. Community composition is fundamentally the product of demography, but demographic processes are simplified or missing altogether in many ecosystem, Earth system, and species distribution models. This limitation arises in part because demographic data are noisy and difficult to synthesize. As a consequence, demographic processes are challenging to formulate in models in the first place, and to verify and constrain with data thereafter. Here, we used a novel analysis of the USFS Forest Inventory and Analysis to improve the representation of demography in an ecosystem model. First, we created an Empirical Succession Mapping (ESM) based on ~1 million individual tree observations from the eastern U.S. to identify broad demographic patterns related to forest succession and disturbance. We used results from this analysis to guide reformulation of the Ecosystem Demography model (ED), an existing forest simulator with explicit tree demography. Results from the ESM reveal a coherent, cyclic pattern of change in temperate forest tree size and density over the eastern U.S. The ESM captures key ecological processes including succession, self-thinning, and gap-filling, and quantifies the typical trajectory of these processes as a function of tree size and stand density. Recruitment is most rapid in early-successional stands with low density and mean diameter, but slows as stand density increases; mean diameter increases until thinning promotes recruitment of small-diameter trees. Strikingly, the upper bound of size-density space that emerges in the ESM conforms closely to the self-thinning power law often observed in ecology. The ED model obeys this same overall size-density boundary, but overestimates plot-level growth, mortality, and fecundity rates, leading to unrealistic emergent demographic patterns.
In particular
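
    The self-thinning check mentioned above amounts to fitting the slope of the upper size-density boundary in log-log space and comparing it to the classical -3/2 exponent. A sketch on synthetic plot data (not the FIA data):

    ```python
    import numpy as np

    # Fit the slope of an upper size-density boundary in log-log space, as a
    # check against the self-thinning law (slope near -3/2); data synthetic.
    rng = np.random.default_rng(0)
    diameter = np.linspace(5.0, 50.0, 40)               # mean tree diameter (cm)
    max_density = 3.0e5 * diameter ** -1.5              # boundary ~ D**(-3/2)
    density = max_density * rng.uniform(0.9, 1.0, 40)   # plots near the boundary

    slope, intercept = np.polyfit(np.log(diameter), np.log(density), 1)
    ```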

  14. Development of Presentation Model with Cloud Based Infrastructure

    Directory of Open Access Journals (Sweden)

    Magdalena Widiantari Maria

    2018-01-01

    Full Text Available Computer-mediated communication refers to communication activities conducted through technology, and it has progressed rapidly. Interactive communication nowadays no longer involves only person-to-person exchange but is mediated by technology, and it has been adopted in many fields, including education and teaching. In this study, presentation media based on cloud infrastructure are designed to replace face-to-face, in-class lectures. In addition, the presentation media allow data to be stored indefinitely and accessed anywhere and at any time. This is in line with the concept of student-centered learning, in which students are encouraged to be more active in lecture activities. The purpose of this research is to design a presentation model based on cloud infrastructure. The research uses a research-and-development method consisting of four stages: the first phase composes the concept of the presentation media design; the second phase chooses the subject to be designed as the subject of the presentation; the third stage designs the presentation model; and the fourth phase collects materials for the subject to be presented by each lecturer.

  15. Economic model of a cloud provider operating in a federated cloud

    OpenAIRE

    Goiri Presa, Íñigo; Guitart Fernández, Jordi; Torres Viñals, Jordi

    2012-01-01

    Resource provisioning in Cloud providers is a challenge because of the high variability of load over time. On the one hand, providers can serve most of the requests owning only a restricted amount of resources, but this forces them to reject customers during peak hours. On the other hand, valley hours incur under-utilization of the resources, which forces providers to increase their prices to be profitable. Federation overcomes these limitations and allows pro...

  16. A Unified Model of Cloud-to-Ground Lightning Stroke

    Science.gov (United States)

    Nag, A.; Rakov, V. A.

    2014-12-01

    The first stroke in a cloud-to-ground lightning discharge is thought to follow (or be initiated by) the preliminary breakdown process which often produces a train of relatively large microsecond-scale electric field pulses. This process is poorly understood and rarely modeled. Each lightning stroke is composed of a downward leader process and an upward return-stroke process, which are usually modeled separately. We present a unified engineering model for computing the electric field produced by a sequence of preliminary breakdown, stepped leader, and return stroke processes, serving to transport negative charge to ground. We assume that a negatively-charged channel extends downward in a stepped fashion through the relatively-high-field region between the main negative and lower positive charge centers and then through the relatively-low-field region below the lower positive charge center. A relatively-high-field region is also assumed to exist near ground. The preliminary breakdown pulse train is assumed to be generated when the negatively-charged channel interacts with the lower positive charge region. At each step, an equivalent current source is activated at the lower extremity of the channel, resulting in a step current wave that propagates upward along the channel. The leader deposits net negative charge onto the channel. Once the stepped leader attaches to ground (upward connecting leader is presently neglected), an upward-propagating return stroke is initiated, which neutralizes the charge deposited by the leader along the channel. We examine the effect of various model parameters, such as step length and current propagation speed, on model-predicted electric fields. We also compare the computed fields with pertinent measurements available in the literature.
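
    The "equivalent current source per step" idea can be sketched as a superposition of delayed step-current waves traveling up the channel at a fixed speed (a transmission-line-style kinematic picture; all parameter values are illustrative, not the authors'):

    ```python
    # Each leader step at height z_k injects a step current at time t0 that
    # travels up the channel at speed v, so the channel current at (z, t) is
    # a superposition of delayed step waves.
    v = 1.0e8                 # current-wave speed along the channel (m/s)
    i_step = 1.0e3            # amplitude of each step's current wave (A)
    step_sources = [(0.0, 500.0), (20e-6, 450.0), (40e-6, 400.0)]  # (t0, z_k)

    def channel_current(z, t):
        """Total current at height z and time t: sum of step waves launched
        at z_k at time t0 and propagating upward (z >= z_k) at speed v."""
        total = 0.0
        for t0, zk in step_sources:
            if z >= zk and t - t0 >= (z - zk) / v:
                total += i_step
        return total
    ```

    The electric field computation then follows by integrating the retarded current distribution over the channel, which is where the model parameters (step length, propagation speed) enter the predicted field waveforms.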

  17. Aerosol-cloud interactions in a multi-scale modeling framework

    Science.gov (United States)

    Lin, G.; Ghan, S. J.

    2017-12-01

    Atmospheric aerosols play an important role in changing the Earth's climate through scattering/absorbing solar and terrestrial radiation and interacting with clouds. However, quantification of aerosol effects remains one of the most uncertain aspects of current and future climate projection. Much of the uncertainty results from the multi-scale nature of aerosol-cloud interactions, which is very challenging to represent in traditional global climate models (GCMs). In contrast, the multi-scale modeling framework (MMF) provides a viable solution, which explicitly resolves clouds/precipitation in a cloud-resolving model (CRM) embedded in each GCM grid column. In the MMF version of the Community Atmosphere Model version 5 (CAM5), aerosol processes are treated with a parameterization called the Explicit Clouds Parameterized Pollutants (ECPP), which uses cloud/precipitation statistics derived from the CRM to treat the cloud processing of aerosols on the GCM grid. However, this approach represents clouds on the CRM grid but aerosols on the GCM grid, which is inconsistent with the reality that cloud-aerosol interactions occur at the cloud scale. To overcome this limitation, we propose a new aerosol treatment in the MMF, Explicit Clouds Explicit Aerosols (ECEP), in which both clouds and aerosols are resolved explicitly on the CRM grid. We first applied the MMF with ECPP to the Accelerated Climate Modeling for Energy (ACME) model to obtain an MMF version of ACME, and further developed an alternative version of ACME-MMF with ECEP. Based on these two models, we have conducted two simulations: one with ECPP and the other with ECEP. Preliminary results showed that the ECEP simulations tend to predict higher aerosol concentrations than the ECPP simulations, owing to more efficient vertical transport from the surface to the upper atmosphere and less efficient wet removal.
We also found that the cloud droplet number concentrations are also different between the

  18. A constrained multinomial Probit route choice model in the metro network: Formulation, estimation and application

    Science.gov (United States)

    Zhang, Yongsheng; Wei, Heng; Zheng, Kangning

    2017-01-01

    Considering that metro network expansion brings more alternative routes, it is attractive to integrate into route choice modeling the impacts of the route set and of the interdependency among alternative routes on route choice probability. Therefore, the formulation, estimation and application of a constrained multinomial probit (CMNP) route choice model in the metro network are carried out in this paper. The utility function is formulated with three components: the compensatory component is a function of influencing factors; the non-compensatory component measures the impact of the route set on utility; and, following a multivariate normal distribution, the covariance of the error component is structured into three parts, representing the correlation among routes, the transfer variance of a route, and the unobserved variance, respectively. Given the multidimensional integrals of the multivariate normal probability density function, the CMNP model is rewritten in hierarchical Bayes form, and a Metropolis-Hastings (M-H) sampling based Markov chain Monte Carlo approach is constructed to estimate all parameters. Based on Guangzhou Metro data, reliable estimation results are obtained. Furthermore, the proposed CMNP model also shows good forecasting performance for the calculation of route choice probabilities and good application performance for transfer flow volume prediction. PMID:28591188
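
    The M-H sampling step at the heart of such hierarchical-Bayes estimation can be sketched in miniature: a random-walk Metropolis-Hastings sampler for the posterior of a single coefficient under a normal likelihood (a toy stand-in for the CMNP setup; data and settings are invented):

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    data = rng.normal(1.8, 1.0, 500)         # synthetic "observed utilities"

    def log_post(beta):
        """Flat prior + normal likelihood => log posterior up to a constant."""
        return -0.5 * np.sum((data - beta) ** 2)

    beta, chain = 0.0, []
    for _ in range(5000):
        prop = beta + 0.1 * rng.standard_normal()       # random-walk proposal
        if np.log(rng.random()) < log_post(prop) - log_post(beta):
            beta = prop                                  # accept
        chain.append(beta)                               # else keep current
    est = np.mean(chain[1000:])                          # discard burn-in
    ```

    In the CMNP setting the same accept/reject mechanics are applied blockwise to the utility coefficients and the structured covariance parameters.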

  19. Constraining climate sensitivity and continental versus seafloor weathering using an inverse geological carbon cycle model.

    Science.gov (United States)

    Krissansen-Totton, Joshua; Catling, David C

    2017-05-22

    The relative influences of tectonics, continental weathering and seafloor weathering in controlling the geological carbon cycle are unknown. Here we develop a new carbon cycle model that explicitly captures the kinetics of seafloor weathering to investigate carbon fluxes and the evolution of atmospheric CO2 and ocean pH since 100 Myr ago. We compare model outputs to proxy data, and rigorously constrain model parameters using Bayesian inverse methods. Assuming our forward model is an accurate representation of the carbon cycle, to fit proxies the temperature dependence of continental weathering must be weaker than commonly assumed. We find that 15-31 °C (1σ) surface warming is required to double the continental weathering flux, versus 3-10 °C in previous work. In addition, continental weatherability has increased 1.7-3.3 times since 100 Myr ago, demanding explanation by uplift and sea-level changes. The average Earth system climate sensitivity is  K (1σ) per CO2 doubling, which is notably higher than fast-feedback estimates. These conclusions are robust to assumptions about outgassing, modern fluxes and seafloor weathering kinetics.
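
    Assuming the common exponential parameterization of the continental weathering flux, F = F0 * exp(ΔT / Te), the quoted "warming required to double the flux" maps directly onto the e-folding temperature Te via ΔT_2x = Te * ln 2 (a back-of-envelope sketch, not the paper's full kinetic model):

    ```python
    import math

    # Weathering flux F = F0 * exp(dT / Te): the warming that doubles the
    # flux is dT2x = Te * ln(2), and conversely Te = dT2x / ln(2).
    def doubling_warming(Te):
        return Te * math.log(2)

    def Te_from_doubling(dT2x):
        return dT2x / math.log(2)

    # Paper's fitted range (15-31 C to double) vs the conventional range (3-10 C)
    Te_paper = [Te_from_doubling(d) for d in (15.0, 31.0)]
    Te_conventional = [Te_from_doubling(d) for d in (3.0, 10.0)]
    ```

    The weaker temperature dependence (larger Te) is what forces the inferred climate sensitivity upward when the model is fit to the same proxy records.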

  20. Collaborative Cloud Manufacturing: Design of Business Model Innovations Enabled by Cyberphysical Systems in Distributed Manufacturing Systems

    Directory of Open Access Journals (Sweden)

    Erwin Rauch

    2016-01-01

    Full Text Available Collaborative cloud manufacturing, as a concept of distributed manufacturing, allows different opportunities for changing the logic of generating and capturing value. Cyberphysical systems and the technologies behind them are the enablers for new business models which have the potential to be disruptive. This paper introduces the topics of distributed manufacturing as well as cyberphysical systems. Furthermore, the main business model clusters of distributed manufacturing systems are described, including collaborative cloud manufacturing. The paper aims to provide support for developing business model innovations based on collaborative cloud manufacturing. Therefore, three business model architecture types of a differentiated business logic are discussed, taking into consideration the parameters which have an influence and the design of the business model and its architecture. As a result, new business models can be developed systematically and new ideas can be generated to boost the concept of collaborative cloud manufacturing within all sustainable business models.

  1. Modeling and Simulation of the Gonghe geothermal field (Qinghai, China) Constrained by Geophysical Data

    Science.gov (United States)

    Zeng, Z.; Wang, K.; Zhao, X.; Huai, N.; He, R.

    2017-12-01

    The Gonghe geothermal field in Qinghai is important because of its variety of geothermal resource types, and it is now a demonstration area for geothermal development and utilization in China. It has been the topic of numerous geophysical investigations conducted to determine the depth to and the nature of the heat source, and to image the channel of heat flow. This work investigates the origin of the geothermal field using a numerical simulation method constrained by geophysical data. First, by analyzing and inverting a magnetotelluric (MT) profile across the area, we obtain the deep resistivity distribution. A gravity anomaly inversion constrained by the resistivity profile then yields the densities of the basin sediments and the underlying rocks. Combined with measured rock thermal conductivities, a 2D geothermal conceptual model of the Gonghe area is constructed. An unstructured finite element method is then used to solve the heat conduction equation and simulate the geothermal field. Results of this model were calibrated with temperature data from an observation well, and a good match was achieved between the measured values and the model's predictions. Finally, the geothermal gradient and heat flow distribution of the model are calculated (Fig. 1). According to the geophysical results, there is a low-resistivity, low-density region (d5) below the geothermal field. We interpret this anomaly as the product of tectonic motion that created an upstream channel for mantle-derived heat, so that basement heat flow values there are anomalously high. The model's predictions using that boundary condition match the measured values well. The simulated heat flow values show that the mantle-derived heat flow migrates through the boundary of the low-resistivity, low-density anomaly to the Gonghe geothermal field, with only a small fraction
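    The heat conduction problem at the core of such a simulation can be illustrated with a toy solver. The sketch below uses Jacobi iteration on a small structured grid for the steady-state Laplace equation with fixed surface and basal temperatures; the paper itself uses an unstructured finite element method, and all numbers here are placeholders:

```python
# Toy 2-D steady-state heat conduction: solve del^2 T = 0 by Jacobi
# iteration. Boundary temperatures are hypothetical, not from the paper.
N = 20                          # grid points per side
T_top, T_bot = 10.0, 150.0      # surface and basal temperatures (deg C)

def linear(i):
    # exact 1-D conduction profile, used to fix the side boundaries
    return T_top + (T_bot - T_top) * i / (N - 1)

# boundaries held fixed, interior starts cold
T = [[linear(i) if i in (0, N - 1) or j in (0, N - 1) else 0.0
      for j in range(N)] for i in range(N)]

for _ in range(2000):           # Jacobi sweeps until convergence
    new = [row[:] for row in T]
    for i in range(1, N - 1):
        for j in range(1, N - 1):
            new[i][j] = 0.25 * (T[i - 1][j] + T[i + 1][j]
                                + T[i][j - 1] + T[i][j + 1])
    T = new

mid = T[N // 2][N // 2]  # interior relaxes to the linear conduction profile
```

    A real geothermal model would add spatially varying thermal conductivity (the measured rock parameters) and a basal heat flux boundary condition rather than fixed temperatures.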

  2. Coupling spectral-bin cloud microphysics with the MOSAIC aerosol model in WRF-Chem: Methodology and results for marine stratocumulus clouds

    Science.gov (United States)

    Gao, Wenhua; Fan, Jiwen; Easter, R. C.; Yang, Qing; Zhao, Chun; Ghan, Steven J.

    2016-09-01

    Aerosol-cloud interaction processes can be represented more physically with bin cloud microphysics than with bulk microphysical parameterizations. However, due to past limitations in computational power, bin cloud microphysics was often run with very simple aerosol treatments. The purpose of this study is to better represent aerosol-cloud interaction processes in the Chemistry version of the Weather Research and Forecasting model (WRF-Chem) at convection-permitting scales by coupling spectral-bin cloud microphysics (SBM) with the MOSAIC sectional aerosol model. A flexible interface is built that exchanges cloud and aerosol information between them. The interface contains a new bin aerosol activation approach, which replaces the treatments in the original SBM, and includes modified aerosol resuspension and in-cloud wet removal processes driven by the droplet loss tendencies and precipitation fluxes from the SBM. The newly coupled system is evaluated for two marine stratocumulus cases over the Southeast Pacific Ocean with either a simplified aerosol setup or full chemistry. We compare the aerosol activation process in the newly coupled SBM-MOSAIC against the SBM simulation without chemistry using a simplified aerosol setup, and the results show consistent activation rates. A longer simulation reinforces that aerosol resuspension through cloud drop evaporation plays an important role in replenishing aerosols and impacts cloud and precipitation in marine stratocumulus clouds. Evaluation of the coupled SBM-MOSAIC with full chemistry against aircraft measurements suggests that the new model behaves realistically for marine stratocumulus clouds, and improves the simulation of cloud microphysical properties compared to a simulation using MOSAIC coupled with the Morrison two-moment microphysics.

  3. Microphysical Modeling of Mineral Clouds in GJ1214 b and GJ436 b: Predicting Upper Limits on the Cloud-top Height

    Science.gov (United States)

    Ohno, Kazumasa; Okuzumi, Satoshi

    2018-05-01

    The ubiquity of clouds in the atmospheres of exoplanets, especially of super-Earths, is one of the outstanding issues in transmission spectroscopy surveys. Understanding the formation process of clouds in super-Earths is necessary to interpret the observed spectra correctly. In this study, we investigate the vertical distributions of particle size and mass density of mineral clouds in super-Earths using a microphysical model that takes into account the vertical transport and growth of cloud particles in a self-consistent manner. We demonstrate that the vertical profiles of mineral clouds vary significantly with the concentration of cloud condensation nuclei and with atmospheric metallicity. We find that the height of the cloud top increases with increasing metallicity as long as the metallicity is below a threshold. If the metallicity exceeds the threshold, the cloud-top height no longer increases appreciably with metallicity because coalescence yields larger particles with higher settling velocities. We apply our cloud model to GJ1214 b and GJ436 b, for which recent transmission observations suggest the presence of high-altitude opaque clouds. For GJ436 b, we show that KCl particles can ascend high enough to explain the observation. For GJ1214 b, by contrast, the height of KCl clouds predicted by our model is too low to explain its flat transmission spectrum. Clouds made of highly porous KCl particles could explain the observations if the atmosphere is highly metal-rich, and hence the particle microstructure might be a key to interpreting the flat spectrum of GJ1214 b.
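    The claim that coalescence caps the cloud-top height because larger particles settle faster can be made concrete with the Stokes terminal velocity, which scales as the particle radius squared. The numbers below (bulk KCl density, a rough viscosity, Earth-like gravity) are illustrative assumptions, not values from the paper:

```python
def stokes_settling_velocity(radius_m, rho_particle, eta, g=9.8):
    """Terminal speed of a small sphere in the Stokes drag regime:
    v = 2 * rho_p * g * r^2 / (9 * eta). Slip correction is neglected."""
    return 2.0 * rho_particle * g * radius_m ** 2 / (9.0 * eta)

RHO_KCL = 1990.0   # kg m^-3, bulk KCl density
ETA = 1.0e-5       # Pa s, assumed viscosity of a hydrogen-rich atmosphere

v_small = stokes_settling_velocity(1e-6, RHO_KCL, ETA)  # 1 um particle
v_large = stokes_settling_velocity(1e-5, RHO_KCL, ETA)  # 10 um particle
ratio = v_large / v_small  # r^2 scaling: tenfold radius -> 100x faster fall
```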

  4. Reconstruction of Consistent 3d CAD Models from Point Cloud Data Using a Priori CAD Models

    Science.gov (United States)

    Bey, A.; Chaine, R.; Marc, R.; Thibault, G.; Akkouche, S.

    2011-09-01

    We address the reconstruction of 3D CAD models from point cloud data acquired in industrial environments, using a pre-existing 3D model as an initial estimate of the scene to be processed. Indeed, this prior knowledge can be used to drive the reconstruction so as to generate an accurate 3D model matching the point cloud. We focus in particular on the cylindrical parts of the 3D models. We propose to state the problem in a probabilistic framework: we search for the 3D model which maximizes a probability taking several constraints into account, such as the relevancy with respect to the point cloud and the a priori 3D model, and the consistency of the reconstructed model. The resulting optimization problem can then be handled using a stochastic exploration of the solution space, based on the random insertion of elements in the configuration under construction, coupled with a greedy management of conflicts, which efficiently improves the configuration at each step. We show that this approach provides reliable reconstructed 3D models by presenting some results on industrial data sets.

  5. Sequential optimization of a terrestrial biosphere model constrained by multiple satellite based products

    Science.gov (United States)

    Ichii, K.; Kondo, M.; Wang, W.; Hashimoto, H.; Nemani, R. R.

    2012-12-01

    Various satellite-based spatial products, such as evapotranspiration (ET) and gross primary productivity (GPP), are now produced by integrating ground and satellite observations. Effective use of these multiple satellite-based products in terrestrial biosphere models is an important step toward better understanding of terrestrial carbon and water cycles. However, because terrestrial biosphere models are complex and have large numbers of parameters, applying these spatial data sets to them is difficult. In this study, we established an effective but simple framework to refine a terrestrial biosphere model, Biome-BGC, using multiple satellite-based products as constraints. We tested the framework in the monsoon Asia region covered by AsiaFlux observations. The framework is based on hierarchical analysis (Wang et al. 2009) with model parameter optimization constrained by satellite-based spatial data. The Biome-BGC model is separated into several tiers to minimize the freedom of model parameter selection and maximize independence from the whole model. For example, the snow sub-model is optimized first using the MODIS snow cover product, followed by the soil water sub-model optimized by satellite-based ET (estimated by an empirical upscaling method, Support Vector Regression (SVR); Yang et al. 2007), the photosynthesis model optimized by satellite-based GPP (also based on the SVR method), and the respiration and residual carbon cycle models optimized by biomass data. In an initial assessment, we found that most of the default sub-models (e.g. snow, water cycle, and carbon cycle) showed large deviations from remote sensing observations. These biases were removed by applying the proposed framework: gross primary productivity was initially underestimated in boreal and temperate forests and overestimated in tropical forests, but the parameter optimization scheme successfully reduced these biases. Our analysis

  6. Greenland ice sheet model parameters constrained using simulations of the Eemian Interglacial

    Directory of Open Access Journals (Sweden)

    A. Robinson

    2011-04-01

    Full Text Available Using a new approach to force an ice sheet model, we performed an ensemble of simulations of the Greenland Ice Sheet evolution during the last two glacial cycles, with emphasis on the Eemian Interglacial. This ensemble was generated by perturbing four key parameters in the coupled regional climate-ice sheet model and by introducing additional uncertainty in the prescribed "background" climate change. The sensitivity of the surface melt model to climate change was determined to be the dominant driver of ice sheet instability, as reflected by simulated ice sheet loss during the Eemian Interglacial period. To eliminate unrealistic parameter combinations, constraints from present-day and paleo information were applied. The constraints include (i) the diagnosed present-day surface mass balance partition between surface melting and ice discharge at the margin, (ii) the modeled present-day elevation at GRIP, and (iii) the modeled elevation reduction at GRIP during the Eemian. Using these three constraints, a total of 360 simulations with 90 different model realizations were filtered down to 46 simulations and 20 model realizations considered valid. The paleo constraint eliminated more sensitive melt parameter values, in agreement with the surface mass balance partition assumption. The constrained simulations resulted in a range of Eemian ice loss of 0.4–4.4 m sea level equivalent, with a more likely range of about 3.7–4.4 m sea level equivalent if the GRIP δ18O isotope record can be considered an accurate proxy for the precipitation-weighted annual mean temperatures.
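    The filtering procedure described above (run a perturbed-parameter ensemble, then keep only members consistent with observational constraints) can be sketched generically. The toy model and constraint range below are placeholders, not the paper's coupled climate-ice sheet model:

```python
import random

def run_toy_model(melt_sensitivity, background_warming):
    # Hypothetical stand-in for the coupled model: simulated Eemian ice
    # loss (m sea level equivalent) increases with both parameters.
    return 0.8 * melt_sensitivity + 0.5 * background_warming

rng = random.Random(0)
ensemble = [(rng.uniform(0.0, 5.0), rng.uniform(0.0, 4.0))
            for _ in range(360)]  # 360 simulations, as in the abstract

# Keep only members whose simulated ice loss falls within an accepted
# range (the range here is illustrative):
accepted = [p for p in ensemble if 0.4 <= run_toy_model(*p) <= 4.4]
n_rejected = len(ensemble) - len(accepted)
```

    The paper applies three such constraints jointly (mass balance partition, present-day GRIP elevation, Eemian GRIP elevation reduction), each trimming the valid parameter space further.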

  7. A Monte Carlo approach to constraining uncertainties in modelled downhole gravity gradiometry applications

    Science.gov (United States)

    Matthews, Samuel J.; O'Neill, Craig; Lackie, Mark A.

    2017-06-01

    Gravity gradiometry has a long legacy, with airborne/marine applications as well as surface applications receiving renewed recent interest. Recent instrumental advances have led to the emergence of downhole gravity gradiometry applications that have the potential for greater resolving power than borehole gravity alone. This has promise in both the petroleum and geosequestration industries; however, the effect of inherent uncertainties on the ability of downhole gravity gradiometry to resolve a subsurface signal is unknown. Here, we utilise the open source modelling package, Fatiando a Terra, to model both the gravity and gravity gradiometry responses of a subsurface body. We use a Monte Carlo approach to vary the geological structure and reference densities of the model within preset distributions, performing 100 000 simulations to constrain the mean response of the buried body as well as the uncertainties in these results. We placed our modelled borehole either centred on the anomaly, adjacent to the anomaly (in the x-direction), or 2500 m distant from the anomaly (also in the x-direction). We demonstrate that gravity gradiometry is able to resolve a reservoir-scale modelled subsurface density variation up to 2500 m away, and that certain gravity gradient components (Gzz, Gxz, and Gxx) are particularly sensitive to this variation, resolving it above the level of uncertainty in the model. The responses provided by downhole gravity gradiometry modelling clearly demonstrate a technique that can be utilised in determining a buried density contrast, which will be of particular use in the emerging industry of CO2 geosequestration. The results also provide a strong benchmark for the development of newly emerging prototype downhole gravity gradiometers.
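    The Monte Carlo workflow described above can be illustrated with the simplest gravity forward model, a buried sphere, varying the density contrast within a preset distribution and collecting the mean response and its spread. All values below are illustrative:

```python
import math
import random

G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

def gz_sphere(delta_rho, radius, depth, offset_x):
    """Vertical gravity anomaly of a buried sphere (point-mass form):
    gz = G * m * z / (x^2 + z^2)^(3/2), with m the excess mass."""
    mass = 4.0 / 3.0 * math.pi * radius ** 3 * delta_rho
    return G * mass * depth / (offset_x ** 2 + depth ** 2) ** 1.5

# Monte Carlo over an uncertain density contrast, delta_rho ~ N(300, 50):
rng = random.Random(1)
samples = [gz_sphere(rng.gauss(300.0, 50.0),
                     radius=500.0, depth=2500.0, offset_x=0.0)
           for _ in range(10000)]

mean_gz = sum(samples) / len(samples)
std_gz = (sum((s - mean_gz) ** 2 for s in samples) / len(samples)) ** 0.5
```

    The study does the same at much larger scale (100 000 realizations, full gradiometry tensor components, varying geometry as well as density) using Fatiando a Terra's prism-based forward models.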

  8. Evaluation and Improvement of Cloud and Convective Parameterizations from Analyses of ARM Observations and Models

    Energy Technology Data Exchange (ETDEWEB)

    Del Genio, Anthony D. [NASA Goddard Inst. for Space Studies (GISS), New York, NY (United States)

    2016-03-11

    Over this period the PI and his team performed a broad range of data analysis, model evaluation, and model improvement studies using ARM data. These included cloud regimes in the TWP and their evolution over the MJO; M-PACE IOP SCM-CRM intercomparisons; simulations of convective updraft strength and depth during TWP-ICE; evaluation of convective entrainment parameterizations using TWP-ICE simulations; evaluation of GISS GCM cloud behavior vs. long-term SGP cloud statistics; classification of aerosol semi-direct effects on cloud cover; depolarization lidar constraints on cloud phase; preferred states of the winter Arctic atmosphere, surface, and sub-surface; sensitivity of convection to tropospheric humidity; constraints on the parameterization of mesoscale organization from TWP-ICE WRF simulations; updraft and downdraft properties in TWP-ICE simulated convection; and insights from long-term ARM records at Manus and Nauru.

  9. Internet gaming disorder: Inadequate diagnostic criteria wrapped in a constraining conceptual model.

    Science.gov (United States)

    Starcevic, Vladan

    2017-06-01

    Background and aims: The paper "Chaos and confusion in DSM-5 diagnosis of Internet Gaming Disorder: Issues, concerns, and recommendations for clarity in the field" by Kuss, Griffiths, and Pontes (in press) critically examines the DSM-5 diagnostic criteria for Internet gaming disorder (IGD) and addresses the issue of whether IGD should be reconceptualized as gaming disorder, regardless of whether video games are played online or offline. This commentary provides additional critical perspectives on the concept of IGD. Methods: The focus of this commentary is on the addiction model on which the concept of IGD is based, the nature of the DSM-5 criteria for IGD, and the inclusion of withdrawal symptoms and tolerance as the diagnostic criteria for IGD. Results: The addiction framework on which the DSM-5 concept of IGD is based is not without problems and represents only one of multiple theoretical approaches to problematic gaming. The polythetic, non-hierarchical DSM-5 diagnostic criteria for IGD make the concept of IGD unacceptably heterogeneous. There is no support for maintaining withdrawal symptoms and tolerance as the diagnostic criteria for IGD without their substantial revision. Conclusions: The addiction model of IGD is constraining and does not contribute to a better understanding of the various patterns of problematic gaming. The corresponding diagnostic criteria need a thorough overhaul, which should be based on a model of problematic gaming that can accommodate its disparate aspects.

  10. An Anatomically Constrained Model for Path Integration in the Bee Brain.

    Science.gov (United States)

    Stone, Thomas; Webb, Barbara; Adden, Andrea; Weddig, Nicolai Ben; Honkanen, Anna; Templin, Rachel; Wcislo, William; Scimeca, Luca; Warrant, Eric; Heinze, Stanley

    2017-10-23

    Path integration is a widespread navigational strategy in which directional changes and distance covered are continuously integrated on an outward journey, enabling a straight-line return to home. Bees use vision for this task (a celestial-cue-based visual compass and an optic-flow-based visual odometer), but the underlying neural integration mechanisms are unknown. Using intracellular electrophysiology, we show that polarized-light-based compass neurons and optic-flow-based speed-encoding neurons converge in the central complex of the bee brain, and through block-face electron microscopy, we identify potential integrator cells. Based on plausible output targets for these cells, we propose a complete circuit for path integration and steering in the central complex, with anatomically identified neurons suggested for each processing step. The resulting model circuit is thus fully constrained biologically and provides a functional interpretation for many previously unexplained architectural features of the central complex. Moreover, we show that the receptive fields of the newly discovered speed neurons can support path integration for the holonomic motion (i.e., a ground velocity that is not precisely aligned with body orientation) typical of bee flight, a feature not captured in any previously proposed model of path integration. In a broader context, the model circuit presented provides a general mechanism for producing steering signals by comparing current and desired headings, suggesting a more basic function for central complex connectivity, from which path integration may have evolved. Copyright © 2017 Elsevier Ltd. All rights reserved.

  11. Chance-constrained/stochastic linear programming model for acid rain abatement. I. Complete colinearity and noncolinearity

    Energy Technology Data Exchange (ETDEWEB)

    Ellis, J H; McBean, E A; Farquhar, G J

    1985-01-01

    A Linear Programming model is presented for development of acid rain abatement strategies in eastern North America. For a system comprised of 235 large controllable point sources and 83 uncontrolled area sources, it determines the least-cost method of reducing SO/sub 2/ emissions to satisfy maximum wet sulfur deposition limits at 20 sensitive receptor locations. In this paper, the purely deterministic model is extended to a probabilistic form by incorporating the effects of meteorologic variability on the long-range pollutant transport processes. These processes are represented by source-receptor-specific transfer coefficients. Experiments for quantifying the spatial variability of transfer coefficients showed their distributions to be approximately lognormal with logarithmic standard deviations consistently about unity. Three methods of incorporating second-moment random variable uncertainty into the deterministic LP framework are described: Two-Stage Programming Under Uncertainty, Chance-Constrained Programming and Stochastic Linear Programming. A composite CCP-SLP model is developed which embodies the two-dimensional characteristics of transfer coefficient uncertainty. Two probabilistic formulations are described involving complete colinearity and complete noncolinearity for the transfer coefficient covariance-correlation structure. The completely colinear and noncolinear formulations are considered extreme bounds in a meteorologic sense and yield abatement strategies of largely didactic value. Such strategies can be characterized as having excessive costs and undesirable deposition results in the completely colinear case and absence of a clearly defined system risk level (other than expected-value) in the noncolinear formulation.
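    The key move in chance-constrained programming, replacing a probabilistic constraint with its deterministic equivalent at a chosen reliability level, can be shown for a single source-receptor pair with a lognormal transfer coefficient (all numbers illustrative):

```python
import math

def lognormal_quantile(mu, sigma, z):
    # alpha-quantile of a lognormal variable: exp(mu + z_alpha * sigma)
    return math.exp(mu + z * sigma)

# P(t * e <= L) >= 95% becomes the deterministic constraint t_95 * e <= L,
# where t_95 is the 95th percentile of the transfer coefficient t.
MU, SIGMA = math.log(0.02), 1.0   # logarithmic sd ~ 1, as in the abstract
Z_95 = 1.645                      # 95% standard-normal quantile
LIMIT = 20.0                      # wet deposition limit (arbitrary units)

t_95 = lognormal_quantile(MU, SIGMA, Z_95)
e_max = LIMIT / t_95              # max emission meeting the chance constraint
e_median = LIMIT / math.exp(MU)   # much looser limit if only median t is used
```

    The full model couples many such constraints across 235 sources and 20 receptors, which is where the colinear versus noncolinear treatment of transfer coefficient correlations matters.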

  12. An Incremental Model for Cloud Adoption: Based on a Study of Regional Organizations

    Directory of Open Access Journals (Sweden)

    Emre Erturk

    2017-11-01

    Full Text Available Many organizations that use cloud computing services intend to increase this commitment. A survey was distributed to organizations in Hawke’s Bay, New Zealand, to understand their adoption of cloud solutions in comparison with global trends and practices. The survey also included questions on the benefits and challenges, and on which delivery model(s) they have adopted and are planning to adopt. One aim is to contribute to the cloud computing literature and build on the existing adoption models. This study also highlights additional aspects applicable to various organizations (small, medium, large, and regional). Finally, recommendations are provided for related future research projects.

  13. Using satellites and global models to investigate aerosol-cloud interactions

    Science.gov (United States)

    Gryspeerdt, E.; Quaas, J.; Goren, T.; Sourdeval, O.; Mülmenstädt, J.

    2017-12-01

    Aerosols are known to impact liquid cloud properties, through both microphysical and radiative processes. Increasing the number concentration of aerosol particles can increase the cloud droplet number concentration (CDNC). Through impacts on precipitation processes, this increase in CDNC may also be able to impact the cloud fraction (CF) and the cloud liquid water path (LWP). Several studies have looked into the effect of aerosols on the CDNC, but as the albedo of a cloudy scene depends much more strongly on LWP and CF, an aerosol influence on these properties could generate a significant radiative forcing. While the impact of aerosols on cloud properties can be seen in case studies involving shiptracks and volcanoes, producing a global estimate of these effects remains challenging due to the confounding effect of local meteorology. For example, relative humidity significantly impacts the aerosol optical depth (AOD), a common satellite proxy for CCN, while also being a strong control on cloud properties. This can generate relationships between AOD and cloud properties even when aerosol-cloud interactions have no impact. In this work, we look at how aerosol-cloud interactions can be distinguished from the effect of local meteorology in satellite studies. With a combination of global climate models and multiple sources of satellite data, we show that the choice of appropriate mediating variables and case studies can be used to develop constraints on the aerosol impact on CF and LWP. This will lead to improved representations of clouds in global climate models and help to reduce the uncertainty in the global impact of anthropogenic aerosols on cloud properties.
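    The confounding mechanism described above can be demonstrated with synthetic data: if relative humidity drives both the AOD proxy and cloud fraction, a strong AOD-CF correlation appears even with no aerosol-cloud interaction, and it largely vanishes once the analysis is stratified by humidity. All coefficients here are invented for illustration:

```python
import random

rng = random.Random(3)
data = []
for _ in range(5000):
    rh = rng.uniform(0.0, 1.0)                   # relative humidity
    aod = 0.1 + 0.3 * rh + rng.gauss(0.0, 0.02)  # humidity swells aerosols
    cf = 0.2 + 0.6 * rh + rng.gauss(0.0, 0.05)   # humidity favors cloud
    data.append((rh, aod, cf))

def corr(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

raw = corr([d[1] for d in data], [d[2] for d in data])  # strong but spurious

# Conditioning on the confounder (one narrow humidity bin) removes most
# of the apparent aerosol effect:
stratum = [d for d in data if 0.45 <= d[0] < 0.55]
stratified = corr([d[1] for d in stratum], [d[2] for d in stratum])
```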

  14. Global model comparison of heterogeneous ice nucleation parameterizations in mixed phase clouds

    Science.gov (United States)

    Yun, Yuxing; Penner, Joyce E.

    2012-04-01

    A new aerosol-dependent mixed phase cloud parameterization for deposition/condensation/immersion (DCI) ice nucleation and one for contact freezing are compared to the original formulations in a coupled general circulation model and aerosol transport model. The present-day cloud liquid and ice water fields and cloud radiative forcing are analyzed and compared to observations. The new DCI freezing parameterization changes the spatial distribution of the cloud water field. Significant changes are found in the cloud ice water fraction and in the middle cloud fractions. The new DCI freezing parameterization predicts less ice water path (IWP) than the original formulation, especially in the Southern Hemisphere. The smaller IWP leads to a less efficient Bergeron-Findeisen process resulting in a larger liquid water path, shortwave cloud forcing, and longwave cloud forcing. It is found that contact freezing parameterizations have a greater impact on the cloud water field and radiative forcing than the two DCI freezing parameterizations that we compared. The net solar flux at top of atmosphere and net longwave flux at the top of the atmosphere change by up to 8.73 and 3.52 W m-2, respectively, due to the use of different DCI and contact freezing parameterizations in mixed phase clouds. The total climate forcing from anthropogenic black carbon/organic matter in mixed phase clouds is estimated to be 0.16-0.93 W m-2 using the aerosol-dependent parameterizations. A sensitivity test with contact ice nuclei concentration in the original parameterization fit to that recommended by Young (1974) gives results that are closer to the new contact freezing parameterization.

  15. Cloud ice: A climate model challenge with signs and expectations of progress

    Science.gov (United States)

    Waliser, Duane E.; Li, Jui-Lin F.; Woods, Christopher P.; Austin, Richard T.; Bacmeister, Julio; Chern, Jiundar; Del Genio, Anthony; Jiang, Jonathan H.; Kuang, Zhiming; Meng, Huan; Minnis, Patrick; Platnick, Steve; Rossow, William B.; Stephens, Graeme L.; Sun-Mack, Szedung; Tao, Wei-Kuo; Tompkins, Adrian M.; Vane, Deborah G.; Walker, Christopher; Wu, Dong

    2009-04-01

    Present-day shortcomings in the representation of upper tropospheric ice clouds in general circulation models (GCMs) lead to errors in weather and climate forecasts as well as account for a source of uncertainty in climate change projections. An ongoing challenge in rectifying these shortcomings has been the availability of adequate, high-quality, global observations targeting ice clouds and related precipitating hydrometeors. In addition, the inadequacy of the modeled physics and the often disjointed nature between model representation and the characteristics of the retrieved/observed values have hampered GCM development and validation efforts from making effective use of the measurements that have been available. Thus, even though parameterizations in GCMs accounting for cloud ice processes have, in some cases, become more sophisticated in recent years, this development has largely occurred independently of the global-scale measurements. With the relatively recent addition of satellite-derived products from Aura/Microwave Limb Sounder (MLS) and CloudSat, there are now considerably more resources with new and unique capabilities to evaluate GCMs. In this article, we illustrate the shortcomings evident in model representations of cloud ice through a comparison of the simulations assessed in the Intergovernmental Panel on Climate Change Fourth Assessment Report, briefly discuss the range of global observational resources that are available, and describe the essential components of the model parameterizations that characterize their "cloud" ice and related fields. Using this information as background, we (1) discuss some of the main considerations and cautions that must be taken into account in making model-data comparisons related to cloud ice, (2) illustrate present progress and uncertainties in applying satellite cloud ice (namely from MLS and CloudSat) to model diagnosis, (3) show some indications of model improvements, and finally (4) discuss a number of

  16. Constraining Distributed Catchment Models by Incorporating Perceptual Understanding of Spatial Hydrologic Behaviour

    Science.gov (United States)

    Hutton, Christopher; Wagener, Thorsten; Freer, Jim; Han, Dawei

    2016-04-01

    and valley slopes within the catchment are used to identify behavioural models. The process of converting qualitative information into quantitative constraints forces us to evaluate the assumptions behind our perceptual understanding in order to derive robust constraints, and therefore fairly reject models and avoid type II errors. Likewise, consideration needs to be given to the commensurability problem when mapping perceptual understanding to constrain model states.

  17. Photoionization modeling of Magellanic Cloud planetary nebulae. I

    Science.gov (United States)

    Dopita, M. A.; Meatheringham, S. J.

    1991-01-01

    The results of self-consistent photoionization modeling of 38 Magellanic Cloud PNe are presented and used to construct an H-R diagram for the central stars and to obtain both the nebular chemical abundances and the physical parameters of the nebulae. T(eff)s derived from nebular excitation analysis are in agreement with temperatures derived by the classical Zanstra method. There is a linear correlation between log T(eff) and the excitation class. The majority of the central stars in the sample with optically thick nebulae have masses between 0.55 and 0.7 solar mass and are observed during their hydrogen-burning excursion toward high temperatures. Optically thin objects are found scattered throughout the H-R diagram, but tend to have a somewhat smaller mean mass. Type I PN are found to have high core masses and to lie on the descending branch of the evolutionary tracks. The nebular mass of the optically thick objects is closely related to the nebular radius, and PN with nebular masses over one solar mass are observed.

  18. Long Term Cloud Property Datasets From MODIS and AVHRR Using the CERES Cloud Algorithm

    Science.gov (United States)

    Minnis, Patrick; Bedka, Kristopher M.; Doelling, David R.; Sun-Mack, Sunny; Yost, Christopher R.; Trepte, Qing Z.; Bedka, Sarah T.; Palikonda, Rabindra; Scarino, Benjamin R.; Chen, Yan

    2015-01-01

    Cloud properties play a critical role in climate change. Monitoring cloud properties over long time periods is needed to detect changes and to validate and constrain models. The Clouds and the Earth's Radiant Energy System (CERES) project has developed several cloud datasets from Aqua and Terra MODIS data to better interpret broadband radiation measurements and improve understanding of the role of clouds in the radiation budget. The algorithms applied to MODIS data have been adapted to utilize various combinations of channels on the Advanced Very High Resolution Radiometer (AVHRR) on the long-term series of NOAA and MetOp satellites to provide a new cloud climate data record. These datasets can be useful for a variety of studies. This paper presents results of the MODIS and AVHRR analyses covering the period from 1980 to 2014. Validation and comparisons with other datasets are also given.

  19. Design Thinking and Cloud Manufacturing: A Study of Cloud Model Sharing Platform Based on Separated Data Log

    Directory of Open Access Journals (Sweden)

    Zhe Wei

    2013-01-01

    Full Text Available To solve the product-data consistency problem caused by portable systems that cannot update product data in real time in a mobile environment under the mass-customization production mode, a new log-based optimistic replication method for product data is presented. This paper focuses on the design thinking provider, probing into a manufacturing-resource design thinking cloud platform based on manufacturing resource-locating technologies, and also discusses several application scenarios of cloud locating technologies in the manufacturing environment. The actual demand of manufacturing creates a new mode that is service-oriented, highly efficient and low in consumption. Finally, this differs from the crowd-sourcing application model of Local-Motors. The sharing platform operator is responsible for a master plan for the platform, proposing an open interface standard and establishing a service operation mode.

  20. Aircraft profile measurements of 18O/16O and D/H isotope ratios of cloud condensate and water vapor constrain precipitation efficiency and entrainment rates in tropical clouds

    Science.gov (United States)

    Noone, D. C.; Raudzens Bailey, A.; Toohey, D. W.; Twohy, C. H.; Heymsfield, A.; Rella, C.; Van Pelt, A. D.

    2011-12-01

    Convective clouds play a significant role in the moisture and heat balance of the tropics. The dynamics of organized and isolated convection are a function of the background thermodynamic profile and wind shear, buoyancy sources near the surface and the latent heating inside convective updrafts. The stable oxygen and hydrogen isotope ratios in water vapor and condensate can be used to identify dominant moisture exchanges and aspects of the cloud microphysics that are otherwise difficult to observe. Both the precipitation efficiency and the dilution of cloud updrafts by entrainment can be estimated since the isotopic composition outside the plume is distinct from inside. Measurements of the 18O/16O and D/H isotope ratios were made in July 2011 on 13 research flights of the NCAR C130 aircraft during the ICE-T (Ice in Clouds Experiment - Tropical) field campaign near St Croix. Measurements were made using an instrument based on the Picarro Wavelength-Scanned Cavity Ring-Down platform that includes a number of optical, hardware and software modifications to allow measurements to be made at 5 Hz for deployment on aircraft. The measurement system was optimized to make precise measurements of the isotope ratio of liquid and ice cloud condensate by coupling the gas analyzer to the NCAR Counterflow Virtual Impactor inlet. The inlet system provides a particle enhancement while rejecting vapor. Sample air is vigorously heated before flowing into the gas phase analyzer. We present statistics that demonstrate the performance and calibration of the instrument. Measured profiles show that environmental air exhibits significant layering showing controls from boundary layer processes, large scale horizontal advection and regional subsidence. Condensate in clouds is consistent with generally low precipitation efficiency, although there is significant variability in the isotope ratios suggesting heterogeneity within plumes and the stochastic nature of detrainment processes

  1. A Stochastic Framework for Modeling the Population Dynamics of Convective Clouds

    Science.gov (United States)

    Hagos, Samson; Feng, Zhe; Plant, Robert S.; Houze, Robert A.; Xiao, Heng

    2018-02-01

    A stochastic prognostic framework for modeling the population dynamics of convective clouds and representing them in climate models is proposed. The framework follows the nonequilibrium statistical mechanical approach to constructing a master equation for representing the evolution of the number of convective cells of a specific size and their associated cloud-base mass flux, given a large-scale forcing. In this framework, referred to as STOchastic framework for Modeling Population dynamics of convective clouds (STOMP), the evolution of convective cell size is predicted from three key characteristics of convective cells: (i) the probability of growth, (ii) the probability of decay, and (iii) the cloud-base mass flux. STOMP models are constructed and evaluated against CPOL radar observations at Darwin and convection-permitting model (CPM) simulations. Multiple models are constructed under various assumptions regarding these three key parameters and the realism of these models is evaluated. It is shown that in a model where convective plumes prefer to aggregate spatially and the cloud-base mass flux is a nonlinear function of convective cell area, the mass flux manifests a recharge-discharge behavior under steady forcing. Such a model also produces observed behavior of convective cell populations and CPM-simulated cloud-base mass flux variability under diurnally varying forcing. In addition to its use in developing understanding of convection processes and the controls on convective cell size distributions, this modeling framework is also designed to serve as a nonequilibrium closure formulation for spectral mass flux parameterizations.
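    The population dynamics described above can be illustrated with a minimal birth-death sketch: each cell carries an area that grows or decays stochastically, the large-scale forcing injects new cells, and the cloud-base mass flux depends nonlinearly on cell area. The probabilities, the birth rate and the power-law exponent below are illustrative assumptions, not the values or the exact master-equation structure used in STOMP.

```python
import random

def step(cells, p_grow=0.3, p_decay=0.3, birth_rate=5):
    """Advance the cell population one time step.

    Each cell grows or shrinks by one area unit with the given
    probabilities; a cell whose area reaches zero dies. A fixed
    large-scale forcing triggers `birth_rate` new unit-area cells.
    """
    new_cells = []
    for area in cells:
        r = random.random()
        if r < p_grow:
            area += 1
        elif r < p_grow + p_decay:
            area -= 1
        if area > 0:
            new_cells.append(area)
    new_cells.extend([1] * birth_rate)
    return new_cells

def cloud_base_mass_flux(cells, exponent=1.5):
    """Assumed power-law dependence of mass flux on cell area."""
    return sum(area ** exponent for area in cells)

random.seed(0)
cells = [1] * 10
fluxes = []
for _ in range(200):
    cells = step(cells)
    fluxes.append(cloud_base_mass_flux(cells))
```

With the nonlinear exponent, a few large cells dominate the total flux, which is the kind of behavior the abstract associates with recharge-discharge cycles under steady forcing.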

  2. Analysis and Research on Spatial Data Storage Model Based on Cloud Computing Platform

    Science.gov (United States)

    Hu, Yong

    2017-12-01

    In this paper, the data processing and storage characteristics of cloud computing are analyzed and studied. On this basis, a cloud computing data storage model based on a BP neural network is proposed. In this data storage model, a server cluster is chosen according to the attributes of the data, yielding a spatial data storage model with a load-balancing function that is feasible and offers practical advantages.

  3. flexCloud: Deployment of the FLEXPART Atmospheric Transport Model as a Cloud SaaS Environment

    Science.gov (United States)

    Morton, Don; Arnold, Dèlia

    2014-05-01

    FLEXPART (FLEXible PARTicle dispersion model) is a Lagrangian transport and dispersion model used by a growing international community. We have used it to simulate and forecast the atmospheric transport of wildfire smoke, volcanic ash and radionuclides. Additionally, FLEXPART may be run in backwards mode to provide information for the determination of emission sources such as nuclear emissions and greenhouse gases. This open source software is distributed in source code form, and has several compiler and library dependencies that users need to address. Although well-documented, getting it compiled, set up, running, and post-processed is often tedious, making it difficult for the inexperienced user. Our interest is in moving scientific modeling and simulation activities from site-specific clusters and supercomputers to a cloud model-as-a-service paradigm. Choosing FLEXPART for our prototyping, our vision is to construct customised IaaS images containing fully-compiled and configured FLEXPART codes, including pre-processing, execution and postprocessing components. In addition, with the inclusion of a small web server in the image, we introduce a web-accessible graphical user interface that drives the system. A further initiative being pursued is the deployment of multiple, simultaneous FLEXPART ensembles in the cloud. A single front-end web interface is used to define the ensemble members, and separate cloud instances are launched, on-demand, to run the individual models and to conglomerate the outputs into a unified display. The outcome of this work is a Software as a Service (SaaS) deployment whereby the details of the underlying modeling systems are hidden, allowing modelers to perform their science activities without the burden of considering implementation details.

  4. Tropical Oceanic Precipitation Processes Over Warm Pool: 2D and 3D Cloud Resolving Model Simulations

    Science.gov (United States)

    Tao, W.-K.; Johnson, D.; Simpson, J.; Einaudi, Franco (Technical Monitor)

    2001-01-01

    Rainfall is a key link in the hydrologic cycle as well as the primary heat source for the atmosphere. The vertical distribution of convective latent-heat release modulates the large-scale circulations of the tropics. Furthermore, changes in the moisture distribution at middle and upper levels of the troposphere can affect cloud distributions and cloud liquid water and ice contents. How the incoming solar and outgoing longwave radiation respond to these changes in clouds is a major factor in assessing climate change. Present large-scale weather and climate models simulate these processes only crudely, reducing confidence in their predictions on both global and regional scales. One of the most promising methods to test physical parameterizations used in General Circulation Models (GCMs) and climate models is to use field observations together with Cloud Resolving Models (CRMs). The CRMs use more sophisticated and physically realistic parameterizations of cloud microphysical processes, and allow for their complex interactions with solar and infrared radiative transfer processes. The CRMs can resolve reasonably well the evolution, structure, and life cycles of individual clouds and cloud systems. The major objective of this paper is to investigate the latent heating, moisture and momentum budgets associated with several convective systems developed during the TOGA COARE IFA - westerly wind burst event (late December, 1992). The tool for this study is the Goddard Cumulus Ensemble (GCE) model, which includes a 3-class ice-phase microphysics scheme.

  5. Trust-Enhanced Cloud Service Selection Model Based on QoS Analysis.

    Science.gov (United States)

    Pan, Yuchen; Ding, Shuai; Fan, Wenjuan; Li, Jing; Yang, Shanlin

    2015-01-01

    Cloud computing technology plays a very important role in many areas, such as the construction and development of the smart city. Meanwhile, numerous cloud services appear on cloud-based platforms. How to select trustworthy cloud services therefore remains a significant problem on such platforms, and has been extensively investigated owing to the ever-growing needs of users. However, trust relationships in social networks have not been taken into account in existing methods of cloud service selection and recommendation. In this paper, we propose a cloud service selection model based on trust-enhanced similarity. Firstly, the direct, indirect, and hybrid trust degrees are measured based on the interaction frequencies among users. Secondly, we estimate the overall similarity by combining the experience usability, measured with Jaccard's Coefficient, and the numerical distance, computed with the Pearson Correlation Coefficient. Then, by using the trust degree to modify the basic similarity, we obtain a trust-enhanced similarity. Finally, we utilize the trust-enhanced similarity to find similar trusted neighbors and predict the missing QoS values as the basis of cloud service selection and recommendation. The experimental results show that our approach obtains optimal results via adjusting parameters and exhibits high effectiveness. The cloud service rankings produced by our model also have better QoS properties than those of other methods in the comparison experiments.
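    The similarity combination described above can be sketched as follows. The equal weighting of the Jaccard and Pearson terms and the multiplicative use of the trust degree are assumptions for illustration; the paper's exact formulas and the service names `s1`-`s4` are not taken from the source.

```python
import math

def jaccard(a, b):
    """Experience usability: overlap of the services two users have rated."""
    sa, sb = set(a), set(b)
    return len(sa & sb) / len(sa | sb) if sa | sb else 0.0

def pearson(x, y):
    """Numerical closeness of co-rated QoS values."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    num = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    den = math.sqrt(sum((xi - mx) ** 2 for xi in x) *
                    sum((yi - my) ** 2 for yi in y))
    return num / den if den else 0.0

def trust_enhanced_similarity(ratings_u, ratings_v, trust_uv, alpha=0.5):
    """Basic similarity (Jaccard + Pearson blend) scaled by the trust degree."""
    common = sorted(set(ratings_u) & set(ratings_v))
    if not common:
        return 0.0
    basic = (alpha * jaccard(ratings_u, ratings_v) +
             (1 - alpha) * pearson([ratings_u[s] for s in common],
                                   [ratings_v[s] for s in common]))
    return trust_uv * basic  # trust in [0, 1] modifies the basic similarity

u = {"s1": 0.9, "s2": 0.8, "s3": 0.4}
v = {"s1": 0.85, "s2": 0.75, "s4": 0.6}
sim = trust_enhanced_similarity(u, v, trust_uv=0.8)
```

Neighbors with the highest trust-enhanced similarity would then supply the missing QoS values, e.g. as a similarity-weighted average of their observed ratings.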

  6. Sensitivity of aerosol indirect forcing and autoconversion to cloud droplet parameterization: an assessment with the NASA Global Modeling Initiative.

    Science.gov (United States)

    Sotiropoulou, R. P.; Meshkhidze, N.; Nenes, A.

    2006-12-01

    The aerosol indirect forcing is one of the largest sources of uncertainty in assessments of anthropogenic climate change [IPCC, 2001]. Much of this uncertainty arises from the approach used for linking cloud droplet number concentration (CDNC) to precursor aerosol. Global Climate Models (GCMs) use a wide range of cloud droplet activation mechanisms ranging from empirical [Boucher and Lohmann, 1995] to detailed physically based formulations [e.g., Abdul-Razzak and Ghan, 2000; Fountoukis and Nenes, 2005]. The objective of this study is to assess the uncertainties in indirect forcing and autoconversion of cloud water to rain caused by the application of different cloud droplet parameterization mechanisms; this is an important step towards constraining the aerosol indirect effects (AIE). Here we estimate the uncertainty in indirect forcing and autoconversion rate using the NASA Global Model Initiative (GMI). The GMI allows easy interchange of meteorological fields, chemical mechanisms and the aerosol microphysical packages. Therefore, it is an ideal tool for assessing the effect of different parameters on aerosol indirect forcing. The aerosol module includes primary emissions, chemical production of sulfate in clear air and in-cloud aqueous phase, gravitational sedimentation, dry deposition, wet scavenging in and below clouds, and hygroscopic growth. Model inputs include SO2 (fossil fuel and natural), black carbon (BC), organic carbon (OC), mineral dust and sea salt. The meteorological data used in this work were taken from the NASA Data Assimilation Office (DAO) and two different GCMs: the NASA GEOS4 finite volume GCM (FVGCM) and the Goddard Institute for Space Studies version II' (GISS II') GCM. Simulations were carried out for "present day" and "preindustrial" emissions using different meteorological fields (i.e. DAO, FVGCM, GISS II'); cloud droplet number concentration is computed from the correlations of Boucher and Lohmann [1995], Abdul-Razzak and Ghan [2000

  7. Cloud processing of gases and aerosols in the Community Multiscale Air Quality (CMAQ) model: Impacts of extended chemistry

    Science.gov (United States)

    Clouds and fogs can significantly impact the concentration and distribution of atmospheric gases and aerosols through chemistry, scavenging, and transport. This presentation summarizes the representation of cloud processes in the Community Multiscale Air Quality (CMAQ) modeling ...

  8. Commitment Versus Persuasion in the Three-Party Constrained Voter Model

    Science.gov (United States)

    Mobilia, Mauro

    2013-04-01

    In the framework of the three-party constrained voter model, where voters of two radical parties (A and B) interact with "centrists" (C and Cζ), we study the competition between a persuasive majority and a committed minority. In this model, A's and B's are incompatible voters that can convince centrists or be swayed by them. Here, radical voters are more persuasive than centrists, whose sub-population comprises susceptible agents C and a fraction ζ of centrist zealots Cζ. Whereas C's may adopt the opinions A and B with respective rates 1+δA and 1+δB (with δA ≥ δB > 0), Cζ's are committed individuals that always remain centrists. Furthermore, A and B voters can become (susceptible) centrists C with a rate 1. The resulting competition between commitment and persuasion is studied in the mean field limit and for a finite population on a complete graph. At mean field level, there is a continuous transition at a critical zealot fraction Δc from a coexistence phase when ζ < Δc to a phase in which centrism prevails. Below Δc, consensus is reached much more slowly: persuasive voters and centrists coexist when δA > δB, whereas all species coexist when δA = δB. When ζ ≥ Δc and the initial density of centrists is low, one finds τ ~ ln N (when N ≫ 1). Our analytical findings are corroborated by stochastic simulations.
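    The rate structure above suggests a simple mean-field sketch. The equations below assume that susceptible centrists (density c) adopt A or B at rates 1+δA and 1+δB, and that a radical reverts to a susceptible centrist at rate 1 upon meeting any centrist, susceptible or zealot (combined density c + ζ); the paper's exact mean-field equations may differ, so treat this as an illustration of the competition, not a reproduction of the model.

```python
def evolve(a, b, zeta, delta_a, delta_b, dt=0.01, steps=20000):
    """Forward-Euler integration of the assumed mean-field equations.

    a, b  : densities of radical voters A and B
    zeta  : fixed density of centrist zealots
    c     : susceptible centrist density, from conservation a+b+c+zeta = 1
    """
    for _ in range(steps):
        c = 1.0 - zeta - a - b
        da = a * ((1.0 + delta_a) * c - (c + zeta)) * dt  # gain vs. loss
        db = b * ((1.0 + delta_b) * c - (c + zeta)) * dt
        a, b = a + da, b + db
    return a, b, 1.0 - zeta - a - b

# With a large zealot fraction, both radical parties should die out
# and centrism should prevail under these assumed rates.
a, b, c = evolve(a=0.35, b=0.35, zeta=0.3, delta_a=0.1, delta_b=0.05)
```

Under these assumed rates, the net growth of party A is a(δA·c − ζ), so a sufficiently large zealot fraction ζ always drives the radicals extinct, consistent with the centrist-dominated phase described in the abstract.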

  9. Constrained structural dynamic model verification using free vehicle suspension testing methods

    Science.gov (United States)

    Blair, Mark A.; Vadlamudi, Nagarjuna

    1988-01-01

    Verification of the validity of a spacecraft's structural dynamic math model used in computing ascent (or in the case of the STS, ascent and landing) loads is mandatory. This verification process requires that tests be carried out on both the payload and the math model such that the ensuing correlation may validate the flight loads calculations. To properly achieve this goal, the tests should be performed with the payload in the launch constraint (i.e., held fixed at only the payload-booster interface DOFs). The practical achievement of this set of boundary conditions is quite difficult, especially with larger payloads, such as the 12-ton Hubble Space Telescope. The development of equations in the paper will show that by exciting the payload at its booster interface while it is suspended in the 'free-free' state, a set of transfer functions can be produced that will have minima that are directly related to the fundamental modes of the payload when it is constrained in its launch configuration.

  10. Constraining models of f(R) gravity with Planck and WiggleZ power spectrum data

    International Nuclear Information System (INIS)

    Dossett, Jason; Parkinson, David; Hu, Bin

    2014-01-01

    In order to explain cosmic acceleration without invoking ''dark'' physics, we consider f(R) modified gravity models, which replace the standard Einstein-Hilbert action in General Relativity with a higher derivative theory. We use data from the WiggleZ Dark Energy survey to probe the formation of structure on large scales which can place tight constraints on these models. We combine the large-scale structure data with measurements of the cosmic microwave background from the Planck surveyor. After parameterizing the modification of the action using the Compton wavelength parameter B0, we constrain this parameter using ISiTGR, assuming an initial non-informative log prior probability distribution of this cross-over scale. We find that the addition of the WiggleZ power spectrum provides the tightest constraints to date on B0 by an order of magnitude, giving log10(B0) < −4.07 at the 95% confidence limit. Finally, we test whether the effect of adding the lensing amplitude ALens and the sum of the neutrino masses ∑mν is able to reconcile current tensions present in these parameters, but find f(R) gravity an inadequate explanation.

  11. Ice loading model for Glacial Isostatic Adjustment in the Barents Sea constrained by GRACE gravity observations

    Science.gov (United States)

    Root, Bart; Tarasov, Lev; van der Wal, Wouter

    2014-05-01

    The global ice budget is still under discussion because the observed 120-130 m eustatic sea level equivalent since the Last Glacial Maximum (LGM) cannot be explained by the current knowledge of land-ice melt after the LGM. One possible location for the missing ice is the Barents Sea Region, which was completely covered with ice during the LGM. This is deduced from relative sea level observations on Svalbard, Novaya Zemlya and the North coast of Scandinavia. However, there are no observations in the middle of the Barents Sea that capture the post-glacial uplift. With increased precision and longer time series of monthly gravity observations from the GRACE satellite mission, it is possible to constrain Glacial Isostatic Adjustment in the center of the Barents Sea. This study investigates the extra constraint provided by GRACE data for modeling the past ice geometry in the Barents Sea. We use CSR release 5 data from February 2003 to July 2013. The GRACE data is corrected for the past 10 years of secular decline of glacier ice on Svalbard, Novaya Zemlya and Frans Joseph Land. With numerical GIA models for a radially symmetric Earth, we model the expected gravity changes and compare these with the GRACE observations after smoothing with a 250 km Gaussian filter. The comparisons show that for the viscosity profile VM5a, ICE-5G has too strong a gravity signal compared to GRACE. The regional calibrated ice sheet model (GLAC) of Tarasov appears to fit the amplitude of the GRACE signal. However, the GRACE data are very sensitive to the ice-melt correction, especially for Novaya Zemlya. Furthermore, the ice mass should be more concentrated toward the middle of the Barents Sea. Alternative viscosity models confirm these conclusions.
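    The 250 km Gaussian smoothing mentioned above is conventionally applied in the spherical-harmonic domain using per-degree weights computed with the recursion of Jekeli (1981), in the form popularized for GRACE work by Wahr et al. (1998). The sketch below implements that standard recursion; it is an illustrative implementation, not the study's actual processing code.

```python
import math

def gaussian_weights(radius_km, l_max, earth_radius_km=6371.0):
    """Per-degree weights W_l for a spherical Gaussian filter of the
    given averaging radius (half-width), computed by forward recursion.
    """
    b = math.log(2.0) / (1.0 - math.cos(radius_km / earth_radius_km))
    w = [1.0,                                                # W_0
         (1.0 + math.exp(-2.0 * b)) / (1.0 - math.exp(-2.0 * b)) - 1.0 / b]
    for l in range(1, l_max):
        # Forward recursion; numerically unstable at high degree,
        # so keep l_max modest for small smoothing radii.
        w.append(-(2.0 * l + 1.0) / b * w[l] + w[l - 1])
    return w

# Weights for a 250 km filter up to spherical-harmonic degree 60:
w = gaussian_weights(250.0, 60)
```

Each Stokes coefficient of degree l is simply multiplied by W_l, so the weights decay smoothly from 1 at degree 0, attenuating the short-wavelength noise in the monthly GRACE fields.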

  12. A FAST METHOD FOR MEASURING THE SIMILARITY BETWEEN 3D MODEL AND 3D POINT CLOUD

    Directory of Open Access Journals (Sweden)

    Z. Zhang

    2016-06-01

    Full Text Available This paper proposes a fast method for measuring the partial Similarity between a 3D Model and a 3D point Cloud (SimMC). Measuring SimMC is crucial for many point-cloud-related applications such as 3D object retrieval and inverse procedural modelling. In our proposed method, the surface area of the model and the Distance from Model to point Cloud (DistMC) are exploited as measurements to calculate SimMC. Here, DistMC is defined as the weighted average of the distances between points sampled from the model and the point cloud. Similarly, the Distance from point Cloud to Model (DistCM) is defined as the average of the distances between points in the point cloud and the model. In order to reduce the huge computational burden brought by the calculation of DistCM in some traditional methods, we define SimMC as the ratio of the weighted surface area of the model to DistMC. Compared to traditional SimMC measuring methods, which are only able to measure global similarity, our method is capable of measuring partial similarity by employing a distance-weighted strategy. Moreover, our method is faster than other partial similarity assessment methods. We demonstrate the superiority of our method on both synthetic data and laser scanning data.
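    The two distances and the ratio defining SimMC can be sketched with brute-force nearest-neighbour search. Uniform weights stand in for the paper's distance-weighted strategy, and the synthetic data are invented for illustration; the paper's exact weighting and sampling schemes are not reproduced here.

```python
import numpy as np

def dist_cloud_to_model(cloud, model_samples):
    """DistCM: average distance from each cloud point to its nearest
    point sampled from the model surface."""
    d = np.linalg.norm(cloud[:, None, :] - model_samples[None, :, :], axis=2)
    return d.min(axis=1).mean()

def dist_model_to_cloud(model_samples, cloud, weights=None):
    """DistMC: weighted distance from model samples to the point cloud;
    uniform weights are an assumption in this sketch."""
    d = np.linalg.norm(model_samples[:, None, :] - cloud[None, :, :], axis=2)
    nearest = d.min(axis=1)
    if weights is None:
        weights = np.full(len(model_samples), 1.0 / len(model_samples))
    return float(nearest @ weights)

def sim_mc(model_area, model_samples, cloud):
    """SimMC as the ratio of model surface area to DistMC."""
    return model_area / dist_model_to_cloud(model_samples, cloud)

rng = np.random.default_rng(0)
samples = rng.uniform(0, 1, size=(100, 3))          # points sampled on a "model"
cloud = samples + rng.normal(0, 0.01, size=(100, 3))  # noisy scan of the same shape
```

A cloud that closely hugs the model surface yields a small DistMC and hence a large SimMC, while a distant or mismatched cloud yields a small SimMC.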

  13. Constraining the parameters of the EAP sea ice rheology from satellite observations and discrete element model

    Science.gov (United States)

    Tsamados, Michel; Heorton, Harry; Feltham, Daniel; Muir, Alan; Baker, Steven

    2016-04-01

    The new elastic-plastic anisotropic (EAP) rheology that explicitly accounts for the sub-continuum anisotropy of the sea ice cover has been implemented into the latest version of the Los Alamos sea ice model CICE. The EAP rheology is widely used in the climate modeling scientific community (i.e. CPOM stand alone, RASM high resolution regional ice-ocean model, MetOffice fully coupled model). Early results from sensitivity studies (Tsamados et al, 2013) have shown the potential for an improved representation of the observed main sea ice characteristics with a substantial change of the spatial distribution of ice thickness and ice drift relative to model runs with the reference visco-plastic (VP) rheology. The model contains one new prognostic variable, the local structure tensor, which quantifies the degree of anisotropy of the sea ice, and two parameters that set the time scale of the evolution of this tensor. Observations from high resolution satellite SAR imagery as well as numerical simulation results from a discrete element model (DEM, see Wilchinsky, 2010) have shown that individual ice floes can organize under external wind and thermal forcing to form an emergent isotropic sea ice state (via thermodynamic healing, thermal cracking) or an anisotropic sea ice state (via Coulombic failure lines due to shear rupture). In this work we use for the first time in the context of sea ice research a mathematical metric, the Tensorial Minkowski functionals (Schroeder-Turk, 2010), to measure quantitatively the degree of anisotropy and alignment of the sea ice at different scales. We apply the methodology on the GlobICE Envisat satellite deformation product (www.globice.info), on a prototype modified version of GlobICE applied on Sentinel-1 Synthetic Aperture Radar (SAR) imagery and on the DEM ice floe aggregates. By comparing these independent measurements of the sea ice anisotropy as well as its temporal evolution against the EAP model we are able to constrain the

  14. Clouds-radiation interactions in a general circulation model - Impact upon the planetary radiation balance

    Science.gov (United States)

    Smith, Laura D.; Vonder Haar, Thomas H.

    1991-01-01

    Simultaneously conducted observations of the earth radiation budget and the cloud amount estimates, taken during the June 1979 - May 1980 Nimbus 7 mission, were used to show interactions between the cloud amount and radiation and to verify a long-term climate simulation obtained with the latest version of the NCAR Community Climate Model (CCM). The parameterization of the radiative, dynamic, and thermodynamic processes produced mean radiation and cloud quantities that were in reasonable agreement with satellite observations, but at the expense of simulating their short-term fluctuations. The results support the assumption that the inclusion of the cloud liquid water (ice) variable would be the best means to reduce the blinking of clouds in the NCAR CCM.

  15. Applications integration in a hybrid cloud computing environment: modelling and platform

    Science.gov (United States)

    Li, Qing; Wang, Ze-yuan; Li, Wei-hua; Li, Jun; Wang, Cheng; Du, Rui-yang

    2013-08-01

    With the development of application service providers and cloud computing, more and more small- and medium-sized business enterprises use software services and even infrastructure services provided by professional information service companies to replace all or part of their information systems (ISs). These information service companies provide applications, such as data storage, computing processes, document sharing and even management information system services as public resources to support the business process management of their customers. However, no cloud computing service vendor can satisfy the full functional IS requirements of an enterprise. As a result, enterprises often have to simultaneously use systems distributed in different clouds and their intra-enterprise ISs. Thus, this article presents a framework to integrate applications deployed in public clouds and intra-enterprise ISs. A run-time platform is developed and a cross-computing-environment process modelling technique is also developed to improve the feasibility of ISs under hybrid cloud computing environments.

  16. Reliability Evaluation for the Surface to Air Missile Weapon Based on Cloud Model

    Directory of Open Access Journals (Sweden)

    Deng Jianjun

    2015-01-01

    Full Text Available Fuzziness and randomness are integrated by using digital characteristics such as expected value (Ex), entropy (En) and hyper-entropy (He). A cloud model adapted to reliability evaluation of the surface-to-air missile weapon is put forward. The cloud scale of the qualitative evaluation is constructed, and the quantitative and qualitative variables in the system reliability evaluation are placed in correspondence. The practical calculation results show that analyzing the reliability of the surface-to-air missile weapon in this way is more effective, and that the model expressed by cloud theory is more consistent with the human style of uncertain thinking.
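    The three digital characteristics named above (expected value, entropy, hyper-entropy) are typically turned into "cloud drops" with the standard forward normal cloud generator, sketched below. The specific reliability-grade values (Ex = 0.8, En = 0.05, He = 0.005) are invented for illustration and are not taken from the paper.

```python
import math
import random

def forward_normal_cloud(ex, en, he, n, seed=0):
    """Generate n cloud drops (x_i, mu_i) for a qualitative concept
    characterized by expected value Ex, entropy En and hyper-entropy He.
    """
    rng = random.Random(seed)
    drops = []
    for _ in range(n):
        en_i = rng.gauss(en, he)          # entropy perturbed by He (randomness)
        x = rng.gauss(ex, abs(en_i))      # drop position around Ex
        mu = math.exp(-(x - ex) ** 2 / (2.0 * en_i ** 2))  # certainty degree
        drops.append((x, mu))
    return drops

# Hypothetical reliability grade expressed as a cloud of 1000 drops:
drops = forward_normal_cloud(ex=0.8, en=0.05, he=0.005, n=1000)
```

The hyper-entropy He controls how much the drops scatter around the ideal bell curve, which is how the model expresses the fuzziness of a qualitative reliability grade on a quantitative scale.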

  17. Constraining soil C cycling with strategic, adaptive action for data and model reporting

    Science.gov (United States)

    Harden, J. W.; Swanston, C.; Hugelius, G.

    2015-12-01

    Regional to global carbon assessments include a variety of models, data sets, and conceptual structures. This includes strategies for representing the role and capacity of soils to sequester, release, and store carbon. Traditionally, many soil carbon data sets emerged from agricultural missions focused on mapping and classifying soils to enhance and protect production of food and fiber. More recently, soil carbon assessments have allowed for more strategic measurement to address the functional and spatially explicit role that soils play in land-atmosphere carbon exchange. While soil data sets are increasingly inter-comparable and increasingly sampled to accommodate global assessments, soils remain poorly constrained or understood with regard to their role in spatio-temporal variations in carbon exchange. A more deliberate approach to rapid improvement in our understanding involves a community-based activity that embraces both a nimble data repository and a dynamic structure for prioritization. Data input and output can be transparent and retrievable as data-derived products, while also being subjected to rigorous queries for merging and harmonization into a searchable, comprehensive, transparent database. Meanwhile, adaptive action groups can prioritize data and modeling needs that emerge through workshops, meta-data analyses or model testing. Our continual renewal of priorities should address soil processes, mechanisms, and feedbacks that significantly influence global C budgets and/or significantly impact the needs and services of regional soil resources that are impacted by C management. In order to refine the International Soil Carbon Network, we welcome suggestions for such groups to be led on topics such as but not limited to manipulation experiments, extreme climate events, post-disaster C management, past climate-soil interactions, or water-soil-carbon linkages. We also welcome ideas for a business model that can foster and promote idea and data sharing.

  18. Evaluating Cloud and Precipitation Processes in Numerical Models using Current and Potential Future Satellite Missions

    Science.gov (United States)

    van den Heever, S. C.; Tao, W. K.; Skofronick Jackson, G.; Tanelli, S.; L'Ecuyer, T. S.; Petersen, W. A.; Kummerow, C. D.

    2015-12-01

    Cloud, aerosol and precipitation processes play a fundamental role in the water and energy cycle. It is critical to accurately represent these microphysical processes in numerical models if we are to better predict cloud and precipitation properties on weather through climate timescales. Much has been learned about cloud properties and precipitation characteristics from NASA satellite missions such as TRMM, CloudSat, and more recently GPM. Furthermore, data from these missions have been successfully utilized in evaluating the microphysical schemes in cloud-resolving models (CRMs) and global models. However, there are still many uncertainties associated with these microphysics schemes. These uncertainties can be attributed, at least in part, to the fact that microphysical processes cannot be directly observed or measured, but instead have to be inferred from those cloud properties that can be measured. Evaluation of microphysical parameterizations are becoming increasingly important as enhanced computational capabilities are facilitating the use of more sophisticated schemes in CRMs, and as future global models are being run on what has traditionally been regarded as cloud-resolving scales using CRM microphysical schemes. In this talk we will demonstrate how TRMM, CloudSat and GPM data have been used to evaluate different aspects of current CRM microphysical schemes, providing examples of where these approaches have been successful. We will also highlight CRM microphysical processes that have not been well evaluated and suggest approaches for addressing such issues. Finally, we will introduce a potential NASA satellite mission, the Cloud and Precipitation Processes Mission (CAPPM), which would facilitate the development and evaluation of different microphysical-dynamical feedbacks in numerical models.

  19. Introducing Subgrid-scale Cloud Feedbacks to Radiation for Regional Meteorological and Climate Modeling

    Science.gov (United States)

    Convection systems and associated cloudiness directly influence regional and local radiation budgets, and dynamics and thermodynamics through feedbacks. However, most subgrid-scale convective parameterizations in regional weather and climate models do not consider cumulus cloud ...

  20. PREVENTIVE SIGNATURE MODEL FOR SECURE CLOUD DEPLOYMENT THROUGH FUZZY DATA ARRAY COMPUTATION

    Directory of Open Access Journals (Sweden)

    R. Poorvadevi

    2017-01-01

    Full Text Available Cloud computing is a resource pool offering boundless services to its end users, who depend heavily on cloud service providers. The cloud provides service access across geographic locations in an efficient way. However, although it offers numerous services, client-end systems do not have adequate methods, security policies or other protocols for protecting cloud customers' secret-level transactions and other privacy-related information. The proposed model therefore provides a solution for securing cloud users' confidential data and application deployment, and for identifying the genuineness of the user, by applying a scheme referred to as fuzzy data array computation. Fuzzy data array computation provides an effective system, called the signature retrieval and evaluation system, through which customers' data can be safeguarded along with their applications. This signature system can be implemented in the cloud environment using the CloudSim 3.0 simulator tools. It facilitates security operations over the data centre and cloud vendor locations in an effective manner.

  1. Advancing cloud lifecycle representation in numerical models using innovative analysis methods that bridge ARM observations over a breadth of scales

    Energy Technology Data Exchange (ETDEWEB)

    Tselioudis, George [Columbia Univ., New York, NY (United States)

    2016-03-04

    From its location on the subtropics-midlatitude boundary, the Azores is influenced by both the subtropical high pressure and the midlatitude baroclinic storm regimes, and therefore experiences a wide range of cloud structures, from fair-weather scenes to stratocumulus sheets to deep convective systems. This project combined three types of data sets to study cloud variability in the Azores: a satellite analysis of cloud regimes, a reanalysis characterization of storminess, and a 19-month field campaign that occurred on Graciosa Island. Combined analysis of the three data sets provides a detailed picture of cloud variability and the respective dynamic influences, with emphasis on low clouds that constitute a major uncertainty source in climate model simulations. The satellite cloud regime analysis shows that the Azores cloud distribution is similar to the mean global distribution and can therefore be used to evaluate cloud simulation in global models. Regime analysis of low clouds shows that stratocumulus decks occur under the influence of the Azores high-pressure system, while shallow cumulus clouds are sustained by cold-air outbreaks, as revealed by their preference for post-frontal environments and northwesterly flows. An evaluation of CMIP5 climate model cloud regimes over the Azores shows that all models severely underpredict shallow cumulus clouds, while most models also underpredict the occurrence of stratocumulus cloud decks. It is demonstrated that carefully selected case studies can be related through regime analysis to climatological cloud distributions, and a methodology is suggested utilizing process-resolving model simulations of individual cases to better understand cloud-dynamics interactions and attempt to explain and correct climate model cloud deficiencies.

  2. Clouds in ECMWF's 30 km Resolution Global Atmospheric Forecast Model (TL639)

    Science.gov (United States)

    Cahalan, R. F.; Morcrette, J. J.

    1999-01-01

    Global models of the general circulation of the atmosphere resolve a wide range of length scales, and in particular cloud structures extend from planetary scales to the smallest scales resolvable, now down to 30 km in state-of-the-art models. Even the highest resolution models do not resolve small-scale cloud phenomena seen, for example, in Landsat and other high-resolution satellite images of clouds. Unresolved small-scale disturbances often grow into larger ones through non-linear processes that transfer energy upscale. Understanding upscale cascades is of crucial importance in predicting current weather, and in parameterizing cloud-radiative processes that control long term climate. Several movie animations provide examples of the temporal and spatial variation of cloud fields produced in 4-day runs of the forecast model at the European Centre for Medium-Range Weather Forecasts (ECMWF) in Reading, England, at particular times and locations of simultaneous measurement field campaigns. Model resolution is approximately 30 km horizontally (triangular truncation TL639) with 31 vertical levels from the surface to the stratosphere. The timestep of the model is about 10 minutes, but animation frames are 3 hours apart, at timesteps when the radiation is computed. The animations were prepared from an archive of several 4-day runs at the highest available model resolution, archived at ECMWF. Cloud, wind and temperature fields in an approximately 1000 km × 1000 km box were retrieved from the archive; approximately 60 MB Vis5d files were then prepared with the help of Graeme Kelly of ECMWF and compressed into MPEG files of less than 3 MB each. We discuss the interaction of clouds and radiation in the model, and compare the variability of cloud liquid as a function of scale to that seen in cloud observations made in intensive field campaigns. Comparison of high-resolution global runs to cloud-resolving models, and to lower resolution climate models is leading to better

  3. Comparison of three ice cloud optical schemes in climate simulations with community atmospheric model version 5

    Science.gov (United States)

    Zhao, Wenjie; Peng, Yiran; Wang, Bin; Yi, Bingqi; Lin, Yanluan; Li, Jiangnan

    2018-05-01

    A newly implemented Baum-Yang scheme for simulating ice cloud optical properties is compared with existing schemes (the Mitchell and Fu schemes) in a standalone radiative transfer model and in the global climate model (GCM) Community Atmospheric Model Version 5 (CAM5). This study systematically analyzes the effect of different ice cloud optical schemes on global radiation and climate by a series of simulations with a simplified standalone radiative transfer model, the atmospheric GCM CAM5, and a comprehensive coupled climate model. Results from the standalone radiative model show that the Baum-Yang scheme yields generally weaker effects of ice cloud on temperature profiles in both the shortwave and longwave spectra. CAM5 simulations indicate that the Baum-Yang scheme, in place of the Mitchell/Fu scheme, tends to cool the upper atmosphere and strengthen the thermodynamic instability in low and middle latitudes, which could intensify the Hadley circulation and dehydrate the subtropics. When CAM5 is coupled with a slab ocean model to include simplified air-sea interaction, the reduced downward longwave flux to the surface in the Baum-Yang scheme mitigates the ice-albedo feedback in the Arctic as well as the water vapor and cloud feedbacks in low and middle latitudes, resulting in an overall temperature decrease of 3.0/1.4 °C globally compared with the Mitchell/Fu schemes. The radiative effects and climate feedbacks of the three ice cloud optical schemes documented in this study can serve as a reference for future improvements to ice cloud simulation in CAM5.

  4. A Robust Multi-Scale Modeling System for the Study of Cloud and Precipitation Processes

    Science.gov (United States)

    Tao, Wei-Kuo

    2012-01-01

    During the past decade, numerical weather and global non-hydrostatic models have started using more complex microphysical schemes originally developed for high resolution cloud-resolving models (CRMs) with 1-2 km or less horizontal resolution. These microphysical schemes affect the dynamics through the release of latent heat (buoyancy loading and pressure gradients), the radiation through cloud coverage (the vertical distribution of cloud species), and surface processes through rainfall (both amount and intensity). Recently, several major improvements of ice microphysical processes (or schemes) have been developed for a cloud-resolving model (the Goddard Cumulus Ensemble, GCE, model) and a regional-scale model (the Weather Research and Forecasting, WRF, model). These improvements include an improved 3-ICE (cloud ice, snow and graupel) scheme (Lang et al. 2010); a 4-ICE (cloud ice, snow, graupel and hail) scheme; a spectral bin microphysics scheme; and two different two-moment microphysics schemes. The performance of these schemes has been evaluated using observational data from TRMM and other major field campaigns. In this talk, we will present high-resolution (1 km) GCE and WRF model simulations and compare the simulated results with observations from recent field campaigns [i.e., midlatitude continental spring season (MC3E; 2010), high latitude cold-season (C3VP, 2007; GCPEx, 2012), and tropical oceanic (TWP-ICE, 2006)].

  5. Research on Methods for Discovering and Selecting Cloud Infrastructure Services Based on Feature Modeling

    Directory of Open Access Journals (Sweden)

    Huamin Zhu

    2016-01-01

    Full Text Available Nowadays more and more cloud infrastructure service providers are offering large numbers of service instances which combine diversified resources, such as computing, storage, and network. However, for cloud infrastructure services, the lack of a description standard and of systematic discovery and selection methods has made it difficult for users to discover and choose services. First, considering the highly configurable properties of a cloud infrastructure service, the feature model method is used to describe such a service. Second, based on this description, a systematic discovery and selection method for cloud infrastructure services is proposed. The automatic analysis techniques of the feature model are introduced to verify the model’s validity and to perform the matching of the service and demand models. Finally, we determine the critical decision metrics and their corresponding measurement methods for cloud infrastructure services, where subjective and objective weighting results are combined to determine the weights of the decision metrics. The best matching instances from various providers are then ranked by their comprehensive evaluations. Experimental results show that the proposed methods can effectively improve the accuracy and efficiency of cloud infrastructure service discovery and selection.
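A minimal sketch of the two-stage pipeline described above: feature-based matching of service instances against a demand model, then ranking by weighted decision metrics. All instance names, feature values, and weights below are hypothetical, and this simple filter-and-score stands in for the paper's full feature-model analysis and combined subjective/objective weighting method:

```python
# Stage 1: filter provider instances whose features satisfy the demand model.
# Stage 2: rank survivors with decision-metric weights that average a
# subjective (user-assigned) and an objective (data-derived) weighting.

instances = {
    "provider-A.small":  {"vcpus": 2, "ram_gb": 4,  "price": 0.05, "uptime": 0.995},
    "provider-B.medium": {"vcpus": 4, "ram_gb": 8,  "price": 0.11, "uptime": 0.999},
    "provider-C.large":  {"vcpus": 8, "ram_gb": 16, "price": 0.30, "uptime": 0.990},
}
demand = {"vcpus": 4, "ram_gb": 8}   # required features (must be met or exceeded)

matched = {name: f for name, f in instances.items()
           if all(f[k] >= v for k, v in demand.items())}

subj = {"price": 0.7, "uptime": 0.3}            # hypothetical subjective weights
obj = {"price": 0.5, "uptime": 0.5}             # hypothetical objective weights
weights = {k: (subj[k] + obj[k]) / 2 for k in subj}

def score(f):
    # lower price is better (scaled against the most expensive offer),
    # higher uptime is better
    return weights["price"] * (1 - f["price"] / 0.30) + weights["uptime"] * f["uptime"]

ranking = sorted(matched, key=lambda n: score(matched[n]), reverse=True)
print(ranking)
```

Here the small instance fails the feature match, and the medium instance outranks the large one because the price metric dominates the combined weights.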

  6. Multilayer Perceptron Neural Networks Model for Meteosat Second Generation SEVIRI Daytime Cloud Masking

    Directory of Open Access Journals (Sweden)

    Alireza Taravat

    2015-02-01

    Full Text Available A multilayer perceptron neural network cloud mask for Meteosat Second Generation SEVIRI (Spinning Enhanced Visible and Infrared Imager) images is introduced and evaluated. The model is trained for cloud detection on MSG SEVIRI daytime data. It consists of a multilayer perceptron with one hidden sigmoid layer, trained with the error back-propagation algorithm. The model is fed by six bands of MSG data (0.6, 0.8, 1.6, 3.9, 6.2 and 10.8 μm) and uses 10 hidden nodes. The multilayer perceptron achieves a cloud detection accuracy of 88.96% when trained to map two predefined values that classify cloud and clear sky. The network was further evaluated using sixty MSG images taken on different dates. The network detected not only bright thick clouds but also thin or less bright clouds. The analysis demonstrated the feasibility of using machine learning models for cloud detection in MSG SEVIRI imagery.
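The architecture described above (six spectral inputs, one hidden sigmoid layer of 10 nodes, error back-propagation to a binary cloud/clear target) can be sketched in a few lines of NumPy. The training data here are a synthetic stand-in, not actual SEVIRI radiances, and the hyperparameters are illustrative only:

```python
import numpy as np

rng = np.random.default_rng(0)
sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))

# Synthetic stand-in for six MSG SEVIRI bands; the toy "cloudy" label depends
# on a bright visible channel relative to a cold IR channel.
n = 400
X = rng.normal(size=(n, 6))
y = (X[:, 0] - X[:, 5] > 0).astype(float).reshape(-1, 1)

# One hidden sigmoid layer with 10 nodes, trained by error back-propagation.
W1 = rng.normal(scale=0.5, size=(6, 10)); b1 = np.zeros(10)
W2 = rng.normal(scale=0.5, size=(10, 1)); b2 = np.zeros(1)
lr = 0.5
for _ in range(2000):
    h = sigmoid(X @ W1 + b1)      # hidden activations
    p = sigmoid(h @ W2 + b2)      # predicted cloud probability
    dp = (p - y) / n              # cross-entropy gradient w.r.t. output logits
    dW2 = h.T @ dp; db2 = dp.sum(0)
    dh = dp @ W2.T * h * (1 - h)  # back-propagate through the sigmoid layer
    dW1 = X.T @ dh; db1 = dh.sum(0)
    W2 -= lr * dW2; b2 -= lr * db2
    W1 -= lr * dW1; b1 -= lr * db1

h = sigmoid(X @ W1 + b1)
p = sigmoid(h @ W2 + b2)
acc = ((p > 0.5) == (y > 0.5)).mean()
print(f"training accuracy: {acc:.2%}")
```

A real cloud mask would of course train on labelled SEVIRI pixels and validate on held-out scenes, as the paper does with sixty independent images.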

  7. Providing a New Model for Discovering Cloud Services Based on Ontology

    Directory of Open Access Journals (Sweden)

    B. Heydari

    2017-12-01

    Full Text Available Due to its efficient, flexible, and dynamic infrastructure, and its role in estimating service quality parameters, cloud computing has become one of the most important topics in computing. Discovering cloud services is a fundamental issue in achieving high efficiency. To carry out operations in the cloud, a user may need to request several different services, either simultaneously or according to a working routine. These services can be offered by different cloud providers under different decision-making policies. Service management is therefore one of the important and challenging issues in cloud computing. With the advent of the semantic web, and consequently of semantic services in the cloud, access to many kinds of applications has become possible. Ontology is the core of the semantic web and can be used to ease the process of discovering services. A new model based on ontology is proposed in this paper. The results indicate that the proposed model discovers cloud services matching user searches in less time than other models.

  8. Dynamical Model for the Zodiacal Cloud and Sporadic Meteors

    Science.gov (United States)

    Nesvorný, David; Janches, Diego; Vokrouhlický, David; Pokorný, Petr; Bottke, William F.; Jenniskens, Peter

    2011-12-01

    The solar system is dusty, and would become dustier over time as asteroids collide and comets disintegrate, except that small debris particles in interplanetary space do not last long. They can be ejected from the solar system by Jupiter, thermally destroyed near the Sun, or physically disrupted by collisions. Also, some are swept up by the Earth (and other planets), producing meteors. Here we develop a dynamical model for the solar system meteoroids and use it to explain meteor radar observations. We find that the Jupiter Family Comets (JFCs) are the main source of the prominent concentrations of meteors arriving at the Earth from the helion and antihelion directions. To match the radiant and orbit distributions, as measured by the Canadian Meteor Orbit Radar (CMOR) and Advanced Meteor Orbit Radar (AMOR), our model implies that comets, and JFCs in particular, must frequently disintegrate when reaching orbits with low perihelion distance. Also, the collisional lifetimes of millimeter particles may be longer (≳10⁵ yr at 1 AU) than postulated in the standard collisional models (~10⁴ yr at 1 AU), perhaps because these chondrule-sized meteoroids are stronger than previously thought. Using observations of the Infrared Astronomical Satellite to calibrate the model, we find that the total cross section and mass of small meteoroids in the inner solar system are (1.7-3.5) × 10¹¹ km² and ~4 × 10¹⁹ g, respectively, in good agreement with previous studies. The mass input required to keep the zodiacal cloud in a steady state is estimated to be ~10⁴-10⁵ kg s⁻¹. The input is up to ~10 times larger than found previously, mainly because particles released closer to the Sun have shorter collisional lifetimes and need to be supplied at a faster rate. The total mass accreted by the Earth in particles between diameters D = 5 μm and 1 cm is found to be ~15,000 tons yr⁻¹ (factor of two uncertainty), which is a large share of the accretion flux measured by the Long Term Duration
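The steady-state mass-input estimate quoted above follows from a simple budget: the required input rate is roughly the total cloud mass divided by the particle collisional lifetime. A quick arithmetic check with the abstract's numbers (total mass ~4 × 10¹⁹ g, lifetimes between ~10⁴ and ~10⁵ yr):

```python
# Back-of-envelope check of the steady-state zodiacal cloud mass budget:
# required input rate ≈ (total cloud mass) / (particle collisional lifetime).
SECONDS_PER_YEAR = 3.156e7

total_mass_kg = 4e19 * 1e-3   # ~4 x 10^19 g quoted above, converted to kg

for lifetime_yr in (1e4, 1e5):   # standard vs. extended collisional lifetimes
    rate = total_mass_kg / (lifetime_yr * SECONDS_PER_YEAR)
    print(f"lifetime {lifetime_yr:.0e} yr -> input ~{rate:.1e} kg/s")
```

The two lifetimes bracket an input rate of roughly 10⁴-10⁵ kg/s, consistent with the range stated in the abstract.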

  9. DYNAMICAL MODEL FOR THE ZODIACAL CLOUD AND SPORADIC METEORS

    International Nuclear Information System (INIS)

    Nesvorný, David; Vokrouhlický, David; Pokorný, Petr; Bottke, William F.; Janches, Diego; Jenniskens, Peter

    2011-01-01

    The solar system is dusty, and would become dustier over time as asteroids collide and comets disintegrate, except that small debris particles in interplanetary space do not last long. They can be ejected from the solar system by Jupiter, thermally destroyed near the Sun, or physically disrupted by collisions. Also, some are swept up by the Earth (and other planets), producing meteors. Here we develop a dynamical model for the solar system meteoroids and use it to explain meteor radar observations. We find that the Jupiter Family Comets (JFCs) are the main source of the prominent concentrations of meteors arriving at the Earth from the helion and antihelion directions. To match the radiant and orbit distributions, as measured by the Canadian Meteor Orbit Radar (CMOR) and Advanced Meteor Orbit Radar (AMOR), our model implies that comets, and JFCs in particular, must frequently disintegrate when reaching orbits with low perihelion distance. Also, the collisional lifetimes of millimeter particles may be longer (≳10⁵ yr at 1 AU) than postulated in the standard collisional models (~10⁴ yr at 1 AU), perhaps because these chondrule-sized meteoroids are stronger than previously thought. Using observations of the Infrared Astronomical Satellite to calibrate the model, we find that the total cross section and mass of small meteoroids in the inner solar system are (1.7-3.5) × 10¹¹ km² and ~4 × 10¹⁹ g, respectively, in good agreement with previous studies. The mass input required to keep the zodiacal cloud in a steady state is estimated to be ~10⁴-10⁵ kg s⁻¹. The input is up to ~10 times larger than found previously, mainly because particles released closer to the Sun have shorter collisional lifetimes and need to be supplied at a faster rate. The total mass accreted by the Earth in particles between diameters D = 5 μm and 1 cm is found to be ~15,000 tons yr⁻¹ (factor of two uncertainty), which is a large share of the accretion flux measured by the

  10. Evaluation of stratocumulus cloud prediction in the Met Office forecast model during VOCALS-REx

    Directory of Open Access Journals (Sweden)

    S. J. Abel

    2010-11-01

    Full Text Available Observations in the subtropical southeast Pacific obtained during the VOCALS-REx field experiment are used to evaluate the representation of stratocumulus cloud in the Met Office forecast model and to identify key areas where model biases exist. Marked variations in the large scale structure of the cloud field were observed during the experiment on both day-to-day and diurnal timescales. In the remote maritime region the model is shown to have a good representation of synoptically induced variability in both cloud cover and marine boundary layer depth. Satellite observations show a strong diurnal cycle in cloud fraction and liquid water path in the stratocumulus with enhanced clearances of the cloud deck along the Chilean and Peruvian coasts on certain days. The model accurately simulates the phase of the diurnal cycle but is unable to capture the coastal clearing of cloud. Observations along the 20° S latitude line show a gradual increase in the depth of the boundary layer away from the coast. This trend is well captured by the model (typical low bias of 200 m), although significant errors exist at the coast, where the model marine boundary layer is too shallow and moist. Drizzle in the model responds to changes in liquid water path in a manner that is consistent with previous ship-borne observations in the region, although the intensity of this drizzle is likely to be too high, particularly in the more polluted coastal region where higher cloud droplet number concentrations are typical. Another mode of variability in the cloud field that the model is unable to capture is the formation of pockets of open cellular convection embedded in the overcast stratocumulus deck; an example of such a feature that was sampled during VOCALS-REx is shown.

  11. A Kinematic Model of Slow Slip Constrained by Tremor-Derived Slip Histories in Cascadia

    Science.gov (United States)

    Schmidt, D. A.; Houston, H.

    2016-12-01

    We explore new ways to constrain the kinematic slip distributions of large slow slip events using constraints from tremor. Our goal is to prescribe one or more slip pulses that propagate across the fault and scale appropriately to satisfy the observations. Recent work (Houston, 2015) inferred a crude representative stress time history at an average point using the tidal stress history, the static stress drop, and the timing of the evolution of tidal sensitivity of tremor over several days of slip. To convert a stress time history into a slip time history, we use simulations to explore the stressing history of a small locked patch due to an approaching rupture front. We assume that the locked patch releases strain through a series of tremor bursts whose activity rate is related to the stressing history. To test whether the functional form of a slip pulse is reasonable, we assume a hypothetical slip time history (Ohnaka pulse) timed with the occurrence of tremor to create a rupture front that propagates along the fault. The duration of the rupture front for a fault patch is constrained by the observed tremor catalog for the 2010 ETS event. The slip amplitude is scaled appropriately to match the observed surface displacements from GPS. Through a forward simulation, we evaluate the ability of the tremor-derived slip history to accurately predict the pattern of surface displacements observed by GPS. We find that the temporal progression of surface displacements is well modeled by a 2-4 day slip pulse, suggesting that some of the longer duration of slip typically found in time-dependent GPS inversions is biased by the temporal smoothing. However, at some locations on the fault, the tremor lingers beyond the passage of the slip pulse. A small percentage (5-10%) of the tremor appears to be activated ahead of the approaching slip pulse, and tremor asperities experience a driving stress on the order of 10 kPa/day. Tremor amplitude, rather than just tremor counts, is needed

  12. Modeling study of cloud droplet nucleation and in-cloud sulfate production during the Sanitation of the Atmosphere (SANA) 2 campaign

    Science.gov (United States)

    Liu, Xiaohong; Seidl, Winfried

    1998-01-01

    Based upon measurements of vertical profiles of gaseous SO2, H2O2, O3, and meteorological parameters from aircraft, and of the aerosol chemical composition and gaseous NH3, HNO3, and SO2 at the surface in southeastern Germany (Melpitz) during the Sanitation of the Atmosphere (SANA) 2 campaign, realistic modeling of cloud droplet nucleation and in-cloud sulfate production was performed with an explicit microphysical cloud model with size-resolved chemistry and cloud top entrainment. For the fair weather cumulus observed during the measurements, the calculated cloud droplet number concentrations could be as high as 2000 cm⁻³ (and precloud aerosol sulfate up to 9.1 μg m⁻³), indicating strong sulfur pollution at Melpitz during the campaign. The in-cloud sulfate production is within 1.5-5.0 μg m⁻³, depending on the initial gaseous NH3 concentration in the parcel. This result shows the necessity of gaseous NH3 vertical profile measurements. Entrainment can reduce the cloud droplet number concentration and cause the distribution of in-cloud produced sulfate to shift toward larger particle sizes. For the cases we studied, we do not find a significant effect of cloud top gaseous H2O2 entrainment on the in-cloud sulfate production. For the adiabatic cases the departure of bulk water H2O2 from the Henry's law equilibrium is very small. When entrainment is included, however, bulk water H2O2 concentrations can be clearly less than the equilibrium values, and the deficiencies are higher (>20%) for droplets larger than 10 μm radius. Our results suggest that entrainment could be one of the important factors accounting for the measured H2O2 deficiency in cloud water.

  13. Supporting the search for the CEP location with nonlocal PNJL models constrained by lattice QCD

    Energy Technology Data Exchange (ETDEWEB)

    Contrera, Gustavo A. [IFLP, UNLP, CONICET, Facultad de Ciencias Exactas, La Plata (Argentina); Gravitation, Astrophysics and Cosmology Group, FCAyG, UNLP, La Plata (Argentina); CONICET, Buenos Aires (Argentina); Grunfeld, A.G. [CONICET, Buenos Aires (Argentina); Comision Nacional de Energia Atomica, Departamento de Fisica, Buenos Aires (Argentina); Blaschke, David [University of Wroclaw, Institute of Theoretical Physics, Wroclaw (Poland); Joint Institute for Nuclear Research, Moscow Region (Russian Federation); National Research Nuclear University (MEPhI), Moscow (Russian Federation)

    2016-08-15

    We investigate the possible location of the critical endpoint in the QCD phase diagram based on nonlocal covariant PNJL models including a vector interaction channel. The form factors of the covariant interaction are constrained by lattice QCD data for the quark propagator. The comparison of our results for the pressure including the pion contribution and the scaled pressure shift ΔP/T⁴ vs. T/T_c with lattice QCD results shows better agreement when Lorentzian form factors for the nonlocal interactions and the wave function renormalization are considered. The strength of the vector coupling is used as a free parameter which influences results at finite baryochemical potential. It is used to adjust the slope of the pseudocritical temperature of the chiral phase transition at low baryochemical potential and the scaled pressure shift accessible in lattice QCD simulations. Our study, albeit presently performed at the mean-field level, supports the very existence of a critical point and favors its location within a region that is accessible in experiments at the NICA accelerator complex. (orig.)

  14. CA-Markov Analysis of Constrained Coastal Urban Growth Modeling: Hua Hin Seaside City, Thailand

    Directory of Open Access Journals (Sweden)

    Rajendra Shrestha

    2013-04-01

    Full Text Available Thailand, a developing country in Southeast Asia, is experiencing rapid development, particularly urban growth as a response to the expansion of the tourism industry. Hua Hin city provides an excellent example of an area where urbanization has flourished due to tourism. This study focuses on how the dynamic urban horizontal expansion of the seaside city of Hua Hin is constrained by the coast, thus making sustainability for this popular tourist destination—managing and planning for its local inhabitants, its visitors, and its sites—an issue. The study examines the association of land use type and land use change by integrating Geo-Information technology, a statistical model, and CA-Markov analysis for sustainable land use planning. The study identifies that land use types and land use changes from 1999 to 2008 shifted as a result of increased mobility; this trend, in turn, is closely tied to urban horizontal expansion. The sequence of land use change has run from forest to agriculture, from agriculture to grassland, and then to bare land and built-up areas. Coastal urban growth has, for a decade, been expanding horizontally from the downtown center along the beach to the western area around the golf course, the southern area along the beach, the southwest grassland area, and then the northern area near the airport.
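The Markov half of a CA-Markov analysis amounts to projecting land-use area shares forward with a transition matrix estimated from two classified maps (the cellular-automaton half then allocates the projected shares spatially). A minimal sketch with hypothetical classes and transition probabilities echoing the forest to agriculture to grassland to bare/built-up sequence described above:

```python
import numpy as np

# Hypothetical transition matrix of the kind CA-Markov analysis derives from
# two classified maps (rows: from-class, cols: to-class; each row sums to 1).
classes = ["forest", "agriculture", "grassland", "bare/built-up"]
P = np.array([
    [0.85, 0.10, 0.03, 0.02],   # forest mostly persists, some -> agriculture
    [0.00, 0.75, 0.15, 0.10],   # agriculture shifts toward grassland/built-up
    [0.00, 0.05, 0.70, 0.25],   # grassland converts to bare land / built-up
    [0.00, 0.00, 0.00, 1.00],   # built-up is effectively absorbing
])
share = np.array([0.40, 0.35, 0.15, 0.10])   # hypothetical base-year shares

# Project the area shares one Markov step (one mapping interval) at a time.
for step in range(3):
    share = share @ P
    print(step + 1, dict(zip(classes, np.round(share, 3))))
```

With an absorbing built-up class, the projection reproduces the one-way drift toward built-up area that the study reports along the Hua Hin coast.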

  15. Constraining the kinematics of metropolitan Los Angeles faults with a slip-partitioning model.

    Science.gov (United States)

    Daout, S; Barbot, S; Peltzer, G; Doin, M-P; Liu, Z; Jolivet, R

    2016-11-16

    Due to the limited resolution at depth of geodetic and other geophysical data, the geometry and the loading rate of the ramp-décollement faults below metropolitan Los Angeles are poorly understood. Here we complement these data by assuming conservation of motion across the Big Bend of the San Andreas Fault. Using a Bayesian approach, we constrain the geometry of the ramp-décollement system from the Mojave block to Los Angeles and propose a partitioning of the convergence with 25.5 ± 0.5 mm/yr and 3.1 ± 0.6 mm/yr of strike-slip motion along the San Andreas Fault and the Whittier Fault, with 2.7 ± 0.9 mm/yr and 2.5 ± 1.0 mm/yr of updip movement along the Sierra Madre and the Puente Hills thrusts. Incorporating conservation of motion in geodetic models of strain accumulation reduces the number of free parameters and constitutes a useful methodology for estimating the tectonic loading and seismic potential of buried fault networks.

  16. A methodology for constraining power in finite element modeling of radiofrequency ablation.

    Science.gov (United States)

    Jiang, Yansheng; Possebon, Ricardo; Mulier, Stefaan; Wang, Chong; Chen, Feng; Feng, Yuanbo; Xia, Qian; Liu, Yewei; Yin, Ting; Oyen, Raymond; Ni, Yicheng

    2017-07-01

    Radiofrequency ablation (RFA) is a minimally invasive thermal therapy for the treatment of cancer, hyperopia, and cardiac tachyarrhythmia. In RFA, the power delivered to the tissue is a key parameter. The objective of this study was to establish a methodology for the finite element modeling of RFA with constant power. Because of changes in the electric conductivity of tissue with temperature, a nonconventional boundary value problem arises in the mathematical modeling of RFA: neither the voltage (Dirichlet condition) nor the current (Neumann condition), but the power, that is, the product of voltage and current, is prescribed on part of the boundary. We solved the problem using a Lagrange multiplier: the product of the voltage and current on the electrode surface is constrained to be equal to the Joule heating. We theoretically proved the equality between the product of the voltage and current on the surface of the electrode and the Joule heating in the domain. We also proved the well-posedness of the problem of solving the Laplace equation for the electric potential under a constant power constraint prescribed on the electrode surface. The Pennes bioheat transfer equation and the Laplace equation for the electric potential, augmented with the constraint of constant power, were solved simultaneously using the Newton-Raphson algorithm. Three validation problems were solved. Numerical results were compared either with an analytical solution deduced in this study or with results obtained by ANSYS or experiments. This work gives the finite element modeling of constant power RFA a firm mathematical basis and opens a pathway toward achieving the optimal RFA power. Copyright © 2016 John Wiley & Sons, Ltd.
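The essence of the constant-power condition can be seen in a lumped (0-D) stand-in for the finite element problem: because tissue conductance rises with temperature, the applied voltage must be re-solved at every step so that the delivered power P = V·I = V²·G(T) stays at the prescribed value. This is only an illustrative sketch with hypothetical material parameters, not the paper's coupled Pennes/Laplace formulation:

```python
# Lumped-element sketch of the constant-power constraint in RFA modeling.
# Tissue conductance G(T) increases with temperature, so the voltage is
# rescaled each step to keep P = V*I fixed (the 0-D analogue of enforcing
# the power constraint with a Lagrange multiplier in the FE model).

P_set = 10.0     # prescribed power, W
T = 37.0         # tissue temperature, deg C
C = 4.0          # lumped heat capacity, J/K        (hypothetical)
h = 0.3          # lumped perfusion/conduction loss, W/K (hypothetical)
T_amb = 37.0
dt = 0.1         # time step, s

def conductance(T):
    # electric conductivity of tissue grows roughly 2 %/K near body temperature
    return 0.02 * (1.0 + 0.02 * (T - 37.0))

for step in range(600):                 # 60 s of heating
    G = conductance(T)
    V = (P_set / G) ** 0.5              # voltage satisfying the power constraint
    I = G * V
    assert abs(V * I - P_set) < 1e-9    # delivered power stays constant
    T += dt * (P_set - h * (T - T_amb)) / C   # simple energy balance

print(f"steady temperature ~{T:.1f} C, final voltage {V:.1f} V")
```

Note how the voltage drifts downward over the simulation even though the power is constant, which is exactly why neither a pure Dirichlet nor a pure Neumann condition suffices.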

  17. Technical Note: Probabilistically constraining proxy age–depth models within a Bayesian hierarchical reconstruction model

    Directory of Open Access Journals (Sweden)

    J. P. Werner

    2015-03-01

    Full Text Available Reconstructions of the late-Holocene climate rely heavily upon proxies that are assumed to be accurately dated by layer counting, such as measurements of tree rings, ice cores, and varved lake sediments. Considerable advances could be achieved if time-uncertain proxies were able to be included within these multiproxy reconstructions, and if time uncertainties were recognized and correctly modeled for proxies commonly treated as free of age model errors. Current approaches for accounting for time uncertainty are generally limited to repeating the reconstruction using each one of an ensemble of age models, thereby inflating the final estimated uncertainty – in effect, each possible age model is given equal weighting. Uncertainties can be reduced by exploiting the inferred space–time covariance structure of the climate to re-weight the possible age models. Here, we demonstrate how Bayesian hierarchical climate reconstruction models can be augmented to account for time-uncertain proxies. Critically, although a priori all age models are given equal probability of being correct, the probabilities associated with the age models are formally updated within the Bayesian framework, thereby reducing uncertainties. Numerical experiments show that updating the age model probabilities decreases uncertainty in the resulting reconstructions, as compared with the current de facto standard of sampling over all age models, provided there is sufficient information from other data sources in the spatial region of the time-uncertain proxy. This approach can readily be generalized to non-layer-counted proxies, such as those derived from marine sediments.
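The key idea above, formally updating age-model probabilities rather than weighting all ensemble members equally, can be illustrated with a toy ensemble in which each candidate age model is a time shift of a proxy record and the Bayesian update re-weights the shifts by how well each one matches independent, well-dated information. Everything below (the signal, noise level, and shift ensemble) is synthetic:

```python
import numpy as np

rng = np.random.default_rng(1)

# A "true" climate signal and a proxy whose age model is uncertain: the proxy
# is the signal shifted in time plus noise. A priori, all candidate shifts are
# equally probable.
t = np.arange(100)
climate = np.sin(2 * np.pi * t / 30.0)
true_shift = 3
proxy = np.roll(climate, true_shift) + 0.2 * rng.normal(size=t.size)

shifts = np.arange(-10, 11)          # ensemble of candidate age models
log_like = np.array([
    -0.5 * np.sum((np.roll(proxy, -s) - climate) ** 2) / 0.2 ** 2
    for s in shifts
])
w = np.exp(log_like - log_like.max())
w /= w.sum()                         # posterior age-model probabilities

best = int(shifts[np.argmax(w)])
print("most probable shift:", best, " posterior weight:", round(float(w.max()), 3))
```

Because the likelihood concentrates the posterior weight on the correct age model, averaging reconstructions with these weights yields far less inflated uncertainty than the equal-weight ensemble average that the abstract identifies as the current de facto standard.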

  18. Cloud Forecasting and 3-D Radiative Transfer Model Validation using Citizen-Sourced Imagery

    Science.gov (United States)

    Gasiewski, A. J.; Heymsfield, A.; Newman Frey, K.; Davis, R.; Rapp, J.; Bansemer, A.; Coon, T.; Folsom, R.; Pfeufer, N.; Kalloor, J.

    2017-12-01

    Cloud radiative feedback mechanisms are one of the largest sources of uncertainty in global climate models. Variations in local 3-D cloud structure impact the interpretation of NASA CERES and MODIS data for top-of-atmosphere radiation studies over clouds. Much of this uncertainty results from lack of knowledge of cloud vertical and horizontal structure. Surface-based data on 3-D cloud structure from a multi-sensor array of low-latency ground-based cameras can be used to intercompare radiative transfer models based on MODIS and other satellite data with CERES data to improve the 3-D cloud parameterizations. Closely related, forecasting of solar insolation and associated cloud cover on time scales out to 1 hour and with spatial resolution of 100 meters is valuable for stabilizing power grids with high solar photovoltaic penetrations. Data for cloud-advection-based solar insolation forecasting, obtained from a bottom-up perspective with the spatial resolution and latency needed to predict high-ramp-rate events, are strongly correlated with cloud-induced fluctuations. The development of grid management practices for improved integration of renewable solar energy thus also benefits from a multi-sensor camera array. The data needs for both 3-D cloud radiation modelling and solar forecasting are being addressed using a network of low-cost upward-looking visible light CCD sky cameras positioned at 2 km spacing over an area of 30-60 km in size, acquiring imagery at 30 second intervals. Such cameras can be manufactured in quantity and deployed by citizen volunteers at a marginal cost of $200-400 and operated unattended using existing communications infrastructure. A trial phase to understand the potential utility of up-looking multi-sensor visible imagery is underway within this NASA Citizen Science project. To develop the initial data sets necessary to optimally design a multi-sensor cloud camera array, a team of 100 citizen scientists using self-owned PDA cameras is being

  19. A model of the magnetosheath magnetic field during magnetic clouds

    Directory of Open Access Journals (Sweden)

    L. Turc

    2014-02-01

    Full Text Available Magnetic clouds (MCs) are huge interplanetary structures that originate from the Sun and are of paramount importance in driving magnetospheric storms. Before reaching the magnetosphere, MCs interact with the Earth's bow shock, which may alter their structure and therefore modify their expected geoeffectivity. We develop a simple 3-D model of the magnetosheath adapted to MC conditions; it is the first to describe the interaction of MCs with the bow shock and their propagation inside the magnetosheath. We find that when the MC encounters the Earth centrally, with its axis perpendicular to the Sun–Earth line, the MC's magnetic structure remains mostly unchanged from the solar wind to the magnetosheath. In this case, the entire dayside magnetosheath is located downstream of a quasi-perpendicular bow shock. When the MC is encountered far from its centre, or when its axis has a large tilt towards the ecliptic plane, the MC's structure downstream of the bow shock differs significantly from that upstream. Moreover, the MC's structure also differs from one region of the magnetosheath to another, and these differences vary in time and space as the MC passes by. In these cases, the bow shock configuration is mainly quasi-parallel. Strong magnetic field asymmetries arise in the magnetosheath, and the sign of the magnetic field's north–south component may change from the solar wind to some parts of the magnetosheath. We stress the importance of the Bx component. We estimate the regions where the magnetosheath and magnetospheric magnetic fields are anti-parallel at the magnetopause (i.e. favourable to reconnection). We find that the location of anti-parallel fields varies with time as the MCs move past Earth's environment, and that they may be situated near the subsolar region even for an initially northward magnetic field upstream of the bow shock. Our results point out the major role played by the bow shock configuration in modifying or keeping the
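    The anti-parallel-field criterion used in this record reduces to a magnetic shear angle between the draped magnetosheath field and the magnetospheric field at the magnetopause. A minimal sketch of that test follows; the function names and the 30° tolerance are illustrative assumptions, not the paper's actual thresholds.

```python
import numpy as np

def shear_angle(b_sheath, b_sphere):
    """Magnetic shear angle (degrees) between magnetosheath and
    magnetospheric field vectors at a magnetopause location."""
    b1 = np.asarray(b_sheath, float)
    b2 = np.asarray(b_sphere, float)
    cos_t = np.dot(b1, b2) / (np.linalg.norm(b1) * np.linalg.norm(b2))
    return np.degrees(np.arccos(np.clip(cos_t, -1.0, 1.0)))

def antiparallel(b_sheath, b_sphere, tol_deg=30.0):
    """True where the fields are within tol_deg of anti-parallel,
    i.e. favourable to magnetic reconnection."""
    return shear_angle(b_sheath, b_sphere) > 180.0 - tol_deg
```

    Evaluating this test over a grid of magnetopause points, with the sheath field taken from the magnetosheath model, maps out the reconnection-favourable regions the abstract describes.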

  20. A Constrained 3D Density Model of the Upper Crust from Gravity Data Interpretation for Central Costa Rica

    Directory of Open Access Journals (Sweden)

    Oscar H. Lücke

    2010-01-01

    Full Text Available The map of complete Bouguer anomaly of Costa Rica shows an elongated, NW-SE-trending gravity low in the central region. This gravity low coincides with the geographical region known as the Cordillera Volcánica Central, which comprises geologic and morpho-tectonic units consisting of Quaternary volcanic edifices. For quantitative interpretation of the sources of the anomaly, and to characterize the fluid pathways and reservoirs of arc magmatism, a constrained 3-D density model of the upper crust was designed by means of forward modeling. The density model is constrained by simplified surface geology, previously published seismic tomography and P-wave velocity models derived from wide-angle seismic refraction, as well as results from direct-interpretation methods applied to the gravity field for this work. The model takes into account the effects and influence of subduction-related Neogene through Quaternary arc magmatism on the upper crust.
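    The forward-modeling step named in this record computes the gravity anomaly predicted by a candidate density model and compares it with the observed Bouguer map. As a rough sketch of that computation (a point-mass approximation per voxel, not the prism formulas a production code such as IGMAS would use; all names here are assumptions):

```python
import numpy as np

G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

def forward_gravity(stations, voxels):
    """Vertical gravity anomaly (mGal) at surface stations from a 3-D
    density model; each voxel is approximated as a point mass.
    stations: (N, 3) x, y, z in metres (z positive down);
    voxels: iterable of (x, y, z, volume_m3, delta_rho_kg_m3)."""
    st = np.asarray(stations, float)
    gz = np.zeros(len(st))
    for x, y, z, vol, drho in voxels:
        d = st - np.array([x, y, z])
        r = np.linalg.norm(d, axis=1)
        # downward attraction component toward (or away from) the voxel
        gz += G * vol * drho * (z - st[:, 2]) / r**3
    return gz * 1e5  # m/s^2 -> mGal
```

    A negative density contrast (e.g. a low-density magmatic body) yields a negative predicted anomaly, consistent with the gravity low the abstract interprets; in practice the model is adjusted until the predicted field matches the observed one within the seismic constraints.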

  1. Extraction and representation of common feature from uncertain facial expressions with cloud model.

    Science.gov (United States)

    Wang, Shuliang; Chi, Hehua; Yuan, Hanning; Geng, Jing

    2017-12-01

    Human facial expressions are a key channel for conveying an individual's innate emotions in communication. However, the variation of facial expressions hampers the reliable identification of human emotions. In this paper, we present a cloud model to extract facial features for representing human emotion. First, the uncertainties in facial expression are analyzed in the context of the cloud model. The feature extraction and representation algorithm is then established using cloud generators. With the forward cloud generator, arbitrarily many facial expression images can be regenerated to visually represent the three extracted features, each of which plays a different role. The effectiveness of the computing model is tested on the Japanese Female Facial Expression database, from which three common features are extracted across seven facial expression images. The paper closes with conclusions and remarks.
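    The forward cloud generator referred to here is the standard construction of cloud model theory: from three numerical characteristics (expectation Ex, entropy En, hyper-entropy He) it produces "cloud drops" (x_i, mu_i). A minimal sketch, assuming the classic normal cloud variant (the record does not spell out which variant it uses):

```python
import numpy as np

def forward_cloud_generator(Ex, En, He, n, rng=None):
    """Forward normal cloud generator: draw n cloud drops (x, mu)
    from the characteristics Ex (expectation), En (entropy),
    He (hyper-entropy)."""
    rng = np.random.default_rng(rng)
    # each drop uses its own perturbed entropy En' ~ N(En, He^2)
    En_p = rng.normal(En, He, n)
    x = rng.normal(Ex, np.abs(En_p), n)
    # certainty degree of each drop with respect to the concept
    mu = np.exp(-(x - Ex) ** 2 / (2 * En_p ** 2))
    return x, mu
```

    Larger He spreads the drops into a "thicker" cloud, which is how the model encodes the uncertainty in facial-expression features; the backward generator would recover (Ex, En, He) from observed drops.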

  2. Measurement model and calibration experiment of over-constrained parallel six-dimensional force sensor based on stiffness characteristics analysis

    International Nuclear Information System (INIS)

    Niu, Zhi; Zhao, Yanzhi; Zhao, Tieshi; Cao, Yachao; Liu, Menghua

    2017-01-01

    An over-constrained, parallel six-dimensional force sensor has various advantages, including the ability to bear heavy loads and to provide redundant force-measurement information. These advantages make the sensor valuable in important aerospace applications (e.g. space-docking tests). The stiffness of each component in the over-constrained structure has a considerable influence on the internal force distribution of the structure. Thus, the measurement model chang
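    With more measuring branches than the six wrench components, such a sensor's measurement model is an overdetermined linear system, and the branch stiffnesses enter as weights on the redundant readings. A hedged sketch of that resolution step follows; the stiffness-weighted least-squares form and all names are assumptions for illustration, not the paper's calibrated model.

```python
import numpy as np

def resolve_wrench(J, k, readings):
    """Recover the 6-D wrench from redundant branch-force readings of
    an over-constrained parallel sensor.
    J: (m, 6) map from wrench to ideal branch forces, m > 6
    k: (m,) branch stiffnesses, used as least-squares weights
    readings: (m,) measured branch forces."""
    W = np.diag(np.asarray(k, float))
    # stiffness-weighted normal equations: (J^T W J) F = J^T W f
    return np.linalg.solve(J.T @ W @ J, J.T @ W @ np.asarray(readings, float))
```

    In calibration, J and k are identified experimentally so that the recovered wrench matches applied reference loads across the working range.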