WorldWideScience

Sample records for model running operationally

  1. Simulating run-up on steep slopes with operational Boussinesq models; capabilities, spurious effects and instabilities

    Directory of Open Access Journals (Sweden)

    F. Løvholt

    2013-06-01

Full Text Available Tsunamis induced by rock slides plunging into fjords constitute a severe threat to local coastal communities. The rock slide impact may give rise to highly non-linear waves in the near field, and because the wave lengths are relatively short, frequency dispersion comes into play. Fjord systems are rugged with steep slopes, and modeling non-linear dispersive waves in this environment with simultaneous run-up is demanding. We have run an operational Boussinesq-type TVD (total variation diminishing) model using different run-up formulations. Two different tests are considered: inundation on steep slopes and propagation in a trapezoidal channel. In addition, a set of Lagrangian models serves as reference models. Demanding test cases with solitary waves with amplitudes ranging from 0.1 to 0.5 were applied, with slopes ranging from 10 to 50°. Different run-up formulations yielded clearly different accuracy and stability, and only some provided accuracy comparable to the reference models. The test cases revealed that the model was prone to instabilities for large non-linearity and fine resolution. Some of the instabilities were linked with false breaking during the first positive inundation, which was not observed for the reference models. None of the models were able to handle the bore forming during drawdown, however. The instabilities are linked to short-crested undulations on the grid scale, and appear at fine resolution during inundation. As a consequence, convergence was not always obtained. There is reason to believe that the instability may be a general problem for Boussinesq models in fjords.
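The solitary-wave test cases described above start from the classical first-order solitary wave profile; a minimal sketch of that initial condition (the standard Boussinesq solitary wave, not the paper's TVD scheme, with illustrative parameter values):

```python
import math

def solitary_wave(x, t, amplitude, depth, gravity=9.81):
    """Free-surface elevation of a first-order solitary wave:
    eta(x, t) = A * sech^2(k * (x - c*t)), with
    k = sqrt(3*A / (4*h**3)) and celerity c = sqrt(g*(h + A)).
    A/h is the non-linearity parameter (0.1-0.5 in the tests above)."""
    c = math.sqrt(gravity * (depth + amplitude))
    k = math.sqrt(3.0 * amplitude / (4.0 * depth**3))
    return amplitude / math.cosh(k * (x - c * t))**2

# Crest of an A/h = 0.3 wave in 1 m depth sits at x = c*t:
eta_max = solitary_wave(0.0, 0.0, amplitude=0.3, depth=1.0)
```

The wave is symmetric about its crest and decays exponentially away from it, which is why a finite computational domain can hold it with negligible truncation of the tails.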

  2. CMS computing operations during run 1

    CERN Document Server

Adelman, J; Artieda, J; Bagliese, G; Ballestero, D; Bansal, S; Bauerdick, L; Behrenhof, W; Belforte, S; Bloom, K; Blumenfeld, B; Blyweert, S; Bonacorsi, D; Brew, C; Contreras, L; Cristofori, A; Cury, S; da Silva Gomes, D; Dolores Saiz Santos, M; Dost, J; Dykstra, D; Fajardo Hernandez, E; Fanzango, F; Fisk, I; Flix, J; Georges, A; Giffels, M; Gomez-Ceballos, G; Gowdy, S; Gutsche, O; Holzman, B; Janssen, X; Kaselis, R; Kcira, D; Kim, B; Klein, D; Klute, M; Kress, T; Kreuzer, P; Lahi, A; Larson, K; Letts, J; Levin, A; Linacre, J; Linares, J; Liu, S; Luyckx, S; Maes, M; Magini, N; Malta, A; Marra Da Silva, J; Mccartin, J; McCrea, A; Mohapatra, A; Molina, J; Mortensen, T; Padhi, S; Paus, C; Piperov, S; Ralph; Sartirana, A; Sciaba, A; Sligoi, I; Spinoso, V; Tadel, M; Traldi, S; Wissing, C; Wuerthwein, F; Yang, M; Zielinski, M; Zvada, M

    2014-01-01

During the first run, CMS collected and processed more than 10B data events and simulated more than 15B events. Up to 100k processor cores were used simultaneously and 100PB of storage was managed. Each month petabytes of data were moved and hundreds of users accessed data samples. In this document we discuss the operational experience from this first run. We present the workflows and data flows that were executed, and we discuss the tools and services developed, as well as the operations and shift models used to sustain the system. Many techniques followed the original computing planning, but some emerged as reactions to difficulties and opportunities. We also address the lessons learned from an operational perspective, and how they are shaping our thoughts for 2015.

  3. 1987 DOE review: First collider run operation

    Energy Technology Data Exchange (ETDEWEB)

    Childress, S.; Crawford, J.; Dugan, G.; Edwards, H.; Finley, D.A.; Fowler, W.B.; Harrison, M.; Holmes, S.; Makara, J.N.; Malamud, E.

    1987-05-01

This review covers the operations of the first collider run of the 1.8 TeV superconducting Tevatron collider. The papers enclosed cover: PBAR source status, fixed target operation, Tevatron cryogenic reliability and capacity upgrade, Tevatron energy upgrade progress and plans, status of the D0 low-beta insertion, 1.8 K and 4.7 K refrigeration for low-β quadrupoles, progress and plans for the LINAC and booster, and near-term and long-term performance improvements.

  4. A luminosity model of RHIC gold runs

    Energy Technology Data Exchange (ETDEWEB)

    Zhang, S.Y.

    2011-11-01

In this note, we present a luminosity model for RHIC gold runs. The model is applied to the physics fills of the 2007 run without cooling, and with longitudinal cooling applied to one beam only. Given the good agreement, the model is then used to project a fill with longitudinal cooling applied to both beams. Further development and possible applications of the model are discussed. To maximize the integrated luminosity, higher beam intensity, smaller longitudinal and transverse emittance, and smaller β* are usually the directions to work on. In the past 10 years, the RHIC gold runs have demonstrated a path toward this goal. Most recently, the successful commissioning of bunched-beam stochastic cooling, both longitudinal and transverse, has offered a chance of further RHIC luminosity improvement. With so many factors involved, a luminosity model would be useful to identify and project gains in machine development. In this article, a preliminary model is proposed. In Section 2, several secondary factors, which are not yet included in the model, are identified based on the RHIC operating conditions and experience in recent runs. In Section 3, the RHIC beam store parameters used in the model are listed and validated. In Section 4, the factors included in the model are discussed, and the luminosity model is presented. In Section 5, typical RHIC gold fills without cooling and with partial cooling are used for comparison with the model. A projection of fills with more cooling is then shown. In Section 6, further development of the model is discussed.
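A minimal sketch of the kind of ingredient such a luminosity model starts from: the head-on luminosity of two equal Gaussian beams, integrated over a fill with burn-off as the only intensity loss. All parameter values below are illustrative, not RHIC measurements, and hourglass, crossing angle and emittance growth are neglected:

```python
import math

def luminosity(n_ions, bunches, freq, sigma_x, sigma_y):
    """Instantaneous luminosity of two equal Gaussian beams colliding
    head-on: L = f * n_b * N^2 / (4*pi*sigma_x*sigma_y), with N the
    per-bunch intensity and f the revolution frequency."""
    return freq * bunches * n_ions**2 / (4.0 * math.pi * sigma_x * sigma_y)

def integrate_fill(n0, bunches, freq, sigma_x, sigma_y, xsec, hours, dt=60.0):
    """Integrated luminosity over a fill with burn-off only:
    per-bunch intensity decays as dN/dt = -xsec * L / n_b."""
    n, total, t = n0, 0.0, 0.0
    while t < hours * 3600.0:
        lum = luminosity(n, bunches, freq, sigma_x, sigma_y)
        total += lum * dt
        n = max(n - xsec * lum * dt / bunches, 0.0)
        t += dt
    return total
```

With sigma in cm and the cross-section in cm², the luminosity comes out in the usual cm⁻² s⁻¹; secondary effects such as intrabeam scattering would enter as additional loss and emittance-growth terms.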

  5. Cycle Engine Modelling Of Spark Ignition Engine Processes during Wide-Open Throttle (WOT) Engine Operation Running By Gasoline Fuel

    Science.gov (United States)

    Rahim, M. F. Abdul; Rahman, M. M.; Bakar, R. A.

    2012-09-01

A one-dimensional engine model is developed to simulate spark ignition engine processes in a four-stroke, four-cylinder gasoline engine. Physically, the baseline engine is an inline-cylinder engine with three valves per cylinder. Currently, the engine's mixture is formed externally using a piston-type carburettor. The model is based on one-dimensional equations of the gas exchange process, isentropic compression and expansion, and a progressive engine combustion process, and accounts for heat transfer and frictional losses as well as the effect of valve overlap. The model is tested at engine speeds of 2000, 3000 and 4000 rpm and validated using experimental engine data. Results showed that the model is able to simulate the engine's combustion process and produce reasonable predictions. However, compared with the experimental data, major discrepancies are noticeable, especially in the 2000 and 4000 rpm predictions. At low and high engine speeds, simulated cylinder pressures tend to under-predict the measured data, whereas simulated cylinder temperatures tend to over-predict the measured data at all engine speeds. The most accurate prediction is obtained at the medium engine speed of 3000 rpm. An appropriate wall heat transfer setup is vital for more precise calculation of cylinder pressure and temperature: more heat loss to the wall lowers the cylinder temperature, while more heat converted to useful work means an increase in cylinder pressure. Thus, in addition to the wall heat transfer setup, the Wiebe combustion parameters need to be carefully evaluated for better results.
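The Wiebe combustion parameters mentioned above enter through the standard Wiebe burn-fraction function; a sketch, taking the common textbook defaults a ≈ 5 and m ≈ 2 as assumed starting values that must be tuned against measured cylinder pressure:

```python
import math

def wiebe_burn_fraction(theta, theta_start, duration, a=5.0, m=2.0):
    """Cumulative mass fraction burned versus crank angle theta (deg):
    x_b = 1 - exp(-a * ((theta - theta_start) / duration)**(m + 1)).
    a sets combustion efficiency (x_b at end of burn), m the shape
    of the S-curve; theta_start and duration come from the spark
    timing and burn duration of the operating point."""
    if theta <= theta_start:
        return 0.0
    frac = min((theta - theta_start) / duration, 1.0)
    return 1.0 - math.exp(-a * frac**(m + 1))
```

With a = 5 the fraction burned at the end of combustion is 1 - exp(-5) ≈ 0.993, i.e. slightly incomplete combustion; the heat-release rate used in a pressure calculation is the crank-angle derivative of this curve.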

  6. CMS Strip Detector: Operational Experience and Run1 to Run2 Transition

    CERN Document Server

    Butz, Erik Manuel

    2014-01-01

The CMS silicon strip tracker is the largest silicon detector ever built. It has an active area of 200 m$^2$ of silicon segmented into almost 10 million readout channels. We describe operational aspects of the system during its first years of operation in LHC Run 1. During Long Shutdown 1 of the LHC, an extensive work program was carried out on the strip tracker services in order to facilitate operation of the system at sub-zero temperatures in LHC Run 2 and beyond. We describe these efforts and motivate the choice of Run 2 operating temperature. Finally, a brief outlook on the operation of the system in the upcoming Run 2 is given.

  7. Preliminary Results of a U.S. Deep South Modeling Experiment Using NASA SPoRT Initialization Datasets for Operational National Weather Service Local Model Runs

    Science.gov (United States)

    Wood, Lance; Medlin, Jeffrey M.; Case, Jon

    2012-01-01

A joint collaborative modeling effort among the NWS offices in Mobile, AL, and Houston, TX, and the NASA Short-term Prediction Research and Transition (SPoRT) Center began during the 2011-2012 cold season and continued into the 2012 warm season. The focus was on two frequent U.S. Deep South forecast challenges: the initiation of deep convection during the warm season, and heavy precipitation during the cold season. We wanted to examine the impact of certain NASA-produced products on the Weather Research and Forecasting Environmental Modeling System in improving the model representation of mesoscale boundaries such as the local sea, bay and land breezes (which often lead to warm-season convective initiation), and improving the model representation of slow-moving or quasi-stationary frontal boundaries (which focus cold-season storm cell training and heavy precipitation). The NASA products were: the 4-km Land Information System, a 1-km sea surface temperature analysis, and a 4-km greenness vegetation fraction analysis. Similar domains were established over the southeast Texas and Alabama coastlines, each with an outer grid of 9 km spacing and an inner nest of 3 km spacing. The model was run at each NWS office once per day out to 24 hours from 0600 UTC, using the NCEP Global Forecast System for initial and boundary conditions. Control runs without the NASA products were made at the NASA SPoRT Center. The NCAR Model Evaluation Tools verification package was used to evaluate both the positive and negative impacts of the NASA products on the model forecasts. Select case studies will be presented to highlight the influence of the products.

  8. Gravitational Baryogenesis in Running Vacuum models

    CERN Document Server

    Oikonomou, V K; Nunes, Rafael C

    2016-01-01

We study the gravitational baryogenesis mechanism for generating the baryon asymmetry in the context of running vacuum models. Regardless of whether these models can produce a viable cosmological evolution, we demonstrate that they produce a non-zero baryon-to-entropy ratio even if the Universe is filled with conformal matter. This is a notable difference between running vacuum gravitational baryogenesis and the Einstein-Hilbert case, since in the latter the predicted baryon-to-entropy ratio is zero. We consider two running vacuum models and show that the resulting baryon-to-entropy ratio is compatible with the observational data.
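For reference, the gravitational baryogenesis mechanism the abstract builds on rests on a CP-violating coupling between the baryon current and the derivative of the Ricci scalar; a standard sketch in the usual Einstein-Hilbert formulation (with M_* the cutoff scale, g_b the baryonic degrees of freedom, and T_D the decoupling temperature) is:

```latex
% CP-violating coupling between the baryon current J^\mu and
% the derivative of the Ricci scalar R, suppressed by M_*^2:
\mathcal{L}_{\rm int} \;=\; \frac{1}{M_*^2}\,\sqrt{-g}\,(\partial_\mu R)\,J^\mu ,
% which in thermal equilibrium biases the baryon-to-entropy ratio:
\frac{n_B}{s} \;\simeq\; -\,\frac{15\, g_b}{4\pi^2 g_*}\,
  \frac{\dot R}{M_*^2\, T}\bigg|_{T = T_D} .
```

In standard general relativity a conformal-matter universe (w = 1/3) has R = 0, hence dR/dt = 0 and a vanishing ratio; the point of the abstract is that the running vacuum contribution makes dR/dt non-zero even then.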

  9. New operator assistance features in the CMS Run Control System

    CERN Document Server

Andre, Jean-marc Olivier; Branson, James; Brummer, Philipp Maximilian; Chaze, Olivier; Cittolin, Sergio; Contescu, Cristian; Craigs, Benjamin Gordon; Darlea, Georgiana Lavinia; Deldicque, Christian; Demiragli, Zeynep; Dobson, Marc; Doualot, Nicolas; Erhan, Samim; Fulcheri, Jonathan F; Gigi, Dominique; Gladki, Michail; Glege, Frank; Gomez Ceballos, Guillelmo; Hegeman, Jeroen Guido; Holzner, Andre Georg; Janulis, Mindaugas; Jimenez Estupinan, Raul; Masetti, Lorenzo; Meijers, Franciscus; Meschi, Emilio; Mommsen, Remigius; Morovic, Srecko; O'Dell, Vivian; Orsini, Luciano; Paus, Christoph Maria Ernst; Petrova, Petia; Pieri, Marco; Racz, Attila; Reis, Thomas; Sakulin, Hannes; Schwick, Christoph; Simelevicius, Dainius; Zejdl, Petr

    2017-01-01

    The Run Control System of the Compact Muon Solenoid (CMS) experiment at CERN is a distributed Java web application running on Apache Tomcat servers. During Run-1 of the LHC, many operational procedures have been automated. When detector high voltages are ramped up or down or upon certain beam mode changes of the LHC, the DAQ system is automatically partially reconfigured with new parameters. Certain types of errors such as errors caused by single-event upsets may trigger an automatic recovery procedure. Furthermore, the top-level control node continuously performs cross-checks to detect sub-system actions becoming necessary because of changes in configuration keys, changes in the set of included front-end drivers or because of potential clock instabilities. The operator is guided to perform the necessary actions through graphical indicators displayed next to the relevant command buttons in the user interface. Through these indicators, consistent configuration of CMS is ensured. However, manually following t...

  10. Running PILOT: operational challenges and plans for an Antarctic Observatory

    Science.gov (United States)

    McGrath, Andrew; Saunders, Will; Gillingham, Peter; Ward, David; Storey, John; Lawrence, Jon; Haynes, Roger

    2008-07-01

    We highlight the operational challenges and planned solutions faced by an optical observatory taking advantage of the superior astronomical observing potential of the Antarctic plateau. Unique operational aspects of an Antarctic optical observatory arise from its remoteness, the polar environment and the unusual observing cycle afforded by long continuous periods of darkness and daylight. PILOT is planned to be run with remote observing via satellite communications, and must overcome both limited physical access and data transfer. Commissioning and lifetime operations must deal with extended logistics chains, continual wintertime darkness, extremely low temperatures and frost accumulation amidst other challenging issues considered in the PILOT operational plan, and discussed in this presentation.

  11. The LHC cryogenic operation for first collisions and physics run

    CERN Document Server

    Brodzinski, K; Benda, V; Bremer, J; Casas-Cubillos, J; Claudet, S; Delikaris, D; Ferlin, G; Fernandez Penacoba, G; Perin, A; Pirotte, O; Soubiran, M; Tavian, L; van Weelderen, R; Wagner, U

    2011-01-01

    The Large Hadron Collider (LHC) cryogenic system was progressively and successfully run for the LHC accelerator operation period starting from autumn 2009. The paper recalls the cryogenic system architecture and main operation principles. The system stability during magnets powering and availability periods for high energy beams with first collisions at 3.5 TeV are presented. Treatment of typical problems, weak points of the system and foreseen future consolidations will be discussed.

  12. Operation and Configuration of the LHC in Run 1

    CERN Document Server

    Alemany-Fernandez, R; Drosdal, L; Gorzawski, A; Kain, V; Lamont, M; Macpherson, A; Papotti, G; Pojer, M; Ponce, L; Redaelli, S; Roy, G; Solfaroli Camillocci, M; Venturini, W; Wenninger, J

    2013-01-01

Between 2010 and 2013 the LHC was operated with protons at beam energies of 3.5 and 4 TeV. The proton beams consisted of single bunches and trains with 150 ns (2010), 75 ns (2011) and 50 ns (2011 and 2012) bunch spacing. Performance well beyond initial expectations was achieved with 50 ns beams, culminating in the discovery of a 125 GeV/c² Higgs boson by the ATLAS and CMS experiments. The nominal bunch spacing of 25 ns was only used for electron-cloud scrubbing runs at injection and for collision tests in view of future operation. The cycle structure evolved over the years, and the operational β* for ATLAS and CMS was lowered in steps from 3.5 m (2010) to 0.6 m (2012). Lead-ion, mixed proton-lead, intermediate proton energy (1.38 TeV) and high-beta runs were also performed. This note provides an overview of LHC operation between 2010 and 2013. The aim is to document the various operational configurations and highlights of Run 1.

  13. Numerical Modelling of Wave Run-Up

    DEFF Research Database (Denmark)

Ramirez, Jorge Robert Rodriguez; Frigaard, Peter; Andersen, Thomas Lykke

    2011-01-01

Wave loads are important in problems related to offshore structures, such as wave run-up and slamming. The computation of such wave problems is carried out by CFD models. This paper presents one model, NS3, which solves the 3D Navier-Stokes equations and uses the Volume of Fluid (VOF) method to treat the free surface...

  15. Constructing predictive models of human running.

    Science.gov (United States)

    Maus, Horst-Moritz; Revzen, Shai; Guckenheimer, John; Ludwig, Christian; Reger, Johann; Seyfarth, Andre

    2015-02-06

    Running is an essential mode of human locomotion, during which ballistic aerial phases alternate with phases when a single foot contacts the ground. The spring-loaded inverted pendulum (SLIP) provides a starting point for modelling running, and generates ground reaction forces that resemble those of the centre of mass (CoM) of a human runner. Here, we show that while SLIP reproduces within-step kinematics of the CoM in three dimensions, it fails to reproduce stability and predict future motions. We construct SLIP control models using data-driven Floquet analysis, and show how these models may be used to obtain predictive models of human running with six additional states comprising the position and velocity of the swing-leg ankle. Our methods are general, and may be applied to any rhythmic physical system. We provide an approach for identifying an event-driven linear controller that approximates an observed stabilization strategy, and for producing a reduced-state model which closely recovers the observed dynamics. © 2014 The Author(s) Published by the Royal Society. All rights reserved.
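The SLIP stance phase described above reduces to a point-mass CoM acted on by a massless linear leg spring and gravity; a minimal sketch of that force law (the paper's data-driven Floquet control models are not reproduced here, and all parameter values are illustrative):

```python
import math

def slip_stance_accel(com, foot, k_over_m, leg_rest_length, g=9.81):
    """CoM acceleration during SLIP stance in the sagittal plane.

    A massless leg spring of stiffness k and rest length L0 connects
    the fixed foot contact point to the point-mass CoM; the spring
    force acts along the leg, gravity acts downward.
    com, foot: (x, z) tuples; k_over_m: stiffness / mass."""
    dx, dz = com[0] - foot[0], com[1] - foot[1]
    length = math.hypot(dx, dz)
    scale = k_over_m * (leg_rest_length - length) / length
    return (scale * dx, scale * dz - g)
```

During the ballistic aerial phase the same CoM simply follows (0, -g); alternating the two phases at touch-down and take-off events reproduces the characteristic bouncing ground-reaction-force profile.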

  16. Constrained Run-to-Run Optimization for Batch Process Based on Support Vector Regression Model

    Institute of Scientific and Technical Information of China (English)

    2006-01-01

An iterative (run-to-run) optimization method was presented for batch processes under input constraints. It is generally very difficult to acquire an accurate mechanistic model for a batch process. Because support vector machines are powerful for problems characterized by small samples, nonlinearity, high dimension and local minima, support vector regression models were developed for the end-point optimization of batch processes. Since there is no analytical way to find the optimal trajectory, an iterative method was used that exploits the repetitive nature of batch processes to determine the optimal operating policy. The optimization algorithm is proved to be convergent. Numerical simulation shows that the method can improve process performance through iterations.
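A stripped-down illustration of the run-to-run idea: batch-to-batch improvement of a scalar input under a hard input constraint. A finite-difference gradient step stands in here for the paper's SVR surrogate model, and the plant and all tuning values are hypothetical:

```python
def run_to_run_optimize(plant, u0, bounds, step=0.5, delta=0.05, n_batches=60):
    """Batch-to-batch optimisation of a scalar input u by gradient
    ascent on the measured end-point objective, with u clipped to its
    constraints after every batch. Each iteration plays the role of
    one batch run; a surrogate model (e.g. SVR fitted to past batches)
    would replace the two probe evaluations used for the gradient."""
    lo, hi = bounds
    u = u0
    for _ in range(n_batches):
        grad = (plant(u + delta) - plant(u - delta)) / (2.0 * delta)
        u = min(max(u + step * grad, lo), hi)  # enforce input constraint
    return u

# toy "batch plant": concave end-point objective with optimum at u = 2.0
best = run_to_run_optimize(lambda u: -(u - 2.0)**2, u0=0.0, bounds=(0.0, 1.5))
```

With the constraint active, the iteration settles on the bound (u = 1.5) rather than the unconstrained optimum, which is exactly the behaviour a constrained run-to-run scheme must converge to.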

  17. Running vacuum cosmological models: linear scalar perturbations

    Science.gov (United States)

    Perico, E. L. D.; Tamayo, D. A.

    2017-08-01

In cosmology, phenomenologically motivated expressions for running vacuum are commonly parameterized as linear functions typically denoted by Λ(H²) or Λ(R). Such models assume an equation of state for the vacuum given by P̄_Λ = -ρ̄_Λ, relating its background pressure P̄_Λ with its mean energy density ρ̄_Λ ≡ Λ/(8πG). This equation of state suggests that the vacuum dynamics is due to an interaction with the matter content of the universe. Most of the approaches studying the observational impact of these models only consider the interaction between the vacuum and the transient dominant matter component of the universe. We extend such models by assuming that the running vacuum is the sum of independent contributions, namely ρ̄_Λ = Σ_i ρ̄_Λi. Each Λ_i vacuum component is associated and interacting with one of the i matter components at both the background and perturbation levels. We derive the evolution equations for the linear scalar vacuum and matter perturbations in these two scenarios, and identify the running vacuum imprints on the cosmic microwave background anisotropies as well as on the matter power spectrum. In the Λ(H²) scenario the vacuum is coupled with every matter component, whereas the Λ(R) description only leads to a coupling between vacuum and non-relativistic matter, producing different effects on the matter power spectrum.
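A minimal background-level sketch of the interaction such parameterizations imply (notation as in the abstract; Q denotes the vacuum-matter energy exchange rate, and the running law shown is one common example, not the paper's specific choice):

```latex
% Friedmann constraint with matter and running vacuum:
3H^2 = 8\pi G\,(\bar\rho_m + \bar\rho_\Lambda),
% coupled continuity equations; since \bar P_\Lambda = -\bar\rho_\Lambda,
% the vacuum equation has no 3H(\rho+P) dilution term, so any running
% of \bar\rho_\Lambda must be fed by the matter sector:
\dot{\bar\rho}_m + 3H(\bar\rho_m + \bar P_m) = Q, \qquad
\dot{\bar\rho}_\Lambda = -Q,
% with, e.g., a linear running law of the type
\Lambda(H^2) = c_0 + 3\nu H^2 .
```

Splitting ρ̄_Λ into per-component pieces, as the abstract proposes, simply assigns one such exchange equation (with its own Q_i) to each matter component.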

  18. WIPP Remote Handled Waste Facility: Performance Dry Run Operations

    Energy Technology Data Exchange (ETDEWEB)

    Burrington, T. P.; Britain, R. M.; Cassingham, S. T.

    2003-02-24

The Remote Handled (RH) TRU Waste Handling Facility at the Waste Isolation Pilot Plant (WIPP) was recently upgraded and modified in preparation for handling and disposal of RH transuranic (TRU) waste. This modification will allow processing of RH-TRU waste arriving at the WIPP site in two different types of shielded road casks, the RH-TRU 72B and the CNS 10-160B. Washington TRU Solutions (WTS), the WIPP Management and Operation Contractor (MOC), conducted a performance dry run (PDR) beginning August 19, 2002 and successfully completed it on August 24, 2002. The PDR demonstrated that the RH-TRU waste handling system works as designed and demonstrated the handling process for each cask, including underground disposal. The purpose of the PDR was to develop and implement a plan that would define in general terms how the WIPP RH-TRU waste handling process would be conducted and evaluated. The PDR demonstrated the WIPP operations and support activities required to dispose of RH-TRU waste in the WIPP underground.

  19. Thermodynamical aspects of running vacuum models

    Energy Technology Data Exchange (ETDEWEB)

    Lima, J.A.S. [Universidade de Sao Paulo, Departamento de Astronomia, Sao Paulo (Brazil); Basilakos, Spyros [Academy of Athens, Research Center for Astronomy and Applied Mathematics, Athens (Greece); Sola, Joan [Univ. de Barcelona, High Energy Physics Group, Dept. d' Estructura i Constituents de la Materia, Institut de Ciencies del Cosmos (ICC), Barcelona, Catalonia (Spain)

    2016-04-15

The thermal history of a large class of running vacuum models in which the effective cosmological term is described by a truncated power series of the Hubble rate, whose dominant term is Λ(H) ∝ H^(n+2), is discussed in detail. Specifically, by assuming that the ultrarelativistic particles produced by the vacuum decay emerge into space-time in such a way that their energy density ρ_r ∝ T^4, the temperature evolution law and the increasing entropy function are analytically calculated. For the whole class of vacuum models explored here we find that the primeval value of the comoving radiation entropy density (associated with effectively massless particles) starts from zero and evolves extremely fast until reaching a maximum near the end of the vacuum decay phase, where it saturates. The late-time conservation of the radiation entropy during the adiabatic FRW phase also guarantees that the whole class of running vacuum models predicts the same correct value of the present-day entropy, S_0 ∼ 10^87-10^88 (in natural units), independently of the initial conditions. In addition, by assuming the Gibbons-Hawking temperature as an initial condition, we find that the ratio between the late-time and primordial vacuum energy densities is in agreement with naive estimates from quantum field theory, namely ρ_Λ0/ρ_ΛI ∼ 10^-123. Such results are independent of the power n and suggest that the observed Universe may evolve smoothly between two extreme, unstable, non-singular de Sitter phases. (orig.)

  20. Short-run and long-run effect of oil consumption on economic growth: ECM model

    Directory of Open Access Journals (Sweden)

    Sofyan Syahnur

    2014-04-01

Full Text Available The aim of this study is to investigate the effect of oil consumption on the economic growth of Aceh in the long run and short run, using an Error Correction Model (ECM) over 1985-2008, the period before the fall in world commodity prices. The analysis focuses on four types of oil consumption: avtur (aviation turbine fuel), gasoline, kerosene and diesel. The data were collected from the Central Bureau of Statistics of Aceh (BPS Aceh). The results show a positive effect only of diesel oil consumption on economic growth in Aceh, in both the short run and the long run.
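A minimal sketch of the two-step (Engle-Granger style) estimation behind an ECM, run on synthetic data; a full ECM also includes short-run Δx terms and diagnostics, omitted here for brevity, and this is not the paper's actual specification:

```python
def ols(x, y):
    """Closed-form OLS slope and intercept for a single regressor."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    slope = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / \
            sum((xi - mx)**2 for xi in x)
    return slope, my - slope * mx

def engle_granger_ecm(x, y):
    """Two-step ECM sketch:
    (1) long-run (cointegrating) regression  y_t = a + b*x_t + e_t;
    (2) short-run regression of dy_t on the lagged residual e_{t-1}.
    A significantly negative loading gamma on e_{t-1} signals
    adjustment back towards the long-run relation."""
    b, a = ols(x, y)
    resid = [yi - (a + b * xi) for xi, yi in zip(x, y)]
    dy = [y[t] - y[t - 1] for t in range(1, len(y))]
    gamma, _ = ols(resid[:-1], dy)
    return b, gamma

# synthetic pair with long-run relation y = 1 + 2x and a transient
# disequilibrium that decays geometrically toward it
x = [t / 10 for t in range(200)]
e = [5 * 0.6**t for t in range(200)]
y = [1 + 2 * xi + ei for xi, ei in zip(x, e)]
b, gamma = engle_granger_ecm(x, y)
```

On this data the long-run slope estimate lands near 2 and the error-correction loading is clearly negative, i.e. deviations from the long-run relation are pulled back toward equilibrium.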

  1. Pathways to designing and running an operational flood forecasting system: an adventure game!

    Science.gov (United States)

    Arnal, Louise; Pappenberger, Florian; Ramos, Maria-Helena; Cloke, Hannah; Crochemore, Louise; Giuliani, Matteo; Aalbers, Emma

    2017-04-01

In the design and building of an operational flood forecasting system, a large number of decisions have to be taken. These include technical decisions related to the choice of the meteorological forecasts to be used as input to the hydrological model, the choice of the hydrological model itself (its structure and parameters), the selection of a data assimilation procedure to run in real time, the use (or not) of a post-processor, and the computing environment to run the models and display the outputs. Additionally, a number of trans-disciplinary decisions are also involved in the process, such as the way the needs of the users will be considered in the modelling setup and how the forecasts (and their quality) will be efficiently communicated to ensure usefulness and build confidence in the forecasting system. We propose to reflect on the numerous alternative pathways to designing and running an operational flood forecasting system through an adventure game. In this game, the player is the protagonist of an interactive story driven by challenges, exploration and problem-solving. For this presentation, you will have a chance to play this game, acting as the leader of a forecasting team at an operational centre. Your role is to manage the actions of your team and make sequential decisions that impact the design and running of the system in preparation for and during a flood event, and that deal with the consequences of the forecasts issued. Your actions are evaluated by how much they cost you in time, money and credibility. Your aim is to take decisions that will ultimately lead to a good balance between time and money spent, while keeping your credibility high over the whole process. This game was designed to highlight the complexities behind decision-making in an operational forecasting and emergency response context, in terms of the variety of pathways that can be selected as well as the timescale, cost and timing of effective actions.

  2. Model for radionuclide transport in running waters

    Energy Technology Data Exchange (ETDEWEB)

    Jonsson, Karin; Elert, Mark [Kemakta Konsult AB, Stockholm (Sweden)

    2005-11-15

Two sites in Sweden are currently under investigation by SKB regarding their suitability as locations for a deep repository of radioactive waste, the Forsmark and Simpevarp/Laxemar areas. As part of the safety assessment, SKB has formulated a biosphere model with different sub-models for different parts of the ecosystem, in order to be able to predict the dose to humans following a possible radionuclide discharge from a future deep repository. In this report, a new model concept describing radionuclide transport in streams is presented. The main difference from the previous model for running water used by SKB, in which only dilution of the inflow of radionuclides was considered, is that the new model also includes parameterizations of the exchange processes present along the stream. This is done in order to investigate the effect of retention on the transport and to estimate the resulting concentrations in the different parts of the system. The concentrations determined with this new model could later be used for order-of-magnitude predictions of the dose to humans. The presented model concept is divided into two parts: a hydraulic model and a radionuclide transport model. The hydraulic model is used to determine the flow conditions in the stream channel and is based on the assumption of uniform flow and quasi-stationary conditions. The results from the hydraulic model are used in the radionuclide transport model, where the concentration is determined in the different parts of the stream ecosystem. The exchange processes considered are exchange with the sediments due to diffusion, advective transport and sedimentation/resuspension, and uptake of radionuclides in biota. Transport of both dissolved radionuclides and radionuclides sorbed onto particulates is considered. Sorption kinetics in the stream water phase is implemented, since the residence time in the stream water is probably short in comparison to the time scale of the kinetic sorption. In the sediment ...

  3. Long-run properties of some Danish macroeconometric models

    DEFF Research Database (Denmark)

    Harck, Søren H.

This paper provides an analytical treatment of various long-run aspects of the MONA model as well as the SMEC model of the Danish economy. More specifically, the analysis lays bare the long-run and steady-state nexus between unemployment and, respectively, inflation and the wage share implied ...

  4. An overview of Booster and AGS polarized proton operation during Run 15

    Energy Technology Data Exchange (ETDEWEB)

    Zeno, K. [Brookhaven National Lab. (BNL), Upton, NY (United States)

    2015-10-20

    This note is an overview of the Booster and AGS for the 2015 Polarized Proton RHIC run from an operations perspective. There are some notable differences between this and previous runs. In particular, the polarized source intensity was expected to be, and was, higher this year than in previous RHIC runs. The hope was to make use of this higher input intensity by allowing the beam to be scraped down more in the Booster to provide a brighter and smaller beam for the AGS and RHIC. The RHIC intensity requirements were also higher this run than in previous runs, which caused additional challenges because the AGS polarization and emittance are normally intensity dependent.

  5. Effect of sucrose availability and pre-running on the intrinsic value of wheel running as an operant and a reinforcing consequence.

    Science.gov (United States)

    Belke, Terry W; Pierce, W David

    2014-03-01

The current study investigated the effect of motivational manipulations on operant wheel running for sucrose reinforcement and on wheel running as a behavioral consequence for lever pressing, within the same experimental context. Specifically, rats responded on a two-component multiple schedule of reinforcement in which lever pressing produced the opportunity to run in a wheel in one component of the schedule (reinforcer component) and wheel running produced the opportunity to consume sucrose solution in the other component (operant component). Motivational manipulations involved removing sucrose contingent on wheel running and providing 1 h of pre-session wheel running. Results showed that, in opposition to a response-strengthening view, sucrose did not maintain operant wheel running. The motivational operations of withdrawing sucrose or providing pre-session wheel running, however, resulted in different wheel-running rates in the operant and reinforcer components of the multiple schedule; this rate discrepancy revealed the extrinsic reinforcing effects of sucrose on operant wheel running, but also indicated the intrinsic reinforcement value of wheel running across components. Differences in wheel-running rates between components were discussed in terms of arousal, undermining of intrinsic motivation, and behavioral contrast.

  6. Advanced overlay: sampling and modeling for optimized run-to-run control

    Science.gov (United States)

    Subramany, Lokesh; Chung, WoongJae; Samudrala, Pavan; Gao, Haiyong; Aung, Nyan; Gomez, Juan Manuel; Gutjahr, Karsten; Park, DongSuk; Snow, Patrick; Garcia-Medina, Miguel; Yap, Lipkong; Demirer, Onur Nihat; Pierson, Bill; Robinson, John C.

    2016-03-01

    In recent years overlay (OVL) control schemes have become more complicated in order to meet the ever shrinking margins of advanced technology nodes. As a result, this brings up new challenges to be addressed for effective run-to-run OVL control. This work addresses two of these challenges by new advanced analysis techniques: (1) sampling optimization for run-to-run control and (2) bias-variance trade-off in modeling. The first challenge in a high order OVL control strategy is to optimize the number of measurements and the locations on the wafer, so that the "sample plan" of measurements provides high quality information about the OVL signature on the wafer with acceptable metrology throughput. We solve this trade-off between accuracy and throughput by using a smart sampling scheme which utilizes various design-based and data-based metrics to increase model accuracy and reduce model uncertainty while avoiding wafer-to-wafer and within-wafer measurement noise caused by metrology, scanner or process. This sort of sampling scheme, combined with an advanced field-by-field extrapolated modeling algorithm, helps to maximize model stability and minimize on-product overlay (OPO). Second, the use of higher order overlay models means more degrees of freedom, which enables increased capability to correct for complicated overlay signatures, but also increases sensitivity to process or metrology induced noise. This is also known as the bias-variance trade-off. A high order model that minimizes the bias between the modeled and raw overlay signature on a single wafer will also have a higher variation from wafer to wafer or lot to lot, unless an advanced modeling approach is used. In this paper, we characterize the bias-variance trade-off to find the optimal scheme. The sampling and modeling solutions proposed in this study are validated by advanced process control (APC) simulations to estimate run-to-run performance, lot-to-lot and wafer-to-wafer model term monitoring to
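
    The bias-variance effect described above can be illustrated with a toy polynomial fit. This is a generic statistics sketch, not the authors' modeling algorithm; the signature, model orders, and noise level below are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(-1.0, 1.0, 15)             # sample sites across one wafer axis
true_sig = 5.0 + 3.0 * x - 2.0 * x**2      # assumed underlying overlay signature (nm)

def fit_stats(order, n_wafers=200, noise_nm=1.0):
    """Per-wafer RMS residual (bias proxy) and wafer-to-wafer spread of the
    fitted linear term (variance proxy) for a polynomial model of `order`."""
    resid, lin = [], []
    for _ in range(n_wafers):
        y = true_sig + rng.normal(0.0, noise_nm, x.size)   # one noisy wafer
        c = np.polynomial.polynomial.polyfit(x, y, order)
        fit = np.polynomial.polynomial.polyval(x, c)
        resid.append(np.sqrt(np.mean((y - fit) ** 2)))
        lin.append(c[1])
    return np.mean(resid), np.std(lin)

r3, s3 = fit_stats(3)   # low-order model
r9, s9 = fit_stats(9)   # high-order model
# The high-order model hugs each wafer more closely (r9 < r3), but its
# fitted terms fluctuate more from wafer to wafer (s9 > s3).
```

    This instability of high-order model terms across wafers is exactly why a run-to-run controller needs either more samples or a modeling approach that constrains the extra degrees of freedom.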

  7. Running a distributed virtual observatory: US Virtual Astronomical Observatory operations

    CERN Document Server

    McGlynn, Thomas A; Berriman, G Bruce; Thakar, Aniruddha R

    2012-01-01

    Operation of the US Virtual Astronomical Observatory shares some issues with modern physical observatories, e.g., intimidating data volumes and rapid technological change, and must also address unique concerns like the lack of direct control of the underlying and scattered data resources, and the distributed nature of the observatory itself. In this paper we discuss how the VAO has addressed these challenges to provide the astronomical community with a coherent set of science-enabling tools and services. The distributed nature of our virtual observatory, with data and personnel spanning geographic, institutional and regime boundaries, is simultaneously a major operational headache and the primary science motivation for the VAO. Most astronomy today uses data from many resources. Facilitation of matching heterogeneous datasets is a fundamental reason for the virtual observatory. Key aspects of our approach include continuous monitoring and validation of VAO and VO services and the datasets provided by the commun...

  8. Pessimistic Predicate/Transform Model for Long Running Business Processes

    Institute of Scientific and Technical Information of China (English)

    WANG Jinling; JIN Beihong; LI Jing

    2005-01-01

    Many business processes in enterprise applications are both long running and transactional in nature. However, no current transaction model can provide full transaction support for such long running business processes. This paper proposes a new transaction model, the pessimistic predicate/transform (PP/T) model, which can provide full transaction support for long running business processes. A framework was proposed on the Enterprise JavaBeans platform to implement the PP/T model. The framework enables application developers to focus on the business logic, with the underlying platform providing the required transactional semantics. The development and maintenance effort is therefore greatly reduced. Simulations show that the model has a sound concurrency management ability for long running business processes.

  9. WLCG Operations and the First Prolonged LHC Run

    Science.gov (United States)

    Girone, M.; Shiers, J.

    2011-12-01

    By the time of CHEP 2010 we had accumulated just over 6 months' experience with proton-proton data taking, production and analysis at the LHC. This paper addresses the issues seen from the point of view of the WLCG Service. In particular, it answers the following questions: Did the WLCG service deliver quantitatively and qualitatively? Were the "key performance indicators" a reliable and accurate measure of the service quality? Were the inevitable service issues resolved in a sufficiently rapid fashion? What are the key areas of improvement required, not only for long-term sustainable operations, but also to embrace new technologies? It concludes with a summary of our readiness for data taking in the light of real experience.

  10. WLCG Operations and the First Prolonged LHC Run

    CERN Document Server

    Girone, M; CERN. Geneva. IT Department

    2011-01-01

    By the time of CHEP 2010 we had accumulated just over 6 months’ experience with proton-proton data taking, production and analysis at the LHC. This paper addresses the issues seen from the point of view of the WLCG Service. In particular, it answers the following questions: Did the WLCG service deliver quantitatively and qualitatively? Were the "key performance indicators" a reliable and accurate measure of the service quality? Were the inevitable service issues resolved in a sufficiently rapid fashion? What are the key areas of improvement required, not only for long-term sustainable operations, but also to embrace new technologies? It concludes with a summary of our readiness for data taking in the light of real experience.

  11. Operational experience running Hadoop XRootD Fallback

    Science.gov (United States)

    Dost, J. M.; Tadel, A.; Tadel, M.; Würthwein, F.

    2015-12-01

    In April of 2014, the UCSD T2 Center deployed hdfs-xrootd-fallback, a UCSD-developed software system that interfaces Hadoop with XRootD to increase reliability of the Hadoop file system. The hdfs-xrootd-fallback system allows a site to depend less on local file replication and more on global replication provided by the XRootD federation to ensure data redundancy. Deploying the software has allowed us to reduce Hadoop replication on a significant subset of files in our cluster, freeing hundreds of terabytes in our local storage, and to recover HDFS blocks lost due to storage degradation. An overview of the architecture of the hdfs-xrootd-fallback system will be presented, as well as details of our experience operating the service over the past year.
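
    The fallback idea can be sketched as follows; the function names and signatures are hypothetical stand-ins, not the actual hdfs-xrootd-fallback API. A read is served from the local HDFS replica, and only on failure is the same byte range recovered from the XRootD federation.

```python
def read_block(path, offset, length, local_read, federation_read):
    """Illustrative fallback read.  `local_read` and `federation_read` are
    injected callables standing in for the real HDFS and XRootD clients;
    both take (path, offset, length) and return bytes."""
    try:
        # Normal path: serve the block from the local Hadoop file system.
        return local_read(path, offset, length)
    except IOError:
        # Local replica lost or corrupt: the global replication provided
        # by the XRootD federation supplies the missing bytes.
        return federation_read(path, offset, length)
```

    Because the federation acts as the redundancy layer, local HDFS replication can be lowered without losing data availability, which is the storage saving the abstract describes.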

  12. Thermoregulation and endurance running in extinct hominins: Wheeler's models revisited.

    Science.gov (United States)

    Ruxton, Graeme D; Wilkinson, David M

    2011-08-01

    Thermoregulation is often cited as a potentially important influence on the evolution of hominins, thanks to a highly influential series of papers in the Journal of Human Evolution in the 1980s and 1990s by Peter Wheeler. These papers developed quantitative modeling of heat balance between different potential hominins and their environment. Here, we return to these models, update them in line with new developments and measurements in animal thermal biology, and modify them to represent a running hominin rather than the stationary form considered previously. In particular, we use our modified Wheeler model to investigate thermoregulatory aspects of the evolution of endurance running ability. Our model suggests that for endurance running to be possible, a hominin would need locomotive efficiency, sweating rates, and areas of hairless skin similar to modern humans. We argue that these restrictions suggest that endurance running may have been possible (from a thermoregulatory viewpoint) for Homo erectus, but is unlikely for any earlier hominins.

  13. Running Away

    Science.gov (United States)


  14. Modelling surface run-off and trends analysis over India

    Science.gov (United States)

    Gupta, P. K.; Chauhan, S.; Oza, M. P.

    2016-08-01

    The present study is mainly concerned with detecting the trend of run-off over the mainland of India, during a time period of 35 years, from 1971-2005 (May-October). Rainfall, soil texture, land cover types, slope, etc., were processed and run-off modelling was done using the Natural Resources Conservation Service (NRCS) model with modifications and cell size of 5×5 km. The slope and antecedent moisture corrections were incorporated in the existing model. Trend analysis of estimated run-off was done by taking into account different analysis windows such as cell, medium and major river basins, meteorological sub-divisions and elevation zones across India. It was estimated that out of the average 1012.5 mm of rainfall over India (considering the study period of 35 years), 33.8% got converted to surface run-off. An exponential model was developed between the rainfall and the run-off that predicted the run-off with an R^2 of 0.97 and RMSE of 8.31 mm. The run-off trend analysed using the Mann-Kendall test revealed that a significant pattern exists in 22 medium, two major river basins and three meteorological sub-divisions, while there was no evidence of a statistically significant trend in the elevation zones. Among the medium river basins, the highest positive rate of change in the run-off was observed in the Kameng basin (13.6 mm/yr), while the highest negative trend was observed in the Tista upstream basin (-21.4 mm/yr). Changes in run-off provide valuable information for understanding the region's sensitivity to climatic variability.
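
    The NRCS (curve number) relation at the core of the study's run-off model can be sketched as below. This is the textbook formula without the slope and antecedent-moisture corrections the authors added, and the curve number in the example is an arbitrary placeholder, not a value from the paper.

```python
def nrcs_runoff_mm(p_mm, cn, ia_ratio=0.2):
    """Direct run-off depth Q (mm) from rainfall P (mm) via the NRCS
    curve-number method: S is the potential maximum retention and
    Ia = ia_ratio * S the initial abstraction (0.2 is the classic value)."""
    s = 25400.0 / cn - 254.0          # retention S in mm for curve number CN
    ia = ia_ratio * s
    if p_mm <= ia:                    # all rainfall abstracted: no run-off
        return 0.0
    return (p_mm - ia) ** 2 / (p_mm - ia + s)

# e.g. the study's mean rainfall of 1012.5 mm on a moderately permeable
# surface (placeholder CN = 70):
q = nrcs_runoff_mm(1012.5, 70.0)
```

    Higher curve numbers (less permeable surfaces) convert a larger fraction of the same rainfall into run-off, which is why land cover and soil texture enter the model.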

  15. Modelling surface run-off and trends analysis over India

    Indian Academy of Sciences (India)

    P K Gupta; S Chauhan; M P Oza

    2016-08-01

    The present study is mainly concerned with detecting the trend of run-off over the mainland of India, during a time period of 35 years, from 1971–2005 (May–October). Rainfall, soil texture, land cover types, slope, etc., were processed and run-off modelling was done using the Natural Resources Conservation Service (NRCS) model with modifications and cell size of 5×5 km. The slope and antecedent moisture corrections were incorporated in the existing model. Trend analysis of estimated run-off was done by taking into account different analysis windows such as cell, medium and major river basins, meteorological sub-divisions and elevation zones across India. It was estimated that out of the average 1012.5 mm of rainfall over India (considering the study period of 35 years), 33.8% got converted to surface run-off. An exponential model was developed between the rainfall and the run-off that predicted the run-off with an $R^2$ of 0.97 and RMSE of 8.31 mm. The run-off trend analysed using the Mann–Kendall test revealed that a significant pattern exists in 22 medium, two major river basins and three meteorological sub-divisions, while there was no evidence of a statistically significant trend in the elevation zones. Among the medium river basins, the highest positive rate of change in the run-off was observed in the Kameng basin (13.6 mm/yr), while the highest negative trend was observed in the Tista upstream basin (−21.4 mm/yr). Changes in run-off provide valuable information for understanding the region’s sensitivity to climatic variability.

  16. Terror birds on the run: a mechanical model to estimate its maximum running speed

    Science.gov (United States)

    Blanco, R. Ernesto; Jones, Washington W

    2005-01-01

    ‘Terror bird’ is a common name for the family Phorusrhacidae. These large terrestrial birds were probably the dominant carnivores on the South American continent from the Middle Palaeocene to the Pliocene–Pleistocene limit. Here we use a mechanical model based on tibiotarsal strength to estimate maximum running speeds of three species of terror birds: Mesembriornis milneedwardsi, Patagornis marshi and a specimen of Phorusrhacinae gen. The model is validated on three living large terrestrial bird species. On the basis of the tibiotarsal strength we propose that Mesembriornis could have used its legs to break long bones and access their marrow. PMID:16096087

  17. Numerical Modelling of Wave Run-Up: Regular Waves

    DEFF Research Database (Denmark)

    Ramirez, Jorge; Frigaard, Peter; Andersen, Thomas Lykke;

    2011-01-01

    Wave loads are important in problems related to offshore structures, such as wave run-up and slamming. The computation of such wave problems is carried out by CFD models. This paper presents one model, NS3, which solves the 3D Navier-Stokes equations and uses the Volume of Fluid (VOF) method to treat the free...

  19. Long-Run Properties of Large-Scale Macroeconometric Models

    OpenAIRE

    Kenneth F. WALLIS-; John D. WHITLEY

    1987-01-01

    We consider alternative approaches to the evaluation of the long-run properties of dynamic nonlinear macroeconometric models, namely dynamic simulation over an extended database, or the construction and direct solution of the steady-state version of the model. An application to a small model of the UK economy is presented. The model is found to be unstable, but a stable form can be produced by simple alterations to the structure.
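
    The two evaluation routes compared above can be seen on a toy one-equation model (purely illustrative, not the UK model of the paper): simulate the dynamics over a long horizon, or solve the steady-state version directly.

```python
# Toy dynamic model: y_t = a + b * y_{t-1}, stable iff |b| < 1.
a, b = 2.0, 0.5

# Route 1: dynamic simulation over an extended horizon.
y = 0.0
for _ in range(200):
    y = a + b * y

# Route 2: direct solution of the steady-state version (y* = a + b * y*).
y_star = a / (1.0 - b)

# For a stable model both routes agree; with |b| >= 1 the simulation
# diverges, which is how instability of the kind found for the UK model
# reveals itself.
```

    In large nonlinear macroeconometric models the steady-state version must be constructed and solved numerically, but the logic is the same comparison shown here.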

  20. Matter density perturbation and power spectrum in running vacuum model

    CERN Document Server

    Geng, Chao-Qiang

    2016-01-01

    We investigate the matter density perturbation $\delta_m$ and power spectrum $P(k)$ in the running vacuum model (RVM) with the cosmological constant being a function of the Hubble parameter, given by $\Lambda = \Lambda_0 + 6 \sigma H H_0 + 3\...

  1. Operation of the upgraded ATLAS Central Trigger Processor during the LHC Run 2

    CERN Document Server

    Bertelsen, H.; Deviveiros, P.O.; Eifert, T.; Galster, G.; Glatzer, J.; Haas, S.; Marzin, A.; Silva Oliveira, M.V.; Pauly, T.; Schmieden, K.; Spiwoks, R.; Stelzer, J.

    2016-01-01

    The ATLAS Central Trigger Processor (CTP) is responsible for forming the Level-1 trigger decision based on the information from the calorimeter and muon trigger processors. In order to cope with the increase of luminosity and physics cross-sections in Run 2, several components of this system have been upgraded. In particular, the number of usable trigger inputs and trigger items have been increased from 160 to 512 and from 256 to 512, respectively. The upgraded CTP also provides extended monitoring capabilities and allows up to three independent combinations of sub-detectors to be operated simultaneously with full trigger functionality, which is particularly useful for commissioning, calibration and test runs. The software has also undergone a major upgrade to take advantage of all these new functionalities. An overview of the commissioning and the operation of the upgraded CTP during the LHC Run 2 is given.

  2. 2013 CEF RUN - PHASE 1 DATA ANALYSIS AND MODEL VALIDATION

    Energy Technology Data Exchange (ETDEWEB)

    Choi, A.

    2014-05-08

    Phase 1 of the 2013 Cold cap Evaluation Furnace (CEF) test was completed on June 3, 2013 after a 5-day round-the-clock feeding and pouring operation. The main goal of the test was to characterize the CEF off-gas produced from a nitric-formic acid flowsheet feed and confirm whether the CEF platform is capable of producing scalable off-gas data necessary for the revision of the DWPF melter off-gas flammability model; the revised model will be used to define new safety controls on the key operating parameters for the nitric-glycolic acid flowsheet feeds including total organic carbon (TOC). Whether the CEF off-gas data were scalable for the purpose of predicting the potential flammability of the DWPF melter exhaust was determined by comparing the predicted H₂ and CO concentrations using the current DWPF melter off-gas flammability model to those measured during Phase 1; data were deemed scalable if the calculated fractional conversions of TOC-to-H₂ and TOC-to-CO at varying melter vapor space temperatures were found to trend and further bound the respective measured data with some margin of safety. Being scalable thus means that for a given feed chemistry the instantaneous flow rates of H₂ and CO in the DWPF melter exhaust can be estimated with some degree of conservatism by multiplying those of the respective gases from a pilot-scale melter by the feed rate ratio. This report documents the results of the Phase 1 data analysis and the necessary calculations performed to determine the scalability of the CEF off-gas data. A total of six steady state runs were made during Phase 1 under non-bubbled conditions by varying the CEF vapor space temperature from near 700 to below 300°C, as measured in a thermowell (T_tw). At each steady state temperature, the off-gas composition was monitored continuously for two hours using MS, GC, and FTIR in order to track mainly H₂, CO, CO₂, NOₓ, and organic gases such as CH₄. The standard
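
    The scale-up rule stated in the abstract (full-scale gas flow estimated as pilot-scale flow times the feed-rate ratio, with some conservatism) amounts to a one-line calculation; the numbers below are made up for illustration and are not report values.

```python
def scale_gas_flow(pilot_flow, dwpf_feed_rate, pilot_feed_rate, margin=1.0):
    """Estimate the instantaneous flammable-gas flow (e.g. H2 or CO) in the
    full-scale melter exhaust from a pilot-scale (CEF) measurement by the
    feed-rate ratio; `margin` > 1 adds a conservatism factor.  Units cancel
    in the ratio, so any consistent feed-rate units work."""
    return pilot_flow * (dwpf_feed_rate / pilot_feed_rate) * margin

# e.g. a pilot-scale H2 flow of 2 units, with DWPF feeding 5x faster:
full = scale_gas_flow(2.0, 10.0, 2.0)
```

    Scalability in the report's sense is precisely the claim that this linear rule bounds the measured full-scale flows for a given feed chemistry.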

  4. Test of the classic model for predicting endurance running performance.

    Science.gov (United States)

    McLaughlin, James E; Howley, Edward T; Bassett, David R; Thompson, Dixie L; Fitzhugh, Eugene C

    2010-05-01

    To compare the classic physiological variables linked to endurance performance (VO2max, %VO2max at lactate threshold (LT), and running economy (RE)) with peak treadmill velocity (PTV) as predictors of performance in a 16-km time trial. Seventeen healthy, well-trained distance runners (10 males and 7 females) underwent laboratory testing to determine maximal oxygen uptake (VO2max), RE, percentage of maximal oxygen uptake at the LT (%VO2max at LT), running velocity at LT, and PTV. Velocity at VO2max (vVO2max) was calculated from RE and VO2max. Three stepwise regression models were used to determine the best predictors (classic vs treadmill performance protocols) for the 16-km running time trial. Simple Pearson correlations of the variables with 16-km performance showed vVO2max to have the highest correlation (r = -0.972) and %VO2max at the LT the lowest (r = 0.136). The correlation coefficients for LT, VO2max, and PTV were very similar in magnitude (r = -0.903 to r = -0.892). When VO2max, %VO2max at LT, RE, and PTV were entered into SPSS stepwise analysis, VO2max explained 81.3% of the total variance, and RE accounted for an additional 10.7%. vVO2max was shown to be the best predictor of the 16-km performance, accounting for 94.4% of the total variance. The measured velocity at VO2max (PTV) was highly correlated with the estimated velocity at vVO2max (r = 0.8867). Among well-trained subjects heterogeneous in VO2max and running performance, vVO2max is the best predictor of running performance because it integrates both maximal aerobic power and the economy of running. The PTV is linked to the same physiological variables that determine vVO2max.
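
    The way vVO2max integrates maximal aerobic power and economy reduces to a one-line calculation; units follow the common convention (VO2max in mL·kg⁻¹·min⁻¹, RE as the oxygen cost of a kilometre in mL·kg⁻¹·km⁻¹), and the example numbers are invented, not the study's subjects.

```python
def velocity_at_vo2max(vo2max_ml_kg_min, re_ml_kg_km):
    """vVO2max in km/h: maximal aerobic power divided by the oxygen cost
    of covering one kilometre, converted from km/min to km/h."""
    return vo2max_ml_kg_min / re_ml_kg_km * 60.0

# A runner with VO2max = 65 mL/kg/min and RE = 200 mL/kg/km:
v = velocity_at_vo2max(65.0, 200.0)   # about 19.5 km/h
```

    Two runners with the same VO2max but different economy get different vVO2max values, which is why it predicts performance better than VO2max alone.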

  5. Arbitrary Symmetric Running Gait Generation for an Underactuated Biped Model

    Science.gov (United States)

    Esmaeili, Mohammad; Macnab, Chris

    2017-01-01

    This paper investigates generating symmetric trajectories for an underactuated biped during the stance phase of running. We use a point mass biped (PMB) model for gait analysis that consists of a prismatic force actuator on a massless leg. The significance of this model is its ability to generate more general and versatile running gaits than the spring-loaded inverted pendulum (SLIP) model, making it more suitable as a template for real robots. The algorithm plans the necessary leg actuator force to cause the robot center of mass to undergo arbitrary trajectories in stance with any arbitrary attack angle and velocity angle. The necessary actuator forces follow from the inverse kinematics and dynamics. Then these calculated forces become the control input to the dynamic model. We compare various center-of-mass trajectories, including a circular arc and polynomials of the degrees 2, 4 and 6. The cost of transport and maximum leg force are calculated for various attack angles and velocity angles. The results show that choosing the velocity angle as small as possible is beneficial, but the angle of attack has an optimum value. We also find a new result: there exist biped running gaits with double-hump ground reaction force profiles which result in less maximum leg force than single-hump profiles. PMID:28118401
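
    For a point-mass biped on a massless leg, the stance-phase inverse dynamics reduce to requiring the leg to supply the net force m(a − g). The sketch below is our own simplification with hypothetical numbers, not the paper's algorithm: it returns the component of that required force along the leg axis (in the exact PMB the net force must lie entirely along the leg).

```python
import numpy as np

def leg_force(m, com, foot, com_acc, g=9.81):
    """Axial force a massless prismatic leg must supply so that a point mass
    at `com` (planar coords), in stance on `foot`, realises acceleration
    `com_acc`: from m*a = F*u + m*g_vec, with u the unit vector from foot
    to centre of mass.  Projecting onto u is an illustrative simplification."""
    u = (com - foot) / np.linalg.norm(com - foot)
    net = m * (com_acc - np.array([0.0, -g]))   # force the leg must provide
    return float(net @ u)

# CoM held motionless directly above the foot: the leg carries the weight.
f = leg_force(70.0, np.array([0.0, 1.0]), np.array([0.0, 0.0]),
              np.array([0.0, 0.0]))            # about 687 N
```

    Sweeping a prescribed CoM trajectory through such a computation, sample by sample, is how the planned actuator forces become the control input to the dynamic model.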

  6. Operator Spin Foam Models

    CERN Document Server

    Bahr, Benjamin; Kamiński, Wojciech; Kisielowski, Marcin; Lewandowski, Jerzy

    2010-01-01

    The goal of this paper is to introduce a systematic approach to spin foams. We define operator spin foams, that is foams labelled by group representations and operators, as the main tool. An equivalence relation we impose on the set of operator spin foams allows splitting the faces and edges of the foams. Consistency with that relation requires introducing the face amplitude familiar from BF theory. The operator spin foam models are defined quite generally. Imposing a maximal symmetry leads to a family we call natural operator spin foam models. This symmetry, combined with demanding consistency with splitting the edges, determines a complete characterization of a general natural model. It can be obtained by applying arbitrary (quantum) constraints on an arbitrary BF spin foam model. In particular, imposing suitable constraints on the Spin(4) BF spin foam model is exactly the way we tend to view 4d quantum gravity, starting with the BC model and continuing with the EPRL or FK models. That makes...

  7. Parallelization and Performance of the NIM Weather Model Running on GPUs

    Science.gov (United States)

    Govett, Mark; Middlecoff, Jacques; Henderson, Tom; Rosinski, James

    2014-05-01

    The Non-hydrostatic Icosahedral Model (NIM) is a global weather prediction model being developed to run on the GPU and MIC fine-grain architectures. The model dynamics, written in Fortran, was initially parallelized for GPUs in 2009 using the F2C-ACC compiler and demonstrated good results running on a single GPU. Subsequent efforts have focused on (1) running efficiently on multiple GPUs, (2) parallelization of NIM for Intel-MIC using OpenMP, (3) assessing commercial Fortran GPU compilers now available from Cray, PGI and CAPS, (4) keeping the model up to date with the latest scientific development while maintaining a single-source, performance-portable code, and (5) parallelization of two physics packages used in the NIM: the Global Forecast System (GFS) physics, used operationally, and the widely used Weather Research and Forecasting (WRF) model physics. The presentation will touch on each of these efforts, but highlight improvements in parallel performance of the NIM running on the Titan GPU cluster at ORNL, the ongoing parallelization of model physics, and a recent evaluation of commercial GPU compilers using the F2C-ACC compiler as the baseline.

  8. Operational experience with the CMS pixel detector in LHC Run II

    CERN Document Server

    Karancsi, Janos

    2016-01-01

    The CMS pixel detector was repaired successfully, calibrated and commissioned for the second run of the Large Hadron Collider during the first long shutdown between 2013 and 2015. The replaced pixel modules were calibrated separately and show the expected behavior of an un-irradiated detector. In 2015, the system performed very well, with improved spatial resolution compared to 2012. During this time, the operational team faced various challenges, including the loss of a sector in one half shell which was only partially recovered. In 2016, the detector is expected to withstand instantaneous luminosities beyond the design limits and will need a combined effort of both online and offline teams in order to provide the high quality data that is required to reach the physics goals of CMS. We present the operational experience gained during the second run of the LHC and show the latest performance results of the CMS pixel detector.

  9. Operating Security System Support for Run-Time Security with a Trusted Execution Environment

    DEFF Research Database (Denmark)

    Gonzalez, Javier

    In this thesis we introduce run-time security primitives that enable a number of trusted services in the context of Linux. These primitives mediate any action involving sensitive data or sensitive assets in order to guarantee their integrity and confidentiality. We introduce a general mechanism to protect ... in the Linux operating system. We are in the process of making this driver part of the mainline Linux kernel.

  10. Operational Experience, Improvements, and Performance of the CDF Run II Silicon Vertex Detector

    CERN Document Server

    Aaltonen, T; Boveia, A.; Brau, B.; Bolla, G; Bortoletto, D; Calancha, C; Carron, S.; Cihangir, S.; Corbo, M.; Clark, D.; Di Ruzza, B.; Eusebi, R.; Fernandez, J.P.; Freeman, J.C.; Garcia, J.E.; Garcia-Sciveres, M.; Gonzalez, O.; Grinstein, S.; Hartz, M.; Herndon, M.; Hill, C.; Hocker, A.; Husemann, U.; Incandela, J.; Issever, C.; Jindariani, S.; Junk, T.R.; Knoepfel, K.; Lewis, J.D.; Martinez-Ballarin, R.; Mathis, M.; Mattson, M.; Merkel, P; Mondragon, M.N.; Moore, R.; Mumford, J.R.; Nahn, S.; Nielsen, J.; Nelson, T.K.; Pavlicek, V.; Pursley, J.; Redondo, I.; Roser, R.; Schultz, K.; Spalding, J.; Stancari, M.; Stanitzki, M.; Stuart, D.; Sukhanov, A.; Tesarek, R.; Treptow, K.; Wallny, R.; Worm, S.

    2013-01-01

    The Collider Detector at Fermilab (CDF) pursues a broad physics program at Fermilab's Tevatron collider. Between Run II commissioning in early 2001 and the end of operations in September 2011, the Tevatron delivered 12 fb-1 of integrated luminosity of p-pbar collisions at sqrt(s)=1.96 TeV. Many physics analyses undertaken by CDF require heavy flavor tagging with large charged particle tracking acceptance. To realize these goals, in 2001 CDF installed eight layers of silicon microstrip detectors around its interaction region. These detectors were designed for 2--5 years of operation, radiation doses up to 2 Mrad (20 kGy), and were expected to be replaced in 2004. The sensors were not replaced, and the Tevatron run was extended for several years beyond its design, exposing the sensors and electronics to much higher radiation doses than anticipated. In this paper we describe the operational challenges encountered over the past 10 years of running the CDF silicon detectors, the preventive measures undertaken, an...

  11. Linking Fish Habitat Modelling and Sediment Transport in Running Waters

    Institute of Scientific and Technical Information of China (English)

    Andreas; EISNER; Silke; WIEPRECHT; Matthias; SCHNEIDER

    2005-01-01

    The assessment of ecological status for running waters is one of the major issues within an integrated river basin management and plays a key role with respect to the implementation of the European Water Framework Directive (WFD). One of the tools supporting the development of sustainable river management is physical habitat modeling, e.g., for fish, because fish populations are one of the most important indicators for the ecological integrity of rivers. Within physical habitat models hydromorphological ...

  12. Synthane Pilot Plant, Bruceton, Pa. Run report No. 1. Operating period: July--December 1976

    Energy Technology Data Exchange (ETDEWEB)

    1976-01-01

    Test Directive No. 1 provided the operating conditions and process requirements for the first coal to be gasified in the Synthane Pilot Plant. Rosebud coal, which is a western sub-bituminous coal, was chosen by DOE because of its non-caking properties and reactivity. This report summarizes and presents the data obtained. The pilot plant produced gas for a total of 228 hours and gasified 709 tons of Rosebud coal from July 7 to December 20, 1976. Most of this period was spent in achieving process reliability and learning how to operate and control the gasifier. A significant number of equipment and process changes were required to achieve successful operation of the coal grinding and handling facilities, the Petrocarb feed system, and the char handling facilities. A complete revision of all gasifier instrumentation was necessary to achieve good control. Twenty-one test runs were accomplished, the longest of which was 37 hours. During this run, carbon conversions of 57 to 60% were achieved at bed temperatures of 1450 to 1475°F. Earlier attempts to operate the gasifier with bed temperatures of 1550 and 1650°F resulted in clinker formation in the gasifier and the inability to remove char. Test Directive No. 1 was discontinued in January 1977, without meeting the directive's goals because the process conditions of free fall of coal feed into the Synthane gasifier resulted in excessive quantities of tar and fines carryover into the gas scrubbing area. Each time the gasifier was opened after a run, the internal cyclone dip leg was found to be plugged solidly with hard tar and fines. The gas scrubbing equipment was always badly fouled with char and tar requiring an extensive and difficult cleanout. Packing in the gas scrubber had to be completely changed twice due to extensive fouling.

  13. Energy Operation Model

    Energy Technology Data Exchange (ETDEWEB)

    2017-04-27

    The Energy Operation Model (EOM) simulates the operation of the electric grid at the zonal scale, including inter-zonal transmission constraints. It generates the production cost, power generation by plant and category, fuel usage, and locational marginal price (LMP), with a flexible way to constrain power production by environmental conditions (e.g., heat waves or drought). Unlike commercial software such as PROMOD IV, where generator capacity and heat-rate efficiency can only be adjusted on a monthly basis, EOM calculates capacity impacts and plant efficiencies from hourly ambient conditions (air temperature and humidity) and cooling water availability for thermal plants. Hydropower dispatch is not yet included.
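The hourly-versus-monthly distinction can be sketched numerically. The following is a hypothetical illustration, not EOM code; the derating slope and reference temperature are invented for the example:

```python
# Hypothetical sketch (not EOM's actual code): derating thermal plant
# capacity hour by hour from ambient temperature, in contrast to a
# monthly-average adjustment. The linear derate slope is illustrative.

def derated_capacity_mw(nameplate_mw, ambient_c, ref_c=15.0, slope=0.006):
    """Reduce available capacity by `slope` (fraction per deg C) above ref_c."""
    excess = max(0.0, ambient_c - ref_c)
    return nameplate_mw * max(0.0, 1.0 - slope * excess)

# Hourly temperatures during a heat-wave day vs. their daily mean
hourly_temps = [22, 25, 30, 36, 40, 38, 33, 27]
mean_temp = sum(hourly_temps) / len(hourly_temps)

hourly = [derated_capacity_mw(500.0, t) for t in hourly_temps]
averaged = derated_capacity_mw(500.0, mean_temp)

# An average-temperature derate misses the deepest hourly capacity dips
print(min(hourly), averaged)
```

The point of the sketch is only that a capacity model driven by averaged ambient conditions cannot reproduce the worst-hour shortfall that an hourly model sees.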

  14. Designing Green Networks and Network Operations Saving Run-the-Engine Costs

    CERN Document Server

    Minoli, Daniel

    2011-01-01

    In recent years the confluence of socio-political trends toward environmental responsibility and the pressing need to reduce Run-the-Engine (RTE) costs has given birth to the nascent discipline of Green IT. A clear and concise introduction to green networks and green network operations, this book examines analytical measures and discusses virtualization, network computing, and web services as approaches for green data centers and networks. It identifies strategies for green appliances and end devices and examines the methodical steps that can be taken over time to achieve a seamless migration ...

  15. Operation of the enhanced ATLAS First Level Calorimeter Trigger at the start of Run-2

    CERN Document Server

    Palka, Marek; The ATLAS collaboration

    2015-01-01

    In 2015 the LHC will operate with a higher center-of-mass energy and proton beam luminosity. To keep a high trigger efficiency against the increased event rate, parts of the ATLAS Level-1 Calorimeter Trigger electronics have been re-designed or newly introduced (Pre-Processors, Merging Modules and Topological Processors). Additionally, to achieve the best possible resolution for the reconstructed physics objects, complex calibration and monitoring systems are employed. Hit rates and energy spectra down to channel level, based on reconstructed events, are supervised with the calorimeter trigger hardware. The performance of the upgraded Level-1 Calorimeter Trigger at the beginning of LHC Run-2 is illustrated.

  16. Minimum Bias Trigger Scintillators for ATLAS: Commissioning and Run 2 Initial Operation

    CERN Document Server

    Dano Hoffmann, Maria; The ATLAS collaboration

    2015-01-01

    The Minimum Bias Trigger Scintillators (MBTS) delivered the primary trigger for selecting events from low luminosity proton-proton, lead-lead and lead-proton collisions with the smallest possible bias during LHC Run 1 (2009-2013). Similarly, the MBTS will select events for the first Run 2 physics measurements, for instance charged-particle multiplicity, the proton-proton cross section and rapidity gap measurements, at the unprecedented 13 TeV center of mass energy of proton-proton collisions. We will review the upgrades to the MBTS detector that were implemented during the 2013-2014 shutdown. New scintillators have been installed to replace the radiation-damaged ones, a modified optical readout scheme has been adopted to increase the light yield, and an improved data acquisition chain has been used to cope with the few issues observed during Run 1 operations. Since late 2014, the MBTS have been commissioned during cosmic data taking, first LHC beam splashes and single beam LHC fills. The goal is to have a fully commissioned ...

  17. Integrating spatio-temporal environmental models for planning ski runs

    NARCIS (Netherlands)

    Pfeffer, Karin

    2003-01-01

    The establishment of ski runs and ski lifts, the action of skiing and maintenance of ski runs may cause considerable environmental impact. Clearly, for improvements to be made in the planning of ski runs in alpine terrain a good understanding of the environmental system and the response of environme...

  18. CMS operations for Run II preparation and commissioning of the offline infrastructure

    CERN Document Server

    Cerminara, Gianluca

    2016-01-01

    The restart of the LHC coincided with an intense activity for the CMS experiment. Both at the beginning of Run II in 2015 and at the restart of operations in 2016, the collaboration was engaged in an extensive re-commissioning of the CMS data-taking operations. After the long stop, the detector was fully aligned and calibrated. Data streams were redesigned to fit the priorities dictated by the physics program for 2015 and 2016. A new reconstruction software (both online and offline) was commissioned with early collisions and further developed during the year. A massive campaign of Monte Carlo production was launched to assist physics analyses. This presentation reviews the main events of this commissioning journey and describes the status of CMS physics performance for 2016.

  19. Operation and performance of the CMS Resistive Plate Chambers during LHC run II

    CERN Document Server

    Eysermans, Jan

    2017-01-01

    The Resistive Plate Chambers (RPC) at the Compact Muon Solenoid (CMS) experiment at the CERN Large Hadron Collider (LHC) provide redundancy to the Drift Tubes in the barrel and the Cathode Strip Chambers in the endcap regions. Consisting of 1056 double-gap RPC chambers, the system's main detector parameters and environmental conditions are carefully monitored during the data taking period. At a center-of-mass energy of 13 TeV, the luminosity reached record levels, which was challenging from the operational and performance points of view. In this work, the main operational parameters are discussed and the overall performance of the RPC system is reported for the LHC Run II data taking period. With few inactive chambers, good and stable detector performance was achieved with high efficiency.

  20. Statistical Design of an Adaptive Synthetic X̄ Control Chart with Run Rule on Service and Management Operation

    Directory of Open Access Journals (Sweden)

    Shucheng Yu

    2016-01-01

    An improved synthetic X̄ control chart based on a hybrid adaptive scheme and a run rule scheme is introduced to enhance the statistical performance of the traditional synthetic X̄ control chart for service and management operations. The proposed hybrid adaptive scheme considers both variable sampling interval and variable sample size schemes. The properties of the proposed chart are obtained using a Markov chain approach. An extensive set of numerical results is presented to test the effectiveness of the proposed model in detecting small and moderate shifts in the process mean. The results show that the proposed chart is quicker than the standard synthetic X̄ chart and the CUSUM chart in detecting small and moderate shifts in processes of service and management operations.
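The performance measure behind such comparisons is the average run length (ARL). As a hedged illustration of the underlying non-adaptive synthetic chart (the adaptive variant in the paper requires the full Markov chain treatment), the sketch below evaluates one commonly quoted zero-state closed form, ARL = 1/[P(1 − (1 − P)^L)], where P is the probability a sample mean falls outside the X̄ limits and L is the conforming-run-length limit:

```python
import math

def normal_cdf(x):
    # Standard normal CDF via the error function
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def p_nonconforming(delta, k=3.0, n=1):
    """Probability a sample mean falls outside +/- k*sigma/sqrt(n) limits
    when the true mean has shifted by delta standard deviations."""
    z = delta * math.sqrt(n)
    return 1.0 - (normal_cdf(k - z) - normal_cdf(-k - z))

def synthetic_arl(delta, L=10, k=3.0, n=1):
    """Zero-state ARL of a plain (non-adaptive) synthetic X-bar chart
    with conforming-run-length limit L."""
    p = p_nonconforming(delta, k, n)
    return 1.0 / (p * (1.0 - (1.0 - p) ** L))

# ARL shrinks rapidly as the mean shift grows
for shift in (0.0, 1.0, 2.0):
    print(shift, synthetic_arl(shift))
```

In practice k and L are chosen jointly so that the in-control ARL matches a target (e.g. 370); the values above are purely illustrative.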

  1. Pairwise velocities in the "Running FLRW" cosmological model

    Science.gov (United States)

    Bibiano, Antonio; Croton, Darren J.

    2017-01-01

    We present an analysis of the pairwise velocity statistics from a suite of cosmological N-body simulations describing the "Running Friedmann-Lemaître-Robertson-Walker" (R-FLRW) cosmological model. This model is based on quantum field theory in a curved space-time and extends ΛCDM with a time-evolving vacuum energy density, ρ_Λ. To enforce local conservation of matter a time-evolving gravitational coupling is also included. Our results constitute the first study of velocities in the R-FLRW cosmology, and we also compare with other dark energy simulation suites, repeating the same analysis. We find a strong degeneracy between the pairwise velocity and σ8 at z = 0 for almost all scenarios considered, which remains even when we look back to epochs as early as z = 2. We also investigate various Coupled Dark Energy models, some of which show minimal degeneracy, and reveal interesting deviations from ΛCDM which could be readily exploited by future cosmological observations to test and further constrain our understanding of dark energy.

  2. Matter density perturbation and power spectrum in running vacuum model

    Science.gov (United States)

    Geng, Chao-Qiang; Lee, Chung-Chi

    2016-10-01

    We investigate the matter density perturbation δm and power spectrum P(k) in the running vacuum model (RVM) with the cosmological constant being a function of the Hubble parameter, given by Λ = Λ0 + 6σHH0 + 3νH2, in which the linear and quadratic terms of H would originate from the QCD vacuum condensation and cosmological renormalization group, respectively. Taking the dark energy perturbation into consideration, we derive the evolution equation for δm and find a specific scale dcr = 2π/kcr, which divides the evolution of the universe into the sub- and super-interaction regimes, corresponding to k ≪ kcr and k ≫ kcr, respectively. For the former, the evolution of δm has the same behavior as that in the ΛCDM model, while for the latter, the growth of δm is frozen (greatly enhanced) when ν + σ > (<) 0, due to the coupling between matter and dark energy. It is clear that the observational data rule out the cases with ν < 0 and ν + σ < 0, while the allowed window for the model parameters is extremely narrow, with ν, |σ| ≲ O(10^{-7}).
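The narrowness of the allowed window can be made concrete: with ν and |σ| of order 10^-7, Λ(H) is nearly indistinguishable from a constant over the relevant range of H. The following sketch (illustrative units with H0 = 1 and Λ0 = 3, not from the paper) checks this numerically:

```python
# Illustrative check (not the paper's code): with nu, |sigma| ~ 1e-7,
# Lambda(H) = Lambda0 + 6*sigma*H*H0 + 3*nu*H^2 deviates from Lambda0
# only at the 1e-6 level, even for H as large as ~3 H0.
# Work in units where H0 = 1 and Lambda0 = 3 (Lambda0 ~ 3 H0^2).

H0 = 1.0
Lambda0 = 3.0
nu, sigma = 1e-7, 1e-7

def Lambda(H):
    return Lambda0 + 6.0 * sigma * H * H0 + 3.0 * nu * H**2

for H in (1.0, 2.0, 3.0):
    rel = abs(Lambda(H) - Lambda0) / Lambda0
    print(H, rel)  # relative deviation stays tiny
```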

  3. Cosmological models with running cosmological term and decaying dark matter

    Science.gov (United States)

    Szydłowski, Marek; Stachowski, Aleksander

    2017-03-01

    We investigate the dynamics of the generalized ΛCDM model, in which the Λ term is running with the cosmological time. For the example model Λ(t) = Λbare + α²/t² we show the existence of a mechanism of modification of the scaling law for the energy density of dark matter: ρdm ∝ a^(-3+λ(t)). We use an approach developed by Urbanowski, in which properties of unstable vacuum states are analyzed from the point of view of the quantum theory of unstable states. We discuss the evolution of the Λ(t) term and point out that during the cosmic evolution there is a long phase in which this term is approximately constant. We also present the statistical analysis of both the Λ(t)CDM model with dark energy and decaying dark matter and the ΛCDM standard cosmological model. We use data such as Planck, SNIa, BAO, H(z) and the AP test. For the former we find that the best-fit value of the parameter Ωα2,0 is negative (energy transfer is from the dark matter to the dark energy sector) and the parameter Ωα2,0 belongs to the interval (−0.000040, −0.000383) at the 2-σ level. The decay of dark matter lowers the mass of dark matter particles, which are lighter than CDM particles and remain relativistic. The rate of the process of matter decay is estimated. Our model is consistent with a decaying mechanism producing unstable particles (e.g. sterile neutrinos), for which α² is negative.

  4. Matter density perturbation and power spectrum in running vacuum model

    Science.gov (United States)

    Geng, Chao-Qiang; Lee, Chung-Chi

    2017-01-01

    We investigate the matter density perturbation δm and power spectrum P(k) in the running vacuum model, with the cosmological constant being a function of the Hubble parameter, given by Λ = Λ0 + 6σHH0 + 3νH2, in which the linear and quadratic terms of H would originate from the QCD vacuum condensation and cosmological renormalization group, respectively. Taking the dark energy perturbation into consideration, we derive the evolution equation for δm and find a specific scale dcr = 2π/kcr, which divides the evolution of the universe into the sub-interaction and super-interaction regimes, corresponding to k ≪ kcr and k ≫ kcr, respectively. For the former, the evolution of δm has the same behaviour as that in the Λ cold dark matter (ΛCDM) model, while for the latter, the growth of δm is frozen (greatly enhanced) when ν + σ > (<) 0, due to the coupling between matter and dark energy, and the allowed window for the model parameters is extremely narrow, with ν, |σ| ≲ O(10^{-7}).

  5. First evidence of running cosmic vacuum: challenging the concordance model

    CERN Document Server

    Sola, Joan; Perez, Javier de Cruz

    2016-01-01

    Despite the fact that a rigid $\Lambda$-term is a fundamental building block of the concordance $\Lambda$CDM model, we show that a large class of cosmological scenarios with dynamical vacuum energy density $\rho_{\Lambda}$ and/or gravitational coupling $G$, together with a possible non-conservation of matter, are capable of seriously challenging the traditional phenomenological success of the $\Lambda$CDM. In this Letter, we discuss these "running vacuum models" (RVM's), in which $\rho_{\Lambda}=\rho_{\Lambda}(H)$ consists of a nonvanishing constant term and a series of powers of the Hubble rate. Such a generic structure is potentially linked to the quantum field theoretical description of the expanding Universe. By performing an overall fit to the cosmological observables $SNIa+BAO+H(z)+LSS+BBN+CMB$ (in which the WMAP9, Planck 2013 and Planck 2015 data are taken into account), we find that the RVM's appear definitely more favored than the $\Lambda$CDM, namely at an unprecedented level of $\sim 4\sigma$, implying ...

  6. Modeling and simulation of Cobot based on double over-running clutches

    Institute of Scientific and Technical Information of China (English)

    DONG Yu-hong; ZHANG Li-xun

    2008-01-01

    In order to analyze the characteristics of Cobot cooperation with a human in a shared workspace, the model of a non-holonomic constraint joint mechanism and its control model were constructed based on double over-running clutches. The simulation analysis was carried out and validated the passive and constraint features of the joint mechanism. In terms of Cobot components, the control model of a Cobot following a desired trajectory was built up. The simulation studies illustrate that the Cobot can track a desired trajectory and possesses passive and constraint features; a human supplies the operating force that makes the Cobot move, and a computer system controls its motion trajectory. So it can meet the requirements of Cobot collaboration with an operator. The Cobot model can be used in applications such as material handling, parts assembly and other situations requiring man-machine cooperation.

  7. W-026 integrated engineering cold run operational test report for balance of plant (BOP)

    Energy Technology Data Exchange (ETDEWEB)

    Kersten, J.K.

    1998-02-24

    This Cold Run test is designed to demonstrate the functionality of systems necessary to move waste drums throughout the plant using approved procedures, and the compatibility of these systems to function as an integrated process. This test excludes all internal functions of the gloveboxes. In the interest of efficiency and support of the facility schedule, the initial revision of the test (rev 0) was limited to the following: Receipt and storage of eight overpacked drums, four LLW and four TRU; Receipt, routing, and staging of eleven empty drums to the process area where they will be used later in this test; Receipt, processing, and shipping of two verification drums (Route 9); Receipt, processing, and shipping of two verification drums (Route 1). The above listed operations were tested using the rev 0 test document, through Section 5.4.25. The document was later revised to include movement of all staged drums to and from the LLW and TRU process and RWM gloveboxes. This testing was performed using Sections 5.5 through 5.11 of the rev 1 test document. The primary focus of this test is to prove the functionality of automatic operations for all mechanical and control processes listed. When necessary, the test demonstrates manual mode operations as well. Though the gloveboxes are listed, only waste and empty drum movement to, from, and between the gloveboxes was tested.

  8. How run-of-river operation affects hydropower generation and value.

    Science.gov (United States)

    Jager, Henriette I; Bevelhimer, Mark S

    2007-12-01

    Regulated rivers in the United States are required to support human water uses while preserving aquatic ecosystems. However, the effectiveness of hydropower license requirements nationwide has not been demonstrated. One requirement that has become more common is "run-of-river" (ROR) operation, which restores a natural flow regime. It is widely believed that ROR requirements (1) are mandated to protect aquatic biota, (2) decrease hydropower generation per unit flow, and (3) decrease energy revenue. We tested these three assumptions by reviewing hydropower projects with license-mandated changes from peaking to ROR operation. We found that ROR operation was often prescribed in states with strong water-quality certification requirements and migratory fish species. Although benefits to aquatic resources were frequently cited, changes were often motivated by other considerations. After controlling for climate, the overall change in annual generation efficiency across projects because of the change in operation was not significant. However, significant decreases were detected at one quarter of individual hydropower projects. As expected, we observed a decrease in flow during peak demand at 7 of 10 projects. At the remaining projects, diurnal fluctuations actually increased because of operation of upstream storage projects. The economic implications of these results, including both producer costs and ecologic benefits, are discussed. We conclude that regional-scale studies of hydropower regulation, such as this one, are long overdue. Public dissemination of flow data, license provisions, and monitoring data by way of on-line access would facilitate regional policy analysis while increasing regulatory transparency and providing feedback to decision makers.
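The finding that generation per unit flow need not change under ROR operation follows from the standard hydropower relation P = ρgηQH: with head and efficiency held constant, energy depends only on the total volume released, not on its scheduling. The sketch below illustrates this under those idealized assumptions (all numbers invented); real projects deviate because turbine efficiency varies with discharge and head:

```python
# Back-of-envelope sketch (constant head and efficiency assumed): the
# same daily water volume produces the same energy whether released as
# a 4-hour peak or spread run-of-river over 24 hours.

RHO, G = 1000.0, 9.81      # water density (kg/m^3), gravity (m/s^2)
ETA, HEAD = 0.90, 50.0     # turbine-generator efficiency, head (m)

def energy_mwh(flows_m3s, hours_per_step=1.0):
    """Energy from an hourly discharge schedule, using P = rho*g*eta*Q*H."""
    joules = sum(RHO * G * ETA * HEAD * q * hours_per_step * 3600.0
                 for q in flows_m3s)
    return joules / 3.6e9  # J -> MWh

peaking = [0.0] * 20 + [120.0] * 4   # 4-hour peaking release
ror = [20.0] * 24                    # steady run-of-river release

print(energy_mwh(peaking), energy_mwh(ror))  # equal volumes -> equal energy
```

The economic difference between the two schedules therefore comes from *when* the energy is delivered (peak vs. off-peak prices) and from efficiency variation, not from the volume-energy relation itself.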

  9. First Evidence of Running Cosmic Vacuum: Challenging the Concordance Model

    Science.gov (United States)

    Solà, Joan; Gómez-Valent, Adrià; de Cruz Pérez, Javier

    2017-02-01

    Despite the fact that a rigid Λ-term is a fundamental building block of the concordance ΛCDM model, we show that a large class of cosmological scenarios with dynamical vacuum energy density ρΛ, together with a dynamical gravitational coupling G or a possible non-conservation of matter, are capable of seriously challenging the traditional phenomenological success of the ΛCDM. In this paper, we discuss these “running vacuum models” (RVMs), in which ρΛ = ρΛ(H) consists of a nonvanishing constant term and a series of powers of the Hubble rate. Such a generic structure is potentially linked to the quantum field theoretical description of the expanding universe. By performing an overall fit to the cosmological observables SN Ia+BAO+H(z)+LSS+BBN+CMB (in which the WMAP9, Planck 2013, and Planck 2015 data are taken into account), we find that the class of RVMs appears significantly more favored than the ΛCDM, namely, at an unprecedented level of ≳ 4.2σ. Furthermore, the Akaike and Bayesian information criteria confirm that the dynamical RVMs are strongly preferred compared to the conventional rigid Λ-picture of the cosmic evolution.

  10. The running coupling of the minimal sextet composite Higgs model

    CERN Document Server

    Fodor, Zoltan; Kuti, Julius; Mondal, Santanu; Nogradi, Daniel; Wong, Chik Him

    2015-01-01

    We compute the renormalized running coupling of SU(3) gauge theory coupled to N_f = 2 flavors of massless Dirac fermions in the 2-index-symmetric (sextet) representation. This model is of particular interest as a minimal realization of the strongly interacting composite Higgs scenario. A recently proposed finite volume gradient flow scheme is used. The calculations are performed at several lattice spacings with two different implementations of the gradient flow, allowing for a controlled continuum extrapolation, and particular attention is paid to estimating the systematic uncertainties. For small values of the renormalized coupling our results for the beta-function agree with perturbation theory. For moderate couplings we observe a downward deviation relative to the 2-loop beta-function, but in the coupling range where the continuum extrapolation is fully under control we do not observe an infrared fixed point. The explored range includes the locations of the zero of the 3-loop and the 4-loop beta-functions in ...

  11. Effects of Yaw Error on Wind Turbine Running Characteristics Based on the Equivalent Wind Speed Model

    Directory of Open Access Journals (Sweden)

    Shuting Wan

    2015-06-01

    Natural wind is stochastic, characterized by a speed and direction that change randomly and frequently. Because of lag in the control system and in the yaw mechanism itself, wind turbines cannot be accurately aligned with the wind direction when the wind speed and direction change frequently. Thus, wind turbines often suffer from a series of engineering issues during operation, including frequent yawing, vibration overruns and downtime. This paper studies the effects of yaw error on wind turbine running characteristics at different wind speeds and control stages by establishing a wind turbine model, a yaw error model and an equivalent wind speed model that includes the wind shear and tower shadow effects. Formulas for the relevant effect coefficients Tc, Sc and Pc were derived. The simulation results indicate that the effects of yaw error on the aerodynamic torque, rotor speed and power output differ across running stages, and that the effect rules for each coefficient are not identical when the yaw error varies. These results may provide theoretical support for optimizing the yaw control strategies for each stage to increase the running stability of wind turbines and the utilization rate of wind energy.
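For intuition on why yaw error degrades power output, a widely used first-order approximation (not the paper's derived coefficients Tc, Sc and Pc, which come from the full equivalent wind speed model) scales captured power with the cube of the cosine of the yaw error:

```python
import math

# Common first-order approximation (not the paper's model): power
# captured by a yawed rotor scales roughly as cos^3 of the yaw error.

def yaw_power_factor(yaw_deg):
    """Fraction of aligned-rotor power captured at a given yaw error."""
    return math.cos(math.radians(yaw_deg)) ** 3

for gamma in (0, 5, 10, 20):
    print(gamma, round(yaw_power_factor(gamma), 3))
```

Even a 10° misalignment costs several percent of power under this approximation, which is why frequent small yaw errors matter for energy yield.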

  12. Modelling of Muscle Force Distributions During Barefoot and Shod Running

    Directory of Open Access Journals (Sweden)

    Sinclair Jonathan

    2015-09-01

    Research interest in barefoot running has expanded considerably in recent years, based around the notion that running without shoes is associated with a reduced incidence of chronic injuries. The aim of the current investigation was to examine the differences in the forces produced by different skeletal muscles during barefoot and shod running. Fifteen male participants ran at 4.0 m·s-1 (± 5%). Kinematics were measured using an eight camera motion analysis system alongside ground reaction force parameters. Differences in sagittal plane kinematics and muscle forces between footwear conditions were examined using repeated measures or Friedman’s ANOVA. The kinematic analysis showed that the shod condition was associated with significantly more hip flexion, whilst barefoot running was linked with significantly more flexion at the knee and plantarflexion at the ankle. The examination of muscle kinetics indicated that peak forces from Rectus femoris, Vastus medialis, Vastus lateralis and Tibialis anterior were significantly larger in the shod condition, whereas Gastrocnemius forces were significantly larger during barefoot running. These observations provide further insight into the mechanical alterations that runners make when running without shoes. Such findings may also deliver important information to runners regarding their susceptibility to chronic injuries in different footwear conditions.

  13. Modelling of Muscle Force Distributions During Barefoot and Shod Running.

    Science.gov (United States)

    Sinclair, Jonathan; Atkins, Stephen; Richards, Jim; Vincent, Hayley

    2015-09-29

    Research interest in barefoot running has expanded considerably in recent years, based around the notion that running without shoes is associated with a reduced incidence of chronic injuries. The aim of the current investigation was to examine the differences in the forces produced by different skeletal muscles during barefoot and shod running. Fifteen male participants ran at 4.0 m·s-1 (± 5%). Kinematics were measured using an eight camera motion analysis system alongside ground reaction force parameters. Differences in sagittal plane kinematics and muscle forces between footwear conditions were examined using repeated measures or Friedman's ANOVA. The kinematic analysis showed that the shod condition was associated with significantly more hip flexion, whilst barefoot running was linked with significantly more flexion at the knee and plantarflexion at the ankle. The examination of muscle kinetics indicated that peak forces from Rectus femoris, Vastus medialis, Vastus lateralis and Tibialis anterior were significantly larger in the shod condition, whereas Gastrocnemius forces were significantly larger during barefoot running. These observations provide further insight into the mechanical alterations that runners make when running without shoes. Such findings may also deliver important information to runners regarding their susceptibility to chronic injuries in different footwear conditions.

  14. Dynamical system approach to running Λ cosmological models

    Energy Technology Data Exchange (ETDEWEB)

    Stachowski, Aleksander [Jagiellonian University, Astronomical Observatory, Krakow (Poland); Szydlowski, Marek [Jagiellonian University, Astronomical Observatory, Krakow (Poland); Jagiellonian University, Mark Kac Complex Systems Research Centre, Krakow (Poland)

    2016-11-15

    We study the dynamics of cosmological models with a time dependent cosmological term. We consider five classes of models; two with the non-covariant parametrization of the cosmological term Λ: Λ(H)CDM cosmologies, Λ(a)CDM cosmologies, and three with the covariant parametrization of Λ: Λ(R)CDM cosmologies, where R(t) is the Ricci scalar, Λ(φ)-cosmologies with diffusion, and Λ(X)-cosmologies, where X = (1/2) g^{αβ}∇_α∇_β φ is the kinetic part of the density of the scalar field. We also consider the case of an emergent Λ(a) relation obtained from the behaviour of trajectories in a neighbourhood of an invariant submanifold. In the study of the dynamics we used dynamical system methods for investigating how an evolutionary scenario can depend on the choice of special initial conditions. We show that the methods of dynamical systems allow one to investigate all admissible solutions of a running Λ cosmology for all initial conditions. We interpret Alcaniz and Lima's approach as a scaling cosmology. We formulate the idea of an emergent cosmological term derived directly from an approximation of the exact dynamics. We show that some non-covariant parametrizations of the cosmological term like Λ(a), Λ(H) give rise to the non-physical behaviour of trajectories in the phase space. This behaviour disappears if the term Λ(a) is emergent from the covariant parametrization. (orig.)

  15. Operational Risk Modeling

    Directory of Open Access Journals (Sweden)

    Gabriela ANGHELACHE

    2011-06-01

    Losses resulting from operational risk events arise from a complex interaction between organizational factors, personnel and market participants that does not fit a simple classification scheme. Taking into account past losses (e.g. Barings, Daiwa), we can say that operational risk is a major source of financial losses in the banking sector, although until recently it has been underestimated, its losses being considered generally minor and not threatening the survival of a bank.

  16. Approaches in highly parameterized inversion - GENIE, a general model-independent TCP/IP run manager

    Science.gov (United States)

    Muffels, Christopher T.; Schreuder, Willem A.; Doherty, John E.; Karanovic, Marinko; Tonkin, Matthew J.; Hunt, Randall J.; Welter, David E.

    2012-01-01

    GENIE is a model-independent suite of programs that can be used to generally distribute, manage, and execute multiple model runs via the TCP/IP infrastructure. The suite consists of a file distribution interface, a run manager, a run executor, and a routine that can be compiled as part of a program and used to exchange model runs with the run manager. Because communication is via a standard protocol (TCP/IP), any computer connected to the Internet can serve in any of the capacities offered by this suite. Model independence is consistent with the existing template and instruction file protocols of the widely used PEST parameter estimation program. This report describes (1) the problem addressed; (2) the approach used by GENIE to queue, distribute, and retrieve model runs; and (3) user instructions, classes, and functions developed. It also includes (4) an example to illustrate the linking of GENIE with Parallel PEST using the interface routine.
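The queue-distribute-retrieve pattern can be sketched with a few lines of socket code. The following is a minimal illustration of the idea only, not GENIE's actual protocol, classes or API:

```python
import socket
import socketserver
import threading

# Minimal sketch of a TCP/IP run manager (not GENIE's actual protocol):
# the manager accepts connections, receives model-run identifiers line
# by line, records completions, and acknowledges each run.

class RunHandler(socketserver.StreamRequestHandler):
    def handle(self):
        for line in self.rfile:
            run_id = line.decode().strip()
            # A real manager would dispatch the model run here; we just ack.
            self.server.completed.append(run_id)
            self.wfile.write(f"done {run_id}\n".encode())

def start_manager():
    server = socketserver.ThreadingTCPServer(("127.0.0.1", 0), RunHandler)
    server.completed = []
    threading.Thread(target=server.serve_forever, daemon=True).start()
    return server

server = start_manager()
host, port = server.server_address

# A "worker" submits two runs and waits for each acknowledgement
with socket.create_connection((host, port)) as sock:
    f = sock.makefile("rwb")
    for run_id in ("run-001", "run-002"):
        f.write(f"{run_id}\n".encode())
        f.flush()
        print(f.readline().decode().strip())

server.shutdown()
```

Because the transport is plain TCP, the same pattern works across any machines that can reach each other over the network, which is the property the GENIE report emphasizes.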

  17. Predictive modelling of running and dwell times in railway traffic

    NARCIS (Netherlands)

    Kecman, P.; Goverde, R.M.P.

    2015-01-01

    Accurate estimation of running and dwell times is important for all levels of planning and control of railway traffic. The availability of historical track occupation data with a high degree of granularity inspired a data-driven approach for estimating these process times. In this paper we present ...

  18. Short-run and Current Analysis Model in Statistics

    Directory of Open Access Journals (Sweden)

    Constantin Anghelache

    2006-01-01

    Using the short-run statistical indicators is a compulsory requirement of current analysis. Therefore, a system of EUROSTAT short-run indicators has been set up in this respect, recommended for use by the member countries. On the basis of these indicators, regular, usually monthly, analyses are carried out regarding: the determination of production dynamics; the evaluation of short-run investment volume; the development of the turnover; the wage evolution; employment; the price indexes and the consumer price index (inflation); the volume of exports and imports, the extent to which imports are covered by exports, and the balance of trade. The EUROSTAT system of indicators of conjuncture is conceived as an open system, so that it can at any moment be extended or restricted, allowing indicators to be amended or even removed, depending on the domestic users' requirements as well as on the specific requirements of harmonization and integration. For short-run analysis there is also the World Bank system of indicators of conjuncture, which relies on the data sources offered by the World Bank, the World Resources Institute or the statistics of other international organizations. The system comprises indicators of social and economic development and focuses on indicators for the following three fields: human resources, environment and economic performance. At the end of the paper, there is a case study on the situation of Romania, for which we used all these indicators.

  19. Short-run and Current Analysis Model in Statistics

    Directory of Open Access Journals (Sweden)

    Constantin Mitrut

    2006-03-01

    Using the short-run statistical indicators is a compulsory requirement of current analysis. Therefore, a system of EUROSTAT short-run indicators has been set up in this respect, recommended for use by the member countries. On the basis of these indicators, regular, usually monthly, analyses are carried out regarding: the determination of production dynamics; the evaluation of short-run investment volume; the development of the turnover; the wage evolution; employment; the price indexes and the consumer price index (inflation); the volume of exports and imports, the extent to which imports are covered by exports, and the balance of trade. The EUROSTAT system of indicators of conjuncture is conceived as an open system, so that it can at any moment be extended or restricted, allowing indicators to be amended or even removed, depending on the domestic users' requirements as well as on the specific requirements of harmonization and integration. For short-run analysis there is also the World Bank system of indicators of conjuncture, which relies on the data sources offered by the World Bank, the World Resources Institute or the statistics of other international organizations. The system comprises indicators of social and economic development and focuses on indicators for the following three fields: human resources, environment and economic performance. At the end of the paper, there is a case study on the situation of Romania, for which we used all these indicators.

  20. Price Dispersion and Short Run Equilibrium in a Queuing Model

    OpenAIRE

    Michael Sattinger

    2003-01-01

    Price dispersion is analyzed in the context of a queuing market where customers enter queues to acquire a good or service and may experience delays. With menu costs, price dispersion arises and can persist in the medium and long run. The queuing market rations goods in the same way whether firm prices are optimal or not. Price dispersion reduces the rate at which customers get the good and reduces customer welfare.

  1. The ATLAS Run-2 Trigger Menu for higher luminosities: Design, Performance and Operational Aspects

    CERN Document Server

    Ruiz-Martinez, Aranzazu; The ATLAS collaboration

    2017-01-01

    The ATLAS experiment aims at recording about 1 kHz of physics collisions, starting from an LHC design bunch crossing rate of 40 MHz. To reduce the massive background rate while maintaining a high selection efficiency for rare physics events (such as beyond the Standard Model physics), a two-level trigger system is used. Events are selected based on physics signatures such as the presence of energetic leptons, photons, jets or large missing energy. The trigger system exploits topological information, as well as multi-variate methods, to carry out the necessary physics filtering. In total, the ATLAS online selection consists of thousands of different individual triggers. A trigger menu is a compilation of these triggers that specifies the physics algorithms to be used during data taking and the bandwidth a given trigger is allocated. Trigger menus reflect not only the physics goals of the collaboration for a given run, but also take into consideration the instantaneous luminosity of the LHC and limitations from the...

  2. Commissioning and Initial Run-2 Operation of the ATLAS Minimum Bias Trigger Scintillators

    CERN Document Server

    Dano Hoffmann, Maria; The ATLAS collaboration

    2015-01-01

    The Minimum Bias Trigger Scintillators (MBTS) are sub-detectors in ATLAS delivering the primary trigger for selecting events from low-luminosity proton-proton, lead-lead and lead-proton collisions with the smallest possible bias. The MBTS underwent a complete replacement before LHC Run-2, and several improvements were implemented in the layout. Since 2014 the MBTS have been commissioned with cosmic radiation and the first LHC Run-2 beam splash events. We summarise the outcome of the commissioning.

  3. The Trick Simulation Toolkit: A NASA/Opensource Framework for Running Time Based Physics Models

    Science.gov (United States)

    Penn, John M.

    2016-01-01

    The Trick Simulation Toolkit is a simulation development environment used to create high fidelity training and engineering simulations at the NASA Johnson Space Center and many other NASA facilities. Its purpose is to generate a simulation executable from a collection of user-supplied models and a simulation definition file. For each Trick-based simulation, Trick automatically provides job scheduling, numerical integration, the ability to write and restore human readable checkpoints, data recording, interactive variable manipulation, a run-time interpreter, and many other commonly needed capabilities. This allows simulation developers to concentrate on their domain expertise and the algorithms and equations of their models. Also included in Trick are tools for plotting recorded data and various other supporting utilities and libraries. Trick is written in C/C++ and Java and supports both Linux and MacOSX computer operating systems. This paper describes Trick's design and use at NASA Johnson Space Center.
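The scheduling-plus-integration pattern described above can be illustrated with a toy sketch. All names here (`euler_step`, `run_sim`) are hypothetical and do not reflect Trick's actual C/C++ API; the example only shows the idea of user-supplied jobs driven by a fixed-step executive that also integrates the state.

```python
def euler_step(state, deriv, dt):
    """One explicit-Euler step of the state vector (Trick offers many schemes)."""
    d = deriv(state)
    return [s + dt * ds for s, ds in zip(state, d)]

def run_sim(state, deriv, scheduled_jobs, dt, t_end):
    """Fixed-step executive: fire each job at its own rate, then integrate."""
    t = 0.0
    while t < t_end - 1e-12:
        for period, job in scheduled_jobs:
            # fire a job whenever t is (numerically) a multiple of its period
            if abs(t / period - round(t / period)) < 1e-9:
                job(t, state)
        state = euler_step(state, deriv, dt)
        t += dt
    return state

# Falling ball: state = [height, velocity]; dh/dt = v, dv/dt = -g
g = 9.81
log = []
final = run_sim(
    state=[100.0, 0.0],
    deriv=lambda s: [s[1], -g],
    scheduled_jobs=[(1.0, lambda t, s: log.append((t, s[0])))],  # 1 Hz recorder
    dt=0.01,
    t_end=2.0,
)
```

A real Trick simulation would declare its models and integration in a simulation definition file rather than in inline code like this.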

  4. Academic Education Chain Operation Model

    NARCIS (Netherlands)

    Ruskov, Petko; Ruskov, Andrey

    2007-01-01

    This paper presents an approach for modelling educational processes as a value-added chain. It is an attempt to use a business approach to interpret and compile existing business and educational processes into reference models, and to suggest an Academic Education Chain Operation Model. The model

  5. Modeling the Frequency of Cyclists’ Red-Light Running Behavior Using Bayesian PG Model and PLN Model

    Directory of Open Access Journals (Sweden)

    Yao Wu

    2016-01-01

    Full Text Available Red-light running behaviors of bicycles at signalized intersections lead to a large number of traffic conflicts and high collision potentials. The primary objective of this study is to model the cyclists’ red-light running frequency within the framework of Bayesian statistics. Data were collected at twenty-five approaches at seventeen signalized intersections. The Poisson-gamma (PG) and Poisson-lognormal (PLN) models were developed and compared. The models were validated using Bayesian p values based on posterior predictive checking indicators. It was found that both models fit the observed cyclists’ red-light running frequency well. Furthermore, the PLN model outperformed the PG model. The model estimation results showed that the amount of cyclists’ red-light running is significantly influenced by bicycle flow, conflicting traffic flow, pedestrian signal type, vehicle speed, and e-bike rate. The validation result demonstrated the reliability of the PLN model. The research results can help transportation professionals to predict the expected amount of cyclists’ red-light running and develop effective guidelines or policies to reduce the red-light running frequency of bicycles at signalized intersections.
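The overdispersion that motivates both models can be seen in a small simulation. This is an illustrative sketch with made-up rates, not the paper's data or estimates: the PG model draws each site's Poisson mean from a gamma distribution, the PLN model from a lognormal.

```python
import numpy as np

rng = np.random.default_rng(0)
n_sites = 25                 # e.g. one count per intersection approach
mu, phi = 8.0, 2.0           # illustrative mean and PG dispersion parameter

# Poisson-gamma: lambda_i ~ Gamma(shape=phi, scale=mu/phi), y_i ~ Poisson(lambda_i)
lam_pg = rng.gamma(shape=phi, scale=mu / phi, size=n_sites)
y_pg = rng.poisson(lam_pg)

# Poisson-lognormal: log(lambda_i) ~ Normal, shifted here so E[lambda_i] = mu
sigma = 0.6
lam_pln = np.exp(np.log(mu) - sigma**2 / 2 + sigma * rng.standard_normal(n_sites))
y_pln = rng.poisson(lam_pln)

# Both marginals are overdispersed relative to a plain Poisson(mu):
var_pg_theory = mu + mu**2 / phi                      # = 40.0
var_pln_theory = mu + mu**2 * (np.exp(sigma**2) - 1)  # about 35.7
```

The heavier tail of the lognormal mixing distribution is one common reason a PLN model can outperform a PG model on count data with a few extreme sites.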

  6. Exploiting CMS data popularity to model the evolution of data management for Run-2 and beyond

    Science.gov (United States)

    Bonacorsi, D.; Boccali, T.; Giordano, D.; Girone, M.; Neri, M.; Magini, N.; Kuznetsov, V.; Wildish, T.

    2015-12-01

    During the LHC Run-1 data taking, all experiments collected large data volumes from proton-proton and heavy-ion collisions. The collisions data, together with massive volumes of simulated data, were replicated in multiple copies, transferred among various Tier levels, transformed/slimmed in format/content. These data were then accessed (both locally and remotely) by large groups of distributed analysis communities exploiting the WorldWide LHC Computing Grid infrastructure and services. While efficient data placement strategies - together with optimal data redistribution and deletions on demand - have become the core of static versus dynamic data management projects, little effort has so far been invested in understanding the detailed data-access patterns which surfaced in Run-1. These patterns, if understood, can be used as input to simulation of computing models at the LHC, to optimise existing systems by tuning their behaviour, and to explore next-generation CPU/storage/network co-scheduling solutions. This is of great importance, given that the scale of the computing problem will increase far faster than the resources available to the experiments, for Run-2 and beyond. Studying data-access patterns involves the validation of the quality of the monitoring data collected on the “popularity” of each dataset, the analysis of the frequency and pattern of accesses to different datasets by analysis end-users, the exploration of different views of the popularity data (by physics activity, by region, by data type), the study of the evolution of Run-1 data exploitation over time, the evaluation of the impact of different data placement and distribution choices on the available network and storage resources and their impact on the computing operations. This work presents some insights from studies on the popularity data from the CMS experiment. We present the properties of a range of physics analysis activities as seen by the data popularity, and make recommendations for

  7. Two-Higgs-doublet model of type II confronted with the LHC run I and run II data

    Science.gov (United States)

    Wang, Lei; Zhang, Feng; Han, Xiao-Fang

    2017-06-01

    We examine the parameter space of the two-Higgs-doublet model of type II after imposing the relevant theoretical and experimental constraints from the precision electroweak data, B-meson decays, and the LHC Run I and Run II data. We find that the searches for Higgs bosons via the τ+τ−, WW, ZZ, γγ, hh, hZ, HZ, and AZ channels can give strong constraints on the CP-odd Higgs A and the heavy CP-even Higgs H, and the parameter space excluded by each channel is respectively carved out in detail, assuming that either mA or mH is fixed to 600 or 700 GeV in the scans. The surviving samples are discussed in two different regions. (i) In the standard-model-like coupling region of the 125 GeV Higgs, mA is allowed to be as low as 350 GeV, and a strong upper limit is imposed on tan β. mH is allowed to be as low as 200 GeV for appropriate values of tan β, sin(β−α), and mA, but is required to be larger than 300 GeV for mA = 700 GeV. (ii) In the wrong-sign Yukawa coupling region of the 125 GeV Higgs, the b b̄ → A/H → τ+τ− channel can impose upper limits on tan β and sin(β−α), and the A → hZ channel can give lower limits on tan β and sin(β−α). mA and mH are allowed to be as low as 60 and 200 GeV, respectively, but 320 GeV

  8. Improving NPP availability using thermalhydraulic integral plant models. Assessment and application of turbine run back scenarios

    Energy Technology Data Exchange (ETDEWEB)

    Reventos, F. [ANACNV, l' Hospitalet de l' Infant, Tarragona (Spain)]|[Technical University of Catalonia, UPC (Spain); Llopis, C.; Pretel, C. [Technical University of Catalonia, UPC (Spain); Posada, J.M.; Moreno, P. [Pablo Moreno S.A. (Spain)

    2001-07-01

    ANAV is the utility responsible for the Asco and Vandellos Nuclear Power Plants, a two-unit and a single-unit 1000 MW PWR plant, respectively. Both plants have been in normal operation since 1983 and 1987, respectively, and have undergone different important improvements such as steam generator and turbine replacement and power up-rating. Best-estimate simulation by means of thermal-hydraulic integral models of operating nuclear power plants is today impressively helpful for utilities in their purpose of improving availability while keeping the safety level. ANAV is currently using Relap5/mod3.2 models of both plants for different purposes related to safety, operation, engineering and training. The turbine run-back system is designed to avoid reactor trips, and it does so in the existing plants when the key parameters are correctly adjusted. The fine adjustment of such parameters was traditionally performed following the results of control simulators. Such simulators used a fully developed set of control equations and a quite simplified thermal-hydraulic feedback. Boundary scenarios were considered in order to overcome the difficulties generated by this simplification. (author)

  9. Operation and Performance of the Upgraded CMS Calorimeter Trigger in LHC Run 2

    CERN Document Server

    AUTHOR|(CDS)2071552

    2015-01-01

    The Large Hadron Collider (LHC) at CERN is preparing for the physics program for Run 2. The center-of-mass energy has risen from 8 to 13 TeV and the instantaneous luminosity will increase for both proton and heavy-ion running. This will make it more challenging to trigger on interesting events since the number of interactions per crossing (pile-up) and the overall trigger rate will be significantly larger than LHC Run 1. The Compact Muon Solenoid (CMS) experiment has installed a two-stage upgrade to their Calorimeter Trigger to ensure that the trigger rates can be controlled and the thresholds can stay low, so that physics data collection will not be compromised. The first-stage upgrade is installed and includes new electronics and duplicated optical links so that the LHC Run 1 CMS calorimeter trigger is still functional and algorithms can be developed while data taking continues. The second-stage will fully replace the calorimeter trigger at CMS with AMC form-factor boards and an optical link system, and...

  10. Operation of the upgraded ATLAS Central Trigger Processor during the LHC Run 2

    DEFF Research Database (Denmark)

    Bertelsen, H.; Montoya, G. Carrillo; Deviveiros, P. O.

    2016-01-01

    The ATLAS Central Trigger Processor (CTP) is responsible for forming the Level-1 trigger decision based on the information from the calorimeter and muon trigger processors. In order to cope with the increase of luminosity and physics cross-sections in Run 2, several components of this system have...

  11. Operation of the DC current transformer intensity monitors at FNAL during run II

    Energy Technology Data Exchange (ETDEWEB)

    Crisp, J.; Fellenz, B.; Heikkinen, D.; Ibrahim, M.A.; Meyer, T.; Vogel, G.; /Fermilab

    2012-01-01

    Circulating beam intensity measurements at FNAL are provided by five DC current transformers (DCCT), one per machine. With the exception of the DCCT in the Recycler, all DCCT systems were designed and built at FNAL. This paper presents an overview of both DCCT systems, including the sensor, the electronics, and the front-end instrumentation software, as well as their performance during Run II.

  12. Long-run growth rate in a random multiplicative model

    Energy Technology Data Exchange (ETDEWEB)

    Pirjol, Dan [Institute for Physics and Nuclear Engineering, 077125 Bucharest (Romania)

    2014-08-01

    We consider the long-run growth rate of the average value of a random multiplicative process x_{i+1} = a_i x_i, where the multipliers a_i = 1 + ρ exp(σW_i − σ²t_i/2) have Markovian dependence given by the exponential of a standard Brownian motion W_i. The average value ⟨x_n⟩ is given by the grand partition function of a one-dimensional lattice gas with two-body linear attractive interactions placed in a uniform field. We study the Lyapunov exponent λ = lim_{n→∞} (1/n) log⟨x_n⟩, at fixed β = σ²t_n n/2, and show that it is given by the equation of state of the lattice gas in thermodynamical equilibrium. The Lyapunov exponent has discontinuous partial derivatives along a curve in the (ρ, β) plane ending at a critical point (ρ_C, β_C) which is related to a phase transition in the equivalent lattice gas. Using the equivalence of the lattice gas with a bosonic system, we obtain the exact solution for the equation of state in the thermodynamical limit n → ∞.
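The setup can be reproduced numerically. The sketch below simulates the process with illustrative parameter values (ρ, σ, and the time grid are arbitrary choices, not taken from the paper) and estimates the growth rate of the average value:

```python
import numpy as np

rng = np.random.default_rng(1)
n_steps, n_paths = 50, 20000
rho, sigma, dt = 0.05, 0.3, 0.02   # illustrative values only

# standard Brownian motion sampled on the grid t_i = i*dt, one row per path
dW = rng.standard_normal((n_paths, n_steps)) * np.sqrt(dt)
W = np.cumsum(dW, axis=1)
t = dt * np.arange(1, n_steps + 1)

# multipliers a_i = 1 + rho*exp(sigma*W_i - sigma^2*t_i/2) and x_n = prod a_i
a = 1.0 + rho * np.exp(sigma * W - 0.5 * sigma**2 * t)
x_n = np.prod(a, axis=1)

# growth rate of the *average* value, (1/n) log <x_n>
lam = np.log(x_n.mean()) / n_steps
```

Since exp(σW_i − σ²t_i/2) is a mean-one martingale, λ stays close to log(1+ρ) for small σ²t, with a positive correction from the correlations between multipliers that the paper studies.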

  13. Model based control for run-of-river system. Part 2: Comparison of control structures

    Directory of Open Access Journals (Sweden)

    Liubomyr Vytvytskyi

    2015-10-01

    Full Text Available Optimal operation and control of a run-of-river hydro power plant depend on good knowledge of the elements of the plant in the form of models. Both the control architecture of the system, i.e. the choice of inputs and outputs, and the degree to which a model is used will affect the achievable control performance. Here, a model of a river reach based on the Saint Venant equations for open channel flow illustrates the dynamics of the run-of-river system. The hyperbolic partial differential equations are discretized using the Kurganov-Petrova central upwind scheme - see Part I for details. A comparison is given of the achievable control performance using two alternative control signals: the inlet or the outlet volumetric flow rate of the system, in combination with a number of different control structures such as PI control, PI control with a Smith predictor, and predictive control. The control objective is to keep the level just in front of the dam as high as possible, with little variation in the level, to avoid overflow over the dam. With a step change in the volumetric inflow to the river reach (disturbance) and using the volumetric outflow as the control signal, PI control gives quite good performance. Model predictive control (MPC) gives superior control in the sense of constraining the variation in the water level, at the cost of a longer computational time and thus constraints on the possible sample time. Details on controller tuning are given. With the volumetric inflow to the river reach as the control signal and the outflow (production) as the disturbance, a considerable time delay is introduced in the control signal. Because of nonlinearity in the system (varying time delay, etc.), it is difficult to achieve stable closed-loop performance using a simple PI controller. However, by combining a PI controller with a Smith predictor based on a simple integrator + fixed time delay model, stable closed-loop operation is possible with decent control performance. 
Still, an MPC
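The closing idea of the abstract, a PI controller combined with a Smith predictor for a delay-dominated plant, can be sketched on a simplified "integrator + fixed time delay" model. The gains and delay below are illustrative, not the tuning from the paper:

```python
from collections import deque

def simulate(n_steps=600, dt=1.0, delay=10, kp=0.08, ki=0.004, r=1.0):
    """PI + Smith predictor on a plant modelled as an integrator with input delay."""
    K = 1.0                                      # plant integrator gain
    y = 0.0                                      # measured plant output
    ym = 0.0                                     # delay-free internal model output
    u_buf = deque([0.0] * delay, maxlen=delay)   # plant input delay line
    ym_buf = deque([0.0] * delay, maxlen=delay)  # internal model delay line
    integ = 0.0
    hist = []
    for _ in range(n_steps):
        # Smith predictor: feed back the measured output plus the difference
        # between the delay-free model and its delayed copy; with a perfect
        # model this removes the delay from the control loop entirely
        y_fb = y + (ym - ym_buf[0])
        e = r - y_fb
        integ += e * dt
        u = kp * e + ki * integ
        # true plant: integrate the *delayed* input
        y += K * u_buf[0] * dt
        u_buf.append(u)
        # internal model: integrate the current input, keep a delayed copy
        ym_buf.append(ym)
        ym += K * u * dt
        hist.append(y)
    return hist

hist = simulate()   # settles close to the setpoint r = 1.0
```

With a perfect internal model the delayed terms cancel exactly and the controller effectively sees a delay-free integrator, which is why PI gains that would be unstable on the raw delayed plant work here.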

  14. Biases in modeled surface snow BC mixing ratios in prescribed-aerosol climate model runs

    OpenAIRE

    Doherty, S. J.; C. M. Bitz; M. G. Flanner

    2014-01-01

    Black carbon (BC) in snow lowers its albedo, increasing the absorption of sunlight, leading to positive radiative forcing, climate warming and earlier snowmelt. A series of recent studies have used prescribed-aerosol deposition flux fields in climate model runs to assess the forcing by black carbon in snow. In these studies, the prescribed mass deposition flux of BC to surface snow is decoupled from the mass deposition flux of snow water to the surface. Here we compare progn...

  15. Running Large-Scale Air Pollution Models on Parallel Computers

    DEFF Research Database (Denmark)

    Georgiev, K.; Zlatev, Z.

    2000-01-01

    Proceedings of the 23rd NATO/CCMS International Technical Meeting on Air Pollution Modeling and Its Application, held 28 September - 2 October 1998, in Varna, Bulgaria.

  16. Modelling of Batch Process Operations

    DEFF Research Database (Denmark)

    2011-01-01

    Here a batch cooling crystalliser is modelled and simulated as is a batch distillation system. In the batch crystalliser four operational modes of the crystalliser are considered, namely: initial cooling, nucleation, crystal growth and product removal. A model generation procedure is shown that s...

  17. Renormalisation running of masses and mixings in UED models

    CERN Document Server

    Cornell, A S; Liu, Lu-Xin; Tarhini, Ahmad

    2012-01-01

    We review the Universal Extra-Dimensional Model compactified on a S1/Z2 orbifold, and the renormalisation group evolution of quark and lepton masses, mixing angles and phases both in the UED extension of the Standard Model and of the Minimal Supersymmetric Standard Model. We consider two typical scenarios: all matter fields propagating in the bulk, and matter fields constrained to the brane. The resulting renormalisation group evolution equations in these scenarios are compared with the existing results in the literature, together with their implications.

  18. Short-Run Asset Selection using a Logistic Model

    Directory of Open Access Journals (Sweden)

    Walter Gonçalves Junior

    2011-06-01

    Full Text Available Investors constantly look for significant predictors and accurate models to forecast future results, whose occasional efficacy ends up being neutralized by market efficiency. Regardless, such predictors are widely used in the search for better (and more unique) perceptions. This paper aims to investigate to what extent some of the most notorious indicators have discriminatory power to select stocks, and whether it is feasible to build models with such variables that could anticipate stocks with good performance. To that end, logistic regressions were conducted on stocks traded at Bovespa, using the selected indicators as explanatory variables. Among the indicators investigated, the Bovespa Index return, liquidity, the Sharpe Ratio, ROE, MB, size and age proved to be significant predictors. Half-year logistic models were also adjusted in order to check their potentially acceptable discriminatory power for asset selection.
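A minimal version of the estimation step reads as follows; the indicator values and coefficients are synthetic stand-ins for the Bovespa data, and plain gradient ascent replaces whatever estimation package the authors used:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 500
X = rng.standard_normal((n, 3))      # stand-ins for standardized indicators
true_w = np.array([1.5, -0.8, 0.5])  # invented "true" effects
p = 1 / (1 + np.exp(-(X @ true_w)))
y = rng.binomial(1, p)               # 1 = "good performer", 0 = not

# fit by gradient ascent on the log-likelihood (no intercept, for brevity)
w = np.zeros(3)
for _ in range(2000):
    pred = 1 / (1 + np.exp(-(X @ w)))
    w += 0.1 * X.T @ (y - pred) / n

# assets whose fitted probability exceeds a cutoff would be selected
selected = 1 / (1 + np.exp(-(X @ w))) > 0.5
```

The fitted coefficients recover the signs and rough magnitudes of the generating ones, which is the discriminatory-power question the paper asks of the real indicators.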

  19. Dynamical system approach to running $\\Lambda$ cosmological models

    CERN Document Server

    Stachowski, Aleksander

    2016-01-01

    We discussed the dynamics of cosmological models in which the cosmological constant term is a time-dependent function through the scale factor $a(t)$, Hubble function $H(t)$, Ricci scalar $R(t)$ and scalar field $\phi(t)$. We considered five classes of models; two non-covariant parametrizations of $\Lambda$: 1) $\Lambda(H)$CDM cosmologies where $H(t)$ is the Hubble parameter, 2) $\Lambda(a)$CDM cosmologies where $a(t)$ is the scale factor, and three covariant parametrizations of $\Lambda$: 3) $\Lambda(R)$CDM cosmologies, where $R(t)$ is the Ricci scalar, 4) $\Lambda(\phi)$-cosmologies with diffusion, 5) $\Lambda(X)$-cosmologies, where $X=\frac{1}{2}g^{\alpha\beta}\

  20. Operation and Performance of the ATLAS Level-1 Calorimeter and Topological Triggers in Run 2

    CERN Document Server

    Weber, Sebastian Mario; The ATLAS collaboration

    2017-01-01

    In Run 2 at CERN's Large Hadron Collider, the ATLAS detector uses a two-level trigger system to reduce the event rate from the nominal collision rate of 40 MHz to the event storage rate of 1 kHz, while preserving interesting physics events. The first step of the trigger system, Level-1, reduces the event rate to 100 kHz within a latency of less than $2.5$ $\\mu\\text{s}$. One component of this system is the Level-1 Calorimeter Trigger (L1Calo), which uses coarse-granularity information from the electromagnetic and hadronic calorimeters to identify regions of interest corresponding to electrons, photons, taus, jets, and large amounts of transverse energy and missing transverse energy. In these proceedings, we discuss improved features and performance of the L1Calo system in the challenging, high-luminosity conditions provided by the LHC in Run 2. A new dynamic pedestal correction algorithm reduces pile-up effects and the use of variable thresholds and isolation criteria for electromagnetic objects allows for opt...

  1. RHIC polarized proton-proton operation at 100 GeV in Run 15

    Energy Technology Data Exchange (ETDEWEB)

    Schoefer, V. [Brookhaven National Laboratory (BNL), Upton, NY (United States); Aschenauer, E. C. [Brookhaven National Laboratory (BNL), Upton, NY (United States); Atoian, G. [Brookhaven National Laboratory (BNL), Upton, NY (United States); Blaskiewicz, M. [Brookhaven National Laboratory (BNL), Upton, NY (United States); Brown, K. A. [Brookhaven National Laboratory (BNL), Upton, NY (United States); Bruno, D. [Brookhaven National Laboratory (BNL), Upton, NY (United States); Connolly, R. [Brookhaven National Laboratory (BNL), Upton, NY (United States); D Ottavio, T. [Brookhaven National Laboratory (BNL), Upton, NY (United States); Drees, K. A. [Brookhaven National Laboratory (BNL), Upton, NY (United States); Dutheil, Y. [Brookhaven National Laboratory (BNL), Upton, NY (United States); Fischer, W. [Brookhaven National Laboratory (BNL), Upton, NY (United States); Gardner, C. [Brookhaven National Laboratory (BNL), Upton, NY (United States); Gu, X. [Brookhaven National Laboratory (BNL), Upton, NY (United States); Hayes, T. [Brookhaven National Laboratory (BNL), Upton, NY (United States); Huang, H. [Brookhaven National Laboratory (BNL), Upton, NY (United States); Laster, J. [Brookhaven National Laboratory (BNL), Upton, NY (United States); Liu, C. [Brookhaven National Laboratory (BNL), Upton, NY (United States); Luo, Y. [Brookhaven National Laboratory (BNL), Upton, NY (United States); Makdisi, Y. [Brookhaven National Laboratory (BNL), Upton, NY (United States); Marr, G. [Brookhaven National Laboratory (BNL), Upton, NY (United States); Marusic, A. [Brookhaven National Laboratory (BNL), Upton, NY (United States); Meot, F. [Brookhaven National Laboratory (BNL), Upton, NY (United States); Mernick, K. [Brookhaven National Laboratory (BNL), Upton, NY (United States); Michnoff, R. [Brookhaven National Laboratory (BNL), Upton, NY (United States); Marusic, A. [Brookhaven National Laboratory (BNL), Upton, NY (United States); Minty, M. 
[Brookhaven National Laboratory (BNL), Upton, NY (United States); Montag, C. [Brookhaven National Laboratory (BNL), Upton, NY (United States); Morris, J. [Brookhaven National Laboratory (BNL), Upton, NY (United States); Narayan, G. [Brookhaven National Laboratory (BNL), Upton, NY (United States); Nemesure, S. [Brookhaven National Laboratory (BNL), Upton, NY (United States); Pile, P. [Brookhaven National Laboratory (BNL), Upton, NY (United States); Poblaguev, A. [Brookhaven National Laboratory (BNL), Upton, NY (United States); Ranjbar, V. [Brookhaven National Laboratory (BNL), Upton, NY (United States); Robert-Demolaize, G. [Brookhaven National Laboratory (BNL), Upton, NY (United States); Roser, T. [Brookhaven National Laboratory (BNL), Upton, NY (United States); Schmidke, W. B. [Brookhaven National Laboratory (BNL), Upton, NY (United States); Severino, F. [Brookhaven National Laboratory (BNL), Upton, NY (United States); Shrey, T. [Brookhaven National Laboratory (BNL), Upton, NY (United States); Smith, K. [Brookhaven National Laboratory (BNL), Upton, NY (United States); Steski, D. [Brookhaven National Laboratory (BNL), Upton, NY (United States); Tepikian, S. [Brookhaven National Laboratory (BNL), Upton, NY (United States); Trbojevic, D. [Brookhaven National Laboratory (BNL), Upton, NY (United States); Tsoupas, N. [Brookhaven National Laboratory (BNL), Upton, NY (United States); Tuozzolo, J. [Brookhaven National Laboratory (BNL), Upton, NY (United States); Wang, G. [Brookhaven National Laboratory (BNL), Upton, NY (United States); White, S. [Brookhaven National Laboratory (BNL), Upton, NY (United States); Yip, K. [Brookhaven National Laboratory (BNL), Upton, NY (United States); Zaltsman, A. [Brookhaven National Laboratory (BNL), Upton, NY (United States); Zelenski, A. [Brookhaven National Laboratory (BNL), Upton, NY (United States); Zeno, K. [Brookhaven National Laboratory (BNL), Upton, NY (United States); Zhang, S. Y. 
[Brookhaven National Laboratory (BNL), Upton, NY (United States)

    2015-05-03

    The first part of RHIC Run 15 consisted of ten weeks of polarized proton on proton collisions at a beam energy of 100 GeV at two interaction points. In this paper we discuss several of the upgrades to the collider complex that allowed for improved performance. The largest effort consisted of commissioning the electron lenses, one in each ring, which are designed to compensate one of the two beam-beam interactions experienced by the proton bunches. The e-lenses raise the per-bunch intensity at which luminosity becomes beam-beam limited. A new lattice was designed to create the phase advances necessary for beam-beam compensation with the e-lens, which also has an improved off-momentum dynamic aperture relative to previous runs. In order to take advantage of the new, higher intensity limit without suffering intensity-driven emittance deterioration, other features were commissioned, including a continuous transverse bunch-by-bunch damper in RHIC and a double-harmonic RF capture scheme in the Booster. Other high-intensity protections include improvements to the abort system and the installation of masks to intercept beam lost due to abort kicker pre-fires.

  2. Making Deformable Template Models Operational

    DEFF Research Database (Denmark)

    Fisker, Rune

    2000-01-01

    Deformable template models are a very popular and powerful tool within the field of image processing and computer vision. This thesis treats this type of models extensively, with special focus on handling their common difficulties, i.e. model parameter selection, initialization and optimization. A proper handling of the common difficulties is essential for making the models operational by a non-expert user, which is a requirement for intensifying and commercializing the use of deformable template models. The thesis is organized as a collection of the most important articles published during the Ph.D. project. To put these articles into the general context of deformable template models and to pass on an overview of the deformable template model literature, the thesis starts with a compact survey of that literature with special focus on representation.

  3. Implementation of the ATLAS Run 2 event data model

    CERN Document Server

    Buckley, Andrew; Elsing, Markus; Gillberg, Dag Ingemar; Koeneke, Karsten; Krasznahorkay, Attila; Moyse, Edward; Nowak, Marcin; Snyder, Scott; van Gemmeren, Peter

    2015-01-01

    During the 2013--2014 shutdown of the Large Hadron Collider, ATLAS switched to a new event data model for analysis, called the xAOD. A key feature of this model is the separation of the object data from the objects themselves (the `auxiliary store'). Rather than being stored as member variables of the analysis classes, all object data are stored separately, as vectors of simple values. Thus, the data are stored in a `structure of arrays' format, while the user can still access them as an `array of structures'. This organization allows for on-demand partial reading of objects, the selective removal of object properties, and the addition of arbitrary user-defined properties in a uniform manner. It also improves performance by increasing the locality of memory references in typical analysis code. The resulting data structures can be written to ROOT files with data properties represented as simple ROOT tree branches. This talk will focus on the design and implementation of the auxiliary store and its interaction with RO...
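The "structure of arrays" storage with "array of structures" access can be mimicked in a few lines. The class names here (`AuxStore`, `ElementProxy`) are hypothetical, not the ATLAS xAOD API:

```python
class AuxStore:
    """Flat per-property storage: property name -> vector of simple values."""
    def __init__(self):
        self.arrays = {}

    def add_property(self, name, values):
        # arbitrary user-defined properties attach in the same uniform way
        self.arrays[name] = list(values)

class ElementProxy:
    """'Array of structures' view onto one index of the auxiliary store."""
    def __init__(self, store, index):
        self._store = store
        self._index = index

    def __getattr__(self, name):
        # called only for attributes not found normally, i.e. the properties
        return self._store.arrays[name][self._index]

store = AuxStore()
store.add_property("pt", [41.2, 17.8, 9.5])
store.add_property("eta", [0.3, -1.1, 2.0])

jets = [ElementProxy(store, i) for i in range(3)]
leading = jets[0].pt          # object-style access...
pts = store.arrays["pt"]      # ...while each property stays contiguous
```

Because each property lives in its own flat vector, reading only `pt` for all objects, dropping a property, or attaching a new one never touches the other vectors, which is the point of the auxiliary-store design.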

  4. mr: A C++ library for the matching and running of the Standard Model parameters

    Science.gov (United States)

    Kniehl, Bernd A.; Pikelner, Andrey F.; Veretin, Oleg L.

    2016-09-01

    We present the C++ program library mr that allows us to reliably calculate the values of the running parameters in the Standard Model at high energy scales. The initial conditions are obtained by relating the running parameters in the MS bar renormalization scheme to observables at lower energies with full two-loop precision. The evolution is then performed in accordance with the renormalization group equations with full three-loop precision. Pure QCD corrections to the matching and running are included through four loops. We also provide a Mathematica interface for this program library.
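As a much-reduced illustration of what renormalization-group running means, here is the one-loop evolution of the strong coupling in closed form (mr itself performs full three-loop SM running with two-loop matching; this sketch is only the leading-order QCD piece):

```python
import math

def alpha_s(mu, alpha_mz=0.1181, mz=91.1876, nf=5):
    """Leading-order running strong coupling alpha_s(mu), five active flavours."""
    b0 = (33 - 2 * nf) / (12 * math.pi)
    return alpha_mz / (1 + b0 * alpha_mz * math.log(mu**2 / mz**2))

a_1tev = alpha_s(1000.0)   # runs down to roughly 0.088 at 1 TeV
```

The higher-loop running and scheme matching that mr implements refine this same picture: couplings fixed by low-energy observables are evolved to high scales by integrating the renormalization group equations.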

  5. mr: a C++ library for the matching and running of the Standard Model parameters

    CERN Document Server

    Kniehl, Bernd A; Veretin, Oleg L

    2016-01-01

    We present the C++ program library mr that allows us to reliably calculate the values of the running parameters in the Standard Model at high energy scales. The initial conditions are obtained by relating the running parameters in the $\\overline{\\mathrm{MS}}$ renormalization scheme to observables at lower energies with full two-loop precision. The evolution is then performed in accordance with the renormalization group equations with full three-loop precision. Pure QCD corrections to the matching and running are included through four loops. We also provide a Mathematica interface for this program library.

  6. Searching For Exotic Physics Beyond the Standard Model: Extrapolation Until the End of Run-3

    CERN Document Server

    Genest, Marie-Hélène; The ATLAS collaboration

    2017-01-01

    The prospects of looking for exotic beyond-the-Standard-Model physics with the ATLAS and CMS detectors at the LHC in the rest of Run-2 and in Run-3 will be reviewed. A few selected analyses will be discussed, showing the gain in sensitivity that can be achieved by accumulating more data and comparing the current limits with the predicted reach. Some limiting factors will be identified, along with ideas on how to improve on the searches.

  7. ATLAS Liquid Argon Calorimeters Operation and Data Quality During the 2016 Proton Run

    CERN Document Server

    Pascuzzi, Vincent; The ATLAS collaboration

    2017-01-01

    ATLAS operated with high efficiency during the 2016 pp data-taking period with 25 ns bunch spacing at √s = 13 TeV, recording approximately 34 fb-1 of good physics data. The Liquid Argon (LAr) Calorimeters contributed to this effort by providing a high data quality efficiency. This poster highlights the overall status, operations, data quality and performance of the LAr Calorimeters in 2016.

  8. Dynamic sensitivity analysis of long running landslide models through basis set expansion and meta-modelling

    Science.gov (United States)

    Rohmer, Jeremy

    2016-04-01

    Predicting the temporal evolution of landslides is typically supported by numerical modelling. Dynamic sensitivity analysis aims at assessing the influence of the landslide properties on the time-dependent predictions (e.g., time series of landslide displacements). Yet two major difficulties arise: 1. Global sensitivity analysis requires running the landslide model a large number of times (> 1000), which may become impracticable when the landslide model has a high computational cost (> several hours); 2. Landslide model outputs are not scalar but functions of time, i.e. they are n-dimensional vectors with n usually ranging from 100 to 1000. In this article, I explore the use of a basis set expansion, such as principal component analysis, to reduce the output dimensionality to a few components, each of them being interpreted as a dominant mode of variation in the overall structure of the temporal evolution. The computationally intensive calculation of the Sobol' indices for each of these components is then achieved through meta-modelling, i.e. by replacing the landslide model with a "costless-to-evaluate" approximation (e.g., a projection pursuit regression model). The methodology combining "basis set expansion - meta-model - Sobol' indices" is then applied to the La Frasse landslide to investigate the dynamic sensitivity of the surface horizontal displacements to the slip surface properties during pore pressure changes. I show how to extract information on the sensitivity of each main mode of temporal behaviour using a limited number (a few tens) of long running simulations. In particular, I identify the parameters which trigger the occurrence of a turning point marking a shift between a regime of low values of landslide displacements and one of high values.
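
The basis set expansion step can be sketched on a toy ensemble: the time series below are synthetic stand-ins (two invented temporal modes plus noise), not La Frasse model outputs, and serve only to show how a few principal components capture the temporal variability on which Sobol' indices would then be computed.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical ensemble: 50 model runs, each a 200-step displacement series
# built from two dominant temporal modes plus small noise.
t = np.linspace(0.0, 1.0, 200)
modes = np.vstack([np.sin(2 * np.pi * t), t])       # two "modes of variation"
coeffs = rng.normal(size=(50, 2))                    # run-to-run variability
runs = coeffs @ modes + 0.01 * rng.normal(size=(50, 200))

# Basis set expansion via principal component analysis (SVD of centred data).
centred = runs - runs.mean(axis=0)
U, s, Vt = np.linalg.svd(centred, full_matrices=False)
explained = s**2 / np.sum(s**2)

# A couple of components capture nearly all temporal variability; sensitivity
# analysis then targets the component scores instead of 200 outputs.
print(explained[:2].sum())
```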

  9. Biases in modeled surface snow BC mixing ratios in prescribed aerosol climate model runs

    OpenAIRE

    Doherty, S. J.; C. M. Bitz; M. G. Flanner

    2014-01-01

    A series of recent studies have used prescribed aerosol deposition flux fields in climate model runs to assess forcing by black carbon in snow. In these studies, the prescribed mass deposition flux of BC to surface snow is decoupled from the mass deposition flux of snow water to the surface. Here we use a series of offline calculations to show that this approach results, on average, in a factor of about 1.5–2.5 high bias in annual-mean surface snow BC mixing ratios in three ...

  10. Modeling driver stop/run behavior at the onset of a yellow indication considering driver run tendency and roadway surface conditions.

    Science.gov (United States)

    Elhenawy, Mohammed; Jahangiri, Arash; Rakha, Hesham A; El-Shawarby, Ihab

    2015-10-01

    The ability to model driver stop/run behavior at signalized intersections considering the roadway surface condition is critical in the design of advanced driver assistance systems. Such systems can reduce intersection crashes and fatalities by predicting driver stop/run behavior. The research presented in this paper uses data collected from two controlled field experiments on the Smart Road at the Virginia Tech Transportation Institute (VTTI) to model driver stop/run behavior at the onset of a yellow indication for different roadway surface conditions. The paper offers two contributions. First, it introduces a new predictor related to driver aggressiveness and demonstrates that this measure enhances the modeling of driver stop/run behavior. Second, it applies well-known artificial intelligence techniques including: adaptive boosting (AdaBoost), random forest, and support vector machine (SVM) algorithms as well as traditional logistic regression techniques on the data in order to develop a model that can be used by traffic signal controllers to predict driver stop/run decisions in a connected vehicle environment. The research demonstrates that by adding the proposed driver aggressiveness predictor to the model, there is a statistically significant increase in the model accuracy. Moreover the false alarm rate is significantly reduced but this reduction is not statistically significant. The study demonstrates that, for the subject data, the SVM machine learning algorithm performs the best in terms of optimum classification accuracy and false positive rates. However, the SVM model produces the best performance in terms of the classification accuracy only.
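
The kind of classifier the paper benchmarks can be sketched with plain logistic regression on synthetic data. All feature names, coefficients and data below are hypothetical placeholders, not fitted VTTI values; the aggressiveness score stands in for the paper's proposed predictor.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic stand-in for the field data: time-to-intersection (s), speed (m/s)
# and a driver-aggressiveness score; label 1 = "run", 0 = "stop".
n = 400
tti = rng.uniform(1.0, 6.0, n)
speed = rng.uniform(10.0, 25.0, n)
aggr = rng.uniform(0.0, 1.0, n)
true_logit = 2.0 - 1.2 * tti + 0.1 * speed + 1.5 * aggr
y = (rng.uniform(size=n) < 1.0 / (1.0 + np.exp(-true_logit))).astype(float)

# Plain logistic regression fitted by gradient descent -- the classical
# baseline the paper compares against AdaBoost, random forest and SVM.
feats = np.column_stack([tti, speed, aggr])
feats = (feats - feats.mean(axis=0)) / feats.std(axis=0)  # standardize
X = np.column_stack([np.ones(n), feats])
w = np.zeros(4)
for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-X @ w))
    w -= 0.5 * X.T @ (p - y) / n  # gradient step on the log-loss

acc = float(np.mean((1.0 / (1.0 + np.exp(-X @ w)) > 0.5) == (y == 1.0)))
print(round(acc, 3))
```

Adding the aggressiveness column is what the paper argues yields a statistically significant accuracy gain; here it simply enters as a third standardized feature.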

  11. Operational experience of running multicasing gas compression trains on a North Sea platform

    Energy Technology Data Exchange (ETDEWEB)

    Hancock, W.P.

    1986-07-01

    This paper describes the difficulties of operating multicasing compression trains in parallel and the special problems associated with a very-high-pressure centrifugal gas-injection compressor. The Statfjord B platform features one of the most complex gas-compression systems in the world for an offshore platform and uses some of the most advanced centrifugal gas compressors and aeroderivative gas turbines currently available. Four different gases flashing from the crude in flash drums of cascading pressures are recompressed and injected into the producing reservoir at pressures as high as 45 000 kPa (450 bar gauge). Conservation of valuable associated gas from offshore oil-production facilities demands high gas-compression efficiencies. Therefore, timely resolution of operational problems is paramount. This paper details the operational problems and their resolutions, which helped the Statfjord B platform attain and exceed its design output.

  12. Data-driven modelling of vertical dynamic excitation of bridges induced by people running

    Science.gov (United States)

    Racic, Vitomir; Morin, Jean Benoit

    2014-02-01

    With increasingly popular marathon events in urban environments, structural designers face a great deal of uncertainty when assessing dynamic performance of bridges occupied and dynamically excited by people running. While the dynamic loads induced by pedestrians walking have been intensively studied since the infamous lateral sway of the London Millennium Bridge in 2000, reliable and practical descriptions of running excitation are still very rare and limited. This interdisciplinary study has addressed the issue by bringing together a database of individual running force signals recorded by two state-of-the-art instrumented treadmills and two attempts to mathematically describe the measurements. The first modelling strategy is adopted from the available design guidelines for human walking excitation of structures, featuring perfectly periodic and deterministic characterisation of pedestrian forces presentable via Fourier series. This modelling approach proved to be inadequate for running loads due to the inherent near-periodic nature of the measured signals, a great inter-personal randomness of the dominant Fourier amplitudes and the lack of strong correlation between the amplitudes and running footfall rate. Hence, utilising the database established and motivated by the existing models of wind and earthquake loading, speech recognition techniques and a method of replicating electrocardiogram signals, this paper finally presents a numerical generator of random near-periodic running force signals which can reliably simulate the measurements. Such a model is an essential prerequisite for future quality models of dynamic loading induced by individuals, groups and crowds running under a wide range of conditions, such as perceptibly vibrating bridges and different combinations of visual, auditory and tactile cues.
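
The perfectly periodic Fourier characterisation that the paper finds inadequate can be sketched as a harmonic series of the footfall rate; the small per-harmonic phase jitter below only hints at the near-periodicity of measured signals. Body weight, footfall rate and dynamic load factors are illustrative placeholders, not values from the treadmill database.

```python
import numpy as np

rng = np.random.default_rng(2)

weight = 700.0           # static body weight, N (hypothetical runner)
f_step = 2.8             # footfall rate, Hz (hypothetical)
dlf = [1.3, 0.15, 0.05]  # dynamic load factors for harmonics 1..3 (assumed)

t = np.linspace(0.0, 5.0, 5000)
force = np.full_like(t, weight)
for k, a in enumerate(dlf, start=1):
    # Random phase jitter per harmonic mimics the near-periodicity that a
    # strictly periodic Fourier model cannot reproduce.
    phase = rng.normal(0.0, 0.05)
    force += weight * a * np.sin(2 * np.pi * k * f_step * t + phase)

print(round(force.mean() / weight, 2))  # time average stays near body weight
```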

  13. Higher-order effects in asset-pricing models with long-run risks

    NARCIS (Netherlands)

    Pohl, W.; Schmedders, K.; Wilms, Ole

    2017-01-01

    This paper shows that the latest generation of asset pricing models with long-run risk exhibits economically significant nonlinearities, and thus the ubiquitous Campbell--Shiller log-linearization can generate large numerical errors. These errors in turn translate to considerable errors in the model

  14. Operation of the 56 MHz superconducting RF cavity in RHIC during run 14

    Energy Technology Data Exchange (ETDEWEB)

    Wu, Q. [Brookhaven National Lab. (BNL), Upton, NY (United States); Belomestnykh, S. [Brookhaven National Lab. (BNL), Upton, NY (United States); Stony Brook Univ., NY (United States); Ben-Zvi, I. [Brookhaven National Lab. (BNL), Upton, NY (United States); Stony Brook Univ., NY (United States); Blaskiewicz, M. [Brookhaven National Lab. (BNL), Upton, NY (United States); Hayes, T. [Brookhaven National Lab. (BNL), Upton, NY (United States); Mernick, K. [Brookhaven National Lab. (BNL), Upton, NY (United States); Severino, F. [Brookhaven National Lab. (BNL), Upton, NY (United States); Smith, K. [Brookhaven National Lab. (BNL), Upton, NY (United States); Zaltsman, A. [Brookhaven National Lab. (BNL), Upton, NY (United States)

    2015-09-11

    A 56 MHz superconducting RF cavity was designed and installed in the Relativistic Heavy Ion Collider (RHIC). It is the first superconducting quarter wave resonator (QWR) operating in a high-energy storage ring. We discuss herein the cavity operation with Au+Au collisions, and with asymmetrical Au+He3 collisions. The cavity is a storage cavity, meaning that it becomes active only at the energy of experiment, after the acceleration cycle is completed. With the cavity at 300 kV, an improvement in luminosity was detected from direct measurements, and the bunch length has been reduced. The uniqueness of the QWR demands an innovative design of the higher order mode dampers with high-pass filters, and a distinctive fundamental mode damper that enables the cavity to be bypassed during the acceleration stage.

  15. Running Effects on Lepton Mixing Angles in Flavour Models with Type I Seesaw

    CERN Document Server

    Lin, Y; Paris, A

    2009-01-01

    We study renormalization group running effects on neutrino mixing patterns when a (type I) seesaw model is implemented by suitable flavour symmetries. We are particularly interested in mass-independent mixing patterns to which the widely studied tribimaximal mixing pattern belongs. In this class of flavour models, the running contribution from neutrino Yukawa coupling, which is generally dominant at energies above the seesaw threshold, can be absorbed by a small shift on neutrino mass eigenvalues leaving mixing angles unchanged. Consequently, in the whole running energy range, the change in mixing angles is due to the contribution coming from charged lepton sector. Subsequently, we analyze in detail these effects in an explicit flavour model for tribimaximal neutrino mixing based on an A4 discrete symmetry group. We find that for normally ordered light neutrinos, the tribimaximal prediction is essentially stable under renormalization group evolution. On the other hand, in the case of inverted hierarchy, the d...

  16. Machine Protection at the LHC – Experience of Three Years Running and Outlook for Operation at Nominal Energy

    CERN Document Server

    Wollmann, D; Wenninger, J; Zerlauth, M

    2013-01-01

    With more than 22 fb-1 integrated luminosity delivered to the experiments ATLAS and CMS, the LHC surpassed the results of 2011 by more than a factor 5. This was achieved at 4 TeV, with intensities of ~2e14 p per beam. The uncontrolled loss of only a small fraction of the stored beam is sufficient to damage parts of the superconducting magnet system, accelerator equipment or the particle physics experiments. To protect against such losses, a correct functioning of the complex LHC machine protection (MP) systems through the operational cycle is essential. Operating with up to 140 MJ stored beam energy was only possible due to the experience and confidence gained in the two previous running periods, where the intensity was slowly increased. In this paper the 2012 performance of the MP systems is discussed. The strategy applied for a fast, but safe, intensity ramp up and the monitoring of the MP systems during stable running periods are presented. Weaknesses in the reliability of the MP systems, set-up procedures...

  17. Operation and Performance of a new microTCA-based CMS Calorimeter Trigger in LHC Run 2

    CERN Document Server

    Klabbers, Pamela Renee

    2016-01-01

    The Large Hadron Collider (LHC) at CERN is currently increasing the instantaneous luminosity for p-p collisions. In LHC Run 2, the center-of-mass energy has gone from 8 to 13 TeV and the instantaneous luminosity will approximately double for proton collisions. This will make it even more challenging to trigger on interesting events since the number of interactions per crossing (pileup) and the overall trigger rate will be significantly larger than in LHC Run 1. The Compact Muon Solenoid (CMS) experiment has installed the second stage of a two-stage upgrade to the Calorimeter Trigger to ensure that the trigger rates can be controlled and the thresholds kept low, so that physics data will not be compromised. The stage-1, which replaced the original CMS Global Calorimeter Trigger, operated successfully in 2015. The completely new stage-2 has replaced the entire calorimeter trigger in 2016 with AMC form-factor boards and optical links operating in a microTCA chassis. It required that updates to the calorimet...

  18. Models of production runs for multiple products in flexible manufacturing system

    Directory of Open Access Journals (Sweden)

    Ilić Oliver

    2011-01-01

    Full Text Available How to determine economic production runs (EPR) for multiple products in flexible manufacturing systems (FMS) is considered in this paper. Eight different, although similar, models are developed and presented. The first four models are devoted to cases where no shortage is allowed. The other four models generalize the previous ones to the case where shortages may exist. Numerical examples are given as illustrations of the proposed models.
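
The classical single-product economic production quantity (EPQ) formula is the textbook building block that such multi-product EPR models generalize; a minimal sketch with invented example numbers (the paper's eight models are not reproduced here):

```python
import math

def economic_production_run(demand, setup_cost, holding_cost, production_rate):
    """Classical EPQ without shortages: the run size that minimizes the sum
    of setup and inventory-holding costs for a finite production rate."""
    if production_rate <= demand:
        raise ValueError("production rate must exceed demand rate")
    return math.sqrt(
        2.0 * demand * setup_cost
        / (holding_cost * (1.0 - demand / production_rate))
    )

# e.g. 12000 units/yr demand, 200 setup cost, 4.0 holding cost per unit-year,
# 36000 units/yr capacity (all hypothetical)
q = economic_production_run(12000, 200, 4.0, 36000)
print(round(q))  # → 1342
```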

  19. Running Linux

    CERN Document Server

    Dalheimer, Matthias Kalle

    2006-01-01

    The fifth edition of Running Linux is greatly expanded, reflecting the maturity of the operating system and the teeming wealth of software available for it. Hot consumer topics such as audio and video playback applications, groupware functionality, and spam filtering are covered, along with the basics in configuration and management that always made the book popular.

  20. NUMERICAL SIMULATION OF SOLITARY WAVE RUN-UP AND OVERTOPPING USING BOUSSINESQ-TYPE MODEL

    Institute of Scientific and Technical Information of China (English)

    TSUNG Wen-Shuo; HSIAO Shih-Chun; LIN Ting-Chieh

    2012-01-01

    In this article, the use of a high-order Boussinesq-type model and sets of laboratory experiments in a large scale flume of breaking solitary waves climbing up slopes with two inclinations are presented to study the shoreline behavior of breaking and non-breaking solitary waves on plane slopes. The scale effect on run-up height is briefly discussed. The model simulation capability is well validated against the available laboratory data and present experiments. Then, serial numerical tests are conducted to study the shoreline motion correlated with the effects of beach slope and wave nonlinearity for breaking and non-breaking waves. The empirical formula proposed by Hsiao et al. for predicting the maximum run-up height of a breaking solitary wave on plane slopes with a wide range of slope inclinations is confirmed to be cautious. Furthermore, solitary waves impacting and overtopping an impermeable sloping seawall at various water depths are investigated. Laboratory data of run-up height, shoreline motion, free surface elevation and overtopping discharge are presented. Comparisons of run-up, run-down, shoreline trajectory and wave overtopping discharge are made. A fairly good agreement is seen between numerical results and experimental data. It elucidates that the present depth-integrated model can be used as an efficient tool for predicting a wide spectrum of coastal problems.
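
Solitary waves used to drive such run-up simulations are commonly initialized with the first-order sech² profile of Boussinesq theory; a minimal sketch (this is the standard analytical initial condition, not the paper's high-order solver):

```python
import math

def solitary_wave_eta(x, t, H=0.5, h=1.0, g=9.81):
    """Free-surface elevation of a first-order solitary wave:
    H is wave height, h still-water depth (illustrative values)."""
    c = math.sqrt(g * (h + H))             # wave celerity
    k = math.sqrt(3.0 * H / (4.0 * h**3))  # effective wavenumber
    return H / math.cosh(k * (x - c * t))**2

# The crest, located at x = c*t, has elevation exactly H.
print(solitary_wave_eta(0.0, 0.0))
```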

  1. Running a distributed virtual observatory: U.S. Virtual Astronomical Observatory operations

    Science.gov (United States)

    McGlynn, Thomas A.; Hanisch, Robert J.; Berriman, G. Bruce; Thakar, Aniruddha R.

    2012-09-01

    Operation of the US Virtual Astronomical Observatory shares some issues with modern physical observatories, e.g., intimidating data volumes and rapid technological change, and must also address unique concerns like the lack of direct control of the underlying and scattered data resources, and the distributed nature of the observatory itself. In this paper we discuss how the VAO has addressed these challenges to provide the astronomical community with a coherent set of science-enabling tools and services. The distributed nature of our virtual observatory - with data and personnel spanning geographic, institutional and regime boundaries - is simultaneously a major operational headache and the primary science motivation for the VAO. Most astronomy today uses data from many resources. Facilitation of matching heterogeneous datasets is a fundamental reason for the virtual observatory. Key aspects of our approach include continuous monitoring and validation of VAO and VO services and the datasets provided by the community, monitoring of user requests to optimize access, caching for large datasets, and providing distributed storage services that allow users to collect results near large data repositories. Some elements are now fully implemented, while others are planned for subsequent years. The distributed nature of the VAO requires careful attention to what can be a straightforward operation at a conventional observatory, e.g., the organization of the web site or the collection and combined analysis of logs. Many of these strategies use and extend protocols developed by the international virtual observatory community. Our long-term challenge is working with the underlying data providers to ensure high quality implementation of VO data access protocols (new and better 'telescopes'), assisting astronomical developers to build robust integrating tools (new 'instruments'), and coordinating with the research community to maximize the science enabled.

  2. Comparison of Particle Flow Code and Smoothed Particle Hydrodynamics Modelling of Landslide Run outs

    Science.gov (United States)

    Preh, A.; Poisel, R.; Hungr, O.

    2009-04-01

    In most continuum mechanics methods modelling the run out of landslides, the moving mass is divided into a number of elements, the velocities of which can be established by numerical integration of Newton's second law (Lagrangian solution). The methods are based on fluid mechanics, modelling the movements of an equivalent fluid. In 2004, McDougall and Hungr presented a three-dimensional numerical model for rapid landslides, e.g. debris flows and rock avalanches, called DAN3D. The method is based on the previous work of Hungr (1995) and uses an integrated two-dimensional Lagrangian solution and the meshless Smoothed Particle Hydrodynamics (SPH) principle to maintain continuity. DAN3D has an open rheological kernel, allowing the use of frictional (with constant pore-pressure ratio) and Voellmy rheologies, and gives the possibility to change material rheology along the path. Discontinuum (granular) mechanics methods model the run out mass as an assembly of particles moving down a surface. Each particle is followed exactly as it moves and interacts with the surface and with its neighbours. Every particle is checked for contacts with every other particle in every time step, using a special cell logic for contact detection in order to reduce the computational effort. The Discrete Element code PFC3D was adapted in order to make discontinuum mechanics models of run outs possible. The Punta Thurwieser Rock Avalanche and the Frank Slide were modelled by DAN as well as by PFC3D. The simulations showed correspondingly that the parameters necessary to get results coinciding with observations in nature are completely different. The maximum velocity distributions due to DAN3D reveal that areas of different maximum flow velocity are next to each other in the Punta Thurwieser run out, whereas the distribution of maximum flow velocity shows almost constant maximum flow velocity over the width of the run out regarding the Frank Slide. Some 30 percent of total kinetic energy is rotational kinetic energy in

  3. Lunar Landing Operational Risk Model

    Science.gov (United States)

    Mattenberger, Chris; Putney, Blake; Rust, Randy; Derkowski, Brian

    2010-01-01

    Characterizing the risk of spacecraft goes beyond simply modeling equipment reliability. Some portions of the mission require complex interactions between system elements that can lead to failure without an actual hardware fault. Landing risk is currently the least characterized aspect of the Altair lunar lander and appears to result from complex temporal interactions between pilot, sensors, surface characteristics and vehicle capabilities rather than hardware failures. The Lunar Landing Operational Risk Model (LLORM) seeks to provide rapid and flexible quantitative insight into the risks driving the landing event and to gauge sensitivities of the vehicle to changes in system configuration and mission operations. The LLORM takes a Monte Carlo based approach to estimate the operational risk of the Lunar Landing Event and calculates estimates of the risk of Loss of Mission (LOM) - Abort Required and is Successful, Loss of Crew (LOC) - Vehicle Crashes or Cannot Reach Orbit, and Success. The LLORM is meant to be used during the conceptual design phase to inform decision makers transparently of the reliability impacts of design decisions, to identify areas of the design which may require additional robustness, and to aid in the development and flow-down of requirements.
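
The Monte Carlo structure of such a model can be sketched with a tiny event tree. The branch probabilities below are invented placeholders for illustration only, not Altair or LLORM numbers.

```python
import random

random.seed(42)

# Hypothetical branch probabilities for a toy landing-risk event tree.
P_SENSOR_FAULT = 0.01        # chance of a landing-phase fault per trial
P_ABORT_GIVEN_FAULT = 0.9    # successful abort -> Loss of Mission (LOM)
P_CRASH_GIVEN_NO_ABORT = 0.5 # failed abort, crash -> Loss of Crew (LOC)

N = 100_000
counts = {"success": 0, "LOM": 0, "LOC": 0}
for _ in range(N):
    if random.random() < P_SENSOR_FAULT:
        if random.random() < P_ABORT_GIVEN_FAULT:
            counts["LOM"] += 1          # abort required and successful
        elif random.random() < P_CRASH_GIVEN_NO_ABORT:
            counts["LOC"] += 1          # vehicle crashes
        else:
            counts["success"] += 1      # lands despite the fault
    else:
        counts["success"] += 1

print({k: v / N for k, v in counts.items()})
```

Sensitivity to design changes is gauged by perturbing the branch probabilities and re-running the simulation, which is the kind of rapid trade-study the LLORM is meant to support.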

  4. Newly developed dope-free coatings help improve running operations in remote protected areas

    Energy Technology Data Exchange (ETDEWEB)

    Santi, Nestor J.; Gallo, Ernesto A. [TENARIS (Brazil)

    2008-07-01

    The Oil and Gas industry has been evolving in a permanent way to reach new sources of energy or to produce in the existing ones in a more efficient way, triggering in such a way the development of new drilling, completion and production techniques, equipment and processes; among these equipment, pipes and connections are not the exception, and the requirements on material and connections performance and reliability have been increased as well. The complexity of the new wells is not only related to the architecture of the well but also to the type of environments that are being found such as H2S, CO2, high pressure and/or high temperature; therefore, for these cases, connections have to be special premium connections threaded in most of the cases on highly alloyed materials (Ni-Cr alloys). Additionally, most of the regions under exploration are offshore and/or in remote areas of the planet which are considered untouchable due to economic reasons (fishing) or preservation (endangered flora and fauna) for instance Alaska, North Atlantic, North Sea, etc. For these areas, new environmental restrictions are applied which make it difficult for the operators to use standard practices. Among the recent solutions developed for Oil and Gas industry aiming to help with the protection of the environment are the dope-free coatings. These coatings are applied on tubing and casing connections providing a real greener alternative to traditional thread compounds, while maintaining the performance of the connections, for different materials as carbon steels, 13Cr and Corrosion Resistance Alloys (Ni, Cr). In spite of being a technically sound solution, the elimination of thread compounds may lead to potential operational problems such as galling, difficulties in making-up due to low temperature, etc. In addition, it is also necessary to evaluate the interaction between the dry coatings and the different connections to be used, as the designs have to be able to allocate the coating

  5. Search for the standard model Higgs boson produced in vector boson fusion and decaying to bottom quarks using the Run1 and 2015 Run2 data samples.

    CERN Document Server

    Chernyavskaya, Nadezda

    2016-01-01

    A search for the standard model Higgs boson is presented in the Vector Boson Fusion production channel with decay to bottom quarks. A data sample comprising 2.2 fb$^{-1}$ of proton-proton collisions at $\\sqrt{s}$ = 13 TeV collected during the 2015 running period has been analyzed. Production upper limits at 95\\% Confidence Level are derived for a Higgs boson mass of 125 GeV, as well as the fitted signal strength relative to the expectation for the standard model Higgs boson. Results are also combined with the ones obtained with Run 1 $\\sqrt{s}$ = 8 TeV data collected in 2012.

  6. Impact of data assimilation of physical variables on the spring bloom from TOPAZ operational runs in the North Atlantic

    Directory of Open Access Journals (Sweden)

    A. Samuelsen

    2009-12-01

    Full Text Available A reanalysis of the North Atlantic spring bloom in 2007 was produced using the real-time analysis from the TOPAZ North Atlantic and Arctic forecasting system. The TOPAZ system uses a hybrid coordinate general circulation ocean model and assimilates physical observations: sea surface anomalies, sea surface temperatures, and sea-ice concentrations using the Ensemble Kalman Filter. This ocean model was coupled to an ecosystem model, NORWECOM (Norwegian Ecological Model System), and the TOPAZ-NORWECOM coupled model was run throughout the spring and summer of 2007. The ecosystem model was run online, restarting from analyzed physical fields (result after data assimilation) every 7 days. Biological variables were not assimilated in the model. The main purpose of the study was to investigate the impact of physical data assimilation on the ecosystem model. This was determined by comparing the results to those from a model without assimilation of physical data. The regions of focus are the North Atlantic and the Arctic Ocean. Assimilation of physical variables does not affect the results from the ecosystem model significantly. The differences between the weekly mean values of chlorophyll are normally within 5–10% during the summer months, and the maximum difference of ~20% occurs in the Arctic, also during summer. Special attention was paid to the nutrient input from the North Atlantic to the Nordic Seas and the impact of ice-assimilation on the ecosystem. The ice-assimilation increased the phytoplankton concentration: because there was less ice in the assimilation run, this increased both the mixing of nutrients during winter and the area where production could occur during summer. The forecast was also compared to remotely sensed chlorophyll, climatological nutrients, and in-situ data. The results show that the model reproduces a realistic annual cycle, but the chlorophyll concentrations tend to be between 0.1 and 1.0 mg chla/m3 too
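
The Ensemble Kalman Filter analysis step used by TOPAZ can be sketched for a single scalar state variable; the numbers below are illustrative (a hypothetical SST-like state), whereas TOPAZ applies this in high dimension with the ecosystem variables left unassimilated, as in the paper.

```python
import numpy as np

rng = np.random.default_rng(3)

# Forecast ensemble of a scalar state (e.g. SST in degrees C), and one
# direct observation with its error standard deviation (all illustrative).
ens = rng.normal(14.0, 1.0, size=100)
obs, obs_err = 15.0, 0.5

var_f = ens.var(ddof=1)
gain = var_f / (var_f + obs_err**2)  # Kalman gain for a direct observation

# Perturbed-observation EnKF update: each member sees a noisy observation.
perturbed_obs = obs + rng.normal(0.0, obs_err, size=ens.size)
analysis = ens + gain * (perturbed_obs - ens)

print(round(analysis.mean(), 2))  # pulled from ~14.0 toward the observation
```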

  7. Strong Lensing Probabilities in a Cosmological Model with a Running Primordial Power Spectrum

    CERN Document Server

    Zhang, T J; Yang, Z L; He, X T; Zhang, Tong-Jie; Chen, Da-Ming; Yang, Zhi-Liang; He, Xiang-Tao

    2004-01-01

    The combination of the first-year Wilkinson Microwave Anisotropy Probe (WMAP) data with other finer scale cosmic microwave background (CMB) experiments (CBI and ACBAR) and two structure formation measurements (2dFGRS and Lyman $\\alpha$ forest) suggests a $\\Lambda$CDM cosmological model with a running spectral power index of primordial density fluctuations. Motivated by this new result on the index of the primordial power spectrum, we present the first study of the predicted lensing probabilities of image separation in a spatially flat $\\Lambda$CDM model with a running spectral index (RSI-$\\Lambda$CDM model). It is shown that the RSI-$\\Lambda$CDM model suppresses the predicted lensing probabilities on small splitting angles of less than about 4$^{''}$ compared with those of the standard power-law $\\Lambda$CDM (PL-$\\Lambda$CDM) model.

  8. Running the running

    CERN Document Server

    Cabass, Giovanni; Melchiorri, Alessandro; Pajer, Enrico; Silk, Joseph

    2016-01-01

    We use the recent observations of Cosmic Microwave Background temperature and polarization anisotropies provided by the Planck satellite experiment to place constraints on the running $\\alpha_\\mathrm{s} = \\mathrm{d}n_{\\mathrm{s}} / \\mathrm{d}\\log k$ and the running of the running $\\beta_{\\mathrm{s}} = \\mathrm{d}\\alpha_{\\mathrm{s}} / \\mathrm{d}\\log k$ of the spectral index $n_{\\mathrm{s}}$ of primordial scalar fluctuations. We find $\\alpha_\\mathrm{s}=0.011\\pm0.010$ and $\\beta_\\mathrm{s}=0.027\\pm0.013$ at $68\\%\\,\\mathrm{CL}$, suggesting the presence of a running of the running at the level of two standard deviations. We find no significant correlation between $\\beta_{\\mathrm{s}}$ and foregrounds parameters, with the exception of the point sources amplitude at $143\\,\\mathrm{GHz}$, $A^{PS}_{143}$, which shifts by half sigma when the running of the running is considered. We further study the cosmological implications of this anomaly by including in the analysis the lensing amplitude $A_L$, the curvature parameter ...
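
The parameters $\alpha_s$ and $\beta_s$ enter as a log-expansion of the spectral index around a pivot scale; a sketch of the resulting power spectrum, using the abstract's central values for the runnings (the amplitude A_s, n_s and pivot k0 are typical Planck-era choices assumed here for illustration):

```python
import math

def primordial_power(k, A_s=2.1e-9, n_s=0.965, alpha_s=0.011,
                     beta_s=0.027, k0=0.05):
    """Primordial scalar power spectrum with running (alpha_s) and running
    of the running (beta_s) of the spectral index; k in 1/Mpc."""
    lnk = math.log(k / k0)
    exponent = (n_s - 1.0) + 0.5 * alpha_s * lnk + (beta_s / 6.0) * lnk**2
    return A_s * (k / k0) ** exponent

# At the pivot scale the spectrum equals A_s by construction.
print(primordial_power(0.05))
```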

  9. The Run up Tsunami Modeling in Bengkulu using the Spatial Interpolation of Kriging Technique

    Directory of Open Access Journals (Sweden)

    Yulian Fauzi

    2014-12-01

    Full Text Available This research aims to design a tsunami hazard zone with scenarios of varying tsunami run-up height, based on land use, slope and distance from the shoreline. The method used in this research is spatial modelling with GIS via the Ordinary Kriging interpolation technique. The best Kriging interpolation method in this study is the Circular Kriging method, which shows a good semivariogram and an RMSE value that is small compared to the other Kriging methods. The results show that the area affected by tsunami inundation depends on run-up height, slope and land use. For a run-up of 30 meters, the flooded area is about 3,148.99 hectares, or 20.7% of the total area of the city of Bengkulu.
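
The ordinary kriging technique itself can be sketched in one dimension. For simplicity this toy uses a spherical variogram rather than the circular model the paper found best, and the run-up heights and transect positions are invented, not Bengkulu data.

```python
import numpy as np

def spherical_gamma(h, sill=1.0, a=4.0):
    """Spherical variogram model (illustrative stand-in for the paper's
    circular model); sill and range a are hypothetical."""
    h = np.abs(h)
    g = sill * (1.5 * h / a - 0.5 * (h / a) ** 3)
    return np.where(h < a, g, sill)

def ordinary_kriging(xs, zs, x0):
    """Ordinary kriging estimate at x0 from samples (xs, zs)."""
    n = len(xs)
    A = np.zeros((n + 1, n + 1))
    A[:n, :n] = spherical_gamma(xs[:, None] - xs[None, :])
    A[:n, n] = A[n, :n] = 1.0  # unbiasedness constraint (weights sum to 1)
    b = np.append(spherical_gamma(xs - x0), 1.0)
    w = np.linalg.solve(A, b)
    return w[:n] @ zs

# Hypothetical run-up heights (m) along a shore-normal transect (km).
xs = np.array([0.0, 1.0, 2.5, 4.0])
zs = np.array([3.0, 2.2, 1.1, 0.4])
print(round(float(ordinary_kriging(xs, zs, 1.5)), 3))
```

A defining property of kriging is exact interpolation: evaluating at a sample location returns the sample value.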

  10. The ATLAS Run-2 Trigger: Design, Menu, Performance and Operational Aspects

    CERN Document Server

    Machado Miguens, Joana; The ATLAS collaboration

    2016-01-01

    The LHC, at design capacity, has a bunch-crossing rate of 40 MHz whereas the ATLAS experiment has an average recording rate of about 1000 Hz. To reduce the rate of events but still maintain high efficiency of selecting rare events such as physics signals beyond the Standard Model, a two-level trigger system is used in ATLAS. Events are selected based on physics signatures such as presence of energetic leptons, photons, jets or large missing energy. Despite the limited time available for processing collision events, the trigger system is able to exploit topological information, as well as using multi-variate methods. In total, the ATLAS trigger system consists of thousands of different individual triggers. The ATLAS trigger menu specifies which triggers are used during data taking and how much rate a given trigger is allocated. This menu reflects not only the physics goals of the collaboration but also takes into consideration the instantaneous luminosity of the LHC and the design limits of the ATLAS detecto...

  11. The ATLAS Run-2 Trigger Menu for higher luminosities: Design, Performance and Operational Aspects

    CERN Document Server

    Montejo Berlingen, Javier; The ATLAS collaboration

    2017-01-01

    The LHC, at design capacity, has a bunch-crossing rate of 40 MHz whereas the ATLAS experiment has an average recording rate of about 1 kHz. To reduce the rate of events, but maintain high selection efficiency for rare events such as physics signals beyond the Standard Model, a two-level trigger system is used. Events are selected based on physics signatures such as presence of energetic leptons, photons, jets or large missing energy. Despite the limited time available for processing collision events the trigger system is able to exploit topological information, as well as using multi-variate methods. In total, the ATLAS trigger system consists of thousands of different individual triggers. The ATLAS trigger menu specifies which triggers are used during data taking and how much rate a given trigger is allocated. This menu reflects not only the physics goals of the collaboration but also takes into consideration the instantaneous luminosity of the LHC and the design limits of the ATLAS detector and offline pro...

  12. The ATLAS Run-2 Trigger: Design, Menu, Performance and Operational Aspects

    CERN Document Server

    Martin, Tim; The ATLAS collaboration

    2016-01-01

    The LHC, at design capacity, has a bunch-crossing rate of 40 MHz whereas the ATLAS experiment at the LHC has an average recording rate of about 1000 Hz. To reduce the rate of events but still maintain a high efficiency of selecting rare events such as physics signals beyond the Standard Model, a two-level trigger system is used in ATLAS. Events are selected based on physics signatures such as presence of energetic leptons, photons, jets or large missing energy. Despite the limited time available for processing collision events, the trigger system is able to exploit topological information, as well as using multi-variate methods. In total, the ATLAS trigger system consists of thousands of different individual triggers. The ATLAS trigger menu specifies which triggers are used during data taking and how much rate a given trigger is allocated. This menu reflects not only the physics goals of the collaboration but also takes the instantaneous luminosity of the LHC, the design limits of the ATLAS detector and the o...

  13. A model-experiment comparison of system dynamics for human walking and running.

    Science.gov (United States)

    Lipfert, Susanne W; Günther, Michael; Renjewski, Daniel; Grimmer, Sten; Seyfarth, Andre

    2012-01-07

    The human musculo-skeletal system is highly complex, which makes it difficult to identify the basic principles underlying bipedal locomotion. A common approach to this challenge is to strip away complexity and formulate a reductive model. Despite its utter simplicity, the bipedal spring-mass model gives good predictions of human gait dynamics; however, it has not been fully investigated whether the center of mass motion over time during walking and running is comparable between the model and the human body over a wide range of speeds. To test the model's ability in this respect, we compare sagittal center of mass trajectories of model and human data for speeds ranging from 0.5 m/s to 4 m/s. For the simulations, system parameters and initial conditions are extracted from experimental observations of 28 subjects. The leg parameters, stiffness and length, are extracted by functional fitting to the subjects' leg force-length curves. With small variations of the touch-down angle of the leg and the vertical position of the center of mass at apex, we find successful spring-mass simulations for moderate walking and medium running speeds. Predictions of the sagittal center of mass trajectories and ground reaction forces are good, but their amplitudes are overestimated, while contact time is underestimated. At faster walking speeds and slower running speeds we do not find successful model locomotion within the allowed range of parameter variation. We conclude that these limitations may be overcome by adding complexity to the model.
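
    The stance dynamics of the spring-mass (SLIP) model referred to above fit in a few lines; the parameter values and touch-down state below are illustrative guesses, not the quantities fitted to the 28 subjects:

```python
import numpy as np

def slip_stance(pos0, vel0, foot, k=20e3, L0=1.0, m=80.0, g=9.81, dt=1e-4):
    """Integrate the stance phase of a planar spring-mass (SLIP) runner with
    semi-implicit Euler steps, from touch-down until the leg regains its
    rest length L0 (take-off). All parameter values are illustrative."""
    pos = np.array(pos0, dtype=float)
    vel = np.array(vel0, dtype=float)
    traj = [pos.copy()]
    for _ in range(int(1.0 / dt)):            # cap at 1 s of simulated stance
        leg = pos - foot                      # vector from foot to centre of mass
        L = np.linalg.norm(leg)
        if L >= L0 and len(traj) > 1:         # leg back at rest length: take-off
            break
        # Linear leg spring acting along the leg axis, plus gravity.
        acc = (k / m) * (L0 - L) * leg / L + np.array([0.0, -g])
        vel += acc * dt
        pos += vel * dt
        traj.append(pos.copy())
    return np.array(traj), vel

# Touch-down with the leg angled 0.3 rad behind vertical, running at 3 m/s:
foot = np.array([0.0, 0.0])
pos0 = foot + np.array([-np.sin(0.3), np.cos(0.3)])  # leg at rest length L0 = 1 m
traj, v_takeoff = slip_stance(pos0, [3.0, -0.5], foot)
```

    Chaining such stance phases with ballistic flight phases, and sweeping the touch-down angle and apex height as in the paper, yields the model trajectories compared against the measured ones.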

  14. Statistical Emulation of Climate Model Projections Based on Precomputed GCM Runs*

    KAUST Repository

    Castruccio, Stefano

    2014-03-01

    The authors describe a new approach for emulating the output of a fully coupled climate model under arbitrary forcing scenarios that is based on a small set of precomputed runs from the model. Temperature and precipitation are expressed as simple functions of the past trajectory of atmospheric CO2 concentrations, and a statistical model is fit using a limited set of training runs. The approach is demonstrated to be a useful and computationally efficient alternative to pattern scaling and captures the nonlinear evolution of spatial patterns of climate anomalies inherent in transient climates. The approach does as well as pattern scaling in all circumstances and substantially better in many; it is not computationally demanding; and, once the statistical model is fit, it produces emulated climate output effectively instantaneously. It may therefore find wide application in climate impacts assessments and other policy analyses requiring rapid climate projections.
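
    The fit-once, emulate-instantly idea can be caricatured with a toy emulator; the exponential-memory CO2 feature and the plain least-squares fit below are stand-ins assumed for illustration, not the statistical model actually used in the paper:

```python
import numpy as np

def co2_memory_feature(co2, rho=0.9):
    """Exponentially weighted past CO2 trajectory - a simple stand-in for a
    'function of the past CO2 path' (the paper's functional form differs)."""
    f = np.zeros_like(co2, dtype=float)
    acc = 0.0
    for t, c in enumerate(co2):
        acc = rho * acc + (1.0 - rho) * c
        f[t] = acc
    return f

def fit_emulator(co2_runs, temp_runs, rho=0.9):
    """Least-squares fit of temperature against the memory feature,
    pooled over a small set of precomputed training runs."""
    X = np.concatenate([co2_memory_feature(c, rho) for c in co2_runs])
    y = np.concatenate(temp_runs)
    A = np.column_stack([np.ones_like(X), X])
    beta, *_ = np.linalg.lstsq(A, y, rcond=None)
    return beta  # (intercept, slope)

def emulate(co2, beta, rho=0.9):
    """Emulate temperature under an arbitrary CO2 forcing scenario."""
    return beta[0] + beta[1] * co2_memory_feature(co2, rho)
```

    Once `beta` is fitted from the training runs, `emulate` produces output for any new scenario effectively instantaneously, which is the point of the approach.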

  15. Can neuromuscular fatigue explain running strategies and performance in ultra-marathons?: the flush model.

    Science.gov (United States)

    Millet, Guillaume Y

    2011-06-01

    While the industrialized world adopts a largely sedentary lifestyle, ultra-marathon running races have become increasingly popular in the last few years in many countries. The ability to run long distances is also considered to have played a role in human evolution, which makes the physiology of ultra-long-distance running an important issue. When running distances of many tens of kilometres (up to 1000 km in a single stage), fatigue resistance is critical. Fatigue is generally defined as strength loss (i.e. a decrease in maximal voluntary contraction [MVC]), which is known to depend on the type of exercise. Critical task variables include the intensity and duration of the activity, both of which are very specific to ultra-endurance sports. They also include the muscle groups involved and the type of muscle contraction, two variables that depend on the sport under consideration. The first part of this article focuses on the central and peripheral causes of the alterations to neuromuscular function that occur in ultra-marathon running. Neuromuscular function evaluation requires measurements of MVCs and maximal electrical/magnetic stimulations; these provide an insight into the factors in the CNS and the muscles implicated in fatigue. However, such measurements do not necessarily predict how muscle function may influence ultra-endurance running and whether this has an effect on speed regulation during a real competition (i.e. when pacing strategies are involved). In other words, the nature of the relationship between fatigue as measured using maximal contractions/stimulation and submaximal performance limitation/regulation is questionable. To investigate this issue, we suggest a holistic model in the second part of this article. This model can be applied to all endurance activities, but is specifically adapted to ultra-endurance running: the flush model. This model has the following four components: (i) the ball-cock (or buoy), which can be compared with the rate of perceived

  16. Recent updates in the aerosol component of the C-IFS model run by ECMWF

    Science.gov (United States)

    Remy, Samuel; Boucher, Olivier; Hauglustaine, Didier; Kipling, Zak; Flemming, Johannes

    2017-04-01

    The Composition-Integrated Forecast System (C-IFS) is a global atmospheric composition forecasting tool, run by ECMWF within the framework of the Copernicus Atmosphere Monitoring Service (CAMS). The aerosol model of C-IFS is a simple bulk scheme that forecasts 5 species: dust, sea salt, black carbon, organic matter and sulfate. Dust and sea salt are each represented by three bins, for the super-coarse, coarse and fine modes of these species (Morcrette et al., 2009). This talk will present recent updates of the aerosol model, and also introduce forthcoming developments. It will also present the impact of these changes, as measured by scores against AERONET Aerosol Optical Depth (AOD) and Airbase PM10 observations. The next cycle of C-IFS will include a mass fixer, because the semi-Lagrangian advection scheme used in C-IFS is not mass-conservative. C-IFS now offers the possibility to emit biomass-burning aerosols at an injection height that is provided by a new version of the Global Fire Assimilation System (GFAS). Secondary Organic Aerosol (SOA) production will be scaled on non-biomass-burning CO fluxes. This approach makes it possible to represent the anthropogenic contribution to SOA production; it brought a notable improvement in the skill of the model, especially over Europe. Lastly, the emissions of SO2 are now provided by the MACCity inventory instead of an older version of the EDGAR dataset. The seasonal and yearly variability of SO2 emissions is better captured by the MACCity dataset. Upcoming developments of the aerosol model of C-IFS consist mainly of the implementation of a nitrate and ammonium module, with two bins (fine and coarse) for nitrate. Nitrate and ammonium sulfate particle formation from gaseous precursors is represented following Hauglustaine et al. (2014); formation of coarse nitrate on pre-existing sea-salt or dust particles is also represented. This extension of the forward model improved scores over heavily populated areas such as Europe, China and Eastern

  17. Tsunami generation, propagation, and run-up with a high-order Boussinesq model

    DEFF Research Database (Denmark)

    Fuhrman, David R.; Madsen, Per A.

    2009-01-01

    In this work we extend a high-order Boussinesq-type (finite difference) model, capable of simulating waves out to wavenumber times depth kh landslide-induced tsunamis. The extension is straightforward, requiring only....... The Boussinesq-type model is then used to simulate numerous tsunami-type events generated from submerged landslides, in both one and two horizontal dimensions. The results again compare well against previous experiments and/or numerical simulations. The new extension complements recently developed run...

  18. A multiobjective reinforcement learning approach to water resources systems operation: Pareto frontier approximation in a single run

    Science.gov (United States)

    Castelletti, A.; Pianosi, F.; Restelli, M.

    2013-06-01

    The operation of large-scale water resources systems often involves several conflicting and noncommensurable objectives. The full characterization of tradeoffs among them is a necessary step to inform and support decisions in the absence of a unique optimal solution. In this context, the common approach is to consider many single-objective problems, resulting from different combinations of the original problem objectives, each one solved using standard optimization methods based on mathematical programming. This scalarization process is computationally very demanding, as it requires one optimization run for each tradeoff, and often results in very sparse and poorly informative representations of the Pareto frontier. More recently, bio-inspired methods have been applied to compute an approximation of the Pareto frontier in one single run. These methods can cover the full extent of the Pareto frontier acceptably with a reasonable computational effort. Yet, the quality of the policies obtained might be strongly dependent on algorithm tuning and preconditioning. In this paper we propose a novel multiobjective Reinforcement Learning algorithm that combines the advantages of the above two approaches and alleviates some of their drawbacks. The proposed algorithm is an extension of fitted Q-iteration (FQI) that makes it possible to learn the operating policies for all the linear combinations of preferences (weights) assigned to the objectives in a single training process. The key idea of multiobjective FQI (MOFQI) is to enlarge the continuous approximation of the value function, which single-objective FQI performs over the state-decision space, to the weight space as well. The approach is demonstrated on a real-world case study concerning the optimal operation of the Hoa Binh reservoir on the Da river, Vietnam. MOFQI is compared with the reiterated use of FQI and a multiobjective parameterization-simulation-optimization (MOPSO) approach. Results show that MOFQI provides a
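
    The weight-space enlargement at the heart of MOFQI can be illustrated on a toy discrete problem; the tabular line-world below (states, rewards and weight grid are all invented for illustration) learns Q-values for every objective weight in one sweep, whereas the real algorithm regresses over continuous states, decisions and weights:

```python
import numpy as np

def mofqi_toy(n_states=5, gamma=0.9, n_iters=60, weights=None):
    """Tabular stand-in for multiobjective fitted Q-iteration on a line-world:
    action 0 steps left (objective-1 reward at state 0), action 1 steps right
    (objective-2 reward at the last state). The Q-function is backed up jointly
    for a whole grid of scalarisation weights w, mimicking MOFQI's enlargement
    of the value-function input space with the weight."""
    if weights is None:
        weights = np.linspace(0.0, 1.0, 11)
    W = len(weights)
    Q = np.zeros((W, n_states, 2))
    for _ in range(n_iters):
        V = Q.max(axis=2)                                # (W, S) state values
        newQ = np.empty_like(Q)
        for s in range(n_states):
            for a in (0, 1):
                s2 = max(s - 1, 0) if a == 0 else min(s + 1, n_states - 1)
                r1 = 1.0 if s2 == 0 else 0.0             # objective 1: left end
                r2 = 1.0 if s2 == n_states - 1 else 0.0  # objective 2: right end
                r = weights * r1 + (1.0 - weights) * r2  # scalarised, per weight
                newQ[:, s, a] = r + gamma * V[:, s2]     # Bellman backup
        Q = newQ
    return weights, Q
```

    One training pass yields the greedy policy for every tradeoff: reading off `Q[i].argmax(axis=1)` for each weight index `i` traces an approximation of the Pareto-optimal policy set.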

  19. Modeling the milling tool wear by using an evolutionary SVM-based model from milling runs experimental data

    Science.gov (United States)

    Nieto, Paulino José García; García-Gonzalo, Esperanza; Vilán, José Antonio Vilán; Robleda, Abraham Segade

    2015-12-01

    The main aim of this research work is to build a new practical hybrid regression model to predict the milling tool wear in a regular cut, as well as the entry and exit cuts, of a milling tool. The model is based on Particle Swarm Optimization (PSO) in combination with support vector machines (SVMs). This optimization mechanism involves kernel parameter setting in the SVM training procedure, which significantly influences the regression accuracy. Bearing this in mind, a PSO-SVM-based model, grounded in statistical learning theory, was successfully used here to predict the milling tool flank wear (output variable) as a function of the following input variables: the time duration of the experiment, depth of cut, feed, type of material, etc. To accomplish the objective of this study, the experimental dataset represents runs on a milling machine under various operating conditions. Data sampled by three different types of sensors (acoustic emission sensor, vibration sensor and current sensor) were acquired at several positions. A second aim is to determine the factors with the greatest bearing on the milling tool flank wear, with a view to proposing improvements to the milling machine. Firstly, this hybrid PSO-SVM-based regression model captures the main insight of statistical learning theory in order to obtain a good prediction of the dependence of the flank wear (output variable) on the input variables (time, depth of cut, feed, etc.). Indeed, regression with optimal hyperparameters was performed and a determination coefficient of 0.95 was obtained. The agreement of this model with the experimental data confirmed its good performance. Secondly, the main advantages of this PSO-SVM-based model are its capacity to produce a simple, easy-to-interpret model, its ability to estimate the contributions of the input variables, and its computational efficiency. Finally, the main conclusions of this study are presented.
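
    The PSO-over-hyperparameters loop can be sketched generically; here kernel ridge regression stands in for the paper's SVM (both expose an RBF width and a regularization weight for the swarm to tune), and all search ranges and swarm constants are assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

def krr_fit_predict(Xtr, ytr, Xte, gamma, lam):
    """RBF kernel ridge regression (a stand-in for the paper's SVR)."""
    d2 = ((Xtr[:, None, :] - Xtr[None, :, :]) ** 2).sum(-1)
    alpha = np.linalg.solve(np.exp(-gamma * d2) + lam * np.eye(len(ytr)), ytr)
    d2te = ((Xte[:, None, :] - Xtr[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2te) @ alpha

def pso_tune(Xtr, ytr, Xval, yval, n_particles=12, n_iters=30):
    """Particle swarm search over (log10 gamma, log10 lambda), minimising
    validation mean squared error of the kernel regressor."""
    pos = rng.uniform([-2, -6], [2, 0], size=(n_particles, 2))
    vel = np.zeros_like(pos)
    def loss(p):
        pred = krr_fit_predict(Xtr, ytr, Xval, 10 ** p[0], 10 ** p[1])
        return ((pred - yval) ** 2).mean()
    pbest, pbest_f = pos.copy(), np.array([loss(p) for p in pos])
    g = pbest[pbest_f.argmin()].copy()
    for _ in range(n_iters):
        r1, r2 = rng.random((2, n_particles, 1))
        # Standard PSO update: inertia + cognitive + social terms.
        vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (g - pos)
        pos = np.clip(pos + vel, [-4, -8], [4, 2])
        f = np.array([loss(p) for p in pos])
        improved = f < pbest_f
        pbest[improved], pbest_f[improved] = pos[improved], f[improved]
        g = pbest[pbest_f.argmin()].copy()
    return g, pbest_f.min()
```

    In the paper the same outer loop tunes the SVM kernel parameters against the sensor-derived wear data; only the inner regressor differs.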

  20. Human and avian running on uneven ground: a model-based comparison

    Science.gov (United States)

    Birn-Jeffery, A. V.; Blum, Y.

    2016-01-01

    Birds and humans are successful bipedal runners, who have individually evolved bipedalism, but the extent of the similarities and differences of their bipedal locomotion is unknown. In turn, the anatomical differences of their locomotor systems complicate direct comparisons. However, a simplifying mechanical model, such as the conservative spring–mass model, can be used to describe both avian and human running and thus, provides a way to compare the locomotor strategies that birds and humans use when running on level and uneven ground. Although humans run with significantly steeper leg angles at touchdown and stiffer legs when compared with cursorial ground birds, swing-leg adaptations (leg angle and leg length kinematics) used by birds and humans while running appear similar across all types of uneven ground. Nevertheless, owing to morphological restrictions, the crouched avian leg has a greater range of leg angle and leg length adaptations when coping with drops and downward steps than the straight human leg. On the other hand, the straight human leg seems to use leg stiffness adaptation when coping with obstacles and upward steps unlike the crouched avian leg posture. PMID:27655670

  1. Status of the Inert Doublet Model of dark matter after Run-1 of the LHC

    CERN Document Server

    Goudelis, Andreas

    2015-01-01

    The Inert Doublet Model (IDM) is one of the simplest extensions of the Standard Model that can provide a viable dark matter (DM) candidate. Despite its simplicity, it predicts a versatile phenomenology both for cosmology and for the Large Hadron Collider. We briefly summarize the status of searches for IDM dark matter in direct DM detection experiments and the LHC, focusing on the impact of the latter on the model parameter space. In particular, we discuss the consequences of the Higgs boson discovery as well as those of searches for dileptons accompanied by missing transverse energy during the first LHC Run and comment on the prospects of probing some of the hardest to test regions of the IDM parameter space during the 13 TeV Run.

  2. RUN COORDINATION

    CERN Multimedia

    Christophe Delaere

    2013-01-01

    The focus of Run Coordination during LS1 is to monitor closely the advance of maintenance and upgrade activities, to smooth interactions between subsystems and to ensure that all are ready in time to resume operations in 2015 with a fully calibrated and understood detector. After electricity and cooling were restored to all equipment, at about the time of the last CMS week, recommissioning activities were resumed for all subsystems. On 7 October, DCS shifts began 24/7 to allow subsystems to remain on to facilitate operations. That culminated with the Global Run in November (GriN), which took place as scheduled during the week of 4 November. The GriN was the first centrally managed operation since the beginning of LS1, and involved all subdetectors except the Pixel Tracker, which is presently in a lab upstairs. All nights were therefore dedicated to long stable runs with as many subdetectors as possible. Among the many achievements of that week, three items may be highlighted. First, the Strip...

  3. NASA SPoRT Initialization Datasets for Local Model Runs in the Environmental Modeling System

    Science.gov (United States)

    Case, Jonathan L.; LaFontaine, Frank J.; Molthan, Andrew L.; Carcione, Brian; Wood, Lance; Maloney, Joseph; Estupinan, Jeral; Medlin, Jeffrey M.; Blottman, Peter; Rozumalski, Robert A.

    2011-01-01

    The NASA Short-term Prediction Research and Transition (SPoRT) Center has developed several products for its National Weather Service (NWS) partners that can be used to initialize local model runs within the Weather Research and Forecasting (WRF) Environmental Modeling System (EMS). These real-time datasets consist of surface-based information updated at least once per day, and produced in a composite or gridded product that is easily incorporated into the WRF EMS. The primary goal for making these NASA datasets available to the WRF EMS community is to provide timely and high-quality information at a spatial resolution comparable to that used in the local model configurations (i.e., convection-allowing scales). The current suite of SPoRT products supported in the WRF EMS includes a Sea Surface Temperature (SST) composite, a Great Lakes sea-ice extent, a Greenness Vegetation Fraction (GVF) composite, and Land Information System (LIS) gridded output. The SPoRT SST composite is a blend of primarily the Moderate Resolution Imaging Spectroradiometer (MODIS) infrared and Advanced Microwave Scanning Radiometer for Earth Observing System data for non-precipitation coverage over the oceans at 2-km resolution. The composite includes a special lake surface temperature analysis over the Great Lakes using contributions from the Remote Sensing Systems temperature data. The Great Lakes Environmental Research Laboratory Ice Percentage product is used to create a sea-ice mask in the SPoRT SST composite. The sea-ice mask is produced daily (in-season) at 1.8-km resolution and identifies ice percentage from 0 to 100% in 10% increments, with values above 90% flagged as ice.

  4. Repo Runs

    NARCIS (Netherlands)

    Martin, A.; Skeie, D.; von Thadden, E.L.

    2010-01-01

    This paper develops a model of financial institutions that borrow short-term and invest in long-term marketable assets. Because these financial intermediaries perform maturity transformation, they are subject to runs. We endogenize the profits of the intermediary and derive distinct liquidity and

  5. Effect of sucrose availability on wheel-running as an operant and as a reinforcing consequence on a multiple schedule: Additive effects of extrinsic and automatic reinforcement.

    Science.gov (United States)

    Belke, Terry W; Pierce, W David

    2015-07-01

    As a follow-up to Belke and Pierce's (2014) study, we assessed the effects of repeated presentation and removal of sucrose solution on the behavior of rats responding on a two-component multiple schedule. Rats completed 15 wheel turns (FR 15) for either 15% or 0% sucrose solution in the manipulated component and lever pressed 10 times on average (VR 10) for an opportunity to complete 15 wheel turns (FR 15) in the other component. In contrast to our earlier study, the components advanced based on time (every 8 min) rather than completed responses. Results showed that in the manipulated component wheel-running rates were higher, and the latency to initiate running was longer, when sucrose was present (15%) compared to absent (0% or water); the number of obtained outcomes (sucrose/water), however, did not differ with the presentation and withdrawal of sucrose. For the wheel-running-as-reinforcement component, rates of wheel turns, overall lever-pressing rates, and obtained wheel-running reinforcements were higher, and postreinforcement pauses shorter, when sucrose was present (15%) than absent (0%) in the manipulated component. Overall, our findings suggest that the wheel-running rate, regardless of its function (operant or reinforcement), is maintained by automatically generated consequences (automatic reinforcement) and is increased as an operant by adding experimentally arranged sucrose reinforcement (extrinsic reinforcement). This additive effect on operant wheel-running generalizes through induction or arousal to the wheel-running-as-reinforcement component, increasing the rate of responding for opportunities to run and the rate of wheel-running per opportunity.

  6. RUN COORDINATION

    CERN Multimedia

    C. Delaere

    2013-01-01

    Since the LHC ceased operations in February, a lot has been going on at Point 5, and Run Coordination continues to monitor closely the advance of maintenance and upgrade activities. In the last months, the Pixel detector was extracted and is now stored in the pixel lab in SX5; the beam pipe has been removed and ME1/1 removal has started. We regained access to the vactank and some work on the RBX of HB has started. Since mid-June, electricity and cooling are back in S1 and S2, allowing us to turn equipment back on, at least during the day. 24/7 shifts are not foreseen in the next weeks, and safety tours are mandatory to keep equipment on overnight, but re-commissioning activities are slowly being resumed. Given the (slight) delays accumulated in LS1, it was decided to merge the two global runs initially foreseen into a single exercise during the week of 4 November 2013. The aim of the global run is to check that we can run (parts of) CMS after several months switched off, with the new VME PCs installed, th...

  7. Comparing models of offensive cyber operations

    CSIR Research Space (South Africa)

    Grant, T

    2015-10-01

    Full Text Available was performed, using as a springboard seven models of cyber-attack, and resulted in the development of what is described as a canonical model. Keywords: Offensive cyber operations; Process models; Rational reconstructions; Canonical models; Structured...

  8. Changes in spring-mass model parameters and energy cost during track running to exhaustion.

    Science.gov (United States)

    Slawinski, Jean; Heubert, Richard; Quievre, Jacques; Billat, Véronique; Hanon, Christine

    2008-05-01

    The purpose of this study was to determine whether exhaustion modifies the stiffness characteristics, as defined in the spring-mass model, during track running. We also investigated whether stiffer runners are also the most economical. Nine well-trained runners performed an exhaustive exercise over 2000 meters on an indoor track. This exhaustive exercise was preceded by a warm-up and was followed by an active recovery. Throughout all the exercises, the energy cost of running (Cr) was measured. Vertical and leg stiffness (Kvert and Kleg, respectively) were measured with a force plate integrated into the track. The results show that Cr increases significantly after the 2000-meter run (0.192 +/- 0.006 to 0.217 +/- 0.013 mL x kg(-1) x m(-1)). However, Kvert and Kleg remained constant (32.52 +/- 6.42 to 32.59 +/- 5.48 and 11.12 +/- 2.76 to 11.14 +/- 2.48 kN/m, respectively). An inverse correlation was observed between Cr and Kleg, but only during the 2000-meter exercise (r = -0.67; P < or = 0.05). During the warm-up or the recovery, Cr and Kleg were not correlated (r = 0.354; P = 0.82 and r = 0.21; P = 0.59, respectively). On the track, exhaustion induced by a 2000-meter run has no effect on Kleg or Kvert. The inverse correlation between Cr and Kleg was observed only during the 2000-meter run, and not before or after the exercise, suggesting that the stiffness of the runner may not be associated with Cr.

  9. Strange matter and strange stars in a thermodynamically self-consistent perturbation model with running coupling and running strange quark mass

    CERN Document Server

    Xu, J F; Liu, F; Hou, D F; Chen, L W

    2015-01-01

    A quark model with running coupling and running strange quark mass, which is thermodynamically self-consistent at both high and lower densities, is presented and applied to study the properties of strange quark matter and the structure of compact stars. An additional term to the thermodynamic potential density is determined by meeting the fundamental differential equation of thermodynamics. It plays an important role at comparatively lower densities and is negligible at extremely high density, acting as a chemical-potential-dependent bag constant. In this thermodynamically enhanced perturbative QCD model, strange quark matter still has the possibility of being absolutely stable, while the pure quark star has a sharp surface with a maximum mass as large as about 2 times the solar mass and a maximum radius of about 11 kilometers.

  10. Impacts of the driver's bounded rationality on the traffic running cost under the car-following model

    Science.gov (United States)

    Tang, Tie-Qiao; Luo, Xiao-Feng; Liu, Kai

    2016-09-01

    The driver's bounded rationality has significant influences on micro driving behavior, and researchers have proposed traffic flow models that account for it. However, little effort has been made to explore the effects of the driver's bounded rationality on the trip cost. In this paper, we use our recently proposed car-following model to study the effects of the driver's bounded rationality on each driver's running cost and the system's total cost under three definitions of traffic running cost. The numerical results show that considering the driver's bounded rationality increases each driver's running cost and the system's total cost under all three definitions of traffic running cost.

  11. AschFlow - A dynamic landslide run-out model for medium scale hazard analysis.

    Science.gov (United States)

    Luna, Byron Quan; Blahut, Jan; van Asch, Theo; van Westen, Cees; Kappes, Melanie

    2015-04-01

    Landslide and debris flow hazard assessments require a scale-dependent analysis in order to mitigate damage and other negative consequences at the respective scales of occurrence. Medium- or large-scale landslide run-out modelling for many possible landslide initiation areas has been a cumbersome task in the past. This arises from the difficulty of precisely defining the location and volume of the released mass, and from the inability of run-out models to compute the displacement for a large number of individual initiation areas (computationally exhaustive). Most of the existing physically based run-out models have complications in handling such situations, and therefore empirical methods have been used as a practical means to predict landslide mobility at a medium scale (1:10,000 to 1:50,000). In this context, a simple medium-scale numerical model for rapid mass movements in urban and mountainous areas was developed. The deterministic nature of the approach makes it possible to calculate the velocity, height and increase in mass by erosion, resulting in the estimation of the various forms of impact exerted by debris flows at the medium scale. The established and implemented model ("AschFlow") is a 2-D one-phase continuum model that simulates the entrainment, spreading and deposition processes of a landslide or debris flow at a medium scale. The flow is thus treated as a single-phase material, whose behavior is controlled by rheology (e.g. Voellmy or Bingham). The developed regional model "AschFlow" was applied and evaluated in well documented areas with known past debris flow events.

  12. Exploiting CMS data popularity to model the evolution of data management for Run-2 and beyond

    CERN Document Server

    Bonacorsi, D; Giordano, D; Girone, M; Neri, M; Magini, N; Kuznetsov, V; Wildish, T

    2015-01-01

    During the LHC Run-1 data taking, all experiments collected large data volumes from proton-proton and heavy-ion collisions. The collisions data, together with massive volumes of simulated data, were replicated in multiple copies, transferred among various Tier levels, transformed/slimmed in format/content. These data were then accessed (both locally and remotely) by large groups of distributed analysis communities exploiting the WorldWide LHC Computing Grid infrastructure and services. While efficient data placement strategies - together with optimal data redistribution and deletions on demand - have become the core of static versus dynamic data management projects, little effort has so far been invested in understanding the detailed data-access patterns which surfaced in Run-1. These patterns, if understood, can be used as input to simulation of computing models at the LHC, to optimise existing systems by tuning their behaviour, and to explore next-generation CPU/storage/network co-scheduling solutions. This...

  13. Kinetic study of run-away burn in ICF capsule using a quasi-1D model

    Science.gov (United States)

    Huang, Chengkun; Molvig, K.; Albright, B. J.; Dodd, E. S.; Hoffman, N. M.; Vold, E. L.; Kagan, G.

    2016-10-01

    The effect of reduced fusion reactivity resulting from the loss of fuel ions in the Gamow peak in the ignition, run-away burn and disassembly stages of an inertial confinement fusion D-T capsule is investigated with a quasi-1D hybrid model that includes kinetic ions, fluid electrons and Planckian radiation photons. The fuel ion loss through the Knudsen effect at the fuel-pusher interface is accounted for by a local-loss model developed in Molvig et al. The tail refilling and relaxation of the fuel ion distribution are evolved with a nonlinear Fokker-Planck solver. The Krokhin & Rozanov model is used for the finite alpha range beyond the fuel region, while alpha heating of the fuel ions and the fluid electrons is modeled kinetically. For an energetic pusher (40 kJ), the simulation shows that the reduced fusion reactivity can lead to substantially lower ion temperature during run-away burn, while the final yield decreases more modestly. Possible improvements to the present model, including non-Planckian radiation emission and alpha-driven fuel disassembly, are discussed. Work performed under the auspices of the U.S. DOE by the LANS, LLC, Los Alamos National Laboratory under Contract No. DE-AC52-06NA25396. Work supported by the ASC TBI project at LANL.

  14. Improving vacuum gas oil hydrotreating operation via a lumped parameter dynamic simulation modeling approach

    Energy Technology Data Exchange (ETDEWEB)

    Remesat, D.

    2008-07-01

    Although hydrotreating has become a large part of refining operations for sour crudes, refiners rarely achieve their run-length and crude-throughput objectives for vacuum gas oil (VGO) hydrotreaters. This shortfall in performance can be attributed to crude flow changes, feed compositional changes, sulphur and metals changes, or hydrogen partial pressure changes, all of which reduce the effectiveness of the catalysts that remove sulphur from the crude oil streams. Although some proprietary steady-state models exist to indicate performance enhancement during operation, they have not been widely used and it is not certain whether they would be effective in simulating the process with disturbances over the run length of the process. This study used data that are not publicly available, gathered from 14 operating hydrotreaters, and developed a lumped parameter dynamic model, using both Excel and HYSYS software, for industrial refinery/upgrader VGO hydrotreaters. The model builds on proprietary and public steady-state hydrotreater models and successfully applies them within a commercial dynamic simulation package. The model tracks changes in intrinsic reaction rate based on catalyst deactivation, wetting efficiency, feed properties and operating conditions to determine operating temperature, outlet sulphur composition and chemical hydrogen consumed. The model simulates local disturbances, and represents the start, middle and end operating zones during hydrotreater run length. This correlative, partially predictive model demonstrates the economic benefits of increasing hydrogen to improve the operation of a hydrotreater by increasing run length and/or improving crude processing.
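
    As an illustration of the kind of relationship such a lumped model tracks, the sketch below computes the bed temperature needed to hold a fixed sulphur conversion as a first-order catalyst deactivates. It is a toy calculation under stated assumptions (first-order kinetics in an ideal plug-flow bed, exponential activity decay, Arrhenius rate constant); every parameter value is hypothetical and not taken from the paper:

```python
import math

# Hypothetical parameters, purely illustrative (not the paper's fitted values)
EA = 100e3          # apparent activation energy, J/mol
R = 8.314           # gas constant, J/(mol K)
A0 = 1e9            # pre-exponential factor, 1/h
TAU = 1.0           # liquid residence time, h
X_TARGET = 0.90     # target sulphur conversion
KD = 5e-4           # catalyst deactivation rate constant, 1/h

def required_temperature(t_hours):
    """Bed temperature (K) needed to hold conversion as activity a(t) decays."""
    a = math.exp(-KD * t_hours)                  # first-order activity decay
    k_needed = -math.log(1.0 - X_TARGET) / TAU   # rate constant for target conversion
    # Solve A0 * exp(-EA/(R*T)) * a = k_needed for T
    return EA / (R * math.log(A0 * a / k_needed))

start, end = required_temperature(0), required_temperature(4000)
print(round(start, 1), round(end, 1))
```

    The rising temperature trajectory is the familiar operating response: as activity decays, the bed must be run hotter to hold outlet sulphur constant, which is what ultimately limits run length.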

  15. Large Scale Model Test Investigation on Wave Run-Up in Irregular Waves at Slender Piles

    DEFF Research Database (Denmark)

    Ramirez, Jorge Robert Rodriguez; Frigaard, Peter; Andersen, Thomas Lykke

    2013-01-01

    from high speed video recordings. Based on the measured run-up heights different types of prediction formulae for run-up in irregular waves were evaluated. In conclusion, scale effects on run-up levels seem small, except for differences in spray. However, run-up of individual waves is difficult...

  16. RUN COORDINATION

    CERN Multimedia

    M. Chamizo

    2012-01-01

      On 17th January, as soon as the services were restored after the technical stop, sub-systems started powering on. Since then, we have been running 24/7 with reduced shift crew — Shift Leader and DCS shifter — to allow sub-detectors to perform calibration, noise studies, test software upgrades, etc. On 15th and 16th February, we had the first Mid-Week Global Run (MWGR) with the participation of most sub-systems. The aim was to bring CMS back to operation and to ensure that we could run after the winter shutdown. All sub-systems participated in the readout and the trigger was provided by a fraction of the muon systems (CSC and the central RPC wheel). The calorimeter triggers were not available due to work on the optical link system. Initial checks of different distributions from Pixels, Strips, and CSC confirmed things look all right (signal/noise, number of tracks, phi distribution…). High-rate tests were done to test the new CSC firmware to cure the low efficiency ...

  17. Modeling human operator involvement in robotic systems

    NARCIS (Netherlands)

    Wewerinke, P.H.

    1991-01-01

    A modeling approach is presented to describe complex manned robotic systems. The robotic system is modeled as a (highly) nonlinear, possibly time-varying dynamic system including any time delays in terms of optimal estimation, control and decision theory. The role of the human operator(s) is modeled

  18. eWaterCycle: A global operational hydrological forecasting model

    Science.gov (United States)

    van de Giesen, Nick; Bierkens, Marc; Donchyts, Gennadii; Drost, Niels; Hut, Rolf; Sutanudjaja, Edwin

    2015-04-01

    Development of an operational hyper-resolution hydrological global model is a central goal of the eWaterCycle project (www.ewatercycle.org). This operational model includes ensemble forecasts (14 days) to predict water-related stress around the globe. Assimilation of near-real time satellite data is part of the intended product that will be launched at EGU 2015. The challenges come from several directions. First, there are challenges that are mainly computer-science oriented but have direct practical hydrological implications. For example, we aim to make as much use as possible of existing standards and open-source software. In particular, different parts of our system are coupled through the Basic Model Interface (BMI) developed in the framework of the Community Surface Dynamics Modeling System (CSDMS). The PCR-GLOBWB model, built by Utrecht University, is the basic hydrological model that is the engine of the eWaterCycle project. Re-engineering of parts of the software was needed for it to run efficiently in a High Performance Computing (HPC) environment, to be able to interface using BMI, and to run on multiple compute nodes in parallel. The final aim is to have a spatial resolution of 1 km x 1 km, up from the current 10 km x 10 km. This high resolution is computationally not too demanding but very memory intensive. The memory bottleneck becomes especially apparent for data assimilation, for which we use OpenDA. OpenDA allows for different data assimilation techniques without the need to build these from scratch. We have developed a BMI adaptor for OpenDA, allowing OpenDA to use any BMI compatible model. To circumvent memory shortages that would result from standard applications of the Ensemble Kalman Filter, we have developed a variant that does not need to keep all ensemble members in working memory. At EGU, we will present this variant and how it fits well in HPC environments.
An important step in the eWaterCycle project was the coupling between the hydrological and
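
    One ingredient of such a memory-lean ensemble filter is computing ensemble statistics in a single streaming pass, so members need not all be held in memory simultaneously. The sketch below shows a Welford-style one-pass mean/covariance accumulator; it is our illustration of the general idea, not code from eWaterCycle or OpenDA:

```python
import numpy as np

class StreamingEnsembleStats:
    """One-pass (Welford-style) mean and covariance over ensemble members,
    so each member can be generated, used, and discarded in turn."""
    def __init__(self, dim):
        self.n = 0
        self.mean = np.zeros(dim)
        self.m2 = np.zeros((dim, dim))   # running sum of deviation outer products

    def update(self, member):
        self.n += 1
        delta = member - self.mean
        self.mean += delta / self.n
        self.m2 += np.outer(delta, member - self.mean)

    def covariance(self):
        return self.m2 / (self.n - 1)    # sample covariance (ddof=1, like np.cov)

rng = np.random.default_rng(42)
members = rng.standard_normal((100, 3))  # 100 ensemble members, 3 state variables

stats = StreamingEnsembleStats(3)
for m in members:
    stats.update(m)   # in a real filter, the member could now be discarded
```

    A full Kalman update needs more than the covariance, but this streaming pattern is what removes the requirement to keep all members resident at once.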

  19. Hydrologic and water-quality characterization and modeling of the Chenoweth Run basin, Jefferson County, Kentucky

    Science.gov (United States)

    Martin, Gary R.; Zarriello, Phillip J.; Shipp, Allison A.

    2001-01-01

    Rainfall, streamflow, and water-quality data collected in the Chenoweth Run Basin during February 1996–January 1998, in combination with the available historical sampling data, were used to characterize hydrologic conditions and to develop and calibrate a Hydrological Simulation Program—FORTRAN (HSPF) model for continuous simulation of rainfall, streamflow, suspended-sediment, and total-orthophosphate (TPO4) transport relations. Study results provide an improved understanding of basin hydrology and a hydrologic-modeling framework with analytical tools for use in comprehensive water-resource planning and management. Chenoweth Run Basin, encompassing 16.5 mi² in suburban eastern Jefferson County, Kentucky, contains expanding urban development, particularly in the upper third of the basin. Historical water-quality problems have interfered with designated aquatic-life and recreation uses in the stream main channel (approximately 9 mi in length) and have been attributed to organic enrichment, nutrients, metals, and pathogens in urban runoff and wastewater inflows. Hydrologic conditions in Jefferson County are highly varied. In the Chenoweth Run Basin, as in much of the eastern third of the county, relief is moderately sloping to steep. Also, internal drainage in pervious areas is impeded by the shallow, fine-textured subsoils that contain abundant silts and clays. Thus, much of the precipitation here tends to move rapidly as overland flow and (or) shallow subsurface flow (interflow) to the stream channels. Data were collected at two streamflow-gaging stations, one rain gage, and four water-quality-sampling sites in the basin. Precipitation, streamflow, and, consequently, constituent loads were above normal during the data-collection period of this study. Nonpoint sources contributed the largest portion of the sediment loads. However, the three wastewater-treatment plants (WWTPs) were the source of the majority of estimated total phosphorus (TP) and TPO4 transport

  20. Dilepton constraints in the Inert Doublet Model from Run 1 of the LHC

    CERN Document Server

    Belanger, G; Goudelis, A; Herrmann, B; Kraml, S; Sengupta, D

    2015-01-01

    Searches in final states with two leptons plus missing transverse energy, targeting supersymmetric particles or invisible decays of the Higgs boson, were performed during Run 1 of the LHC. Recasting the results of these analyses in the context of the Inert Doublet Model (IDM) using MadAnalysis 5, we show that they provide constraints on inert scalars that significantly extend previous limits from LEP. Moreover, these LHC constraints make it possible to test the IDM in the limit of very small Higgs-inert scalar coupling, where the constraints from direct detection of dark matter and the invisible Higgs width vanish.

  1. ROTI-OPERATIONAL INSTRUCTIONAL MODEL

    Directory of Open Access Journals (Sweden)

    H. Barker,

    2012-02-01

    Full Text Available The instructional model presented here is a combination of systems used by the United States Navy and R. F. Mager's Criterion Referenced Instruction Model for Analysis, Design and Implementation. The author has taken what he believes are the best components from each system and established a working model.

  2. Comparison of a priori calibration models for respiratory inductance plethysmography during running.

    Science.gov (United States)

    Leutheuser, Heike; Heyde, Christian; Gollhofer, Albert; Eskofier, Bjoern M

    2014-01-01

    Respiratory inductive plethysmography (RIP) has been introduced as an alternative for measuring ventilation by means of body surface displacement (diameter changes in rib cage and abdomen). Using a posteriori calibration, it has been shown that RIP may provide accurate measurements for ventilatory tidal volume under exercise conditions. Methods for a priori calibration would facilitate the application of RIP. Currently, to the best knowledge of the authors, none of the existing ambulant procedures for RIP calibration can be used a priori for valid subsequent measurements of ventilatory volume under exercise conditions. The purpose of this study is to develop and validate a priori calibration algorithms for ambulant application of RIP data recorded in running exercise. We calculated Volume Motion Coefficients (VMCs) using seven different models on resting data and compared the root mean squared error (RMSE) of each model applied on running data. Least squares approximation (LSQ) without offset of a two-degree-of-freedom model achieved the lowest RMSE value. In this work, we showed that a priori calibration of RIP exercise data is possible using VMCs calculated from a 5-min resting phase during which RIP and flowmeter measurements were performed simultaneously. The results demonstrate that RIP has the potential for usage in ambulant applications.
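
    A minimal sketch of the two-degree-of-freedom, no-offset LSQ calibration described above, using synthetic signals in place of real RIP and flowmeter recordings (the coefficient values and signal shapes are invented for illustration):

```python
import numpy as np

# Synthetic resting-phase signals standing in for rib-cage and abdomen displacement
rng = np.random.default_rng(0)
n = 300
rib = np.sin(np.linspace(0, 30, n)) + 0.05 * rng.standard_normal(n)
abd = 0.5 * np.sin(np.linspace(0, 30, n) + 0.3) + 0.05 * rng.standard_normal(n)

# Reference flowmeter volume, generated here from "true" coefficients 2.0 and 1.5
volume = 2.0 * rib + 1.5 * abd

# Two-degree-of-freedom model without offset: volume ~ k_rib * rib + k_abd * abd
A = np.column_stack([rib, abd])
vmc, *_ = np.linalg.lstsq(A, volume, rcond=None)   # volume-motion coefficients

rmse = np.sqrt(np.mean((A @ vmc - volume) ** 2))
print(np.round(vmc, 3), rmse)
```

    In the study's setting, the coefficients fitted on the resting phase would then be applied, unchanged, to the running data, and the RMSE against the flowmeter evaluated there.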

  3. Building and Running the Yucca Mountain Total System Performance Model in a Quality Environment

    Energy Technology Data Exchange (ETDEWEB)

    D.A. Kalinich; K.P. Lee; J.A. McNeish

    2005-01-09

    A Total System Performance Assessment (TSPA) model has been developed to support the Safety Analysis Report (SAR) for the Yucca Mountain High-Level Waste Repository. The TSPA model forecasts repository performance over a 20,000-year simulation period. It has a high degree of complexity due to the complexity of its underlying process and abstraction models. This is reflected in the size of the model (a 27,000-element GoldSim file), its use of dynamic-linked libraries (14 DLLs), the number and size of its input files (659 files totaling 4.7 GB), and the number of model input parameters (2541 input database entries). TSPA model development and subsequent simulations with the final version of the model were performed to a set of Quality Assurance (QA) procedures. Due to the complexity of the model, comments on previous TSPAs, and the number of analysts involved (22 analysts in seven cities across four time zones), additional controls for the entire life-cycle of the TSPA model, including management, physical, model change, and input controls were developed and documented. These controls did not replace the QA procedures; rather, they provided guidance for implementing the requirements of the QA procedures with the specific intent of ensuring that the model development process and the simulations performed with the final version of the model had sufficient checking, traceability, and transparency. Management controls were developed to ensure that only management-approved changes were implemented into the TSPA model and that only management-approved model runs were performed. Physical controls were developed to track the use of prototype software and preliminary input files, and to ensure that only qualified software and inputs were used in the final version of the TSPA model. In addition, a system was developed to name, file, and track development versions of the TSPA model as well as simulations performed with the final version of the model.

  4. Modeling Operating Modes during Plant Life Cycle

    DEFF Research Database (Denmark)

    Jørgensen, Sten Bay; Lind, Morten

    2012-01-01

    Modelling process plants during normal operation requires a set of basic assumptions to define the desired functionalities which lead to fulfillment of the operational goal(s) for the plant. However, during start-up and shutdown, as well as during batch operation, an ensemble of interrelate...

  5. Hubble expansion and structure formation in the "running FLRW model" of the cosmic evolution

    CERN Document Server

    Grande, Javier; Basilakos, Spyros; Plionis, Manolis

    2011-01-01

    A new class of FLRW cosmological models with time-evolving fundamental parameters should emerge naturally from a description of the expansion of the universe based on the first principles of quantum field theory and string theory. Within this general paradigm, one expects that both the gravitational Newton's coupling, G, and the cosmological term, Lambda, should not be strictly constant but appear rather as smooth functions of the Hubble rate. This scenario ("running FLRW model") predicts, in a natural way, the existence of dynamical dark energy without invoking the participation of extraneous scalar fields. In this paper, we perform a detailed study of these models in the light of the latest cosmological data, which serves to illustrate the phenomenological viability of the new dark energy paradigm as a serious alternative to the traditional scalar field approaches. By performing a joint likelihood analysis of the recent SNIa data, the CMB shift parameter, and the BAOs traced by the Sloan Digital Sky Survey,...

  6. Population growth and economic development in the very long run: a simulation model of three revolutions.

    Science.gov (United States)

    Steinmann, G; Komlos, J

    1988-08-01

    The authors propose an economic model capable of simulating the 4 main historical stages of civilization: hunting, agricultural, industrial, and postindustrial. An output-maximizing society is assumed to respond to changes in factor endowments by switching technologies. Changes in factor proportions arise through population growth and capital accumulation. A slow rate of exogenous technical progress is assumed. The model synthesizes Malthusian and Boserupian notions of the effect of population growth on per capita output. Initially the capital-diluting effect of population growth dominates. As population density increases, however, and a threshold is reached, the Boserupian effect becomes crucial, and a technological revolution occurs. The cycle is thereafter repeated. After the second economic revolution, however, the Malthusian constraint dissolves permanently, as population growth can continue without being constrained by diminishing returns to labor. By synthesizing Malthusian and Boserupian notions, the model is able to capture the salient features of economic development in the very long run.
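
    The Malthus-Boserup interplay described above can be caricatured in a few lines: per-capita output falls with population density, population growth responds to the surplus over subsistence, and crossing a density threshold triggers a technology revolution. This is an illustrative toy with made-up parameter values, not the authors' model:

```python
def simulate(periods=400):
    pop, tech = 1.0, 1.0
    thresholds = [5.0, 25.0]     # hypothetical densities that trigger revolutions
    switch_times = []
    for t in range(periods):
        if thresholds and pop >= thresholds[0]:
            tech *= 3.0          # Boserupian switch to a more productive technology
            switch_times.append(t)
            thresholds.pop(0)
        output_pc = tech * pop ** -0.3           # diminishing returns to labour
        pop *= 1.0 + 0.05 * (output_pc - 1.0)    # growth follows surplus over subsistence
        tech *= 1.005                            # slow exogenous technical progress
    return pop, tech, switch_times

pop, tech, switches = simulate()
print(len(switches), switches)
```

    Between revolutions, population converges toward the Malthusian level at which per-capita output equals subsistence; each threshold crossing resets that ceiling upward, reproducing the punctuated pattern the abstract describes.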

  7. Modeling and simulation with operator scaling

    CERN Document Server

    Cohen, Serge; Rosinski, Jan

    2009-01-01

    Self-similar processes are useful in modeling diverse phenomena that exhibit scaling properties. Operator scaling allows a different scale factor in each coordinate. This paper develops practical methods for modeling and simulating stochastic processes with operator scaling. A simulation method for operator stable Levy processes is developed, based on a series representation, along with a Gaussian approximation of the small jumps. Several examples are given to illustrate practical applications. A classification of operator stable Levy processes in two dimensions is provided according to their exponents and symmetry groups. We conclude with some remarks and extensions to general operator self-similar processes.

  8. A Scalable Version of the Navy Operational Global Atmospheric Prediction System Spectral Forecast Model

    Directory of Open Access Journals (Sweden)

    Thomas E. Rosmond

    2000-01-01

    Full Text Available The Navy Operational Global Atmospheric Prediction System (NOGAPS) includes a state-of-the-art spectral forecast model similar to models run at several major operational numerical weather prediction (NWP) centers around the world. The model, developed by the Naval Research Laboratory (NRL) in Monterey, California, has run operationally at the Fleet Numerical Meteorological and Oceanographic Center (FNMOC) since 1982, and most recently is being run on a Cray C90 in a multi-tasked configuration. Typically the multi-tasked code runs on 10 to 15 processors with overall parallel efficiency of about 90%. The operational resolution is T159L30, but other operational and research applications run at significantly lower resolutions. A scalable NOGAPS forecast model has been developed by NRL in anticipation of a FNMOC C90 replacement in about 2001, as well as for current NOGAPS research requirements to run on DOD High-Performance Computing (HPC) scalable systems. The model is designed to run with message passing (MPI). Model design criteria include bit reproducibility for different processor numbers and reasonably efficient performance on fully shared memory, distributed memory, and distributed shared memory systems for a wide range of model resolutions. Results for a wide range of processor numbers, model resolutions, and different vendor architectures are presented. Single node performance has been disappointing on RISC based systems, at least compared to vector processor performance. This is a common complaint, and will require careful re-examination of traditional numerical weather prediction (NWP) model software design and data organization to fully exploit future scalable architectures.

  9. Changes in spring-mass model characteristics during repeated running sprints.

    Science.gov (United States)

    Girard, Olivier; Micallef, Jean-Paul; Millet, Grégoire P

    2011-01-01

    This study investigated fatigue-induced changes in spring-mass model characteristics during repeated running sprints. Sixteen active subjects performed 12 × 40 m sprints interspersed with 30 s of passive recovery. Vertical and anterior-posterior ground reaction forces were measured at 5-10 m and 30-35 m and used to determine spring-mass model characteristics. Contact time (P < 0.05) increased and stride frequency (P < 0.05) decreased with time. As a result, vertical stiffness decreased (P < 0.05), whereas leg stiffness remained constant (P > 0.05). Changes in vertical stiffness were correlated (r > 0.7; P < 0.05) with changes in stride frequency. When compared to 5-10 m, most ground reaction force-related parameters were higher (P < 0.05) at 30-35 m, whereas stride frequency, vertical and leg stiffness were lower (P < 0.05). In conclusion, vertical stiffness deteriorates when run-based sprints are repeated, which alters impact parameters. Maintaining faster stride frequencies through retaining higher vertical stiffness is a prerequisite to improve performance during repeated sprinting.
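
    Spring-mass characteristics of this kind are typically derived from peak vertical ground reaction force, centre-of-mass displacement, contact time, and running speed. The sketch below computes vertical and leg stiffness with a McMahon-Cheng-style leg-compression geometry; all numbers are hypothetical, not the study's data:

```python
import math

# Hypothetical stride data (not from the study)
F_max = 2000.0  # peak vertical ground reaction force, N
dy = 0.05       # peak downward displacement of the centre of mass, m
L0 = 0.95       # standing leg length, m
v = 6.0         # forward running speed, m/s
t_c = 0.18      # ground contact time, s

# Vertical stiffness: peak force over centre-of-mass displacement
k_vert = F_max / dy

# Leg stiffness: peak force over leg compression; the leg sweeps half the
# contact distance (v * t_c / 2) on either side of vertical
half_sweep = v * t_c / 2.0
dL = dy + L0 - math.sqrt(L0**2 - half_sweep**2)
k_leg = F_max / dL
print(round(k_vert), round(k_leg))
```

    With these definitions, a longer contact time at the same speed increases the swept angle and hence the leg compression, which is one route by which fatigue lowers stiffness values.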

  10. Following an Optimal Batch Bioreactor Operations Model

    DEFF Research Database (Denmark)

    Ibarra-Junquera, V.; Jørgensen, Sten Bay; Virgen-Ortíz, J.J.;

    2012-01-01

    The problem of following an optimal batch operation model for a bioreactor in the presence of uncertainties is studied. The optimal batch bioreactor operation model (OBBOM) refers to the bioreactor trajectory for nominal cultivation to be optimal. A multiple-variable dynamic optimization of fed-b...

  11. Why operational risk modelling creates inverse incentives

    NARCIS (Netherlands)

    Doff, R.

    2015-01-01

    Operational risk modelling has become commonplace in large international banks and is gaining popularity in the insurance industry as well. This is partly due to financial regulation (Basel II, Solvency II). This article argues that operational risk modelling is fundamentally flawed, despite efforts

  12. Tsunami generation, propagation, and run-up with a high-order Boussinesq model

    DEFF Research Database (Denmark)

    Fuhrman, David R.; Madsen, Per A.

    2009-01-01

    In this work we extend a high-order Boussinesq-type (finite difference) model, capable of simulating waves out to wavenumber times depth kh tsunamis. The extension is straightforward, requiring only...... show that the long-time (fully nonlinear) evolution of waves resulting from an upthrusted bottom can eventually result in true solitary waves, consistent with theoretical predictions. It is stressed, however, that the nonlinearity used far exceeds that typical of geophysical tsunamis in the open ocean....... The Boussinesq-type model is then used to simulate numerous tsunami-type events generated from submerged landslides, in both one and two horizontal dimensions. The results again compare well against previous experiments and/or numerical simulations. The new extension complements recently developed run...

  13. A two-runners model: optimization of running strategies according to the physiological parameters

    CERN Document Server

    Aftalion, Amandine

    2015-01-01

    In order to describe the velocity and the anaerobic energy of two runners competing against each other for middle-distance races, we present a mathematical model relying on an optimal control problem for a system of ordinary differential equations. The model is based on energy conservation and on Newton's second law: resistive forces, propulsive forces and variations in the maximal oxygen uptake are taken into account. The interaction between the runners provides a minimum for staying one meter behind one's competitor. We perform numerical simulations and show how a runner can win a race against someone stronger by taking advantage of staying behind, or how he can improve his personal record by running behind someone else. Our simulations show when it is the best time to overtake, depending on the difference between the athletes. Finally, we compare our numerical results with real data from the men's 1500-m finals of different competitions.

  14. Simple, efficient allocation of modelling runs on heterogeneous clusters with MPI

    Science.gov (United States)

    Donato, David I.

    2017-01-01

    In scientific modelling and computation, the choice of an appropriate method for allocating tasks for parallel processing depends on the computational setting and on the nature of the computation. The allocation of independent but similar computational tasks, such as modelling runs or Monte Carlo trials, among the nodes of a heterogeneous computational cluster is a special case that has not been specifically evaluated previously. A simulation study shows that a method of on-demand (that is, worker-initiated) pulling from a bag of tasks in this case leads to reliably short makespans for computational jobs despite heterogeneity both within and between cluster nodes. A simple reference implementation in the C programming language with the Message Passing Interface (MPI) is provided.
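
    The worker-initiated pulling described above is straightforward to sketch. The paper's reference implementation is in C with MPI; the version below uses Python threads and a shared queue purely to illustrate the same pull pattern, in which each worker takes a new task the moment it finishes the previous one, so faster workers naturally process more tasks:

```python
import queue
import threading

def worker(task_q, results, lock):
    # Worker-initiated ("pull") scheduling: keep taking tasks until the bag is empty
    while True:
        try:
            task = task_q.get_nowait()
        except queue.Empty:
            return
        result = task * task          # stand-in for one modelling run
        with lock:
            results[task] = result

def run_bag_of_tasks(n_tasks=20, n_workers=4):
    task_q = queue.Queue()
    for t in range(n_tasks):
        task_q.put(t)                 # fill the bag of independent tasks
    results, lock = {}, threading.Lock()
    threads = [threading.Thread(target=worker, args=(task_q, results, lock))
               for _ in range(n_workers)]
    for th in threads:
        th.start()
    for th in threads:
        th.join()
    return results

print(len(run_bag_of_tasks()))  # 20
```

    Because allocation is demand-driven rather than fixed in advance, a slow worker simply completes fewer tasks; no static partitioning of the task list across heterogeneous nodes is needed.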

  15. Effects of intermediate scales on renormalization group running of fermion observables in an SO(10) model

    CERN Document Server

    Meloni, Davide; Riad, Stella

    2014-01-01

    In the context of non-supersymmetric SO(10) models, we analyze the renormalization group equations for the fermions (including neutrinos) from the GUT energy scale down to the electroweak energy scale, explicitly taking into account the effects of an intermediate energy scale induced by a Pati--Salam gauge group. To determine the renormalization group running, we use a numerical minimization procedure based on a nested sampling algorithm that randomly generates the values of 19 model parameters at the GUT scale, evolves them, and finally constructs the values of the physical observables and compares them to the existing experimental data at the electroweak scale. We show that the evolved fermion masses and mixings present sizable deviations from the values obtained without including the effects of the intermediate scale.

  16. Effects of intermediate scales on renormalization group running of fermion observables in an SO(10) model

    Science.gov (United States)

    Meloni, Davide; Ohlsson, Tommy; Riad, Stella

    2014-12-01

    In the context of non-supersymmetric SO(10) models, we analyze the renormalization group equations for the fermions (including neutrinos) from the GUT energy scale down to the electroweak energy scale, explicitly taking into account the effects of an intermediate energy scale induced by a Pati-Salam gauge group. To determine the renormalization group running, we use a numerical minimization procedure based on a nested sampling algorithm that randomly generates the values of 19 model parameters at the GUT scale, evolves them, and finally constructs the values of the physical observables and compares them to the existing experimental data at the electroweak scale. We show that the evolved fermion masses and mixings present sizable deviations from the values obtained without including the effects of the intermediate scale.
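
    The qualitative effect of an intermediate scale can be seen already in a one-loop toy calculation: the inverse coupling runs linearly in ln(mu), with a slope that changes at the threshold. The coefficients and scales below are invented for illustration and are not the paper's two-loop SO(10) analysis:

```python
import math

def run_alpha_inv(alpha_inv0, b, mu0, mu):
    # One-loop RGE: d(alpha^-1)/d ln(mu) = -b/(2*pi), so alpha^-1 is linear in ln(mu)
    return alpha_inv0 - b / (2 * math.pi) * math.log(mu / mu0)

# Invented coefficients and scales (GeV), purely illustrative:
M_GUT, M_INT, M_Z = 1e16, 1e11, 91.19
alpha_inv_gut = 25.0

# Running straight down with a single coefficient b = 4.1 ...
direct = run_alpha_inv(alpha_inv_gut, 4.1, M_GUT, M_Z)

# ... versus a Pati-Salam-like intermediate stage with a different coefficient
# (b = 3.0) above M_INT, switching to b = 4.1 below it:
stage1 = run_alpha_inv(alpha_inv_gut, 3.0, M_GUT, M_INT)
with_threshold = run_alpha_inv(stage1, 4.1, M_INT, M_Z)

print(round(direct, 2), round(with_threshold, 2))
```

    The two trajectories reach visibly different low-scale values, which is the one-loop analogue of the "sizable deviations" the abstract reports for the fermion observables.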

  17. Minkowski space pion model inspired by lattice QCD running quark mass

    Science.gov (United States)

    Mello, Clayton S.; de Melo, J. P. B. C.; Frederico, T.

    2017-03-01

    The pion structure in Minkowski space is described in terms of an analytic model of the Bethe-Salpeter amplitude combined with Euclidean Lattice QCD results. The model is physically motivated to take into account the running quark mass, which is fitted to Lattice QCD data. The pion pseudoscalar vertex is associated to the quark mass function, as dictated by dynamical chiral symmetry breaking requirements in the limit of vanishing current quark mass. The quark propagator is analyzed in terms of a spectral representation, and it shows a violation of the positivity constraints. The integral representation of the pion Bethe-Salpeter amplitude is also built. The pion space-like electromagnetic form factor is calculated with a quark electromagnetic current, which satisfies the Ward-Takahashi identity to ensure current conservation. The results for the form factor and weak decay constant are found to be consistent with the experimental data.

  18. Classically conformal U(1 ) ' extended standard model, electroweak vacuum stability, and LHC Run-2 bounds

    Science.gov (United States)

    Das, Arindam; Oda, Satsuki; Okada, Nobuchika; Takahashi, Dai-suke

    2016-06-01

    We consider the minimal U(1 ) ' extension of the standard model (SM) with the classically conformal invariance, where an anomaly-free U(1 ) ' gauge symmetry is introduced along with three generations of right-handed neutrinos and a U(1 ) ' Higgs field. Since the classically conformal symmetry forbids all dimensional parameters in the model, the U(1 ) ' gauge symmetry is broken by the Coleman-Weinberg mechanism, generating the mass terms of the U(1 ) ' gauge boson (Z' boson) and the right-handed neutrinos. Through a mixing quartic coupling between the U(1 ) ' Higgs field and the SM Higgs doublet field, the radiative U(1 ) ' gauge symmetry breaking also triggers the breaking of the electroweak symmetry. In this model context, we first investigate the electroweak vacuum instability problem in the SM. Employing the renormalization group equations at the two-loop level and the central values for the world average masses of the top quark (mt=173.34 GeV ) and the Higgs boson (mh=125.09 GeV ), we perform parameter scans to identify the parameter region for resolving the electroweak vacuum instability problem. Next we interpret the recent ATLAS and CMS search limits at the LHC Run-2 for the sequential Z' boson to constrain the parameter region in our model. Combining the constraints from the electroweak vacuum stability and the LHC Run-2 results, we find a bound on the Z' boson mass as mZ'≳3.5 TeV . We also calculate self-energy corrections to the SM Higgs doublet field through the heavy states, the right-handed neutrinos and the Z' boson, and find the naturalness bound as mZ'≲7 TeV , in order to reproduce the right electroweak scale for the fine-tuning level better than 10%. The resultant mass range of 3.5 TeV ≲mZ'≲7 TeV will be explored at the LHC Run-2 in the near future.

  19. Operator algebra of orbifold models

    Energy Technology Data Exchange (ETDEWEB)

    Dijkgraaf, R.; Vafa, C.; Verlinde, E.; Verlinde, H.

    1989-07-01

    We analyze the chiral properties of (orbifold) conformal field theories which are obtained from a given conformal field theory by modding out by a finite symmetry group. For a class of orbifolds, we derive the fusion rules by studying the modular transformation properties of the one-loop characters. The results are illustrated with explicit calculations of toroidal and c=1 models.

  20. A comprehensive operational semantics of the SCOOP programming model

    CERN Document Server

    Morandi, Benjamin; Meyer, Bertrand

    2011-01-01

    Operational semantics has established itself as a flexible but rigorous means to describe the meaning of programming languages. Oftentimes, it is felt necessary to keep a semantics small, for example to facilitate its use for model checking by avoiding state space explosion. However, omitting many details in a semantics typically makes results valid for a limited core language only, leaving a wide gap towards any real implementation. In this paper we present a full-fledged semantics of the concurrent object-oriented programming language SCOOP (Simple Concurrent Object-Oriented Programming). The semantics has been found detailed enough to guide an implementation of the SCOOP compiler and runtime system, and to detect and correct a variety of errors and ambiguities in the original informal specification and prototype implementation. In our formal specification, we use abstract data types with preconditions and axioms to describe the state, and introduce a number of special run-time operations to model the runti...

  1. Modelling of flexi-coil springs with rubber-metal pads in a locomotive running gear

    Directory of Open Access Journals (Sweden)

    Michálek T.

    2015-06-01

    Full Text Available Nowadays, flexi-coil springs are commonly used in the secondary suspension stage of railway vehicles. Lateral stiffness of these springs is influenced by means of their design parameters (number of coils, height, mean diameter of coils, wire diameter etc. and it is often suitable to modify this stiffness in such a way that the suspension shows various lateral stiffness in different directions (i.e., longitudinally vs. laterally in the vehicle-related coordinate system. Therefore, these springs are often supplemented with some kind of rubber-metal pads. This paper deals with modelling of the flexi-coil springs supplemented with tilting rubber-metal pads applied in running gear of an electric locomotive as well as with consequences of application of that solution of the secondary suspension from the point of view of the vehicle running performance. This analysis is performed by means of multi-body simulations and the description of lateral stiffness characteristics of the springs is based on results of experimental measurements of these characteristics performed in heavy laboratories of the Jan Perner Transport Faculty of the University of Pardubice.

  2. Modelled operation of the Shetland Islands power system comparing computational and human operators' load forecasts

    Energy Technology Data Exchange (ETDEWEB)

    Hill, D.C. [University Coll. of North Wales, Menai Bridge (United Kingdom). School of Ocean Science; Infield, D.G. [Loughborough Univ. of Technology (United Kingdom). Dept. of Electronic and Electrical Engineering

    1995-11-01

    A load forecasting technique, based upon an autoregressive (AR) method is presented. Its use for short term load forecasting is assessed by direct comparison with real forecasts made by human operators of the Lerwick power station on the Shetland Islands. A substantial improvement in load prediction, as measured by a reduction of RMS error, is demonstrated. Shetland has a total installed capacity of about 68 MW, and an average load (1990) of around 20 MW. Although the operators could forecast the load for a few distinct hours better than the AR method, results from simulations of the scheduling and operation of the generating plant show that the AR forecasts provide increased overall system performance. A detailed model of the island power system, which includes plant scheduling, was run using the AR and Lerwick operators' forecasts as input to the scheduling routine. A reduction in plant cycling, underloading and fuel consumption was obtained using the AR forecasts rather than the operators' forecasts in simulations over a 28 day study period. It is concluded that the load forecasting method presented could be of benefit to the operators of such mesoscale power systems. (author)
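
    The abstract does not give the AR order or coefficients, so the following is only a minimal sketch of the idea: fit AR(p) coefficients to a load history by ordinary least squares and issue one-step-ahead forecasts. The load series, the order p = 4, and all numbers are hypothetical stand-ins, not the Shetland data.

```python
import numpy as np

def fit_ar(series, order):
    """Fit AR(p) coefficients by ordinary least squares (oldest lag first)."""
    X = np.column_stack([series[i:len(series) - order + i] for i in range(order)])
    y = series[order:]
    coeffs, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coeffs

def forecast_one_step(series, coeffs):
    """Forecast the next value from the last p observations."""
    order = len(coeffs)
    return float(np.dot(series[-order:], coeffs))

# Hypothetical half-hourly load (MW) oscillating around ~20 MW, as on Shetland
rng = np.random.default_rng(0)
t = np.arange(200)
load = 20 + 5 * np.sin(2 * np.pi * t / 48) + rng.normal(0, 0.3, t.size)

coeffs = fit_ar(load, order=4)
pred = forecast_one_step(load, coeffs)

# In-sample RMS error of one-step forecasts, the metric cited in the abstract
preds = np.array([forecast_one_step(load[:i], coeffs) for i in range(4, t.size)])
rms_error = float(np.sqrt(np.mean((load[4:] - preds) ** 2)))
```

    In the study, forecasts like `pred` would feed the plant-scheduling routine in place of the human operators' figures.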

  3. Operational Models of Infrastructure Resilience

    Science.gov (United States)

    2015-01-01

    following the loss of these components. Although our exposition sometimes personifies the attacker, we emphasize that our purpose is simply to discover...following a catastrophic event.” Reed et al.(67) present resilience scoring met- rics and build on the work of Haimes(58) in using input-output...to simplify exposition —we have included much more complicated investment considerations in other such models), where it “costs” one unit of de- fense

  4. Probabilistic landslide run-out assessment with a 2-D dynamic numerical model using a Monte Carlo method

    Science.gov (United States)

    Cepeda, Jose; Luna, Byron Quan; Nadim, Farrokh

    2013-04-01

    An essential component of a quantitative landslide hazard assessment is establishing the extent of the endangered area. This task requires accurate prediction of the run-out behaviour of a landslide, which includes the estimation of the run-out distance, run-out width, velocities, pressures, and depth of the moving mass and the final configuration of the deposits. One approach to run-out modelling is to reproduce accurately the dynamics of the propagation processes. A number of dynamic numerical models are able to compute the movement of the flow over irregular topographic terrains (3-D) controlled by a complex interaction between mechanical properties that may vary in space and time. Given the number of unknown parameters and the fact that most of the rheological parameters cannot be measured in the laboratory or field, the parametrization of run-out models is very difficult in practice. For this reason, the application of run-out models is mostly used for back-analysis of past events and very few studies have attempted to achieve forward predictions. Consequently all models are based on simplified descriptions that attempt to reproduce the general features of the failed mass motion through the use of parameters (mostly controlling shear stresses at the base of the moving mass) which account for aspects not explicitly described or oversimplified. The uncertainties involved in the run-out process have to be approached in a stochastic manner. It is of significant importance to develop methods for quantifying and properly handling the uncertainties in dynamic run-out models, in order to allow a more comprehensive approach to quantitative risk assessment. A method was developed to compute the variation in run-out intensities by using a dynamic run-out model (MassMov2D) and a probabilistic framework based on a Monte Carlo simulation in order to analyze the effect of the uncertainty of input parameters. The probability density functions of the rheological parameters

  5. RUN COORDINATION

    CERN Multimedia

    G. Rakness.

    2013-01-01

    After three years of running, in February 2013 the era of sub-10-TeV LHC collisions drew to an end. Recall, the 2012 run had been extended by about three months to achieve the full complement of high-energy and heavy-ion physics goals prior to the start of Long Shutdown 1 (LS1), which is now underway. The LHC performance during these exciting years was excellent, delivering a total of 23.3 fb–1 of proton-proton collisions at a centre-of-mass energy of 8 TeV, 6.2 fb–1 at 7 TeV, and 5.5 pb–1 at 2.76 TeV. They also delivered 170 μb–1 of lead-lead collisions at 2.76 TeV/nucleon and 32 nb–1 of proton-lead collisions at 5 TeV/nucleon. During these years the CMS operations teams and shift crews made tremendous strides to commission the detector, repeatedly stepping up to meet the challenges at every increase of instantaneous luminosity and energy. Although it does not fully cover the achievements of the teams, a way to quantify their success is the fact that...

  6. Prosthetic model, but not stiffness or height, affects the metabolic cost of running for athletes with unilateral transtibial amputations.

    Science.gov (United States)

    Beck, Owen N; Taboga, Paolo; Grabowski, Alena M

    2017-07-01

    Running-specific prostheses enable athletes with lower limb amputations to run by emulating the spring-like function of biological legs. Current prosthetic stiffness and height recommendations aim to mitigate kinematic asymmetries for athletes with unilateral transtibial amputations. However, it is unclear how different prosthetic configurations influence the biomechanics and metabolic cost of running. Consequently, we investigated how prosthetic model, stiffness, and height affect the biomechanics and metabolic cost of running. Ten athletes with unilateral transtibial amputations each performed 15 running trials at 2.5 or 3.0 m/s while we measured ground reaction forces and metabolic rates. Athletes ran using three different prosthetic models with five different stiffness category and height combinations per model. Use of an Ottobock 1E90 Sprinter prosthesis reduced metabolic cost by 4.3 and 3.4% compared with use of Freedom Innovations Catapult [fixed effect (β) = -0.177; P forces, prolonged ground contact times (β = -4.349; P = 0.012), and decreased leg stiffness (β = 0.071; P forces (β = 0.007; P = 0.003) but was unrelated to stride kinematic symmetry (P ≥ 0.636). Therefore, prosthetic recommendations based on symmetric stride kinematics do not necessarily minimize the metabolic cost of running. Instead, an optimal prosthetic model, which improves overall biomechanics, minimizes the metabolic cost of running for athletes with unilateral transtibial amputations.NEW & NOTEWORTHY The metabolic cost of running for athletes with unilateral transtibial amputations depends on prosthetic model and is associated with lower peak and stance average vertical ground reaction forces, longer contact times, and reduced leg stiffness. Metabolic cost is unrelated to prosthetic stiffness, height, and stride kinematic symmetry. Unlike nonamputees who decrease leg stiffness with increased in-series surface stiffness, biological limb stiffness for athletes with unilateral

  7. Analysis of the traditional vehicle’s running cost and the electric vehicle’s running cost under car-following model

    Science.gov (United States)

    Tang, Tie-Qiao; Xu, Ke-Wei; Yang, Shi-Chun; Shang, Hua-Yan

    2016-03-01

    In this paper, we use car-following theory to study the traditional vehicle’s running cost and the electric vehicle’s running cost. The numerical results illustrate that the traditional vehicle’s running cost is larger than that of the electric vehicle and that the system’s total running cost drops with the increase of the electric vehicle’s proportion, which shows that the electric vehicle is better than the traditional vehicle from the perspective of the running cost.
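
    The paper's cost functions are not given in the abstract; as an illustration only, here is a toy optimal-velocity car-following simulation with hypothetical per-kilometre energy consumptions and prices, showing how a running-cost comparison between a traditional vehicle and an electric vehicle could be set up.

```python
import numpy as np

def optimal_velocity(gap, v_max=30.0, gap_c=25.0):
    """A common optimal-velocity choice for car-following models."""
    return v_max * (np.tanh(gap / gap_c - 1.0) + np.tanh(1.0)) / (1.0 + np.tanh(1.0))

# Simulate a small platoon under optimal-velocity dynamics
n_cars, dt, steps = 5, 0.1, 2000
alpha = 0.5                            # driver sensitivity (1/s)
x = np.arange(n_cars)[::-1] * 30.0     # positions, 30 m initial spacing
v = np.full(n_cars, 15.0)              # initial speeds (m/s)

for _ in range(steps):
    gaps = np.empty(n_cars)
    gaps[1:] = x[:-1] - x[1:]
    gaps[0] = 30.0                     # leader follows a fixed virtual gap
    v = np.maximum(v + alpha * (optimal_velocity(gaps) - v) * dt, 0.0)
    x = x + v * dt

distance_km = x[-1] / 1000.0           # distance covered by the last car

# Hypothetical unit consumptions and prices (illustrative, not from the paper):
ice_cost = distance_km * 0.08 * 1.6    # 0.08 L/km of fuel at 1.6 $/L
ev_cost = distance_km * 0.15 * 0.12    # 0.15 kWh/km at 0.12 $/kWh
```

    With these placeholder prices the electric vehicle's running cost comes out lower, matching the qualitative conclusion of the paper.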

  8. Comparing models of offensive cyber operations

    CSIR Research Space (South Africa)

    Grant, T

    2012-03-01

    Full Text Available (CSIR), Pretoria, South Africa tj.grant@nlda.nl iburke@csir.co.za rvhheerden@csir.co.za Abstract: Cyber operations denote the response of governments and organisations to cyber crime, terrorism, and warfare. To date, cyber operations have been.... This could include responding to an (impending) attack by counter-attacking or by proactively neutralizing the source of an impending attack. A good starting point to improving understanding would be to model the offensive cyber operations process...

  9. Towards a numerical run-out model for quick-clay slides

    Science.gov (United States)

    Issler, Dieter; L'Heureux, Jean-Sébastien; Cepeda, José M.; Luna, Byron Quan; Gebreslassie, Tesfahunegn A.

    2015-04-01

    Highly sensitive glacio-marine clays occur in many relatively low-lying areas near the coasts of eastern Canada, Scandinavia and northern Russia. If the load exceeds the yield stress of these clays, they quickly liquefy, with a reduction of the yield strength and the viscosity by several orders of magnitude. Leaching, fluvial erosion, earthquakes and man-made overloads, by themselves or combined, are the most frequent triggers of quick-clay slides, which are hard to predict and can attain catastrophic dimensions. The present contribution reports on two preparatory studies that were conducted with a view to creating a run-out model tailored to the characteristics of quick-clay slides. One study analyzed the connections between the morphological and geotechnical properties of more than 30 well-documented Norwegian quick-clay slides and their run-out behavior. The laboratory experiments by Locat and Demers (1988) suggest that the behavior of quick clays can be reasonably described by universal relations involving the liquidity index, plastic index, remolding energy, salinity and sensitivity. However, these tests should be repeated with Norwegian clays and analyzed in terms of a (shear-thinning) Herschel-Bulkley fluid rather than a Bingham fluid because the shear stress appears to grow in a sub-linear fashion with the shear rate. Further study is required to understand the discrepancy between the material parameters obtained in laboratory tests of material from observed slides and in back-calculations of the same slides with the simple model by Edgers & Karlsrud (1982). The second study assessed the capability of existing numerical flow models to capture the most important aspects of quick-clay slides by back-calculating three different, well documented events in Norway: Rissa (1978), Finneidfjord (1996) and Byneset (2012). The numerical codes were (i) BING, a quasi-two-dimensional visco-plastic model, (ii) DAN3D (2009 version), and (iii) MassMov2D. The latter two are
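
    The Bingham versus Herschel-Bulkley distinction drawn above can be made concrete: Bingham stress grows linearly above the yield stress, while Herschel-Bulkley stress with exponent n < 1 grows sub-linearly with shear rate, as the laboratory data suggest. The parameter values below are illustrative, not measured quick-clay properties.

```python
import numpy as np

def bingham_stress(gamma_dot, tau_y, mu_p):
    """Bingham fluid: tau = tau_y + mu_p * gamma_dot (linear above yield)."""
    return tau_y + mu_p * gamma_dot

def herschel_bulkley_stress(gamma_dot, tau_y, K, n):
    """Herschel-Bulkley fluid: tau = tau_y + K * gamma_dot**n
    (shear-thinning, i.e. sub-linear growth, when n < 1)."""
    return tau_y + K * gamma_dot ** n

# Illustrative parameter values only:
gamma_dot = np.linspace(0.1, 50.0, 200)   # shear rate, 1/s
tau_b = bingham_stress(gamma_dot, tau_y=50.0, mu_p=2.0)
tau_hb = herschel_bulkley_stress(gamma_dot, tau_y=50.0, K=8.0, n=0.4)
```

    Successive stress increments of the Herschel-Bulkley curve shrink with increasing shear rate, whereas the Bingham increments stay constant; this is the sub-linear behaviour the abstract attributes to quick clays.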

  10. Non-linear structure formation in the `Running FLRW' cosmological model

    Science.gov (United States)

    Bibiano, Antonio; Croton, Darren J.

    2016-07-01

    We present a suite of cosmological N-body simulations describing the `Running Friedmann-Lemaître-Robertson-Walker' (R-FLRW) cosmological model. This model is based on quantum field theory in a curved space-time and extends Lambda cold dark matter (ΛCDM) with a time-evolving vacuum density, Λ(z), and time-evolving gravitational Newton's coupling, G(z). In this paper, we review the model and introduce the necessary analytical treatment needed to adapt a reference N-body code. Our resulting simulations represent the first realization of the full growth history of structure in the R-FLRW cosmology into the non-linear regime, and our normalization choice makes them fully consistent with the latest cosmic microwave background data. The post-processing data products also allow, for the first time, an analysis of the properties of the halo and sub-halo populations. We explore the degeneracies of many statistical observables and discuss the steps needed to break them. Furthermore, we provide a quantitative description of the deviations of R-FLRW from ΛCDM, which could be readily exploited by future cosmological observations to test and further constrain the model.

  11. A comparison between conventional and LANDSAT based hydrologic modeling: The Four Mile Run case study

    Science.gov (United States)

    Ragan, R. M.; Jackson, T. J.; Fitch, W. N.; Shubinski, R. P.

    1976-01-01

    Models designed to support the hydrologic studies associated with urban water resources planning require input parameters that are defined in terms of land cover. Estimating the land cover is a difficult and expensive task when drainage areas larger than a few sq. km are involved. Conventional and LANDSAT based methods for estimating the land cover based input parameters required by hydrologic planning models were compared in a case study of the 50.5 sq. km (19.5 sq. mi) Four Mile Run Watershed in Virginia. Results of the study indicate that the LANDSAT based approach is highly cost effective for planning model studies. The conventional approach to define inputs was based on 1:3600 aerial photos, required 110 man-days and a total cost of $14,000. The LANDSAT based approach required 6.9 man-days and cost $2,350. The conventional and LANDSAT based models gave similar results relative to discharges and estimated annual damages expected from no flood control, channelization, and detention storage alternatives.

  12. Development of a simulation model for compression ignition engine running with ignition improved blend

    Directory of Open Access Journals (Sweden)

    Sudeshkumar Ponnusamy Moranahalli

    2011-01-01

    Full Text Available Department of Automobile Engineering, Anna University, Chennai, India. The present work describes the thermodynamic and heat transfer models used in a computer program that simulates a blend of diesel fuel and ignition improver, in order to predict the combustion and emission characteristics of a direct-injection compression ignition engine fuelled with the blend, using a classical two-zone approach. One zone consists of pure air (the non-burning zone); the other consists of fuel and combustion products (the burning zone). The first law of thermodynamics and state equations are applied in each of the two zones to yield cylinder temperature and pressure histories. Using the two-zone combustion model, the combustion parameters and the chemical equilibrium composition were determined. To validate the model, an experimental investigation was conducted on a single-cylinder direct-injection diesel engine fuelled with 12% by volume of 2-ethoxy ethanol blended with diesel fuel. Addition of the ignition improver to diesel fuel decreases exhaust smoke and increases thermal efficiency across the power outputs. Good agreement was observed between simulated and experimental results, and the proposed model requires low computational time for a complete run.
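
    The first-law bookkeeping described in the abstract can be illustrated with a much-simplified, single-zone sketch (an illustration only, not the paper's two-zone model): a Wiebe heat-release curve drives the in-cylinder pressure through dp = ((gamma - 1) dQ - gamma p dV) / V on a slider-crank volume trace. All engine dimensions, the heat input, and the Wiebe parameters are hypothetical.

```python
import numpy as np

R, gamma = 287.0, 1.35                 # J/(kg K), effective ratio of specific heats
bore, stroke, conrod, cr = 0.1, 0.11, 0.18, 17.0
Vd = np.pi / 4 * bore ** 2 * stroke    # displaced volume
Vc = Vd / (cr - 1)                     # clearance volume

def volume(theta_deg):
    """Cylinder volume from slider-crank kinematics (theta = 0 at TDC)."""
    th = np.radians(theta_deg)
    r = stroke / 2
    x = r * (1 - np.cos(th)) + conrod - np.sqrt(conrod ** 2 - (r * np.sin(th)) ** 2)
    return Vc + np.pi / 4 * bore ** 2 * x

def wiebe(theta, soc=-15.0, dur=40.0, a=5.0, m=2.0):
    """Cumulative burned-mass fraction (Wiebe function), illustrative parameters."""
    frac = np.clip((theta - soc) / dur, 0.0, 1.0)
    return 1.0 - np.exp(-a * frac ** (m + 1))

theta = np.linspace(-180, 180, 3601)   # crank angle, 0.1 degree steps
Q_total = 1200.0                       # J of heat released per cycle (illustrative)
p = np.empty_like(theta)
p[0] = 1.0e5                           # pressure at BDC, Pa

for i in range(1, theta.size):
    V1, V2 = volume(theta[i - 1]), volume(theta[i])
    dQ = Q_total * (wiebe(theta[i]) - wiebe(theta[i - 1]))
    # First law for an ideal gas in a closed cycle:
    p[i] = p[i - 1] + ((gamma - 1) * dQ - gamma * p[i - 1] * (V2 - V1)) / V1

p_max_bar = p.max() / 1e5
```

    The paper's two-zone treatment additionally tracks separate temperatures for the burning and non-burning zones; the single-zone sketch keeps only the pressure history.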

  13. Stochastic Modelling and Analysis of Warehouse Operations

    NARCIS (Netherlands)

    Y. Gong (Yeming)

    2009-01-01

    textabstractThis thesis has studied stochastic models and analysis of warehouse operations. After an overview of stochastic research in warehouse operations, we explore the following topics. Firstly, we search optimal batch sizes in a parallel-aisle warehouse with online order arrivals. We employ a

  14. Modeling Control Situations in Power System Operations

    DEFF Research Database (Denmark)

    Saleem, Arshad; Lind, Morten; Singh, Sri Niwas

    2010-01-01

    Increased interconnection and loading of the power system along with deregulation has brought new challenges for electric power system operation, control and automation. Traditional power system models used in intelligent operation and control are highly dependent on the task purpose. Thus, a model...... for intelligent operation and control must represent system features, so that information from measurements can be related to possible system states and to control actions. These general modeling requirements are well understood, but it is, in general, difficult to translate them into a model because of the lack...... of explicit principles for model construction. This paper presents a work on using explicit means-ends model based reasoning about complex control situations which results in maintaining consistent perspectives and selecting appropriate control action for goal driven agents. An example of power system...

  16. Short-run analysis of fiscal policy and the current account in a finite horizon model

    OpenAIRE

    Heng-fu Zou

    1995-01-01

    This paper utilizes a technique developed by Judd to quantify the short-run effects of fiscal policies and income shocks on the current account in a small open economy. It is found that: (1) a future increase in government spending improves the short-run current account; (2) a future tax increase worsens the short-run current account; (3) a present increase in the government spending worsens the short-run current account dollar by dollar, while a present increase in the income improves the cu...

  17. Equator To Pole in the Cretaceous: A Comparison of Clumped Isotope Data and CESM Model Runs

    Science.gov (United States)

    Petersen, S. V.; Tabor, C. R.; Meyer, K.; Lohmann, K. C.; Poulsen, C. J.; Carpenter, S. J.

    2015-12-01

    An outstanding issue in the field of paleoclimate is the inability of models to reproduce the shallower equator-to-pole temperature gradients suggested by proxies for past greenhouse periods. Here, we focus on the Late Cretaceous (Maastrichtian, 72-66 Ma), when estimated CO2 levels were ~400-1000 ppm. New clumped isotope temperature data from more than 10 sites spanning 65°S to 48°N are used to reconstruct the Maastrichtian equator-to-pole temperature gradient. These data are compared to CESM model simulations of the Maastrichtian, run using relevant paleogeography and atmospheric CO2 levels of 560 and 1120 ppm. Due to a reduced "proxy toolkit" this far in the past, much of our knowledge of Cretaceous climate comes from the oxygen isotope paleothermometer, which incorporates an assumption about the oxygen isotopic composition of seawater (δ18Osw), a quantity often related to salinity. With the clumped isotope paleothermometer, we can directly calculate δ18Osw. This will be used to test commonly applied assumptions about water composition, and will be compared to modeled ocean salinity. We also discuss basin-to-basin differences and their implications for paleo-circulation patterns.

  18. Non-linear structure formation in the "Running FLRW" cosmological model

    CERN Document Server

    Bibiano, Antonio

    2016-01-01

    We present a suite of cosmological N-body simulations describing the "Running Friedmann-Lemaître-Robertson-Walker" (R-FLRW) cosmological model. This model is based on quantum field theory in a curved space-time and extends ΛCDM with a time-evolving vacuum density, Λ(z), and time-evolving gravitational Newton's coupling, G(z). In this paper we review the model and introduce the necessary analytical treatment needed to adapt a reference N-body code. Our resulting simulations represent the first realisation of the full growth history of structure in the R-FLRW cosmology into the non-linear regime, and our normalisation choice makes them fully consistent with the latest cosmic microwave background data. The post-processing data products also allow, for the first time, an analysis of the properties of the halo and sub-halo populations. We explore the degeneracies of many statistical observables and discuss the steps needed to break them. Furthermore, we provide a quantitative description of the...

  19. A knowledge based model of electric utility operations. Final report

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1993-08-11

    This report consists of an appendix that provides documentation and a help capability for an analyst using the developed expert system of electric utility operations, which runs in CLIPS. This capability is provided through a separate package running under the WINDOWS operating system, keyed to display text, graphics, and mixed text and graphics that explain and elaborate on the specific decisions made within the knowledge-based expert system.

  20. On Demand Runs Of Mesoscale Models : Météo-France multi-mission, multi-support GUI

    Science.gov (United States)

    Periard, C.; Pourret, V.; Chaupin, D.

    2009-09-01

    Numerous experiment campaigns have shown the value of mesoscale models for representing atmospheric conditions in support of various applications, from electromagnetic propagation to wind power atlases. However, running mesoscale models requires a high level of computing and modelling expertise to define the parameters of a given simulation. With the increasing demand for mesoscale simulations, we decided to develop a GUI that makes it easy to define and run type-experiments (i) at any location on the globe, (ii) on different types of computers (from the Meteo-France Fujitsu to a PC cluster), and (iii) with different choices of forcing models. The GUI, developed in PHP, uses a map server to visualize the location of the experiment being defined and the forcing models available for the simulation. The other parameters, such as time steps, resolutions, and the sizes and number of embedded domains, can be modified through checkboxes or multiple-choice lists in the GUI. So far, the GUI has been used to run three different types of experiment: for EM propagation purposes, during an experiment campaign near Toulon, with simulations run on a PC cluster in analysis mode; for wind profile prediction in Afghanistan, with simulations run on the Fujitsu in forecast mode; and for weather forecasting during the F1 race in Japan, with simulations run on a PC cluster in forecast mode. In the presentation, I will first give some screen-prints of the different fill-in forms of the GUI and the way an experiment is defined. Then I will focus on the three examples mentioned above, showing the different types of graphs and maps produced. There are many other applications where this tool is going to be useful, especially in climatology: using weather-type classification and downscaling, the GUI will help run the simulations of the different cluster representatives. The last thing to accomplish is to find a name for the tool.

  1. Diel activity patterns of juvenile late fall-run Chinook salmon with implications for operation of a gated water diversion in the Sacramento–San Joaquin River Delta

    Science.gov (United States)

    Plumb, John M.; Adams, Noah S.; Perry, Russell W.; Holbrook, Christopher; Romine, Jason G.; Blake, Aaron R.; Burau, Jon R.

    2016-01-01

    In the Sacramento-San Joaquin River Delta, California, tidal forces that reverse river flows increase the proportion of water and juvenile late fall-run Chinook salmon diverted into a network of channels that were constructed to support agriculture and human consumption. This area is known as the interior delta, and it has been associated with poor fish survival. Under the rationale that the fish will be diverted in proportion to the amount of water that is diverted, the Delta Cross Channel (DCC) has been prescriptively closed during the winter out-migration to reduce fish entrainment and mortality into the interior delta. The fish are thought to migrate mostly at night, and so daytime operation of the DCC may allow for water diversion that minimizes fish entrainment and mortality. To assess this, the DCC gate was experimentally opened and closed while we released 2983 of the fish with acoustic transmitters upstream of the DCC to monitor their arrival and entrainment into the DCC. We used logistic regression to model night-time arrival and entrainment probabilities with covariates that included the proportion of each diel period with upstream flow, flow, rate of change in flow and water temperature. The proportion of time with upstream flow was the most important driver of night-time arrival probability, yet river flow had the largest effect on fish entrainment into the DCC. Modelling results suggest opening the DCC during daytime while keeping the DCC closed during night-time may allow for water diversion that minimizes fish entrainment into the interior delta.
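
    A minimal sketch of the logistic-regression setup described above, using synthetic data in place of the acoustic-telemetry detections; the covariates, true coefficients, and plain gradient-ascent fitting are illustrative assumptions only, not the study's model.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Synthetic stand-in data (the real study tagged 2983 fish):
rng = np.random.default_rng(1)
n = 1000
flow = rng.normal(0.0, 1.0, n)             # standardized river flow
prop_upstream = rng.uniform(0.0, 1.0, n)   # fraction of diel period with upstream flow
X = np.column_stack([np.ones(n), flow, prop_upstream])
true_beta = np.array([-1.0, 0.8, 1.5])     # illustrative coefficients
y = rng.random(n) < sigmoid(X @ true_beta) # simulated entrainment outcomes

# Fit by gradient ascent on the log-likelihood of the logistic model
beta = np.zeros(3)
for _ in range(5000):
    grad = X.T @ (y - sigmoid(X @ beta)) / n
    beta += 0.5 * grad
```

    In the study, fitted coefficients like these quantify how river flow and the proportion of upstream flow drive arrival and entrainment probabilities, which in turn supports the daytime-operation argument.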

  2. Vertex operators in solvable lattice models

    CERN Document Server

    Foda, O E; Miwa, T; Miki, K; Nakayashiki, A; Foda, Omar; Jimbo, Michio; Miwa, Tetsuji; Miki, Kei; Nakayashiki, Atsushi

    1994-01-01

    We formulate the basic properties of q-vertex operators in the context of the Andrews-Baxter-Forrester (ABF) series, as an example of face-interaction models, derive the q-difference equations satisfied by their correlation functions, and establish their connection with representation theory. We also discuss the q-difference equations of the Kashiwara-Miwa (KM) series, as an example of edge-interaction models. Next, the Ising model--the simplest special case of both ABF and KM series--is studied in more detail using the Jordan-Wigner fermions. In particular, all matrix elements of vertex operators are calculated.

  3. Assessing the debris flow run-out frequency of a catchment in the French Alps using a parameterization analysis with the RAMMS numerical run-out model

    NARCIS (Netherlands)

    Hussin, Y.A.; Quan Luna, B.; Van Westen, C.J.; Christen, M.; Malet, J.P.; Asch, Th.W.J. van

    2012-01-01

    Debris flows occurring in the European Alps frequently cause significant damage to settlements, power-lines and transportation infrastructure which has led to traffic disruptions, economic loss and even death. Estimating the debris flow run-out extent and the parameter uncertainty related to run-out

  4. Effects of independently altering body weight and mass on the energetic cost of a human running model.

    Science.gov (United States)

    Ackerman, Jeffrey; Seipel, Justin

    2016-03-21

    The mechanisms underlying the metabolic cost of running, and legged locomotion in general, remain to be well understood. Prior experimental studies show that the metabolic cost of human running correlates well with the vertical force generated to support body weight, the mechanical work done, and changes in the effective leg stiffness. Further, previous work shows that the metabolic cost of running decreases with decreasing body weight, increases with increasing body weight and mass, and does not significantly change with changing body mass alone. In the present study, we seek to uncover the basic mechanism underlying this existing experimental data. We find that an actuated spring-mass mechanism representing the effective mechanics of human running provides a mechanistic explanation for the previously reported changes in the metabolic cost of human running if the dimensionless relative leg stiffness (effective stiffness normalized by body weight and leg length) is regulated to be constant. The model presented in this paper provides a mechanical explanation for the changes in metabolic cost due to changing body weight and mass which have been previously measured experimentally and highlights the importance of active leg stiffness regulation during human running.
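
    The stiffness normalization at the heart of this result can be written out directly. The values below are hypothetical, not the study's data: if relative leg stiffness (effective stiffness normalized by body weight and leg length) is held constant while body weight rises, the absolute leg stiffness must scale with weight.

```python
def relative_leg_stiffness(k_leg, mass, leg_length, g=9.81):
    """Dimensionless relative stiffness: k_leg normalized by body weight per leg length."""
    return k_leg * leg_length / (mass * g)

# Baseline runner (illustrative values only)
m0, L0, k0 = 70.0, 0.95, 12_000.0   # mass (kg), leg length (m), leg stiffness (N/m)
k_rel = relative_leg_stiffness(k0, m0, L0)

# If body weight increases by 30% (e.g. added load) and relative stiffness is
# regulated to stay constant, the required absolute leg stiffness scales with weight:
k_loaded = k_rel * (1.3 * m0) * 9.81 / L0
```

    Constant relative stiffness under changing weight is the regulation hypothesis the actuated spring-mass model uses to explain the measured metabolic-cost trends.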

  5. Synthane Pilot Plant, South Park Township, Pennsylvania. Run report No. 2-DB: operating period September 1977--September 1978

    Energy Technology Data Exchange (ETDEWEB)

    1978-01-01

    This report covers the operation of the Synthane Coal Gasification Pilot Plant, South Park Township, Allegheny County, Pennsylvania from September 1977 through September 1978. The facility is owned by the United States Government and operated by C-E Lummus. Test Directive No. 2-DB directed that the plant be operated with Illinois No. 6 coal from the River King Mine of the Peabody Coal Company at a pressure of 600 psig. Concurrent pretreater/gasifier operation was to take place at coal feed rates from 1.5 to 2.5 tons/hour. Gas was produced for 182 hours and 1,100 tons of coal were fed to the pretreater and gasifier. Continuous operation of up to 56 hours and carbon conversions based on char of up to 72% were achieved. This successful operation demonstrates that coal gasification via the Synthane Process is viable. Additional data are required for the design of a commercial facility; however, the data obtained to date are adequate to recommend improvements and modifications to the Synthane Process Pilot Plant to increase on-stream time efficiency. The successful operation of the pilot plant with Illinois No. 6 coal demonstrates the feasibility of the Synthane Pilot Plant to process a caking type of coal. The ability to successfully pretreat a caking coal at high pressure in a plant of this size is a first and a direct result of the successful operation of the Synthane Process. Other similar type processes operated to date require pretreatment of a caking coal at atmospheric pressure with little or no recovery of the gases or heat produced during pretreatment.

  6. CORSICA modelling of ITER hybrid operation scenarios

    Science.gov (United States)

    Kim, S. H.; Bulmer, R. H.; Campbell, D. J.; Casper, T. A.; LoDestro, L. L.; Meyer, W. H.; Pearlstein, L. D.; Snipes, J. A.

    2016-12-01

    The hybrid operating mode observed in several tokamaks is characterized by further enhancement over the high plasma confinement (H-mode) associated with reduced magneto-hydro-dynamic (MHD) instabilities linked to a stationary flat safety factor (q) profile in the core region. The proposed ITER hybrid operation is currently aiming at operating for a long burn duration (>1000 s) with a moderate fusion power multiplication factor, Q, of at least 5. This paper presents candidate ITER hybrid operation scenarios developed using a free-boundary transport modelling code, CORSICA, taking all relevant physics and engineering constraints into account. The ITER hybrid operation scenarios have been developed by tailoring the 15 MA baseline ITER inductive H-mode scenario. Accessible operation conditions for ITER hybrid operation and achievable range of plasma parameters have been investigated considering uncertainties on the plasma confinement and transport. ITER operation capability for avoiding the poloidal field coil current, field and force limits has been examined by applying different current ramp rates, flat-top plasma currents and densities, and pre-magnetization of the poloidal field coils. Various combinations of heating and current drive (H&CD) schemes have been applied to study several physics issues, such as the plasma current density profile tailoring, enhancement of the plasma energy confinement and fusion power generation. A parameterized edge pedestal model based on EPED1 added to the CORSICA code has been applied to hybrid operation scenarios. Finally, fully self-consistent free-boundary transport simulations have been performed to provide information on the poloidal field coil voltage demands and to study the controllability with the ITER controllers. Extended from Proc. 24th Int. Conf. on Fusion Energy (San Diego, 2012) IT/P1-13.

  7. Quantitative assessment of changes in landslide risk using a regional scale run-out model

    Science.gov (United States)

    Hussin, Haydar; Chen, Lixia; Ciurean, Roxana; van Westen, Cees; Reichenbach, Paola; Sterlacchini, Simone

    2015-04-01

    The risk of landslide hazard continuously changes in time and space and is rarely a static or constant phenomenon in an affected area. However, one of the main challenges of quantitatively assessing changes in landslide risk is the availability of multi-temporal data for the different components of risk. Furthermore, a truly "quantitative" landslide risk analysis requires the modeling of the landslide intensity (e.g. flow depth, velocities or impact pressures) affecting the elements at risk. Such a quantitative approach is often lacking in medium to regional scale studies in the scientific literature or is left out altogether. In this research we modelled the temporal and spatial changes of debris flow risk in a narrow alpine valley in the North Eastern Italian Alps. The debris flow inventory from 1996 to 2011 and multi-temporal digital elevation models (DEMs) were used to assess the susceptibility of debris flow triggering areas and to simulate debris flow run-out using the Flow-R regional scale model. In order to determine debris flow intensities, we used a linear relationship that was found between back-calibrated physically based Flo-2D simulations (local scale models of five debris flows from 2003) and the probability values of the Flow-R software. This gave us the possibility to assign flow depth to a total of 10 separate classes on a regional scale. Debris flow vulnerability curves from the literature and one curve specifically for our case study area were used to determine the damage for different material and building types associated with the elements at risk. The building values were obtained from the Italian Revenue Agency (Agenzia delle Entrate) and were classified per cadastral zone according to the Real Estate Observatory data (Osservatorio del Mercato Immobiliare, Agenzia Entrate - OMI). The minimum and maximum market value for each building was obtained by multiplying the corresponding land-use value (€/m²) with the building area and number of floors.
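
    The risk calculation the abstract describes (hazard probability × vulnerability at the modelled flow depth × exposed building value) can be sketched as below. The linear vulnerability curve and all numbers are placeholders for illustration, not the curves or values used in the study.

```python
def building_risk(p_event, flow_depth, value_eur,
                  vulnerability=lambda d: min(1.0, 0.2 * d)):
    """Expected loss for one building: event probability times the
    vulnerability at the local flow depth times the exposed value.
    The linear vulnerability curve is a placeholder, not one of the
    curves used in the study."""
    return p_event * vulnerability(flow_depth) * value_eur

# e.g. a 1%-per-year debris flow, 2 m flow depth, EUR 500,000 building
loss = building_risk(0.01, 2.0, 500_000)
```

    Summing such per-building losses over all elements at risk, for each scenario return period, gives the kind of quantitative risk map the study produces.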

  8. Multiple-step model-experiment matching allows precise definition of dynamical leg parameters in human running.

    Science.gov (United States)

    Ludwig, C; Grimmer, S; Seyfarth, A; Maus, H-M

    2012-09-21

    The spring-loaded inverted pendulum (SLIP) model is a well established model for describing bouncy gaits like human running. The notion of spring-like leg behavior has led many researchers to compute the corresponding parameters, predominantly stiffness, in various experimental setups and in various ways. However, different methods yield different results, making the comparison between studies difficult. Further, a model simulation with experimentally obtained leg parameters typically results in comparatively large differences between model and experimental center of mass trajectories. Here, we pursue the opposite approach which is calculating model parameters that allow reproduction of an experimental sequence of steps. In addition, to capture energy fluctuations, an extension of the SLIP (ESLIP) is required and presented. The excellent match of the models with the experiment validates the description of human running by the SLIP with the obtained parameters which we hence call dynamical leg parameters.
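
    The SLIP stance phase the abstract refers to is a point mass on a massless linear spring leg pinned at the foot. A minimal simulation sketch (illustrative parameters, not the experiment-fitted ones from the paper) is:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative (not experiment-fitted) parameters: 80 kg runner, 20 kN/m leg
m, k, l0, g = 80.0, 20e3, 1.0, 9.81

def stance(t, s):
    """Planar SLIP stance dynamics; foot pinned at the origin."""
    x, y, vx, vy = s
    l = np.hypot(x, y)
    a = k * (l0 - l) / m          # spring acceleration along the leg axis
    return [vx, vy, a * x / l, a * y / l - g]

# Touchdown at a 68 deg leg angle, 5 m/s forward speed; the leg starts
# very slightly compressed so the takeoff event does not fire at t = 0.
alpha = np.deg2rad(68.0)
r0 = 0.999 * l0
s0 = [-r0 * np.cos(alpha), r0 * np.sin(alpha), 5.0, 0.0]

def takeoff(t, s):                # leg back at rest length => takeoff
    return np.hypot(s[0], s[1]) - l0
takeoff.terminal, takeoff.direction = True, 1.0

sol = solve_ivp(stance, [0.0, 1.0], s0, events=takeoff, rtol=1e-9, atol=1e-9)
```

    Fitting (k, l0, touchdown angle) so that the simulated trajectory reproduces a measured sequence of steps is, in essence, the model-experiment matching procedure the paper proposes.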

  9. MODELING OPERANT BEHAVIOR IN THE PARKINSONIAN RAT

    OpenAIRE

    Avila, Irene; Reilly, Mark P; Sanabria, Federico; Posadas-Sánchez, Diana; Chavez, Claudia L.; Banerjee, Nikhil; Killeen, Peter; Castañeda, Edward

    2008-01-01

    Mathematical principles of reinforcement (MPR; Killeen, 1994) is a quantitative model of operant behavior that contains 3 parameters representing motor capacity (δ), motivation (a), and short term memory (λ). The present study applied MPR to characterize the effects of bilateral infusions of 6-OHDA into the substantia nigra pars compacta in the rat, a model of Parkinson’s disease. Rats were trained to lever press under a 5-component fixed ratio (5, 15, 30, 60, and 100) schedule of food reinfo...

  10. Long-term running alleviates some behavioral and molecular abnormalities in Down syndrome mouse model Ts65Dn.

    Science.gov (United States)

    Kida, Elizabeth; Rabe, Ausma; Walus, Marius; Albertini, Giorgio; Golabek, Adam A

    2013-02-01

    Running may affect the mood, behavior and neurochemistry of running animals. In the present study, we investigated whether voluntary daily running, sustained over several months, might improve cognition and motor function and modify the brain levels of selected proteins (SOD1, DYRK1A, MAP2, APP and synaptophysin) in Ts65Dn mice, a mouse model for Down syndrome (DS). Ts65Dn and age-matched wild-type mice, all females, had free access to a running wheel either from the time of weaning (post-weaning cohort) or from around 7 months of age (adult cohort). Sedentary female mice were housed in similar cages, without running wheels. Behavioral testing and evaluation of motor performance showed that running improved cognitive function and motor skills in Ts65Dn mice. However, while a dramatic improvement in the locomotor functions and learning of motor skills was observed in Ts65Dn mice from both post-weaning and adult cohorts, improved object memory was seen only in Ts65Dn mice that had free access to the wheel from weaning. The total levels of APP and MAP2ab were reduced and the levels of SOD1 were increased in the runners from the post-weaning cohort, while only the levels of MAP2ab and α-cleaved C-terminal fragments of APP were reduced in the adult group in comparison with sedentary trisomic mice. Hence, our study demonstrates that Ts65Dn females benefit from sustained voluntary physical exercise, more prominently if running starts early in life, providing further support to the idea that a properly designed physical exercise program could be a valuable adjuvant to future pharmacotherapy for DS.

  11. Operation and modeling of the MOS transistor

    CERN Document Server

    Tsividis, Yannis

    2011-01-01

    Operation and Modeling of the MOS Transistor has become a standard in academia and industry. Extensively revised and updated, the third edition of this highly acclaimed text provides a thorough treatment of the MOS transistor - the key element of modern microelectronic chips.
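
    As a minimal illustration of the kind of device model treated in such a text, the classical long-channel square-law equations for an NMOS transistor can be coded directly (parameter values here are illustrative textbook defaults, not taken from the book):

```python
def mos_id(vgs, vds, vt=0.7, k=200e-6, w_over_l=10.0, lam=0.0):
    """Long-channel NMOS square-law drain current (textbook model).
    k = mu_n * Cox [A/V^2]; lam = channel-length modulation [1/V]."""
    vov = vgs - vt                                   # overdrive voltage
    if vov <= 0:
        return 0.0                                   # cutoff (subthreshold ignored)
    beta = k * w_over_l
    if vds < vov:                                    # triode region
        return beta * (vov * vds - vds**2 / 2)
    return 0.5 * beta * vov**2 * (1 + lam * vds)     # saturation
```

    Modern compact models add many effects this sketch omits (subthreshold conduction, mobility degradation, short-channel behavior), which is precisely the book's subject.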

  12. A simple running model with rolling contact and its role as a template for dynamic locomotion on a hexapod robot.

    Science.gov (United States)

    Huang, Ke-Jung; Huang, Chun-Kai; Lin, Pei-Chun

    2014-10-07

    We report on the development of a robot's dynamic locomotion based on a template which fits the robot's natural dynamics. The developed template is a low degree-of-freedom planar model for running with rolling contact, which we call rolling spring loaded inverted pendulum (R-SLIP). Originating from a reduced-order model of the RHex-style robot with compliant circular legs, the R-SLIP model also acts as the template for general dynamic running. The model has a torsional spring and a large circular arc as the distributed foot, so during locomotion it rolls on the ground with varied equivalent linear stiffness. This differs from the well-known spring loaded inverted pendulum (SLIP) model with fixed stiffness and ground contact points. Through dimensionless steps-to-fall and return map analysis, within a wide range of parameter spaces, the R-SLIP model is revealed to have self-stable gaits and a larger stability region than that of the SLIP model. The R-SLIP model is then embedded as the reduced-order 'template' in a more complex 'anchor', the RHex-style robot, via various mapping definitions between the template and the anchor. Experimental validation confirms that by merely deploying the stable running gaits of the R-SLIP model on the empirical robot with simple open-loop control strategy, the robot can easily initiate its dynamic running behaviors with a flight phase and can move with similar body state profiles to those of the model, in all five testing speeds. The robot, embedded with the SLIP model but performing walking locomotion, further confirms the importance of finding an adequate template of the robot for dynamic locomotion.

  13. Dark Matter Benchmark Models for Early LHC Run-2 Searches. Report of the ATLAS/CMS Dark Matter Forum

    Energy Technology Data Exchange (ETDEWEB)

    Abercrombie, Daniel [Massachusetts Inst. of Technology (MIT), Cambridge, MA (United States)]; et al.

    2015-07-06

    One of the guiding principles of this report is to channel the efforts of the ATLAS and CMS collaborations towards a minimal basis of dark matter models that should influence the design of the early Run-2 searches. At the same time, a thorough survey of realistic collider signals of Dark Matter is a crucial input to the overall design of the search program.

  14. An integrated model to assess critical rain fall thresholds for the critical run-out distances of debris flows

    NARCIS (Netherlands)

    van Asch, Th.W.J.; Tang, C.; Alkema, D.; Zhu, J.; Zhou, W.

    2013-01-01

    A dramatic increase in debris flows occurred in the years after the 2008 Wenchuan earthquake in SW China due to the deposition of loose co-seismic landslide material. This paper proposes a preliminary integrated model, which describes the relationship between rain input and debris flow run-out in or

  15. Business Intelligence Modeling in Launch Operations

    Science.gov (United States)

    Bardina, Jorge E.; Thirumalainambi, Rajkumar; Davis, Rodney D.

    2005-01-01

    This technology project is to advance an integrated Planning and Management Simulation Model for evaluation of risks, costs, and reliability of launch systems from Earth to Orbit for Space Exploration. The approach builds on research done in the NASA ARC/KSC developed Virtual Test Bed (VTB) to integrate architectural, operations process, and mission simulations for the purpose of evaluating enterprise level strategies to reduce cost, improve systems operability, and reduce mission risks. The objectives are to understand the interdependency of architecture and process on recurring launch cost of operations, provide management a tool for assessing systems safety and dependability versus cost, and leverage lessons learned and empirical models from Shuttle and International Space Station to validate models applied to Exploration. The systems-of-systems concept is built to balance the conflicting objectives of safety, reliability, and process strategy in order to achieve long term sustainability. A planning and analysis test bed is needed for evaluation of enterprise level options and strategies for transit and launch systems as well as surface and orbital systems. This environment can also support agency simulation-based acquisition process objectives. The technology development approach is based on the collaborative effort set forth in the VTB's integrating operations, process models, systems and environment models, and cost models as a comprehensive disciplined enterprise analysis environment. Significant emphasis is being placed on adapting root cause from existing Shuttle operations to exploration. Technical challenges include cost model validation, integration of parametric models with discrete event process and systems simulations, and large-scale simulation integration. The enterprise architecture is required for coherent integration of systems models. It will also require a plan for evolution over the life of the program. The proposed technology will produce

  16. Business intelligence modeling in launch operations

    Science.gov (United States)

    Bardina, Jorge E.; Thirumalainambi, Rajkumar; Davis, Rodney D.

    2005-05-01

    The future of business intelligence in space exploration will focus on the intelligent system-of-systems real-time enterprise. In present business intelligence, a number of technologies that are most relevant to space exploration are experiencing the greatest change. Emerging patterns of set of processes rather than organizational units leading to end-to-end automation is becoming a major objective of enterprise information technology. The cost element is a leading factor of future exploration systems. This technology project is to advance an integrated Planning and Management Simulation Model for evaluation of risks, costs, and reliability of launch systems from Earth to Orbit for Space Exploration. The approach builds on research done in the NASA ARC/KSC developed Virtual Test Bed (VTB) to integrate architectural, operations process, and mission simulations for the purpose of evaluating enterprise level strategies to reduce cost, improve systems operability, and reduce mission risks. The objectives are to understand the interdependency of architecture and process on recurring launch cost of operations, provide management a tool for assessing systems safety and dependability versus cost, and leverage lessons learned and empirical models from Shuttle and International Space Station to validate models applied to Exploration. The systems-of-systems concept is built to balance the conflicting objectives of safety, reliability, and process strategy in order to achieve long term sustainability. A planning and analysis test bed is needed for evaluation of enterprise level options and strategies for transit and launch systems as well as surface and orbital systems. This environment can also support agency simulation based acquisition process objectives. The technology development approach is based on the collaborative effort set forth in the VTB's integrating operations, process models, systems and environment models, and cost models as a comprehensive disciplined

  17. Convergent Validity of the One-Mile Run and PACER VO2MAX Prediction Models in Middle School Students

    Directory of Open Access Journals (Sweden)

    Ryan D. Burns

    2014-02-01

    FITNESSGRAM uses an equating method to convert Progressive Aerobic Cardiovascular Endurance Run (PACER) laps to One-mile run/walk (1MRW) times to estimate aerobic fitness (VO2MAX) in children. However, other prediction models can more directly estimate VO2MAX from PACER performance. The purpose of this study was to examine the convergent validity and relative accuracy between the 1MRW and various PACER models for predicting VO2MAX in middle school students. Aerobic fitness was assessed on 134 students utilizing the 1MRW and PACER on separate testing days. Pearson correlations, Bland–Altman plots, kappa statistics, proportion of agreement, and prediction error were used to assess associations and agreement among models. Correlation coefficients were strong (r ≥ .80), kappa statistics exceeded .40, and proportion of agreement exceeded .90. The results support that the PACER models display convergent validity and strong relative accuracy with the 1MRW model.
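
    The Bland–Altman agreement analysis mentioned above reduces to computing the mean difference (bias) between the two VO2MAX estimates and its 95% limits of agreement. A minimal sketch, with hypothetical values rather than the study's data:

```python
import numpy as np

def bland_altman(a, b):
    """Bias and 95% limits of agreement between two measurement methods."""
    d = np.asarray(a, float) - np.asarray(b, float)
    bias = d.mean()
    sd = d.std(ddof=1)
    return bias, bias - 1.96 * sd, bias + 1.96 * sd

# hypothetical VO2MAX estimates (ml/kg/min) from the two field tests
vo2_1mrw  = [42.0, 47.5, 39.0, 51.0]
vo2_pacer = [43.0, 46.5, 40.0, 50.0]
bias, lo, hi = bland_altman(vo2_1mrw, vo2_pacer)
```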

  18. Launching applications on compute and service processors running under different operating systems in scalable network of processor boards with routers

    Science.gov (United States)

    Tomkins, James L.; Camp, William J.

    2009-03-17

    A multiple processor computing apparatus includes a physical interconnect structure that is flexibly configurable to support selective segregation of classified and unclassified users. The physical interconnect structure also permits easy physical scalability of the computing apparatus. The computing apparatus can include an emulator which permits applications from the same job to be launched on processors that use different operating systems.

  19. Performance and Operation Experience of the ATLAS SemiConductor Tracker in LHC Run 1 (2009-2012)

    CERN Document Server

    Robichaud-Veronneau, A; The ATLAS collaboration

    2013-01-01

    After more than 3 years of successful operation at the LHC, we report on the operation and performance of the Semi-Conductor Tracker (SCT) functioning in a high luminosity, high radiation environment. The SCT is part of the ATLAS experiment at CERN and is constructed of 4088 silicon detector modules for a total of 6.3 million strips. Each module is designed, constructed and tested to operate as a stand-alone unit, mechanically, electrically, optically and thermally. The modules are mounted into two types of structures: one barrel (4 cylinders) and two end-cap systems (9 disks on each end of the barrel). The SCT silicon micro-strip sensors are processed in the planar p-in-n technology. The signals are processed in the front-end ABCD3TA ASICs, which use a binary readout architecture. Data is transferred to the off-detector readout electronics via optical fibers. We find 99.3% of the SCT modules are operational, noise occupancy and hit efficiency exceed the design specifications; the alignment is very close to t...

  20. Search for non-standard model signatures in the WZ/ZZ final state at CDF run II

    Energy Technology Data Exchange (ETDEWEB)

    Norman, Matthew [Univ. of California, San Diego, CA (United States)

    2009-01-01

    This thesis discusses a search for non-Standard Model physics in heavy diboson production in the dilepton-dijet final state, using 1.9 fb⁻¹ of data from the CDF Run II detector. New limits are set on the anomalous coupling parameters for ZZ and WZ production based on limiting the production cross-section at high ŝ. Additionally, limits are set on the direct decay of new physics to ZZ and WZ diboson pairs. The nature and parameters of the CDF Run II detector are discussed, as are the influences that it has on the methods of our analysis.

  1. On the duality between long-run relations and common trends in the I(1) versus I(2) model

    DEFF Research Database (Denmark)

    Juselius, Katarina

    1994-01-01

    Long-run relations and common trends are discussed in terms of the multivariate cointegration model given in the autoregressive and the moving average form. The basic results needed for the analysis of I(1) and I(2) processes are reviewed and the results applied to Danish monetary data. The test procedures reveal that nominal money stock is essentially I(2). Long-run price homogeneity is supported by the data and imposed on the system. It is found that the bond rate is weakly exogenous for the long-run parameters and therefore acts as a driving trend. Using the nonstationarity property of the data, "excess money" is estimated and its effect on the other determinants of the system is investigated. In particular, it is found that "excess money" has no effect on price inflation.

  2. Development and operational experience with topless wood gasifier running a 3. 75 kW diesel engine pumpset

    Energy Technology Data Exchange (ETDEWEB)

    Rajvanshi, A.K.; Joshi, M.S. (Nimbkar Agricultural Research Inst., Maharashtra (IN))

    1989-01-01

    Operational experience with a topless hybrid wood gasifier powering a 3.75 kW diesel engine pumpset is detailed. The gasifier-engine pumpset has logged 250 h of operation. The fuel was Leucaena leucocephala wood from trees 1-2 years old. Average diesel substitution varied between 50 and 78% depending on load. With increased load the diesel substitution decreases. On average the gasifier consumes 1.33 kg of wood and 125 ml of diesel to produce 1 kWh of mechanical energy for water pumping. Economic analysis reveals that at 60% diesel substitution and a wood cost of Rs 0.5/kg (1 US$=Rs 13.0), the gasifier system is economically comparable to stand-alone diesel water pumping systems. Reasons for slow propagation of this technology in rural areas are outlined. (author).
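
    The per-kWh fuel cost implied by the consumption figures above is straightforward arithmetic. The wood price is the one quoted in the abstract; the diesel price below is an assumed illustrative value, not from the paper:

```python
# Per-kWh fuel cost of the gasifier pumpset, from the abstract's figures:
# 1.33 kg wood + 125 ml diesel per kWh of mechanical energy.
wood_kg_per_kwh, diesel_l_per_kwh = 1.33, 0.125
wood_price_rs_per_kg = 0.5      # from the abstract
diesel_price_rs_per_l = 3.5     # ASSUMPTION: illustrative only, not from the paper

cost_rs_per_kwh = (wood_kg_per_kwh * wood_price_rs_per_kg
                   + diesel_l_per_kwh * diesel_price_rs_per_l)
```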

  3. The national operational environment model (NOEM)

    Science.gov (United States)

    Salerno, John J.; Romano, Brian; Geiler, Warren

    2011-06-01

    The National Operational Environment Model (NOEM) is a strategic analysis/assessment tool that provides insight into the complex state space (as a system) that is today's modern operational environment. The NOEM supports baseline forecasts by generating plausible futures based on the current state. It supports what-if analysis by forecasting ramifications of potential "Blue" actions on the environment. The NOEM also supports sensitivity analysis by identifying possible pressure (leverage) points in support of the Commander that resolve forecasted instabilities, and by ranking sensitivities in a list for each leverage point and response. The NOEM can be used to assist Decision Makers, Analysts and Researchers with understanding the inner workings of a region or nation state, the consequences of implementing specific policies, and the ability to plug in new operational environment theories/models as they mature. The NOEM is built upon an open-source, license-free set of capabilities, and aims to provide support for pluggable modules that make up a given model. The NOEM currently has an extensive number of modules (e.g. economic, security & social well-being pieces such as critical infrastructure) completed along with a number of tools to exercise them. The focus this year is on modeling the social and behavioral aspects of a populace within their environment, primarily the formation of various interest groups, their beliefs, their requirements, their grievances, their affinities, and the likelihood of a wide range of their actions, depending on their perceived level of security and happiness. As such, several research efforts are currently underway to model human behavior from a group perspective, in the pursuit of eventual integration and balance of populace needs/demands within their respective operational environment and the capacity to meet those demands. In this paper we will provide an overview of the NOEM, the need for it, and a description of its main components.

  4. Effects of Degree of Superheat on the Running Performance of an Organic Rankine Cycle (ORC) Waste Heat Recovery System for Diesel Engines under Various Operating Conditions

    Directory of Open Access Journals (Sweden)

    Kai Yang

    2014-04-01

    This study analyzed the variation law of engine exhaust energy under various operating conditions to improve the thermal efficiency and fuel economy of diesel engines. An organic Rankine cycle (ORC) waste heat recovery system with internal heat exchanger (IHE) was designed to recover waste heat from the diesel engine exhaust. The zeotropic mixture R416A was used as the working fluid for the ORC. Three evaluation indexes were presented as follows: waste heat recovery efficiency (WHRE), engine thermal efficiency increasing ratio (ETEIR), and output energy density of working fluid (OEDWF). In terms of various operating conditions of the diesel engine, this study investigated the variation tendencies of the running performances of the ORC waste heat recovery system and the effects of the degree of superheat on the running performance of the ORC waste heat recovery system through theoretical calculations. The research findings showed that the net power output, WHRE, and ETEIR of the ORC waste heat recovery system reach their maxima when the degree of superheat is 40 K, engine speed is 2200 r/min, and engine torque is 1200 N·m. OEDWF gradually increases with the increase in the degree of superheat, which indicates that the required mass flow rate of R416A decreases for a certain net power output, thereby significantly decreasing the risk of environmental pollution.

  5. Batch vs continuous-feeding operational mode for the removal of pesticides from agricultural run-off by microalgae systems: A laboratory scale study

    Energy Technology Data Exchange (ETDEWEB)

    Matamoros, Víctor, E-mail: victor.matamoros@idaea.csic.es; Rodríguez, Yolanda

    2016-05-15

    Highlights: • The effect of microalgae on the removal of pesticides has been evaluated. • Continuous feeding operational mode is more efficient for removing pesticides. • Microalgae increased the removal of some pesticides. • Pesticide TPs confirmed that biodegradation was relevant. - Abstract: Microalgae-based water treatment technologies have been used in recent years to treat different water effluents, but their effectiveness for removing pesticides from agricultural run-off has not yet been addressed. This paper assesses the effect of microalgae in pesticide removal, as well as the influence of different operation strategies (continuous vs batch feeding). The following pesticides were studied: mecoprop, atrazine, simazine, diazinone, alachlor, chlorfenvinphos, lindane, malathion, pentachlorobenzene, chlorpyrifos, endosulfan and clofibric acid (tracer). 2 L batch reactors and 5 L continuous reactors were spiked to 10 μg L⁻¹ of each pesticide. Additionally, three different hydraulic retention times (HRTs) were assessed (2, 4 and 8 days) in the continuous feeding reactors. The batch-feeding experiments demonstrated that the presence of microalgae increased the removal efficiency of lindane, alachlor and chlorpyrifos by 50%. The continuous feeding reactors had higher removal efficiencies than the batch reactors for pentachlorobenzene, chlorpyrifos and lindane. Whilst longer HRTs increased the technology's effectiveness, a low HRT of 2 days was capable of removing malathion, pentachlorobenzene, chlorpyrifos, and endosulfan by up to 70%. This study suggests that microalgae-based treatment technologies can be an effective alternative for removing pesticides from agricultural run-off.

  6. Evaluation of land surface model representation of phenology: an analysis of model runs submitted to the NACP Interim Site Synthesis

    Science.gov (United States)

    Richardson, A. D.; Nacp Interim Site Synthesis Participants

    2010-12-01

    Phenology represents a critical intersection point between organisms and their growth environment. It is for this reason that phenology is a sensitive and robust integrator of the biological impacts of year-to-year climate variability and longer-term climate change on natural systems. However, it is perhaps equally important that phenology, by controlling the seasonal activity of vegetation on the land surface, plays a fundamental role in regulating ecosystem processes, competitive interactions, and feedbacks to the climate system. Unfortunately, the phenological sub-models implemented in most state-of-the-art ecosystem models and land surface schemes are overly simplified. We quantified model errors in the representation of the seasonal cycles of leaf area index (LAI), gross ecosystem photosynthesis (GEP), and net ecosystem exchange of CO2. Our analysis was based on site-level model runs (14 different models) submitted to the North American Carbon Program (NACP) Interim Synthesis, and long-term measurements from 10 forested (5 evergreen conifer, 5 deciduous broadleaf) sites within the AmeriFlux and Fluxnet-Canada networks. Model predictions of the seasonality of LAI and GEP were unacceptable, particularly in spring, and especially for deciduous forests. This is despite an historical emphasis on deciduous forest phenology, and the perception that controls on spring phenology are better understood than autumn phenology. Errors of up to 25 days in predicting “spring onset” transition dates were common, and errors of up to 50 days were observed. For deciduous sites, virtually every model was biased towards spring onset being too early, and autumn senescence being too late. Thus, models predicted growing seasons that were far too long for deciduous forests. For most models, errors in the seasonal representation of deciduous forest LAI were highly correlated with errors in the seasonality of both GPP and NEE, indicating the importance of getting the underlying

  7. Robust Boolean Operation for Sculptured Models

    Institute of Scientific and Technical Information of China (English)

    2000-01-01

    To enhance the capability of current modeling systems, a unified representation is designed to represent wire-frame, surface, and solid models. We present an algorithm for Boolean operations between models under this representation. Accuracy, efficiency and robustness are the main considerations. The geometric information is represented with trimmed parametric patches and trimmed parametric splines. The topological information is represented with an extended half-edge data structure. In the process of intersection calculation, a hierarchical intersection method is applied for unified classification. The intersection curve is traced so as to overcome degenerate cases that occur frequently in practice. The algorithm has been implemented as the modeling kernel of a feature-based modeling system named GS-CAD98, which was developed on the Windows/NT platform.

  8. Method of Running Sines: Modeling Variability in Long-Period Variables

    CERN Document Server

    Andronov, Ivan L

    2013-01-01

    We review one of the complementary methods for time series analysis - the method of "running sines". "Crash tests" of the method include signals with a large period variation and with a large trend. The method is most effective for "nearly periodic" signals, which exhibit a "wavy shape" with a "cycle length" varying within a few dozen per cent (i.e. oscillations of low coherence). This is a typical case for brightness variations of long-period pulsating variables and resembles QPO (Quasi-Periodic Oscillations) and TPO (Transient Periodic Oscillations) in interacting binary stars - cataclysmic variables, symbiotic variables, low-mass X-ray binaries, etc. The general theory of "running approximations" was described by Andronov (1997A&AS..125..207A), one realization of which is the method of "running sines". The method is related to Morlet-type wavelet analysis improved for irregularly spaced data (Andronov, 1998KFNT...14..490A, 1999sss..conf...57A), as well as to a classical "running mean" (="moving average"). The ...
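
    The core of the method is a local least-squares fit of a sinusoid of fixed trial period within a window sliding over the (possibly irregularly spaced) observations. A minimal sketch of that idea, not the authors' full implementation:

```python
import numpy as np

def running_sine(t, y, period, half_width):
    """Fit y ~ a + b*cos(wt) + c*sin(wt) by least squares in a window
    centred on each observation time; returns the local fit at each point."""
    t, y = np.asarray(t, float), np.asarray(y, float)
    w = 2.0 * np.pi / period
    fit = np.full_like(y, np.nan)
    for i, t0 in enumerate(t):
        m = np.abs(t - t0) <= half_width
        if m.sum() < 3:
            continue                      # underdetermined window, skip
        A = np.column_stack([np.ones(m.sum()), np.cos(w * t[m]), np.sin(w * t[m])])
        a, b, c = np.linalg.lstsq(A, y[m], rcond=None)[0]
        fit[i] = a + b * np.cos(w * t0) + c * np.sin(w * t0)
    return fit
```

    Because the window is local, slow drifts of the mean level, amplitude and phase are absorbed by the slowly varying coefficients (a, b, c), which is what makes the method suitable for oscillations of low coherence.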

  9. Simulation of nonlinear wave run-up with a high-order Boussinesq model

    DEFF Research Database (Denmark)

    Fuhrman, David R.; Madsen, Per A.

    2008-01-01

    cases involving long wave resonance in a parabolic basin, solitary wave evolution in a triangular channel, and solitary wave run-up on a circular conical island are considered. In each case the computed results compare well against available analytical solutions or experimental measurements. The ability...

  10. Disease prediction models and operational readiness.

    Directory of Open Access Journals (Sweden)

    Courtney D Corley

    The objective of this manuscript is to present a systematic review of biosurveillance models that operate on select agents and can forecast the occurrence of a disease event. We define a disease event to be a biological event with focus on the One Health paradigm. These events are characterized by evidence of infection and/or disease condition. We reviewed models that attempted to predict a disease event, not merely its transmission dynamics, and we considered models involving pathogens of concern as determined by the US National Select Agent Registry (as of June 2011). We searched commercial and government databases and harvested Google search results for eligible models, using terms and phrases provided by public health analysts relating to biosurveillance, remote sensing, risk assessments, spatial epidemiology, and ecological niche modeling. After removal of duplications and extraneous material, a core collection of 6,524 items was established, and these publications along with their abstracts are presented in a semantic wiki at http://BioCat.pnnl.gov. As a result, we systematically reviewed 44 papers, and the results are presented in this analysis. We identified 44 models, classified as one or more of the following: event prediction (4), spatial (26), ecological niche (28), diagnostic or clinical (6), spread or response (9), and reviews (3). The model parameters (e.g., etiology, climatic, spatial, cultural) and data sources (e.g., remote sensing, non-governmental organizations, expert opinion, epidemiological) were recorded and reviewed. A component of this review is the identification of verification and validation (V&V) methods applied to each model, if any V&V method was reported. All models were classified as either having undergone Some Verification or Validation method, or No Verification or Validation. We close by outlining an initial set of operational readiness level guidelines for disease prediction models based upon established Technology

  11. Disease Prediction Models and Operational Readiness

    Energy Technology Data Exchange (ETDEWEB)

    Corley, Courtney D.; Pullum, Laura L.; Hartley, David M.; Benedum, Corey M.; Noonan, Christine F.; Rabinowitz, Peter M.; Lancaster, Mary J.

    2014-03-19

    INTRODUCTION: The objective of this manuscript is to present a systematic review of biosurveillance models that operate on select agents and can forecast the occurrence of a disease event. One of the primary goals of this research was to characterize the viability of biosurveillance models to provide operationally relevant information for decision makers and to identify areas for future research. Two critical characteristics differentiate this work from other infectious disease modeling reviews. First, we reviewed models that attempted to predict the disease event, not merely its transmission dynamics. Second, we considered models involving pathogens of concern as determined by the US National Select Agent Registry (as of June 2011). METHODS: We searched dozens of commercial and government databases and harvested Google search results for eligible models, utilizing terms and phrases provided by public health analysts relating to biosurveillance, remote sensing, risk assessments, spatial epidemiology, and ecological niche modeling. The publication dates of the search results returned are bounded by the dates of coverage of each database and the date on which the search was performed; all searching was completed by December 31, 2010. This returned 13,767 webpages and 12,152 citations. After de-duplication and removal of extraneous material, a core collection of 6,503 items was established, and these publications along with their abstracts are presented in a semantic wiki at http://BioCat.pnnl.gov. Next, PNNL's IN-SPIRE visual analytics software was used to cross-correlate these publications with the definition of a biosurveillance model, resulting in the selection of 54 documents that matched the criteria. Ten of these documents, however, dealt purely with disease spread models, inactivation of bacteria, or the modeling of human immune system responses to pathogens rather than predicting disease events. As a result, we systematically reviewed 44 papers and the

  12. Challenges in Mechanization Efforts of Small Diameter Eucalyptus Harvesting Operations with a Low Capacity Running Skyline Yarder in Southern China

    Directory of Open Access Journals (Sweden)

    Stephan Hoffmann

    2015-08-01

    Full Text Available This case study examines the performance of the Igland Hauler employed in small diameter Eucalyptus clear-cut operations in Guangxi, China. A yarding crew of eight persons was monitored by a snap-back elemental time study for 19.23 SMH (scheduled machine hours), covering 159 yarding cycles and a yarded log volume at the landing of 49.4 m³ solid over bark. A gross productivity of 2.50 m³/SMH and a net productivity of 5.06 m³/PMH0 (productive machine hours excluding delay times) were achieved, corresponding to a machine utilization rate of 49.5%. The costs of the yarder and associated overhead, together with the personnel costs of the eight-person crew, add up to extraction costs of 50.24 USD/m³. These high costs make it difficult to compete economically with the locally common manual extraction system as long as abundant labor at a low hourly rate is available in the region. Further performance improvement through skill development, as well as technical and organizational system modification, in conjunction with rising wages and a shrinking labor force in rural primary production, will determine the justification for employing such yarding systems. However, new silvicultural regimes with extended rotations and the supply requirements of the forest products industry in China demand new operational systems.

  13. Running and addiction: precipitated withdrawal in a rat model of activity-based anorexia

    OpenAIRE

    Kanarek, Robin B.; D'Anci, Kristen E.; Jurdak, Nicole; Mathes, Wendy Foulds

    2009-01-01

    Physical activity improves cardiovascular health, strengthens muscles and bones, stimulates neuroplasticity, and promotes feelings of well-being and self-esteem. However, when taken to extremes, exercise can develop into an addictive-like behavior. To further assess the addictive potential of physical activity, the present experiments assessed whether running wheel activity in rats would lead to physical dependence similar to that observed after chronic morphine administration. Active male an...

  14. MathRun: An Adaptive Mental Arithmetic Game Using A Quantitative Performance Model

    OpenAIRE

    Chen, L.; Tang, Wen

    2016-01-01

    Pedagogy and the way children learn are changing rapidly with the introduction of widely accessible computer technologies, from mobile apps to interactive educational games. Digital games have the capacity to embed many learning supports using the widely accredited VARK (visual, auditory, reading, and kinaesthetic) learning styles. In this paper, we present a mathematics educational game, MathRun, for children aged 7 to 11 years to practice mental arithmetic. We build the game as an inte...

  15. Treadmill running improves spatial memory in an animal model of Alzheimer's disease.

    Science.gov (United States)

    Hoveida, Reihaneh; Alaei, Hojjatallah; Oryan, Shahrbanoo; Parivar, Kazem; Reisi, Parham

    2011-01-01

    Alzheimer's disease (AD) is a progressive neurodegenerative disease characterized by a decline in cognitive function and severe neuronal loss in the cerebral cortex and certain subcortical regions of the brain, including the nucleus basalis magnocellularis (NBM), which plays an important role in learning and memory. Few therapeutic regimens influence the underlying pathogenic phenotypes of AD; however, of the currently available therapies, exercise training is considered one of the best strategies for attenuating the pathological phenotypes of the disease. Here, we sought to investigate the effect of treadmill running on spatial memory in Alzheimer-induced rats. Male Wistar rats were split into a sham group (n=7) and a lesion group, with the lesion group subdivided further into lesion-rest (n=7) and lesion-exercise (n=7) subgroups. The lesion-exercise and sham rats were subjected to treadmill running at 17 meters per minute (m/min) for 60 min per day (min/day), 7 days per week (days/wk), for 60 days. Spatial memory was investigated using the Morris water maze test after the 60 days of Alzheimer induction and exercise. Our data demonstrated that spatial memory was indeed impaired in the lesion group compared with the shams. However, exercise notably improved spatial memory in the lesion-exercised rats compared to the lesion-rested group. The present results suggest that spatial memory is impaired under Alzheimer conditions and that treadmill running ameliorates these deficits. Our data suggest that treadmill running contributes to the alleviation of the cognitive decline in AD.

  16. Voluntary Running Attenuates Memory Loss, Decreases Neuropathological Changes and Induces Neurogenesis in a Mouse Model of Alzheimer's Disease.

    Science.gov (United States)

    Tapia-Rojas, Cheril; Aranguiz, Florencia; Varela-Nallar, Lorena; Inestrosa, Nibaldo C

    2016-01-01

    Alzheimer's disease (AD) is a neurodegenerative disorder characterized by loss of memory and cognitive abilities and by the appearance of amyloid plaques composed of the amyloid-β peptide (Aβ) and of neurofibrillary tangles formed of tau protein. It has been suggested that exercise might ameliorate the disease; here, we evaluated the effect of voluntary running on several aspects of AD, including amyloid deposition, tau phosphorylation, inflammatory reaction, neurogenesis and spatial memory, in the double transgenic APPswe/PS1ΔE9 mouse model of AD. We report that voluntary wheel running for 10 weeks decreased the Aβ burden, Thioflavin-S-positive plaques and Aβ oligomers in the hippocampus. In addition, runner APPswe/PS1ΔE9 mice showed less phosphorylated tau protein and decreased astrogliosis, evidenced by lower staining of GFAP. Further, runner APPswe/PS1ΔE9 mice showed an increased number of neurons in the hippocampus and exhibited increased cell proliferation and generation of cells positive for the immature neuronal protein doublecortin, indicating that running increased neurogenesis. Finally, runner APPswe/PS1ΔE9 mice showed improved spatial memory performance in the Morris water maze. Altogether, our findings indicate that in APPswe/PS1ΔE9 mice, voluntary running reduced all the neuropathological hallmarks of AD studied, reduced neuronal loss, increased hippocampal neurogenesis and reduced spatial memory loss. These findings support the idea that voluntary exercise might have therapeutic value in AD.

  17. Running Exercise Alleviates Pain and Promotes Cell Proliferation in a Rat Model of Intervertebral Disc Degeneration

    Directory of Open Access Journals (Sweden)

    Shuo Luan

    2015-01-01

    Full Text Available Chronic low back pain accompanied by intervertebral disc degeneration is a common musculoskeletal disorder. Physical exercise, which is clinically recommended by international guidelines, has proven to be effective for degenerative disc disease (DDD) patients. However, the mechanism underlying the analgesic effects of physical exercise on DDD remains largely unclear. The results of the present study showed that the mechanical withdrawal thresholds of the bilateral hindpaws were significantly decreased beginning on day three after intradiscal complete Freund's adjuvant (CFA) injection, and that daily running exercise remarkably reduced allodynia in the CFA exercise group beginning at day 28 compared to the spontaneous recovery group (controls). The hindpaw withdrawal thresholds of the exercise group returned nearly to baseline at the end of the experiment, but severe pain persisted in the control group. Histological examinations performed on day 70 revealed that running exercise restored the degenerative discs and increased the cell densities of the annulus fibrosus (AF) and nucleus pulposus (NP). Furthermore, immunofluorescence labeling revealed significantly higher numbers of 5-bromo-2-deoxyuridine (BrdU)-positive cells in the exercise group on days 28, 42, 56 and 70, indicating more rapid proliferation compared to the control at the corresponding time points. Taken together, these results suggest that running exercise might alleviate the mechanical allodynia induced by intradiscal CFA injection via disc repair and cell proliferation, which provides new evidence for future clinical use.

  18. Dual-use tools and systematics-aware analysis workflows in the ATLAS Run-2 analysis model

    CERN Document Server

    FARRELL, Steven; The ATLAS collaboration; Calafiura, Paolo; Delsart, Pierre-Antoine; Elsing, Markus; Koeneke, Karsten; Krasznahorkay, Attila; Krumnack, Nils; Lancon, Eric; Lavrijsen, Wim; Laycock, Paul; Lei, Xiaowen; Strandberg, Sara Kristina; Verkerke, Wouter; Vivarelli, Iacopo; Woudstra, Martin

    2015-01-01

    The ATLAS analysis model has been overhauled for the upcoming run of data collection in 2015 at 13 TeV. One key component of this upgrade was the Event Data Model (EDM), which now allows for greater flexibility in the choice of analysis software framework and provides powerful new features that can be exploited by analysis software tools. A second key component of the upgrade is the introduction of a dual-use tool technology, which provides abstract interfaces for analysis software tools to run in either the Athena framework or a ROOT-based framework. The tool interfaces, including a new interface for handling systematic uncertainties, have been standardized for the development of improved analysis workflows and consolidation of high-level analysis tools. This paper will cover the details of the dual-use tool functionality, the systematics interface, and how these features fit into a centrally supported analysis environment.
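    The dual-use idea described above can be illustrated with a minimal sketch: the tool is driven purely through an abstract interface, so either framework's event loop can host it, and systematic variations are applied through a single standardized entry point. This is Python for brevity and every name here is hypothetical; the actual ATLAS dual-use tools are C++ classes with different interfaces.

```python
from abc import ABC, abstractmethod

# Hypothetical, illustrative interface only -- none of these names are
# actual ATLAS APIs; the real dual-use tools are C++.
class IAnalysisTool(ABC):
    @abstractmethod
    def initialize(self) -> None: ...
    @abstractmethod
    def apply_systematic_variation(self, name: str) -> None: ...
    @abstractmethod
    def execute(self, event: dict) -> float: ...

class ToyJetCalibrationTool(IAnalysisTool):
    """Toy tool: scales a jet pT, with optional systematic shifts."""
    def initialize(self):
        self.scale = 1.02  # made-up nominal calibration factor

    def apply_systematic_variation(self, name):
        # a single entry point for all variations, mirroring the idea of
        # a standardized systematics interface
        self.scale = {"JES__1up": 1.04, "JES__1down": 1.00}.get(name, 1.02)

    def execute(self, event):
        return event["jet_pt"] * self.scale

def run_event_loop(tool, events, variation=None):
    """Stand-in for either framework's event loop: both drive the tool
    only through the abstract interface."""
    tool.initialize()
    if variation is not None:
        tool.apply_systematic_variation(variation)
    return [tool.execute(e) for e in events]

events = [{"jet_pt": 50.0}, {"jet_pt": 80.0}]
tool = ToyJetCalibrationTool()
nominal = run_event_loop(tool, events)
jes_up = run_event_loop(tool, events, variation="JES__1up")
print(nominal, jes_up)
```

    Because the loop only sees `IAnalysisTool`, the same tool instance works unchanged in both "frameworks", which is the point of the dual-use design.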

  19. Batch vs continuous-feeding operational mode for the removal of pesticides from agricultural run-off by microalgae systems: A laboratory scale study.

    Science.gov (United States)

    Matamoros, Víctor; Rodríguez, Yolanda

    2016-05-15

    Microalgae-based water treatment technologies have been used in recent years to treat different water effluents, but their effectiveness for removing pesticides from agricultural run-off has not yet been addressed. This paper assesses the effect of microalgae on pesticide removal, as well as the influence of different operation strategies (continuous vs batch feeding). The following pesticides were studied: mecoprop, atrazine, simazine, diazinon, alachlor, chlorfenvinphos, lindane, malathion, pentachlorobenzene, chlorpyrifos, endosulfan and clofibric acid (tracer). 2-L batch reactors and 5-L continuous reactors were spiked to 10 μg L⁻¹ of each pesticide. Additionally, three different hydraulic retention times (HRTs) were assessed (2, 4 and 8 days) in the continuous-feeding reactors. The batch-feeding experiments demonstrated that the presence of microalgae increased the removal efficiency of lindane, alachlor and chlorpyrifos by 50%. The continuous-feeding reactors had higher removal efficiencies than the batch reactors for pentachlorobenzene, chlorpyrifos and lindane. Whilst longer HRTs increased the technology's effectiveness, a low HRT of 2 days was capable of removing malathion, pentachlorobenzene, chlorpyrifos and endosulfan by up to 70%. This study suggests that microalgae-based treatment technologies can be an effective alternative for removing pesticides from agricultural run-off. Copyright © 2016 Elsevier B.V. All rights reserved.

  20. Modelling effects of acid deposition and climate change on soil and run-off chemistry at Risdalsheia, Norway

    Directory of Open Access Journals (Sweden)

    J. P. Mol-Dijkstra

    2001-01-01

    Full Text Available Elevated atmospheric carbon dioxide levels, caused by anthropogenic emissions of carbon dioxide, and higher temperatures may lead to increased plant growth and uptake of nitrogen, but increased temperature may also lead to increased nitrogen mineralisation, causing enhanced nitrogen leaching. The overall result of these counteracting effects is largely unknown. To gain insight into the long-term effects, the geochemical model SMART2 was applied using data from the catchment-scale experiments of the RAIN and CLIMEX projects, conducted on boreal forest ecosystems at Risdalsheia, southern Norway. These unique experiments at the ecosystem scale provide information on the short-term effects and interactions of nitrogen deposition and increased temperature and carbon dioxide on carbon and nitrogen cycling, and especially on the run-off chemistry. To predict changes in soil processes in response to climate change, the model was extended by including the temperature effect on mineralisation, nitrification, denitrification, aluminium dissolution and mineral weathering. The extended model was tested on the two manipulated catchments at Risdalsheia, and long-term effects were evaluated by performing long-term runs. The effects of the climate change treatment, which resulted in increased nitrogen fluxes at both catchments, were slightly overestimated by SMART2. The temperature dependency of mineralisation was simulated adequately, but the temperature effect on nitrification was slightly overestimated. Monitored changes in base cation concentrations and pH were quite well simulated with SMART2. The long-term simulations indicate that the increase in nitrogen run-off is only a temporary effect; in the long term, no effect on total nitrogen leaching is predicted. At higher deposition levels the temporary increase in nitrogen leaching lasts longer than at low deposition. Contrary to nitrogen leaching, temperature increase leads to a permanent decrease in aluminium

  1. Modeling decisions information fusion and aggregation operators

    CERN Document Server

    Torra, Vicenc

    2007-01-01

    Information fusion techniques and aggregation operators produce the most comprehensive, specific datum about an entity using data supplied from different sources, thus enabling us to reduce noise, increase accuracy, summarize and extract information, and make decisions. These techniques are applied in fields such as economics, biology and education, while in computer science they are particularly used in fields such as knowledge-based systems, robotics, and data mining. This book covers the underlying science and application issues related to aggregation operators, focusing on tools used in practical applications that involve numerical information. Starting with detailed introductions to information fusion and integration, measurement and probability theory, fuzzy sets, and functional equations, the authors then cover the following topics in detail: synthesis of judgements, fuzzy measures, weighted means and fuzzy integrals, indices and evaluation methods, model selection, and parameter extraction. The method...

  2. Dark Matter Benchmark Models for Early LHC Run-2 Searches: Report of the ATLAS/CMS Dark Matter Forum

    CERN Document Server

    Abercrombie, Daniel; Akilli, Ece; Alcaraz Maestre, Juan; Allen, Brandon; Alvarez Gonzalez, Barbara; Andrea, Jeremy; Arbey, Alexandre; Azuelos, Georges; Azzi, Patrizia; Backovic, Mihailo; Bai, Yang; Banerjee, Swagato; Beacham, James; Belyaev, Alexander; Boveia, Antonio; Brennan, Amelia Jean; Buchmueller, Oliver; Buckley, Matthew R.; Busoni, Giorgio; Buttignol, Michael; Cacciapaglia, Giacomo; Caputo, Regina; Carpenter, Linda; Filipe Castro, Nuno; Gomez Ceballos, Guillelmo; Cheng, Yangyang; Chou, John Paul; Cortes Gonzalez, Arely; Cowden, Chris; D'Eramo, Francesco; De Cosa, Annapaola; De Gruttola, Michele; De Roeck, Albert; De Simone, Andrea; Deandrea, Aldo; Demiragli, Zeynep; DiFranzo, Anthony; Doglioni, Caterina; du Pree, Tristan; Erbacher, Robin; Erdmann, Johannes; Fischer, Cora; Flaecher, Henning; Fox, Patrick J.; Fuks, Benjamin; Genest, Marie-Helene; Gomber, Bhawna; Goudelis, Andreas; Gramling, Johanna; Gunion, John; Hahn, Kristian; Haisch, Ulrich; Harnik, Roni; Harris, Philip C.; Hoepfner, Kerstin; Hoh, Siew Yan; Hsu, Dylan George; Hsu, Shih-Chieh; Iiyama, Yutaro; Ippolito, Valerio; Jacques, Thomas; Ju, Xiangyang; Kahlhoefer, Felix; Kalogeropoulos, Alexis; Kaplan, Laser Seymour; Kashif, Lashkar; Khoze, Valentin V.; Khurana, Raman; Kotov, Khristian; Kovalskyi, Dmytro; Kulkarni, Suchita; Kunori, Shuichi; Kutzner, Viktor; Lee, Hyun Min; Lee, Sung-Won; Liew, Seng Pei; Lin, Tongyan; Lowette, Steven; Madar, Romain; Malik, Sarah; Maltoni, Fabio; Martinez Perez, Mario; Mattelaer, Olivier; Mawatari, Kentarou; McCabe, Christopher; Megy, Theo; Morgante, Enrico; Mrenna, Stephen; Narayanan, Siddharth M.; Nelson, Andy; Novaes, Sergio F.; Padeken, Klaas Ole; Pani, Priscilla; Papucci, Michele; Paulini, Manfred; Paus, Christoph; Pazzini, Jacopo; Penning, Bjorn; Peskin, Michael E.; Pinna, Deborah; Procura, Massimiliano; Qazi, Shamona F.; Racco, Davide; Re, Emanuele; Riotto, Antonio; Rizzo, Thomas G.; Roehrig, Rainer; Salek, David; Sanchez Pineda, Arturo; Sarkar, Subir; 
Schmidt, Alexander; Schramm, Steven Randolph; Shepherd, William; Singh, Gurpreet; Soffi, Livia; Srimanobhas, Norraphat; Sung, Kevin; Tait, Tim M.P.; Theveneaux-Pelzer, Timothee; Thomas, Marc; Tosi, Mia; Trocino, Daniele; Undleeb, Sonaina; Vichi, Alessandro; Wang, Fuquan; Wang, Lian-Tao; Wang, Ren-Jie; Whallon, Nikola; Worm, Steven; Wu, Mengqing; Wu, Sau Lan; Yang, Hongtao; Yang, Yong; Yu, Shin-Shan; Zaldivar, Bryan; Zanetti, Marco; Zhang, Zhiqing; Zucchetta, Alberto

    2015-01-01

    This document is the final report of the ATLAS-CMS Dark Matter Forum, a forum organized by the ATLAS and CMS collaborations with the participation of experts on theories of Dark Matter, to select a minimal basis set of dark matter simplified models that should support the design of the early LHC Run-2 searches. A prioritized, compact set of benchmark models is proposed, accompanied by studies of the parameter space of these models and a repository of generator implementations. This report also addresses how to apply the Effective Field Theory formalism for collider searches and presents the results of such interpretations.

  3. Dark Matter Benchmark Models for Early LHC Run-2 Searches: Report of the ATLAS/CMS Dark Matter Forum

    OpenAIRE

    Abercrombie, Daniel; Akchurin, Nural; Akilli, Ece; Maestre, Juan Alcaraz; Allen, Brandon; Gonzalez, Barbara Alvarez; Andrea, Jeremy; Arbey, Alexandre; Azuelos, Georges; Azzi, Patrizia; Backović, Mihailo; Bai, Yang; Banerjee, Swagato; Beacham, James; Belyaev, Alexander

    2015-01-01

    This document is the final report of the ATLAS-CMS Dark Matter Forum, a forum organized by the ATLAS and CMS collaborations with the participation of experts on theories of Dark Matter, to select a minimal basis set of dark matter simplified models that should support the design of the early LHC Run-2 searches. A prioritized, compact set of benchmark models is proposed, accompanied by studies of the parameter space of these models and a repository of generator implementations. This report als...

  4. 40 CFR 258.26 - Run-on/run-off control systems.

    Science.gov (United States)

    2010-07-01

    Title 40 (Protection of Environment), Criteria for Municipal Solid Waste Landfills, Operating Criteria, § 258.26 Run-on/run-off control systems: (a) Owners or operators of all MSWLF units must design, construct, and maintain: (1) A run-on control system...

  5. Physical modeling of long-wave run-up mitigation using submerged breakwaters

    Science.gov (United States)

    Lee, Yu-Ting; Wu, Yun-Ta; Hwung, Hwung-Hweng; Yang, Ray-Yeng

    2016-04-01

    Natural hazards due to tsunami inundation inland have been viewed as a crucial issue for the coastal engineering community. The 2004 Indian Ocean tsunami and the 2011 Tohoku earthquake tsunami were caused by mega-scale earthquakes that brought tremendous catastrophe to the disaster regions. It is thus of great importance to develop innovative approaches for the reduction and mitigation of tsunami hazards. In this study, new laboratory-scale experiments have been carried out to investigate the physical process of long-wave propagation over submerged breakwaters built upon a mild slope. A solitary wave is employed to represent the character of a long wave with infinite wavelength and wave period. Our goal is twofold. First, by changing the positions of a single breakwater and of multiple breakwaters upon a mild slope, the optimal locations of the breakwaters can be identified in terms of maximum run-up reduction. Second, using a state-of-the-art measuring technique, Bubble Image Velocimetry, which features non-intrusive, image-based measurement, the wave kinematics in the highly aerated region due to solitary-wave shoaling, breaking and uprush can be quantified. The mitigation of long waves by submerged breakwaters built upon a mild slope can therefore be evaluated not only by imaging run-up and run-down characteristics but also by measuring turbulent velocity fields due to wave breaking. Although the most devastating tsunami hazards cannot be fully mitigated, this study provides quantitative information on which kinds of artificial coastal structures can withstand which levels of wave load.
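    For context, the run-up of a non-breaking solitary wave on a plane beach is commonly estimated with Synolakis' (1987) run-up law, R/d = 2.831 √(cot β) (H/d)^{5/4}, which such experiments are often compared against. The sketch below applies that standard formula; the 1:20 slope is an assumption for illustration, not a value taken from this study.

```python
import math

def solitary_runup(H_over_d: float, beta_deg: float, d: float = 1.0) -> float:
    """Synolakis (1987) run-up law for non-breaking solitary waves:
    R = 2.831 * sqrt(cot(beta)) * (H/d)**1.25 * d."""
    cot_beta = 1.0 / math.tan(math.radians(beta_deg))
    return 2.831 * math.sqrt(cot_beta) * H_over_d ** 1.25 * d

# Illustrative only: an assumed 1:20 beach slope (beta ~ 2.86 degrees).
for H in (0.1, 0.3, 0.5):
    print(f"H/d = {H:.1f} -> R/d = {solitary_runup(H, beta_deg=2.86):.3f}")
```

    The steep growth with H/d (exponent 5/4) is one reason even modest run-up reduction by submerged breakwaters matters for inundation.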

  6. Classical running and symmetry breaking in models with two extra dimensions

    CERN Document Server

    Papineau, C

    2007-01-01

    We consider a codimension-two scalar theory with a brane-localised Higgs-type potential. The six-dimensional field obeys Dirichlet boundary conditions on the boundary of the compact transverse space. The regularisation of the brane singularity yields renormalisation-group evolution for the localised couplings already at the classical level. In particular, a tachyonic mass term grows at large distances and hits a Landau pole. We exhibit a peculiar value of the bare coupling such that the running mass parameter becomes large precisely at the compactification scale, and the effective four-dimensional zero mode is massless. Above the critical coupling, spontaneous symmetry breaking occurs and there is a very light state.

  7. Up and running with AutoCAD 2014 2D and 3D drawing and modeling

    CERN Document Server

    Gindis, Elliot

    2013-01-01

    Get "Up and Running" with AutoCAD using Gindis's combination of step-by-step instruction, examples, and insightful explanations. The emphasis from the beginning is on core concepts and practical application of AutoCAD in architecture, engineering and design. Equally useful in instructor-led classroom training, self-study, or as a professional reference, the book is written with the user in mind by a long-time AutoCAD professional and instructor based on what works in the industry and the classroom. Strips away complexities, both real and perceived, and reduces AutoCAD t

  8. Integrating Geo-Spatial Data for Regional Landslide Susceptibility Modeling in Consideration of Run-Out Signature

    Science.gov (United States)

    Lai, J.-S.; Tsai, F.; Chiang, S.-H.

    2016-06-01

    This study implements a data-mining algorithm, the random forests classifier, with geo-spatial data to construct a regional, rainfall-induced landslide susceptibility model. The developed model also takes into account landslide regions (source, non-occurrence and run-out signatures) from the original landslide inventory in order to increase the reliability of the susceptibility modelling. A total of ten causative factors were collected and used in this study: aspect, curvature, elevation, slope, faults, geology, NDVI (Normalized Difference Vegetation Index), rivers, roads and soil data. This study transforms the landslide inventory and the vector-based causative factors into a pixel-based format so that they can be overlaid with the other raster data when constructing the random forests based model. This study also uses both original and edited topographic data in the analysis to understand their impact on the susceptibility modelling. Experimental results demonstrate that after identifying the run-out signatures, the overall accuracy and Kappa coefficient reach more than 85% and 0.8, respectively. In addition, correcting unreasonable topographic features of the digital terrain model also produces more reliable modelling results.
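    The pixel-based workflow described in the abstract (one row per pixel, one column per causative factor, labels for source / non-occurrence / run-out) maps directly onto a standard random forests fit. The sketch below uses scikit-learn on synthetic stand-in data; the feature layout and thresholds are illustrative assumptions, not the study's dataset.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, cohen_kappa_score

rng = np.random.default_rng(0)

# Toy stand-in for the pixel table: one row per pixel, one column per
# causative factor (slope, elevation, NDVI, ...); labels 0 = non-occurrence,
# 1 = landslide source, 2 = run-out.  Synthetic, illustrative data only.
n = 3000
X = rng.normal(size=(n, 10))
y = (X[:, 0] + 0.8 * X[:, 1] > 0.5).astype(int) + (X[:, 2] > 1.5).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(X_tr, y_tr)
pred = clf.predict(X_te)
print("accuracy:", accuracy_score(y_te, pred))
print("kappa:   ", cohen_kappa_score(y_te, pred))
```

    Reporting both overall accuracy and the Kappa coefficient, as the study does, guards against accuracy being inflated by the dominant non-occurrence class.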

  9. The Tourism Market of Australia – A Model of Managerial Performance in Running an Exotic Tourist Destination

    Directory of Open Access Journals (Sweden)

    Mihai Daniela

    2012-12-01

    Full Text Available The purpose of this paper is to illustrate the performance management that government decision-making bodies apply in organizing tourism in Australia. The proposed quantitative indicators evaluate the managerial performance in running this system: macroeconomic indicators of domestic and international tourist flows and their impact on the Australian economy. The conclusion is that the national tourism development strategy adopted in Australia, through its objectives and identified strategic options, offers the potential to enhance the competitiveness of the tourism industry. The interim results of its implementation demonstrate its effectiveness: in Australia, tourism has become a real driver of socioeconomic progress, and thus a model of performance management in running a potentially valuable tourist destination.

  10. Modeling Fall Run Chinook Salmon Populations in the San Joaquin River Basin Using an Artificial Neural Network

    Science.gov (United States)

    Keyantash, J.; Quinn, N. W.; Hidalgo, H. G.; Dracup, J. A.

    2002-12-01

    The number of chinook salmon returning to spawn during the fall run (September-November) was separately modeled for three San Joaquin River tributaries (the Stanislaus, Tuolumne, and Merced Rivers) to determine the sensitivity of salmon populations to hydrologic alterations associated with potential climate change. The modeling was accomplished using a feed-forward artificial neural network (ANN) with error backpropagation. Inputs to the ANN included modeled monthly river temperature and streamflow data for each tributary, lagged by multiple years to include the effects of antecedent environmental conditions upon salmon populations throughout their life histories. Temperature and streamflow conditions at downstream locations in each tributary were computed using the California Dept. of Water Resources' DSM-2 model. Inputs to the DSM-2 model originated from regional climate modeling under a CO2 doubling scenario. Annual population data for adult chinook salmon (1951-present) were provided by the California Dept. of Fish and Game and were used for supervised training of the ANN. It was determined that the Stanislaus, Tuolumne and Merced River chinook runs could be impacted by alterations to the hydroclimatology of the San Joaquin basin.
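    A feed-forward ANN with error backpropagation of the kind described can be sketched in a few lines of NumPy. All data below are synthetic stand-ins; the real study used DSM-2 flow/temperature inputs and Dept. of Fish and Game population counts, and its network architecture is not specified in the abstract.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic stand-in: inputs mimic (lagged) monthly flow/temperature
# features, target is a smooth scalar "escapement" signal.
X = rng.uniform(-1, 1, size=(200, 6))
y = np.tanh(X @ rng.normal(size=(6, 1)))

# One hidden layer, tanh activations, trained by plain gradient descent
# with error backpropagation.
W1 = rng.normal(scale=0.5, size=(6, 8)); b1 = np.zeros(8)
W2 = rng.normal(scale=0.5, size=(8, 1)); b2 = np.zeros(1)
lr = 0.05

def forward(X):
    h = np.tanh(X @ W1 + b1)          # hidden layer
    return h, h @ W2 + b2             # linear output

_, out0 = forward(X)
loss0 = float(np.mean((out0 - y) ** 2))
for _ in range(500):
    h, out = forward(X)
    err = out - y                     # output-layer error
    gW2 = h.T @ err / len(X); gb2 = err.mean(0)
    dh = (err @ W2.T) * (1 - h ** 2)  # backpropagate through tanh
    gW1 = X.T @ dh / len(X); gb1 = dh.mean(0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1
_, out1 = forward(X)
loss1 = float(np.mean((out1 - y) ** 2))
print(f"MSE before/after training: {loss0:.4f} -> {loss1:.4f}")
```

    The multi-year lagging of inputs mentioned in the abstract would simply widen the input vector, one column per (variable, lag) pair.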

  11. Spectral Running and Non-Gaussianity from Slow-Roll Inflation in Generalised Two--Field Models

    CERN Document Server

    Choi, Ki-Young; van de Bruck, Carsten

    2008-01-01

    Theories beyond the standard model, such as string theory, motivate low-energy effective field theories with several scalar fields which are coupled not only through a potential but also through their kinetic terms. For such theories we derive the general formulae for the running of the spectral indices of the adiabatic, isocurvature and correlation spectra in the case of two-field inflation. We also compute the expected non-Gaussianity in such models for specific forms of the potentials. We find that the coupling has little impact on the level of non-Gaussianity during inflation.
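    For reference, the spectral index and its running are defined from the curvature power spectrum in the standard way; the single-field slow-roll expressions quoted here are the textbook leading-order results, shown only for comparison with the generalised two-field formulae derived in the paper:

```latex
n_s - 1 \equiv \frac{d\ln \mathcal{P}_\zeta}{d\ln k} \simeq 2\eta - 6\epsilon,
\qquad
\alpha_s \equiv \frac{d n_s}{d\ln k} \simeq 16\,\epsilon\eta - 24\,\epsilon^2 - 2\,\xi^2,
```

    where $\epsilon$, $\eta$ and $\xi^2$ are the usual slow-roll parameters; the two-field case adds analogous running formulae for the isocurvature and correlation spectra.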

  12. Operational modal analysis by updating autoregressive model

    Science.gov (United States)

    Vu, V. H.; Thomas, M.; Lakis, A. A.; Marcouiller, L.

    2011-04-01

    This paper presents improvements of a multivariable autoregressive (AR) model for applications in operational modal analysis, considering simultaneously the temporal response data of multi-channel measurements. The parameters are estimated by the least squares method via the implementation of the QR factorization. A new noise-rate-based factor called the Noise rate Order Factor (NOF) is introduced for use in the effective selection of model order and noise rate estimation. For the selection of structural modes, an orderwise criterion called the Order Modal Assurance Criterion (OMAC) is used, based on the correlation of mode shapes computed from two successive orders. Specifically, the algorithm is updated with respect to model order, starting from a small value, to produce a cost-effective computation. Furthermore, the confidence intervals of each natural frequency, damping ratio and mode shape are also computed and evaluated with respect to model order and noise rate. The method is thus very effective for identifying modal parameters from ambient vibrations, as encountered in modern output-only modal analysis. Simulations and discussions on a steel plate structure are presented, and the experimental results show good agreement with the finite element analysis.
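    The core of AR-based modal identification can be sketched as follows: fit AR coefficients to the response by least squares via a QR factorization, then read natural frequency and damping off the eigenvalues of the companion matrix. This is a generic single-channel illustration, not the paper's multivariable NOF/OMAC algorithm.

```python
import numpy as np

# Simulate the free decay of one mode: 5 Hz, 2% damping.
dt = 0.01
f_true, zeta_true = 5.0, 0.02
w = 2 * np.pi * f_true
wd = w * np.sqrt(1 - zeta_true ** 2)
t = np.arange(2000) * dt
x = np.exp(-zeta_true * w * t) * np.cos(wd * t)

p = 2                               # AR order: one mode -> AR(2)
b = x[p:]                           # regression target x[n]
A = np.column_stack([x[p - 1 - j: len(x) - 1 - j] for j in range(p)])

Q, R = np.linalg.qr(A)              # least squares via QR factorization
coef = np.linalg.solve(R, Q.T @ b)

C = np.zeros((p, p))                # companion matrix of the AR polynomial
C[0, :] = coef
C[1:, :-1] = np.eye(p - 1)

lam = np.linalg.eigvals(C)
s = np.log(lam.astype(complex)) / dt   # discrete -> continuous-time poles
s = s[np.imag(s) > 0][0]               # keep one of the conjugate pair
f_est = abs(s) / (2 * np.pi)           # natural frequency [Hz]
zeta_est = -s.real / abs(s)            # damping ratio
print(f"f = {f_est:.3f} Hz, zeta = {zeta_est:.4f}")
```

    The order-updating scheme of the paper would repeat this fit for increasing p, using the NOF and OMAC criteria to decide which poles are structural modes rather than noise.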

  13. Use of an operational model evaluation system for model intercomparison

    Energy Technology Data Exchange (ETDEWEB)

    Foster, K. T., LLNL

    1998-03-01

    The Atmospheric Release Advisory Capability (ARAC) is a centralized emergency response system used to assess the impact from atmospheric releases of hazardous materials. As part of an on-going development program, new three-dimensional diagnostic windfield and Lagrangian particle dispersion models will soon replace ARAC's current operational windfield and dispersion codes. A prototype model performance evaluation system has been implemented to facilitate the study of the capabilities and performance of early development versions of these new models relative to ARAC's current operational codes. This system provides tools for both objective statistical analysis using common performance measures and for more subjective visualization of the temporal and spatial relationships of model results relative to field measurements. Supporting this system is a database of processed field experiment data (source terms and meteorological and tracer measurements) from over 100 individual tracer releases.
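    Two performance measures typical of dispersion-model evaluation are fractional bias (FB) and the fraction of predictions within a factor of two of observations (FAC2). The abstract does not name which measures ARAC's system uses, so the sketch below is an assumption, showing the usual definitions:

```python
import numpy as np

def fractional_bias(obs, pred):
    """FB = 2*(mean(obs) - mean(pred)) / (mean(obs) + mean(pred));
    0 means no bias, and values range between -2 and +2."""
    obs, pred = np.asarray(obs, float), np.asarray(pred, float)
    return 2.0 * (obs.mean() - pred.mean()) / (obs.mean() + pred.mean())

def fac2(obs, pred):
    """Fraction of predictions within a factor of two of observations."""
    obs, pred = np.asarray(obs, float), np.asarray(pred, float)
    ratio = pred / obs
    return float(np.mean((ratio >= 0.5) & (ratio <= 2.0)))

# Toy paired tracer concentrations (observed vs modeled), made up here.
obs = [1.0, 2.0, 4.0, 8.0]
pred = [1.2, 1.8, 9.0, 8.5]
print("FB   =", fractional_bias(obs, pred))
print("FAC2 =", fac2(obs, pred))
```

    Such paired-in-space-and-time statistics are what the processed field-experiment database (source terms, meteorology, tracer samples) makes possible.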

  14. A mechanistic model on the role of "radially-running" collagen fibers on dissection properties of human ascending thoracic aorta.

    Science.gov (United States)

    Pal, Siladitya; Tsamis, Alkiviadis; Pasta, Salvatore; D'Amore, Antonio; Gleason, Thomas G; Vorp, David A; Maiti, Spandan

    2014-03-21

    Aortic dissection (AoD) is a common condition that often leads to life-threatening cardiovascular emergency. From a biomechanics viewpoint, AoD involves failure of load-bearing microstructural components of the aortic wall, mainly elastin and collagen fibers. Delamination strength of the aortic wall depends on the load-bearing capacity and local micro-architecture of these fibers, which may vary with age, disease and aortic location. Therefore, quantifying the role of fiber micro-architecture on the delamination strength of the aortic wall may lead to improved understanding of AoD. We present an experimentally-driven modeling paradigm towards this goal. Specifically, we utilize collagen fiber micro-architecture, obtained in a parallel study from multi-photon microscopy, in a predictive mechanistic framework to characterize the delamination strength. We then validate our model against peel test experiments on human aortic strips and utilize the model to predict the delamination strength of separate aortic strips and compare with experimental findings. We observe that the number density and failure energy of the radially-running collagen fibers control the peel strength. Furthermore, our model suggests that the lower delamination strength previously found for the circumferential direction in human aorta is related to a lower number density of radially-running collagen fibers in that direction. Our model sets the stage for an expanded future study that could predict AoD propagation in patient-specific aortic geometries and better understand factors that may influence propensity for occurrence.

  15. Running Club

    CERN Multimedia

    Running Club

    2011-01-01

    The cross country running season has started well this autumn with two events: the traditional CERN Road Race organized by the Running Club, which took place on Tuesday 5th October, followed by the ‘Cross Interentreprises’, a team event at the Evaux Sports Center, which took place on Saturday 8th October. The participation at the CERN Road Race was slightly down on last year, with 65 runners, however the participants maintained the tradition of a competitive yet friendly atmosphere. An ample supply of refreshments before the prize giving was appreciated by all after the race. Many thanks to all the runners and volunteers who ensured another successful race. The results can be found here: https://espace.cern.ch/Running-Club/default.aspx CERN participated successfully at the cross interentreprises with very good results. The teams succeeded in obtaining 2nd and 6th place in the Mens category, and 2nd place in the Mixed category. Congratulations to all. See results here: http://www.c...

  16. REAL STOCK PRICES AND THE LONG-RUN MONEY DEMAND FUNCTION IN MALAYSIA: Evidence from Error Correction Model

    Directory of Open Access Journals (Sweden)

    Naziruddin Abdullah

    2004-06-01

    Full Text Available This study adopts the error correction model to empirically investigate the role of real stock prices in the long-run money demand in the Malaysian financial or money market for the period 1977: Q1-1997: Q2. Specifically, an attempt is made to check whether the real narrow money (M1/P) is cointegrated with the selected variables like industrial production index (IPI), one-year T-Bill rates (TB12), and real stock prices (RSP). If a cointegration between the variables, i.e., the dependent and independent variables, is found to be the case, it may imply that there exists a long-run co-movement among these variables in the Malaysian money market. From the empirical results it is found that the cointegration between money demand and real stock prices (RSP) is positive, implying that in the long run there is a positive association between real stock prices (RSP) and demand for real narrow money (M1/P). The policy implication that can be extracted from this study is that an increase in stock prices is likely to necessitate an expansionary monetary policy to prevent nominal income or inflation target from undershooting.
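The two-step error-correction approach such studies adopt can be sketched as follows (a generic Engle-Granger-style illustration in NumPy, not the author's estimation code; the variable names and the simulated data are hypothetical):

```python
import numpy as np

def ols(X, y):
    # Ordinary least squares with an intercept column prepended.
    X = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta, y - X @ beta

def error_correction(y, X):
    """Two-step sketch: (1) long-run (cointegrating) regression in levels;
    (2) short-run regression of first differences on the lagged long-run
    residual, i.e. the error-correction term."""
    beta_lr, resid = ols(X, y)                  # step 1: long-run relation
    dy = np.diff(y)
    dX = np.diff(X, axis=0)
    ect = resid[:-1]                            # lagged equilibrium error
    beta_sr, _ = ols(np.column_stack([dX, ect]), dy)
    return beta_lr, beta_sr
```

With a trending regressor and a stationary disturbance the long-run regression recovers the equilibrium relation, while the coefficient on the error-correction term measures the speed of adjustment back to it.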

  17. A secure operational model for mobile payments.

    Science.gov (United States)

    Chang, Tao-Ku

    2014-01-01

    Instead of paying by cash, check, or credit cards, customers can now also use their mobile devices to pay for a wide range of services and both digital and physical goods. However, customers' security concerns are a major barrier to the broad adoption and use of mobile payments. In this paper we present the design of a secure operational model for mobile payments in which access control is based on a service-oriented architecture. A customer uses his/her mobile device to get authorization from a remote server and generate a two-dimensional barcode as the payment certificate. This payment certificate has a time limit and can be used once only. The system also provides the ability to remotely lock and disable the mobile payment service.
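The certificate scheme described — a server-authorized, time-limited, single-use payment certificate rendered as a two-dimensional barcode — might be sketched as follows (an illustrative design only, not the paper's protocol; the key handling, field names, and in-memory nonce store are hypothetical):

```python
import hmac
import hashlib
import json
import time
import secrets

SERVER_KEY = b"demo-secret"     # held by the authorization server (illustrative)
_used_nonces = set()            # server-side single-use (replay) protection

def issue_certificate(account, amount, ttl=120):
    """Server side: sign a short-lived, single-use payment certificate.
    The JSON body would be encoded as a 2-D barcode on the handset."""
    payload = {"acct": account, "amt": amount,
               "exp": int(time.time()) + ttl,
               "nonce": secrets.token_hex(8)}
    body = json.dumps(payload, sort_keys=True).encode()
    tag = hmac.new(SERVER_KEY, body, hashlib.sha256).hexdigest()
    return {"body": body.decode(), "tag": tag}

def redeem(cert):
    """Merchant/server side: verify signature, expiry, and single use."""
    body = cert["body"].encode()
    expected = hmac.new(SERVER_KEY, body, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, cert["tag"]):
        return False
    payload = json.loads(body)
    if time.time() > payload["exp"] or payload["nonce"] in _used_nonces:
        return False
    _used_nonces.add(payload["nonce"])
    return True
```

The nonce set enforces the "use once only" property, and the `exp` field the time limit; a production design would also need the remote lock/disable channel the paper describes.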

  18. A Secure Operational Model for Mobile Payments

    Directory of Open Access Journals (Sweden)

    Tao-Ku Chang

    2014-01-01

    Full Text Available Instead of paying by cash, check, or credit cards, customers can now also use their mobile devices to pay for a wide range of services and both digital and physical goods. However, customers’ security concerns are a major barrier to the broad adoption and use of mobile payments. In this paper we present the design of a secure operational model for mobile payments in which access control is based on a service-oriented architecture. A customer uses his/her mobile device to get authorization from a remote server and generate a two-dimensional barcode as the payment certificate. This payment certificate has a time limit and can be used once only. The system also provides the ability to remotely lock and disable the mobile payment service.

  19. The 14 TeV LHC Takes Aim at SUSY: A No-Scale Supergravity Model for LHC Run 2

    CERN Document Server

    Li, Tianjun; Nanopoulos, Dimitri V; Walker, Joel W

    2015-01-01

    The Supergravity model named No-Scale ${\cal F}$-$SU(5)$, which is based upon the flipped $SU(5)$ Grand Unified Theory (GUT) with additional TeV-scale vector-like flippon multiplets, has been partially probed during the LHC Run 1 at 7-8 TeV, though the majority of its model space remains viable and should be accessible by the 13-14 TeV LHC during Run 2. The model framework possesses the rather unique capacity to provide a light CP-even Higgs boson mass in the favored 124-126 GeV window while simultaneously retaining a testably light supersymmetry (SUSY) spectrum. We summarize the outlook for No-Scale ${\cal F}$-$SU(5)$ at the 13-14 TeV LHC and review a promising methodology for the discrimination of its long-chain cascade decay signature. We further show that proportional dependence of all model scales upon the unified gaugino mass $M_{1/2}$ minimizes electroweak fine-tuning, allowing the $Z$-boson mass $M_Z$ to be expressed as an explicit function of $M_{1/2}$, $M_Z^2 = M_Z^2 (M_{1/2}^2)$, with implicit depe...

  20. A numerical study of tsunami wave impact and run-up on coastal cliffs using a CIP-based model

    Science.gov (United States)

    Zhao, Xizeng; Chen, Yong; Huang, Zhenhua; Hu, Zijun; Gao, Yangyang

    2017-05-01

    There is a general lack of understanding of tsunami wave interaction with complex geographies, especially the process of inundation. Numerical simulations are performed to understand the effects of several factors on tsunami wave impact and run-up in the presence of gentle submarine slopes and coastal cliffs, using an in-house code, a constrained interpolation profile (CIP)-based model. The model employs a high-order finite difference method, the CIP method, as the flow solver; utilizes a VOF-type method, the tangent of hyperbola for interface capturing/slope weighting (THINC/SW) scheme, to capture the free surface; and treats the solid boundary by an immersed boundary method. A series of incident waves are arranged to interact with varying coastal geographies. Numerical results are compared with experimental data and good agreement is obtained. The influences of the gentle submarine slope, the coastal cliff and the incident wave height are discussed. It is found that the tsunami amplification factor, which varies with the incident wave, is affected by the gradient of the cliff slope, with a critical value of about 45°. The run-up on a toe-erosion cliff is smaller than that on a normal cliff. The run-up is also related to the length of the gentle submarine slope, with a critical value of about 2.292 m in the present model for most cases. The impact pressure on the cliff is extremely large and concentrated, and the backflow effect is non-negligible. Results of our work are highly precise and helpful for inverting the tsunami source and forecasting disasters.

  1. Parallel runs of a large air pollution model on a grid of Sun computers

    DEFF Research Database (Denmark)

    Alexandrov, V.N.; Owczarz, W.; Thomsen, Per Grove

    2004-01-01

    Large -scale air pollution models can successfully be used in different environmental studies. These models are described mathematically by systems of partial differential equations. Splitting procedures followed by discretization of the spatial derivatives leads to several large systems of ordin...

  2. Modelling Energy Loss Mechanisms and a Determination of the Electron Energy Scale for the CDF Run II W Mass Measurement

    Energy Technology Data Exchange (ETDEWEB)

    Riddick, Thomas [Univ. College London, Bloomsbury (United Kingdom)

    2012-06-15

    The calibration of the calorimeter energy scale is vital to measuring the mass of the W boson at CDF Run II. For the second measurement of the W boson mass at CDF Run II, two independent simulations were developed. This thesis presents a detailed description of the modification and validation of Bremsstrahlung and pair production modelling in one of these simulations, UCL Fast Simulation, comparing to both geant4 and real data where appropriate. The total systematic uncertainty on the measurement of the W boson mass in the W → eν_e channel from residual inaccuracies in Bremsstrahlung modelling is estimated as 6.2 ± 3.2 MeV/c2, and the total systematic uncertainty from residual inaccuracies in pair production modelling is estimated as 2.8 ± 2.7 MeV/c2. Two independent methods are used to calibrate the calorimeter energy scale in UCL Fast Simulation; the results of these two methods are compared to produce a measurement of the Z boson mass as a cross-check on the accuracy of the simulation.

  3. Modeling and Design of Container Terminal Operations

    NARCIS (Netherlands)

    D. Roy (Debjit); M.B.M. de Koster (René)

    2014-01-01

    Design of container terminal operations is complex because multiple factors affect the operational performance. These factors include: topological constraints, a large number of design parameters and settings, and stochastic interactions that interplay among the quayside, vehicle trans

  4. Coupled models of heat transfer and phase transformation for the run-out table in hot rolling

    Institute of Scientific and Technical Information of China (English)

    Shui-xuan CHEN; Jun ZOU; Xin FU

    2008-01-01

    Mathematical models have been proposed to simulate the thermal and metallurgical behaviors of the strip occurring on the run-out table (ROT) in a hot strip mill. A variational method is utilized for the discretization of the governing transient conduction-convection equation, with heat transfer coefficients adaptively determined by the actual mill data. To consider the thermal effect of phase transformation during cooling, a constitutive equation describing the austenite decomposition kinetics of steel in the air and water cooling zones is coupled with the heat transfer model. As the basic required inputs in the numerical simulations, thermal material properties are experimentally measured for three carbon steels, and the least squares method is used to statistically derive regression models for the properties, including specific heat and thermal conductivity. The numerical simulation and experimental results show that the setup accuracy of the temperature prediction system of the ROT is effectively improved.
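The heat-transfer side of such a model can be illustrated with a simple explicit finite-difference scheme for through-thickness strip cooling under a convective boundary (a sketch only; the paper uses a variational discretization with coefficients adapted to mill data, and the material constants below are illustrative, not the paper's values):

```python
import numpy as np

def cool_strip(T0, h, t_end, thickness=0.004, n=21,
               k=30.0, rho=7850.0, cp=650.0, T_amb=25.0):
    """Explicit FD sketch of 1-D through-thickness cooling of a strip with
    convective boundaries of heat transfer coefficient h [W/m^2.K].
    T0 in deg C; returns the temperature profile at time t_end [s]."""
    alpha = k / (rho * cp)                  # thermal diffusivity
    dx = thickness / (n - 1)
    dt = 0.2 * dx * dx / alpha              # well under the stability limit
    lam = alpha * dt / dx**2
    T = np.full(n, T0, dtype=float)
    t = 0.0
    while t < t_end:
        Tn = T.copy()
        # Interior nodes: standard explicit conduction update.
        Tn[1:-1] = T[1:-1] + lam * (T[2:] - 2 * T[1:-1] + T[:-2])
        # Convective surfaces, ghost-node formulation (both faces cooled).
        for i, j in ((0, 1), (-1, -2)):
            Tn[i] = T[i] + lam * (2 * T[j] - 2 * T[i]
                                  - 2 * dx * h / k * (T[i] - T_amb))
        T, t = Tn, t + dt
    return T
```

Phase-transformation heat release would enter as a source term in the interior update; the sketch omits it to keep the conduction-convection coupling visible.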

  5. Vehicle-Scheduling Model for Operation Based on Single-Depot

    Directory of Open Access Journals (Sweden)

    Jing Teng

    2015-01-01

    Full Text Available Centralized assignment of buses running between multiple lines can save operation costs for a transit agency. As more large transit terminals serving multiple bus lines are established, coordinating the operation of these lines' vehicles becomes more economical and attractive. This paper proposes a vehicle-scheduling model for multiple lines that share a vehicle pool and operate from the same terminal. The optimization goal is to minimize the number of vehicles while reducing invalid (non-revenue) operation time, under the constraints of the timetable schemes and the matching time for a vehicle crossing between two lines. A case study in Ningbo, China, compared the performance of the cross-line schedules with the original schedules, which assigned vehicles within their respective lines. The optimized schedules reduced the number of vehicles needed by 7.14% while meeting the timetable schemes of all bus lines, indicating that the proposed model is suitable for operational practice.
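The core fleet-minimization idea — reusing a vehicle on another line's trip once it has arrived at the shared terminal and the matching time has elapsed — can be sketched with a greedy heap-based assignment (an illustration, not the paper's model, which optimizes against the full timetable and cross-line constraints; trip times and the matching time are hypothetical):

```python
import heapq

def min_fleet(trips, match_time=5):
    """Greedy sketch: minimum vehicles needed to cover timetabled trips of
    lines sharing one terminal. trips = [(depart, arrive), ...] in minutes;
    a vehicle may take a new trip match_time minutes after arriving."""
    ready = []      # min-heap of times at which some vehicle becomes free
    fleet = 0
    for dep, arr in sorted(trips):          # process departures in order
        if ready and ready[0] <= dep:
            heapq.heappop(ready)            # reuse an idle vehicle
        else:
            fleet += 1                      # no vehicle free: add one
        heapq.heappush(ready, arr + match_time)
    return fleet
```

For example, three trips (0-30), (10-40) and (40-70) with a 5-minute matching time need two vehicles: the first vehicle, free at 35, covers the 40:00 departure.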

  6. Fuzzy rule-based macroinvertebrate habitat suitability models for running waters

    NARCIS (Netherlands)

    Broekhoven, Van E.; Adriaenssens, V.; Baets, De B.; Verdonschot, P.F.M.

    2006-01-01

    A fuzzy rule-based approach was applied to a macroinvertebrate habitat suitability modelling problem. The model design was based on a knowledge base summarising the preferences and tolerances of 86 macroinvertebrate species for four variables describing river sites in springs up to small rivers in t

  7. Damage Propagation Modeling for Aircraft Engine Run-to-Failure Simulation

    Data.gov (United States)

    National Aeronautics and Space Administration — This paper describes how damage propagation can be modeled within the modules of aircraft gas turbine engines. To that end, response surfaces of all sensors are...

  8. Impact of treadmill running and sex on hippocampal neurogenesis in the mouse model of amyotrophic lateral sclerosis.

    Directory of Open Access Journals (Sweden)

    Xiaoxing Ma

    Full Text Available Hippocampal neurogenesis in the subgranular zone (SGZ) of dentate gyrus (DG) occurs throughout life and is regulated by pathological and physiological processes. The role of oxidative stress in hippocampal neurogenesis and its response to exercise or neurodegenerative diseases remains controversial. The present study was designed to investigate the impact of oxidative stress, treadmill exercise and sex on hippocampal neurogenesis in a murine model of heightened oxidative stress (G93A mice). G93A and wild type (WT) mice were randomized to a treadmill running (EX) or a sedentary (SED) group for 1 or 4 wk. Immunohistochemistry was used to detect bromodeoxyuridine (BrdU) labeled proliferating cells, surviving cells, and their phenotype, as well as for determination of oxidative stress (3-NT; 8-OHdG). BDNF and IGF1 mRNA expression was assessed by in situ hybridization. Results showed that: (1) G93A-SED mice had greater hippocampal neurogenesis, BDNF mRNA, and 3-NT, as compared to WT-SED mice. (2) Treadmill running promoted hippocampal neurogenesis and BDNF mRNA content and lowered DNA oxidative damage (8-OHdG) in WT mice. (3) Male G93A mice showed significantly higher cell proliferation but a lower level of survival vs. female G93A mice. We conclude that G93A mice show higher hippocampal neurogenesis, in association with higher BDNF expression, yet running did not further enhance these phenomena in G93A mice, probably due to a 'ceiling effect' of already heightened basal levels of hippocampal neurogenesis and BDNF expression.

  9. Comparing the performance of SIMD computers by running large air pollution models

    DEFF Research Database (Denmark)

    Brown, J.; Hansen, Per Christian; Wasniewski, J.

    1996-01-01

    To compare the performance and use of three massively parallel SIMD computers, we implemented a large air pollution model on these computers. Using a realistic large-scale model, we gained detailed insight about the performance of the computers involved when used to solve large-scale scientific...... problems that involve several types of numerical computations. The computers used in our study are the Connection Machines CM-200 and CM-5, and the MasPar MP-2216...

  10. runmlwin : A Program to Run the MLwiN Multilevel Modeling Software from within Stata

    Directory of Open Access Journals (Sweden)

    George Leckie

    2013-03-01

    Full Text Available We illustrate how to fit multilevel models in the MLwiN package seamlessly from within Stata using the Stata program runmlwin. We argue that using MLwiN and Stata in combination allows researchers to capitalize on the best features of both packages. We provide examples of how to use runmlwin to fit continuous, binary, ordinal, nominal and mixed response multilevel models by both maximum likelihood and Markov chain Monte Carlo estimation.

  11. A multi-state model for wind farms considering operational outage probability

    DEFF Research Database (Denmark)

    Cheng, Lin; Liu, Manjun; Sun, Yuanzhang;

    2013-01-01

    As one of the most important renewable energy resources, wind power has drawn much attention in recent years. The stochastic characteristics of wind speed lead to generation output uncertainties of the wind energy conversion system (WECS) and affect power system reliability, especially at high wind power penetration levels. Therefore, a more comprehensive analysis of WECS as well as an appropriate reliability assessment model is essential for maintaining the reliable operation of power systems. In this paper, the impact of wind turbine outage probability on system reliability is first modeled by considering the following factors: running time, operating environment, operating conditions, and wind speed fluctuations. A multi-state model for wind farms is also established. Numerical results illustrate that the proposed model can be well applied to power system reliability assessment...
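A minimal building block of such a multi-state wind farm model is the capacity outage probability table for n identical turbines with a given operational outage probability (a textbook binomial sketch that assumes independent outages; the paper's model refines the per-turbine probability with running time, environment and wind-speed effects):

```python
from math import comb

def outage_table(n_turbines, q):
    """Binomial sketch of a wind-farm multi-state model: probability that
    exactly k of n identical turbines are on forced outage, given an
    operational outage probability q per turbine (independence assumed)."""
    return {k: comb(n_turbines, k) * q**k * (1.0 - q) ** (n_turbines - k)
            for k in range(n_turbines + 1)}
```

Each state k maps to an available farm capacity of (n - k) turbine ratings, so the table feeds directly into a capacity-based reliability assessment.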

  12. A description of the FAMOUS (version XDBUA climate model and control run

    Directory of Open Access Journals (Sweden)

    A. Osprey

    2008-12-01

    Full Text Available FAMOUS is an ocean-atmosphere general circulation model of low resolution, capable of simulating approximately 120 years of model climate per wallclock day using current high performance computing facilities. It uses most of the same code as HadCM3, a widely used climate model of higher resolution and computational cost, and has been tuned to reproduce the same climate reasonably well. FAMOUS is useful for climate simulations where the computational cost makes the application of HadCM3 unfeasible, either because of the length of simulation or the size of the ensemble desired. We document a number of scientific and technical improvements to the original version of FAMOUS. These improvements include changes to the parameterisations of ozone and sea-ice which alleviate a significant cold bias from high northern latitudes and the upper troposphere, and the elimination of volume-averaged drifts in ocean tracers. A simple model of the marine carbon cycle has also been included. A particular goal of FAMOUS is to conduct millennial-scale paleoclimate simulations of Quaternary ice ages; to this end, a number of useful changes to the model infrastructure have been made.

  13. Addressing Thermal Model Run Time Concerns of the Wide Field Infrared Survey Telescope using Astrophysics Focused Telescope Assets (WFIRST-AFTA)

    Science.gov (United States)

    Peabody, Hume; Guerrero, Sergio; Hawk, John; Rodriguez, Juan; McDonald, Carson; Jackson, Cliff

    2016-01-01

    The Wide Field Infrared Survey Telescope using Astrophysics Focused Telescope Assets (WFIRST-AFTA) utilizes an existing 2.4 m diameter Hubble sized telescope donated from elsewhere in the federal government for near-infrared sky surveys and Exoplanet searches to answer crucial questions about the universe and dark energy. The WFIRST design continues to increase in maturity, detail, and complexity with each design cycle leading to a Mission Concept Review and entrance to the Mission Formulation Phase. Each cycle has required a Structural-Thermal-Optical-Performance (STOP) analysis to ensure the design can meet the stringent pointing and stability requirements. As such, the models have also grown in size and complexity leading to increased model run time. This paper addresses efforts to reduce the run time while still maintaining sufficient accuracy for STOP analyses. A technique was developed to identify slews between observing orientations that were sufficiently different to warrant recalculation of the environmental fluxes to reduce the total number of radiation calculation points. The inclusion of a cryocooler fluid loop in the model also forced smaller time-steps than desired, which greatly increases the overall run time. The analysis of this fluid model required mitigation to drive the run time down by solving portions of the model at different time scales. Lastly, investigations were made into the impact of the removal of small radiation couplings on run time and accuracy. Use of these techniques allowed the models to produce meaningful results within reasonable run times to meet project schedule deadlines.
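The slew-screening technique described — recomputing environmental fluxes only when the change between observing orientations is large enough to matter — can be sketched as follows (an illustration only; the actual WFIRST-AFTA criterion and tolerance are not given in the abstract, and the quaternion representation is an assumption):

```python
import numpy as np

def flux_recalc_points(quaternions, angle_tol_deg=5.0):
    """Sketch of slew screening: keep an orientation for environmental-flux
    recalculation only if it differs from the last kept orientation by more
    than angle_tol_deg. Orientations are unit quaternions (w, x, y, z);
    the angle between two attitudes q1, q2 is 2*arccos(|q1 . q2|)."""
    kept = [0]
    for i, q in enumerate(quaternions[1:], start=1):
        dot = abs(np.dot(q, quaternions[kept[-1]]))
        angle = 2.0 * np.degrees(np.arccos(np.clip(dot, -1.0, 1.0)))
        if angle > angle_tol_deg:
            kept.append(i)
    return kept
```

Orientations within the tolerance reuse the fluxes of the last kept attitude, which is what cuts the number of radiation calculation points.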

  14. Developing Operator Models for UAV Search Scheduling

    NARCIS (Netherlands)

    Bertuccelli, L.F.; Beckers, N.W.M.; Cummings, M.L.

    2010-01-01

    With the increased use of Unmanned Aerial Vehicles (UAVs), it is envisioned that UAV operators will become high level mission supervisors, responsible for information management and task planning. In the context of search missions, operators supervising a large number of UAVs can become overwhelmed

  15. Renormalization group running of fermion observables in an extended non-supersymmetric SO(10) model

    Science.gov (United States)

    Meloni, Davide; Ohlsson, Tommy; Riad, Stella

    2017-03-01

    We investigate the renormalization group evolution of fermion masses, mixings and quartic scalar Higgs self-couplings in an extended non-supersymmetric SO(10) model, where the Higgs sector contains the 10 H, 120 H, and 126 H representations. The group SO(10) is spontaneously broken at the GUT scale to the Pati-Salam group and subsequently to the Standard Model (SM) at an intermediate scale M I. We explicitly take into account the effects of the change of gauge groups in the evolution. In particular, we derive the renormalization group equations for the different Yukawa couplings. We find that the computed physical fermion observables can be successfully matched to the experimentally measured values at the electroweak scale. Using the same Yukawa couplings at the GUT scale, the measured values of the fermion observables cannot be reproduced with a SM-like evolution, leading to differences in the numerical values of up to around 80%. Furthermore, a similar evolution can be performed for a minimal SO(10) model, where the Higgs sector consists of the 10 H and 126 H representations only, showing an equally good potential to describe the low-energy fermion observables. Finally, for both the extended and the minimal SO(10) models, we present predictions for the three Dirac and Majorana CP-violating phases as well as three effective neutrino mass parameters.
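At one loop, the gauge-coupling part of such a renormalization group evolution has a simple closed form, sketched below (standard one-loop running with SM beta coefficients used as illustration; the paper's analysis involves the full Yukawa and scalar sector across the intermediate Pati-Salam scale, which this sketch does not attempt):

```python
import numpy as np

def run_gauge(alpha_inv_mz, b, mz=91.19, mu=2.0e16):
    """One-loop RGE sketch: 1/alpha_i(mu) = 1/alpha_i(MZ) - (b_i / 2pi) ln(mu/MZ).
    alpha_inv_mz : inverse couplings at the Z mass
    b            : one-loop beta coefficients for each gauge group
    Scales in GeV; returns the inverse couplings at scale mu."""
    t = np.log(mu / mz)
    return [ai - bi / (2.0 * np.pi) * t for ai, bi in zip(alpha_inv_mz, b)]
```

With the GUT-normalized SM coefficients b = (41/10, -19/6, -7), the non-abelian couplings weaken (their inverse couplings grow) toward the GUT scale while the hypercharge coupling strengthens, reproducing the familiar near-crossing of the three lines.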

  16. Measuring Short- and Long-run Promotional Effectiveness on Scanner Data Using Persistence Modeling

    NARCIS (Netherlands)

    M.G. Dekimpe (Marnik); D.M. Hanssens (Dominique); V.R. Nijs; J-B.E.M. Steenkamp (Jan-Benedict)

    2003-01-01

    The use of price promotions to stimulate brand and firm performance is increasing. We discuss how (i) the availability of longer scanner data time series, and (ii) persistence modeling, have led to greater insights into the dynamic effects of price promotions, as one can now quantify th

  17. IPSL-CM5A2. An Earth System Model designed to run long simulations for past and future climates.

    Science.gov (United States)

    Sepulchre, Pierre; Caubel, Arnaud; Marti, Olivier; Hourdin, Frédéric; Dufresne, Jean-Louis; Boucher, Olivier

    2017-04-01

    The IPSL-CM5A model was developed and released in 2013 "to study the long-term response of the climate system to natural and anthropogenic forcings as part of the 5th Phase of the Coupled Model Intercomparison Project (CMIP5)" [Dufresne et al., 2013]. Although this model has also been used for numerous paleoclimate studies, a major limitation was its computation time, which averaged 10 model-years per day on 32 cores of the Curie supercomputer (at the TGCC computing center, France). Such performance was compatible with the experimental designs of intercomparison projects (e.g. CMIP, PMIP) but became limiting for modelling activities involving several multi-millennial experiments, which are typical for Quaternary or "deep-time" paleoclimate studies, in which a fully equilibrated deep ocean is mandatory. Here we present the Earth System model IPSL-CM5A2. Based on IPSL-CM5A, technical developments have been performed both on separate components and on the coupling system in order to speed up the whole coupled model. These developments include the integration of hybrid MPI-OpenMP parallelization in the LMDz atmospheric component, the use of a new input-output library to perform parallel asynchronous input/output by using computing cores as "IO servers", and the use of a parallel coupling library between the ocean and the atmospheric components. Running on 304 cores, the model can now simulate 55 years per day, opening the way to multi-millennial simulations. Beyond improving computational performance, one aim of setting up IPSL-CM5A2 was also to overcome the cold bias in global surface air temperature (t2m) in IPSL-CM5A. We present the tuning strategy to overcome this bias as well as the main characteristics (including biases) of the pre-industrial climate simulated by IPSL-CM5A2.
Lastly, we briefly present paleoclimate simulations run with this model, for the Holocene and for deeper timescales in the Cenozoic, for which the particular continental configuration

  18. Quark flavour observables in the Littlest Higgs model with T-parity after LHC Run 1.

    Science.gov (United States)

    Blanke, Monika; Buras, Andrzej J; Recksiegel, Stefan

    2016-01-01

    The Littlest Higgs model with T-parity (LHT) belongs to the simplest new physics scenarios with new sources of flavour and CP violation. The latter originate in the interactions of ordinary quarks and leptons with heavy mirror quarks and leptons that are mediated by new heavy gauge bosons. Also a heavy fermionic top partner is present in this model which communicates with the SM fermions by means of standard [Formula: see text] and [Formula: see text] gauge bosons. We present a new analysis of quark flavour observables in the LHT model in view of the oncoming flavour precision era. We use all available information on the CKM parameters, lattice QCD input and experimental data on quark flavour observables and corresponding theoretical calculations, taking into account new lower bounds on the symmetry breaking scale and the mirror quark masses from the LHC. We investigate by how much the branching ratios for a number of rare K and B decays are still allowed to depart from their SM values. This includes [Formula: see text], [Formula: see text], [Formula: see text], [Formula: see text], [Formula: see text], [Formula: see text], [Formula: see text], and [Formula: see text]. Taking into account the constraints from [Formula: see text] processes, significant departures from the SM predictions for [Formula: see text] and [Formula: see text] are possible, while the effects in B decays are much smaller. In particular, the LHT model favours [Formula: see text], which is not supported by the data, and the present anomalies in [Formula: see text] decays cannot be explained in this model. With the recent lattice and large N input the imposition of the [Formula: see text] constraint implies a significant suppression of the branching ratio for [Formula: see text] with respect to its SM value while allowing only for small modifications of [Formula: see text]. 
Finally, we investigate how the LHT physics could be distinguished from other models by means of indirect measurements and

  19. Quark flavour observables in the Littlest Higgs model with T-parity after LHC Run 1

    Science.gov (United States)

    Blanke, Monika; Buras, Andrzej J.; Recksiegel, Stefan

    2016-04-01

    The Littlest Higgs model with T-parity (LHT) belongs to the simplest new physics scenarios with new sources of flavour and CP violation. The latter originate in the interactions of ordinary quarks and leptons with heavy mirror quarks and leptons that are mediated by new heavy gauge bosons. Also a heavy fermionic top partner is present in this model which communicates with the SM fermions by means of standard $W^\pm$ and $Z^0$ gauge bosons. We present a new analysis of quark flavour observables in the LHT model in view of the oncoming flavour precision era. We use all available information on the CKM parameters, lattice QCD input and experimental data on quark flavour observables and corresponding theoretical calculations, taking into account new lower bounds on the symmetry breaking scale and the mirror quark masses from the LHC. We investigate by how much the branching ratios for a number of rare K and B decays are still allowed to depart from their SM values. This includes $K^+ \to \pi^+ \nu\bar\nu$, $K_L \to \pi^0 \nu\bar\nu$, $K_L \to \mu^+\mu^-$, $B \to X_s\gamma$, $B_{s,d} \to \mu^+\mu^-$, $B \to K^{(*)}\ell^+\ell^-$, $B \to K^{(*)}\nu\bar\nu$, and $\varepsilon'/\varepsilon$. Taking into account the constraints from $\Delta F = 2$ processes, significant departures from the SM predictions for $K^+ \to \pi^+ \nu\bar\nu$ and $K_L \to \pi^0 \nu\bar\nu$ are possible, while the effects in B decays are much smaller. In particular, the LHT model favours $\mathcal{B}(B_s \to \mu^+\mu^-) \geq \mathcal{B}(B_s \to \mu^+\mu^-)_{\rm SM}$, which is not supported by the data, and the present anomalies in $B \to K^{(*)}\ell^+\ell^-$ decays cannot be explained in this model. With the recent lattice and large N input the imposition of the $\varepsilon'/\varepsilon$ constraint implies a significant suppression of the branching ratio for $K_L \to \pi^0 \nu\bar\nu$ with respect to its SM value while allowing only for small modifications of $K^+ \to \pi^+ \nu\bar\nu$. Finally, we investigate how the LHT physics could be distinguished from other models by means of

  20. Liquid phase methanol LaPorte Process Development Unit: Modification, operation, and support studies. Task 2.2: Process variable Scan Run E-8 and in-situ activation with syngas Run E-9

    Energy Technology Data Exchange (ETDEWEB)

    1991-02-28

    The LPMEOH process was conceived and patented by Chem Systems Inc. in 1975. Initial research and studies on the process focused on two distinct modes of operation. The first was a liquid fluidized mode with relatively large catalyst pellets suspended in a fluidizing liquid, and the second was an entrained (slurry) mode with fine catalyst particles slurried in an inert liquid. The development of both operating modes progressed in parallel from bench scale reactors, through an intermediate scale lab PDU, and then to the LaPorte PDU in 1984. The slurry mode of operation was ultimately chosen as the operating mode of choice due to its superior performance.

  1. Run off-on-out method and models for soil infiltrability on hill-slope under rainfall conditions

    Institute of Scientific and Technical Information of China (English)

    2006-01-01

The soil infiltrability of a hill-slope is important to studies and practices such as hydrological processes, crop water supply, irrigation, and soil erosion. A new run off-on-out method for measuring soil infiltrability on a hill-slope under rainfall conditions is proposed. Based on the water (mass) balance, mathematical models were derived for estimating soil infiltrability from the advance of runoff over the soil surface and from the water running out of the slope. Two experimental cases were conducted. Case I used a rainfall intensity of 20 mm/h on a slope of about 0° with a runoff/on length (area) ratio of 1:1. Case II used a rainfall intensity of 60 mm/h on a slope of 20° with the same 1:1 ratio. The double-ring method was also used to measure infiltrability for comparison. The experiments were done at a soil moisture of 10%, and the required data were collected in the laboratory. Infiltrability curves were computed from the experimental data. The results indicate that the method conceptually represents the transient infiltrability process well, including the very high initial soil infiltrability, and the rationality of the method and models was validated. Estimated by comparing the rainfall amount with the infiltrated volume, the errors of the method for the two cases were 1.82%/1.39% and 4.49%/3.529% (experimental/model), respectively, demonstrating its accuracy. The transient and steady infiltrability measured with the double ring was much lower than that measured with the new method, due to the water supply limit and the breakdown of soil aggregates at the initial infiltration stage. The method overcomes the shortcomings of the traditional sprinkler and double-ring methods and can be used to measure the infiltrability of sloped surfaces under rainfall-runoff-erosion conditions.
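
The water-balance bookkeeping behind the method can be sketched in a few lines (an illustrative sketch with made-up numbers; the paper's actual models also use the runoff-advance data, which this omits):

```python
# Hedged sketch of the water-balance idea behind the run off-on-out method:
# rain that is not measured leaving the slope must have infiltrated.
# All variable names and numbers below are illustrative, not the paper's.

def infiltrability_series(rain_mm_h, outflow_mm_h):
    """Infiltration rate (mm/h) = rainfall supply minus outflow, per time step."""
    return [r - q for r, q in zip(rain_mm_h, outflow_mm_h)]

rain = [60.0] * 6                                # constant 60 mm/h (as in case II)
outflow = [0.0, 5.0, 18.0, 30.0, 38.0, 42.0]     # runoff leaving the slope, mm/h
rates = infiltrability_series(rain, outflow)     # high initially, then declining
```

The declining series reproduces the qualitative behaviour described: nearly all rain infiltrates at first, and the rate falls toward a steady value.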

  2. Influential factors of red-light running at signalized intersection and prediction using a rare events logistic regression model.

    Science.gov (United States)

    Ren, Yilong; Wang, Yunpeng; Wu, Xinkai; Yu, Guizhen; Ding, Chuan

    2016-10-01

Red-light running (RLR) has become a major safety concern at signalized intersections. To prevent RLR-related crashes, it is critical to identify the factors that significantly influence drivers' RLR behavior and to predict potential RLR in real time. In this research, nine months of RLR events, extracted from high-resolution traffic data collected by loop detectors at three signalized intersections, were used to identify the factors that significantly affect RLR behavior. The data analysis indicated that occupancy time, time gap, used yellow time, time left to yellow start, whether the preceding vehicle runs through the intersection during yellow, and whether there is a vehicle passing through the intersection in the adjacent lane were significant factors for RLR behavior. Furthermore, because of the rare-events nature of RLR, a modified rare events logistic regression model was developed for RLR prediction. The rare events logistic regression method has been applied in many fields and shows impressive performance, but no previous research has applied it to RLR. The results showed that the rare events logistic regression model performed significantly better than the standard logistic regression model. More importantly, the proposed RLR prediction method is based purely on loop detector data collected from a single advance loop detector located 400 feet upstream of the stop bar. This brings great potential for future field applications, since loops have been widely implemented at many intersections and can collect data in real time. This research is expected to contribute significantly to the improvement of intersection safety.
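
The general modelling idea can be sketched as follows: fit an ordinary logistic regression, then apply a King-Zeng style rare-events intercept correction. This is a generic sketch on synthetic data, not the authors' modified model; the single predictor, the assumed population rate tau, and all numbers are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for RLR data: a rare positive class and one predictor
# (think "time gap"); all names and numbers are illustrative.
n = 5000
x = rng.normal(size=n)
y = rng.binomial(1, 1 / (1 + np.exp(-(-4.0 + 1.5 * x))))

# Plain logistic regression fitted by Newton's method (IRLS).
X = np.column_stack([np.ones(n), x])
beta = np.zeros(2)
for _ in range(25):
    p = 1 / (1 + np.exp(-X @ beta))
    grad = X.T @ (y - p)
    hess = X.T @ (X * (p * (1 - p))[:, None])
    beta += np.linalg.solve(hess, grad)

# King-Zeng style rare-events correction: if the sample over-represents
# events relative to the true population rate tau, shift the intercept.
tau = 0.01                     # assumed population RLR rate (illustrative)
ybar = y.mean()                # event rate in the sample
beta_re = beta.copy()
beta_re[0] -= np.log(((1 - tau) / tau) * (ybar / (1 - ybar)))
```

When the sampled event rate exceeds tau, the correction lowers the intercept so predicted probabilities match the rarer population rate.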

  3. Quark flavour observables in the Littlest Higgs model with T-parity after LHC Run 1

    CERN Document Server

    Blanke, Monika; Recksiegel, Stefan

    2016-01-01

    The Littlest Higgs Model with T-parity (LHT) belongs to the simplest new physics scenarios with new sources of flavour and CP violation. We present a new analysis of quark observables in the LHT model in view of the oncoming flavour precision era. We use all available information on the CKM parameters, lattice QCD input and experimental data on quark flavour observables and corresponding theoretical calculations, taking into account new lower bounds on the symmetry breaking scale and the mirror quark masses from the LHC. We investigate by how much the branching ratios for a number of rare $K$ and $B$ decays are still allowed to depart from their SM values. This includes $K^+\\to\\pi^+\

  4. PDU Run 10

    Energy Technology Data Exchange (ETDEWEB)

    1981-09-01

PDU Run 10, a 46-day H-Coal syncrude mode operation using Wyodak coal, successfully met all targeted objectives and was the longest PDU operation to date in this program. The targeted coal conversion of 90 wt% was exceeded, with a C4-975°F distillate yield of 43 to 48 wt%. Amocat 1A catalyst was qualified for pilot plant operation based on improved operation and superior performance. PDU 10 achieved improved yields and lower hydrogen consumption compared to PDU 6, a similar operation. High hydroclone efficiency and high solids content in the vacuum still were maintained throughout the run. Steady operations at lower oil/solids ratios were demonstrated. Microautoclave testing was introduced as an operational aid. Four additional studies were successfully completed during PDU 10: a catalyst tracer study in conjunction with Sandia Laboratories; tests on letdown valve trims for Battelle; a fluid dynamics study with Amoco; and special high-pressure liquid sampling.

  5. Running Club

    CERN Multimedia

    Running Club

    2010-01-01

    The 2010 edition of the annual CERN Road Race will be held on Wednesday 29th September at 18h. The 5.5km race takes place over 3 laps of a 1.8 km circuit in the West Area of the Meyrin site, and is open to everyone working at CERN and their families. There are runners of all speeds, with times ranging from under 17 to over 34 minutes, and the race is run on a handicap basis, by staggering the starting times so that (in theory) all runners finish together. Children (< 15 years) have their own race over 1 lap of 1.8km. As usual, there will be a “best family” challenge (judged on best parent + best child). Trophies are awarded in the usual men’s, women’s and veterans’ categories, and there is a challenge for the best age/performance. Every adult will receive a souvenir prize, financed by a registration fee of 10 CHF. Children enter free (each child will receive a medal). More information, and the online entry form, can be found at http://cern.ch/club...

  6. RUN COORDINATION

    CERN Multimedia

    Christophe Delaere

    2012-01-01

  On Wednesday 14 March, the machine group successfully injected beams into the LHC for the first time this year. Within 48 hours they managed to ramp the beams to 4 TeV and proceeded to squeeze to β* = 0.6 m, settings that have been used routinely since then. This brought to an end the CMS Cosmic Run at ~Four Tesla (CRAFT), during which we collected 800k cosmic ray events with a track crossing the central Tracker. Since then that sample has been topped up to two million, allowing further refinements of the Tracker alignment. The LHC started delivering the first collisions on 5 April with two bunches colliding in CMS, giving a pile-up of ~27 interactions per crossing at the beginning of the fill. Since then the machine has increased the number of colliding bunches to reach 1380 bunches and peak instantaneous luminosities around 6.5E33 at the beginning of fills. The average bunch charges reached ~1.5E11 protons per bunch, which results in an initial pile-up of ~30 interactions per crossing. During the ...

  7. RUN COORDINATION

    CERN Multimedia

    C. Delaere

    2012-01-01

  With the analysis of the first 5 fb⁻¹ culminating in the announcement of the observation of a new particle with a mass of around 126 GeV/c², the CERN directorate decided to extend the LHC run until February 2013. This adds three months to the original schedule. Since then the LHC has continued to perform extremely well, and the total luminosity delivered so far this year is 22 fb⁻¹. CMS also continues to perform excellently, recording data with efficiency higher than 95% for fills with the magnetic field at nominal value. The highest instantaneous luminosity achieved by LHC to date is 7.6×10³³ cm⁻²s⁻¹, which translates into 35 interactions per crossing. On the CMS side there has been a lot of work to handle these extreme conditions, such as a new DAQ computer farm and trigger menus to handle the pile-up, automation of recovery procedures to minimise the lost luminosity, better training for the shift crews, etc. We did suffer from a couple of infrastructure ...

  8. The Effect of Treadmill Running on Passive Avoidance Learning in Animal Model of Alzheimer Disease

    OpenAIRE

    Nasrin Hosseini; Hojjatallah Alaei; Parham Reisi; Maryam Radahmadi

    2013-01-01

    Background: Alzheimer's disease is a progressive neurodegenerative disorder of the elderly, characterized by dementia and severe neuronal loss in some regions of the brain, such as the nucleus basalis magnocellularis, which plays an important role in brain functions such as learning and memory. Loss of the cholinergic neurons of the nucleus basalis magnocellularis induced by ibotenic acid is commonly regarded as a suitable model of Alzheimer's disease. Previous studies reported that exercise...

  9. Classically conformal U(1)' extended standard model, electroweak vacuum stability, and LHC Run-2 bounds

    CERN Document Server

    Das, Arindam; Okada, Nobuchika; Takahashi, Dai-suke

    2016-01-01

    We consider the minimal U(1)' extension of the Standard Model (SM) with classically conformal invariance, where an anomaly-free U(1)' gauge symmetry is introduced along with three generations of right-handed neutrinos and a U(1)' Higgs field. Since the classically conformal symmetry forbids all dimensionful parameters in the model, the U(1)' gauge symmetry is broken through the Coleman-Weinberg mechanism, generating the mass terms of the U(1)' gauge boson (Z' boson) and the right-handed neutrinos. Through a mixing quartic coupling between the U(1)' Higgs field and the SM Higgs doublet field, the radiative U(1)' gauge symmetry breaking also triggers the breaking of the electroweak symmetry. In this model context, we first investigate the electroweak vacuum instability problem in the SM. Employing the renormalization group equations at the two-loop level and the central values for the world average masses of the top quark ($m_t=173.34$ GeV) and the Higgs boson ($m_h=125.09$ GeV), we perform parameter scans t...

  10. Running climate model on a commercial cloud computing environment: A case study using Community Earth System Model (CESM) on Amazon AWS

    Science.gov (United States)

    Chen, Xiuhong; Huang, Xianglei; Jiao, Chaoyi; Flanner, Mark G.; Raeker, Todd; Palen, Brock

    2017-01-01

    The suites of numerical models used for simulating the climate of our planet are usually run on dedicated high-performance computing (HPC) resources. This study investigates an alternative to the usual approach, i.e. carrying out climate model simulations in a commercially available cloud computing environment. We test the performance and reliability of running the CESM (Community Earth System Model), a flagship climate model in the United States developed by the National Center for Atmospheric Research (NCAR), on Amazon Web Services (AWS) EC2, the cloud computing environment of Amazon.com, Inc. StarCluster is used to create a virtual computing cluster on AWS EC2 for the CESM simulations. The wall-clock time for one year of CESM simulation on the AWS EC2 virtual cluster is comparable to the time spent for the same simulation on a local dedicated high-performance computing cluster with InfiniBand connections. The CESM simulation can be efficiently scaled with the number of CPU cores on the AWS EC2 virtual cluster environment up to 64 cores. For the standard configuration of the CESM at a spatial resolution of 1.9° latitude by 2.5° longitude, increasing the number of cores from 16 to 64 reduces the wall-clock running time by more than 50% and the scaling is nearly linear. Beyond 64 cores, the communication latency starts to outweigh the benefit of distributed computing and the parallel speedup becomes nearly unchanged.
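
The reported scaling behaviour amounts to simple strong-scaling arithmetic, sketched here with hypothetical wall-clock numbers (not the study's measurements):

```python
# Strong-scaling arithmetic of the kind reported: wall-clock hours per
# simulated year versus core count (numbers are hypothetical placeholders).
wall_clock = {16: 10.0, 32: 5.4, 64: 2.9, 128: 2.8}

speedup = {c: wall_clock[16] / t for c, t in wall_clock.items()}
efficiency = {c: speedup[c] / (c / 16) for c in wall_clock}

# 16 -> 64 cores: >50% reduction in wall-clock time, nearly linear scaling;
# 64 -> 128 cores: communication latency dominates and the speedup saturates.
```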

  11. Operational, regional-scale, chemical weather forecasting models in Europe

    NARCIS (Netherlands)

    Kukkonen, J.; Balk, T.; Schultz, D.M.; Baklanov, A.; Klein, T.; Miranda, A.I.; Monteiro, A.; Hirtl, M.; Tarvainen, V.; Boy, M.; Peuch, V.H.; Poupkou, A.; Kioutsioukis, I.; Finardi, S.; Sofiev, M.; Sokhi, R.; Lehtinen, K.; Karatzas, K.; San José, R.; Astitha, M.; Kallos, G.; Schaap, M.; Reimer, E.; Jakobs, H.; Eben, K.

    2011-01-01

    Numerical models that combine weather forecasting and atmospheric chemistry are here referred to as chemical weather forecasting models. Eighteen operational chemical weather forecasting models on regional and continental scales in Europe are described and compared in this article. Topics discussed

  12. Modelling Metrics for Mine Counter Measure Operations

    Science.gov (United States)

    2014-08-01

    The metrics model was initially developed at DRDC Atlantic and was recently upgraded at DRDC CORA under the Technology Investment Fund (TIF) project, with application to autonomous underwater vehicles conducting mine countermeasure operations.

  13. Fast Atmosphere-Ocean Model Runs with Large Changes in CO2

    Science.gov (United States)

    Russell, Gary L.; Lacis, Andrew A.; Rind, David H.; Colose, Christopher; Opstbaum, Roger F.

    2013-01-01

    How does climate sensitivity vary with the magnitude of climate forcing? This question was investigated using a modified coupled atmosphere-ocean model whose stability was improved so that it would accommodate large radiative forcings yet be fast enough to reach equilibrium rapidly. Experiments were performed in which atmospheric CO2 was multiplied by powers of 2, from 1/64 to 256 times the 1950 value. From 8 to 32 times the 1950 CO2 value, climate sensitivity for doubling CO2 reaches 8°C, due to increases in water vapor absorption and cloud top height and to reductions in low-level cloud cover. As the CO2 amount increases further, sensitivity drops as cloud cover and planetary albedo stabilize. No water vapor-induced runaway greenhouse caused by increased CO2 was found for the range of CO2 examined. With CO2 at or below 1/8 of the 1950 value, runaway sea ice does occur as the planet cascades to a snowball Earth climate with fully ice-covered oceans and global mean surface temperatures near -30°C.
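
For context on why the experiments are organised in powers of 2, the standard simplified CO2 forcing expression of Myhre et al. (1998) gives a fixed forcing increment per doubling; a quick sketch (this formula is general background, not the model used in the study):

```python
import math

# Standard simplified CO2 forcing expression (Myhre et al. 1998): forcing is
# logarithmic in concentration, so each doubling adds the same increment,
# which is what makes "sensitivity per doubling" a natural variable.
def co2_forcing(ratio):
    """Approximate radiative forcing (W/m^2) for CO2 at `ratio` x reference."""
    return 5.35 * math.log(ratio)

ratios = [2.0 ** k for k in range(-6, 9)]     # 1/64 ... 256, as in the experiments
forcings = [co2_forcing(r) for r in ratios]
per_doubling = forcings[1] - forcings[0]      # ~3.7 W/m^2 per doubling
```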

  14. Modeling the Coordinated Operation between Bus Rapid Transit and Bus

    OpenAIRE

    Jiaqing Wu; Rui Song; Youan Wang; Feng Chen; Shubin Li

    2015-01-01

    The coordination between bus rapid transit (BRT) and feeder bus service is helpful in improving the operational efficiency and service level of an urban public transport system. Therefore, a coordinated operation model of BRT and bus is developed in this paper. The total costs are formulated and optimized by a genetic algorithm. Moreover, skip-stop BRT operation is considered when building the coordinated operation model. A case of the existing bus network in Beijing is studied, the ...

  15. Vmax estimate from three-parameter critical velocity models: validity and impact on 800 m running performance prediction.

    Science.gov (United States)

    Bosquet, Laurent; Duchene, Antoine; Lecot, François; Dupont, Grégory; Leger, Luc

    2006-05-01

    The purpose of this study was to evaluate the validity of maximal velocity (Vmax) estimated from three-parameter systems models, and to compare the predictive value of two- and three-parameter models for the 800 m. Seventeen trained male subjects (VO2max = 66.54 +/- 7.29 ml min(-1) kg(-1)) performed five randomly ordered constant velocity tests (CVT), a maximal velocity test (mean velocity over the last 10 m portion of a 40 m sprint) and an 800 m time trial (V800m). Five systems models (two three-parameter and three two-parameter) were used to compute Vmax (three-parameter models), critical velocity (CV), anaerobic running capacity (ARC) and V800m from times to exhaustion during CVT. Vmax estimates were significantly lower than the measured maximal velocity (by 0.19 or more). Critical velocity (CV) alone explained 40-62% of the variance in V800m. Combining CV with the other parameters of each model to produce a calculated V800m resulted in a clear improvement of this relationship (correlations of 0.83 and higher), and the three-parameter models had a better association with V800m (0.93 and higher) than the two-parameter models (0.83 and higher). Although three-parameter models appear to have a better predictive value for short-duration events such as the 800 m, the fact that Vmax is not associated with the ability it is supposed to reflect suggests that they are more empirical than systems models.

  16. Community Coordinated Modeling Center: Addressing Needs of Operational Space Weather Forecasting

    Science.gov (United States)

    Kuznetsova, M.; Maddox, M.; Pulkkinen, A.; Hesse, M.; Rastaetter, L.; Macneice, P.; Taktakishvili, A.; Berrios, D.; Chulaki, A.; Zheng, Y.; Mullinix, R.

    2012-01-01

    Models are key elements of space weather forecasting. The Community Coordinated Modeling Center (CCMC, http://ccmc.gsfc.nasa.gov) hosts a broad range of state-of-the-art space weather models and enables access to complex models through an unmatched automated web-based runs-on-request system. Model output comparisons with observational data, carried out by a large number of CCMC users, open an unprecedented mechanism for extensive model testing and broad community feedback on model performance. The CCMC also evaluates models' prediction abilities as an unbiased broker and supports operational model selection. The CCMC is organizing and leading a series of community-wide projects aiming to evaluate the current state of space weather modeling, to address challenges of model-data comparison, and to define metrics for various users' needs and requirements. Many CCMC models run continuously in real time, and over the years the CCMC has acquired unique experience in developing and maintaining real-time systems. CCMC staff expertise and trusted relations with model owners make it possible to keep up to date with rapid advances in model development. The information gleaned from the real-time calculations is tailored to specific mission needs. Model forecasts combined with data streams from NASA and other missions are integrated into an innovative configurable data analysis and dissemination system (http://iswa.gsfc.nasa.gov) that is accessible worldwide. The talk will review the latest progress and discuss opportunities for addressing operational space weather needs in innovative and collaborative ways.

  17. A multi-model Python wrapper for operational oil spill transport forecasts

    Science.gov (United States)

    Hou, X.; Hodges, B. R.; Negusse, S.; Barker, C.

    2015-01-01

    The Hydrodynamic and oil spill modeling system for Python (HyosPy) is presented as an example of a multi-model wrapper that ties together existing models, web access to forecast data and visualization techniques as part of an adaptable operational forecast system. The system is designed to automatically run a continual sequence of hindcast/forecast hydrodynamic models so that multiple predictions of the time- and space-varying velocity fields are already available when a spill is reported. Once the user provides the estimated spill parameters, the system runs multiple oil spill prediction models using the output from the hydrodynamic models. As new wind and tide data become available, they are downloaded from the web, used as forcing conditions for a new instance of the hydrodynamic model and then applied to a new instance of the oil spill model. The predicted spill trajectories from multiple oil spill models are visualized through Python methods invoking Google Maps and Google Earth functions. HyosPy is designed in modules that allow easy future adaptation to new models, new data sources or new visualization tools.
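
The wrapper pattern can be illustrated with a minimal sketch (the class and function names here are hypothetical stand-ins, not the actual HyosPy API):

```python
from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class ForecastSystem:
    """Toy multi-model wrapper: every hydrodynamic model feeds every spill model."""
    hydro_models: List[Callable]     # each: forcing -> velocity field
    spill_models: List[Callable]     # each: (velocity, spill) -> trajectory
    trajectories: list = field(default_factory=list)

    def cycle(self, forcing, spill):
        """One forecast cycle: rerun hydrodynamics on new forcing, then spills."""
        for hydro in self.hydro_models:
            velocity = hydro(forcing)
            for spill_model in self.spill_models:
                self.trajectories.append(spill_model(velocity, spill))
        return self.trajectories

# toy stand-ins for real models
hydro_a = lambda f: [u * 1.0 for u in f]
hydro_b = lambda f: [u * 0.9 for u in f]
drift   = lambda v, s: [s + sum(v)]

system = ForecastSystem([hydro_a, hydro_b], [drift])
tracks = system.cycle(forcing=[0.1, 0.2], spill=5.0)   # 2 hydro x 1 spill models
```

Each new batch of downloaded forcing data would trigger another `cycle`, accumulating an ensemble of trajectories to visualize.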

  18. An improved Peronnet-Thibault mathematical model of human running performance.

    Science.gov (United States)

    Alvarez-Ramirez, Jose

    2002-04-01

    Using an improved Peronnet-Thibault model to analyse the maximal power available during exercise, it was found that a third-order relaxation process for the decreasing dynamics of aerobic power can accurately describe the data available for world track records and the aerobic-to-total energy ratio (ATER). It was estimated that the time-scale for the decreasing dynamics of anaerobic power output is around 25 s, while those for aerobic power output range from 2.12 h to 7.8 days. In agreement with experimental evidence, the ATER showed a rapid increase during the first 300 s of exercise duration, reaching an asymptote close to 100% after 1,000 s. In addition, the transition at which the ATER rises above 50% was found to occur at a race duration of about 100 s, corresponding to race distances of about 800 m. The results suggest that the aerobic power output achieves its maximal value at 300-400 s and reaches a plateau of 26-28 W.kg(-1) that lasts about 5,000 s. After this period, the aerobic power output decreases slowly due to the contribution of long time-scale metabolic processes having smaller energy contributions (about 30-40% of the total aerobic power output).
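
The described behaviour of the aerobic-to-total energy ratio can be reproduced qualitatively with a toy two-component power model (the functional forms and parameter values below are illustrative assumptions chosen only to mimic the reported time-scales and plateau; they are not the improved Peronnet-Thibault model itself):

```python
import math

# Toy model: anaerobic power decays with a ~25 s time constant; aerobic power
# rises to a plateau near 26 W/kg.  Energies are analytic integrals of power.
TAU_AN, E_AN = 25.0, 2000.0      # anaerobic time constant (s), capacity (J/kg)
TAU_AER, P_AER = 30.0, 26.0      # aerobic rise time (s), plateau power (W/kg)

def aerobic_energy(t):
    return P_AER * (t - TAU_AER * (1 - math.exp(-t / TAU_AER)))

def anaerobic_energy(t):
    return E_AN * (1 - math.exp(-t / TAU_AN))

def ater(t):
    """Aerobic-to-total energy ratio at race duration t (seconds)."""
    ea = aerobic_energy(t)
    return ea / (ea + anaerobic_energy(t))

# ATER rises quickly, crosses ~50% near 100 s (~800 m), and nears 1 by 1000 s
```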

  19. Regional on-road vehicle running emissions modeling and evaluation for conventional and alternative vehicle technologies.

    Science.gov (United States)

    Frey, H Christopher; Zhai, Haibo; Rouphail, Nagui M

    2009-11-01

    This study presents a methodology for estimating high-resolution, regional on-road vehicle emissions and the associated reductions in air pollutant emissions from vehicles that utilize alternative fuels or propulsion technologies. The fuels considered are gasoline, diesel, ethanol, biodiesel, compressed natural gas, hydrogen, and electricity. The technologies considered are internal combustion or compression engines, hybrids, fuel cell, and electric. Road link-based emission models are developed using modal fuel use and emission rates applied to facility- and speed-specific driving cycles. For an urban case study, passenger cars were found to be the largest sources of HC, CO, and CO(2) emissions, whereas trucks contributed the largest share of NO(x) emissions. When alternative fuel and propulsion technologies were introduced in the fleet at a modest market penetration level of 27%, their emission reductions were found to be 3-14%. Emissions for all pollutants generally decreased with an increase in the market share of alternative vehicle technologies. Turnover of the light duty fleet to newer Tier 2 vehicles reduced emissions of HC, CO, and NO(x) substantially. However, modest improvements in fuel economy may be offset by VMT growth and reductions in overall average speed.
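
The link-based calculation reduces to summing modal emission rates times the time spent in each driving mode; a minimal sketch with hypothetical NOx rates:

```python
# Sketch of a road-link emission estimate: modal emission rates applied to
# the time spent in each driving mode on the link (all numbers hypothetical).
RATES_G_PER_S = {                 # NOx emission rate by driving mode, g/s
    "idle": 0.001, "cruise": 0.004, "accel": 0.012, "decel": 0.002,
}

def link_emissions(mode_seconds, rates=RATES_G_PER_S):
    """Total grams emitted on a link = sum over modes of rate x time."""
    return sum(rates[m] * s for m, s in mode_seconds.items())

# seconds per mode, derived in practice from facility- and speed-specific cycles
arterial = {"idle": 30, "cruise": 60, "accel": 20, "decel": 15}
freeway  = {"idle": 0, "cruise": 110, "accel": 8, "decel": 6}
total = link_emissions(arterial) + link_emissions(freeway)
```

Regional totals then follow by summing such link estimates over the network and fleet mix.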

  20. Comparison of CAISO-run Plexos output with LLNL-run Plexos output

    Energy Technology Data Exchange (ETDEWEB)

    Schmidt, A; Meyers, C; Smith, S

    2011-12-20

    In this report we compare the output of the California Independent System Operator (CAISO) 33% RPS Plexos model when run on various computing systems. Specifically, we compare the output resulting from running the model on CAISO's computers (Windows) and on LLNL's computers (both Windows and Linux). We conclude that the differences between the three results are negligible in the context of the entire system and are likely attributable to minor differences in Plexos version numbers as well as the MIP solver used in each case.
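
The comparison itself reduces to checking relative differences against a tolerance that reflects solver and version noise; a sketch with hypothetical output values (not the actual Plexos results):

```python
import numpy as np

# Same model output (e.g. hourly dispatch, MW) from two platforms, compared
# within a tolerance for solver/version noise (all numbers hypothetical).
caiso_run = np.array([512.0, 498.5, 505.2, 520.1])
llnl_run  = np.array([512.0, 498.6, 505.1, 520.1])

rel_diff = np.abs(caiso_run - llnl_run) / np.abs(caiso_run)
negligible = bool(np.all(rel_diff < 1e-3))   # all differences below 0.1%
```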

  1. Study of the ion kinetic effects in ICF run-away burn using a quasi-1D hybrid model

    Science.gov (United States)

    Huang, C.-K.; Molvig, K.; Albright, B. J.; Dodd, E. S.; Vold, E. L.; Kagan, G.; Hoffman, N. M.

    2017-02-01

    The loss of fuel ions in the Gamow peak and other kinetic effects related to the α particles during the ignition, runaway burn, and disassembly stages of an inertial confinement fusion D-T capsule are investigated with a quasi-1D hybrid volume ignition model that includes kinetic ions, fluid electrons, Planckian radiation photons, and a metallic pusher. The fuel ion loss due to the Knudsen effect at the fuel-pusher interface is accounted for by a local-loss model by Molvig et al. [Phys. Rev. Lett. 109, 095001 (2012)] with an albedo model for ions returning from the pusher wall. The tail refilling and relaxation of the fuel ion distribution are captured with a nonlinear Fokker-Planck solver. Alpha heating of the fuel ions is modeled kinetically, while simple models for finite alpha range and electron heating are used. This dynamical model is benchmarked against a three-temperature (3T) hydrodynamic burn model employing similar assumptions. For an energetic pusher (~40 kJ) that compresses the fuel to an areal density of ~1.07 g/cm² at ignition, the simulation shows that the Knudsen effect can substantially limit the ion temperature rise in runaway burn. While the final yield decreases modestly from kinetic effects of the α particles, a large reduction of the fuel reactivity during ignition and runaway burn may require a higher Knudsen loss rate compared to the rise time of the temperatures above ~25 keV, when the broad D-T Gamow peak merges into the bulk Maxwellian distribution.

  2. Modeling the Coordinated Operation between Bus Rapid Transit and Bus

    Directory of Open Access Journals (Sweden)

    Jiaqing Wu

    2015-01-01

    The coordination between bus rapid transit (BRT) and feeder bus service is helpful in improving the operational efficiency and service level of an urban public transport system. Therefore, a coordinated operation model of BRT and bus is developed in this paper. The total costs are formulated and optimized by a genetic algorithm. Moreover, skip-stop BRT operation is considered when building the coordinated operation model. A case study of the existing bus network in Beijing is presented, the proposed coordinated operation model of BRT and bus is applied, and the optimized headway and costs are obtained. The results show that the coordinated operation model could effectively decrease the total costs of the transit system and the transfer time of passengers. The results also suggest that coordination between skip-stop BRT and bus during peak hours is more effective than non-coordinated operation.
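
The genetic-algorithm optimization can be sketched with a toy one-variable cost function (the cost coefficients and GA settings are illustrative assumptions, not the paper's formulation):

```python
import random
random.seed(1)

# Toy total-cost function of headway h (minutes): passenger waiting cost grows
# with h, operator cost falls with it.  Coefficients are illustrative only.
def total_cost(h):
    return 40.0 * h / 2 + 600.0 / h      # waiting cost + operating cost

def genetic_search(generations=200, pop_size=30, lo=1.0, hi=20.0):
    """Minimal GA: keep the cheaper half, breed children by blend + mutation."""
    pop = [random.uniform(lo, hi) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=total_cost)
        survivors = pop[: pop_size // 2]
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = random.sample(survivors, 2)
            child = (a + b) / 2 + random.gauss(0, 0.3)   # crossover + mutation
            children.append(min(hi, max(lo, child)))
        pop = survivors + children
    return min(pop, key=total_cost)

best_h = genetic_search()   # analytic optimum is sqrt(2*600/40) ~ 5.48 min
```

With this convex toy cost the optimum is known in closed form, which makes the GA easy to sanity-check; the paper's model optimizes a richer multi-route cost the same way.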

  3. Data Scaling for Operational Risk Modelling

    NARCIS (Netherlands)

    H.S. Na; L. Couto Miranda; J.H. van den Berg (Jan); M. Leipoldt

    2006-01-01

    In 2004, the Basel Committee on Banking Supervision defined Operational Risk (OR) as the risk of loss resulting from inadequate or failed internal processes, people and systems or from external events. After publication of the new capital accord containing this definition, statistical


  5. Petri net model of maglev train running across different control systems

    Institute of Scientific and Technical Information of China (English)

    郑伟

    2012-01-01

    The general framework of the running control system of a maglev train was studied according to the requirements for maglev trains running across different control systems, and the functional subsystems that need to be added were defined. Hierarchical Petri net models of the key system attributes, the maglev operation procedures, and the subsystem functions were built based on systems theory. The highest-level model describes the key attributes of the whole system, while the lower-level models describe the operation procedures of the maglev train and the reliabilities of the subsystems. With this model, the relationship between the failure rate of a maglev train running across different control systems and the reliabilities of the subsystem components was quantitatively analyzed. It is pointed out that the loss ratio of the network connecting neighbouring control systems should be lower than 10^-6 per hour when the required number of cross-system running failures is no more than 1 per year. The failure rates of a maglev train running across different control systems are 1.95×10^-5 and 1.65×10^-5 per hour when the triggering times equal 0.2 and 2.0 min and the stepping times equal 4 and 16 min, respectively. Simulation results show that the failure rate of a train running across the boundary decreases when the reliabilities of the a and b networks are improved, or when the triggering and stepping times of the train are prolonged. With the proposed approach, the reliability requirements of subsystem components are quantitatively identified from the required system-level key attributes. 9 figs, 14 refs.

  6. From Walking to Running

    Science.gov (United States)

    Rummel, Juergen; Blum, Yvonne; Seyfarth, Andre

    The implementation of bipedal gaits in legged robots is still a challenge in state-of-the-art engineering. Human gaits could be realized by imitating human leg dynamics, where a spring-like leg behavior is found, as represented in the bipedal spring-mass model. In this study we explore the gap between walking and running by investigating periodic gait patterns. We found an almost continuous morphing of gait patterns between walking and running. The technical feasibility of this transition is, however, restricted by the duration of the swing phase. In practice, this requires an abrupt gait transition between the two gaits, while a change of speed is not necessary.
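
The spring-mass picture can be illustrated with the stance phase of a single spring leg (the SLIP model), integrated with a plain Euler step; all parameters and initial conditions below are illustrative, not taken from the study:

```python
import math

# Minimal spring-loaded inverted pendulum (SLIP) stance phase for running.
M, K, L0, G = 80.0, 20000.0, 1.0, 9.81   # mass kg, stiffness N/m, leg m, gravity

def stance_step(x, y, vx, vy, dt=1e-4, foot_x=0.0):
    """One Euler step of the point mass attached to a massless spring leg."""
    dx, dy = x - foot_x, y
    l = math.hypot(dx, dy)
    f = K * (L0 - l)                      # spring force along the leg (push out)
    ax = f * dx / (l * M)
    ay = f * dy / (l * M) - G
    return x + vx * dt, y + vy * dt, vx + ax * dt, vy + ay * dt

# touch down with the leg slightly compressed, moving forward and downward
x, y, vx, vy = 0.0, 0.97, 1.0, -1.0
steps = 0
while math.hypot(x, y) < L0 and steps < 100_000:   # stance ends at leg length L0
    x, y, vx, vy = stance_step(x, y, vx, vy)
    steps += 1
# the spring returns the energy: at takeoff the mass is moving upward again
```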

  7. Ubuntu Up and Running

    CERN Document Server

    Nixon, Robin

    2010-01-01

    Ubuntu for everyone! This popular Linux-based operating system is perfect for people with little technical background. It's simple to install, and easy to use -- with a strong focus on security. Ubuntu: Up and Running shows you the ins and outs of this system with a complete hands-on tour. You'll learn how Ubuntu works, how to quickly configure and maintain Ubuntu 10.04, and how to use this unique operating system for networking, business, and home entertainment. This book includes a DVD with the complete Ubuntu system and several specialized editions -- including the Mythbuntu multimedia re

  8. A new regional climate model operating at the meso-gamma scale: performance over Europe

    Directory of Open Access Journals (Sweden)

    David Lindstedt

    2015-01-01

    There are well-known difficulties in running numerical weather prediction (NWP) and climate models at resolutions traditionally referred to as the 'grey zone' (~3-8 km), where deep convection is neither completely resolved by the model dynamics nor completely subgrid. In this study, we describe the performance of an operational NWP model, HARMONIE, in a climate setting (HCLIM), run at two different resolutions (6 and 15 km) for a 10-yr period (1998-2007). This model has a convection scheme particularly designed to operate in the 'grey-zone' regime, which increases the realism and accuracy of the time and spatial evolution of convective processes compared to more traditional parametrisations. HCLIM is evaluated against standard observational data sets over Europe as well as high-resolution regional observations. Not only is the regional climate very well represented, but higher-order climate statistics and smaller-scale spatial characteristics of precipitation are also in good agreement with observations. The added value of making climate simulations at ~5 km resolution compared to more typical regional climate model resolutions is mainly seen for the very rare, high-intensity precipitation events. HCLIM at 6 km resolution reproduces the frequency and intensity of these events better than at 15 km resolution and is in closer agreement with the high-resolution observations.

  9. Vector operations for modelling data-conversion procedures

    Energy Technology Data Exchange (ETDEWEB)

    Rivkin, M.N.

    1992-03-01

    This article presents a set of vector operations that permit effective modelling of operations from extended relational algebra for implementations of variable-construction procedures in data-conversion processors. Vector operations are classified, and test results are given for the ARIUS UP and other popular database management systems for PCs. 10 refs., 5 figs.

  10. Coalition Modeling in Humanitarian Assistance Operations

    Science.gov (United States)

    2006-03-01

    Generally, the high cost of military operations, reduced military budgets after the cold war, global economies, and the need for international legitimacy… In addition, there has been a global increase in civil/ethnic strife which causes complex emergencies (Lynch: 4). Furthermore, since these emergencies generally… A notional scenario has been scaled back for demonstration purposes. These scenarios are solved using Xpress by Dash Optimization, which is a commercial…

  11. Pseudo-Differential Operators and Integrable Models

    CERN Document Server

    Sedra, M B

    2009-01-01

    The importance of the theory of pseudo-differential operators in the study of nonlinear integrable systems is pointed out. Principally, the algebra $\Xi$ of nonlinear (local and nonlocal) differential operators, acting on the ring of analytic functions $u_{s}(x, t)$, is studied. It is shown in particular that this space splits into several classes of subalgebras $\Sigma_{jr}, j=0,\pm 1, r=\pm 1$, completely specified by the quantum numbers $s$ and $(p,q)$, describing respectively the conformal weight (or spin) and the lowest and highest degrees. The algebra ${\huge \Sigma}_{++}$ of local differential operators (and its dual $\Sigma_{--}$ of pure nonlocal ones) is important in the sense that it gives rise to the explicit form of the second Hamiltonian structure of the KdV system, also called the Gelfand-Dickey Poisson bracket. This is done explicitly in several previous studies; see \cite{4, 5, 14}. Some results concerning the KdV and Boussinesq hierarchies are derived explicitly.

  12. Integration of Dynamic Models in Range Operations

    Science.gov (United States)

    Bardina, Jorge; Thirumalainambi, Rajkumar

    2004-01-01

    This work addresses the various model interactions in real time needed to make an efficient internet-based decision-making tool for Shuttle launch. The decision-making tool depends on the launch commit criteria coupled with physical models. Dynamic interaction between a wide variety of simulation applications and techniques, embedded algorithms, and data visualizations is needed to exploit the full potential of modeling and simulation. This paper also discusses in-depth details of web-based 3-D graphics and applications to range safety. The advantages of this dynamic model integration are secure accessibility and distribution of real-time information to other NASA centers.

  13. Mapping Relational Operations onto Hypergraph Model

    Directory of Open Access Journals (Sweden)

    2010-10-01

    … However, the hypergraph model is non-tabular and thus loses the simplicity of the relational model. In this study, we consider the means to convert a relational model into a hypergraph model in two layers. At the bottom layer, each relational tuple can be considered a star graph in which the primary-key node is surrounded by the non-primary-key attributes. At the top layer, each tuple is a hypernode, and a relation is a set of hypernodes. We present a reference implementation of the relational operators (project, rename, select, inner join, natural join, left join, right join, outer join and Cartesian join) on a hypergraph model. Using a simple example, we demonstrate that a relation and relational operators can be implemented on this hypergraph model.
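The tuple-as-hypernode mapping described above can be sketched in a few lines. This is a minimal illustration under stated assumptions, not the paper's reference implementation: hypernodes are modeled as tuples of attribute/value pairs, and only `select` and `project` are shown; all names are hypothetical.

```python
# Minimal sketch (not the authors' implementation): a relation is a set of
# hypernodes, and each hypernode is a tuple of (attribute, value) pairs.

def select(relation, predicate):
    """Relational selection: keep the hypernodes satisfying the predicate."""
    return {t for t in relation if predicate(dict(t))}

def project(relation, attrs):
    """Relational projection: keep only the named attributes of each hypernode."""
    return {tuple((a, v) for a, v in t if a in attrs) for t in relation}

# Each tuple is a hypernode; the relation is the set of hypernodes.
employees = {
    (("id", 1), ("name", "Ada"), ("dept", "R&D")),
    (("id", 2), ("name", "Max"), ("dept", "Sales")),
}

rd = select(employees, lambda t: t["dept"] == "R&D")
names = project(employees, {"name"})
```

Because a relation is a set, duplicate hypernodes produced by projection collapse automatically, matching set-based relational semantics.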

  14. An operator calculus for surface and volume modeling

    Science.gov (United States)

    Gordon, W. J.

    1984-01-01

    The mathematical techniques which form the foundation for most of the surface and volume modeling techniques used in practice are briefly described. An outline of what may be termed an operator calculus for the approximation and interpolation of functions of more than one independent variable is presented. By considering the linear operators associated with bivariate and multivariate interpolation/approximation schemes, it is shown how they can be compounded by operator multiplication and Boolean addition to obtain a distributive lattice of approximation operators. It is then demonstrated via specific examples how this operator calculus leads to practical techniques for sculptured surface and volume modeling.
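The Boolean addition of operators mentioned above, (P ⊕ Q)f = Pf + Qf − PQf, is what produces, for example, the bilinearly blended Coons patch. The following is a hedged numeric sketch (function names and the test function are illustrative, not from the paper): P lofts in x between the edges x = 0, 1 and Q lofts in y between y = 0, 1.

```python
# Sketch of the Boolean sum (P (+) Q)f = Pf + Qf - PQf of two univariate
# linear interpolation operators on the unit square (a bilinearly blended
# Coons patch). The test function f is purely illustrative.

def boolean_sum(f, x, y):
    Pf = (1 - x) * f(0, y) + x * f(1, y)              # lofting in x
    Qf = (1 - y) * f(x, 0) + y * f(x, 1)              # lofting in y
    PQf = ((1 - x) * (1 - y) * f(0, 0) + x * (1 - y) * f(1, 0)
           + (1 - x) * y * f(0, 1) + x * y * f(1, 1))  # product operator PQ
    return Pf + Qf - PQf

f = lambda x, y: x * y
# The Boolean sum interpolates f on all four boundary edges of the square.
assert abs(boolean_sum(f, 0.3, 0.0) - f(0.3, 0.0)) < 1e-12
assert abs(boolean_sum(f, 1.0, 0.7) - f(1.0, 0.7)) < 1e-12
```

The key property, visible in the assertions, is that the Boolean sum matches f on the entire boundary, whereas the tensor-product operator PQ alone matches it only at the four corners.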

  15. Comparative adaptations in oxidative and glycolytic muscle fibers in a low voluntary wheel running rat model performing three levels of physical activity.

    Science.gov (United States)

    Hyatt, Hayden W; Toedebusch, Ryan G; Ruegsegger, Greg; Mobley, C Brooks; Fox, Carlton D; McGinnis, Graham R; Quindry, John C; Booth, Frank W; Roberts, Michael D; Kavazis, Andreas N

    2015-11-01

    A unique polygenic model of rat physical activity has been recently developed where rats were selected for the trait of low voluntary wheel running. We utilized this model to identify differences in soleus and plantaris muscles of sedentary low voluntary wheel running rats and physically active low voluntary wheel running rats exposed to moderate amounts of treadmill training. Three groups of 28-day-old male Wistar rats were used: (1) rats without a running wheel (SEDENTARY, n = 7), (2) rats housed with a running wheel (WHEEL, n = 7), and (3) rats housed with a running wheel and exercised on the treadmill (5 days/week for 20 min/day at 15.0 m/min) (WHEEL + TREADMILL, n = 7). Animals were euthanized 5 weeks after the start of the experiment and the soleus and plantaris muscles were excised and used for analyses. Increases in skeletal muscle gene expression of peroxisome proliferator-activated receptor gamma coactivator 1 alpha and fibronectin type III domain-containing protein 5 in WHEEL + TREADMILL group were observed. Also, WHEEL + TREADMILL had higher protein levels of superoxide dismutase 2 and decreased levels of oxidative damage. Our data demonstrate that the addition of treadmill training induces beneficial muscular adaptations compared to animals with wheel access alone. Furthermore, our data expand our understanding of differential muscular adaptations in response to exercise in mitochondrial, antioxidant, and metabolic markers.

  16. Organizational effectiveness of coalition operations' headquarters : A theoretical model

    NARCIS (Netherlands)

    Vogler-Bisig, E.; Blais, A.R.; Hof, T.; Tresch, T.S.; Seiler, S.; Yanakiev, Y.

    2012-01-01

    Purpose - This article describes a theoretical model that allows understanding, explaining, and measuring the perceived organizational effectiveness of multinational coalition operations' headquarters. Design/methodology/approach - The proposed model is based on subject matter experts' opinions and…

  17. eWaterCycle: Live Demonstration of an Operational Hyper Resolution Global Hydrological Model

    Science.gov (United States)

    Drost, N.; Sutanudjaja, E.; Hut, R.; van Meersbergen, M.; Donchyts, G.; Bierkens, M. F.; Van De Giesen, N.

    2014-12-01

    The eWaterCycle project works towards running an operational hyper-resolution hydrological global model, assimilating incoming satellite data in real time, and making 14-day predictions of floods and droughts. In our approach, we aim to re-use existing models and techniques as much as possible, and make use of standards and open source software wherever we can. To couple the different parts of our system we use the Basic Model Interface (BMI) as developed in the CSDMS community. The starting point of the eWaterCycle project was the PCR-GLOBWB model built by Utrecht University. The software behind this model has been partially re-engineered to run in a High Performance Computing (HPC) environment, to interface using BMI, and to run on multiple compute nodes in parallel. The final aim is a spatial resolution of 1 km x 1 km (currently 10 km x 10 km). For the data assimilation we make heavy use of the OpenDA system, which allows us to use different data assimilation techniques without the need to implement these from scratch. We have developed a BMI adaptor for OpenDA, allowing OpenDA to use any BMI-compatible model. As a data assimilation technique we currently use an Ensemble Kalman Filter, and are working on a variant of this technique optimized for HPC environments. One of the next steps in the eWaterCycle project is to couple the model with a hydrodynamic model. Our system will start a localized simulation on demand based on triggers in the global model, giving detailed flow and flood forecasting in support of navigation and disaster management. We will show a live demo of our system, including real-time integration of satellite data.
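The Ensemble Kalman Filter mentioned above can be sketched compactly. This is a generic textbook-style analysis step under stated assumptions (a scalar observation, perturbed-observation update), not OpenDA's implementation; all names are hypothetical.

```python
# Minimal Ensemble Kalman Filter analysis step (a sketch, not OpenDA's code):
# ensemble members are columns; one scalar observation updates every member.
import numpy as np

def enkf_update(ensemble, obs, obs_err, H):
    """ensemble: (n_state, n_members); obs: scalar; H: (n_state,) observation map."""
    n = ensemble.shape[1]
    anomalies = ensemble - ensemble.mean(axis=1, keepdims=True)
    hx = H @ ensemble                          # predicted observations, (n_members,)
    hx_anom = hx - hx.mean()
    phh = hx_anom @ hx_anom / (n - 1) + obs_err**2   # innovation variance
    pxh = anomalies @ hx_anom / (n - 1)              # state-observation covariance
    K = pxh / phh                                     # Kalman gain, (n_state,)
    rng = np.random.default_rng(0)
    perturbed = obs + obs_err * rng.standard_normal(n)  # perturbed observations
    return ensemble + np.outer(K, perturbed - hx)

ens = np.random.default_rng(1).normal(5.0, 2.0, size=(3, 200))
H = np.array([1.0, 0.0, 0.0])                  # observe the first state variable
updated = enkf_update(ens, obs=3.0, obs_err=0.1, H=H)
# The observed component of the ensemble is pulled toward the observation.
```

Because the gain is built from ensemble covariances, the same few lines scale conceptually to a distributed hydrological state; the HPC variant mentioned in the abstract mainly changes how the ensemble is laid out across nodes.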

  18. Modeling and optimization of laser cutting operations

    Directory of Open Access Journals (Sweden)

    Gadallah Mohamed Hassan

    2015-01-01

    Full Text Available Laser beam cutting is one important nontraditional machining process. This paper optimizes the parameters of laser beam cutting of stainless steel (316L), considering the effect of input parameters such as power, oxygen pressure, frequency and cutting speed. A statistical design of experiments is carried out at three different levels, and process responses such as average kerf taper (Ta), surface roughness (Ra) and heat affected zones are measured accordingly. A response surface model is developed as a function of the process parameters. Responses predicted by the models (as per Taguchi's L27 OA) are employed to search for an optimal combination to achieve the desired process yield. Response Surface Models (RSMs) are developed for mean responses, S/N ratio, and standard deviation of responses. Optimization models are formulated as single-objective optimization problems subject to process constraints. Models are formulated based on Analysis of Variance (ANOVA) and optimized in a Matlab-developed environment. Optimum solutions are compared with Taguchi methodology results. As such, practicing engineers have means to model, analyze and optimize nontraditional machining processes. Validation experiments are carried out to verify the developed models, with success.
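A second-order response surface of the kind described above can be fit by ordinary least squares. The sketch below uses synthetic data on a three-level factorial design (the coefficients and factor names are illustrative assumptions, not the paper's data or Matlab code).

```python
# Sketch of a second-order response surface fit (illustrative, not the paper's
# data): y = b0 + b1*x1 + b2*x2 + b11*x1^2 + b22*x2^2 + b12*x1*x2,
# e.g. relating kerf taper to coded power (x1) and cutting speed (x2).
import numpy as np

def fit_rsm(x1, x2, y):
    """Ordinary least squares fit of a full quadratic model in two factors."""
    X = np.column_stack([np.ones_like(x1), x1, x2, x1**2, x2**2, x1 * x2])
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coef

# Synthetic three-level design with coded levels -1, 0, +1 (9 runs).
levels = np.array([-1.0, 0.0, 1.0])
x1, x2 = np.meshgrid(levels, levels)
x1, x2 = x1.ravel(), x2.ravel()
y = 2.0 + 0.5 * x1 - 0.3 * x2 + 0.8 * x1**2    # known quadratic response
coef = fit_rsm(x1, x2, y)
# coef recovers the generating coefficients [2.0, 0.5, -0.3, 0.8, 0.0, 0.0].
```

Once fitted, the model can be handed to any constrained optimizer (as the paper does in Matlab) to search the coded factor space for the desired response.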

  19. Competence-Oriented Decision Model for Optimizing the Operation of a Cascading Hydroelectric Power Reservoir

    Directory of Open Access Journals (Sweden)

    Tilahun Derib Asfaw

    2013-07-01

    Full Text Available The operation of the four Perak cascading reservoirs, namely Temenggor, Bersia, Kenering and Chenderoh, was analyzed using a newly developed genetic algorithm model. The reservoirs are located in the state of Perak of Peninsular Malaysia and are used for hydroelectric power generation and flood mitigation. The hydroelectric potential of the cascading scheme is 578 MW; however, the actual annual average generation was 228 MW, about 39% of the potential. The research aimed to improve the annual average hydroelectric power generation. The fitness value was used to select the optimal option from a test of eight model-run options. After repeated runs of the optimal option, the best model parameters were found: optimality was achieved at a population size of 150, a crossover probability of 0.75 and a generation number of 60. The operation of the GA model produced an additional 12.17 MW per day. This additional power is obtained with the same total annual volume of release and a similar natural inflow pattern, and can be worth over 22 million Ringgit Malaysia per year. In addition, it plays a significant role in meeting the growing energy needs of the country.
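The three tuned GA parameters reported above (population size 150, crossover probability 0.75, 60 generations) map directly onto a standard genetic-algorithm loop. The toy below is a deliberately tiny sketch, not the paper's reservoir model: it maximizes a hypothetical one-dimensional "power output" over a release fraction in [0, 1].

```python
# Tiny genetic-algorithm sketch (illustrative, not the paper's model): tournament
# selection, arithmetic crossover, and Gaussian mutation on a scalar decision
# variable, using the parameter values the study found optimal.
import random

def run_ga(fitness, pop_size=150, p_cross=0.75, generations=60, seed=0):
    rng = random.Random(seed)
    pop = [rng.random() for _ in range(pop_size)]
    for _ in range(generations):
        # tournament selection: best of three random members
        parents = [max(rng.sample(pop, 3), key=fitness) for _ in range(pop_size)]
        children = []
        for a, b in zip(parents[::2], parents[1::2]):
            if rng.random() < p_cross:          # arithmetic crossover
                w = rng.random()
                a, b = w * a + (1 - w) * b, w * b + (1 - w) * a
            # Gaussian mutation, clipped to the feasible range [0, 1]
            children += [min(1.0, max(0.0, c + rng.gauss(0, 0.02))) for c in (a, b)]
        pop = children
    return max(pop, key=fitness)

# Hypothetical fitness with an interior optimum at a release fraction of 0.7.
best = run_ga(lambda x: -(x - 0.7) ** 2)
```

In the actual study the chromosome would encode hourly releases for four reservoirs and the fitness would be total generation subject to water-balance constraints; the loop structure is the same.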

  20. Deterministic operations research models and methods in linear optimization

    CERN Document Server

    Rader, David J

    2013-01-01

    Uniquely blends mathematical theory and algorithm design for understanding and modeling real-world problems. Optimization modeling and algorithms are key components of problem-solving across various fields of research, from operations research and mathematics to computer science and engineering. Addressing the importance of the algorithm design process, Deterministic Operations Research focuses on the design of solution methods for both continuous and discrete linear optimization problems. The result is a clear-cut resource for understanding three cornerstones of deterministic operations research…

  1. Dimension-seven operators in the standard model with right handed neutrinos

    Science.gov (United States)

    Bhattacharya, Subhaditya; Wudka, José

    2016-09-01

    In this article, we consider the standard model extended by a number of (light) right-handed neutrinos, and assume the presence of some heavy physics that cannot be directly produced, but can be probed by its low-energy effective interactions. Within this scenario, we obtain all the gauge-invariant dimension-7 effective operators, and determine whether each of the operators can be generated at tree level by the heavy physics, or whether it is necessarily loop generated. We then use the tree-generated operators, including those containing right-handed neutrinos, to put limits on the scale of new physics Λ using low-energy measurements. We also study the production of same-sign dileptons at the Large Hadron Collider and determine the constraints on the heavy physics that can be derived from existing data, as well as the reach in probing Λ expected from future runs of this collider.

  2. Nonlinear Model Identification from Operating Records.

    Science.gov (United States)

    1980-11-01

    … Submitted July 1979 to Proc. IEEE. [13] Wellstead, P., "Model Order Identification Using an Auxiliary System," Proc. IEEE, vol. 123, no. 12, December … C and Systems, Nov. 1979. …

  3. NOAA Operational Model Archive Distribution System (NOMADS): High Availability Applications for Reliable Real Time Access to Operational Model Data

    Science.gov (United States)

    Alpert, J. C.; Wang, J.

    2009-12-01

    To reduce the impact of natural hazards and environmental changes, the National Centers for Environmental Prediction (NCEP) provide first-alert environmental prediction services, act as a preferred partner for those services, and represent a critical national resource to operational and research communities affected by climate, weather and water. NOMADS is now delivering high availability services as part of NOAA's official real time data dissemination at its Web Operations Center (WOC) server. The WOC is a web service used by organizational units in and outside NOAA, and acts as a data repository where public information can be posted to a secure and scalable content server. A goal is to foster collaborations among the research and education communities, value added retailers, and public access for science and development efforts aimed at advancing modeling and GEO-related tasks. The user (client) executes what is efficient to execute on the client, and the server efficiently provides format independent access services. Client applications can execute on the server, if desired, but the same program can be executed on the client side with no loss of efficiency. In this way the paradigm lends itself to aggregation servers that act as servers of servers: listing and searching catalogs of holdings, data mining, and updating information from the metadata descriptions that enable collections of data in disparate places to be simultaneously accessed, with results processed on servers and clients to produce a needed answer. The services used to access the operational model data output are the Open-source Project for a Network Data Access Protocol (OPeNDAP), implemented with the Grid Analysis and Display System (GrADS) Data Server (GDS), and applications for slicing, dicing and area sub-setting the large matrix of real time model data holdings. This approach ensures an efficient use of computer resources because users transmit/receive only the data necessary for their tasks, including…

  4. Theoretical model of smooth disk operation

    Energy Technology Data Exchange (ETDEWEB)

    Krauze, K.

    1985-02-01

    A theoretical model is analyzed for coal cutting by disk cutters mounted on a helical cutting drum of a shearer loader. The model is based on the assumption that failure of coal cohesion is caused by crushing and separation of coal grains from a coal face, and that coal cutting resistance depends on its contact strength as well as cutting depth and cutting angle. 1 reference.

  5. On Sino-foreign Cooperative Running Mode and Operating Mechanism in Higher Vocational Colleges

    Institute of Scientific and Technical Information of China (English)

    王进

    2012-01-01

    With the globalization of the economy, higher education has also begun to globalize, and Sino-foreign cooperative running of schools is gradually rising in higher vocational colleges. Taking the cooperative running practice between Dalian Vocational Technical College and Holmesglen Institute in Australia as an example, and drawing on the cooperative running experience of other higher vocational colleges, this paper analyzes the existing problems of Sino-foreign cooperative running in Chinese higher vocational education at present, and puts forward an effective mode, an operating mechanism, and suggestions for its development.

  6. Cardiac modeling using active appearance models and morphological operators

    Science.gov (United States)

    Pfeifer, Bernhard; Hanser, Friedrich; Seger, Michael; Hintermueller, Christoph; Modre-Osprian, Robert; Fischer, Gerald; Muehlthaler, Hannes; Trieb, Thomas; Tilg, Bernhard

    2005-04-01

    We present an approach for fast reconstruction of the cardiac myocardium and blood masses of a patient's heart from morphological image data, acquired by either MRI or CT, in order to numerically estimate the spread of electrical excitation in the patient's atria and ventricles. The approach can be divided into two main steps. During the first step, the ventricular and atrial blood masses are extracted employing Active Appearance Models (AAM). The left and right ventricular blood masses are segmented automatically after providing the positions of the apex cordis and the base of the heart. Because of the complex geometry of the atria, the segmentation of the atrial blood masses requires more information than that of the ventricular blood masses. For this reason, we divided the left and right atrium into three divisions of appearance, which proved sufficient for the 2D AAM model to extract the target blood masses. The base of the heart, the left upper and left lower pulmonary vein from its first up to its last appearance in the image stack, and the right upper and lower pulmonary vein have to be marked. After separating the volume data into these divisions, the 2D AAM search procedure extracts the blood masses, which are the main input for the second and last step in the myocardium extraction pipeline. This step uses morphologically-based operations to extract the ventricular and atrial myocardium, either directly by detecting the myocardium in the volume block or by reconstructing the myocardium using mean model information in case the algorithm fails to detect it.

  7. Modeling of useful operating life of radioelectronics

    Directory of Open Access Journals (Sweden)

    Nevlyudova V. V.

    2014-08-01

    Full Text Available The author considers the possibility of using the laws of nonequilibrium thermodynamics to determine the relationship between controlled parameters of radioelectronics and the displayed environment, as well as the construction of a deterministic model of the processes of manufacturing defects development. This possibility is based on the observed patterns of change in the amount of content area, in accordance with the principles of behavior of the thermodynamic parameters characterizing the state of the real environment (entropy, the quantity of heat, etc.). The equation for the evolution of the technical state of radioelectronics is based on the deterministic kinetic model of the processes occurring in the multi-component environment, and on the observation model, which takes into account the errors caused by external influences, instability and uncertainty.

  8. Modeling Optimal Scheduling for Pumping System to Minimize Operation Cost and Enhance Operation Reliability

    Directory of Open Access Journals (Sweden)

    Yin Luo

    2012-01-01

    Full Text Available Traditional pump scheduling models neglect operation reliability, which directly relates to the unscheduled maintenance cost and the wear cost incurred during operation. Based on the assumption that vibration directly reflects operation reliability and the degree of wear, operation reliability can be expressed as the normalized vibration level. The characteristic of vibration with respect to the operating point was studied, and it can be concluded that an idealized flow-versus-vibration plot has a distinct bathtub shape. There is a narrow sweet spot (80 to 100 percent of BEP) of low vibration levels in this shape, and away from resonance the vibration also scales with the square of the rotation speed. Operation reliability is then modeled as a function of the capacity and rotation speed of the pump, and this function is added to the traditional model to form the new one. In contrast with the traditional method, the results show that the new model corrects the schedule produced by the traditional one, making the pump operate at low vibration, so that operation reliability increases and maintenance cost decreases.
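The reliability term described above can be sketched directly. The functional forms below are illustrative assumptions, not the authors' model: a quadratic "bathtub" around the best-efficiency point (BEP) stands in for the measured vibration curve, and reliability is one minus the normalized vibration level.

```python
# Hedged sketch of the vibration-based reliability term (functional forms are
# assumptions for illustration, not the paper's fitted model).

def vibration(q_frac, n_frac):
    """q_frac: flow as a fraction of BEP flow; n_frac: speed as a fraction of rated.
    Bathtub-shaped in flow, and proportional to speed squared away from resonance."""
    base = 1.0 + 4.0 * (q_frac - 0.9) ** 2   # minimum inside the 80-100% BEP band
    return base * n_frac ** 2

def reliability(q_frac, n_frac, v_max=3.0):
    """Operation reliability as one minus the normalized vibration level."""
    return max(0.0, 1.0 - vibration(q_frac, n_frac) / v_max)

# Operating inside the sweet spot is rewarded; far off-BEP operation is penalized.
r_sweet = reliability(0.9, 1.0)
r_offbep = reliability(0.4, 1.0)
```

In the scheduling model, a term like `reliability(...)` would enter the objective alongside energy cost, so the optimizer trades pumping cost against wear.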

  9. Sediment management of run-of-river hydroelectric power project in the Himalayan region using hydraulic model studies

    Indian Academy of Sciences (India)

    NEENA ISAAC; T I ELDHO

    2017-07-01

    Storage capacity of hydropower reservoirs is lost due to sediment deposition. The problem is severe in projects located on rivers with high sediment concentration during the flood season. Removing the sediment deposition hydraulically by drawdown flushing is one of the most effective methods for restoring the storage capacity. Effectiveness of the flushing depends on various factors, most of which are site-specific. Physical/mathematical models can be effectively used to simulate the flushing operation, and based on the results of the simulation, the layout design and operation schedule of such projects can be modified for better sediment management. This paper presents the drawdown flushing studies of the reservoir of a Himalayan river hydroelectric project called Kotlibhel in Uttarakhand, India. For the hydraulic model studies, a 1:100 scale geometrically similar model was constructed. Simulation studies in the model indicated that drawdown flushing for a duration of 12 h with a discharge of 500 m³/s or more is effective in removing the annual sediment deposition in the reservoir. The model studies show that the sedimentation problem of the reservoir can be effectively managed through hydraulic flushing.

  10. Shuttle operations simulation model programmers'/users' manual

    Science.gov (United States)

    Porter, D. G.

    1972-01-01

    The prospective user of the shuttle operations simulation (SOS) model is given sufficient information to enable him to perform simulation studies of the space shuttle launch-to-launch operations cycle. The procedures used for modifying the SOS model to meet user requirements are described. The various control card sequences required to execute the SOS model are given. The report is written for users with varying computer simulation experience. A description of the components of the SOS model is included that presents both an explanation of the logic involved in the simulation of the shuttle operations cycle and a description of the routines used to support the actual simulation.

  11. Operational Plan Ontology Model for Interconnection and Interoperability

    Science.gov (United States)

    Long, F.; Sun, Y. K.; Shi, H. Q.

    2017-03-01

    To address the bottleneck that assistant decision-making systems face in processing operational plan data and information, this paper starts from an analysis of the problems of traditional representations and the technical advantages of ontology. It then defines the elements of the operational plan ontology model and determines the basis of its construction, and builds a semi-knowledge-level operational plan ontology model. Finally, it examines the expression of the operational plan based on the ontology model and its usage by application software. The work thus has theoretical significance and application value for improving the interconnection and interoperability of operational plans among assistant decision-making systems.

  12. The operator algebra of orbifold models

    Science.gov (United States)

    Dijkgraaf, Robbert; Vafa, Cumrun; Verlinde, Erik; Verlinde, Herman

    1989-09-01

    We analyze the chiral properties of (orbifold) conformal field theories which are obtained from a given conformal field theory by modding out by a finite symmetry group. For a class of orbifolds, we derive the fusion rules by studying the modular transformation properties of the one-loop characters. The results are illustrated with explicit calculations of toroidal and c=1 models.

  13. Climate sensitivity runs and regional hydrologic modeling for predicting the response of the greater Florida Everglades ecosystem to climate change.

    Science.gov (United States)

    Obeysekera, Jayantha; Barnes, Jenifer; Nungesser, Martha

    2015-04-01

    It is important to understand the vulnerability of the water management system in south Florida and to determine the resilience and robustness of greater Everglades restoration plans under future climate change. The current climate models, at both global and regional scales, are not ready to deliver specific climatic datasets for water resources investigations involving future plans and therefore a scenario based approach was adopted for this first study in restoration planning. We focused on the general implications of potential changes in future temperature and associated changes in evapotranspiration, precipitation, and sea levels at the regional boundary. From these, we developed a set of six climate and sea level scenarios, used them to simulate the hydrologic response of the greater Everglades region including agricultural, urban, and natural areas, and compared the results to those from a base run of current conditions. The scenarios included a 1.5 °C increase in temperature, ±10 % change in precipitation, and a 0.46 m (1.5 feet) increase in sea level for the 50-year planning horizon. The results suggested that, depending on the rainfall and temperature scenario, there would be significant changes in water budgets, ecosystem performance, and in water supply demands met. The increased sea level scenarios also show that the ground water levels would increase significantly with associated implications for flood protection in the urbanized areas of southeastern Florida.

  14. Measuring Regional Spillovers in Long- and Short-Run Models of Total Factor Productivity, Trade, and FDI

    DEFF Research Database (Denmark)

    Mitze, Timo Friedel

    2014-01-01

    This article applies the novel concept of global panel cointegration to analyze the role played by trade and foreign direct investment (FDI) activity in driving regional total factor productivity (TFP). Using West German state-level data for the period 1976–2008, the approach allows us to identify the magnitude of direct trade and FDI effects as well as spatial spillovers from these variables. The author finds that the inclusion of spatial lags significantly improves the fit of the empirical model and allows us to strongly reject the null of no cointegration among the variables in the full spatial specification. For the long-run cointegration equation, the empirical results hint at export- and FDI-led growth. Additionally, outward FDI activity shows positive spatial spillover effects among German regions, while the spatial patterns of import and inward FDI activity indicate…

  15. Background Error Correlation Modeling with Diffusion Operators

    Science.gov (United States)

    2013-01-01

    …functions defined on the orthogonal curvilinear grid of the Navy Coastal Ocean Model (NCOM) [28] set up in Monterey Bay (Fig. 4). … Starting from H2 = [1 1; 1 −1], Hadamard matrices (HMs) of order N = 2^n, n = 1, 2, …, can be easily constructed. HMs with N = 12, 20 were constructed "manually" more than a century…

  16. Towards Run-time Assurance of Advanced Propulsion Algorithms

    Science.gov (United States)

    Wong, Edmond; Schierman, John D.; Schlapkohl, Thomas; Chicatelli, Amy

    2014-01-01

    This paper covers the motivation and rationale for investigating the application of run-time assurance methods as a potential means of providing safety assurance for advanced propulsion control systems. Certification is becoming increasingly infeasible for such systems using current verification practices. Run-time assurance systems hold the promise of certifying these advanced systems by continuously monitoring the state of the feedback system during operation and reverting to a simpler, certified system if anomalous behavior is detected. The discussion will also cover initial efforts underway to apply a run-time assurance framework to NASA's model-based engine control approach. Preliminary experimental results are presented and discussed.
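The monitor-and-revert pattern described above can be sketched in a few lines. This is a generic illustration of run-time assurance under stated assumptions, not NASA's framework: the controllers are toy proportional laws, and the safety check is a simple command-envelope bound; all names are hypothetical.

```python
# Minimal run-time assurance sketch (illustrative, not NASA's framework):
# a monitor watches the advanced controller's output and reverts to a
# simpler, certified fallback when the command leaves a verified envelope.

def certified_controller(error):
    return 0.5 * error                      # simple, certified control law

def advanced_controller(error):
    return 2.5 * error                      # higher-performance, uncertified law

def assured_command(error, safe_limit=1.0):
    """Return the command to apply and which controller produced it."""
    cmd = advanced_controller(error)
    if abs(cmd) > safe_limit:               # anomalous behavior detected: revert
        return certified_controller(error), "fallback"
    return cmd, "advanced"

cmd, mode = assured_command(0.2)            # small error: advanced law stays active
cmd2, mode2 = assured_command(1.0)          # large command trips the monitor
```

In a real engine-control setting the envelope check would be a certified monitor over the full feedback state rather than a single command bound, but the switching logic is the essence of the approach.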

  17. Motivation dimensions for running a marathon: A new model emerging from the Motivation of Marathon Scale (MOMS)

    Directory of Open Access Journals (Sweden)

    Sima Zach

    2017-09-01

    Conclusion: This study provides a sound and solid framework for studying motivation for physically demanding tasks such as marathon runs, and needs to be similarly applied and tested in studies incorporating physical tasks which vary in mental demands.

  18. Measuring Regional Spillovers in Long- and Short-Run Models of Total Factor Productivity, Trade, and FDI

    DEFF Research Database (Denmark)

    Mitze, Timo Friedel

    2014-01-01

    …throughout the analysis. For the long-run cointegration equation, the empirical results hint at export- and FDI-led growth. Additionally, outward FDI activity shows positive spatial spillover effects among German regions, while the spatial patterns of import and inward FDI activity indicate substitution effects of interregional input–output linkages in favor of international ones over the sample period 1976–2008. In the short run, TFP growth is predominantly affected by changes in exports and in inward and outward FDI stocks, where the latter variable also provokes positive spillovers. Finally, summing over the four variables to obtain direct and indirect net effects of internationalization activity, the author finds that the direct effect is always positive, while the indirect net effect is positive in the short run but slightly negative in the long-run equation.

  19. Wheel running from a juvenile age delays onset of specific motor deficits but does not alter protein aggregate density in a mouse model of Huntington's disease

    Directory of Open Access Journals (Sweden)

    Spires Tara L

    2008-04-01

    Background: Huntington's disease (HD) is a neurodegenerative disorder predominantly affecting the cerebral cortex and striatum. Transgenic mice (R6/1 line), expressing a CAG repeat encoding an expanded polyglutamine tract in the N-terminus of the huntingtin protein, closely model HD. We have previously shown that environmental enrichment of these HD mice delays the onset of motor deficits. Furthermore, wheel running initiated in adulthood ameliorates the rear-paw clasping motor sign, but not an accelerating rotarod deficit. Results: We have now examined the effects of enhanced physical activity via wheel running, commenced at a juvenile age (4 weeks), with respect to the onset of various behavioral deficits and their neuropathological correlates in R6/1 HD mice. HD mice housed post-weaning with running wheels only, to enhance voluntary physical exercise, have delayed onset of a motor co-ordination deficit on the static horizontal rod, as well as rear-paw clasping, although the accelerating rotarod deficit remains unaffected. Both wheel running and environmental enrichment rescued HD-induced abnormal habituation of locomotor activity and exploratory behavior in the open field. We have found that neither environmental enrichment nor wheel running ameliorates the shrinkage of the striatum and anterior cingulate cortex (ACC) in HD mice, nor the overall decrease in brain weight, measured at 9 months of age. At this age, the density of ubiquitinated protein aggregates in the striatum and ACC is also not significantly ameliorated by environmental enrichment or wheel running. Conclusion: These results indicate that enhanced voluntary physical activity, commenced at an early presymptomatic stage, contributes to the positive effects of environmental enrichment. However, sensory and cognitive stimulation, as well as motor stimulation not associated with running, may constitute major components of the therapeutic benefits associated with enrichment.

  20. Determining the Marker Configuration and Modeling Technique to Optimize the Biomechanical Analysis of Running-Specific Prostheses

    Science.gov (United States)

    2012-03-01

    … activity level may be insufficient guidelines for prescribing a stiffness category. A stiffer forefoot, wider c-curve, and thinner lay-up resulted … (cites Nolan L., "Carbon fibre prostheses and running in amputees: A review," Foot and Ankle Surgery 2008;14:125-9, and Gailey R., "Optimizing prosthetic running performance of the transtibial amputee.")

  1. Launch and Landing Effects Ground Operations (LLEGO) Model

    Science.gov (United States)

    2008-01-01

    LLEGO is a model for understanding recurring launch and landing operations costs at Kennedy Space Center for human space flight. Launch and landing operations are often referred to as ground processing, or ground operations. Currently, this function is specific to the ground operations for the Space Shuttle Space Transportation System within the Space Shuttle Program. The Constellation system to follow the Space Shuttle consists of the crewed Orion spacecraft atop an Ares I launch vehicle and the uncrewed Ares V cargo launch vehicle. The Constellation flight and ground systems build upon many elements of the existing Shuttle flight and ground hardware, as well as upon existing organizations and processes. In turn, the LLEGO model builds upon past ground operations research, modeling, data, and experience in estimating for future programs. Rather than simply providing estimates, the LLEGO model's main purpose is to improve expenses by relating complex relationships among functions (ground operations contractor, subcontractors, civil service technical, center management, operations, etc.) to tangible drivers. Drivers include flight system complexity and reliability, as well as operations and supply chain management processes and technology. Together these factors define the operability and potential improvements for any future system, from the most direct to the least direct expenses.

  2. Modeling and Simulation of Shuttle Launch and Range Operations

    Science.gov (United States)

    Bardina, Jorge; Thirumalainambi, Rajkumar

    2004-01-01

    The simulation and modeling test bed is based on a mockup of a space flight operations control room suitable for experimenting with the physical, procedural, software, hardware, and psychological aspects of space flight operations. The test bed includes a weather expert system to advise on the effect of weather on launch operations. It also simulates a toxic gas dispersion model, the associated human health risk, and a debris dispersion model with 3D visualization. Since all modeling and simulation is internet-based, it can reduce the cost of launch and range safety operations by enabling extensive research before a particular launch. Each model has an independent decision-making module to derive the best decision for launch.

  3. Adapting NEMO for use as the UK operational storm surge forecasting model

    Science.gov (United States)

    Furner, Rachel; Williams, Jane; Horsburgh, Kevin; Saulter, Andrew

    2016-04-01

    The United Kingdom is an area vulnerable to damage due to storm surges, particularly the East Coast, which suffered losses estimated at over £1 billion during the North Sea surge event of 5-6 December 2013. Accurate forecasting of storm surge events for this region is crucial to enable government agencies to assess the risk of overtopping of coastal defences so they can respond appropriately, minimising risk to life and infrastructure. There has been an operational storm surge forecast service for this region since 1978, using a numerical model developed by the National Oceanography Centre (NOC) and run at the UK Met Office. This is also implemented as part of an ensemble prediction system, using perturbed atmospheric forcing to produce an ensemble surge forecast. In order to ensure efficient use of future supercomputer developments, and to create synergy with existing operational coastal ocean models, the Met Office and NOC have begun a joint project transitioning the storm surge forecast system from the current CS3X code base to a configuration based on the Nucleus for European Modelling of the Ocean (NEMO). This work involves adapting NEMO to add functionality, such as allowing the drying out of ocean cells, and making changes that allow NEMO to run efficiently as a two-dimensional, barotropic model. As the ensemble surge forecast system is run with 12 members 4 times a day, computational efficiency is of high importance. Upon completion, this project will enable interesting scientific comparisons to be made between a NEMO-based surge model and the full three-dimensional baroclinic NEMO-based models currently run within the Met Office, facilitating assessment of the impact of baroclinic processes and vertical resolution on sea surface height forecasts. Moving to a NEMO code base will also allow many future developments to be more easily used within the storm surge model, due to the wide range of options which currently exist within NEMO or are planned for

  4. Designing visual displays and system models for safe reactor operations

    Energy Technology Data Exchange (ETDEWEB)

    Brown-VanHoozer, S.A.

    1995-12-31

    The material presented in this paper is based on two studies involving the design of visual displays and the user's prospective model of a system. The studies involve a methodology known as Neuro-Linguistic Programming and its use in expanding design choices from the operator's perspective image. The contents of this paper focus on the studies and how they are applicable to the safety of operating reactors.

  5. On the Spectrum of a Model Operator in Fock Space

    CERN Document Server

    Rasulov, Tulkin H; Hasanov, Mahir

    2008-01-01

    A model operator $H$ associated with a system describing four particles in interaction, without conservation of the number of particles, is considered. We describe the essential spectrum of $H$ by the spectrum of the channel operators and prove the Hunziker-van Winter-Zhislin (HWZ) theorem for the operator $H$. We also give some variational principles for boundaries of the essential spectrum and interior eigenvalues.

  6. Modeling the short-run effect of fiscal stimuli on GDP : A new semi-closed input-output model

    NARCIS (Netherlands)

    Chen, Quanrun; Dietzenbacher, Erik; Los, Bart; Yang, Cuihong

    2016-01-01

    In this study, we propose a new semi-closed input-output model, which reconciles input-output analysis with modern consumption theories. It can simulate changes in household consumption behavior when exogenous stimulus policies lead to higher disposable income levels. It is useful for quantifying
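
The input-output machinery this abstract builds on can be made concrete with the standard open Leontief quantity model, x = (I - A)^(-1) f, which semi-closed models extend by (partially) endogenizing household consumption. The coefficient matrix and final-demand vector below are illustrative assumptions, not data from the paper.

```python
# Minimal open Leontief quantity model: gross outputs x needed to satisfy
# final demand f, given technical-coefficient matrix A. Coefficients are
# hypothetical, chosen only to illustrate the mechanics.

def solve_2x2(m, b):
    """Solve m @ x = b for a 2x2 system by Cramer's rule."""
    det = m[0][0] * m[1][1] - m[0][1] * m[1][0]
    return [
        (b[0] * m[1][1] - m[0][1] * b[1]) / det,
        (m[0][0] * b[1] - b[0] * m[1][0]) / det,
    ]

A = [[0.2, 0.3],   # hypothetical technical coefficients
     [0.1, 0.4]]
f = [100.0, 50.0]  # hypothetical final demand (e.g. a fiscal stimulus)

# Leontief system: (I - A) x = f
I_minus_A = [[1 - A[0][0], -A[0][1]],
             [-A[1][0], 1 - A[1][1]]]
x = solve_2x2(I_minus_A, f)  # gross outputs
print([round(v, 2) for v in x])  # [166.67, 111.11]
```

A semi-closed variant would move (part of) household consumption out of f and into an extra row and column of A, which is exactly where the consumption-behavior modeling in the abstract enters.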

  7. Hybrid ABC Optimized MARS-Based Modeling of the Milling Tool Wear from Milling Run Experimental Data

    Directory of Open Access Journals (Sweden)

    Paulino José García Nieto

    2016-01-01

    Milling cutters are important cutting tools used in milling machines to perform milling operations, which are prone to wear and subsequent failure. In this paper, a practical new hybrid model to predict the milling tool wear in a regular cut, as well as entry cut and exit cut, of a milling tool is proposed. The model is based on the optimization tool termed artificial bee colony (ABC) in combination with the multivariate adaptive regression splines (MARS) technique. This optimization mechanism involves the parameter setting in the MARS training procedure, which significantly influences the regression accuracy. Therefore, an ABC–MARS-based model was successfully used here to predict the milling tool flank wear (output variable) as a function of the following input variables: the time duration of the experiment, depth of cut, feed, type of material, etc. Regression with optimal hyperparameters was performed and a determination coefficient of 0.94 was obtained. The ABC–MARS-based model's goodness of fit to experimental data confirmed the good performance of this model. This new model also allowed us to ascertain the most influential parameters on the milling tool flank wear with a view to proposing improvements to the milling machine. Finally, the conclusions of this study are presented.
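
For readers unfamiliar with MARS, its building blocks are paired "hinge" basis functions h(x) = max(0, x - t) around a knot t. A minimal hand-written sketch follows; the knot and coefficients here are invented for illustration, whereas the paper selects them through its ABC-optimized training procedure.

```python
# MARS builds piecewise-linear fits from hinge basis functions. This toy
# predictor uses one knot and hand-picked coefficients (not the paper's).

def hinge(x, t):
    """Hinge basis function max(0, x - t)."""
    return max(0.0, x - t)

def mars_like(x, knot=2.0, b0=1.0, b1=0.5, b2=2.0):
    """y = b0 + b1*max(0, x - knot) + b2*max(0, knot - x)."""
    return b0 + b1 * hinge(x, knot) + b2 * hinge(knot, x)

print(mars_like(2.0))  # 1.0 at the knot
print(mars_like(4.0))  # 2.0 (right branch: 1.0 + 0.5*2)
print(mars_like(0.0))  # 5.0 (left branch: 1.0 + 2.0*2)
```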

  8. Simulation Modeling of a Facility Layout in Operations Management Classes

    Science.gov (United States)

    Yazici, Hulya Julie

    2006-01-01

    Teaching quantitative courses can be challenging. Similarly, layout modeling and lean production concepts can be difficult to grasp in an introductory OM (operations management) class. This article describes a simulation model developed in PROMODEL to facilitate the learning of layout modeling and lean manufacturing. Simulation allows for the…

  9. Cognitive-Operative Model of Intelligent Learning Systems Behavior

    Science.gov (United States)

    Laureano-Cruces, Ana Lilia; Ramirez-Rodriguez, Javier; Mora-Torres, Martha; de Arriaga, Fernando; Escarela-Perez, Rafael

    2010-01-01

    In this paper behavior during the teaching-learning process is modeled by means of a fuzzy cognitive map. The elements used to model such behavior are part of a generic didactic model, which emphasizes the use of cognitive and operative strategies as part of the student-tutor interaction. Examples of possible initial scenarios for the…
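
The fuzzy cognitive map formalism mentioned above can be sketched in a few lines: concepts hold activation levels in [0, 1] and are updated through a weighted adjacency matrix squashed by a sigmoid. The three concepts and all weights below are hypothetical, not taken from the paper.

```python
import math

# Minimal fuzzy cognitive map (FCM) update, iterated to a fixed point.
# Concepts, weights, and the initial scenario are illustrative assumptions.

def sigmoid(z, lam=1.0):
    return 1.0 / (1.0 + math.exp(-lam * z))

def fcm_step(state, w):
    """One synchronous FCM update: a_i <- f(sum_j w[j][i] * a_j)."""
    n = len(state)
    return [sigmoid(sum(w[j][i] * state[j] for j in range(n)))
            for i in range(n)]

# Hypothetical concepts: motivation, strategy use, performance
W = [[0.0, 0.6, 0.4],   # motivation reinforces strategy use and performance
     [0.0, 0.0, 0.7],   # strategy use drives performance
     [0.5, 0.0, 0.0]]   # performance feeds back into motivation

state = [0.9, 0.2, 0.1]   # initial activation scenario
for _ in range(20):       # iterate toward a fixed point
    state = fcm_step(state, W)
print([round(a, 3) for a in state])
```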

  12. Run-to-run variations, asymmetric pulses, and long time-scale transient phenomena in dielectric-barrier atmospheric pressure glow discharges

    Energy Technology Data Exchange (ETDEWEB)

    Shin, Jichul; Raja, Laxminarayan L [Department of Aerospace Engineering and Engineering Mechanics, University of Texas at Austin, Austin, TX 78712 (United States)

    2007-05-21

    The dielectric-barrier (DB) discharge is an important approach to generate uniform non-equilibrium atmospheric-pressure glow discharges. We report run-to-run variations, asymmetric pulse formation and long time-scale transient phenomena in these discharges. For similar DB discharge geometric and operating conditions, we observe significant run-to-run variations as manifested in the different voltage-current waveforms at the start of each new run. These run-to-run variations are also accompanied by asymmetric pulses at the start of each run. The variations are observed to drift to a repeatable true steady-state condition on time scales of order tens of minutes to hours. Asymmetric pulse waveforms drift to a symmetric pulse waveform at the true steady state. We explore reasons for these phenomena and rule out thermal drift during a discharge run and gas-phase impurity buildup as potential causes. The most plausible explanation appears to be variations in the surface characteristics of the DBs between two consecutive runs owing to varying inter-run environmental exposure and the conditioning of the dielectric surface during a run owing to plasma-surface interactions. We speculate that the dielectric surface state affects the secondary electron emission coefficient of the surface which in turn is manifested in the discharge properties. A zero-dimensional model of the discharge is used to explore the effect of secondary electron emission.

  13. Quantitative modelling in design and operation of food supply systems

    NARCIS (Netherlands)

    Beek, van P.

    2004-01-01

    During the last two decades food supply systems have attracted interest not only from food technologists but also from the field of Operations Research and Management Science. Operations Research (OR) is concerned with quantitative modelling and can be used to get insight into the optimal configuration and operation of food supply systems.

  14. A Running State Analysis Model for Humanoid Robot

    Institute of Scientific and Technical Information of China (English)

    王险峰; 洪炳镕; 朴松昊; 钟秋波

    2011-01-01

    In this paper, according to the dynamics of humanoid robot running, a probability model for running state analysis of a humanoid robot is proposed based on the feedback of a virtual acceleration sensor. Inertial force affects the running state of the humanoid robot during the course of running, and the value of acceleration can express this inertial force. Dynamic feedback can therefore be obtained from a virtual acceleration sensor built into the humanoid robot to illustrate its running state, and this feedback can be analysed by using the wavelet transform and the fast Fourier transform. The probability model of running state analysis is formulated from energy eigenvalues extracted in the frequency domain. Using the Mahalanobis distance as a criterion for stable running, the model can express the humanoid robot's running state quantitatively. Simulations were conducted on a humanoid robot model built with ADAMS, with the virtual acceleration sensor placed at the robot's center of mass. The experimental results show that this model is able to describe the running of the humanoid robot and express its running state during the whole course of running, including the start and stop gaits, and it can help the humanoid robot adjust its gait with changes in the environment to ensure running stability.
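
The pipeline the abstract describes (frequency-domain energy features from an acceleration signal, scored against a stable-running reference via a Mahalanobis distance) can be sketched as follows. The synthetic signals, band split, and diagonal-covariance simplification are all illustrative assumptions, not the paper's setup.

```python
import math

# Sketch: band energies from a plain DFT, then a (diagonal-covariance)
# Mahalanobis distance to a "stable running" reference feature vector.

def dft_energy_bands(signal, n_bands=4):
    """Energy per frequency band from a direct DFT (no FFT library needed)."""
    n = len(signal)
    mags = []
    for k in range(n // 2):
        re = sum(signal[t] * math.cos(2 * math.pi * k * t / n) for t in range(n))
        im = -sum(signal[t] * math.sin(2 * math.pi * k * t / n) for t in range(n))
        mags.append(re * re + im * im)
    band = len(mags) // n_bands
    return [sum(mags[b * band:(b + 1) * band]) for b in range(n_bands)]

def mahalanobis_diag(x, mean, var):
    """Mahalanobis distance assuming a diagonal covariance (simplification)."""
    return math.sqrt(sum((xi - mi) ** 2 / vi for xi, mi, vi in zip(x, mean, var)))

# Synthetic 'acceleration' traces: a clean gait harmonic vs. a perturbed one
n = 64
stable = [math.sin(2 * math.pi * 4 * t / n) for t in range(n)]
wobbly = [stable[t] + 0.8 * math.sin(2 * math.pi * 13 * t / n) for t in range(n)]

ref_mean = dft_energy_bands(stable)
ref_var = [max(1.0, m * 0.01) for m in ref_mean]  # hypothetical reference spread

d_stable = mahalanobis_diag(dft_energy_bands(stable), ref_mean, ref_var)
d_wobbly = mahalanobis_diag(dft_energy_bands(wobbly), ref_mean, ref_var)
print(d_stable < d_wobbly)  # a larger distance flags an unstable gait
```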

  15. Models for estimation of land remote sensing satellites operational efficiency

    Science.gov (United States)

    Kurenkov, Vladimir I.; Kucherov, Alexander S.

    2017-01-01

    The paper deals with the problem of estimating the operational efficiency of land remote sensing satellites. Appropriate mathematical models have been developed. Some results obtained with the help of software developed in the Delphi programming environment are presented.

  16. Aviation Shipboard Operations Modeling and Simulation (ASOMS) Laboratory

    Data.gov (United States)

    Federal Laboratory Consortium — Purpose: It is the mission of the Aviation Shipboard Operations Modeling and Simulation (ASOMS) Laboratory to provide a means by which to virtually duplicate products...

  17. Comparison of extracorporeal shock wave lithotripsy running models between outsourcing cooperation and rental cooperation conducted in Taiwan.

    Science.gov (United States)

    Liu, Chih-Kuang; Ko, Ming-Chung; Chen, Shiou-Sheng; Lee, Wen-Kai; Shia, Ben-Chang; Chiang, Han-Sun

    2015-02-01

    We conducted a retrospective study to compare the cost and effectiveness between two different running models for extracorporeal shock wave lithotripsy (SWL): the outsourcing cooperation model (OC) and the rental cooperation model (RC). Between January 1999 and December 2005, we implemented OC for the SWL, and from January 2006 to October 2011, RC was utilized. With OC, the cooperative company provided a machine and shared a variable payment with the hospital, according to treatment sessions. With RC, the cooperative company provided a machine and received a fixed rent from the hospital. We calculated the cost of each treatment session, and evaluated the break-even point to estimate the lowest number of treatment sessions needed to balance revenue and cost every month. Effectiveness parameters, including the stone-free rate, the retreatment rate, and the rate of additional procedures and complications, were evaluated. Compared with OC, there were significantly fewer treatment sessions for RC every month (42.6±7.8 vs. 36.8±6.5, p=0.01). The cost of each treatment session was significantly higher for OC than for RC (751.6±20.0 USD vs. 684.7±16.7 USD, p=0.01). The break-even point for the hospital was 27.5 treatment sessions/month for OC, when the hospital obtained 40% of the payment, and it could be reduced if the hospital got a greater percentage. The break-even point for the hospital was 27.3 treatment sessions/month for RC. No significant differences were noticed for the stone-free rate, the retreatment rate, or the rate of additional procedures and complications. Our study revealed that RC had a lower cost for every treatment session, and fewer treatment sessions of SWL per month, than OC. The study might provide a managerial implication for healthcare organization managers when they face a situation of high-price equipment investment. Copyright © 2012. Published by Elsevier B.V.
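
The break-even arithmetic reported above follows from a simple model: under OC the hospital keeps a share of each session's payment, while under RC it keeps the full payment but pays a fixed machine rent. The price, overhead, and rent figures below are reverse-engineered illustrations chosen to reproduce the abstract's 27.5 and 27.3 sessions/month, not the paper's actual data.

```python
# Illustrative break-even comparison of the two SWL running models.
# Only the resulting break-even points come from the abstract; all
# input figures are hypothetical.

def break_even_oc(monthly_overhead, price, share, variable_cost=0.0):
    """Outsourcing: hospital keeps `share` of each session's payment."""
    margin = price * share - variable_cost
    return monthly_overhead / margin

def break_even_rc(monthly_overhead, rent, price, variable_cost=0.0):
    """Rental: hospital keeps the full payment but pays a fixed machine rent."""
    margin = price - variable_cost
    return (monthly_overhead + rent) / margin

price = 1000.0      # hypothetical payment per SWL session (USD)
overhead = 11000.0  # hypothetical fixed monthly hospital overhead (USD)
rent = 16300.0      # hypothetical monthly machine rent (USD)

print(break_even_oc(overhead, price, share=0.40))  # 27.5 sessions/month
print(break_even_rc(overhead, rent, price))        # 27.3 sessions/month
print(break_even_oc(overhead, price, share=0.50))  # 22.0: a larger share lowers it
```

The last line mirrors the abstract's observation that the OC break-even point falls as the hospital's revenue percentage rises.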

  18. Advancing reservoir operation description in physically based hydrological models

    Science.gov (United States)

    Anghileri, Daniela; Giudici, Federico; Castelletti, Andrea; Burlando, Paolo

    2016-04-01

    Recent decades have seen significant advances in our capacity to characterize and reproduce hydrological processes within physically based models. Yet, when the human component is considered (e.g. reservoirs, water distribution systems), the associated decisions are generally modeled with very simplistic rules, which might underperform in reproducing the actual operators' behaviour on a daily or sub-daily basis. For example, reservoir operations are usually described by a target-level rule curve, which represents the level that the reservoir should track during normal operating conditions. The associated release decision is determined by the current state of the reservoir relative to the rule curve. This modeling approach can reasonably reproduce the seasonal water volume shift due to reservoir operation. Still, it cannot capture more complex decision-making processes in response, e.g., to fluctuations of energy prices and demands, the temporal unavailability of power plants, or the varying amount of snow accumulated in the basin. In this work, we link a physically explicit hydrological model with detailed hydropower behavioural models describing the decision-making process of the dam operator. In particular, we consider two categories of behavioural models: explicit or rule-based behavioural models, where reservoir operating rules are empirically inferred from observational data, and implicit or optimization-based behavioural models, where, following a normative economic approach, the decision maker is represented as a rational agent maximising a utility function. We compare these two alternate modelling approaches on the real-world water system of the Lake Como catchment in the Italian Alps. The water system is characterized by the presence of 18 artificial hydropower reservoirs generating almost 13% of the Italian hydropower production. Results show the extent to which the hydrological regime in the catchment is affected by different behavioural models and reservoir
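
The "simplistic" target-level rule curve that the abstract contrasts with behavioural models can be sketched as a one-line feedback policy on the deviation from the target level. The gain and all storage/flow numbers below are hypothetical.

```python
# Minimal target-level rule-curve release policy: release more when the
# reservoir is above its seasonal target level, less when below, clamped
# to operational bounds. All numbers are illustrative assumptions.

def rule_curve_release(level, target_level, inflow, k=0.5,
                       min_release=5.0, max_release=80.0):
    """Feedback on the rule-curve deviation, plus pass-through of inflow."""
    release = inflow + k * (level - target_level)
    return max(min_release, min(max_release, release))

# Above target: draw the reservoir down; below target: refill it
print(rule_curve_release(level=102.0, target_level=100.0, inflow=30.0))  # 31.0
print(rule_curve_release(level=98.0, target_level=100.0, inflow=30.0))   # 29.0
```

A behavioural model, by contrast, would replace this fixed feedback with rules inferred from observed releases or with a utility-maximizing agent, as the abstract describes.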

  19. Fuzzy Control Strategies in Human Operator and Sport Modeling

    CERN Document Server

    Ivancevic, Tijana T; Markovic, Sasa

    2009-01-01

    The motivation behind mathematically modeling the human operator is to help explain the response characteristics of the complex dynamical system including the human manual controller. In this paper, we present two different fuzzy logic strategies for human operator and sport modeling: fixed fuzzy-logic inference control and adaptive fuzzy-logic control, including neuro-fuzzy-fractal control. As an application of the presented fuzzy strategies, we present a fuzzy-control based tennis simulator.
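
A fixed fuzzy-logic inference step of the kind used in such human operator models might look like the following sketch, with triangular memberships and weighted-average defuzzification. The rule base, ranges, and output levels are invented for illustration.

```python
# Two-rule fixed fuzzy-logic controller sketch: map a tracking error to a
# control correction. Memberships and rule outputs are hypothetical.

def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def operator_correction(error):
    """Weighted-average (defuzzified) correction from two fuzzy rules."""
    mu_small = tri(error, -1.0, 0.0, 1.0)  # IF error is SMALL  THEN correction 0.0
    mu_large = tri(error, 0.0, 1.0, 2.0)   # IF error is LARGE+ THEN correction 1.0
    weight = mu_small + mu_large
    if weight == 0.0:
        return 0.0
    return (mu_small * 0.0 + mu_large * 1.0) / weight

print(operator_correction(0.0))  # 0.0 (fully "small")
print(operator_correction(0.5))  # 0.5 (the two rules blend equally)
```

An adaptive variant, as in the paper, would tune the membership parameters or rule outputs online rather than fixing them.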

  20. Operational research models in warehouse design and planning

    OpenAIRE

    Geraldes, Carla A. S.; Carvalho, Sameiro; Pereira, Guilherme

    2010-01-01

    The design and operation of a warehouse involve many challenging decision problems. In this paper, a literature review on warehousing models is presented. Authors start with a hierarchy of decision problems encountered in setting up warehouse design and planning processes. Next, some operational research decision models and solution algorithms supporting decision making at each discussed level are presented. The aim is to link academic researchers and warehouse practitioners, explaining what ...

  1. Marine Vessel Models in Changing Operational Conditions - A Tutorial

    DEFF Research Database (Denmark)

    Perez, Tristan; Sørensen, Asgeir; Blanke, Mogens

    2006-01-01

    This tutorial paper provides an introduction, from a systems perspective, to the topic of ship motion dynamics of surface ships. It presents a classification of parametric models currently used for monitoring and control of marine vessels. These models are valid for certain vessel operational conditions (VOC). However, since marine systems operate in changing VOCs, there is a need to adapt the models. To date, there is no theory available to describe a general model valid across different VOCs due to the complexity of the hydrodynamics involved. It is believed that system identification could …

  2. The LHC Tier1 at PIC: experience from first LHC run

    CERN Document Server

    Flix, J; Acción, E; Acin, V; Acosta, C; Bernabeu, G; Bria, A; Casals, J; Caubet, M; Cruz, R; Delfino, M; Espinal, X; Lanciotti, E; López, F; Martinez, F; Méndez, V; Merino, G.; Pacheco, A.; Planas, E.; Porto, M C; Rodríguez, B; Sedov, A

    2013-01-01

    This paper summarizes the operational experience of the Tier1 computer center at Port d’Informació Científica (PIC) supporting the commissioning and first run (Run1) of the Large Hadron Collider (LHC). The evolution of the experiment computing models resulting from the higher amounts of data expected after the restart of the LHC is also described.

  3. The LHC Tier1 at PIC: experience from first LHC run

    Directory of Open Access Journals (Sweden)

    Flix J.

    2013-11-01

    This paper summarizes the operational experience of the Tier1 computer center at Port d’Informació Científica (PIC) supporting the commissioning and first run (Run1) of the Large Hadron Collider (LHC). The evolution of the experiment computing models resulting from the higher amounts of data expected after the restart of the LHC is also described.

  4. Quantum current operators; 3, Commutative quantum current operators semi-infinite construction and functional models

    CERN Document Server

    Ding, J; Ding, Jintai; Feigin, Boris

    1996-01-01

    We construct a commutative current operator $\bar x^+(z)$ inside $U_q(\hat{\frak sl}(2))$. With this operator and the condition of quantum integrability on the quantum current of $U_q(\hat{\frak sl}(2))$, we derive the quantization of the semi-infinite construction of integrable modules of $\hat{\frak sl}(2)$. The quantization of the functional models for $\hat{\frak sl}(2)$ is also given.

  5. Changes in running economy following downhill running.

    Science.gov (United States)

    Chen, Trevor C; Nosaka, Kazunori; Tu, Jui-Hung

    2007-01-01

    In this study, we examined the time course of changes in running economy following a 30-min downhill (-15%) run at 70% peak aerobic power (VO2peak). Ten young men performed level running at 65, 75, and 85% VO2peak (5 min for each intensity) before, immediately after, and 1 - 5 days after the downhill run, at which times oxygen consumption (VO2), minute ventilation, the respiratory exchange ratio (RER), heart rate, ratings of perceived exertion (RPE), and blood lactate concentration were measured. Stride length, stride frequency, and range of motion of the ankle, knee, and hip joints during the level runs were analysed using high-speed (120-Hz) video images. Downhill running induced reductions (7 - 21%, P run. Oxygen consumption increased (4 - 7%, P stride frequency, as well as reductions in stride length and range of motion of the ankle and knee. The results suggest that changes in running form and compromised muscle function due to muscle damage contribute to the reduction in running economy for 3 days after downhill running.

  6. A Coupled Snow Operations-Skier Demand Model for the Ontario (Canada) Ski Region

    Science.gov (United States)

    Pons, Marc; Scott, Daniel; Steiger, Robert; Rutty, Michelle; Johnson, Peter; Vilella, Marc

    2016-04-01

    The multi-billion dollar global ski industry is one of the tourism subsectors most directly impacted by climate variability and change. In the decades ahead, the scholarly literature consistently projects decreased reliability of natural snow cover, shortened and more variable ski seasons, as well as increased reliance on snowmaking with associated increases in operational costs. In order to develop the coupled snow, ski operations and demand model for the Ontario ski region (which represents approximately 18% of Canada's ski market), the research utilized multiple methods, including: an in situ survey of over 2400 skiers, daily operations data from ski resorts over the last 10 years, climate station data (1981-2013), a climate change scenario ensemble (AR5 - RCP 8.5), an updated SkiSim model (building on Scott et al. 2003; Steiger 2010), and an agent-based model (building on Pons et al. 2014). Daily snow and ski operations for all ski areas in southern Ontario were modeled with the updated SkiSim model, which utilized the current differential snowmaking capacity of individual resorts, as determined from daily ski area operations data. Snowmaking capacities and decision rules were informed by interviews with ski area managers and daily operations data. Model outputs were validated with local climate station and ski operations data. The coupled SkiSim-ABM model was run with historical weather data for seasons representative of an average winter for the 1981-2010 period, as well as an anomalously cold winter (2012-13) and the record warm winter in the region (2011-12). The impact on total skier visits and revenues, and the geographic and temporal distribution of skier visits, were compared. The implications of further climate adaptation (i.e., improving the snowmaking capacity of all ski areas to the level of leading resorts in the region) were also explored.
This research advances system modelling, especially improving the integration of snow and ski operations models with

  7. Transformer real-time reliability model based on operating conditions

    Institute of Scientific and Technical Information of China (English)

    HE Jian; CHENG Lin; SUN Yuan-zhang

    2007-01-01

    Operational reliability evaluation theory reflects the real-time reliability level of a power system. The component failure rate varies with operating conditions. The impact of real-time operating conditions, such as ambient temperature and transformer MVA (megavolt-ampere) loading, on transformer insulation life is studied in this paper. The formula for the transformer failure rate based on the winding hottest-spot temperature (HST) is given, and thus a real-time reliability model of the transformer based on operating conditions is presented. The work is illustrated using the 1979 IEEE Reliability Test System. The changes of operating conditions are simulated using an hourly load curve and a temperature curve, and the curves of real-time reliability indices are obtained through operational reliability evaluation.
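
The HST-based failure-rate idea can be illustrated with the IEEE C57.91-style aging acceleration factor for thermally upgraded paper insulation (reference hot-spot temperature 110 °C). Scaling a baseline failure rate by this factor, as done below, is an illustrative assumption rather than the paper's exact formula.

```python
import math

# Aging acceleration factor in the IEEE C57.91 form for thermally upgraded
# paper: F_AA = exp(15000/383 - 15000/(HST + 273)), HST in degrees C.
# The baseline-rate scaling below is a hypothetical simplification.

def aging_acceleration_factor(hst_c):
    """F_AA = 1 at the 110 degC reference; > 1 when the winding runs hotter."""
    return math.exp(15000.0 / 383.0 - 15000.0 / (hst_c + 273.0))

def realtime_failure_rate(base_rate, hst_c):
    """Illustrative assumption: scale a baseline failure rate by F_AA."""
    return base_rate * aging_acceleration_factor(hst_c)

print(round(aging_acceleration_factor(110.0), 3))  # 1.0 at the reference HST
print(aging_acceleration_factor(120.0) > 1.0)      # hotter winding, faster aging
```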

  8. Modeling Changes in Bed Surface Texture and Aquatic Habitat Caused by Run-of-River Hydropower Development

    Science.gov (United States)

    Fuller, T. K.; Venditti, J. G.; Nelson, P. A.; Popescu, V.; Palen, W.

    2014-12-01

    Run-of-river (RoR) hydropower has emerged as an important alternative to large reservoir-based dams in the renewable energy portfolios of China, India, Canada, and other areas around the globe. RoR projects generate electricity by diverting a portion of the channel discharge through a large pipe for several kilometers downhill where it is used to drive turbines before being returned to the channel. Individual RoR projects are thought to be less disruptive to local ecosystems than large hydropower because they involve minimal water storage, more closely match the natural hydrograph downstream of the project, and are capable of bypassing trapped sediment. However, there is concern that temporary sediment supply disruption may degrade the productivity of salmon spawning habitat downstream of the dam by causing changes in the grain size distribution of bed surface sediment. We hypothesize that salmon populations will be most susceptible to disruptions in sediment supply in channels where (1) sediment supply is high relative to transport capacity prior to RoR development, and (2) project design creates substantial sediment storage volume. Determining the geomorphic effect of RoR development on aquatic habitat requires many years of field data collection, and even then it can be difficult to link geomorphic change to RoR development alone. As an alternative, we used a one-dimensional morphodynamic model to test our hypothesis across a range of pre-development sediment supply conditions and sediment storage volumes. Our results confirm that coarsening of the median surface grain-size is greatest in cases where pre-development sediment supply was highest and sediment storage volumes were large enough to disrupt supply over the course of the annual hydrograph or longer. In cases where the pre-development sediment supply is low, coarsening of the median surface grain-size is less than 2 mm over a multiple-year disruption period.
When sediment supply is restored, our results

  9. Test and Evaluation of the Malicious Activity Simulation Tool (MAST) in a Local Area Network (LAN) Running the Common PC Operating System Environment (COMPOSE)

    Science.gov (United States)

    2013-09-01

    Front-matter excerpt: C. Benefits of this research to the DoD/DoN; D. Organization. Acronyms: ...Transfer Protocol; GIG, Global Information Grid; GUI, Graphical User Interface; HBSS, Host-Based Security System; HIPAA, Health Information Portability...; NPS, Naval Postgraduate School; OCO, Offensive Cyber Operations; OPNAV, Office of the Chief of Naval Operations; OSI, Open Systems Interconnection; PII

  10. Modeling and simulation of longwall scraper conveyor considering operational faults

    Science.gov (United States)

    Cenacewicz, Krzysztof; Katunin, Andrzej

    2016-06-01

    The paper provides a description of analytical model of a longwall scraper conveyor, including its electrical, mechanical, measurement and control actuating systems, as well as presentation of its implementation in the form of computer simulator in the Matlab®/Simulink® environment. Using this simulator eight scenarios typical of usual operational conditions of an underground scraper conveyor can be generated. Moreover, the simulator provides a possibility of modeling various operational faults and taking into consideration a measurement noise generated by transducers. The analysis of various combinations of scenarios of operation and faults with description is presented. The simulator developed may find potential application in benchmarking of diagnostic systems, testing of algorithms of operational control or can be used for supporting the modeling of real processes occurring in similar systems.

  11. INTELLECTUAL MODEL FORMATION OF RAILWAY STATION WORK DURING THE TRAIN OPERATION EXECUTION

    Directory of Open Access Journals (Sweden)

    O. V. Lavrukhin

    2014-11-01

    Full Text Available Purpose. The aim of this research is to develop an intelligent technology for determining the optimal route for freight train administration on the basis of technical and technological parameters. This will allow the station duty officer to receive informed operational decisions regarding train operation execution within the railway station. Methodology. The main elements of the research are the technical and technological parameters of the train station during train operation. Neural-network methods, used to form a self-teaching automated system, were put at the basis of the generated model of train operation execution. Findings. The presented model of train operation execution at the railway station is realized on the basis of artificial neural networks, using a supervised learning algorithm (learning with a «teacher») in the Matlab environment. Matlab is also used for the immediate implementation of the intelligent automated control system of train operation, designed for integration into the automated workplace of the station duty officer. The developed system is also useful to integrate into the workplace of the traffic controller; this proposal is viable where centralized traffic control is available on the given section of railway track. Originality. A model of train station operation during train operation execution with elements of artificial intelligence was formed. It provides the station duty officer with informed decisions concerning the choice of a rational and safe option for reception and non-stop run of trains, with the ability of self-learning and adaptation to changing conditions; this is achieved by the principles of neural network functioning. Practical value. 
A model of the intelligent process-control system for determining the optimal reception route for different categories of trains was formed. In the operational mode it offers the possibility

  12. Run scenarios for the linear collider

    Energy Technology Data Exchange (ETDEWEB)

    M. Battaglia et al.

    2002-12-23

    We have examined how a Linear Collider program of 1000 fb{sup -1} could be constructed in the case that a very rich program of new physics is accessible at {radical}s {le} 500 GeV. We have examined possible run plans that would allow the measurement of the parameters of a 120 GeV Higgs boson, the top quark, and could give information on the sparticle masses in SUSY scenarios in which many states are accessible. We find that the construction of the run plan (the specific energies for collider operation, the mix of initial state electron polarization states, and the use of special e{sup -}e{sup -} runs) will depend quite sensitively on the specifics of the supersymmetry model, as the decay channels open to particular sparticles vary drastically and discontinuously as the underlying SUSY model parameters are varied. We have explored this dependence somewhat by considering two rather closely related SUSY model points. We have called for operation at a high energy to study kinematic end points, followed by runs in the vicinity of several two body production thresholds once their location is determined by the end point studies. For our benchmarks, the end point runs are capable of disentangling most sparticle states through the use of specific final states and beam polarizations. The estimated sparticle mass precisions, combined from end point and scan data, are given in Table VIII and the corresponding estimates for the mSUGRA parameters are in Table IX. The precision for the Higgs boson mass, width, cross-sections, branching ratios and couplings are given in Table X. The errors on the top quark mass and width are expected to be dominated by the systematic limits imposed by QCD non-perturbative effects. The run plan devotes at least two thirds of the accumulated luminosity near the maximum LC energy, so that the program would be sensitive to unexpected new phenomena at high mass scales. 
We conclude that with a 1 ab{sup -1} program, expected to take the first 6-7 years

  14. Operator function modeling: Cognitive task analysis, modeling and intelligent aiding in supervisory control systems

    Science.gov (United States)

    Mitchell, Christine M.

    1990-01-01

    The design, implementation, and empirical evaluation of task-analytic models and intelligent aids for operators in the control of complex dynamic systems, specifically aerospace systems, are studied. Three related activities are included: (1) the models of operator decision making in complex and predominantly automated space systems were used and developed; (2) the Operator Function Model (OFM) was used to represent operator activities; and (3) Operator Function Model Expert System (OFMspert), a stand-alone knowledge-based system was developed, that interacts with a human operator in a manner similar to a human assistant in the control of aerospace systems. OFMspert is an architecture for an operator's assistant that uses the OFM as its system and operator knowledge base and a blackboard paradigm of problem solving to dynamically generate expectations about upcoming operator activities and interpreting actual operator actions. An experiment validated the OFMspert's intent inferencing capability and showed that it inferred the intentions of operators in ways comparable to both a human expert and operators themselves. OFMspert was also augmented with control capabilities. An interface allowed the operator to interact with OFMspert, delegating as much or as little control responsibility as the operator chose. With its design based on the OFM, OFMspert's control capabilities were available at multiple levels of abstraction and allowed the operator a great deal of discretion over the amount and level of delegated control. An experiment showed that overall system performance was comparable for teams consisting of two human operators versus a human operator and OFMspert team.

  15. An Economic Model of U.S. Airline Operating Expenses

    Science.gov (United States)

    Harris, Franklin D.

    2005-01-01

    This report presents a new economic model of operating expenses for 67 airlines. The model is based on data that the airlines reported to the United States Department of Transportation in 1999. The model incorporates expense-estimating equations that capture direct and indirect expenses of both passenger and cargo airlines. The variables and business factors included in the equations are detailed enough to calculate expenses at the flight equipment reporting level. Total operating expenses for a given airline are then obtained by summation over all aircraft operated by the airline. The model's accuracy is demonstrated by correlation with the DOT Form 41 data from which it was derived. Passenger airlines are more accurately modeled than cargo airlines. An appendix presents a concise summary of the expense estimating equations with explanatory notes. The equations include many operational and aircraft variables, which accommodate any changes that airline and aircraft manufacturers might make to lower expenses in the future. In 1999, total operating expenses of the 67 airlines included in this study amounted to slightly over $100.5 billion. The economic model reported herein estimates $109.3 billion.
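The structure described above — per-flight-equipment expense equations summed over an airline's fleet — can be sketched as follows. The equation forms, coefficients, and fleet data are invented placeholders, not the report's actual Form 41-derived regressions.

```python
# Sketch of the report's structure: per-aircraft-type expense equations summed
# over a fleet. All equation forms and coefficients here are hypothetical.

def direct_expense(block_hours, fuel_price, crew_rate):
    """Hypothetical direct operating expense for one aircraft type ($/yr)."""
    return block_hours * (fuel_price * 650 + crew_rate * 2.2)

def indirect_expense(passengers, landings):
    """Hypothetical indirect expense (ground handling, overhead)."""
    return passengers * 11.5 + landings * 420.0

def airline_total(fleet):
    """Total operating expense: summation over all aircraft types operated."""
    return sum(
        direct_expense(a["block_hours"], a["fuel_price"], a["crew_rate"])
        + indirect_expense(a["passengers"], a["landings"])
        for a in fleet
    )

fleet = [
    {"block_hours": 3000, "fuel_price": 0.9, "crew_rate": 120,
     "passengers": 250_000, "landings": 1800},
    {"block_hours": 1200, "fuel_price": 0.9, "crew_rate": 95,
     "passengers": 60_000, "landings": 900},
]
print(f"estimated total operating expense: ${airline_total(fleet):,.0f}")
```

The point of the summation design is that any fleet change (adding an aircraft type, altering utilization) propagates to the airline total without changing the estimating equations themselves.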

  16. Enhancement of GABAA-current run-down in the hippocampus occurs at the first spontaneous seizure in a model of temporal lobe epilepsy

    Science.gov (United States)

    Mazzuferi, Manuela; Palma, Eleonora; Martinello, Katiuscia; Maiolino, Francesca; Roseti, Cristina; Fucile, Sergio; Fabene, Paolo F.; Schio, Federica; Pellitteri, Michele; Sperk, Guenther; Miledi, Ricardo; Eusebi, Fabrizio; Simonato, Michele

    2010-01-01

    Refractory temporal lobe epilepsy (TLE) is associated with a dysfunction of inhibitory signaling mediated by GABAA receptors. In particular, the use-dependent decrease (run-down) of the currents (IGABA) evoked by the repetitive activation of GABAA receptors is markedly enhanced in hippocampal and cortical neurons of TLE patients. Understanding the role of IGABA run-down in the disease, and its mechanisms, may allow development of medical alternatives to surgical resection, but such mechanistic insights are difficult to pursue in surgical human tissue. Therefore, we have used an animal model (pilocarpine-treated rats) to identify when and where the increase in IGABA run-down occurs in the natural history of epilepsy. We found: (i) that the increased run-down occurs in the hippocampus at the time of the first spontaneous seizure (i.e., when the diagnosis of epilepsy is made), and then extends to the neocortex and remains constant in the course of the disease; (ii) that the phenomenon is strictly correlated with the occurrence of spontaneous seizures, because it is not observed in animals that do not become epileptic. Furthermore, initial exploration of the molecular mechanism disclosed a relative increase in α4-, relative to α1-containing GABAA receptors, occurring at the same time when the increased run-down appears, suggesting that alterations in the molecular composition of the GABA receptors may be responsible for the occurrence of the increased run-down. These observations disclose research opportunities in the field of epileptogenesis that may lead to a better understanding of the mechanism whereby a previously normal tissue becomes epileptic. PMID:20133704

  17. Running Parallel Discrete Event Simulators on Sierra

    Energy Technology Data Exchange (ETDEWEB)

    Barnes, P. D. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Jefferson, D. R. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2015-12-03

    In this proposal we consider porting the ROSS/Charm++ simulator and the discrete event models that run under its control so that they run on the Sierra architecture and make efficient use of the Volta GPUs.

  18. Ergonomics applications of a mechanical model of the human operator in power hand tool operation.

    Science.gov (United States)

    Lin, Jia-Hua; Radwin, Robert; Nembhard, David

    2005-02-01

    Applications of a new model for predicting power threaded-fastener-driving tool operator response and capacity to react against impulsive torque reaction forces are explored for use in tool selection and ergonomic workplace design. The model is based on a mechanical analog of the human operator, with parameters dependent on work location (horizontal and vertical distances); work orientation (horizontal and vertical); and tool shape (in-line, pistol grip, and right angle); and is stratified by gender. This model enables prediction of group means and variances of handle displacement and force for a given tool configuration. Response percentiles can be ascertained for specific tool operations. For example, a sample pistol grip nutrunner used on a horizontal surface at 30 cm in front of the ankles and 140 cm above the floor results in a predicted mean handle reaction displacement of 39.0 (SD=28.1) mm for males. Consequently, 63% of the male users exceed a 30 mm handle displacement limit. When a right angle tool of similar torque output is used instead, the model predicted that only 4.6% of the male tool users exceed a 30 mm handle displacement. A method is described for interpolating individual subject model parameters at any given work location using linear combinations in relation to the range of modeled factors. Additional examples pertinent to ergonomic workstation design and tool selection are provided to demonstrate how the model can be used to aid tool selection and workstation design.
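The percentile figures quoted above follow from the model's group mean and variance if the group response is assumed normally distributed (an assumption on our part, though the abstract's 63% figure is consistent with it):

```python
from statistics import NormalDist

# Fraction of operators whose handle displacement exceeds a limit, assuming
# the group response is normally distributed with the stated mean and SD.
def fraction_exceeding(mean_mm, sd_mm, limit_mm):
    return 1.0 - NormalDist(mean_mm, sd_mm).cdf(limit_mm)

# Pistol-grip nutrunner, males: mean 39.0 mm, SD 28.1 mm, 30 mm limit.
p = fraction_exceeding(39.0, 28.1, 30.0)
print(f"{p:.0%} of users exceed the 30 mm limit")  # ~63%, as in the abstract
```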

  19. Estimation of pump operational state with model-based methods

    Energy Technology Data Exchange (ETDEWEB)

    Ahonen, Tero; Tamminen, Jussi; Ahola, Jero; Viholainen, Juha; Aranto, Niina [Institute of Energy Technology, Lappeenranta University of Technology, P.O. Box 20, FI-53851 Lappeenranta (Finland); Kestilae, Juha [ABB Drives, P.O. Box 184, FI-00381 Helsinki (Finland)

    2010-06-15

    Pumps are widely used in industry, and they account for 20% of the industrial electricity consumption. Since the speed variation is often the most energy-efficient method to control the head and flow rate of a centrifugal pump, frequency converters are used with induction motor-driven pumps. Although a frequency converter can estimate the operational state of an induction motor without external measurements, the state of a centrifugal pump or other load machine is not typically considered. The pump is, however, usually controlled on the basis of the required flow rate or output pressure. As the pump operational state can be estimated with a general model having adjustable parameters, external flow rate or pressure measurements are not necessary to determine the pump flow rate or output pressure. Hence, external measurements could be replaced with an adjustable model for the pump that uses estimates of the motor operational state. Besides control purposes, modelling the pump operation can provide useful information for energy auditing and optimization purposes. In this paper, two model-based methods for pump operation estimation are presented. Factors affecting the accuracy of the estimation methods are analyzed. The applicability of the methods is verified by laboratory measurements and tests in two pilot installations. Test results indicate that the estimation methods can be applied to the analysis and control of pump operation. The accuracy of the methods is sufficient for auditing purposes, and the methods can inform the user if the pump is driven inefficiently. (author)
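The estimation idea — an adjustable pump model driven by quantities the frequency converter already knows — can be illustrated with a QH-curve model scaled by the pump affinity laws. The curve coefficients, speeds, and head value below are hypothetical, not taken from the paper.

```python
# Sketch of model-based pump state estimation: a nominal QH curve
# H0(Q) = a + b*Q^2 (hypothetical coefficients), scaled to the actual
# rotational speed n with the affinity laws, then inverted for flow rate.

def head_from_flow(q, n, n0=1450.0, a=40.0, b=-0.002):
    """Pump head (m) at flow q (m^3/h) and speed n (rpm); affinity-scaled."""
    return a * (n / n0) ** 2 + b * q ** 2

def flow_from_head(h, n, n0=1450.0, a=40.0, b=-0.002):
    """Invert the scaled QH curve to estimate flow rate from an estimated head."""
    return ((h - a * (n / n0) ** 2) / b) ** 0.5

q_est = flow_from_head(h=30.0, n=1450.0)
print(f"estimated flow: {q_est:.1f} m^3/h")
```

In a real drive the head estimate would itself come from the motor-state estimates (torque, speed) rather than a sensor, which is what lets such a model replace external flow or pressure measurements.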

  20. Effects of cognitive stimulation with a self-modeling video on time to exhaustion while running at maximal aerobic velocity: a pilot study.

    Science.gov (United States)

    Hagin, Vincent; Gonzales, Benoît R; Groslambert, Alain

    2015-04-01

    This study assessed whether video self-modeling improves running performance and influences the rate of perceived exertion and heart rate response. Twelve men (M age=26.8 yr., SD=6; M body mass index=22.1 kg.m(-2), SD=1) performed a time-to-exhaustion running test at 100 percent maximal aerobic velocity while focusing on a video self-modeling loop to synchronize their stride. Compared to the control condition, there was a significant increase in time to exhaustion. Perceived exertion was also lower, but there was no significant change in mean heart rate. In conclusion, the video self-modeling used as a pacer apparently increased endurance by decreasing perceived exertion without affecting the heart rate.

  1. Dynamic emulation modelling for the optimal operation of water systems: an overview

    Science.gov (United States)

    Castelletti, A.; Galelli, S.; Giuliani, M.

    2014-12-01

    Despite sustained increase in computing power over recent decades, computational limitations remain a major barrier to the effective and systematic use of large-scale, process-based simulation models in rational environmental decision-making. Whereas complex models may provide clear advantages when the goal of the modelling exercise is to enhance our understanding of the natural processes, they introduce problems of model identifiability caused by over-parameterization and suffer from high computational burden when used in management and planning problems. As a result, increasing attention is now being devoted to emulation modelling (or model reduction) as a way of overcoming these limitations. An emulation model, or emulator, is a low-order approximation of the process-based model that can be substituted for it in order to solve high resource-demanding problems. In this talk, an overview of emulation modelling within the context of the optimal operation of water systems will be provided. Particular emphasis will be given to Dynamic Emulation Modelling (DEMo), a special type of model complexity reduction in which the dynamic nature of the original process-based model is preserved, with consequent advantages in a wide range of problems, particularly feedback control problems. This will be contrasted with traditional non-dynamic emulators (e.g. response surface and surrogate models) that have been studied extensively in recent years and are mainly used for planning purposes. A number of real-world numerical experiences will be used to support the discussion, ranging from multi-outlet water quality control in water reservoirs, through erosion/sedimentation rebalancing in the operation of run-of-river power plants, to salinity control in lakes and reservoirs.
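The contrast the abstract draws can be illustrated with the simplest non-dynamic case: a response-surface emulator fitted by least squares to a handful of simulator runs. The "expensive simulator" here is a toy stand-in, not any of the water-system models discussed.

```python
# Minimal illustration of (non-dynamic) emulation: replace an "expensive"
# process-based simulator with a low-order polynomial response surface fitted
# by least squares to a small set of training runs.
import numpy as np

def expensive_simulator(x):
    return 3.0 + 2.0 * x - 0.5 * x ** 2   # stand-in; pretend each run is costly

rng = np.random.default_rng(0)
x_train = rng.uniform(0.0, 4.0, 20)       # 20 sampled simulator runs
y_train = expensive_simulator(x_train)

# Quadratic emulator: y ≈ c2*x^2 + c1*x + c0
coeffs = np.polyfit(x_train, y_train, deg=2)
emulator = np.poly1d(coeffs)

x_new = 2.5
print(emulator(x_new), expensive_simulator(x_new))  # emulator ≈ simulator
```

A dynamic emulator (DEMo) would instead approximate the simulator's state-transition map, so it can be iterated forward in time inside a feedback control loop; this one-shot response surface is only suitable for planning-type queries.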

  2. Categorical model of structural operational semantics for imperative language

    Directory of Open Access Journals (Sweden)

    William Steingartner

    2016-12-01

    Full Text Available The definition of a programming language consists of the formal definition of its syntax and semantics. One of the most popular semantic methods used in various stages of software engineering is structural operational semantics. It describes program behavior in the form of state changes after execution of elementary steps of the program. This feature makes structural operational semantics useful for the implementation of programming languages and also for verification purposes. In our paper we present a new approach to structural operational semantics. We model the behavior of programs in a category of states, where objects are states (an abstraction of computer memory) and morphisms model state changes, i.e. the execution of a program in elementary steps. The advantage of using a categorical model is its exact mathematical structure, with many useful proven properties, and its graphical illustration of program behavior as a path, i.e. a composition of morphisms. Our approach is able to accentuate the dynamics of structural operational semantics. For simplicity, we assume that data are intuitively typed. With its visualization facility, our model is not only a new model of structural operational semantics of imperative programming languages but can also serve education purposes.
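The core idea — objects are states, elementary statements are morphisms, and a program run is a composition of morphisms — can be sketched in a few lines. The mini-language and helper names below are illustrative, not the paper's notation.

```python
# Sketch of the categorical view: a state is a mapping of variables to values,
# each elementary statement is a morphism (state -> state), and a program run
# is the composition of those morphisms, i.e. a path in the category of states.
from functools import reduce

def assign(var, expr):
    """Morphism for the elementary statement 'var := expr(state)'."""
    def step(state):
        new = dict(state)
        new[var] = expr(state)
        return new
    return step

def compose(*steps):
    """Compose morphisms left to right: the path traced by the program."""
    return lambda state: reduce(lambda s, f: f(s), steps, state)

# x := 1; y := x + 2; x := x * y
program = compose(
    assign("x", lambda s: 1),
    assign("y", lambda s: s["x"] + 2),
    assign("x", lambda s: s["x"] * s["y"]),
)
print(program({}))  # {'x': 3, 'y': 3}
```

Composition being associative is exactly what makes "the behavior of the whole program" well defined independently of how the elementary steps are grouped.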

  3. Dimension Seven Operators in Standard Model with Right handed Neutrinos

    CERN Document Server

    Bhattacharya, Subhaditya

    2015-01-01

    In this article, we evaluate all the dimension-seven operators involving Standard Model (SM) fields and respecting SM gauge symmetry, including right-handed neutrinos. We also indicate which operators are potentially tree generated (PTG) and which will be loop generated (LG), so that we know where to look for new physics (NP) contributions in observable effects. We give limits on the NP scale from current data for each of the PTG operators without right-handed neutrinos. We also calculate the reach in NP scale for two of the operators, which produce same-sign dileptons at the upgraded Large Hadron Collider (LHC). Our list is consistent with earlier efforts in which operators of the same dimension were worked out without right-handed neutrinos.

  4. Quantum dimensions from local operator excitations in the Ising model

    CERN Document Server

    Caputa, Pawel

    2016-01-01

    We compare the time evolution of entanglement measures after local operator excitation in the critical Ising model with predictions from conformal field theory. For the spin operator and its descendants we find that Renyi entropies of a block of spins increase by a constant that matches the logarithm of the quantum dimension of the conformal family. However, for the energy operator we find a small constant contribution that differs from the conformal field theory answer of zero. We argue that the mismatch is caused by subtleties in the identification between the local operators in conformal field theory and their lattice counterparts. Our results indicate that the evolution of entanglement measures in locally excited states not only constrains this identification, but also can be used to extract non-trivial data about the conformal field theory that governs the critical point. We generalize our analysis to the Ising model away from the critical point, states with multiple local excitations, as well as t...

  5. The Computer-Aided Analytic Process Model. Operations Handbook for the Analytic Process Model Demonstration Package

    Science.gov (United States)

    1986-01-01

    Research Note 86-06: The Computer-Aided Analytic Process Model: Operations Handbook for the Analytic Process Model Demonstration Package. Ronald G... Keywords: Analytic Process Model; Operations Handbook; Tutorial; Apple; Systems Taxonomy Model; Training System; Bradley Infantry Fighting Vehicle; BIFV. Item 20, Abstract (continued): companion volume -- "The Analytic Process Model for

  6. VERIFICATION OF GEAR DYNAMIC MODEL IN DIFFERENT OPERATING CONDITIONS

    Directory of Open Access Journals (Sweden)

    Grzegorz PERUŃ

    2014-09-01

    Full Text Available The article presents the results of verification of a dynamic model of a drive system with gears. Tests were carried out on the real object in different operating conditions, and simulation studies were carried out for the same assumed conditions. Comparison of the results obtained from these two series of tests helped determine the suitability of the model and verify the possibility of replacing experimental research by simulations using the dynamic model.

  7. Q-operators in the six-vertex model

    Directory of Open Access Journals (Sweden)

    Vladimir V. Mangazeev

    2014-09-01

    Here we use a different strategy and construct Q-operators as integral operators with factorized kernels, based on Baxter's original method used in the solution of the eight-vertex model. We compare this approach with the method developed in [1] and find the explicit connection between the two constructions. We also discuss a reduction to the case of finite-dimensional representations with (half-integer) spins.

  8. Testing and Implementation of the Navy's Operational Circulation Model for the Mediterranean Sea

    Science.gov (United States)

    Farrar, P. D.; Mask, A. C.

    2012-04-01

    The US Naval Oceanographic Office (NAVOCEANO) has the responsibility for running ocean models in support of Navy operations. NAVOCEANO delivers Navy-relevant global, regional, and coastal ocean forecast products on a 24-hour, 7-day-a-week schedule. In 2011, NAVOCEANO implemented an operational version of the RNCOM (Regional Navy Coastal Ocean Model) for the Mediterranean Sea (MedSea), replacing an older variation of the Princeton Ocean Model originally set up for this area in the mid-1990s. RNCOM is a gridded model that assimilates both satellite data and in situ profile data in near real time. This 3 km MedSea RNCOM is nested within a lower-resolution global NCOM in the Atlantic at 12.5 degrees West longitude. Before being accepted as a source of operational products, a Navy ocean model must pass a series of validation tests, and once in service its skill is monitored by software and regional specialists. This presentation will provide a brief summary of the initial evaluation results. Because of the oceanographic peculiarities of this basin, the MedSea implementation posed a set of new problems for an RNCOM operation. One problem was that the present Navy satellite-altimetry assimilation techniques do not improve Mediterranean NCOM forecasts, so assimilation has been turned off, pending improvements. Another problem was temporal aliasing: most in-situ observations were profiling floats with short five-day profiling intervals, and because of the time and spatial correlations in the MedSea and in the model, the observation/model comparisons would give an unrealistically optimistic estimate of model accuracy for the Mediterranean's temperature/salinity structure. Careful pre-selection of profiles for comparison during the evaluation stage, based on spatial distribution and novelty, was used to minimize this effect. 
NAVOCEANO's operational customers are interested primarily in

  9. Can Unshod Running Reduce Running Injuries?

    Science.gov (United States)

    2012-06-08

    quadrupeds run, their internal organs expand and contract like an accordion as they stride when running. As a cheetah strides forward, its lungs expand...and take in air. When the cheetah compresses its stride, the lungs are collapsed and the cheetah breathes out. This take-a-step and take-a-breath

  10. Model of environmental life cycle assessment for coal mining operations

    Energy Technology Data Exchange (ETDEWEB)

    Burchart-Korol, Dorota, E-mail: dburchart@gig.eu; Fugiel, Agata, E-mail: afugiel@gig.eu; Czaplicka-Kolarz, Krystyna, E-mail: kczaplicka@gig.eu; Turek, Marian, E-mail: mturek@gig.eu

    2016-08-15

    This paper presents a novel approach to environmental assessment of coal mining operations, which enables assessment of the factors that are both directly and indirectly affecting the environment and are associated with the production of raw materials and energy used in processes. The primary novelty of the paper is the development of a computational environmental life cycle assessment (LCA) model for coal mining operations and the application of the model for coal mining operations in Poland. The LCA model enables the assessment of environmental indicators for all identified unit processes in hard coal mines with the life cycle approach. The proposed model enables the assessment of greenhouse gas emissions (GHGs) based on the IPCC method and the assessment of damage categories, such as human health, ecosystems and resources based on the ReCiPe method. The model enables the assessment of GHGs for hard coal mining operations in three time frames: 20, 100 and 500 years. The model was used to evaluate the coal mines in Poland. It was demonstrated that the largest environmental impacts in damage categories were associated with the use of fossil fuels, methane emissions and the use of electricity, processing of wastes, heat, and steel supports. It was concluded that an environmental assessment of coal mining operations, apart from direct influence from processing waste, methane emissions and drainage water, should include the use of electricity, heat and steel, particularly for steel supports. Because the model allows the comparison of environmental impact assessment for various unit processes, it can be used for all hard coal mines, not only in Poland but also in the world. This development is an important step forward in the study of the impacts of fossil fuels on the environment with the potential to mitigate the impact of the coal industry on the environment. - Highlights: • A computational LCA model for assessment of coal mining operations • Identification of
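Assessing GHGs over 20-, 100- and 500-year horizons, as the model does, amounts to weighting each gas by a horizon-specific global warming potential (GWP). A minimal sketch, using IPCC AR4 methane GWPs and made-up emission quantities (the paper's own characterization factors and inventory data may differ):

```python
# CO2-equivalent emissions over the three time horizons used in the model.
# Methane GWP factors follow IPCC AR4 (72 / 25 / 7.6 for 20 / 100 / 500
# years); the emission quantities below are hypothetical.
GWP_CH4 = {20: 72.0, 100: 25.0, 500: 7.6}

def co2_equivalent(co2_t, ch4_t, horizon):
    """Tonnes CO2e from direct CO2 plus methane, at the given horizon (years)."""
    return co2_t + ch4_t * GWP_CH4[horizon]

co2, ch4 = 1_000.0, 40.0   # tonnes per year, hypothetical mine
for horizon in (20, 100, 500):
    print(f"{horizon}-year horizon: {co2_equivalent(co2, ch4, horizon):,.0f} t CO2e")
```

The spread across horizons shows why methane-heavy operations such as coal mines look markedly worse on a 20-year horizon than on a 500-year one.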

  11. Effect of long-term voluntary exercise wheel running on susceptibility to bacterial pulmonary infections in a mouse model

    DEFF Research Database (Denmark)

    van de Weert-van Leeuwen, Pauline B; de Vrankrijker, Angélica M M; Fentz, Joachim

    2013-01-01

    Regular moderate exercise has been suggested to exert anti-inflammatory effects and improve immune effector functions, resulting in reduced disease incidence and viral infection susceptibility. Whether regular exercise also affects bacterial infection susceptibility is unknown. The aim of this study was to investigate whether regular voluntary exercise wheel running prior to a pulmonary infection with bacteria (P. aeruginosa) affects lung bacteriology, sickness severity and phagocyte immune function in mice. Balb/c mice were randomly placed in a cage with or without a running wheel. After 28... Although regular moderate exercise has many health benefits, healthy mice showed increased bacterial (P. aeruginosa) load and symptoms after regular voluntary exercise, with perseverance of the phagocytic capacity of monocytes and neutrophils. Whether patients suffering from bacterial infectious diseases should...

  12. Constrained Overcomplete Analysis Operator Learning for Cosparse Signal Modelling

    CERN Document Server

    Yaghoobi, Mehrdad; Gribonval, Remi; Davies, Mike E

    2012-01-01

We consider the problem of learning a low-dimensional signal model from a collection of training samples. The mainstream approach would be to learn an overcomplete dictionary to provide good approximations of the training samples using sparse synthesis coefficients. This famous sparse model has a less well known counterpart, in analysis form, called the cosparse analysis model. In this new model, signals are characterised by their parsimony in a transformed domain using an overcomplete (linear) analysis operator. We propose to learn an analysis operator from a training corpus using a constrained optimisation framework based on L1 optimisation. The reason for introducing a constraint in the optimisation framework is to exclude trivial solutions. Although there is no final answer as to which constraint is the most relevant, we investigate some conventional constraints in the model adaptation field and use the uniformly normalised tight frame (UNTF) for this purpose. We then derive a practical lear...

  13. Portfolios and risk premia for the long run

    CERN Document Server

    Guasoni, Paolo; 10.1214/11-AAP767

    2012-01-01

This paper develops a method to derive optimal portfolios and risk premia explicitly in a general diffusion model for an investor with power utility and a long horizon. The market has several risky assets and is potentially incomplete. Investment opportunities are driven by, and partially correlated with, state variables which follow an autonomous diffusion. The framework nests models of stochastic interest rates, return predictability, stochastic volatility and correlation risk. In models with several assets and a single state variable, long-run portfolios and risk premia admit explicit formulas up to the solution of an ordinary differential equation which characterizes the principal eigenvalue of an elliptic operator. Multiple state variables lead to a quasilinear partial differential equation which is solvable for many models of interest. The paper derives the long-run optimal portfolio and the long-run optimal pricing measures depending on relative risk aversion, as well as their finite-horizon performance.

  14. Piketty in the long run.

    Science.gov (United States)

    Cowell, Frank A

    2014-12-01

I examine the idea of 'the long run' in Piketty (2014) and related works. In contrast to simplistic interpretations of long-run models of income and wealth distribution, Piketty (2014) draws on a rich economic analysis that models the intra- and inter-generational processes that underlie the development of the wealth distribution. These processes inevitably involve both market and non-market mechanisms. To understand this approach, and to isolate the impact of different social and economic factors on inequality in the long run, we use the concept of an equilibrium distribution. However, the long-run analysis of policy should not presume that there is an inherent tendency for the wealth distribution to approach equilibrium.

  15. A Novel Technique for Running the NASA Legacy Code LAPIN Synchronously With Simulations Developed Using Simulink

    Science.gov (United States)

    Vrnak, Daniel R.; Stueber, Thomas J.; Le, Dzu K.

    2012-01-01

This report presents a method for running a dynamic legacy inlet simulation in concert with another dynamic simulation that uses a graphical interface. The legacy code, NASA's LArge Perturbation INlet (LAPIN) model, was coded in the FORTRAN 77 (The Portland Group, Lake Oswego, OR) programming language to run in a command shell similar to other applications that used the Microsoft Disk Operating System (MS-DOS) (Microsoft Corporation, Redmond, WA). Simulink (MathWorks, Natick, MA) is a dynamic simulation environment that runs on modern graphical operating systems. The product of this work has both simulations, LAPIN and Simulink, running synchronously on the same computer with periodic data exchanges. Implementing the method described in this paper avoided extensive changes to the legacy code and preserved its basic operating procedure. This paper presents a novel method that promotes inter-task data communication between the synchronously running processes.
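The lock-step exchange described above can be sketched as a fixed-step co-simulation master loop. The two "models" below are hypothetical stand-ins (a first-order plant for the legacy inlet side and a proportional controller for the graphical side); the real setup exchanges data with the legacy executable instead.

```python
# Sketch of a fixed-step co-simulation master loop with periodic data
# exchange. Both models are illustrative stand-ins, not LAPIN/Simulink.

def make_inlet_model(tau=0.5):
    """Stand-in for the legacy inlet simulation: a first-order lag."""
    state = {"p": 0.0}
    def step(u, dt):
        state["p"] += dt * (u - state["p"]) / tau   # dp/dt = (u - p)/tau
        return state["p"]
    return step

def make_controller(target=1.0, gain=2.0):
    """Stand-in for the graphical-side controller model."""
    def step(p, dt):
        return gain * (target - p)                  # proportional control
    return step

def cosimulate(t_end=5.0, dt=0.01):
    inlet, ctrl = make_inlet_model(), make_controller()
    p, t = 0.0, 0.0
    while t < t_end:
        u = ctrl(p, dt)      # exchange 1: plant state -> controller
        p = inlet(u, dt)     # exchange 2: control signal -> plant
        t += dt
    return p

print(round(cosimulate(), 3))
```

The key design point is that each model advances one fixed step between exchanges, which is what lets the legacy process keep its own integration loop untouched.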

  16. Model of environmental life cycle assessment for coal mining operations.

    Science.gov (United States)

    Burchart-Korol, Dorota; Fugiel, Agata; Czaplicka-Kolarz, Krystyna; Turek, Marian

    2016-08-15

This paper presents a novel approach to environmental assessment of coal mining operations, which enables assessment of the factors that are both directly and indirectly affecting the environment and are associated with the production of raw materials and energy used in processes. The primary novelty of the paper is the development of a computational environmental life cycle assessment (LCA) model for coal mining operations and the application of the model for coal mining operations in Poland. The LCA model enables the assessment of environmental indicators for all identified unit processes in hard coal mines with the life cycle approach. The proposed model enables the assessment of greenhouse gas emissions (GHGs) based on the IPCC method and the assessment of damage categories, such as human health, ecosystems and resources based on the ReCiPe method. The model enables the assessment of GHGs for hard coal mining operations in three time frames: 20, 100 and 500 years. The model was used to evaluate the coal mines in Poland. It was demonstrated that the largest environmental impacts in damage categories were associated with the use of fossil fuels, methane emissions and the use of electricity, processing of wastes, heat, and steel supports. It was concluded that an environmental assessment of coal mining operations, apart from direct influence from processing waste, methane emissions and drainage water, should include the use of electricity, heat and steel, particularly for steel supports. Because the model allows the comparison of environmental impact assessment for various unit processes, it can be used for all hard coal mines, not only in Poland but also in the world. This development is an important step forward in the study of the impacts of fossil fuels on the environment with the potential to mitigate the impact of the coal industry on the environment.
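The three GHG time frames mentioned above correspond to global warming potential (GWP) factors for different horizons. A minimal sketch, using illustrative IPCC AR4 factor values and a hypothetical emission inventory (not data from the paper):

```python
# Aggregating a GHG inventory into CO2-equivalents for the 20-, 100-
# and 500-year horizons. GWP factors are illustrative AR4 values.

GWP = {  # kg CO2-eq per kg of gas, by time horizon in years
    20:  {"CO2": 1.0, "CH4": 72.0, "N2O": 289.0},
    100: {"CO2": 1.0, "CH4": 25.0, "N2O": 298.0},
    500: {"CO2": 1.0, "CH4": 7.6,  "N2O": 153.0},
}

def ghg_co2eq(emissions_kg, horizon):
    """Aggregate an emission inventory into kg CO2-eq for one horizon."""
    factors = GWP[horizon]
    return sum(factors[gas] * kg for gas, kg in emissions_kg.items())

# Hypothetical inventory per tonne of coal (not from the paper):
inventory = {"CO2": 50.0, "CH4": 4.0, "N2O": 0.01}
for h in (20, 100, 500):
    print(h, round(ghg_co2eq(inventory, h), 1))
```

Because methane is short-lived, the CO2-equivalent total shrinks sharply as the horizon lengthens, which is why mine methane dominates the 20-year result.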

  17. Run-based multi-model interannual variability assessment of precipitation and temperature over Pakistan using two IPCC AR4-based AOGCMs

    Science.gov (United States)

    Asmat, U.; Athar, H.

    2017-01-01

The interannual variability of precipitation and temperature is derived, on an annual basis, from all runs of two Atmospheric Oceanic General Circulation Model (AOGCM) simulations based on the Intergovernmental Panel on Climate Change (IPCC) fourth Assessment Report (AR4), over Pakistan. The models are the CM2.0 and CM2.1 versions of the Geophysical Fluid Dynamics Laboratory (GFDL) AOGCM. Simulations for a recent 22-year period (1979-2000) are validated using Climate Research Unit (CRU) and NCEP/NCAR datasets over Pakistan, for the first time. The study area of Pakistan is divided into three regions: all Pakistan, northern Pakistan, and southern Pakistan. Bias, root mean square error, one-sigma standard deviation, and coefficient of variation are used as validation metrics. For all Pakistan and northern Pakistan, all three runs of GFDL-CM2.0 perform better under the above metrics, both for precipitation and temperature (except for the one-sigma standard deviation and coefficient of variation), whereas for southern Pakistan, the third run of GFDL-CM2.1 performs better, except for the root mean square error for temperature. A mean- and variance-based bias correction is applied to the modeled precipitation and temperature variables. This resulted in a reduced bias, except for the months of June, July, and August, when the reduction in bias is relatively lower.
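A mean-and-variance bias correction of the kind described rescales the modeled series so that its mean and standard deviation match the observations over the validation period. A minimal sketch with synthetic values (not the study's data):

```python
# Mean-and-variance bias correction: x' = (x - mean_m) * (sd_o/sd_m) + mean_o

import statistics

def bias_correct(model, obs):
    """Rescale modeled values to match observed mean and spread."""
    mu_m, mu_o = statistics.mean(model), statistics.mean(obs)
    sd_m, sd_o = statistics.pstdev(model), statistics.pstdev(obs)
    return [(x - mu_m) * (sd_o / sd_m) + mu_o for x in model]

model = [10.0, 14.0, 12.0, 16.0]   # synthetic: biased high, too variable
obs   = [8.0, 10.0, 9.0, 11.0]
corrected = bias_correct(model, obs)
print(statistics.mean(corrected), statistics.pstdev(corrected))
```

After correction the modeled series has exactly the observed mean and standard deviation, while preserving the rank order of the model's own anomalies.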

  18. Simplified Modeling, Analysis and Simulation of Permanent Magnet Brushless Direct Current Motors for Sensorless Operation

    Directory of Open Access Journals (Sweden)

    E. Kaliappan

    2012-01-01

Problem statement: This study presents a simplified model and experimental analysis of Permanent Magnet Brushless DC (PMBLDC) motors for sensorless operation using MATLAB/Simulink. The model provides a mechanism for monitoring and controlling the voltage, current, speed and torque response. Approach: The BLDC motor is modeled as sub-blocks. The inverter and switching function are implemented as an S-function builder block. The sensorless scheme employs a direct back-emf-based zero-crossing detection technique. Results: The proposed model with the sensorless control technique based on back-emf zero-crossing detection was tested on the BLDC motor and its performance evaluated. The simulated and experimental results show that the proposed model works well during starting and running conditions. Conclusion/Recommendation: The developed model consists of several independent sub-blocks that can also be used in modeling permanent magnet sinusoidal motors and induction motors. Hence the developed simulation model is a design tool for studying the dynamic behavior of sensorless-controlled brushless DC motors.
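The zero-crossing detection at the heart of the sensorless scheme can be sketched offline: find the samples where the floating phase's back-emf changes sign (in practice commutation then follows a fixed 30-electrical-degree delay). The waveform below is synthetic, not measured motor data.

```python
# Back-emf zero-crossing detection sketch for sensorless BLDC commutation.

import math

def zero_crossings(samples):
    """Return indices where the signed signal changes sign."""
    idx = []
    for i in range(1, len(samples)):
        if samples[i - 1] < 0.0 <= samples[i] or samples[i - 1] >= 0.0 > samples[i]:
            idx.append(i)
    return idx

# Synthetic back-emf of the floating phase over one electrical period,
# sampled once per electrical degree:
emf = [math.sin(math.radians(d)) for d in range(360)]
crossings = zero_crossings(emf)
print(crossings)   # a single sign change near 180 electrical degrees
```

A real implementation would compare the terminal voltage of the floating phase against the virtual neutral point instead of a clean sine, and filter out switching noise before the sign test.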

  19. Neural Networks for Hydrological Modeling Tool for Operational Purposes

    Science.gov (United States)

    Bhatt, Divya; Jain, Ashu

    2010-05-01

    Hydrological models are useful in many water resources applications such as flood control, irrigation and drainage, hydro power generation, water supply, erosion and sediment control, etc. Estimates of runoff are needed in many water resources planning, design development, operation and maintenance activities. Runoff is generally computed using rainfall-runoff models. Computer based hydrologic models have become popular for obtaining hydrological forecasts and for managing water systems. Rainfall-runoff library (RRL) is computer software developed by Cooperative Research Centre for Catchment Hydrology (CRCCH), Australia consisting of five different conceptual rainfall-runoff models, and has been in operation in many water resources applications in Australia. Recently, soft artificial intelligence tools such as Artificial Neural Networks (ANNs) have become popular for research purposes but have not been adopted in operational hydrological forecasts. There is a strong need to develop ANN models based on real catchment data and compare them with the conceptual models actually in use in real catchments. In this paper, the results from an investigation on the use of RRL and ANNs are presented. Out of the five conceptual models in the RRL toolkit, SimHyd model has been used. Genetic Algorithm has been used as an optimizer in the RRL to calibrate the SimHyd model. Trial and error procedures were employed to arrive at the best values of various parameters involved in the GA optimizer to develop the SimHyd model. The results obtained from the best configuration of the SimHyd model are presented here. Feed-forward neural network model structure trained by back-propagation training algorithm has been adopted here to develop the ANN models. The daily rainfall and runoff data derived from Bird Creek Basin, Oklahoma, USA have been employed to develop all the models included here. A wide range of error statistics have been used to evaluate the performance of all the models
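The ANN structure adopted above (feed-forward, trained by back-propagation) can be sketched in pure Python. The network and the rainfall-runoff pairs below are a toy illustration, not the Bird Creek data or the study's calibrated model.

```python
# Minimal 1-3-1 feed-forward network trained by back-propagation on a
# synthetic rainfall->runoff relationship.

import math, random

random.seed(0)

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

w1 = [random.uniform(-1, 1) for _ in range(3)]   # input -> hidden weights
b1 = [0.0] * 3
w2 = [random.uniform(-1, 1) for _ in range(3)]   # hidden -> output weights
b2 = 0.0

def forward(x):
    h = [sigmoid(w1[j] * x + b1[j]) for j in range(3)]
    y = sum(w2[j] * h[j] for j in range(3)) + b2
    return h, y

# synthetic data: runoff as a smooth, saturating function of rainfall
data = [(x / 10.0, 0.5 * sigmoid(3.0 * (x / 10.0 - 0.5))) for x in range(11)]

def epoch(lr=0.2):
    global b2
    sse = 0.0
    for x, t in data:
        h, y = forward(x)
        err = y - t
        sse += err * err
        for j in range(3):
            grad_h = err * w2[j] * h[j] * (1 - h[j])   # back-propagated error
            w2[j] -= lr * err * h[j]
            w1[j] -= lr * grad_h * x
            b1[j] -= lr * grad_h
        b2 -= lr * err
    return sse

losses = [epoch() for _ in range(200)]
print(losses[0], "->", losses[-1])   # training error decreases
```

Real rainfall-runoff ANNs use several lagged rainfall and flow inputs rather than a single value, but the training loop is the same.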

  20. Dynamic and adaptive policy models for coalition operations

    Science.gov (United States)

    Verma, Dinesh; Calo, Seraphin; Chakraborty, Supriyo; Bertino, Elisa; Williams, Chris; Tucker, Jeremy; Rivera, Brian; de Mel, Geeth R.

    2017-05-01

    It is envisioned that the success of future military operations depends on the better integration, organizationally and operationally, among allies, coalition members, inter-agency partners, and so forth. However, this leads to a challenging and complex environment where the heterogeneity and dynamism in the operating environment intertwines with the evolving situational factors that affect the decision-making life cycle of the war fighter. Therefore, the users in such environments need secure, accessible, and resilient information infrastructures where policy-based mechanisms adopt the behaviours of the systems to meet end user goals. By specifying and enforcing a policy based model and framework for operations and security which accommodates heterogeneous coalitions, high levels of agility can be enabled to allow rapid assembly and restructuring of system and information resources. However, current prevalent policy models (e.g., rule based event-condition-action model and its variants) are not sufficient to deal with the highly dynamic and plausibly non-deterministic nature of these environments. Therefore, to address the above challenges, in this paper, we present a new approach for policies which enables managed systems to take more autonomic decisions regarding their operations.
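The prevalent event-condition-action (ECA) model that the paper argues is insufficient can be reduced to a minimal sketch; the rule engine and the coalition scenario below are illustrative, not the paper's framework.

```python
# Minimal event-condition-action (ECA) policy engine sketch.

class EcaRule:
    def __init__(self, event, condition, action):
        self.event, self.condition, self.action = event, condition, action

class PolicyEngine:
    def __init__(self):
        self.rules = []

    def register(self, rule):
        self.rules.append(rule)

    def dispatch(self, event, context):
        """Fire every rule whose event matches and whose condition holds."""
        fired = []
        for rule in self.rules:
            if rule.event == event and rule.condition(context):
                rule.action(context)
                fired.append(rule)
        return fired

# Hypothetical example: revoke access when a coalition partner leaves.
engine = PolicyEngine()
engine.register(EcaRule(
    event="partner_left",
    condition=lambda ctx: ctx["clearance"] == "coalition",
    action=lambda ctx: ctx.update(access="revoked"),
))
ctx = {"clearance": "coalition", "access": "granted"}
engine.dispatch("partner_left", ctx)
print(ctx["access"])
```

The limitation the paper points to is visible even here: every behaviour must be enumerated as a fixed rule in advance, which is what breaks down in highly dynamic, non-deterministic coalition environments.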

  1. A spatial operator algebra for manipulator modeling and control

    Science.gov (United States)

    Rodriguez, G.; Kreutz, K.; Milman, M.

    1988-01-01

A powerful new spatial operator algebra for modeling, control, and trajectory design of manipulators is discussed, along with its implementation in the Ada programming language. Applications of this algebra to robotics include an operator representation of the manipulator Jacobian matrix; the robot dynamical equations formulated in terms of the spatial algebra, showing the complete equivalence between the recursive Newton-Euler and operator formulations of robot dynamics; the operator factorization and inversion of the manipulator mass matrix, which immediately results in O(N) recursive forward dynamics algorithms; the joint accelerations of a manipulator due to a tip contact force; the recursive computation of the equivalent mass matrix as seen at the tip of a manipulator; and recursive forward dynamics of a closed chain system. Finally, additional applications and current research involving the use of the spatial operator algebra are discussed in general terms.
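The O(N) recursive flavor of these operator expressions can be illustrated in a planar simplification: link-to-link propagation of angular and linear velocity along a revolute chain, checked against the closed-form 2R Jacobian. (The algebra itself works with full 6-D spatial quantities; this sketch is only the planar analogue.)

```python
# Planar velocity recursion for a serial revolute chain (O(N) sweep),
# verified against the analytic two-link Jacobian.

import math

def tip_velocity_recursive(lengths, q, qd):
    w, vx, vy, theta = 0.0, 0.0, 0.0, 0.0   # base is fixed
    for L, qi, qdi in zip(lengths, q, qd):
        w += qdi                             # joint adds its rate
        theta += qi
        rx, ry = L * math.cos(theta), L * math.sin(theta)
        vx += -w * ry                        # v_next = v + w x r (planar cross)
        vy += w * rx
    return vx, vy

L1 = L2 = 1.0
q1, q2, q1d, q2d = 0.3, 0.7, 0.2, -0.1
vx, vy = tip_velocity_recursive([L1, L2], [q1, q2], [q1d, q2d])

# closed-form 2R Jacobian check
jx = -L1 * math.sin(q1) * q1d - L2 * math.sin(q1 + q2) * (q1d + q2d)
jy = L1 * math.cos(q1) * q1d + L2 * math.cos(q1 + q2) * (q1d + q2d)
print(abs(vx - jx) < 1e-9 and abs(vy - jy) < 1e-9)
```

The recursion never forms the N x N Jacobian explicitly, which is the same structural trick that yields the O(N) forward dynamics algorithms mentioned above.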

  2. Comparison of operation optimization methods in energy system modelling

    DEFF Research Database (Denmark)

    Ommen, Torben Schmidt; Markussen, Wiebke Brix; Elmegaard, Brian

    2013-01-01

... possibilities for decoupling production constraints may be valuable. Introduction of heat pumps in the district heating network may pose this ability. In order to evaluate whether the introduction of heat pumps is economically viable, we develop calculation methods for the operation patterns of each of the used energy technologies. In the paper, three frequently used operation optimization methods are examined with respect to their impact on operation management of the combined technologies. One of the investigated approaches utilises linear programming for optimisation, one uses linear programming with binary operation constraints, while the third approach uses nonlinear programming. In the present case the non-linearity occurs in the boiler efficiency of power plants and the cv-value of an extraction plant. The linear programming model is used as a benchmark, as this type is frequently used, and has the lowest...
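The linear-programming benchmark minimises total operating cost subject to capacity limits and a demand balance. For the special case of independent capacity limits only, the LP optimum reduces to merit-order loading, sketched here with hypothetical plant data (the paper's models also handle binary and nonlinear constraints, which this sketch does not):

```python
# Merit-order dispatch: the closed-form solution of the simplest LP
# dispatch problem (single demand balance + per-unit capacity limits).

def merit_order_dispatch(units, demand):
    """units: list of (name, capacity_MW, marginal_cost); returns name -> MW."""
    plan = {name: 0.0 for name, _, _ in units}
    remaining = demand
    for name, cap, _ in sorted(units, key=lambda u: u[2]):  # cheapest first
        take = min(cap, remaining)
        plan[name] = take
        remaining -= take
        if remaining <= 0:
            break
    if remaining > 0:
        raise ValueError("demand exceeds total capacity")
    return plan

# Hypothetical units: (name, capacity in MW, marginal cost in EUR/MWh)
units = [("CHP", 300.0, 20.0), ("boiler", 200.0, 45.0), ("heat_pump", 150.0, 30.0)]
print(merit_order_dispatch(units, 400.0))
```

Binary operation constraints (minimum load when on) and nonlinear part-load efficiencies break this simple ordering, which is exactly why the paper compares the three formulations.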

  3. RUN TO RUN CONTROL OF TIME-PRESSURE DISPENSING SYSTEM

    Institute of Scientific and Technical Information of China (English)

    Zhao Yixiang; Li Hanxiong; Ding Han; Xiong Youlun

    2004-01-01

In electronics packaging, the time-pressure dispensing system is widely used to squeeze the adhesive fluid in a syringe onto boards or substrates with pressurized air. However, the complexity of the process, which includes air-fluid coupling and nonlinear uncertainties, makes it difficult to achieve consistent process performance. An integrated dispensing process model is first introduced, and then its input-output regression relationship is used to design a run-to-run control methodology for this process. The controller takes an EWMA scheme and its stability region is given. Experimental results verify the effectiveness of the proposed run-to-run control method for the dispensing process.
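An EWMA run-to-run controller of the kind described re-estimates the intercept of the input-output regression y = a + b*u after each run with exponential weighting, then picks the next recipe to hit the target. The process numbers below are made up for illustration:

```python
# EWMA run-to-run control sketch: estimate the process offset after each
# run and adjust the next recipe u to drive the output y to the target.

def ewma_r2r(target, b_hat, lam, n_runs, process):
    a_hat = 0.0
    u = target / b_hat
    history = []
    for _ in range(n_runs):
        y = process(u)                                        # run the process
        a_hat = lam * (y - b_hat * u) + (1 - lam) * a_hat     # EWMA update
        u = (target - a_hat) / b_hat                          # next recipe
        history.append(y)
    return history

# true process has an unknown offset the controller must remove
true_a, true_b = 0.8, 2.0
ys = ewma_r2r(target=5.0, b_hat=2.0, lam=0.3, n_runs=25,
              process=lambda u: true_a + true_b * u)
print(round(ys[-1], 4))   # output settles at the target of 5.0
```

The discount factor lam trades responsiveness against noise sensitivity, and the stability region mentioned in the abstract bounds how large lam (and the gain mismatch b/b_hat) may be.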

  4. Determinants Of Savings Behavior In Pakistan: Long Run - Short Run Association And Causality

    OpenAIRE

    Ahmad Fawad

    2015-01-01

The existing studies on private savings have mostly investigated the long-run and short-run association of different variables with private savings, whereas no known study has investigated both long-run and short-run causality of variables against private savings using data from Pakistan. The current study used time series data for Pakistan over the period 1972 to 2012 and employed a long-run cointegration test, a first normalized equation for the long-run association, a vector error correction model fo...

  5. Wave Run-Up on Rubble Breakwaters

    DEFF Research Database (Denmark)

    Van de Walle, Bjorn; De Rouck, Julien; Troch, Peter

    2005-01-01

    Seven sets of data for wave run-up on a rubble mound breakwater were combined and re-analysed, with full-scale, large-scale and small-scale model test results being taken into account. The dimensionless wave run-up value Ru-2%/Hm0 was considered, where R u-2% is the wave run-up height exceeded by...

  6. Chiefly Symmetric: Results on the Scalability of Probabilistic Model Checking for Operating-System Code

    Directory of Open Access Journals (Sweden)

    Marcus Völp

    2012-11-01

Reliability in terms of functional properties from the safety-liveness spectrum is an indispensable requirement of low-level operating-system (OS) code. However, with ever more complex and thus less predictable hardware, quantitative and probabilistic guarantees become more and more important. Probabilistic model checking is one technique to automatically obtain these guarantees. First experiences with the automated quantitative analysis of low-level operating-system code confirm the expectation that the naive probabilistic model checking approach rapidly reaches its limits when increasing the number of processes. This paper reports on our work in progress to tackle the state explosion problem for low-level OS code caused by the exponential blow-up of the model size when the number of processes grows. We studied the symmetry reduction approach and carried out our experiments with a simple test-and-test-and-set lock case study as a representative example for a wide range of protocols with natural inter-process dependencies and long-run properties. We quickly see a state-space explosion for scenarios where inter-process dependencies are insignificant. However, once inter-process dependencies dominate the picture, models with a hundred or more processes can be constructed and analysed.
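The test-and-test-and-set (TTAS) lock used as the case study can be sketched with Python threads: threads spin on a plain read ("test") and only attempt the atomic acquisition ("test-and-set") once the lock looks free. This is an illustrative model of the protocol, not the verified OS code.

```python
# Test-and-test-and-set lock sketch. threading.Lock provides the atomic
# test-and-set; the extra _held flag is the cheap, racy "test" read.

import threading, time

class TTASLock:
    def __init__(self):
        self._flag = threading.Lock()
        self._held = False

    def acquire(self):
        while True:
            while self._held:           # test: spin on a plain read
                time.sleep(0)           # yield so other threads progress
            if self._flag.acquire(blocking=False):  # test-and-set
                self._held = True
                return

    def release(self):
        self._held = False              # clear the hint before releasing
        self._flag.release()

lock, counter = TTASLock(), [0]

def worker():
    for _ in range(1000):
        lock.acquire()
        counter[0] += 1                 # critical section
        lock.release()

threads = [threading.Thread(target=worker) for _ in range(4)]
for t in threads: t.start()
for t in threads: t.join()
print(counter[0])
```

The natural inter-process dependency (everyone spinning on one shared flag) is exactly what makes all competing processes symmetric, which symmetry reduction exploits.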

  7. Automated particulate sampler field test model operations guide

    Energy Technology Data Exchange (ETDEWEB)

    Bowyer, S.M.; Miley, H.S.

    1996-10-01

    The Automated Particulate Sampler Field Test Model Operations Guide is a collection of documents which provides a complete picture of the Automated Particulate Sampler (APS) and the Field Test in which it was evaluated. The Pacific Northwest National Laboratory (PNNL) Automated Particulate Sampler was developed for the purpose of radionuclide particulate monitoring for use under the Comprehensive Test Ban Treaty (CTBT). Its design was directed by anticipated requirements of small size, low power consumption, low noise level, fully automatic operation, and most predominantly the sensitivity requirements of the Conference on Disarmament Working Paper 224 (CDWP224). This guide is intended to serve as both a reference document for the APS and to provide detailed instructions on how to operate the sampler. This document provides a complete description of the APS Field Test Model and all the activity related to its evaluation and progression.

  8. Mathematical modelling of unglazed solar collectors under extreme operating conditions

    DEFF Research Database (Denmark)

    Bunea, M.; Perers, Bengt; Eicher, S.

    2015-01-01

Combined heat pumps and solar collectors got a renewed interest on the heating system market worldwide. Connected to the heat pump evaporator, unglazed solar collectors can considerably increase their efficiency, but they also raise the coefficient of performance of the heat pump with higher average temperature levels at the evaporator. Simulation of these systems requires a collector model that can take into account operation at very low temperatures (below freezing) and under various weather conditions, particularly operation without solar irradiation. A solar collector mathematical model ... was found due to the condensation phenomenon and up to 40% due to frost under no solar irradiation. This work also points out the influence of the operating conditions on the collector's characteristics. Based on experiments carried out at a test facility, every heat flux on the absorber was separately...

  9. FLUKA predictions of the absorbed dose in the HCAL Endcap scintillators using a Run1 (2012) CMS FLUKA model

    CERN Document Server

    CMS Collaboration

    2016-01-01

Estimates of the absorbed dose in the HCAL Endcap (HE) region as predicted by the FLUKA Monte Carlo code. Dose is calculated in an R-phi-Z grid overlaying the HE region, with a resolution of 1 cm in R, 1 mm in Z, and a single 360 degree bin in phi. This allows calculation of the absorbed dose within a single 4 mm thick scintillator layer without including other regions or materials. This note shows estimates of the cumulative dose in scintillator layers 1 and 7 during the 2012 run.

  10. Simulation Modeling and Analysis of Operator-Machine Ratio

    Institute of Scientific and Technical Information of China (English)

    2007-01-01

Based on a simulation model of a semiconductor manufacturer, an operator-machine ratio (OMR) analysis is made using work study and time study. Through sensitivity analysis, it is found that labor utilization decreases with the increase of lot size. Meanwhile, the analysis identifies that the OMR for this company should be improved from 1:3 to 1:5. An application result shows that the proposed model can effectively improve the OMR by 33%.
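A classical work-study formula gives a quick sanity check on such a ratio: the largest number of machines one operator can tend without machine idle time is n* = (machine running time + servicing time) / servicing time. The times below are hypothetical, not from the paper's simulation.

```python
# Machine-interference rule of thumb for the operator-machine ratio.

def max_machines_per_operator(run_time, service_time):
    """n* = (unattended run time + servicing time) / servicing time."""
    return (run_time + service_time) / service_time

# e.g. a tool that runs 40 min unattended and needs 10 min of servicing:
n_star = max_machines_per_operator(run_time=40.0, service_time=10.0)
print(n_star)   # one operator can tend up to 5 machines
```

With these illustrative times the formula already suggests a 1:5 assignment; the value of the discrete-event simulation in the paper is capturing queuing effects and lot-size sensitivity that the static formula ignores.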

  11. Computational Modeling in Support of the National Ignition Facilty Operations

    CERN Document Server

    Shaw, M J; Haynam, C A; Williams, W H

    2001-01-01

Numerical simulation of the National Ignition Facility (NIF) laser performance and automated control of the laser setup process are crucial to the project's success. These functions will be performed by two closely coupled computer codes: the virtual beamline (VBL) and the laser performance operations model (LPOM).

  12. Computational Modeling in Support of National Ignition Facility Operations

    Energy Technology Data Exchange (ETDEWEB)

    Shaw, M J; Sacks, R A; Haynam, C A; Williams, W H

    2001-10-23

Numerical simulation of the National Ignition Facility (NIF) laser performance and automated control of the laser setup process are crucial to the project's success. These functions will be performed by two closely coupled computer codes: the virtual beamline (VBL) and the laser performance operations model (LPOM).

  13. Modeling of reservoir operation in UNH global hydrological model

    Science.gov (United States)

    Shiklomanov, Alexander; Prusevich, Alexander; Frolking, Steve; Glidden, Stanley; Lammers, Richard; Wisser, Dominik

    2015-04-01

Climate is changing and river flow is an integrated characteristic reflecting numerous environmental processes and their changes aggregated over large areas. Anthropogenic impacts on river flow, however, can significantly exceed the changes associated with climate variability. Besides irrigation, reservoirs and dams are among the major anthropogenic factors affecting streamflow. They distort the hydrological regime of many rivers by trapping freshwater runoff, modifying the timing of river discharge and increasing the evaporation rate. Thus, reservoirs are an integral part of the global hydrological system, and their impacts on rivers have to be taken into account for better quantification and understanding of hydrological changes. We developed a new technique, incorporated into the WBM-TrANS model (Water Balance Model-Transport from Anthropogenic and Natural Systems), to simulate river routing through large reservoirs and natural lakes based on information available from freely accessible databases such as GRanD (the Global Reservoir and Dam database) or NID (the National Inventory of Dams for the US). Different formulations were applied for unregulated spillway dams and lakes, and for four types of regulated reservoirs, subdivided by main purpose: generic (multipurpose), hydropower generation, irrigation and water supply, and flood control. We also incorporated rules for reservoir fill-up and draining at the times of construction and decommissioning, based on available data. The model was tested for many reservoirs of different sizes and types located in various climatic conditions, using several gridded meteorological data sets as model input and observed daily and monthly discharge data from GRDC (Global Runoff Data Center), USGS Water Data (US Geological Survey), and UNH archives. The best results, with Nash-Sutcliffe model efficiency coefficients in the range of 0.5-0.9, were obtained for the temperate zone of the Northern Hemisphere, where most of large
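A generic (multipurpose) reservoir in such routing schemes is often driven by a storage-scaled release rule. A minimal sketch in the spirit of a simplified Hanasaki-style rule, with synthetic inflows rather than WBM-TrANS formulations:

```python
# Toy monthly reservoir routing: release scales with the storage ratio,
# is capped by available water, and excess above capacity spills.

def simulate_reservoir(inflows, capacity, mean_inflow, k=0.85):
    storage, releases = 0.5 * capacity, []
    for q_in in inflows:
        target = k * (storage / capacity) * 2.0 * mean_inflow  # storage-scaled
        release = min(target, storage + q_in)   # cannot release more than held
        storage = min(storage + q_in - release, capacity)      # excess spills
        releases.append(release)
    return storage, releases

inflows = [8.0, 12.0, 20.0, 5.0, 0.0, 3.0]     # synthetic, e.g. km3/month
storage, releases = simulate_reservoir(inflows, capacity=30.0, mean_inflow=8.0)
print(storage, releases)
```

Even this toy rule reproduces the qualitative effects named above: peaks are trapped, low flows are augmented, and the outflow series is smoother than the inflow series.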

  14. DESIGN IMPROVEMENT OF THE LOCOMOTIVE RUNNING GEARS

    Directory of Open Access Journals (Sweden)

    S. V. Myamlin

    2013-09-01

Purpose. To determine the dynamic qualities of mainline freight locomotives that characterize safe motion in tangent and curved track sections at all operational speeds, a whole set of studies is needed: selection of the design scheme, development of the corresponding mathematical model of the locomotive's spatial fluctuations, construction of the computer calculation program, and theoretical followed by experimental studies of the new designs, with the results compared against existing designs. One of the necessary conditions for the qualitative improvement of traction rolling stock is to define the parameters of its running gears. Among the issues related to this problem, an important place is occupied by the task of determining the locomotive's dynamic properties at the design stage, taking into account the technical solutions selected for the running gear design. Methodology. The mathematical modeling studies are carried out by numerical integration of the dynamic loading for the mainline locomotive using the software package «Dynamics of Rail Vehicles» («DYNRAIL»). Findings. The research on improving locomotive running gear design shows that creating a modern locomotive requires engineers and scientists to realize scientific and technical solutions that enhance design speed while improving traction, braking and dynamic qualities; provide a simple and reliable design, especially for the running gear; reduce the costs of maintenance and repair; keep the initial cost and operating costs low over the whole service life; deliver a high traction force when starting, as close as possible to the ultimate force of adhesion; allow work in multiple-traction mode; and provide sufficient design speed. Practical Value. The generalization of theoretical, scientific, methodological and experimental studies aimed

  15. Running of the Running and Entropy Perturbations During Inflation

    CERN Document Server

    van de Bruck, Carsten

    2016-01-01

In single field slow-roll inflation, one expects that the spectral index $n_s - 1$ is first order in slow-roll parameters. Similarly, its running $\alpha_s = dn_s/d\log k$ and the running of the running $\beta_s = d\alpha_s/d\log k$ are second and third order and therefore expected to be progressively smaller, and usually negative. Hence, such models of inflation are in considerable tension with a recent analysis hinting that $\beta_s$ may actually be positive, and larger than $\alpha_s$. Motivated by this, in this work we ask the question of what kinds of inflationary models may be useful in achieving such a hierarchy of runnings, particularly focusing on two-field models of inflation in which the late-time transfer of power from isocurvature to curvature modes allows for a much more diverse range of phenomenology. We calculate the runnings due to this effect and briefly apply our results to assessing the feasibility of finding $|\beta_s| \gtrsim |\alpha_s|$ in some specific models.

  16. Further investigation of the model-independent probe of heavy neutral Higgs bosons at LHC Run 2

    Science.gov (United States)

    Kuang, Yu-Ping; Ren, Hong-Yu; Xia, Ling-Hao

    2016-02-01

In one of our previous papers, we provided general effective Higgs interactions for the lightest Higgs boson h (SM-like) and a heavier neutral Higgs boson H based on the effective Lagrangian formulation up to the dim-6 interactions, and then proposed two sensitive processes for probing H. We showed in several examples that the resonance peak of H and its dim-6 effective coupling constants (ECC) can be detected at LHC Run 2 with reasonable integrated luminosity. In this paper, we further perform a more thorough study of the most sensitive process, pp→ VH* → VVV, providing information about the relations between the 1σ, 3σ, 5σ statistical significance and the corresponding ranges of the Higgs ECC for an integrated luminosity of 100 fb-1. These results have two useful applications in LHC Run 2: (A) realizing the experimental determination of the ECC in the dim-6 interactions if H is found and, (B) obtaining the theoretical exclusion bounds if H is not found. Some alternative processes sensitive for certain ranges of the ECC are also analyzed. Supported by National Natural Science Foundation of China (11135003 and 11275102)

  17. Further Investigation on Model-Independent Probe of Heavy Neutral Higgs Bosons at the LHC Run 2

    CERN Document Server

    Kuang, Yu-Ping; Xia, Ling-Hao

    2015-01-01

In our previous paper, we provided general effective Higgs interactions for the lightest Higgs boson $h$ (SM-like) and a heavier neutral Higgs boson $H$ based on the effective Lagrangian formulation up to the dim-6 interactions, and then proposed two sensitive processes for probing $H$. We showed in several examples that the resonance peak of $H$ and its dim-6 effective coupling constants (ECC) can be detected at the LHC Run 2 with reasonable integrated luminosity. In this paper, we further perform a more thorough study of the most sensitive process, $pp\to VH^\ast\to VVV$, focusing on the relations between the $1\sigma$, $3\sigma$, $5\sigma$ statistical significance and the corresponding ranges of the Higgs ECC for an integrated luminosity of 100 fb$^{-1}$. These results have two useful applications in the LHC Run 2: (A) realizing the experimental determination of the ECC in the dim-6 interactions if $H$ is found and, (B) obtaining the theoretical exclusion bounds if $H$ is not found. Some alterna...

  18. Effect of long-term voluntary exercise wheel running on susceptibility to bacterial pulmonary infections in a mouse model.

    Directory of Open Access Journals (Sweden)

    Pauline B van de Weert-van Leeuwen

Regular moderate exercise has been suggested to exert anti-inflammatory effects and improve immune effector functions, resulting in reduced disease incidence and viral infection susceptibility. Whether regular exercise also affects bacterial infection susceptibility is unknown. The aim of this study was to investigate whether regular voluntary exercise wheel running prior to a pulmonary infection with bacteria (P. aeruginosa) affects lung bacteriology, sickness severity and phagocyte immune function in mice. Balb/c mice were randomly placed in a cage with or without a running wheel. After 28 days, mice were intranasally infected with P. aeruginosa. Our study showed that regular exercise resulted in a higher sickness severity score and higher bacterial (P. aeruginosa) loads in the lungs. The phagocytic capacity of monocytes and neutrophils from spleen and lungs was not affected. Thus, although regular moderate exercise has many health benefits, healthy mice showed increased bacterial (P. aeruginosa) load and symptoms after regular voluntary exercise, with preservation of the phagocytic capacity of monocytes and neutrophils. Whether patients suffering from bacterial infectious diseases should be advised to exercise with caution requires further research.

  19. Quantum dimensions from local operator excitations in the Ising model

    Science.gov (United States)

    Caputa, Paweł; Rams, Marek M.

    2017-02-01

    We compare the time evolution of entanglement measures after local operator excitation in the critical Ising model with predictions from conformal field theory. For the spin operator and its descendants we find that Rényi entropies of a block of spins increase by a constant that matches the logarithm of the quantum dimension of the conformal family. However, for the energy operator we find a small constant contribution that differs from the conformal field theory answer of zero. We argue that the mismatch is caused by subtleties in the identification between the local operators in conformal field theory and their lattice counterparts. Our results indicate that the evolution of entanglement measures in locally excited states not only constrains this identification, but also can be used to extract non-trivial data about the conformal field theory that governs the critical point. We generalize our analysis to the Ising model away from the critical point, states with multiple local excitations, as well as the evolution of the relative entropy after local operator excitation, and discuss universal features that emerge from numerics.
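
    The CFT prediction referenced above can be made concrete: after exciting the ground state with a primary operator $O$, the late-time Rényi entropy of the block increases by $\log d_O$, where $d_O$ is the quantum dimension of the conformal family. For the Ising CFT the fusion rule $\sigma\times\sigma = 1 + \epsilon$ fixes these dimensions. A minimal sketch (the function name is ours; the quantum dimensions are standard Ising fusion-category values):

```python
import math

# Quantum dimensions of the Ising CFT primaries: the identity and the
# energy operator epsilon have d = 1, while the spin operator sigma has
# d = sqrt(2), since sigma x sigma = 1 + epsilon implies d_sigma^2 = 2.
quantum_dim = {"identity": 1.0, "epsilon": 1.0, "sigma": math.sqrt(2.0)}

def renyi_jump(operator):
    """CFT prediction for the late-time increase of the Renyi entropy of a
    block after a local primary-operator excitation: Delta S = log d."""
    return math.log(quantum_dim[operator])
```

For the energy operator the predicted jump is exactly zero, which is why the small constant found on the lattice signals a subtlety in the operator identification rather than a new quantum dimension.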

  20. Demand-based maintenance and operator support based on process models; Behovsstyrt underhaall och operatoersstoed baserat paa processmodeller

    Energy Technology Data Exchange (ETDEWEB)

    Dahlquist, Erik; Widarsson, Bjoern; Tomas-Aparicio, Elena

    2012-02-15

    There is a strong demand for systems that can give early warnings of upcoming problems in process performance or sensor measurements. In this project we have developed and implemented such a system on-line. The goal of the system is to give warnings about faults needing urgent action, as well as to give advice on roughly when service may be needed for specific functions. The use of process simulation models on-line can offer a significant tool for operators and process engineers to analyse the performance of the process and make the most correct and fastest decision when problems arise. In this project, physical simulation models are used in combination with decision support tools. By using a physical model it is possible to compare the measured data to the data obtained from the simulation and give these deviations as input to a decision support tool with Bayesian Networks (BN), which yields information about the probability of wrong measurements in the instruments, process problems and maintenance needs. The application has been implemented in a CFB boiler at Maelarenergi AB. After tuning the model, the system was used online during September - October 2010 and May - October 2011, showing that the system works on-line with respect to running the simulation model, but in batch mode with respect to the BN. Examples have been made for several variables, where trends of the deviation between simulation results and measured data have been used as input to a BN, from which the probability of different faults has been calculated. Combustion up in the separator/cyclones has been detected several times, as have problems with fuel feed on both sides of the boiler, a moisture sensor not functioning as it should, and suspected malfunctioning temperature meters. Deeper investigations of the true cause of the problems have been used as input to tune the BN.
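
    The core idea, comparing measured signals against the physical-model simulation and feeding the deviation into a Bayesian network, can be sketched in a few lines. This is a minimal single-node illustration with invented priors, likelihoods and thresholds, not the configuration used at Maelarenergi:

```python
import numpy as np

# Assumed priors/likelihoods -- illustrative values, not from the study.
P_FAULT = 0.05            # prior probability of a sensor/process fault
P_DEV_GIVEN_FAULT = 0.90  # P(large deviation | fault)
P_DEV_GIVEN_OK = 0.10     # P(large deviation | no fault)

def deviation_flag(measured, simulated, threshold=2.0):
    """Flag a 'large deviation' when the mean absolute residual between
    measured data and the physical-model simulation exceeds a threshold."""
    residual = np.abs(np.asarray(measured, float) - np.asarray(simulated, float))
    return residual.mean() > threshold

def fault_probability(deviation):
    """Single-node Bayesian update: P(fault | deviation evidence)."""
    if deviation:
        num = P_DEV_GIVEN_FAULT * P_FAULT
        den = num + P_DEV_GIVEN_OK * (1 - P_FAULT)
    else:
        num = (1 - P_DEV_GIVEN_FAULT) * P_FAULT
        den = num + (1 - P_DEV_GIVEN_OK) * (1 - P_FAULT)
    return num / den

measured = [850, 852, 849, 861, 864]   # e.g. a separator temperature trend
simulated = [850, 851, 850, 851, 852]  # physical-model prediction
p = fault_probability(deviation_flag(measured, simulated))
```

A real BN for this application would chain many such nodes (instrument fault, fuel-feed problem, combustion location) and share evidence between them; the Bayes-rule update above is the elementary step.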

  1. A forced running wheel system with a microcontroller that provides high-intensity exercise training in an animal ischemic stroke model

    Energy Technology Data Exchange (ETDEWEB)

    Chen, C.C. [Department of Electrical Engineering, National Cheng-Kung University, Tainan, Taiwan (China); Chang, M.W. [Department of Electrical Engineering, Southern Taiwan University of Science and Technology, Tainan, Taiwan (China); Chang, C.P. [Department of Biotechnology, Southern Taiwan University of Science and Technology, Tainan, Taiwan (China); Chan, S.C.; Chang, W.Y.; Yang, C.L. [Department of Electrical Engineering, National Cheng-Kung University, Tainan, Taiwan (China); Lin, M.T. [Department of Medical Research, Chi Mei Medical Center, Tainan, Taiwan (China)

    2014-08-15

    We developed a forced non-electric-shock running wheel (FNESRW) system that provides rats with high-intensity exercise training using automatic exercise training patterns that are controlled by a microcontroller. The proposed system makes a breakthrough over the traditional motorized running wheel by allowing rats to perform high-intensity training, enabling comparison with the treadmill at the same exercise intensity without any electric shock. A polyvinyl chloride runway with a rough rubber surface was coated on the periphery of the wheel to permit automatic acceleration training, which allowed the rats to run consistently at high speeds (30 m/min for 1 h). An animal ischemic stroke model was used to validate the proposed system. FNESRW, treadmill, control, and sham groups were studied. The FNESRW and treadmill groups underwent 3 weeks of endurance running training. After 3 weeks, middle cerebral artery occlusion was performed, and the modified neurological severity score (mNSS), an inclined plane test, and triphenyltetrazolium chloride staining were used to evaluate the effectiveness of the proposed platform. The enhancement of motor function, mNSS scores, and infarct volumes was significantly stronger in the FNESRW group than in the control group (P<0.05) and similar to the treadmill group. The experimental data demonstrated that the proposed platform can be applied to test the benefit of exercise-preconditioning-induced neuroprotection using the animal stroke model. Additional advantages of the FNESRW system include stand-alone capability, independence of subjective human adjustment, and ease of use.

  2. A forced running wheel system with a microcontroller that provides high-intensity exercise training in an animal ischemic stroke model.

    Science.gov (United States)

    Chen, C C; Chang, M W; Chang, C P; Chan, S C; Chang, W Y; Yang, C L; Lin, M T

    2014-10-01

    We developed a forced non-electric-shock running wheel (FNESRW) system that provides rats with high-intensity exercise training using automatic exercise training patterns that are controlled by a microcontroller. The proposed system makes a breakthrough over the traditional motorized running wheel by allowing rats to perform high-intensity training, enabling comparison with the treadmill at the same exercise intensity without any electric shock. A polyvinyl chloride runway with a rough rubber surface was coated on the periphery of the wheel to permit automatic acceleration training, which allowed the rats to run consistently at high speeds (30 m/min for 1 h). An animal ischemic stroke model was used to validate the proposed system. FNESRW, treadmill, control, and sham groups were studied. The FNESRW and treadmill groups underwent 3 weeks of endurance running training. After 3 weeks, middle cerebral artery occlusion was performed, and the modified neurological severity score (mNSS), an inclined plane test, and triphenyltetrazolium chloride staining were used to evaluate the effectiveness of the proposed platform. The enhancement of motor function, mNSS scores, and infarct volumes was significantly stronger in the FNESRW group than in the control group (P<0.05) and similar to the treadmill group. The experimental data demonstrated that the proposed platform can be applied to test the benefit of exercise-preconditioning-induced neuroprotection using the animal stroke model. Additional advantages of the FNESRW system include stand-alone capability, independence of subjective human adjustment, and ease of use.

  3. Biomechanics of Distance Running.

    Science.gov (United States)

    Cavanagh, Peter R., Ed.

    Contributions from researchers in the field of running mechanics are included in the 13 chapters of this book. The following topics are covered: (1) "The Mechanics of Distance Running: A Historical Perspective" (Peter Cavanagh); (2) "Stride Length in Distance Running: Velocity, Body Dimensions, and Added Mass Effects" (Peter Cavanagh, Rodger…

  4. Operational ocean models in the Adriatic Sea: a skill assessment

    Directory of Open Access Journals (Sweden)

    J. Chiggiato

    2008-02-01

    Full Text Available In the framework of the Mediterranean Forecasting System (MFS) project, the performance of regional numerical ocean forecasting systems is assessed by means of model-model and model-data comparison. The three operational systems considered in this study are: the Adriatic REGional Model (AREG); the Adriatic Regional Ocean Modelling System (AdriaROMS); and the Mediterranean Forecasting System General Circulation Model (MFS-GCM). AREG and AdriaROMS are regional implementations (with some dedicated variations) of POM and ROMS, respectively, while MFS-GCM is an OPA-based system. The assessment is done through standard scores. In situ and remote sensing data are used to evaluate system performance. In particular, a set of CTD measurements collected across the whole western Adriatic during January 2006 and one year of satellite-derived sea surface temperature (SST) measurements allow us to assess a full three-dimensional picture of the operational forecasting systems' quality during January 2006 and to draw some preliminary considerations on the temporal fluctuation of scores estimated on surface quantities between summer 2005 and summer 2006.

    The regional systems share a negative bias in simulated temperature and salinity. Nonetheless, they outperform the MFS-GCM in the shallowest locations. Amplitude and phase errors are improved in areas shallower than 50 m, while degraded in deeper locations, where the major model deficiencies are related to an overestimation of vertical mixing. In a basin-wide overview, the two regional models show differences in the local displacement of errors. In addition, in locations where the regional models are mutually correlated, the aggregated mean squared error was found to be smaller, which is a useful outcome of having several operational systems in the same region.
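
    The standard scores used in such assessments are straightforward to compute. The sketch below evaluates bias and RMSE against observations and the effect of aggregating (averaging) two forecast systems; the station values are hypothetical, not data from the paper:

```python
import numpy as np

def bias(model, obs):
    """Mean error (model minus observation); negative = cold/fresh bias."""
    return float(np.mean(np.asarray(model) - np.asarray(obs)))

def rmse(model, obs):
    """Root-mean-square error of the forecast against observations."""
    return float(np.sqrt(np.mean((np.asarray(model) - np.asarray(obs)) ** 2)))

obs  = np.array([13.1, 13.4, 13.0, 12.8])   # e.g. CTD temperatures, deg C
areg = np.array([12.8, 13.0, 12.7, 12.5])   # hypothetical AREG forecast
roms = np.array([12.9, 13.2, 12.6, 12.7])   # hypothetical AdriaROMS forecast

# Aggregating the two systems can reduce the mean squared error relative
# to the worse member whenever their errors do not coincide exactly.
agg = 0.5 * (areg + roms)
```

Here both hypothetical systems carry the negative (cold) bias noted in the abstract, and the two-model mean scores at least as well as the worse member.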

  5. Object-oriented model of railway stations operation

    Directory of Open Access Journals (Sweden)

    D.M. Kozachenko

    2013-08-01

    Full Text Available Purpose. The purpose of this article is to improve the functional model of railway stations, reducing the time required to formalize the technological processes of their work through the use of standard elements of technology. Methodology. Technological operations, executives and technology objects are considered as the main elements of railway station functioning. Queuing techniques, simulation, finite state machines and object-oriented analysis were used as the methods of research. Findings. As a result of the research, formal data structures were developed that allow simulating the operation of a railway station with any degree of detail. In accordance with the principles of the object-oriented approach, separate elements of station technology are presented in the developed model jointly with a description of their behavior. The proposed model is implemented as a software package. Originality. The functional model of railway stations was improved through the application of an object-oriented approach to data management. This allows creating libraries of elementary technological processes and reduces the time required to formalize the technology of station work. Practical value. Use of the software package developed on the basis of the proposed model will reduce the time technologists spend obtaining technical and operational assessments of projected and existing rail stations.

  6. A consistent collinear triad approximation for operational wave models

    Science.gov (United States)

    Salmon, J. E.; Smit, P. B.; Janssen, T. T.; Holthuijsen, L. H.

    2016-08-01

    In shallow water, the spectral evolution associated with energy transfers due to three-wave (or triad) interactions is important for the prediction of nearshore wave propagation and wave-driven dynamics. The numerical evaluation of these nonlinear interactions involves the evaluation of a weighted convolution integral in both frequency and directional space for each frequency-direction component in the wave field. For reasons of efficiency, operational wave models often rely on a so-called collinear approximation that assumes that energy is only exchanged between wave components travelling in the same direction (collinear propagation) to eliminate the directional convolution. In this work, we show that the collinear approximation as presently implemented in operational models is inconsistent. This causes energy transfers to become unbounded in the limit of unidirectional waves (narrow aperture), and results in the underestimation of energy transfers in short-crested wave conditions. We propose a modification to the collinear approximation to remove this inconsistency and to make it physically more realistic. Through comparison with laboratory observations and results from Monte Carlo simulations, we demonstrate that the proposed modified collinear model is consistent, remains bounded, smoothly converges to the unidirectional limit, and is numerically more robust, providing an efficient approximation to model nonlinear triad effects in operational wave models.

  7. GASIFICATION TEST RUN TC06

    Energy Technology Data Exchange (ETDEWEB)

    Southern Company Services, Inc.

    2003-08-01

    This report discusses test campaign TC06 of the Kellogg Brown & Root, Inc. (KBR) Transport Reactor train with a Siemens Westinghouse Power Corporation (Siemens Westinghouse) particle filter system at the Power Systems Development Facility (PSDF) located in Wilsonville, Alabama. The Transport Reactor is an advanced circulating fluidized-bed reactor designed to operate as either a combustor or a gasifier using a particulate control device (PCD). The Transport Reactor was operated as a pressurized gasifier during TC06. Test run TC06 was started on July 4, 2001, and completed on September 24, 2001, with an interruption in service between July 25, 2001, and August 19, 2001, due to a filter element failure in the PCD caused by abnormal operating conditions while tuning the main air compressor. The reactor temperature was varied between 1,725 and 1,825 F at pressures from 190 to 230 psig. In TC06, 1,214 hours of solid circulation and 1,025 hours of coal feed were attained, with 797 hours of coal feed after the filter element failure. Both reactor and PCD operations were stable during the test run, with a stable baseline pressure drop. Due to its length and stability, the TC06 test run provided valuable data necessary to analyze long-term reactor operations and to identify modifications needed to improve equipment and process performance, as well as progress toward the goal of many thousands of hours of filter element exposure.

  8. Modeling the Environmental Impact of Air Traffic Operations

    Science.gov (United States)

    Chen, Neil

    2011-01-01

    There is increased interest in understanding and mitigating the impacts of air traffic on the climate, since greenhouse gases, nitrogen oxides, and contrails generated by air traffic can have adverse climate effects. The models described in this presentation are useful for quantifying these impacts and for studying alternative environmentally aware operational concepts. They have been developed by leveraging and building upon existing simulation and optimization techniques designed for efficient traffic flow management strategies. Specific enhancements include new models that simulate aircraft fuel flow, emissions and contrails. To ensure that these new models are beneficial to the larger climate research community, their outputs are compatible with existing global climate modeling tools such as the FAA's Aviation Environmental Design Tool.

  9. Operational ocean models in the Adriatic Sea: a skill assessment

    Directory of Open Access Journals (Sweden)

    J. Chiggiato

    2006-12-01

    Full Text Available In the framework of the Mediterranean Forecasting System (MFS) project, the performance of sub-regional and regional numerical ocean forecasting systems is assessed by means of model-model and model-data comparison. Three different operational systems have been considered in this study: the Adriatic REGional Model (AREG), the AdriaROMS, and the Mediterranean Forecasting System general circulation model (MFS model). AREG and AdriaROMS are regional implementations (with some dedicated variations) of POM (Blumberg and Mellor, 1987) and ROMS (Shchepetkin and McWilliams, 2005), respectively, while the MFS model is based on the OPA (Madec et al., 1998) code. The assessment has been done by means of standard scores. The data used for the assessment of the operational systems derive from in-situ and remote sensing measurements: in particular, a set of CTDs covering the whole western Adriatic, collected in January 2006, one year of SST from space-borne sensors, and six months of buoy data. This allowed us to obtain a full three-dimensional picture of the operational forecasting systems' quality during January 2006 and some preliminary considerations on the temporal fluctuation of scores estimated on surface (or near-surface) quantities between summer 2005 and summer 2006. In general, the regional models are found to be colder and fresher than observations. They eventually outperform the large-scale model in the shallowest locations, as expected. Results on amplitude and phase errors are also much better in locations shallower than 50 m, while degraded in deeper locations, where the models tend to have a higher homogeneity along the vertical column compared to observations. In a basin-wide overview, the two regional models show some dissimilarities in the local displacement of errors, something suggested by the full three-dimensional picture depicted using CTDs, but also confirmed by the comparison with SSTs. In locations where the regional models are mutually correlated, the aggregated mean

  10. Quantum hidden Markov models based on transition operation matrices

    Science.gov (United States)

    Cholewa, Michał; Gawron, Piotr; Głomb, Przemysław; Kurzyk, Dariusz

    2017-04-01

    In this work, we extend the idea of quantum Markov chains (Gudder in J Math Phys 49(7):072105 [3]) in order to propose quantum hidden Markov models (QHMMs). For that, we use the notions of transition operation matrices and vector states, which are an extension of classical stochastic matrices and probability distributions. Our main result is the Mealy QHMM formulation and proofs of the algorithms needed for application of this model: the Forward algorithm for the general case and the Viterbi algorithm for a restricted class of QHMMs. We show the relations of the proposed model to other quantum HMM propositions and present an example of application.
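
    To make the Forward idea concrete: in a QHMM each output symbol is associated with a completely positive map, and the probability of an emitted word is the trace of the composed maps applied to the initial density matrix. The sketch below uses the simplest possible case, one Kraus operator per symbol on a single qubit; the transition operation matrices of the paper are more general (matrices whose entries are themselves CP maps):

```python
import numpy as np

# Kraus operators for a single-qubit channel, one per output symbol,
# chosen so that K0^dag K0 + K1^dag K1 = I (trace preserving).
p = 0.8
K = {
    0: np.sqrt(p) * np.array([[1, 0], [0, 1]], dtype=complex),      # identity
    1: np.sqrt(1 - p) * np.array([[0, 1], [1, 0]], dtype=complex),  # bit flip
}

def forward_probability(rho, symbols):
    """Forward pass of a (single-Kraus-per-symbol) quantum HMM: apply the
    CP map of each emitted symbol in turn, then take the trace."""
    for s in symbols:
        rho = K[s] @ rho @ K[s].conj().T
    return float(np.real(np.trace(rho)))

rho0 = np.array([[1, 0], [0, 0]], dtype=complex)  # start in |0><0|
```

Because the summed channel is trace preserving, the probabilities of all words of a fixed length add up to one, exactly as for a classical HMM forward pass.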

  11. Non-perturbative running and renormalization of kaon four-quark operators with nf=2+1 domain-wall fermions

    CERN Document Server

    Boyle, P A; Lytle, A T

    2011-01-01

    We compute the renormalization factors of four-quark operators needed for the study of $K\\to\\pi\\pi$ decay in the $\\Delta I=3/2$ channel. We evaluate the Z-factors at a low energy scale ($\\mu_0=1.145 \\GeV$) using four different non-exceptional RI-SMOM schemes on a large, coarse lattice ($a\\sim 0.14\\fm$) on which the bare matrix elements are also computed. Then we compute the universal, non-perturbative, scale evolution matrix of these renormalization factors between $\\mu_0$ and $3\\GeV$. We give the numerical results for the different steps of the computation in two different non-exceptional lattice schemes, and the connection to $\\msbar$ at $3\\GeV$ is made using one-loop perturbation theory.

  12. Analysis of the operational model of Colombian electronic billing

    Directory of Open Access Journals (Sweden)

    Sérgio Roberto da Silva

    2016-06-01

    Full Text Available Colombia has been one of the first countries to introduce the electronic billing process on a voluntary basis, moving from a traditional to a digital version. In this context, the article analyzes the electronic billing process implemented in Colombia and its advantages. The research is applied, qualitative, descriptive and documentary: the regulatory framework and the conceptualization of the model are identified; the process of adoption of electronic billing is analyzed; and finally the advantages and disadvantages of its implementation are examined. The findings indicate that the model applied in Colombia for issuing, sending and receiving electronic bills is not complex, but it requires a small adequate infrastructure and trained personnel to reach all sectors, especially micro and small businesses, which form the largest business network in the country.

  13. Towards operational modeling and forecasting of the Iberian shelves ecosystem.

    Directory of Open Access Journals (Sweden)

    Martinho Marta-Almeida

    Full Text Available There is a growing interest in physical and biogeochemical oceanic hindcasts and forecasts from a wide range of users and businesses. In this contribution we present an operational biogeochemical forecast system for the Portuguese and Galician oceanographic regions, where atmospheric, hydrodynamic and biogeochemical variables are integrated. The ocean model ROMS, with a horizontal resolution of 3 km, is forced by the atmospheric model WRF and includes a Nutrients-Phytoplankton-Zooplankton-Detritus (NPZD) biogeochemical module. In addition to oceanographic variables, the system predicts the concentration of nitrate, phytoplankton, zooplankton and detritus (mmol N m-3). Model results are compared against radar-derived currents and remotely sensed SST and chlorophyll. Quantitative skill assessment during a summer upwelling period shows that our modelling system adequately represents the surface circulation over the shelf, including the observed spatial variability and trends of temperature and chlorophyll concentration. Additionally, the skill assessment also shows some deficiencies, such as the overestimation of upwelling circulation and, consequently, of the duration and intensity of the phytoplankton blooms. These and other departures from the observations are discussed, their origins identified and future improvements suggested. The forecast system is the first of its kind in the region and provides free online distribution of model input and output, as well as comparisons of model results with satellite imagery for qualitative operational assessment of model skill.
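
    An NPZD module of this kind couples four state variables through nutrient uptake, grazing, mortality and remineralization. The box-model sketch below (forward-Euler, with illustrative parameter values of our own choosing, and no light limitation, sinking or 3-D transport as in the ROMS implementation) shows the defining property that total nitrogen is conserved in a closed system:

```python
def npzd_step(N, P, Z, D, dt=0.01, mu=1.0, kN=0.5, g=0.4, beta=0.6,
              mP=0.05, mZ=0.05, r=0.1):
    """One Euler step of a minimal closed NPZD box model (mmol N m-3).
    Every loss term reappears as a gain elsewhere, so N+P+Z+D is conserved."""
    uptake = mu * N / (kN + N) * P          # nutrient uptake by phytoplankton
    graze = g * P * Z                       # zooplankton grazing on P
    dN = -uptake + r * D                    # remineralization returns N
    dP = uptake - graze - mP * P
    dZ = beta * graze - mZ * Z              # beta = assimilation efficiency
    dD = (1 - beta) * graze + mP * P + mZ * Z - r * D
    return N + dt * dN, P + dt * dP, Z + dt * dZ, D + dt * dD
```

Running the step repeatedly traces a bloom: nutrients draw down, phytoplankton peak, then zooplankton and detritus take over, with the nitrogen budget closed throughout.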

  14. Operational modelling and forecasting of the Iberian shelves ecosystem

    Science.gov (United States)

    Marta-Almeida, M.; Reboreda, R.; Rocha, C.; Dubert, J.; Nolasco, R.; Cordeiro, N.; Luna, T.; Rocha, A.; Silva, J. Lencart e.; Queiroga, H.; Peliz, A.; Ruiz-Villarreal, M.

    2012-04-01

    There is a growing interest in physical and biogeochemical oceanic hindcasts and forecasts from a wide range of users and businesses. In this contribution we present an operational biogeochemical forecast system for the Portuguese and Galician oceanographic regions, where atmospheric, hydrodynamic and biogeochemical variables are integrated. The ocean model ROMS, with a horizontal resolution of 3 km, is forced by the atmospheric model WRF and includes a NPZD biogeochemical module. In addition to oceanographic variables, the system predicts the concentration of nitrate, phytoplankton, zooplankton and detritus (mmol N m-3). Model results are compared against radar-derived currents and remotely sensed SST and chlorophyll. Quantitative skill assessment during a summer upwelling period shows that our modelling system adequately represents the surface circulation over the shelf, including the observed spatial variability and trends of temperature and chlorophyll concentration. Additionally, the skill assessment also shows some deficiencies, such as the overestimation of upwelling circulation and, consequently, of the duration and intensity of the phytoplankton blooms. These and other departures from the observations are discussed, their origins identified and future improvements suggested. The forecast system is the first of its kind in the region and provides free online distribution of model input and output, as well as comparisons of model results with satellite imagery for qualitative operational assessment of model skill.

  15. DISTRIBUTED PROCESSING TRADE-OFF MODEL FOR ELECTRIC UTILITY OPERATION

    Science.gov (United States)

    Klein, S. A.

    1994-01-01

    The Distributed Processing Trade-off Model for Electric Utility Operation is based upon a study performed for the California Institute of Technology's Jet Propulsion Laboratory. This study presented a technique that addresses the question of trade-offs between expanding a communications network or expanding the capacity of distributed computers in an electric utility Energy Management System (EMS). The technique resulted in the development of a quantitative assessment model that is presented in a Lotus 1-2-3 worksheet environment. The model gives EMS planners a macroscopic tool for evaluating distributed processing architectures and the major technical and economic trade-offs, as well as interactions, within these architectures. The model inputs (which may be varied according to application and need) include geographic parameters, data flow and processing workload parameters, operator staffing parameters, and technology/economic parameters. The model's outputs are total cost in various categories and a number of intermediate cost and technical calculation results, as well as a graphical presentation of Costs vs. Percent Distribution for various parameters. The model was developed in 1986 and has been implemented on an IBM PC using the LOTUS 1-2-3 spreadsheet environment. Also included with the spreadsheet model are a number of representative but hypothetical utility system examples.
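
    The kind of macroscopic trade-off the worksheet evaluates can be reproduced with a toy cost function: as processing moves out to the sites, central hardware and communications costs fall while per-site hardware costs rise. All coefficients below are invented for illustration, not values from the JPL study:

```python
def total_cost(pct_distributed, n_sites=10,
               central_cpu_cost=500_000, site_cpu_cost=40_000,
               comm_cost_per_site=25_000):
    """Illustrative EMS trade-off: pushing processing to remote sites adds
    hardware cost at each site but shrinks the central machine and the
    volume of raw telemetry hauled back over the network."""
    f = pct_distributed / 100.0
    central = central_cpu_cost * (1.0 - 0.6 * f)            # smaller central host
    sites = n_sites * site_cpu_cost * f                     # per-site processors
    comms = n_sites * comm_cost_per_site * (1.0 - 0.7 * f)  # less raw telemetry
    return central + sites + comms
```

Sweeping `pct_distributed` from 0 to 100 reproduces the model's Costs vs. Percent Distribution curve; with these particular coefficients the fully distributed architecture comes out cheaper.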

  16. Effects of Obstacles on the Dynamics of Kinesins, Including Velocity and Run Length, Predicted by a Model of Two Dimensional Motion.

    Directory of Open Access Journals (Sweden)

    Woochul Nam

    Full Text Available Kinesins are molecular motors which walk along microtubules by moving their heads to different binding sites. The motion of kinesin is realized by a conformational change in the structure of the kinesin molecule and by a diffusion of one of its two heads. In this study, a novel model is developed to account for the 2D diffusion of kinesin heads to several neighboring binding sites (near the surface of microtubules). To determine the direction of the next step of a kinesin molecule, this model considers the extension of the neck linkers of kinesin and the dynamic behavior of the coiled-coil structure of the kinesin neck. Also, the mechanical interference between kinesins and obstacles anchored on the microtubules is characterized. The model predicts that both the kinesin velocity and run length (i.e., the walking distance before detaching from the microtubule) are reduced by static obstacles. The run length is decreased more significantly by static obstacles than the velocity. Moreover, our model is able to predict the motion of kinesin when several other motors also move along the same microtubule. Furthermore, it suggests that the effect of mechanical interaction/interference between motors is much weaker than the effect of static obstacles. Our newly developed model can be used to address unanswered questions regarding degraded transport caused by the presence of excessive tau proteins on microtubules.

  17. Hydrologic Modeling at the National Water Center: Operational Implementation of the WRF-Hydro Model to support National Weather Service Hydrology

    Science.gov (United States)

    Cosgrove, B.; Gochis, D.; Clark, E. P.; Cui, Z.; Dugger, A. L.; Fall, G. M.; Feng, X.; Fresch, M. A.; Gourley, J. J.; Khan, S.; Kitzmiller, D.; Lee, H. S.; Liu, Y.; McCreight, J. L.; Newman, A. J.; Oubeidillah, A.; Pan, L.; Pham, C.; Salas, F.; Sampson, K. M.; Smith, M.; Sood, G.; Wood, A.; Yates, D. N.; Yu, W.; Zhang, Y.

    2015-12-01

    The National Weather Service (NWS) National Water Center (NWC) is collaborating with the NWS National Centers for Environmental Prediction (NCEP) and the National Center for Atmospheric Research (NCAR) to implement a first-of-its-kind operational instance of the Weather Research and Forecasting (WRF)-Hydro model over the Continental United States (CONUS) and contributing drainage areas on the NWS Weather and Climate Operational Supercomputing System (WCOSS) supercomputer. The system will provide seamless, high-resolution, continuously cycling forecasts of streamflow and other hydrologic outputs of value from both deterministic- and ensemble-type runs. WRF-Hydro will form the core of the NWC national water modeling strategy, supporting NWS hydrologic forecast operations along with the emergency response and water management efforts of partner agencies. Input and output from the system will be comprehensively verified via the NWC Water Resource Evaluation Service. Hydrologic events occur on a wide range of temporal scales, from fast-acting flash floods to long-term flow events impacting water supply. In order to capture this range of events, the initial operational WRF-Hydro configuration will feature 1) hourly analysis runs, 2) short- and medium-range deterministic forecasts out to two-day and ten-day horizons, and 3) long-range ensemble forecasts out to 30 days. All three of these configurations are underpinned by a 1 km execution of the NoahMP land surface model, with channel routing taking place on 2.67 million NHDPlusV2 catchments covering the CONUS and contributing areas. Additionally, the short- and medium-range forecast runs will feature surface and sub-surface routing on a 250 m grid, while the hourly analyses will feature this same 250 m routing in addition to nudging-based assimilation of US Geological Survey (USGS) streamflow observations. A limited number of major reservoirs will be configured within the model to begin to represent the first-order impacts of

  18. Transparent settlement model between mobile network operator and mobile voice over Internet protocol operator

    Directory of Open Access Journals (Sweden)

    Luzango Pangani Mfupe

    2014-12-01

    Full Text Available Advances in technology have enabled a network-less mobile voice over internet protocol operator (MVoIPO) to offer data services (i.e. voice, text and video) to a mobile network operator's (MNO's) subscribers through an application enabled on the subscriber's user equipment, using the MNO's packet-based cellular network infrastructure. However, this raises the problem of how to handle interconnection settlements between the two types of operators, particularly how to deal with users who now have the ability to make ‘free’ on-net MVoIP calls among themselves within the MNO's network. This study proposes a service level agreement-based transparent settlement model (TSM) to solve this problem. The model is based on concepts of achievement and reward, not violation and punishment. The TSM calculates the MVoIPO's throughput distribution by monitoring the variations of peaks and troughs at the edge of the network. This facilitates the determination of conformance and non-conformance levels relative to the pre-set throughput thresholds and, subsequently, the issuing of compensation to the MVoIPO by the MNO as a result of generating an economically acceptable volume of data traffic.
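
    The achievement-and-reward mechanism can be sketched as a simple conformance calculation: monitored throughput samples are compared against the agreed threshold, and compensation scales with the conforming share of the traffic. The function names, sample values and linear payout rule below are our own illustrative assumptions, not the paper's formulation:

```python
def conformance_level(samples_mbps, threshold_mbps):
    """Fraction of monitored peak/trough throughput samples at or above
    the agreed SLA threshold (1.0 = full conformance)."""
    hits = sum(1 for s in samples_mbps if s >= threshold_mbps)
    return hits / len(samples_mbps)

def compensation(samples_mbps, threshold_mbps, traffic_gb, rate_per_gb):
    """Reward-only settlement: the MNO compensates the MVoIPO in proportion
    to the conforming share of the traffic volume it generated; there is no
    penalty term, matching the achievement-and-reward philosophy."""
    level = conformance_level(samples_mbps, threshold_mbps)
    return level * traffic_gb * rate_per_gb
```

With samples [12, 9, 15, 11] Mbps against a 10 Mbps threshold, three of four samples conform, so three quarters of the traffic volume earns compensation.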

  19. Ethical issues in engineering models: an operations researcher's reflections.

    Science.gov (United States)

    Kleijnen, J

    2011-09-01

    This article starts with an overview of the author's personal involvement--as an Operations Research consultant--in several engineering case studies that may raise ethical questions; e.g., case studies on nuclear waste, water management, sustainable ecology, military tactics, and animal welfare. All these case studies employ computer simulation models. In general, models are meant to solve practical problems, which may have ethical implications for the various stakeholders; namely, the modelers, the clients, and the public at large. The article further presents an overview of codes of ethics in a variety of disciplines. It discusses the role of mathematical models, focusing on the validation of these models' assumptions. Documentation of these model assumptions needs special attention. Some ethical norms and values may be quantified through the model's multiple performance measures, which might be optimized. The uncertainty about the validity of the model leads to risk or uncertainty analysis and to a search for robust models. Ethical questions may be pressing in military models, including war games. However, computer games and the related experimental economics may also provide a special tool to study ethical issues. Finally, the article briefly discusses whistleblowing. Its many references to publications and websites enable further study of ethical issues in modeling.

  20. Operational Space Weather Models: Trials, Tribulations and Rewards

    Science.gov (United States)

    Schunk, R. W.; Scherliess, L.; Sojka, J. J.; Thompson, D. C.; Zhu, L.

    2009-12-01

    There are many empirical, physics-based, and data assimilation models that can be used for space weather applications, covering the entire domain from the surface of the Sun to the Earth's surface. At Utah State University we developed two physics-based data assimilation models of the terrestrial ionosphere as part of a program called Global Assimilation of Ionospheric Measurements (GAIM). One of the data assimilation models is now in operational use at the Air Force Weather Agency (AFWA) in Omaha, Nebraska. This model is a Gauss-Markov Kalman Filter (GAIM-GM) model, and it uses a physics-based model of the ionosphere and a Kalman filter as a basis for assimilating a diverse set of real-time (or near real-time) measurements. The physics-based model is the Ionosphere Forecast Model (IFM), which is global and covers the E-region, F-region, and topside ionosphere from 90 to 1400 km. It takes account of five ion species (NO+, O2+, N2+, O+, H+), but the main output of the model is a 3-dimensional electron density distribution at user-specified times. The second data assimilation model uses a physics-based Ionosphere-Plasmasphere Model (IPM) and an ensemble Kalman filter technique as a basis for assimilating a diverse set of real-time (or near real-time) measurements. This Full Physics model (GAIM-FP) is global, covers the altitude range from 90 to 30,000 km, includes six ions (NO+, O2+, N2+, O+, H+, He+), and calculates the self-consistent ionospheric drivers (electric fields and neutral winds). The GAIM-FP model is scheduled for delivery in 2012. Both of these GAIM models assimilate bottom-side Ne profiles from a variable number of ionosondes, slant TEC from a variable number of ground GPS/TEC stations, in situ Ne from four DMSP satellites, line-of-sight UV emissions measured by satellites, and occultation data. 
Quality control algorithms for all of the data types are provided as an integral part of the GAIM models and these models take account of
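
    The Kalman-filter blending at the heart of GAIM-GM can be sketched in scalar form. All numbers below are invented, and the real filter operates on a full 3-dimensional electron density state rather than a single value:

```python
def kalman_update(background, bg_var, obs, obs_var):
    """One scalar Kalman-filter analysis step: blend a model background
    value with an observation, weighting by inverse error variance."""
    gain = bg_var / (bg_var + obs_var)
    analysis = background + gain * (obs - background)
    analysis_var = (1.0 - gain) * bg_var
    return analysis, analysis_var

# Background electron density from the physics model vs. an
# ionosonde-derived value (hypothetical numbers, units m^-3).
ne, var = kalman_update(1.0e11, 4.0e20, 1.4e11, 1.0e20)
```

    Because the background error variance here is larger than the observation error variance, the analysis lands closer to the observation, and the analysis variance is smaller than either input variance.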

  1. Operations and support cost modeling using Markov chains

    Science.gov (United States)

    Unal, Resit

    1989-01-01

    Systems for future missions will be selected with life cycle costs (LCC) as a primary evaluation criterion. This reflects the current realization that only systems which are considered affordable will be built in the future due to national budget constraints. Such an environment calls for innovative cost modeling techniques which address all of the phases a space system goes through during its life cycle, namely: design and development; fabrication; operations and support; and retirement. A significant portion of the LCC for reusable systems is generated during the operations and support (OS) phase. Typically, OS costs can account for 60 to 80 percent of the total LCC. Clearly, OS costs are wholly determined, or at least strongly influenced, by decisions made during the design and development phases of the project. As a result, OS costs need to be considered and estimated early in the conceptual phase. To be effective, an OS cost estimating model needs to account for actual instead of ideal processes by associating cost elements with probabilities. One approach that may be suitable for OS cost modeling is the use of the Markov chain process. Markov chains are an important method of probabilistic analysis for operations research analysts but they are rarely used for life cycle cost analysis. This research effort evaluates the use of Markov chains in LCC analysis by developing an OS cost model for a hypothetical reusable space transportation vehicle (HSTV) and suggests further uses of the Markov chain process as a design-aid tool.
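
    The core Markov-chain calculation can be sketched briefly: find the long-run fraction of time spent in each OS state, then weight per-state costs by those fractions. The states, transition probabilities, and costs below are invented stand-ins, not values from the study:

```python
def stationary_distribution(P, iters=200):
    """Approximate the stationary distribution of a Markov chain by
    repeatedly applying the row-stochastic transition matrix P."""
    n = len(P)
    pi = [1.0 / n] * n
    for _ in range(iters):
        pi = [sum(pi[i] * P[i][j] for i in range(n)) for j in range(n)]
    return pi

# Hypothetical OS states for a reusable vehicle: nominal turnaround,
# unscheduled maintenance, depot repair (transition rates invented).
P = [[0.80, 0.15, 0.05],
     [0.60, 0.30, 0.10],
     [0.50, 0.20, 0.30]]
cost = [1.0, 4.0, 10.0]   # relative cost per flight cycle in each state
pi = stationary_distribution(P)
expected_cost = sum(p * c for p, c in zip(pi, cost))
```

    Associating cost elements with probabilities in this way captures "actual instead of ideal" processes: rare but expensive states contribute in proportion to how often the chain actually visits them.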

  2. Optimizing Biorefinery Design and Operations via Linear Programming Models

    Energy Technology Data Exchange (ETDEWEB)

    Talmadge, Michael; Batan, Liaw; Lamers, Patrick; Hartley, Damon; Biddy, Mary; Tao, Ling; Tan, Eric

    2017-03-28

    The ability to assess and optimize economics of biomass resource utilization for the production of fuels, chemicals and power is essential for the ultimate success of a bioenergy industry. The team of authors, consisting of members from the National Renewable Energy Laboratory (NREL) and the Idaho National Laboratory (INL), has developed simple biorefinery linear programming (LP) models to enable the optimization of theoretical or existing biorefineries. The goal of this analysis is to demonstrate how such models can benefit the developing biorefining industry. It focuses on a theoretical multi-pathway, thermochemical biorefinery configuration and demonstrates how the biorefinery can use LP models for operations planning and optimization in comparable ways to the petroleum refining industry. Using LP modeling tools developed under U.S. Department of Energy's Bioenergy Technologies Office (DOE-BETO) funded efforts, the authors investigate optimization challenges for the theoretical biorefineries such as (1) optimal feedstock slate based on available biomass and prices, (2) breakeven price analysis for available feedstocks, (3) impact analysis for changes in feedstock costs and product prices, (4) optimal biorefinery operations during unit shutdowns / turnarounds, and (5) incentives for increased processing capacity. These biorefinery examples are comparable to crude oil purchasing and operational optimization studies that petroleum refiners perform routinely using LPs and other optimization models. It is important to note that the analyses presented in this article are strictly theoretical and they are not based on current energy market prices. The pricing structure assigned for this demonstrative analysis is consistent with $4 per gallon gasoline, which clearly assumes an economic environment that would favor the construction and operation of biorefineries. The analysis approach and examples provide valuable insights into the usefulness of analysis tools for
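
    One of the listed optimization questions, breakeven price analysis for a feedstock, reduces to simple margin arithmetic before any LP is needed. The yield, price, and cost figures below are illustrative assumptions, not NREL/INL model values:

```python
def breakeven_feedstock_price(yield_gal_per_ton, product_price_per_gal,
                              processing_cost_per_ton):
    """Breakeven feedstock price ($/ton): the feedstock price at which
    product revenue exactly covers feedstock plus conversion cost."""
    revenue_per_ton = yield_gal_per_ton * product_price_per_gal
    return revenue_per_ton - processing_cost_per_ton

# E.g. 65 gal/ton fuel yield at $4/gal (the article's assumed gasoline
# price environment) and a hypothetical $180/ton conversion cost.
price = breakeven_feedstock_price(65.0, 4.0, 180.0)
```

    In a full LP, this margin becomes the objective coefficient on each feedstock purchase variable, and the optimal slate follows from maximizing total margin subject to availability and capacity constraints.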

  3. Assessment of Eco-operation Effect of Three Gorges Reservoir During Trial Run Period

    Institute of Scientific and Technical Information of China (English)

    陈进; 李清清

    2015-01-01

    As a key water control project to manage and develop the Yangtze River, the Three Gorges Project has a profound impact on the national economy, society and the ecological environment. It has not only played a significant role in defending against flood threats on the Yangtze River, but has also created remarkable economic benefits in power generation and river shipping since it was built. To alleviate the adverse impacts caused by the operation of the Three Gorges Reservoir, ecological operation trials were carried out from 2011 to 2014. In order to promote the ecological operation of the reservoir in the future, the effect of the ecological operation carried out during the trial run period is scientifically evaluated, combining the assessment conclusions on the ecological impact of the Three Gorges Project from the design phase with the monitoring results of the ecological operation targets in the trial run period. Finally, some suggestions are put forward for future ecological operation.

  4. Globally nilpotent differential operators and the square Ising model

    Energy Technology Data Exchange (ETDEWEB)

    Bostan, A [INRIA Rocquencourt, Domaine de Voluceau, BP 105 78153 Le Chesnay Cedex (France); Boukraa, S [LPTHIRM and Departement d' Aeronautique, Universite de Blida (Algeria); Hassani, S; Zenine, N [Centre de Recherche Nucleaire d' Alger, 2 Bd. Frantz Fanon, BP 399, 16000 Alger (Algeria); Maillard, J-M [LPTMC, CNRS, Universite de Paris, Tour 24, 4eme etage, Case 121, 4 Place Jussieu, 75252 Paris Cedex 05 (France); Weil, J-A [LACO, XLIM, Universite de Limoges, 123 Avenue Albert Thomas, 87060 Limoges Cedex (France)], E-mail: alin.bostan@inria.fr, E-mail: boukraa@mail.univ-blida.dz, E-mail: maillard@lptmc.jussieu.fr, E-mail: jacques-arthur.weil@unilim.fr, E-mail: njzenine@yahoo.com

    2009-03-27

    We recall various multiple integrals with one parameter, related to the isotropic square Ising model, and corresponding, respectively, to the n-particle contributions of the magnetic susceptibility, to the (lattice) form factors, to the two-point correlation functions and to their {lambda}-extensions. The univariate analytic functions defined by these integrals are holonomic and even G-functions: they satisfy Fuchsian linear differential equations with polynomial coefficients and have some arithmetic properties. We recall the explicit forms, found in previous work, of these Fuchsian equations, as well as their Russian-doll and direct sum structures. These differential operators are selected Fuchsian linear differential operators, and their remarkable properties have a deep geometrical origin: they are all globally nilpotent, or, sometimes, even have zero p-curvature. We also display miscellaneous examples of globally nilpotent operators emerging from enumerative combinatorics problems for which no integral representation is yet known. Focusing on the factorized parts of all these operators, we find out that the global nilpotence of the factors (resp. p-curvature nullity) corresponds to a set of selected structures of algebraic geometry: elliptic curves, modular curves, curves of genus five, six,..., and even a remarkable weight-1 modular form emerging in the three-particle contribution {chi}{sup (3)} of the magnetic susceptibility of the square Ising model. Noticeably, this associated weight-1 modular form is also seen in the factors of the differential operator for another n-fold integral of the Ising class, {phi}{sup (3)}{sub H}, for the staircase polygons counting, and in Apery's study of {zeta}(3). G-functions naturally occur as solutions of globally nilpotent operators. In the case where we do not have G-functions, but Hamburger functions (one irregular singularity at 0 or {infinity}) that correspond to the confluence of singularities in the scaling limit

  5. A model technology transfer program for independent operators

    Energy Technology Data Exchange (ETDEWEB)

    Schoeling, L.G.

    1996-08-01

    In August 1992, the Energy Research Center (ERC) at the University of Kansas was awarded a contract by the US Department of Energy (DOE) to develop a technology transfer regional model. This report describes the development and testing of the Kansas Technology Transfer Model (KTTM) which is to be utilized as a regional model for the development of other technology transfer programs for independent operators throughout oil-producing regions in the US. It describes the linkage of the regional model with a proposed national technology transfer plan, an evaluation technique for improving and assessing the model, and the methodology which makes it adaptable on a regional basis. The report also describes management concepts helpful in managing a technology transfer program.

  6. Study on social capital running healthcare providers model in China

    Institute of Scientific and Technical Information of China (English)

    魏超; 叶睿; 孟开; 汝宇龙; 王若蒙

    2014-01-01

    Based on the case analysis method, eleven typical cases of social capital running healthcare providers are collected and analyzed from fourteen aspects, and five classification standards are proposed based on the cooperating party, property rights and mode of cooperation. According to these standards, the models of social capital running healthcare providers are divided into ten categories: direct establishment of hospitals by social capital, bank loans, foreign loans, finance leasing, business hosting, domestic capital cooperation, Sino-foreign joint ventures, joint-stock reform of existing public hospitals, stock cooperative systems, and overall transfer. The results of the study can be used as a reference for social capital running healthcare providers.

  7. Assessment of Quantitative Precipitation Forecasts from Operational NWP Models (Invited)

    Science.gov (United States)

    Sapiano, M. R.

    2010-12-01

    Previous work has shown that satellite and numerical model estimates of precipitation have complementary strengths, with satellites having greater skill at detecting convective precipitation events and model estimates having greater skill at detecting stratiform precipitation. This is due in part to the challenges associated with retrieving stratiform precipitation from satellites and the difficulty in resolving sub-grid scale processes in models. These complementary strengths can be exploited to obtain new merged satellite/model datasets, and several such datasets have been constructed using reanalysis data. Whilst reanalysis data are stable in a climate sense, they also have relatively coarse resolution compared to the satellite estimates (many of which are now commonly available at quarter degree resolution) and they necessarily use fixed forecast systems that are not state-of-the-art. An alternative to reanalysis data is to use Operational Numerical Weather Prediction (NWP) model estimates, which routinely produce precipitation with higher resolution and using the most modern techniques. Such estimates have not been combined with satellite precipitation and their relative skill has not been sufficiently assessed beyond model validation. The aim of this work is to assess the information content of the models relative to satellite estimates with the goal of improving techniques for merging these data types. To that end, several operational NWP precipitation forecasts have been compared to satellite and in situ data and their relative skill in forecasting precipitation has been assessed. In particular, the relationship between precipitation forecast skill and other model variables will be explored to see if these other model variables can be used to estimate the skill of the model at a particular time. Such relationships would provide a basis for determining weights and errors of any merged products.
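
    A common heuristic for the kind of skill-based merging discussed here is inverse-error-variance weighting. The error variances and precipitation values below are invented, and this simple scheme is a stand-in for the weighting the abstract proposes to derive from model variables:

```python
def merge_estimates(estimates, error_vars):
    """Merge precipitation estimates using weights proportional to the
    inverse of each estimate's error variance; more skillful sources
    (smaller error) receive larger weights."""
    weights = [1.0 / e for e in error_vars]
    total = sum(weights)
    weights = [w / total for w in weights]
    merged = sum(w * x for w, x in zip(weights, estimates))
    return merged, weights

# Satellite says 4 mm/h, model says 2 mm/h; the model's error variance
# is assumed lower here (e.g. a stratiform case), so it gets more weight.
merged, w = merge_estimates([4.0, 2.0], error_vars=[2.0, 1.0])
```

    The merged value always lies between the inputs, pulled toward whichever source is estimated to be more skillful at that time and place.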

  8. Model for Determining Fixed Costs for the Winter Service Operation

    Directory of Open Access Journals (Sweden)

    Matija Glad

    2006-07-01

    Full Text Available From the season 2005/06 a new dynamic model for the operation of the Winter Service in the Republic of Croatia will be used. The old model was based on three levels of readiness, and the roads were categorised primarily according to their administrative distribution. The new dynamic model also has three levels of readiness, but the first level is further divided into two service levels. A road is assigned to a certain readiness and service level according to traffic, climate and economic conditions. The new model splits the cost structure into fixed and variable costs. The investor wants to keep the fixed costs at a minimal level which will still guarantee proper readiness for quick intervention, and wants to ensure that a technological infrastructure for quality clearing of roads is created. The capital companies want larger fixed costs to ensure a certain profit, and defined fixed costs enable them to assess the profitability of the Winter Service operation. Such a structure forms the following relationship: in mild winters the capital companies "profit" and the investor "loses", and vice versa for cold winters. Mathematically, such a relationship should be treated as a finite strategic two-player game. This paper presents the model needed to forecast fixed costs in the new dynamic model for the operation of the Winter Service, by connecting linear programming and matrix game theory to study the problem in parallel from the standpoint of both players.
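
    The finite two-player game view can be made concrete with the classical 2x2 zero-sum game solution. The payoff numbers below are invented, not the paper's winter-service cost data:

```python
def value_2x2(a, b, c, d):
    """Value of a 2x2 zero-sum game with payoff matrix [[a, b], [c, d]]
    (payoffs to the row player). Checks for a saddle point first;
    otherwise applies the mixed-strategy value formula."""
    lower = max(min(a, b), min(c, d))   # maximin (row player guarantee)
    upper = min(max(a, c), max(b, d))   # minimax (column player guarantee)
    if lower == upper:                  # saddle point: pure strategies
        return lower
    return (a * d - b * c) / (a + d - b - c)

# Hypothetical investor-vs-contractor payoffs under (mild, cold) winters:
# rows are the investor's fixed-cost choices, columns the winter outcome.
v = value_2x2(2.0, -1.0, -1.0, 3.0)
```

    With no saddle point, each side should randomize its strategy; the game value summarizes the expected outcome of the "mild winter vs. cold winter" tension the abstract describes.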

  9. How to run 100 meters?

    CERN Document Server

    Aftalion, Amandine

    2016-01-01

    The aim of this paper is to bring a mathematical justification to the optimal way of organizing one's effort when running. It is well known from physiologists that all running exercises of duration less than 3 min are run with a strong initial acceleration and a decelerating end; on the contrary, long races are run with a final sprint. This can be explained using a mathematical model describing the evolution of the velocity, the anaerobic energy, and the propulsive force: a system of ordinary differential equations, based on Newton's second law and energy conservation, is coupled to the condition of optimizing the time to run a fixed distance. We show that the monotonicity of the velocity curve vs. time is the opposite of that of the oxygen uptake (VO2) vs. time. Since the oxygen uptake is monotone increasing for a short run, we prove that the velocity is exponentially increasing to its maximum and then decreasing. For longer races, the oxygen uptake has an increasing start and a decreasing end and this accounts for...
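
    The exponential rise of velocity at the start of a sprint follows from the Newton's-law part of such models, dv/dt = f - v/tau, with a propulsive force f bounded by some maximum F. A minimal Euler integration of that single equation (F, tau, and step sizes are invented values, and the full optimal-control problem with the energy equation is omitted):

```python
def sprint_velocity(F=8.0, tau=1.2, dt=0.01, T=6.0):
    """Integrate dv/dt = F - v/tau (force per unit mass minus a
    friction-like resistance) with the forward Euler method.
    The solution rises exponentially toward v_max = F * tau."""
    v, traj = 0.0, []
    for _ in range(int(T / dt)):
        v += dt * (F - v / tau)
        traj.append(v)
    return traj

traj = sprint_velocity()
```

    With the force held at its maximum, velocity increases monotonically and saturates at F*tau, matching the "exponentially increasing to its maximum" behavior proved in the paper for short runs; the subsequent deceleration comes from the energy constraint, which this sketch leaves out.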

  10. Quantification of evaporative running loss emissions from gasoline-powered passenger cars in California. Final report

    Energy Technology Data Exchange (ETDEWEB)

    McClement, D.

    1992-01-01

    The purpose of the study was to collect evaporative running emissions data from a cross section of in-use, light-duty passenger cars. Forty vehicles were procured and tested using the 'LA-4' cycle (the EPA Urban Dynamometer Driving Cycle (UDDS)) and the New York City Cycle (NYCC). The LA-4 cycle was run three times with a two-minute idle period between the first two runs. The NYCC was run six times with a two-minute idle between the first five runs of the cycle. Tests were performed at 95 and 105 degrees Fahrenheit, and using 7.5 and 9.0 Reid Vapor Pressure (RVP) fuel. The report describes two types of running losses - Type I, where emissions are emitted at a constant, low level (typical of late-model, properly operating vehicles), and Type II, where there is a high rate of emissions (typical of uncontrolled vehicles).

  11. WEB-BASED VIRTUAL CNC MACHINE MODELING AND OPERATION

    Institute of Scientific and Technical Information of China (English)

    2007-01-01

    A CNC simulation system based on the internet for operation training on manufacturing facilities and for manufacturing process simulation is proposed. Firstly, the system framework and a rapid modeling method for CNC machine tools are studied in a virtual environment based on PolyTrans and CAD software. Then, a new method is proposed to enhance and expand the interactive ability of the virtual reality modeling language (VRML) by establishing communication among VRML, Java Applet, JavaScript and HTML so as to realize virtual operation of the CNC machine tool. Moreover, an algorithm for material removal simulation based on a VRML Z-map is presented. The advantages of this algorithm include lower memory requirements and much higher computational speed. Lastly, a CNC milling machine is taken as an illustrative example for the prototype development in order to validate the feasibility of the proposed approach.
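
    The Z-map idea behind the material removal simulation is simple: the workpiece surface is a grid of heights, and a tool pass clamps every covered cell down to the tool's bottom height. The grid size, tool footprint, and depths below are invented; this is a deliberately tiny stand-in for the VRML-based implementation described:

```python
def mill(zmap, tool_cells, tool_bottom):
    """Z-map material removal: clamp every grid cell covered by the tool
    to the tool's bottom height. Cells never gain material, so repeated
    passes are handled correctly by the min() operation."""
    for i, j in tool_cells:
        zmap[i][j] = min(zmap[i][j], tool_bottom)
    return zmap

# 3x3 stock of height 10; mill a slot of depth 4 across the middle row.
stock = [[10.0] * 3 for _ in range(3)]
stock = mill(stock, [(1, 0), (1, 1), (1, 2)], tool_bottom=6.0)
```

    Storing only one height per cell is what gives the Z-map approach its low memory footprint and fast updates compared with full solid modeling.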

  12. Assessment model of dam operation risk based on monitoring data

    Institute of Scientific and Technical Information of China (English)

    2007-01-01

    Although dams produce remarkable social and economic benefits, the threat posed by unsafe dams to the lives and property of people living in downstream areas cannot be neglected. Based on monitoring data that reflect the safety condition of dams, the concept of risk degree is proposed, and an analysis system and model for evaluating the risk degree (rate) are established in this paper by combining reliability theory with field monitoring data. The analysis method for the risk degree is based on a Bayesian approach. A five-grade risk degree system for dam operation risk and the corresponding risk degrees are put forward according to the safety condition of dams. The operation risks of four cascade dams on a river are analyzed with the model and approach presented here, and the result has been adopted by the owner.
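
    The Bayesian step can be sketched as a discrete update over the five risk grades. The prior, the likelihoods, and the "anomalous seepage reading" scenario are all illustrative assumptions, not calibrated dam-safety values:

```python
def bayes_update(prior, likelihood):
    """Update a discrete distribution over risk grades given monitoring
    evidence, via Bayes' rule: posterior ~ prior * P(observation | grade)."""
    post = [p * l for p, l in zip(prior, likelihood)]
    z = sum(post)
    return [p / z for p in post]

# Five-grade risk system; a hypothetical anomalous monitoring reading is
# assumed more likely under the higher risk grades.
prior = [0.40, 0.30, 0.15, 0.10, 0.05]
likelihood = [0.05, 0.10, 0.30, 0.60, 0.80]   # P(observation | grade)
posterior = bayes_update(prior, likelihood)
```

    The monitoring evidence shifts probability mass from the low grades toward the higher ones, which is exactly how field data sharpen the reliability-theory prior in this kind of scheme.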

  13. Optimization of Operations Resources via Discrete Event Simulation Modeling

    Science.gov (United States)

    Joshi, B.; Morris, D.; White, N.; Unal, R.

    1996-01-01

    The resource levels required for operation and support of reusable launch vehicles are typically defined through discrete event simulation modeling. Minimizing these resources constitutes an optimization problem involving discrete variables and simulation. Conventional approaches to solve such optimization problems involving integer valued decision variables are the pattern search and statistical methods. However, in a simulation environment that is characterized by search spaces of unknown topology and stochastic measures, these optimization approaches often prove inadequate. In this paper, we have explored the applicability of genetic algorithms to the simulation domain. Genetic algorithms provide a robust search strategy that does not require continuity and differentiability of the problem domain. The genetic algorithm successfully minimized the operation and support activities for a space vehicle, through a discrete event simulation model. The practical issues associated with simulation optimization, such as stochastic variables and constraints, were also taken into consideration.
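
    A genetic algorithm over integer decision variables can be sketched compactly. The cost function below is a cheap analytic stand-in for an expensive discrete event simulation run, and the population sizes, rates, and "3 technicians, 5 spares" optimum are invented:

```python
import random

def ga_minimize(cost, bounds, pop_size=20, generations=40, seed=1):
    """Minimal genetic algorithm over two integer decision variables
    (e.g. resource levels): elitism, one-point crossover, +/-1 mutation."""
    rng = random.Random(seed)
    lo, hi = bounds
    pop = [[rng.randint(lo, hi), rng.randint(lo, hi)] for _ in range(pop_size)]
    best = min(pop, key=cost)
    for _ in range(generations):
        nxt = [best[:]]                     # elitism: keep the incumbent
        while len(nxt) < pop_size:
            p1, p2 = rng.sample(pop, 2)
            child = [p1[0], p2[1]]          # one-point crossover
            if rng.random() < 0.3:          # mutation: perturb one gene
                k = rng.randrange(2)
                child[k] = min(hi, max(lo, child[k] + rng.choice([-1, 1])))
            nxt.append(child)
        pop = nxt
        best = min(pop, key=cost)           # elitism keeps this monotone
    return best

# Stand-in "simulation": cost is minimized at resource levels (3, 5).
best = ga_minimize(lambda x: (x[0] - 3) ** 2 + (x[1] - 5) ** 2, (0, 10))
```

    Note what the abstract emphasizes: nothing here requires continuity or differentiability of the cost function, so a noisy simulation response can replace the lambda directly.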

  14. An Agent-based Modeling Method for Milk-run in Automobile Parts

    Institute of Scientific and Technical Information of China (English)

    屈新怀; 盛敏; 丁必荣

    2013-01-01

    Milk-run, as a new method of supply system management in automobile parts inbound logistics, can be considered a kind of complex adaptive system composed of suppliers, third-party logistics providers (3PL) and the automobile firm. Following its conceptual model, an agent-based modeling method is used. After defining the research purpose, the agents are abstracted at the level of corporate departments in the milk-run system, such as a production agent, a purchasing agent and a scheduling agent. The internal model of each agent is analyzed first, then a formal description method is adopted to describe agent behaviors, and finally the dynamic interactions between the agents are specified in Agent UML. The agent-based modeling method captures the operating mechanism of the milk-run system well and provides a basis for a subsequent computer simulation of milk-run based on the agent model.

  15. Modeling Reservoir-River Networks in Support of Optimizing Seasonal-Scale Reservoir Operations

    Science.gov (United States)

    Villa, D. L.; Lowry, T. S.; Bier, A.; Barco, J.; Sun, A.

    2011-12-01

    each timestep and minimize computational overhead. Power generation for each reservoir is estimated using a 2-dimensional regression that accounts for both the available head and turbine efficiency. The object-oriented architecture makes run configuration easy to update. The dynamic model inputs include inflow and meteorological forecasts while static inputs include bathymetry data, reservoir and power generation characteristics, and topological descriptors. Ensemble forecasts of hydrological and meteorological conditions are supplied in real-time by Pacific Northwest National Laboratory and are used as a proxy for uncertainty, which is carried through the simulation and optimization process to produce output that describes the probability that different operational scenarios will be optimal. The full toolset, which includes HydroSCOPE, is currently being tested on the Feather River system in Northern California and the Upper Colorado Storage Project.
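
    The quantity the head/efficiency regression is approximating is the standard hydropower relation. As a physical stand-in for that fitted 2-dimensional surface (all numbers invented, and a real turbine's efficiency would itself vary with head and flow):

```python
def hydro_power_mw(flow_m3s, head_m, efficiency):
    """Hydropower output from flow, available head and turbine efficiency:
    P = rho * g * Q * H * eta, converted from watts to megawatts."""
    rho, g = 1000.0, 9.81   # water density (kg/m^3), gravity (m/s^2)
    return rho * g * flow_m3s * head_m * efficiency / 1.0e6

# Hypothetical operating point: 200 m^3/s through 80 m of head at 90%.
p = hydro_power_mw(flow_m3s=200.0, head_m=80.0, efficiency=0.9)
```

    Replacing this formula with a fitted regression lets the optimizer evaluate generation cheaply at every candidate release without re-querying turbine performance tables.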

  16. Experimentally based characteristics model for performance mapping of dry-running helical screw expanders in closed-cycle applications

    Energy Technology Data Exchange (ETDEWEB)

    Hinsenkamp, G. (Lehrstuhl und Institut fuer Thermische Stroemungsmaschinen, Univ. Karlsruhe (Germany)); Willibald, U. (Steinmueller (L. und C.) GmbH, Gummersbach (Germany)); Wittig, S. (Lehrstuhl und Institut fuer Thermische Stroemungsmaschinen, Univ. Karlsruhe (Germany))

    1992-04-01

    Earlier investigations at the Institute for Thermal Turbomachinery (University of Karlsruhe) have shown favorable operating characteristics of positive-displacement type prime movers in the range of low power output. The main goal of this study is the development of a new parameter model which characterizes dry-running helical screw expanders. This model is derived by use of dimensional analysis. It is therefore independent from absolute values of machine size and process boundary conditions. As a special feature of the chosen model, all loss-describing specific work parameters at a constant circumferential Mach number are shown to be straight lines. Therefore, the specification of only two operating points is sufficient to carry out a quantitative loss analysis for design and off-design conditions of any characteristic line. For the first time, the presented model provides for the calculation and extrapolation of generalized volumetric and isentropic efficiencies. Finally, the documented analysis technique may readily be applied to other types of positive displacement expanders and compressors by modifying the presented parameters accordingly. (orig./HW)

  17. Modelling of Reservoir Operations using Fuzzy Logic and ANNs

    Science.gov (United States)

    Van De Giesen, N.; Coerver, B.; Rutten, M.

    2015-12-01

    Today, almost 40,000 large reservoirs, containing approximately 6,000 km3 of water and inundating an area of almost 400,000 km2, can be found on earth. Since these reservoirs have a storage capacity of almost one-sixth of the global annual river discharge, they have a large impact on the timing, volume and peaks of river discharges. Global Hydrological Models (GHMs) are thus significantly influenced by these anthropogenic changes in river flows. We developed a parametrically parsimonious method to extract operational rules based on historical reservoir storage and inflow time-series. Managing a reservoir is an imprecise and vague undertaking. Operators always face uncertainties about inflows, evaporation, seepage losses and the various water demands to be met. They often base their decisions on experience and on available information, like reservoir storage and the previous period's inflow. We modeled this decision-making process through a combination of fuzzy logic and artificial neural networks in an Adaptive-Network-based Fuzzy Inference System (ANFIS). In a sensitivity analysis, we compared results for reservoirs in Vietnam, Central Asia and the USA. ANFIS can indeed capture reservoir operations adequately when fed with a historical monthly time-series of inflows and storage. It was shown that using ANFIS, operational rules of existing reservoirs can be derived without much prior knowledge about the reservoirs. Their validity was tested by comparing actual and simulated releases with each other. For the eleven reservoirs modelled, the normalised outflow was predicted with an MSE of 0.002 to 0.044. The rules can be incorporated into GHMs. After a network for a specific reservoir has been trained, the inflow calculated by the hydrological model can be combined with the release and initial storage to calculate the storage for the next time-step using a mass balance. Subsequently, the release can be predicted one time-step ahead using the inflow and storage.
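
    The mass-balance step that chains the ANFIS-predicted release into the next time-step can be sketched directly from the text. The capacity clamp and spill handling are reasonable assumptions, and all numbers are arbitrary units:

```python
def step_storage(storage, inflow, release, capacity):
    """One time-step reservoir mass balance: S_next = S + inflow - release,
    clamped to [0, capacity]; any excess over capacity is reported as spill."""
    s = storage + inflow - release
    spill = max(0.0, s - capacity)
    return min(max(s, 0.0), capacity), spill

# One monthly step: release would come from the trained ANFIS network.
s, spill = step_storage(storage=80.0, inflow=30.0, release=20.0, capacity=100.0)
```

    Iterating this update with the GHM-computed inflow and the network-predicted release is what lets the derived rules run inside a global hydrological model.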

  18. Simulation of primary static recrystallization with cellular operator model

    OpenAIRE

    Mukhopadhyay, Prantik

    2005-01-01

    1. Based on the modified cellular automata approach of Reher [60] a cellular operator model has been developed that is capable of accounting for spatial and temporal inhomogeneity on a finer scale. For this a scalable subgrid automaton is introduced that allows for a high spatial resolution on demand and still high computational efficiency. The scalable subgrid permits to track the minute changes of growth front during recrystallization owing to local variations of boundary mobility and net d...

  19. Modeling operation mode of pellet boilers for residential heating

    Science.gov (United States)

    Petrocelli, D.; Lezzi, A. M.

    2014-11-01

    In recent years the consumption of wood pellets as an energy source for residential heating has increased, not only as fuel for stoves, but also for small-scale residential boilers that produce hot water used for both space heating and domestic hot water. Reduction of fuel consumption and pollutant emissions (CO, dust, HC) is an obvious target of wood pellet boiler manufacturers; however, they are also quite interested in producing low-maintenance appliances. The need for frequent maintenance turns into higher operating costs and inconvenience for the user, and into lower boiler efficiency and higher emissions as well. The aim of this paper is to present a theoretical model able to simulate the dynamic behavior of a pellet boiler. The model takes into account many features of real pellet boilers and makes it possible to focus on the influence of the boiler control strategy. Control strategy evaluation is based not only on pellet consumption and total emissions, but also on critical operating conditions such as start-up and shutdown or prolonged operation at substantially reduced power levels. Results are obtained for a residential heating system based on a wood pellet boiler coupled with a thermal energy storage. Results obtained so far show a weak dependence of performance, in terms of fuel consumption and total emissions, on the control strategy; however, some control strategies present critical issues regarding maintenance frequency.
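
    A basic control strategy of the kind such a model can compare is two-point (hysteresis) control on the thermal store temperature, which trades temperature tightness against start-up/stop frequency. The setpoint and band values are illustrative, not from the paper:

```python
def boiler_on(temp, was_on, setpoint=60.0, band=3.0):
    """Two-point (hysteresis) control for a pellet boiler feeding a thermal
    store: switch on below setpoint - band, off above setpoint + band, and
    otherwise hold the previous state to avoid frequent start-up/stop cycles,
    which are the critical (high-emission, high-maintenance) conditions."""
    if temp < setpoint - band:
        return True
    if temp > setpoint + band:
        return False
    return was_on

state = boiler_on(55.0, was_on=False)   # store temperature below 57 C
```

    Widening the band reduces cycling (fewer start-ups and stops) at the cost of larger temperature swings in the storage tank; a dynamic boiler model lets that trade-off be quantified in fuel use and emissions.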

  20. Dunkl operator, integrability, and pairwise scattering in rational Calogero model

    Science.gov (United States)

    Karakhanyan, David

    2017-05-01

    The integrability of the Calogero model can be expressed as a zero-curvature condition using Dunkl operators. The corresponding flat connections are non-local gauge transformations, which map the Calogero wave functions to symmetrized wave functions of a set of N free particles, i.e., they relate the corresponding scattering matrices to each other. The integrability of the Calogero model implies that any k-particle scattering reduces to successive pairwise scatterings. The consistency condition of this requirement is expressed by an analog of the Yang-Baxter relation.

  1. Weather modeling and forecasting of PV systems operation

    CERN Document Server

    Paulescu, Marius; Gravila, Paul; Badescu, Viorel

    2012-01-01

    In the past decade, there has been a substantial increase of grid-feeding photovoltaic applications, thus raising the importance of solar electricity in the energy mix. This trend is expected to continue and may even increase. Apart from the high initial investment cost, the fluctuating nature of the solar resource raises particular insertion problems in electrical networks. Proper grid managing demands short- and long-time forecasting of solar power plant output. Weather modeling and forecasting of PV systems operation is focused on this issue. Models for predicting the state of the sky, nowc

  2. Operational dynamic modeling transcending quantum and classical mechanics.

    Science.gov (United States)

    Bondar, Denys I; Cabrera, Renan; Lompay, Robert R; Ivanov, Misha Yu; Rabitz, Herschel A

    2012-11-09

    We introduce a general and systematic theoretical framework for operational dynamic modeling (ODM) by combining a kinematic description of a model with the evolution of the dynamical average values. The kinematics includes the algebra of the observables and their defined averages. The evolution of the average values is drawn in the form of Ehrenfest-like theorems. We show that ODM is capable of encompassing wide-ranging dynamics from classical non-relativistic mechanics to quantum field theory. The generality of ODM should provide a basis for formulating novel theories.

  3. Model Predictive Control for the Operation of Building Cooling Systems

    Energy Technology Data Exchange (ETDEWEB)

    Ma, Yudong; Borrelli, Francesco; Hencey, Brandon; Coffey, Brian; Bengea, Sorin; Haves, Philip

    2010-06-29

    A model predictive control (MPC) scheme is designed for optimal thermal energy storage in building cooling systems. We focus on buildings equipped with a water tank used for actively storing cold water produced by a series of chillers. Typically, the chillers are operated at night to recharge the storage tank in order to meet the building demands on the following day. In this paper, we build on our previous work, improve the building load model, and present experimental results. The experiments show that MPC can reduce the central plant electricity cost and improve its efficiency.
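The night-charging behavior that MPC exploits can be sketched with a toy storage-dispatch problem. Everything below (tank capacity, tariffs, the brute-force search over on/off schedules) is an illustrative assumption, not the controller or building model used in the paper:

```python
from itertools import product

def mpc_step(storage, prices, demand, cap=10.0, chill=3.0, horizon=6):
    """One receding-horizon step: enumerate chiller on/off schedules over
    the horizon, keep the cheapest one that never empties or overflows
    the storage tank, and (in a real controller) apply only its first move."""
    best_cost, best_u = None, None
    for u in product([0, 1], repeat=horizon):
        s, cost, feasible = storage, 0.0, True
        for t in range(horizon):
            s += chill * u[t] - demand[t]     # tank energy balance
            if s < 0.0 or s > cap:            # schedule is infeasible
                feasible = False
                break
            cost += prices[t] * chill * u[t]  # electricity cost of chilling
        if feasible and (best_cost is None or cost < best_cost):
            best_cost, best_u = cost, u
    return best_u, best_cost

# Cheap night tariff (t = 0..2), expensive day tariff (t = 3..5):
prices = [0.05, 0.05, 0.05, 0.20, 0.20, 0.20]
demand = [0.0, 0.0, 0.0, 2.0, 2.0, 2.0]
u, cost = mpc_step(storage=1.0, prices=prices, demand=demand)
print(u, cost)  # the optimizer charges at night and coasts through the day
```

A real MPC formulation would solve a linear or quadratic program with a building load model instead of this exponential enumeration, but the structure is the same: predict, optimize over the horizon, apply the first move, re-plan.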

  4. Communicating Sustainability: An Operational Model for Evaluating Corporate Websites

    Directory of Open Access Journals (Sweden)

    Alfonso Siano

    2016-09-01

    Full Text Available The interest in corporate sustainability has increased rapidly in recent years and has encouraged organizations to adopt appropriate digital communication strategies, in which the corporate website plays a key role. Despite this growing attention in both the academic and business communities, models for the analysis and evaluation of online sustainability communication have not been developed to date. This paper aims to develop an operational model to identify and assess the requirements of sustainability communication in corporate websites. It has been developed from a literature review on corporate sustainability and digital communication, and from an analysis of the websites of the organizations included in the “Global CSR RepTrak 2015” by the Reputation Institute. The model identifies the core dimensions of online sustainability communication (orientation, structure, ergonomics, content—OSEC), sub-dimensions (such as stakeholder engagement and governance tools), communication principles, and measurable items (e.g., presence of the materiality matrix, interactive graphs). A pilot study on the websites of the energy and utilities companies included in the Dow Jones Sustainability World Index 2015 confirms the applicability of the OSEC framework. Thus, the model can provide managers and digital communication consultants with an operational tool useful for developing an industry ranking and assessing best practices. The model can also help practitioners to identify corrective actions in the critical areas of digital sustainability communication and avoid greenwashing.

  5. Model validity and frequency band selection in operational modal analysis

    Science.gov (United States)

    Au, Siu-Kui

    2016-12-01

    Experimental modal analysis aims at identifying the modal properties (e.g., natural frequencies, damping ratios, mode shapes) of a structure using vibration measurements. Two basic questions are encountered when operating in the frequency domain: Is there a mode near a particular frequency? If so, how much spectral data near the frequency can be included for modal identification without incurring significant modeling error? For data with high signal-to-noise (s/n) ratios, these questions can be addressed using empirical tools such as the singular value spectrum. Otherwise they are generally open and can be challenging, e.g., for modes with low s/n ratios or close modes. In this work these questions are addressed using a Bayesian approach. The focus is on operational modal analysis, i.e., with 'output-only' ambient data, where identification uncertainty and modeling error can be significant and their control is most demanding. The approach leads to 'evidence ratios' quantifying the relative plausibility of competing sets of modeling assumptions. The latter involves modeling the 'what-if-not' situation, which is non-trivial but is resolved by systematic consideration of alternative models and by using the maximum entropy principle. Synthetic and field data are considered to investigate the behavior of evidence ratios and how they should be interpreted in practical applications.
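The notion of an evidence ratio can be illustrated with a one-dimensional toy that is not the paper's formulation: compare the marginal likelihood of "a signal with an unknown offset, prior N(0, τ²)" against "zero-mean noise only", with the offset integrated out analytically:

```python
import math
from statistics import NormalDist

def log_evidence_ratio(data, sigma=1.0, tau=3.0):
    """log p(data | H1) - log p(data | H0) for a toy model comparison:
    H0: y_i ~ N(0, sigma^2)                       (noise only)
    H1: y_i = mu_i + noise, independent mu_i ~ N(0, tau^2), so that
        marginally y_i ~ N(0, sigma^2 + tau^2)    (signal present)."""
    h0 = NormalDist(0.0, sigma)
    h1 = NormalDist(0.0, math.hypot(sigma, tau))  # sqrt(sigma^2 + tau^2)
    return sum(math.log(h1.pdf(y)) - math.log(h0.pdf(y)) for y in data)

noise_only = [0.3, -0.5, 0.1, -0.2, 0.4]
with_signal = [y + 4.0 for y in noise_only]
print(log_evidence_ratio(noise_only))   # negative: evidence favors H0
print(log_evidence_ratio(with_signal))  # positive: evidence favors H1
```

The sign of the log evidence ratio plays the role of the paper's relative plausibility between competing sets of modeling assumptions; the actual modal-analysis formulation is considerably richer.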

  6. Operational use of distributed hydrological models. Experiences and challenges at a Norwegian hydropower company (Agder Energi).

    Science.gov (United States)

    Viggo Matheussen, Bernt; Andresen, Arne; Weisser, Claudia

    2014-05-01

    The Scandinavian hydropower industry has traditionally adopted the lumped conceptual hydrological model HBV as the tool for producing forecasts of inflows and mountain snow packs. Such forecasting systems, based on lumped conceptual models, have several drawbacks. Firstly, a lumped model does not produce spatial data, and comparisons with remotely sensed snow cover data (which are now available) are complicated. Secondly, several climate parameters such as wind speed are now becoming more available and can potentially improve forecasts through improved estimates of precipitation gauge efficiency and more physically correct calculation of turbulent heat fluxes. Finally, when the number of catchments increases, it is cumbersome and slow to run multiple hydrology models compared to running one model for all catchments. With the drawbacks of the lumped hydrology models in mind, and with inspiration from other forecasting systems using distributed models, Agder Energi decided to develop a forecasting system applying a physically based distributed model. In this paper we describe an operational inflow and snowpack forecast system developed for the Scandinavian mountain range. The system applies a modern macroscale land surface hydrology model (VIC), which in combination with historical climate data and weather predictions can be used to produce both short-term and seasonal forecasts of inflow and mountain snowpack. Experiences with the forecast system are illustrated using results from individual subcatchments as well as aggregated regional forecasts of inflow and snowpack. Conversion of water volumes into effective energy inflow is also presented and compared to data from the Nordic hydropower system. Furthermore, we document several important "lessons learned" that may be of interest to the hydrological research community. Specifically, a semi-automatic data cleansing system combining spatial and temporal visualization techniques with statistical procedures are

  7. New Calculations in Dirac Gaugino Models: Operators, Expansions, and Effects

    CERN Document Server

    Carpenter, Linda M

    2015-01-01

    In this work we calculate important one loop SUSY-breaking parameters in models with Dirac gauginos, which are implied by the existence of heavy messenger fields. We find that these SUSY-breaking effects are all related by a small number of parameters, thus the general theory is tightly predictive. In order to make the most accurate analyses of one loop effects, we introduce calculations using an expansion in SUSY breaking messenger mass, rather than relying on postulating the forms of effective operators. We use this expansion to calculate one loop contributions to gaugino masses, non-holomorphic SM adjoint masses, new A-like and B-like terms, and linear terms. We also test the Higgs potential in such models, and calculate one loop contributions to the Higgs mass in certain limits of R-symmetric models, finding a very large contribution in many regions of the $\\mu$-less MSSM, where Higgs fields couple to standard model adjoint fields.

  8. Addressing the Challenges of Distributed Hydrologic Modeling for Operational Forecasting

    Science.gov (United States)

    Butts, M. B.; Yamagata, K.; Kobor, J.; Fontenot, E.

    2008-05-01

    Operational forecasting systems must provide reliable, accurate and timely flood forecasts for a range of catchments, from small, rapidly responding mountain catchments and urban areas to large, complex but more slowly responding fluvial systems. Flood forecasting systems have evolved from simple forecasting for flood mitigation to real-time decision support systems for reservoir operations for water supply, navigation and hydropower, for managing environmental flows and habitat protection, and for cooling water and water quality forecasting. These different requirements lead to a number of challenges in applying distributed modelling in an operational context. These challenges include the often short time available for forecasting, which requires a trade-off between model complexity and accuracy on the one hand and the need for efficient calculations to reduce computation times on the other. Limitations in the data available in real time require modelling tools that can not only operate on a minimum of data but also take advantage of new data sources such as weather radar, satellite remote sensing and wireless sensors. Finally, models must not only accurately predict flood peaks but also forecast low flows and surface water-groundwater interactions, water quality, water temperature, optimal reservoir levels, and inundated areas. This paper shows how these challenges are being addressed in a number of case studies. The central strategy has been to develop a flexible modelling framework that can be adapted to different data sources, different levels of complexity and spatial distribution, and different modelling objectives. The resulting framework allows, amongst other things, optimal use of grid-based precipitation fields from weather radar and numerical weather models, direct integration of satellite remote sensing, and a unique capability to treat a range of new forecasting problems such as flooding conditioned by surface water-groundwater interactions.

  9. Are multiple runs better than one?

    Energy Technology Data Exchange (ETDEWEB)

    Cantu-Paz, E

    2001-01-04

    This paper investigates whether it is better to use a certain constant amount of computational resources in a single run with a large population, or in multiple runs with smaller populations. The paper presents the primary tradeoffs involved in this problem and identifies the conditions under which there is an advantage to using multiple small runs. The paper uses an existing model that relates the quality of the solutions reached by a GA to its population size. The results suggest that in most cases a single run with the largest population possible reaches a better solution than multiple isolated runs. The findings are validated with experiments on functions of varying difficulty.
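The tradeoff can be made concrete with a toy experiment, a hypothetical OneMax GA under a fixed evaluation budget rather than the model used in the paper: spend 2000 fitness evaluations either on one population of 100 or on ten independent populations of 10:

```python
import random

def onemax_ga(pop_size, generations, n_bits=40, rng=None):
    """Minimal generational GA on OneMax (maximize the number of 1-bits):
    binary tournament selection, uniform crossover, per-bit mutation.
    Returns the best fitness found in the final population."""
    rng = rng or random.Random(0)
    pop = [[rng.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
    fit = sum  # OneMax fitness: count of 1s
    for _ in range(generations):
        nxt = []
        for _ in range(pop_size):
            p1 = max(rng.sample(pop, 2), key=fit)   # tournament 1
            p2 = max(rng.sample(pop, 2), key=fit)   # tournament 2
            child = [x if rng.random() < 0.5 else y for x, y in zip(p1, p2)]
            child = [b ^ (rng.random() < 1 / n_bits) for b in child]  # mutate
            nxt.append(child)
        pop = nxt
    return max(map(fit, pop))

# Same budget of 2000 evaluations, split two ways:
single = onemax_ga(pop_size=100, generations=20, rng=random.Random(1))
multi = max(onemax_ga(pop_size=10, generations=20, rng=random.Random(i))
            for i in range(10))
print(single, multi)
```

On an easy, unimodal problem like OneMax the paper's conclusion (one large run wins) typically shows up; the advantage of multiple runs, when it exists, tends to appear on deceptive or multimodal functions.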

  10. Plasmaspheric electron densities: the importance in modelling radiation belts and in SSA operation

    Science.gov (United States)

    Lichtenberger, János; Jorgensen, Anders; Koronczay, Dávid; Ferencz, Csaba; Hamar, Dániel; Steinbach, Péter; Clilverd, Mark; Rodger, Craig; Juhász, Lilla; Sannikov, Dmitry; Cherneva, Nina

    2016-04-01

    The Automatic Whistler Detector and Analyzer Network (AWDANet, Lichtenberger et al., J. Geophys. Res., 113, 2008, A12201, doi:10.1029/2008JA013467) is able to detect and analyze whistlers in quasi-realtime and can provide equatorial electron density data. The plasmaspheric electron densities are key parameters for plasmasphere models in Space Weather related investigations, particularly in modeling charged particle accelerations and losses in the Radiation Belts. The global AWDANet detects millions of whistlers in a year. The network has operated since early 2002 with automatic whistler detector capability, and has recently been completed with automatic analyzer capability in the PLASMON (http://plasmon.elte.hu, Lichtenberger et al., Space Weather Space Clim. 3 2013, A23, DOI: 10.1051/swsc/2013045) EU FP7-Space project. It is based on a recently developed whistler inversion model (Lichtenberger, J., J. Geophys. Res., 114, 2009, A07222, doi:10.1029/2008JA013799) that opened the way for an automated process of whistler analysis, not only for single whistler events but for complex analysis of multiple-path propagation whistler groups. The network has operated in quasi-real-time mode since mid-2014; fifteen stations provide equatorial electron densities that are used as inputs for a data-assimilative plasmasphere model, but they can also be used directly in space weather research and models. We have started to process the archive data collected by AWDANet stations since 2002, and in this paper we present the results of quasi-real-time and off-line runs processing whistlers from quiet and disturbed periods. The equatorial electron densities obtained by whistler inversion are fed into the assimilative model of the plasmasphere, providing a global view of the region for the processed periods.

  11. Runs of homozygosity associated with speech delay in autism in a Taiwanese Han population: evidence for the recessive model.

    Directory of Open Access Journals (Sweden)

    Ping-I Lin

    Full Text Available Runs of homozygosity (ROH) may play a role in complex diseases. In the current study, we aimed to test if ROHs are linked to the risk of autism and related language impairment. We analyzed 546,080 SNPs in 315 Han Chinese affected with autism and 1,115 controls. ROH was defined as an extended homozygous haplotype spanning at least 500 kb. Relative extended haplotype homozygosity (REHH) for the trait-associated ROH region was calculated to search for the signature of selection sweeps. In total, we identified 676 ROH regions. An ROH region on 11q22.3 was significantly associated with speech delay (corrected p = 1.73×10⁻⁸). This region contains the NPAT and ATM genes associated with ataxia telangiectasia, which is characterized by language impairment; the CUL5 (cullin 5) gene in the same region may modulate the neuronal migration process related to language functions. These three genes are highly expressed in the cerebellum. No evidence for recent positive selection was detected on the core haplotypes in this region. The same ROH region was also nominally significantly associated with speech delay in another independent sample (p = 0.037; combinatorial analysis Stouffer's z trend = 0.0005). Taken together, our findings suggest that extended recessive loci on 11q22.3 may play a role in language impairment in autism. More research is warranted to investigate if these genes influence speech pathology by perturbing cerebellar functions.
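The ROH definition used here (consecutive homozygous calls spanning at least 500 kb) can be sketched as a linear scan over sorted genotype calls. This is an illustrative toy only; production tools such as PLINK additionally tolerate a few heterozygous or missing calls inside a run:

```python
def find_roh(calls, min_span=500_000):
    """calls: list of (position_bp, is_het) sorted by position on one
    chromosome. Returns (start_bp, end_bp) for every run of consecutive
    homozygous calls spanning at least min_span base pairs."""
    runs, start, last = [], None, None
    for pos, is_het in calls + [(None, True)]:   # sentinel flushes last run
        if is_het:
            if start is not None and last - start >= min_span:
                runs.append((start, last))
            start = None
        else:
            if start is None:
                start = pos                       # run begins here
            last = pos                            # run extends to here
    return runs

# 590 kb homozygous stretch, one het call, then a short 90 kb stretch:
calls = ([(i * 10_000, False) for i in range(60)]
         + [(700_000, True)]
         + [(800_000 + i * 10_000, False) for i in range(10)])
print(find_roh(calls))  # -> [(0, 590000)]
```

Only the first stretch clears the 500 kb threshold, so only it is reported as an ROH.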

  12. Runs of homozygosity associated with speech delay in autism in a Taiwanese Han population: evidence for the recessive model.

    Science.gov (United States)

    Lin, Ping-I; Kuo, Po-Hsiu; Chen, Chia-Hsiang; Wu, Jer-Yuarn; Gau, Susan S-F; Wu, Yu-Yu; Liu, Shih-Kai

    2013-01-01

    Runs of homozygosity (ROH) may play a role in complex diseases. In the current study, we aimed to test if ROHs are linked to the risk of autism and related language impairment. We analyzed 546,080 SNPs in 315 Han Chinese affected with autism and 1,115 controls. ROH was defined as an extended homozygous haplotype spanning at least 500 kb. Relative extended haplotype homozygosity (REHH) for the trait-associated ROH region was calculated to search for the signature of selection sweeps. In total, we identified 676 ROH regions. An ROH region on 11q22.3 was significantly associated with speech delay (corrected p = 1.73×10⁻⁸). This region contains the NPAT and ATM genes associated with ataxia telangiectasia, which is characterized by language impairment; the CUL5 (cullin 5) gene in the same region may modulate the neuronal migration process related to language functions. These three genes are highly expressed in the cerebellum. No evidence for recent positive selection was detected on the core haplotypes in this region. The same ROH region was also nominally significantly associated with speech delay in another independent sample (p = 0.037; combinatorial analysis Stouffer's z trend = 0.0005). Taken together, our findings suggest that extended recessive loci on 11q22.3 may play a role in language impairment in autism. More research is warranted to investigate if these genes influence speech pathology by perturbing cerebellar functions.
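The cross-sample combination of evidence uses Stouffer's method. A minimal unweighted sketch follows; since the abstract does not specify the weighting or the exact trend construction, this will not reproduce the reported 0.0005:

```python
from statistics import NormalDist

def stouffer(p_values, weights=None):
    """Stouffer's method for one-sided p-values:
    z_i = Phi^{-1}(1 - p_i),  Z = sum(w_i * z_i) / sqrt(sum(w_i^2)),
    combined p = 1 - Phi(Z)."""
    nd = NormalDist()
    w = weights or [1.0] * len(p_values)
    z = [nd.inv_cdf(1.0 - p) for p in p_values]
    big_z = (sum(wi * zi for wi, zi in zip(w, z))
             / sum(wi * wi for wi in w) ** 0.5)
    return 1.0 - nd.cdf(big_z)

# The two p-values quoted in the abstract (discovery and replication):
combined = stouffer([1.73e-8, 0.037])
print(combined)
```

The combined p-value is far smaller than the replication sample's 0.037 alone, which is the qualitative point of pooling the two samples.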

  13. LHCf completes its first run

    CERN Multimedia

    CERN Bulletin

    2010-01-01

    LHCf, one of the three smaller experiments at the LHC, has completed its first run. The detectors were removed last week and the analysis of data is continuing. The first results will be ready by the end of the year.   One of the two LHCf detectors during the removal operations inside the LHC tunnel. LHCf is made up of two independent detectors located in the tunnel 140 m either side of the ATLAS collision point. The experiment studies the secondary particles created during the head-on collisions in the LHC because they are similar to those created in a cosmic ray shower produced when a cosmic particle hits the Earth’s atmosphere. The focus of the experiment is to compare the various shower models used to estimate the primary energy of ultra-high-energy cosmic rays. The energy of proton-proton collisions at the LHC will be equivalent to a cosmic ray of 1017eV hitting the atmosphere, very close to the highest energies observed in the sky. “We have now completed the fir...

  14. MPS Solidification Model. Volume 2: Operating guide and software documentation for the unsteady model

    Science.gov (United States)

    Maples, A. L.

    1981-01-01

    The operation of solidification Model 2 is described and documentation of the software associated with the model is provided. Model 2 calculates the macrosegregation in a rectangular ingot of a binary alloy as a result of unsteady horizontal axisymmetric bidirectional solidification. The solidification program allows interactive modification of calculation parameters as well as selection of graphical and tabular output. In batch mode, parameter values are input in card image form and output consists of printed tables of solidification functions. The operational aspects of Model 2 that differ substantially from Model 1 are described. The global flow diagrams and data structures of Model 2 are included. The primary program documentation is the code itself.

  15. Running surface couplings

    OpenAIRE

    1995-01-01

    We discuss the renormalization group improved effective action and running surface couplings in curved spacetime with boundary. Using a scalar self-interacting theory as an example, we study the influence of boundary effects on the effective equations of motion in a spherical cap, and the relevance of surface running couplings to quantum cosmology and the symmetry breaking phenomenon. Running surface couplings in the asymptotically free SU(2) gauge theory are found.

  16. Island operation - modelling of a small hydro power system

    Energy Technology Data Exchange (ETDEWEB)

    Skarp, Stefan

    2000-02-01

    Simulation is a useful tool for investigating system behaviour. It is a way to examine operating situations without having to perform them in reality. If, for example, someone wants to test an operating situation in which the system might be destroyed, a computer simulation can be both cheaper and safer than a test in reality. This master thesis performs and analyses a simulation modelling an electric power system. The system consists of a small hydro power station, a wood refining industry, and interconnecting power system components. In the simulated situation the system works in so-called island operation. The thesis aims at a capacity analysis of the current system. Above all, the goal is to find restrictions on the consumer's load power profile under given circumstances. The computer software used in the simulations is Matlab and its additional package PSB (Power System Blockset). The work has been carried out in co-operation with the power supplier Skellefteaa Kraft, where the problem formulation of this master thesis originated.

  17. Energy balance model of a SOFC cogenerator operated with biogas

    Science.gov (United States)

    Van herle, Jan; Maréchal, F.; Leuenberger, S.; Favrat, D.

    A small cogeneration system based on a Solid Oxide Fuel Cell (SOFC) fed with the renewable energy source biogas is presented. An existing farm biogas production site (35 m³ per day), currently equipped with a SOFC demonstration stack, is taken for reference. A process flow diagram was defined in a software package that allows system operating parameters, such as the fuel inlet composition, reforming technology, stack temperature and stack current (or fuel conversion), to be varied. For system reforming simplicity, a base case parameter set was defined as a fuel inlet of 60% CH₄ : 40% CO₂ mixed with air in a 1:1 ratio, together with an 800 °C operating temperature and 80% fuel conversion. A model stack, consisting of 100 series elements of anode-supported electrolyte cells of 100 cm² each, was calculated to deliver 3.1 kWel and 5.16 kWth from an input of 1.5 Nm³/h of biogas (8.95 kW LHV), corresponding to 33.8% electrical and 57.6% thermal efficiencies (Lower Heating Value (LHV) basis), respectively. The influence on the efficiencies of the model system was examined by varying a number of parameters, such as the CO₂ content in the biogas, the amount of air added to the biogas stream, the addition of steam to the fuel inlet, the air excess ratio λ and the stack operating temperature, and the results are discussed.
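The quoted fuel input and efficiency figures follow from straightforward LHV bookkeeping. The sketch below assumes a generic methane LHV of 35.8 MJ/Nm³ (a literature value, not stated in the abstract); it reproduces the 8.95 kW fuel input, and small differences from the quoted 33.8% electrical efficiency would come from the exact LHV used:

```python
LHV_CH4 = 35.8  # MJ per normal m^3 of CH4 -- assumed generic value

def biogas_power_kw(flow_nm3_h, ch4_fraction):
    """Chemical power of a biogas stream on an LHV basis (CO2 is inert)."""
    return flow_nm3_h * ch4_fraction * LHV_CH4 * 1000.0 / 3600.0  # MJ/h -> kW

fuel_kw = biogas_power_kw(1.5, 0.60)  # ~8.95 kW, matching the abstract
eta_el = 3.1 / fuel_kw                # electrical efficiency (LHV basis)
eta_th = 5.16 / fuel_kw               # thermal efficiency (LHV basis)
print(round(fuel_kw, 2), round(eta_el, 3), round(eta_th, 3))
```

The thermal figure lands on the quoted 57.6%; the electrical figure comes out near 34.6%, so the abstract's 33.8% evidently rests on a slightly different LHV or accounting convention.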

  18. Evaluation of the operational Air-Quality forecast model for Austria ALARO-CAMx

    Science.gov (United States)

    Flandorfer, Claudia; Hirtl, Marcus

    2016-04-01

    The Air-Quality model for Austria (AQA) has been operated at ZAMG by order of the regional governments of Vienna, Lower Austria, and Burgenland since 2005. The emphasis of this modeling system is on predicting ozone peaks in the North-east Austrian flatlands. The modeling system is currently a combination of the meteorological model ALARO and the photochemical dispersion model CAMx. Two modeling domains are used, with the highest resolution (5 km) in the alpine region. Various extensions with external data sources have been implemented in the past to improve the daily forecasts of the model, e.g. data assimilation of O3 and PM10 observations from the Austrian measurement network (using the optimum interpolation technique), MACC-II boundary conditions, and the combination of high-resolution emission inventories for Austria with TNO and EMEP data. The biogenic emissions are provided by the SMOKE model. The model runs twice per day for a period of 48 hours. ZAMG provides daily forecasts of O3, PM10 and NO2 to the regional governments of Austria. The evaluation of these forecasts is done for January to September 2015, with the main focus on the summer peaks of ozone. The measurements of the Air-Quality stations are compared with the point forecasts at the station sites and the area forecasts for every province of Austria. Several heat waves occurred between June and September 2015 (new temperature records for St. Pölten and Linz). During these periods the information threshold for ozone was exceeded 19 times, mostly in the Eastern regions of Austria. Values above the alert threshold were measured at some stations in Lower Austria and Vienna at the beginning of July. For the evaluation, the results for the periods with exceedances in Eastern Austria will be discussed in detail.

  19. Groundwater flow modelling of the excavation and operational phases - Laxemar

    Energy Technology Data Exchange (ETDEWEB)

    Svensson, Urban (Computer-aided Fluid Engineering AB, Lyckeby (Sweden)); Rhen, Ingvar (SWECO Environment AB, Falun (Sweden))

    2010-12-15

    As a part of the license application for a final repository for spent nuclear fuel at Forsmark, the Swedish Nuclear Fuel and Waste Management Company (SKB) has undertaken a series of groundwater flow modelling studies. These represent time periods with different hydraulic conditions, and the simulations carried out contribute to the overall evaluation of the repository design and long-term radiological safety. The modelling study reported here presents calculated inflow rates, drawdown of the groundwater table and upconing of deep saline water for different levels of grouting efficiency during the excavation and operational phases of a final repository at Laxemar. The inflow calculations were accompanied by a sensitivity study, which, among other matters, addressed the impact of different deposition hole rejection criteria. The report also presents tentative modelling results for the duration of the saturation phase, which starts once the used parts of the repository are being backfilled.

  20. Applying profile- and catchment-based mathematical models for evaluating the run-off from a Nordic catchment

    Directory of Open Access Journals (Sweden)

    Farkas Csilla

    2016-09-01

    Full Text Available Knowledge of hydrological processes and water balance elements is important for climate-adaptive water management as well as for introducing mitigation measures aiming to improve surface water quality. Mathematical models have the potential to estimate changes in hydrological processes under changing climatic or land use conditions. These models, however, need careful calibration and testing before being applied in decision making. The aim of this study was to compare the capability of five different hydrological models to predict the runoff and the soil water balance elements of a small catchment in Norway. The models were harmonised and calibrated against the same data set. Overall, good agreement between the measured and simulated runoff was obtained for the different models when integrating the results over a week or longer periods. Model simulations indicate that forest appears to be very important for the water balance in the catchment, and that there is a lack of information on land-use-specific water balance elements. We concluded that joint application of hydrological models serves as a good basis for ensemble modelling of water transport processes within a catchment and can highlight the uncertainty of model forecasts.
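The observation that agreement improves when results are integrated over a week or longer can be checked with a toy Nash-Sutcliffe efficiency (NSE) computation; the series below are illustrative, not data from the study:

```python
def nse(observed, simulated):
    """Nash-Sutcliffe efficiency: 1 - SSE / variance about the observed
    mean. 1.0 is a perfect fit; 0.0 is no better than the mean."""
    mean_obs = sum(observed) / len(observed)
    sse = sum((o - s) ** 2 for o, s in zip(observed, simulated))
    var = sum((o - mean_obs) ** 2 for o in observed)
    return 1.0 - sse / var

def weekly(daily):
    """Aggregate a daily series into 7-day sums (trailing days dropped)."""
    return [sum(daily[i:i + 7]) for i in range(0, len(daily) - 6, 7)]

obs = [1, 2, 8, 3, 2, 1, 1, 5, 9, 4, 2, 2, 1, 1]
sim = [1, 3, 6, 4, 2, 1, 1, 4, 10, 3, 3, 2, 1, 1]  # peaks slightly shifted
daily_nse = nse(obs, sim)
weekly_nse = nse(weekly(obs), weekly(sim))
print(daily_nse, weekly_nse)  # timing errors wash out in the weekly sums
```

Small timing errors that penalize the daily score disappear once flows are summed over a week, which is exactly the effect reported in the abstract.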

  1. Running gratings in photoconductive materials

    DEFF Research Database (Denmark)

    Kukhtarev, N. V.; Kukhtareva, T.; Lyuksyutov, S. F.

    2005-01-01

    Starting from the three-dimensional version of a standard photorefractive model (STPM), we obtain a reduced, compact set of equations for the electric field based on the assumption of quasi-steady-state fast recombination. The equations are suitable for evaluation of a current induced by running...

  2. An Operational Configuration of the ARPS Data Analysis System to Initialize WRF in the NWS Environmental Modeling System

    Science.gov (United States)

    Case, Jonathan; Blottman, Pete; Hoeth, Brian; Oram, Timothy

    2006-01-01

    The Weather Research and Forecasting (WRF) model is the next-generation community mesoscale model designed to enhance collaboration between the research and operational sectors. The NWS as a whole has begun a transition toward WRF as the mesoscale model of choice to use as a tool in making local forecasts. Currently, both the National Weather Service in Melbourne, FL (NWS MLB) and the Spaceflight Meteorology Group (SMG) are running the Advanced Regional Prediction System (ARPS) Data Analysis System (ADAS) every 15 minutes over the Florida peninsula to produce high-resolution diagnostics supporting their daily operations. In addition, NWS MLB and SMG have used ADAS to provide initial conditions for short-range forecasts from the ARPS numerical weather prediction (NWP) model. Both NWS MLB and SMG have derived great benefit from the maturity of ADAS, and would like to use ADAS to provide initial conditions to WRF. In order to assist in this WRF transition effort, the Applied Meteorology Unit (AMU) was tasked to configure and implement an operational version of WRF that uses output from ADAS for the model initial conditions. Both agencies asked the AMU to develop a framework that allows the ADAS initial conditions to be incorporated into the WRF Environmental Modeling System (EMS) software. Developed by the NWS Science Operations Officer (SOO) Science and Training Resource Center (STRC), the EMS is a complete, full-physics NWP package that incorporates dynamical cores from both the National Center for Atmospheric Research's Advanced Research WRF (ARW) and the National Centers for Environmental Prediction's Non-Hydrostatic Mesoscale Model (NMM) into a single end-to-end forecasting system. The EMS performs nearly all pre- and post-processing and can be run automatically to obtain external grid data for WRF boundary conditions, run the model, and convert the data into a format that can be readily viewed within the Advanced Weather Interactive Processing System

  3. Ice-ocean-ecosystem operational model of the Baltic Sea

    Science.gov (United States)

    Janecki, M.; Dzierzbicka-Glowacka, L.; Jakacki, J.; Nowicki, A.

    2012-04-01

    3D-CEMBS is a fully coupled model adapted for the Baltic Sea, developed within a grant supported by the Polish State Committee for Scientific Research. The model is based on CESM1.0 (Community Earth System Model); in our configuration it consists of two active components (ocean and ice) driven by the central coupler (CPL7). The ocean (POP, version 2.1) and ice (CICE, version 4.0) models are forced by atmospheric and land data models. Atmospheric data sets are provided by the ICM-UM model from the University of Warsaw. Additionally, the land model provides runoff to the Baltic Sea (currently 78 rivers). The ecosystem model is based on an intermediate-complexity marine ecosystem model for the global domain (J.K. Moore et al., 2002) and consists of 11 main components: zooplankton, small phytoplankton, diatoms, cyanobacteria, two detrital classes, dissolved oxygen, and the nutrients nitrate, ammonium, phosphate and silicate. The model is configured at two horizontal resolutions, approximately 9 km and 2 km (1/12° and 1/48°, respectively). The model bathymetry is represented by 21 vertical levels, and the thickness of the first four layers was chosen to be five metres. The 3D-CEMBS model grid is based on stereographic coordinates, but the equator of these coordinates lies in the centre of the Baltic Sea (rotated stereographic coordinates), so we can assume that the cells are square and identical. The model currently works in an operational state: it creates 48-hour forecasts every 6 hours (or whenever a new atmospheric dataset is available). Prognostic variables such as temperature, salinity, ice cover, currents, sea surface height and phytoplankton concentration are presented online on the website and are available for registered users. Time series for any location are also accessible. This work was carried out in support of grant No NN305 111636 and No NN306 353239 - the Polish State Committee for Scientific Research. The partial support for this study was

  4. Using Model-Based Reasoning for Autonomous Instrument Operation

    Science.gov (United States)

    Johnson, Mike; Rilee, M.; Truszkowski, W.; Powers, Edward I. (Technical Monitor)

    2000-01-01

    of environmental hazards, frame the problem of constructing autonomous science instruments. We are developing a model of the Low Energy Neutral Atom instrument (LENA) that is currently flying on board the Imager for Magnetopause-to-Aurora Global Exploration (IMAGE) spacecraft. LENA is a particle detector that uses high-voltage electrostatic optics and time-of-flight mass spectrometry to image neutral atom emissions from the denser regions of the Earth's magnetosphere. As with most spacecraft-borne science instruments, phenomena in addition to neutral atoms are detected by LENA. Solar radiation and energetic particles from Earth's radiation belts are of particular concern because they may help generate currents that could compromise LENA's long-term performance. An explicit model of the instrument response has been constructed and is currently in use on board IMAGE to dynamically adapt LENA to the presence or absence of energetic background radiation. The components of LENA are common in space science instrumentation, and lessons learned by modelling this system may be applied to other instruments. This work demonstrates that a model-based approach can be used to enhance science instrument effectiveness. Our future work involves the extension of these methods to cover more aspects of LENA operation and the generalization to other space science instrumentation.
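The adaptation loop can be caricatured as a threshold rule driven by the instrument-response model; everything below (names, units, the 10 kHz threshold) is an illustrative assumption, not a LENA flight parameter.

```python
def lena_hv_enable(predicted_background_hz, quiet_threshold_hz=1.0e4):
    """Hypothetical sketch of model-based protection in the spirit of
    LENA's on-board adaptation: an explicit response model predicts the
    count rate due to solar radiation and radiation-belt particles, and
    the electrostatic-optics high voltage is enabled only when that
    predicted background is safely low. Threshold and units are
    illustrative, not flight values."""
    return predicted_background_hz < quiet_threshold_hz

print(lena_hv_enable(5.0e2))   # quiet magnetospheric environment
print(lena_hv_enable(2.0e5))   # radiation-belt pass
```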

  5. Overuse injuries in running

    DEFF Research Database (Denmark)

    Larsen, Lars Henrik; Rasmussen, Sten; Jørgensen, Jens Erik

    2016-01-01

    What is an overuse injury in running? This question is a cornerstone of clinical documentation and research-based evidence.

  6. Running to Extremes

    Institute of Scientific and Technical Information of China (English)

    PHILIP JONES

    2010-01-01

    For some, simply running 21 km, or a full marathon at 42 km, isn't enough of an achievement. I mean, you can run a marathon in almost every major city in the world, and many of them are centerpiece events watched by a global audience.

  7. MODELLING AND EVALUATION OF OPERATIONAL COMPETITIVENESS OF MANUFACTURING ENTERPRISES

    Directory of Open Access Journals (Sweden)

    YANG LIU

    2009-12-01

    Full Text Available This paper aims to connect previous research in global competitiveness analysis. The research is based on numerous case studies and on creating analytical models to evaluate overall competitiveness, a novel concept that integrates the evaluation of manufacturing strategy and transformational leadership, including technology level. The empirical studies focus on case companies in China, especially Chinese state-owned manufacturing enterprises (CSOMEs). The main emphases of this research are manufacturing strategy and transformational leadership for CSOMEs. We have brought in the influence of the "China effect" to study how it impacts the operational competitiveness of CSOMEs on top of their manufacturing strategy and transformational leadership.

  8. Computing Debris-flow Mobilization and Run-out with a Two-phase Depth-averaged Model

    Science.gov (United States)

    George, D. L.; Iverson, R. M.

    2011-12-01

    Large-scale, shallow earth-surface flows, such as river flows, overland flooding, and tsunami propagation and inundation, are commonly modeled with depth-averaged equations for the evolution of mass and momentum distributions. Depth-averaging three-dimensional conservation equations results in a tractable two-dimensional model that predicts macroscopic flow features with reasonable accuracy. For example, the simplest of the depth-averaged models---the shallow water equations---has proven to accurately describe water flooding and inundation. We have developed a depth-averaged, two-phase model applicable to granular-fluid mixtures such as landslides and debris flows. While the model relies on relatively simple assumptions for Coulomb frictional stress, the governing equations are more complex than those for shallow water flow. Our new equations include important feedback effects due to coupled evolution of the solid volume fraction and pore-fluid pressure, which mediates frictional stress. While pore-fluid pressure has long been known to be an important factor influencing debris-flow mobility, previous models lacked explicit coupling between pressure and granular dilation. Consequently, traditional models have also lacked the ability to account for the quasi-static transition of a stable mass of water-laden sediment into a debris flow. These models must be initialized by assuming a force balance far from equilibrium, ignoring the important transition to instability. By explicitly tracking the coupled pore-fluid pressure and solid volume fraction, our model captures this important transition and therefore can be used to investigate stability and mobility in addition to flow routing and deposition. Our model equations are a nonlinear hyperbolic system similar in mathematical structure to the shallow water equations, but having two additional equations for the solid volume fraction and pore-fluid pressure. 
Because of the mathematical similarities, numerical techniques
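For orientation, the one-dimensional shallow water backbone that such depth-averaged models share can be sketched as follows (our notation, not the paper's exact system):

```latex
\partial_t h + \partial_x (hu) = 0, \qquad
\partial_t (hu) + \partial_x\!\left(hu^2 + \tfrac{1}{2}gh^2\right)
  = -gh\,\partial_x b - \frac{\tau}{\rho},
```

where $h$ is the flow depth, $u$ the depth-averaged velocity, $b$ the basal topography, and $\tau$ the basal (here Coulomb-frictional) resistance. The two-phase model described above augments this system with evolution equations for the solid volume fraction $m$ and the basal pore-fluid pressure $p_b$, whose coupled dilatancy and pressure-relaxation source terms provide the feedback discussed in the abstract; their exact forms are not reproduced here.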

  9. Full Spectrum Operations: A Running Start

    Science.gov (United States)

    2009-03-31

    ...to destroy bacteria and viruses include: electrodialysis (electric current and ion exchange), oxidation (treatment with ozone), and photo-oxidation...better, more thorough, albeit less conveniently measured method, such as using UV, ozone, or electrolysis. Also, if RO is unnecessary for desalination...

  10. LHCb silicon detectors: the Run 1 to Run 2 transition and first experience of Run 2

    CERN Document Server

    Rinnert, Kurt

    2015-01-01

    LHCb is a dedicated experiment to study New Physics in the decays of heavy hadrons at the Large Hadron Collider (LHC) at CERN. The detector includes a high precision tracking system consisting of a silicon-strip vertex detector (VELO) surrounding the pp interaction region, a large- area silicon-strip detector located upstream of a dipole magnet (TT), and three stations of silicon- strip detectors (IT) and straw drift tubes placed downstream (OT). The operational transition of the silicon detectors VELO, TT and IT from LHC Run 1 to Run 2 and first Run 2 experiences will be presented. During the long shutdown of the LHC the silicon detectors have been maintained in a safe state and operated regularly to validate changes in the control infrastructure, new operational procedures, updates to the alarm systems and monitoring software. In addition, there have been some infrastructure related challenges due to maintenance performed in the vicinity of the silicon detectors that will be discussed. The LHCb silicon dete...

  11. CMS software and computing for LHC Run 2

    CERN Document Server

    Bloom, Kenneth

    2016-01-01

    The CMS offline software and computing system has successfully met the challenge of LHC Run 2. In this presentation, we will discuss how the entire system was improved in anticipation of increased trigger output rate, increased rate of pileup interactions, and the evolution of computing technology. The primary goals behind these changes were to increase the flexibility of computing facilities wherever possible, to increase our operational efficiency, and to decrease the computing resources needed to accomplish the primary offline computing workflows. These changes have resulted in a new approach to distributed computing in CMS for Run 2 and for the future, as the LHC luminosity will continue to increase. We will discuss changes and plans for our data federation, which was one of the key changes towards a more flexible computing model for Run 2. Our software framework and algorithms also underwent significant changes. We will summarize our experience with a new multi-threaded framework as deployed on ou...

  12. ATLAS computing challenges before the next LHC run

    CERN Document Server

    Barberis, D; The ATLAS collaboration

    2014-01-01

    ATLAS software and computing is in a period of intensive evolution. The current long shutdown presents an opportunity to assimilate lessons from the very successful Run 1 (2009-2013) and to prepare for the substantially increased computing requirements for Run 2 (from spring 2015). Run 2 will bring a near doubling of the energy and the data rate, high event pile-up levels, and higher event complexity from detector upgrades, meaning the number and complexity of events to be analyzed will increase dramatically. At the same time operational loads must be reduced through greater automation, a wider array of opportunistic resources must be supported, costly storage must be used with greater efficiency, a sophisticated new analysis model must be integrated, and concurrency features of new processors must be exploited. This paper surveys the distributed computing aspects of the upgrade program and the plans for 2014 to exercise the new capabilities in a large scale Data Challenge.

  13. Geostationary Operational Environmental Satellite (GOES) Gyro Temperature Model

    Science.gov (United States)

    Rowe, J. N.; Noonan, C. H.; Garrick, J.

    1996-01-01

    The Geostationary Operational Environmental Satellite (GOES) I/M series of spacecraft are geostationary weather satellites that use the latest in weather-imaging technology. The inertial reference unit package onboard consists of three gyroscopes measuring angular velocity along each of the spacecraft's body axes. This digital integrating rate assembly (DIRA) is calibrated and used to maintain spacecraft attitude during orbital delta-V maneuvers. During the early-orbit support of GOES-8 (April 1994), the gyro drift-rate biases exhibited a large dependency on gyro temperature. This complicated the calibration and introduced errors into the attitude during delta-V maneuvers. Following GOES-8, a model of the DIRA temperature and drift-rate bias variation was developed for GOES-9 (May 1995). This model was used to project the value of the DIRA bias to use during the orbital delta-V maneuvers, based on the bias change observed as the DIRA warmed up during calibration. The model also optimizes the yaw reorientation necessary to achieve the correct delta-V pointing attitude. As a result, higher accuracy was achieved on GOES-9, leading to more efficient delta-V maneuvers and propellant savings. This paper summarizes the data observed on GOES-8 and the complications it caused in calibration; the DIRA temperature/drift-rate model; and the application and results of the model in GOES-9 support.
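The projection step described above amounts to fitting a bias-versus-temperature curve during calibration and evaluating it at the expected maneuver temperature; the sketch below uses a first-order fit on synthetic numbers, not GOES telemetry.

```python
import numpy as np

# Synthetic warm-up calibration data (illustrative values, not GOES data)
temps = np.array([20.0, 25.0, 30.0, 35.0, 40.0])    # DIRA temperature, deg C
biases = np.array([1.02, 1.10, 1.19, 1.31, 1.40])   # drift-rate bias, deg/hr

# First-order bias(T) model fitted to the warm-up calibration data
slope, intercept = np.polyfit(temps, biases, 1)

def projected_bias(t_maneuver_c):
    """Project the DIRA drift-rate bias at the expected maneuver temperature."""
    return slope * t_maneuver_c + intercept

print(projected_bias(45.0))
```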

  14. Operational Testing of Satellite based Hydrological Model (SHM)

    Science.gov (United States)

    Gaur, Srishti; Paul, Pranesh Kumar; Singh, Rajendra; Mishra, Ashok; Gupta, Praveen Kumar; Singh, Raghavendra P.

    2017-04-01

    Incorporation of the concept of transposability in model testing is one of the prominent ways to check the credibility of a hydrological model. Successful testing ensures the ability of hydrological models to deal with changing conditions, along with their extrapolation capacity. For a newly developed model, a number of questions arise regarding its applicability; therefore, testing the credibility of the model is essential to proficiently assess its strengths and limitations. This concept emphasizes performing 'hierarchical operational testing' of the Satellite based Hydrological Model (SHM), a newly developed surface water-groundwater coupled model, under the PRACRITI-2 program initiated by the Space Application Centre (SAC), Ahmedabad. SHM aims at sustainable water resources management using remote sensing data from Indian satellites. It consists of grid cells of 5 km x 5 km resolution and comprises five modules: Surface Water (SW), Forest (F), Snow (S), Groundwater (GW) and Routing (ROU). The SW module (which operates in grid cells with land cover other than forest and snow) estimates surface runoff, soil moisture and evapotranspiration using the NRCS-CN method, a water balance, and the Hargreaves method, respectively. The hydrology of the F module depends entirely on sub-surface processes, and its water balance is calculated accordingly. The GW module generates baseflow (depending on water-table variation with the level of water in streams) using the Boussinesq equation. The ROU module is based on a cell-to-cell routing technique following the principle of the Time Variant Spatially Distributed Direct Runoff Hydrograph (SDDH) to route the runoff and baseflow generated by the different modules up to the outlet. For this study the Subarnarekha river basin, a flood-prone zone of eastern India, has been chosen for the hierarchical operational testing scheme, which includes tests under stationary as well as transitory conditions. For this the basin has been divided into three sub-basins using three flow
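The NRCS-CN step in the SW module can be illustrated with the standard curve-number runoff equation (textbook form in millimetres; this is not code from SHM, and the CN value below is arbitrary):

```python
def scs_runoff(p_mm, cn, lam=0.2):
    """NRCS (SCS) curve-number direct runoff in mm.

    S is the potential maximum retention and Ia = lam * S the initial
    abstraction (0.2 is the classical choice). Runoff is zero until
    rainfall exceeds Ia."""
    s = 25400.0 / cn - 254.0        # retention, mm (metric form)
    ia = lam * s
    if p_mm <= ia:
        return 0.0
    return (p_mm - ia) ** 2 / (p_mm - ia + s)

print(scs_runoff(100.0, 75))   # 100 mm storm on a CN = 75 cell
```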

  15. Hysteresis Modeling in Magnetostrictive Materials Via Preisach Operators

    Science.gov (United States)

    Smith, R. C.

    1997-01-01

    A phenomenological characterization of hysteresis in magnetostrictive materials is presented. Such hysteresis is due both to the driving magnetic fields and to stress relations within the material, and it is significant throughout most of the drive range of magnetostrictive transducers. An accurate characterization of the hysteresis and material nonlinearities is necessary to fully utilize the actuator/sensor capabilities of magnetostrictive materials. Such a characterization is made here in the context of generalized Preisach operators. This yields a framework amenable to proving the well-posedness of structural models that incorporate magnetostrictive transducers. It also provides a natural setting in which to develop practical approximation techniques. An example illustrating this framework in the context of a Timoshenko beam model is presented.
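A discrete Preisach operator is a weighted superposition of elementary relay hysterons; the toy below (single hysteron, arbitrary switching thresholds and weight) shows the memory effect that makes the operator hysteretic.

```python
import numpy as np

def preisach_output(field_history, alphas, betas, weights):
    """Discrete Preisach model: the output is a weighted sum of relay
    hysterons r_{alpha,beta}. A relay switches up (+1) when the input
    rises above alpha and down (-1) when it falls below beta
    (alpha >= beta). The hysteron grid and weights here are
    illustrative, not identified from a real magnetostrictive
    transducer."""
    states = -np.ones(len(weights))            # all relays start 'down'
    for h in field_history:
        states = np.where(h >= alphas, 1.0, states)
        states = np.where(h <= betas, -1.0, states)
    return float(np.dot(weights, states))

a, b, w = np.array([0.5]), np.array([-0.5]), np.array([1.0])
# Same final input (0.0), different outputs: the operator remembers
# whether the field last crossed the upper or the lower threshold.
print(preisach_output([0.0, 1.0, 0.0], a, b, w))
print(preisach_output([0.0, -1.0, 0.0], a, b, w))
```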

  16. Learning obstacle avoidance with an operant behavior model.

    Science.gov (United States)

    Gutnisky, D A; Zanutto, B S

    2004-01-01

    Artificial intelligence researchers have been attracted by the idea of having robots learn how to accomplish a task, rather than being told explicitly. Reinforcement learning has been proposed as an appealing framework for controlling mobile agents. Robot learning research, as well as research in biological systems, faces many similar problems in displaying high flexibility across a variety of tasks. In this work, the control of a vehicle in an avoidance task by a previously developed operant learning model (a form of animal learning) is studied. An environment in which a mobile robot with proximity sensors must minimize the punishment for colliding with obstacles is simulated. The results were compared with the Q-learning algorithm, and the proposed model had better performance. In this way, a new artificial intelligence agent inspired by research in neurobiology, psychology, and ethology is proposed.
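The Q-learning baseline mentioned above reduces to one tabular update per step; in the avoidance task the "reward" is a punishment (negative on collision), driving down the value of risky actions. The state/action encoding below is a toy of ours, not the paper's simulation.

```python
from collections import defaultdict

ACTIONS = ["left", "right", "forward"]   # illustrative action set

def q_update(q, state, action, reward, next_state, alpha=0.1, gamma=0.9):
    """One tabular Q-learning step: move Q(s, a) toward the bootstrap
    target r + gamma * max_a' Q(s', a')."""
    best_next = max(q[(next_state, a)] for a in ACTIONS)
    q[(state, action)] += alpha * (reward + gamma * best_next - q[(state, action)])

q = defaultdict(float)
# A collision while moving forward near an obstacle is punished:
q_update(q, "obstacle_ahead", "forward", -1.0, "collided")
print(q[("obstacle_ahead", "forward")])
```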

  17. Operational modal analysis modeling, Bayesian inference, uncertainty laws

    CERN Document Server

    Au, Siu-Kui

    2017-01-01

    This book presents operational modal analysis (OMA), employing a coherent and comprehensive Bayesian framework for modal identification and covering stochastic modeling, theoretical formulations, computational algorithms, and practical applications. Mathematical similarities and philosophical differences between Bayesian and classical statistical approaches to system identification are discussed, allowing their mathematical tools to be shared and their results correctly interpreted. Many chapters can be used as lecture notes for the general topic they cover beyond the OMA context. After an introductory chapter (1), Chapters 2–7 present the general theory of stochastic modeling and analysis of ambient vibrations. Readers are first introduced to the spectral analysis of deterministic time series (2) and structural dynamics (3), which do not require the use of probability concepts. The concepts and techniques in these chapters are subsequently extended to a probabilistic context in Chapter 4 (on stochastic pro...

  18. Combined search for the standard model Higgs boson decaying to bb using the D0 run II data set.

    Science.gov (United States)

    Abazov, V M; Abbott, B; Acharya, B S; Adams, M; Adams, T; Alexeev, G D; Alkhazov, G; Alton, A; Alverson, G; Askew, A; Atkins, S; Augsten, K; Avila, C; Badaud, F; Bagby, L; Baldin, B; Bandurin, D V; Banerjee, S; Barberis, E; Baringer, P; Bartlett, J F; Bassler, U; Bazterra, V; Bean, A; Begalli, M; Bellantoni, L; Beri, S B; Bernardi, G; Bernhard, R; Bertram, I; Besançon, M; Beuselinck, R; Bhat, P C; Bhatia, S; Bhatnagar, V; Blazey, G; Blessing, S; Bloom, K; Boehnlein, A; Boline, D; Boos, E E; Borissov, G; Bose, T; Brandt, A; Brandt, O; Brock, R; Bross, A; Brown, D; Brown, J; Bu, X B; Buehler, M; Buescher, V; Bunichev, V; Burdin, S; Buszello, C P; Camacho-Pérez, E; Casey, B C K; Castilla-Valdez, H; Caughron, S; Chakrabarti, S; Chakraborty, D; Chan, K M; Chandra, A; Chapon, E; Chen, G; Chevalier-Théry, S; Cho, D K; Cho, S W; Choi, S; Choudhary, B; Cihangir, S; Claes, D; Clutter, J; Cooke, M; Cooper, W E; Corcoran, M; Couderc, F; Cousinou, M-C; Croc, A; Cutts, D; Das, A; Davies, G; de Jong, S J; De La Cruz-Burelo, E; Déliot, F; Demina, R; Denisov, D; Denisov, S P; Desai, S; Deterre, C; Devaughan, K; Diehl, H T; Diesburg, M; Ding, P F; Dominguez, A; Dubey, A; Dudko, L V; Duggan, D; Duperrin, A; Dutt, S; Dyshkant, A; Eads, M; Edmunds, D; Ellison, J; Elvira, V D; Enari, Y; Evans, H; Evdokimov, A; Evdokimov, V N; Facini, G; Feng, L; Ferbel, T; Fiedler, F; Filthaut, F; Fisher, W; Fisk, H E; Fortner, M; Fox, H; Fuess, S; Garcia-Bellido, A; García-González, J A; García-Guerra, G A; Gavrilov, V; Gay, P; Geng, W; Gerbaudo, D; Gerber, C E; Gershtein, Y; Ginther, G; Golovanov, G; Goussiou, A; Grannis, P D; Greder, S; Greenlee, H; Grenier, G; Gris, Ph; Grivaz, J-F; Grohsjean, A; Grünendahl, S; Grünewald, M W; Guillemin, T; Gutierrez, G; Gutierrez, P; Hagopian, S; Haley, J; Han, L; Harder, K; Harel, A; Hauptman, J M; Hays, J; Head, T; Hebbeker, T; Hedin, D; Hegab, H; Heinson, A P; Heintz, U; Hensel, C; Heredia De La Cruz, I; Herner, K; Hesketh, G; Hildreth, M D; Hirosky, R; 
Hoang, T; Hobbs, J D; Hoeneisen, B; Hogan, J; Hohlfeld, M; Howley, I; Hubacek, Z; Hynek, V; Iashvili, I; Ilchenko, Y; Illingworth, R; Ito, A S; Jabeen, S; Jaffré, M; Jayasinghe, A; Jeong, M S; Jesik, R; Jiang, P; Johns, K; Johnson, E; Johnson, M; Jonckheere, A; Jonsson, P; Joshi, J; Jung, A W; Juste, A; Kaadze, K; Kajfasz, E; Karmanov, D; Kasper, P A; Katsanos, I; Kehoe, R; Kermiche, S; Khalatyan, N; Khanov, A; Kharchilava, A; Kharzheev, Y N; Kiselevich, I; Kohli, J M; Kozelov, A V; Kraus, J; Kulikov, S; Kumar, A; Kupco, A; Kurča, T; Kuzmin, V A; Lammers, S; Landsberg, G; Lebrun, P; Lee, H S; Lee, S W; Lee, W M; Lei, X; Lellouch, J; Li, D; Li, H; Li, L; Li, Q Z; Lim, J K; Lincoln, D; Linnemann, J; Lipaev, V V; Lipton, R; Liu, H; Liu, Y; Lobodenko, A; Lokajicek, M; Lopes de Sa, R; Lubatti, H J; Luna-Garcia, R; Lyon, A L; Maciel, A K A; Madar, R; Magaña-Villalba, R; Malik, S; Malyshev, V L; Maravin, Y; Martínez-Ortega, J; McCarthy, R; McGivern, C L; Meijer, M M; Melnitchouk, A; Menezes, D; Mercadante, P G; Merkin, M; Meyer, A; Meyer, J; Miconi, F; Mondal, N K; Mulhearn, M; Nagy, E; Naimuddin, M; Narain, M; Nayyar, R; Neal, H A; Negret, J P; Neustroev, P; Nguyen, H T; Nunnemann, T; Orduna, J; Osman, N; Osta, J; Padilla, M; Pal, A; Parashar, N; Parihar, V; Park, S K; Partridge, R; Parua, N; Patwa, A; Penning, B; Perfilov, M; Peters, Y; Petridis, K; Petrillo, G; Pétroff, P; Pleier, M-A; Podesta-Lerma, P L M; Podstavkov, V M; Popov, A V; Prewitt, M; Price, D; Prokopenko, N; Qian, J; Quadt, A; Quinn, B; Rangel, M S; Ranjan, K; Ratoff, P N; Razumov, I; Renkel, P; Ripp-Baudot, I; Rizatdinova, F; Rominsky, M; Ross, A; Royon, C; Rubinov, P; Ruchti, R; Sajot, G; Salcido, P; Sánchez-Hernández, A; Sanders, M P; Santos, A S; Savage, G; Sawyer, L; Scanlon, T; Schamberger, R D; Scheglov, Y; Schellman, H; Schlobohm, S; Schwanenberger, C; Schwienhorst, R; Sekaric, J; Severini, H; Shabalina, E; Shary, V; Shaw, S; Shchukin, A A; Shivpuri, R K; Simak, V; Skubic, P; Slattery, P; Smirnov, 
D; Smith, K J; Snow, G R; Snow, J; Snyder, S; Söldner-Rembold, S; Sonnenschein, L; Soustruznik, K; Stark, J; Stoyanova, D A; Strauss, M; Suter, L; Svoisky, P; Takahashi, M; Titov, M; Tokmenin, V V; Tsai, Y-T; Tschann-Grimm, K; Tsybychev, D; Tuchming, B; Tully, C; Uvarov, L; Uvarov, S; Uzunyan, S; Van Kooten, R; van Leeuwen, W M; Varelas, N; Varnes, E W; Vasilyev, I A; Verdier, P; Verkheev, A Y; Vertogradov, L S; Verzocchi, M; Vesterinen, M; Vilanova, D; Vokac, P; Wahl, H D; Wang, M H L S; Wang, R-J; Warchol, J; Watts, G; Wayne, M; Weichert, J; Welty-Rieger, L; White, A; Wicke, D; Williams, M R J; Wilson, G W; Wobisch, M; Wood, D R; Wyatt, T R; Xie, Y; Yamada, R; Yang, S; Yang, W-C; Yasuda, T; Yatsunenko, Y A; Ye, W; Ye, Z; Yin, H; Yip, K; Youn, S W; Yu, J M; Zennamo, J; Zhao, T; Zhao, T G; Zhou, B; Zhu, J; Zielinski, M; Zieminska, D; Zivkovic, L

    2012-09-21

    We present the results of the combination of searches for the standard model Higgs boson produced in association with a W or Z boson and decaying into bb̄, using the data sample collected with the D0 detector in pp̄ collisions at √s = 1.96 TeV at the Fermilab Tevatron Collider. We derive 95% C.L. upper limits on the Higgs boson cross section relative to the standard model prediction in the mass range 100 GeV ≤ M(H) ≤ 150 GeV, and we exclude Higgs bosons with masses smaller than 102 GeV at the 95% C.L. In the mass range 120 GeV ≤ M(H) ≤ 145 GeV, the data exhibit an excess above the background prediction with a global significance of 1.5 standard deviations, consistent with the expectation in the presence of a standard model Higgs boson.

  19. Rock glaciers on the run - understanding rock glacier landform evolution and recent changes from numerical flow modeling

    Science.gov (United States)

    Müller, Johann; Vieli, Andreas; Gärtner-Roer, Isabelle

    2016-11-01

    Rock glaciers are landforms that form as a result of creeping mountain permafrost and have received considerable attention concerning their dynamical and thermal changes. Observed changes in rock glacier motion on seasonal to decadal timescales have been linked to ground temperature variations, and related changes in landform geometries have been interpreted as signs of degradation due to climate warming. Despite the extensive kinematic and thermal monitoring of these creeping permafrost landforms, our understanding of the controlling factors remains limited and lacks robust quantitative models of rock glacier evolution in relation to their environmental setting. Here, we use a holistic approach to analyze the current and long-term dynamical development of two rock glaciers in the Swiss Alps. Site-specific sedimentation and ice generation rates are linked with an adapted numerical flow model for rock glaciers that couples the process chain from material deposition to rock glacier flow, in order to reproduce observed rock glacier geometries and their general dynamics. Modeling experiments exploring the impact of variations in rock glacier temperature and sediment-ice supply show that these forcing processes are not sufficient to explain the currently observed short-term geometrical changes derived from multitemporal digital terrain models at the two different rock glaciers. The modeling also shows that rock glacier thickness is dominantly controlled by slope and rheology, while the advance rates are mostly constrained by rates of sediment-ice supply. Furthermore, timescales of dynamical adjustment are found to be strongly linked to creep velocity. Overall, we provide a useful modeling framework for a better understanding of the dynamical response and morphological changes of rock glaciers to changes in external forcing.
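The stated control of thickness by slope and rheology is consistent with a standard depth-averaged power-law creep scaling (a generic Glen-type parameterization, not necessarily the authors' exact rheology):

```latex
u_s \;=\; \frac{2A}{n+1}\,(\rho g \sin\alpha)^{n}\, h^{\,n+1},
```

where $u_s$ is the surface velocity, $h$ the thickness, $\alpha$ the surface slope, and $A$, $n$ rheological parameters. For a given supply-controlled flux, a stiffer rheology (smaller $A$) or a gentler slope must be compensated by greater thickness, while the advance rate scales with the sediment-ice flux delivered at the front.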

  20. Modeling of stepper motor control system and running curve simulation

    Institute of Scientific and Technical Information of China (English)

    周黎; 杨世洪; 高晓东

    2011-01-01

    In order to optimize the open-loop control of stepper motors, the influence of the running curve and transmission stiffness on the motion of a stepper motor open-loop control system was studied. A mathematical model of the control system was established according to stepper motor running principles and system dynamics. A versine-based acceleration and deceleration curve with high-order smoothness was designed and compared, by simulation, with the common constant and exponential acceleration/deceleration curves. The results show that the versine-based curve works better at restraining impacts during running and at reducing the amplitude of residual vibration at the end of the running process. The proposed control strategy is suitable for applications where accuracy and stability are highly required, and it has been successfully applied to the scanning control of a step framing aerial camera.
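The versine profile's advantage can be seen directly: its velocity ramp has zero slope at both ends, so acceleration is continuous and jerk stays bounded, which is what suppresses impacts and residual vibration. The sketch below is our re-implementation of the idea, not the authors' code.

```python
import math

def versine_velocity(t, t_acc, v_max):
    """Versine (1 - cos) acceleration curve: velocity rises from 0 to
    v_max over t_acc with zero acceleration at both endpoints."""
    if t <= 0.0:
        return 0.0
    if t >= t_acc:
        return v_max
    return 0.5 * v_max * (1.0 - math.cos(math.pi * t / t_acc))

def constant_accel_velocity(t, t_acc, v_max):
    """Ordinary constant-acceleration ramp for comparison: same endpoints,
    but acceleration jumps discontinuously at t = 0 and t = t_acc."""
    return v_max * min(max(t, 0.0), t_acc) / t_acc

print(versine_velocity(0.5, 1.0, 100.0))        # about half speed at midpoint
print(constant_accel_velocity(0.5, 1.0, 100.0))
```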