WorldWideScience

Sample records for model running operationally

  1. Running scenarios using the Waste Tank Safety and Operations Hanford Site model

    International Nuclear Information System (INIS)

    Stahlman, E.J.

    1995-11-01

    Management of the Waste Tank Safety and Operations (WTS&O) at Hanford is a large and complex task encompassing 177 tanks and having a budget of over $500 million per year. To assist managers in this task, a model based on system dynamics was developed by the Massachusetts Institute of Technology. The model simulates the WTS&O at the Hanford Tank Farms by modeling the planning, control, and flow of work conducted by Managers, Engineers, and Crafts. The model is described in Policy Analysis of Hanford Tank Farm Operations with System Dynamics Approach (Kwak 1995b) and Management Simulator for Hanford Tank Farm Operations (Kwak 1995a). This document provides guidance for users of the model in developing, running, and analyzing results of management scenarios. The reader is assumed to have an understanding of the model and its operation. Important parameters and variables in the model are described, and two scenarios are formulated as examples.
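
    As a rough illustration of the kind of stock-and-flow logic a system-dynamics model of tank-farm work management might contain (this is a generic sketch, not the MIT model; the stocks, rates and parameter values below are invented for illustration):

      import numpy as np

      # Hypothetical stock-and-flow sketch: a backlog of work orders is drained
      # at a rate limited by available craft staff and their productivity.
      dt = 0.25           # time step [weeks]
      weeks = 104         # simulate two years
      backlog = 500.0     # initial work orders in the backlog (hypothetical)
      arrivals = 40.0     # new work orders per week (hypothetical)
      crafts = 60         # available craft workers (hypothetical)
      productivity = 0.6  # work orders closed per craft per week (hypothetical)

      history = []
      for step in range(int(weeks / dt)):
          completion = min(backlog / dt, crafts * productivity)  # cannot close more work than exists
          backlog += (arrivals - completion) * dt
          history.append(backlog)

      print(f"final backlog after {weeks} weeks: {history[-1]:.0f} work orders")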

  2. Simulating run-up on steep slopes with operational Boussinesq models; capabilities, spurious effects and instabilities

    Directory of Open Access Journals (Sweden)

    F. Løvholt

    2013-06-01

    Full Text Available Tsunamis induced by rock slides plunging into fjords constitute a severe threat to local coastal communities. The rock slide impact may give rise to highly non-linear waves in the near field, and because the wave lengths are relatively short, frequency dispersion comes into play. Fjord systems are rugged with steep slopes, and modeling non-linear dispersive waves in this environment with simultaneous run-up is demanding. We have run an operational Boussinesq-type TVD (total variation diminishing) model using different run-up formulations. Two different tests are considered: inundation on steep slopes and propagation in a trapezoidal channel. In addition, a set of Lagrangian models serves as reference models. Demanding test cases with solitary waves with amplitudes ranging from 0.1 to 0.5 were applied, and slopes ranged from 10 to 50°. Different run-up formulations yielded clearly different accuracy and stability, and only some provided accuracy similar to the reference models. The test cases revealed that the model was prone to instabilities for large non-linearity and fine resolution. Some of the instabilities were linked with false breaking during the first positive inundation, which was not observed for the reference models. None of the models were able to handle the bore forming during drawdown, however. The instabilities are linked to short-crested undulations on the grid scale, and appear on fine resolution during inundation. As a consequence, convergence was not always obtained. There is reason to believe that the instability may be a general problem for Boussinesq models in fjords.
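
    For orientation, run-up benchmarks of this kind typically initialize the wave with the first-order solitary-wave profile (a standard textbook form, not specific to this paper), with the amplitude $A$ normally quoted relative to the still-water depth $h$:

      $\eta(x, t=0) = A\,\mathrm{sech}^2\!\left(\sqrt{\tfrac{3A}{4h^3}}\,(x - x_0)\right), \qquad c = \sqrt{g\,(h + A)}$,

    where $x_0$ is the initial crest position and $c$ the propagation speed.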

  3. CMS computing operations during run 1

    CERN Document Server

    Adelman, J; Artieda, J; Bagliese, G; Ballestero, D; Bansal, S; Bauerdick, L; Behrenhof, W; Belforte, S; Bloom, K; Blumenfeld, B; Blyweert, S; Bonacorsi, D; Brew, C; Contreras, L; Cristofori, A; Cury, S; da Silva Gomes, D; Dolores Saiz Santos, M; Dost, J; Dykstra, D; Fajardo Hernandez, E; Fanzango, F; Fisk, I; Flix, J; Georges, A; Giffels, M; Gomez-Ceballos, G; Gowdy, S; Gutsche, O; Holzman, B; Janssen, X; Kaselis, R; Kcira, D; Kim, B; Klein, D; Klute, M; Kress, T; Kreuzer, P; Lahiff, A; Larson, K; Letts, J; Levin, A; Linacre, J; Linares, J; Liu, S; Luyckx, S; Maes, M; Magini, N; Malta, A; Marra Da Silva, J; Mccartin, J; McCrea, A; Mohapatra, A; Molina, J; Mortensen, T; Padhi, S; Paus, C; Piperov, S; Ralph; Sartirana, A; Sciaba, A; Sfiligoi, I; Spinoso, V; Tadel, M; Traldi, S; Wissing, C; Wuerthwein, F; Yang, M; Zielinski, M; Zvada, M

    2014-01-01

    During the first run, CMS collected and processed more than 10B data events and simulated more than 15B events. Up to 100k processor cores were used simultaneously and 100PB of storage was managed. Each month petabytes of data were moved and hundreds of users accessed data samples. In this document we discuss the operational experience from this first run. We present the workflows and data flows that were executed, and we discuss the tools and services developed, and the operations and shift models used to sustain the system. Many techniques followed the original computing plan, but some were reactions to difficulties and opportunities. We also address the lessons learned from an operational perspective, and how this is shaping our thoughts for 2015.

  4. ATLAS Strip Detector: Operational Experience and Run1 → Run2 Transition

    CERN Document Server

    Nagai, Koichi; The ATLAS collaboration

    2014-01-01

    The Large Hadron Collider operated very successfully during Run1 and provided many opportunities for physics studies. It is currently undergoing consolidation work toward operation at $\\sqrt{s}=14 \\mathrm{TeV}$ in Run2. The ATLAS experiment achieved excellent performance in Run1 operation, delivering remarkable physics results. The SemiConductor Tracker contributed to the precise measurement of the momentum of charged particles. This paper describes the operational experience of the SemiConductor Tracker in Run1 and the preparation for Run2 operation during LS1.

  5. A capacity expansion model dealing with balancing requirements, short-term operations and long-run dynamics

    International Nuclear Information System (INIS)

    Villavicencio, Manuel

    2017-01-01

    One of the challenges of current power systems is the need to adequately integrate increasing shares of variable renewable energies (VRE) such as wind and photovoltaic (PV) technologies. The study of capacity investments in this context raises renewed questions about the optimal power generation mix when considering system adequacy, operability and reliability issues. This paper analyses the influence of such considerations and adopts a resource-adequacy approach to propose a stylized capacity expansion model (CEM) that endogenously optimizes investments in both generation capacity and new flexibility options such as electric energy storage (EES) and demand side management (DSM) capabilities. Three formulations are tested in order to assess the relevance of the representation of system dynamics for the valuation of capacity and flexibility investments. In each formulation the complexity of the representation of operating constraints increases. The resource-adequacy approach is then enlarged with a multi-service representation of the power system introducing non-contingency reserve considerations. Investment decisions are thus informed by system operating requirements given by the hourly economic dispatch and also by a reliability criterion given by reserve needs. The formulations are tested on a case study in order to capture the trade-offs of adding more detail to the system representation while exogenously imposing supplementary VRE penetration. The results show the importance of adopting a sufficiently detailed representation of system requirements to accurately capture the value of capacity and flexibility when high VRE penetration levels are studied, but also to appropriately estimate the resulting system cost and CO₂ emissions. (author)
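
    A stylized capacity-expansion formulation of the kind described here (a generic sketch with assumed symbols, not the author's exact model) co-optimizes installed capacities $K_i$ and hourly dispatch $g_{i,t}$ against investment and operating costs, subject to load balance, availability and reserve constraints:

      minimize over $K_i, g_{i,t}$:   $\sum_i c_i^{\mathrm{inv}} K_i + \sum_t \sum_i c_i^{\mathrm{op}} g_{i,t}$
      subject to:   $\sum_i g_{i,t} + d_t^{\mathrm{dis}} - d_t^{\mathrm{ch}} = D_t$,   $g_{i,t} \le a_{i,t} K_i$,   $\sum_i r_{i,t} \ge R_t$   for all $t$,

    where $a_{i,t}$ is the hourly availability (the VRE capacity factor for wind and PV), $d_t^{\mathrm{ch}}/d_t^{\mathrm{dis}}$ are storage charge/discharge decisions and $r_{i,t}$ are reserve provisions.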

  6. ATLAS strip detector: Operational Experience and Run1 → Run2 transition

    CERN Document Server

    NAGAI, K; The ATLAS collaboration

    2014-01-01

    The ATLAS SCT operational experience and the detector performance during the Run1 period of the LHC will be reported. Additionally, the preparation toward Run2 during Long Shutdown 1 (LS1) will be mentioned.

  7. Cycle Engine Modelling Of Spark Ignition Engine Processes during Wide-Open Throttle (WOT) Engine Operation Running By Gasoline Fuel

    International Nuclear Information System (INIS)

    Rahim, M F Abdul; Rahman, M M; Bakar, R A

    2012-01-01

    A one-dimensional engine model is developed to simulate spark ignition engine processes in a 4-stroke, 4-cylinder gasoline engine. Physically, the baseline engine is an inline-cylinder engine with 3 valves per cylinder. Currently, the engine's mixture is formed by external mixture formation using a piston-type carburettor. The model of the engine is based on the one-dimensional equations of the gas exchange process, isentropic compression and expansion, and a progressive engine combustion process, and accounts for heat transfer and frictional losses as well as the effect of valve overlap. The model is tested at engine speeds of 2000, 3000 and 4000 rpm and validated using experimental engine data. Results showed that the model is able to simulate the engine's combustion process and produce reasonable predictions. However, when compared with experimental data, major discrepancies are noticeable, especially in the 2000 and 4000 rpm predictions. At low and high engine speeds, the simulated cylinder pressures tend to under-predict the measured data, whereas the cylinder temperatures tend to over-predict the measured data at all engine speeds. The most accurate prediction is obtained at the medium engine speed of 3000 rpm. An appropriate wall heat transfer setup is vital for more precise calculation of cylinder pressure and temperature. More heat loss to the wall can lower the cylinder temperature; on the other hand, more heat converted to useful work means an increase in cylinder pressure. Thus, besides the wall heat transfer setup, the Wiebe combustion parameters need to be carefully evaluated for better results.
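
    For reference, the Wiebe combustion parameters mentioned above enter through the standard Wiebe mass-fraction-burned function (a textbook form; the efficiency parameter $a$ and form factor $m$ are the quantities to be tuned):

      $x_b(\theta) = 1 - \exp\!\left[-a\left(\frac{\theta - \theta_0}{\Delta\theta}\right)^{m+1}\right]$,

    where $\theta_0$ is the start of combustion and $\Delta\theta$ the combustion duration in crank angle.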

  8. Preliminary Results of a U.S. Deep South Modeling Experiment Using NASA SPoRT Initialization Datasets for Operational National Weather Service Local Model Runs

    Science.gov (United States)

    Wood, Lance; Medlin, Jeffrey M.; Case, Jon

    2012-01-01

    A joint collaborative modeling effort among the NWS offices in Mobile, AL, and Houston, TX, and the NASA Short-term Prediction Research and Transition (SPoRT) Center began during the 2011-2012 cold season, and continued into the 2012 warm season. The focus was on two frequent U.S. Deep South forecast challenges: the initiation of deep convection during the warm season; and heavy precipitation during the cold season. We wanted to examine the impact of certain NASA-produced products on the Weather Research and Forecasting Environmental Modeling System in improving the model representation of mesoscale boundaries such as the local sea-, bay- and land-breezes (which often lead to warm season convective initiation), and in improving the model representation of slow moving, or quasi-stationary, frontal boundaries (which focus cold season storm cell training and heavy precipitation). The NASA products were: the 4-km Land Information System, a 1-km sea surface temperature analysis, and a 4-km greenness vegetation fraction analysis. Similar domains were established over the southeast Texas and Alabama coastlines, each with an outer grid with a 9 km spacing and an inner nest with a 3 km grid spacing. The model was run at each NWS office once per day out to 24 hours from 0600 UTC, using the NCEP Global Forecast System for initial and boundary conditions. Control runs without the NASA products were made at the NASA SPoRT Center. The NCAR Model Evaluation Tools verification package was used to evaluate both the positive and negative impacts of the NASA products on the model forecasts. Select case studies will be presented to highlight the influence of the products.

  9. Heavy ion operation from run 2 to HL-LHC

    CERN Document Server

    Jowett, J M; Versteegen, R

    2014-01-01

    The nuclear collision programme of the LHC will continue with Pb-Pb and p-Pb collisions in Run 2 and beyond. Extrapolating from the performance at lower energies in Run 1, it is already clear that Run 2 will substantially exceed design performance. Beyond that, future high-luminosity heavy-ion operation of the LHC depends on a somewhat different (and more modest) set of upgrades to the collider and its injectors than p-p operation does. The high-luminosity phase will start sooner, in Run 3, when the necessary upgrades to detectors should be completed. It follows that the upgrades for heavy-ion operation need high priority in LS2.

  10. New Operator Assistance Features in the CMS Run Control System

    Energy Technology Data Exchange (ETDEWEB)

    Andre, J.M.; et al.

    2017-11-22

    During Run-1 of the LHC, many operational procedures have been automated in the run control system of the Compact Muon Solenoid (CMS) experiment. When detector high voltages are ramped up or down or upon certain beam mode changes of the LHC, the DAQ system is automatically partially reconfigured with new parameters. Certain types of errors such as errors caused by single-event upsets may trigger an automatic recovery procedure. Furthermore, the top-level control node continuously performs cross-checks to detect sub-system actions becoming necessary because of changes in configuration keys, changes in the set of included front-end drivers or because of potential clock instabilities. The operator is guided to perform the necessary actions through graphical indicators displayed next to the relevant command buttons in the user interface. Through these indicators, consistent configuration of CMS is ensured. However, manually following the indicators can still be inefficient at times. A new assistant to the operator has therefore been developed that can automatically perform all the necessary actions in a streamlined order. If additional problems arise, the new assistant tries to automatically recover from these. With the new assistant, a run can be started from any state of the sub-systems with a single click. An ongoing run may be recovered with a single click, once the appropriate recovery action has been selected. We review the automation features of CMS Run Control and discuss the new assistant in detail including first operational experience.

  11. New operator assistance features in the CMS Run Control System

    CERN Document Server

    Andre, Jean-Marc Olivier; Branson, James; Brummer, Philipp Maximilian; Chaze, Olivier; Cittolin, Sergio; Contescu, Cristian; Craigs, Benjamin Gordon; Darlea, Georgiana Lavinia; Deldicque, Christian; Demiragli, Zeynep; Dobson, Marc; Doualot, Nicolas; Erhan, Samim; Fulcher, Jonathan F; Gigi, Dominique; Michail Gładki; Glege, Frank; Gomez Ceballos, Guillelmo; Hegeman, Jeroen Guido; Holzner, Andre Georg; Janulis, Mindaugas; Jimenez Estupinan, Raul; Masetti, Lorenzo; Meijers, Franciscus; Meschi, Emilio; Mommsen, Remigius; Morovic, Srecko; O'Dell, Vivian; Orsini, Luciano; Paus, Christoph Maria Ernst; Petrova, Petia; Pieri, Marco; Racz, Attila; Reis, Thomas; Sakulin, Hannes; Schwick, Christoph; Simelevicius, Dainius; Zejdl, Petr; Vougioukas, M.

    2017-01-01

    The Run Control System of the Compact Muon Solenoid (CMS) experiment at CERN is a distributed Java web application running on Apache Tomcat servers. During Run-1 of the LHC, many operational procedures have been automated. When detector high voltages are ramped up or down or upon certain beam mode changes of the LHC, the DAQ system is automatically partially reconfigured with new parameters. Certain types of errors such as errors caused by single-event upsets may trigger an automatic recovery procedure. Furthermore, the top-level control node continuously performs cross-checks to detect sub-system actions becoming necessary because of changes in configuration keys, changes in the set of included front-end drivers or because of potential clock instabilities. The operator is guided to perform the necessary actions through graphical indicators displayed next to the relevant command buttons in the user interface. Through these indicators, consistent configuration of CMS is ensured. However, manually following t...

  12. Numerical Modelling of Wave Run-Up

    DEFF Research Database (Denmark)

    Ramirez, Jorge Robert Rodriguez; Frigaard, Peter; Andersen, Thomas Lykke

    2011-01-01

    Wave loads are important in problems related to offshore structures, such as wave run-up and slamming. The computation of such wave problems is carried out by CFD models. This paper presents one model, NS3, which solves the 3D Navier-Stokes equations and uses the Volume of Fluid (VOF) method to treat the free...

  13. Running-mass inflation model and WMAP

    International Nuclear Information System (INIS)

    Covi, Laura; Lyth, David H.; Melchiorri, Alessandro; Odman, Carolina J.

    2004-01-01

    We consider the observational constraints on the running-mass inflationary model, and, in particular, on the scale dependence of the spectral index, from the new cosmic microwave background (CMB) anisotropy measurements performed by WMAP and from new clustering data from the SLOAN survey. We find that the data strongly constrain a significant positive scale dependence of n, and we translate the analysis into bounds on the physical parameters of the inflaton potential. Looking deeper into specific types of interaction (gauge and Yukawa) we find that the parameter space is significantly constrained by the new data, but that the running-mass model remains viable.

  14. Operation and Configuration of the LHC in Run 1

    CERN Document Server

    Alemany-Fernandez, R; Drosdal, L; Gorzawski, A; Kain, V; Lamont, M; Macpherson, A; Papotti, G; Pojer, M; Ponce, L; Redaelli, S; Roy, G; Solfaroli Camillocci, M; Venturini, W; Wenninger, J

    2013-01-01

    Between 2010 and 2013 the LHC was operated with protons at beam energies of 3.5 and 4 TeV. The proton beams consisted of single bunches and trains with 150 ns (2010), 75 ns (2011) and 50 ns (2011 and 2012) bunch spacing. Performance well beyond what had been expected initially was achieved with 50 ns beams, culminating in the discovery of a 125 GeV/c² Higgs boson by the ATLAS and CMS experiments. The nominal bunch spacing of 25 ns was only used for electron-cloud scrubbing runs at injection and for collision tests in view of future operation. The cycle structure evolved over the years, and the operational β* for ATLAS and CMS was lowered in steps from 3.5 m (2010) to 0.6 m (2012). Lead ion, mixed proton-lead, intermediate proton energy (1.38 TeV) and high-beta runs were also performed. This note provides an overview of LHC operation between 2010 and 2013. The aim is to document the various operational configurations and highlights of Run 1.

  15. ALICE HLT Cluster operation during ALICE Run 2

    Science.gov (United States)

    Lehrbach, J.; Krzewicki, M.; Rohr, D.; Engel, H.; Gomez Ramirez, A.; Lindenstruth, V.; Berzano, D.; ALICE Collaboration

    2017-10-01

    ALICE (A Large Ion Collider Experiment) is one of the four major detectors located at the LHC at CERN, focusing on the study of heavy-ion collisions. The ALICE High Level Trigger (HLT) is a compute cluster which reconstructs the events and compresses the data in real time. The data compression by the HLT is a vital part of data taking, especially during the heavy-ion runs, in order to be able to store the data; this makes the reliability of the whole cluster an important matter. To guarantee a consistent state among all compute nodes of the HLT cluster we have automated the operation as much as possible. For automatic deployment of the nodes we use Foreman with locally mirrored repositories, and for configuration management of the nodes we use Puppet. Important parameters like temperatures, network traffic, CPU load etc. of the nodes are monitored with Zabbix. During periods without beam the HLT cluster is used for tests and as one of the WLCG Grid sites to compute offline jobs in order to maximize the usage of our cluster. To prevent interference with normal HLT operations we separate the virtual machines running the Grid jobs from normal HLT operation via virtual networks (VLANs). In this paper we give an overview of ALICE HLT operation in 2016.

  16. Running vacuum cosmological models: linear scalar perturbations

    Energy Technology Data Exchange (ETDEWEB)

    Perico, E.L.D. [Instituto de Física, Universidade de São Paulo, Rua do Matão 1371, CEP 05508-090, São Paulo, SP (Brazil); Tamayo, D.A., E-mail: elduartep@usp.br, E-mail: tamayo@if.usp.br [Departamento de Astronomia, Universidade de São Paulo, Rua do Matão 1226, CEP 05508-900, São Paulo, SP (Brazil)

    2017-08-01

    In cosmology, phenomenologically motivated expressions for running vacuum are commonly parameterized as linear functions typically denoted by $\Lambda(H^2)$ or $\Lambda(R)$. Such models assume an equation of state for the vacuum given by $\bar{P}_\Lambda = -\bar{\rho}_\Lambda$, relating its background pressure $\bar{P}_\Lambda$ with its mean energy density $\bar{\rho}_\Lambda \equiv \Lambda/8\pi G$. This equation of state suggests that the vacuum dynamics is due to an interaction with the matter content of the universe. Most of the approaches studying the observational impact of these models only consider the interaction between the vacuum and the transient dominant matter component of the universe. We extend such models by assuming that the running vacuum is the sum of independent contributions, namely $\bar{\rho}_\Lambda = \sum_i \bar{\rho}_{\Lambda i}$. Each $\Lambda_i$ vacuum component is associated and interacting with one of the $i$ matter components in both the background and perturbation levels. We derive the evolution equations for the linear scalar vacuum and matter perturbations in those two scenarios, and identify the running vacuum imprints on the cosmic microwave background anisotropies as well as on the matter power spectrum. In the $\Lambda(H^2)$ scenario the vacuum is coupled with every matter component, whereas the $\Lambda(R)$ description only leads to a coupling between vacuum and non-relativistic matter, producing different effects on the matter power spectrum.

  17. Thermodynamical aspects of running vacuum models

    Energy Technology Data Exchange (ETDEWEB)

    Lima, J.A.S. [Universidade de Sao Paulo, Departamento de Astronomia, Sao Paulo (Brazil); Basilakos, Spyros [Academy of Athens, Research Center for Astronomy and Applied Mathematics, Athens (Greece); Sola, Joan [Univ. de Barcelona, High Energy Physics Group, Dept. d' Estructura i Constituents de la Materia, Institut de Ciencies del Cosmos (ICC), Barcelona, Catalonia (Spain)

    2016-04-15

    The thermal history of a large class of running vacuum models in which the effective cosmological term is described by a truncated power series of the Hubble rate, whose dominant term is $\Lambda(H) \propto H^{n+2}$, is discussed in detail. Specifically, by assuming that the ultrarelativistic particles produced by the vacuum decay emerge into space-time in such a way that their energy density $\rho_r \propto T^4$, the temperature evolution law and the increasing entropy function are analytically calculated. For the whole class of vacuum models explored here we find that the primeval value of the comoving radiation entropy density (associated with effectively massless particles) starts from zero and evolves extremely fast until reaching a maximum near the end of the vacuum decay phase, where it saturates. The late-time conservation of the radiation entropy during the adiabatic FRW phase also guarantees that the whole class of running vacuum models predicts the same correct value of the present-day entropy, $S_0 \sim 10^{87}$-$10^{88}$ (in natural units), independently of the initial conditions. In addition, by assuming the Gibbons-Hawking temperature as an initial condition, we find that the ratio between the late-time and primordial vacuum energy densities is in agreement with naive estimates from quantum field theory, namely, $\rho_{\Lambda 0}/\rho_{\Lambda I} \sim 10^{-123}$. Such results are independent of the power $n$ and suggest that the observed Universe may evolve smoothly between two extreme, unstable, non-singular de Sitter phases. (orig.)

  18. The Effective Standard Model after LHC Run I

    CERN Document Server

    Ellis, John; You, Tevong

    2015-01-01

    We treat the Standard Model as the low-energy limit of an effective field theory that incorporates higher-dimensional operators to capture the effects of decoupled new physics. We consider the constraints imposed on the coefficients of dimension-6 operators by electroweak precision tests (EWPTs), applying a framework for the effects of dimension-6 operators on electroweak precision tests that is more general than the standard $S,T$ formalism, and use measurements of Higgs couplings and the kinematics of associated Higgs production at the Tevatron and LHC, as well as triple-gauge couplings at the LHC. We highlight the complementarity between EWPTs, Tevatron and LHC measurements in obtaining model-independent limits on the effective Standard Model after LHC Run 1. We illustrate the combined constraints with the example of the two-Higgs doublet model.
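
    Schematically, the effective Standard Model used in such fits extends the SM Lagrangian by dimension-6 operators suppressed by a new-physics scale $\Lambda$ (a standard EFT expansion, written here in generic notation, not the paper's specific operator basis):

      $\mathcal{L}_{\mathrm{eff}} = \mathcal{L}_{\mathrm{SM}} + \sum_i \frac{c_i}{\Lambda^2}\,\mathcal{O}_i^{(6)} + \dots$,

    and the fits quoted above constrain the Wilson coefficients $c_i/\Lambda^2$ using electroweak precision, Higgs and triple-gauge-coupling data.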

  19. The effective Standard Model after LHC Run I

    International Nuclear Information System (INIS)

    Ellis, John; Sanz, Verónica; You, Tevong

    2015-01-01

    We treat the Standard Model as the low-energy limit of an effective field theory that incorporates higher-dimensional operators to capture the effects of decoupled new physics. We consider the constraints imposed on the coefficients of dimension-6 operators by electroweak precision tests (EWPTs), applying a framework for the effects of dimension-6 operators on electroweak precision tests that is more general than the standard S,T formalism, and use measurements of Higgs couplings and the kinematics of associated Higgs production at the Tevatron and LHC, as well as triple-gauge couplings at the LHC. We highlight the complementarity between EWPTs, Tevatron and LHC measurements in obtaining model-independent limits on the effective Standard Model after LHC Run 1. We illustrate the combined constraints with the example of the two-Higgs doublet model.

  20. Mental models of the operator

    International Nuclear Information System (INIS)

    Stary, I.

    2004-01-01

    A brief explanation is presented of the mental model concept, properties of mental models and fundamentals of mental models theory. Possible applications of such models in nuclear power plants are described in more detail. They include training of power plant operators, research into their behaviour and design of the operator-control process interface. The design of a mental model of an operator working in abnormal conditions due to power plant malfunction is outlined as an example taken from the literature. The model has been created based on analysis of experiments performed on a nuclear power plant simulator, run by a training center. (author)

  1. Pathways to designing and running an operational flood forecasting system: an adventure game!

    Science.gov (United States)

    Arnal, Louise; Pappenberger, Florian; Ramos, Maria-Helena; Cloke, Hannah; Crochemore, Louise; Giuliani, Matteo; Aalbers, Emma

    2017-04-01

    In the design and building of an operational flood forecasting system, a large number of decisions have to be taken. These include technical decisions related to the choice of the meteorological forecasts to be used as input to the hydrological model, the choice of the hydrological model itself (its structure and parameters), the selection of a data assimilation procedure to run in real-time, the use (or not) of a post-processor, and the computing environment to run the models and display the outputs. Additionally, a number of trans-disciplinary decisions are also involved in the process, such as the way the needs of the users will be considered in the modelling setup and how the forecasts (and their quality) will be efficiently communicated to ensure usefulness and build confidence in the forecasting system. We propose to reflect on the numerous, alternative pathways to designing and running an operational flood forecasting system through an adventure game. In this game, the player is the protagonist of an interactive story driven by challenges, exploration and problem-solving. For this presentation, you will have a chance to play this game, acting as the leader of a forecasting team at an operational centre. Your role is to manage the actions of your team and make sequential decisions that impact the design and running of the system in preparation for and during a flood event, and that deal with the consequences of the forecasts issued. Your actions are evaluated by how much they cost you in time, money and credibility. Your aim is to take decisions that will ultimately lead to a good balance between time and money spent, while keeping your credibility high over the whole process. This game was designed to highlight the complexities behind decision-making in an operational forecasting and emergency response context, in terms of the variety of pathways that can be selected as well as the timescale, cost and timing of effective actions.

  2. Model for radionuclide transport in running waters

    Energy Technology Data Exchange (ETDEWEB)

    Jonsson, Karin; Elert, Mark [Kemakta Konsult AB, Stockholm (Sweden)

    2005-11-15

    Two sites in Sweden are currently under investigation by SKB for their suitability as locations for a deep repository of radioactive waste, the Forsmark and Simpevarp/Laxemar areas. As a part of the safety assessment, SKB has formulated a biosphere model with different sub-models for different parts of the ecosystem in order to be able to predict the dose to humans following a possible radionuclide discharge from a future deep repository. In this report, a new model concept describing radionuclide transport in streams is presented. The main difference from the previous model for running water used by SKB, where only dilution of the inflow of radionuclides was considered, is that the new model also includes parameterizations of the exchange processes present along the stream. This is done in order to be able to investigate the effect of retention on the transport and to estimate the resulting concentrations in the different parts of the system. The concentrations determined with this new model could later be used for order-of-magnitude predictions of the dose to humans. The presented model concept is divided into two parts, a hydraulic model and a radionuclide transport model. The hydraulic model is used to determine the flow conditions in the stream channel and is based on the assumption of uniform flow and quasi-stationary conditions. The results from the hydraulic model are used in the radionuclide transport model, where the concentration is determined in the different parts of the stream ecosystem. The exchange processes considered are exchange with the sediments due to diffusion, advective transport and sedimentation/resuspension, and uptake of radionuclides in biota. Transport of radionuclides both dissolved and sorbed onto particulates is considered. Sorption kinetics in the stream water phase is implemented, as the residence time in the stream water is probably short in comparison to the time scale of the kinetic sorption. In the sediment
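
    A minimal numerical sketch of the kind of water-column/sediment exchange described above (the parameter values and the simple first-order exchange law are assumptions chosen for illustration; this is not the SKB model itself):

      import numpy as np

      # 1-D stream: explicit upwind advection of dissolved activity plus a
      # first-order exchange with a sediment compartment (hypothetical rates).
      nx, dx = 200, 10.0             # 2 km reach discretized into 10 m cells
      u = 0.4                        # stream velocity [m/s] (assumed)
      k_wd = 1.0e-5                  # water -> sediment transfer rate [1/s] (assumed)
      k_dw = 2.0e-6                  # sediment -> water release rate [1/s] (assumed)
      dt = 0.5 * dx / u              # CFL-limited time step

      c_w = np.zeros(nx)             # activity concentration in the water column
      c_s = np.zeros(nx)             # activity held in the sediment (same units, per water volume)

      for step in range(20000):
          c_w[1:] -= u * dt / dx * (c_w[1:] - c_w[:-1])   # upwind advection
          c_w[0] = 1.0                                     # constant inflow concentration (assumed)
          exchange = (k_wd * c_w - k_dw * c_s) * dt        # net flux into the sediment
          c_w -= exchange
          c_s += exchange

      print("water concentration at outlet:", round(c_w[-1], 3))
      print("sediment inventory at outlet:", round(c_s[-1], 3))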

  3. Preliminary Results of a U.S. Deep South Warm Season Deep Convective Initiation Modeling Experiment using NASA SPoRT Initialization Datasets for Operational National Weather Service Local Model Runs

    Science.gov (United States)

    Medlin, Jeffrey M.; Wood, Lance; Zavodsky, Brad; Case, Jon; Molthan, Andrew

    2012-01-01

    The initiation of deep convection during the warm season is a forecast challenge in the relatively high instability and low wind shear environment of the U.S. Deep South. Despite improved knowledge of the character of well-known mesoscale features such as local sea-, bay- and land-breezes, observations show that the evolution of these features falls well short of fully describing the location of the first initiates. A joint collaborative modeling effort among the NWS offices in Mobile, AL, and Houston, TX, and NASA's Short-term Prediction Research and Transition (SPoRT) Center was undertaken during the 2012 warm season to examine the impact of certain NASA-produced products on the Weather Research and Forecasting Environmental Modeling System. The NASA products were: 4-km Land Information System data, a 1-km sea surface temperature analysis, and a 4-km greenness vegetation fraction analysis. Similar domains were established over the southeast Texas and Alabama coastlines, each with a 9 km outer grid spacing and a 3 km inner nest spacing. The model was run at each NWS office once per day out to 24 hours from 0600 UTC, using the NCEP Global Forecast System for initial and boundary conditions. Control runs without the NASA products were made at the NASA SPoRT Center. The NCAR Model Evaluation Tools verification package was used to evaluate both the forecast timing and location of the first initiates, with a focus on the impacts of the NASA products on the model forecasts. Select case studies will be presented to highlight the influence of the products.

  4. Long-run properties of some Danish macroeconometric models

    DEFF Research Database (Denmark)

    Harck, Søren H.

    This paper provides an analytical treatment of various long-run aspects of the MONA model as well as the SMEC model of the Danish economy. More specifically, the analysis lays bare the long-run and steady-state nexus between unemployment and, respectively, inflation and the wage share implied by ...

  5. Modelling surface run-off and trends analysis over India

    Indian Academy of Sciences (India)

    responsible for run-off generation plays a major role in run-off modelling at regional scales. Remote sensing, GIS and computer-technology-based evaluation of land surface properties at spatial and temporal scales provide very useful input data for hydrological models. Using remote sensing data is not only ...

  6. An overview of Booster and AGS polarized proton operation during Run 15

    Energy Technology Data Exchange (ETDEWEB)

    Zeno, K. [Brookhaven National Lab. (BNL), Upton, NY (United States)

    2015-10-20

    This note is an overview of the Booster and AGS for the 2015 Polarized Proton RHIC run from an operations perspective. There are some notable differences between this and previous runs. In particular, the polarized source intensity was expected to be, and was, higher this year than in previous RHIC runs. The hope was to make use of this higher input intensity by allowing the beam to be scraped down more in the Booster to provide a brighter and smaller beam for the AGS and RHIC. The RHIC intensity requirements were also higher this run than in previous runs, which caused additional challenges because the AGS polarization and emittance are normally intensity dependent.

  7. Computing Models of CDF and D0 in Run II

    International Nuclear Information System (INIS)

    Lammel, S.

    1997-05-01

    The next collider run of the Fermilab Tevatron, Run II, is scheduled for autumn of 1999. Both experiments, the Collider Detector at Fermilab (CDF) and the D0 experiment, are being modified to cope with the higher luminosity and shorter bunch spacing of the Tevatron. New detector components, higher event complexity, and an increased data volume require changes from the data acquisition systems up to the analysis systems. In this paper we present a summary of the computing models of the two experiments for Run II.

  8. Operational experience running Hadoop XRootD Fallback

    Science.gov (United States)

    Dost, J. M.; Tadel, A.; Tadel, M.; Würthwein, F.

    2015-12-01

    In April of 2014, the UCSD T2 Center deployed hdfs-xrootd-fallback, a UCSD-developed software system that interfaces Hadoop with XRootD to increase reliability of the Hadoop file system. The hdfs-xrootd-fallback system allows a site to depend less on local file replication and more on global replication provided by the XRootD federation to ensure data redundancy. Deploying the software has allowed us to reduce Hadoop replication on a significant subset of files in our cluster, freeing hundreds of terabytes in our local storage, and to recover HDFS blocks lost due to storage degradation. An overview of the architecture of the hdfs-xrootd-fallback system will be presented, as well as details of our experience operating the service over the past year.

  9. WLCG Operations and the First Prolonged LHC Run

    CERN Document Server

    Girone, M; CERN. Geneva. IT Department

    2011-01-01

    By the time of CHEP 2010 we had accumulated just over 6 months’ experience with proton-proton data taking, production and analysis at the LHC. This paper addresses the issues seen from the point of view of the WLCG Service. In particular, it answers the following questions: Did the WLCG service deliver quantitatively and qualitatively? Were the "key performance indicators" a reliable and accurate measure of the service quality? Were the inevitable service issues resolved in a sufficiently rapid fashion? What are the key areas of improvement required not only for long-term sustainable operations, but also to embrace new technologies? It concludes with a summary of our readiness for data taking in the light of real experience.

  10. Running of the Scalar Spectral Index from Inflationary Models

    CERN Document Server

    Chung, Daniel J.H.; Shiu, Gary; Trodden, Mark

    2003-01-01

    The scalar spectral index n is an important parameter describing the nature of primordial density perturbations. Recent data, including that from the WMAP satellite, show some evidence that the index runs (changes as a function of the scale k at which it is measured) from n>1 (blue) on long scales to n<1 (red) on short scales. We investigate the extent to which inflationary models can accommodate such significant running of n. We present several methods for constructing large classes of potentials which yield a running spectral index. We show that within the slow-roll approximation, the fact that n-1 changes sign from blue to red forces the slope of the potential to reach a minimum at a similar field location. We also briefly survey the running of the index in a wider class of inflationary models, including a subset of those with non-minimal kinetic terms.
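
    For context, within the slow-roll approximation invoked above the spectral index and its running are given by the standard expressions (generic slow-roll results with the usual potential slow-roll parameters $\epsilon$, $\eta$, $\xi^2$, not this paper's specific construction):

      $n_s - 1 = 2\eta - 6\epsilon$, \qquad $\frac{\mathrm{d}n_s}{\mathrm{d}\ln k} = 16\,\epsilon\eta - 24\,\epsilon^2 - 2\,\xi^2$,

    so a blue-to-red transition requires $2\eta - 6\epsilon$ to change sign along the inflaton trajectory.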

  11. Yield-reliability analysis and operating rules for run-of-river ...

    African Journals Online (AJOL)

    The study focused on yield-reliability analysis and operating rules for optimum scheduling of run-of-river (ROR) abstractions for typical rural water supply schemes using Siloam Village, Limpopo Province, South Africa, as a case study. Efficient operation of water supply systems requires operating rules as decision support ...

  12. Numerical Modelling of Wave Run-Up: Regular Waves

    DEFF Research Database (Denmark)

    Ramirez, Jorge; Frigaard, Peter; Andersen, Thomas Lykke

    2011-01-01

    Wave loads are important in problems related to offshore structures, such as wave run-up and slamming. The computation of such wave problems is carried out by CFD models. This paper presents one model, NS3, which solves the 3D Navier-Stokes equations and uses the Volume of Fluid (VOF) method to treat the free...

  13. Long-Run Properties of Large-Scale Macroeconometric Models

    OpenAIRE

    Kenneth F. Wallis; John D. Whitley

    1987-01-01

    We consider alternative approaches to the evaluation of the long-run properties of dynamic nonlinear macroeconometric models, namely dynamic simulation over an extended database, or the construction and direct solution of the steady-state version of the model. An application to a small model of the UK economy is presented. The model is found to be unstable, but a stable form can be produced by simple alterations to the structure.
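
    As a toy illustration of the two approaches compared above (an invented one-equation example, far simpler than the models studied), consider a single linear dynamic relation:

      $y_t = a + b\,y_{t-1} + c\,x$, \qquad $y^{*} = \frac{a + c\,x}{1 - b}$,

    where the steady-state version is obtained by setting $y_t = y_{t-1} = y^{*}$ and solved directly, while dynamic simulation converges to the same $y^{*}$ only if $|b| < 1$ (the analogue, in this toy setting, of the stability question raised above).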

  14. Short Trail Running Race: Beyond the Classic Model for Endurance Running Performance.

    Science.gov (United States)

    Ehrström, Sabine; Tartaruga, Marcus P; Easthope, Christopher S; Brisswalter, Jeanick; Morin, Jean-Benoit; Vercruyssen, Fabrice

    2018-03-01

    This study aimed to examine the extent to which the classical physiological variables of endurance running performance (maximal oxygen uptake (V˙O2max), %V˙O2max at the ventilatory threshold (VT), and running economy (RE)), and also muscle strength factors, contribute to short trail running (TR) performance. A homogeneous group of nine highly trained trail runners performed an official TR race (27 km) and laboratory-based sessions to determine V˙O2max, %V˙O2max at VT, level RE (RE0%) and RE on a +10% slope, maximal voluntary concentric and eccentric knee extension torques, local endurance assessed by a fatigue index (FI), and a time to exhaustion at 87.5% of the velocity associated with V˙O2max. A simple regression method and commonality analysis identifying unique and common coefficients of each independent variable were used to determine the best predictors of the TR race time (dependent variable). Pearson correlations showed that FI and V˙O2max had the highest correlations (r = 0.91 and r = -0.76, respectively) with TR performance. The other selected variables were not significantly correlated with TR performance. The analysis of unique and common coefficients of relative V˙O2max, %V˙O2max at VT, and RE0% provides a weak prediction of TR performance (R = 0.48). However, adding FI and RE on a +10% slope (instead of RE0%) markedly improved the predictive power of the model (R = 0.98). FI and V˙O2max showed the highest unique (49.8% and 20.4% of total effect, respectively) and common (26.9% of total effect) contributions to the regression equation. The classic endurance running model does not allow for meaningful prediction of short TR performance. Incorporating factors more specific to TR, such as local endurance and gradient-specific RE testing procedures, should be considered to better characterize short TR performance.
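
    For readers unfamiliar with commonality analysis, the unique and common effects cited above are defined from squared multiple correlations (shown here for the simplest two-predictor case; the study itself uses more predictors):

      $U_1 = R^2_{Y\cdot 12} - R^2_{Y\cdot 2}$, \qquad $U_2 = R^2_{Y\cdot 12} - R^2_{Y\cdot 1}$, \qquad $C_{12} = R^2_{Y\cdot 1} + R^2_{Y\cdot 2} - R^2_{Y\cdot 12}$,

    so that $U_1 + U_2 + C_{12} = R^2_{Y\cdot 12}$, i.e. the total explained variance decomposes into unique and shared contributions.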

  15. 2013 CEF RUN - PHASE 1 DATA ANALYSIS AND MODEL VALIDATION

    Energy Technology Data Exchange (ETDEWEB)

    Choi, A.

    2014-05-08

    Phase 1 of the 2013 Cold cap Evaluation Furnace (CEF) test was completed on June 3, 2013 after a 5-day round-the-clock feeding and pouring operation. The main goal of the test was to characterize the CEF off-gas produced from a nitric-formic acid flowsheet feed and confirm whether the CEF platform is capable of producing scalable off-gas data necessary for the revision of the DWPF melter off-gas flammability model; the revised model will be used to define new safety controls on the key operating parameters for the nitric-glycolic acid flowsheet feeds including total organic carbon (TOC). Whether the CEF off-gas data were scalable for the purpose of predicting the potential flammability of the DWPF melter exhaust was determined by comparing the predicted H₂ and CO concentrations using the current DWPF melter off-gas flammability model to those measured during Phase 1; data were deemed scalable if the calculated fractional conversions of TOC-to-H₂ and TOC-to-CO at varying melter vapor space temperatures were found to trend and further bound the respective measured data with some margin of safety. Being scalable thus means that for a given feed chemistry the instantaneous flow rates of H₂ and CO in the DWPF melter exhaust can be estimated with some degree of conservatism by multiplying those of the respective gases from a pilot-scale melter by the feed rate ratio. This report documents the results of the Phase 1 data analysis and the necessary calculations performed to determine the scalability of the CEF off-gas data. A total of six steady state runs were made during Phase 1 under non-bubbled conditions by varying the CEF vapor space temperature from near 700 to below 300°C, as measured in a thermowell (T_tw). At each steady state temperature, the off-gas composition was monitored continuously for two hours using MS, GC, and FTIR in order to track mainly H₂, CO, CO₂, NOₓ, and organic gases such as CH₄. The standard
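
    The scaling argument in the report amounts to the simple relation below (the symbols are generic placeholders, not the report's notation):

      $\dot{n}_{\mathrm{H_2}}^{\mathrm{DWPF}} \;\approx\; \dot{n}_{\mathrm{H_2}}^{\mathrm{CEF}} \times \frac{\dot{m}_{\mathrm{feed}}^{\mathrm{DWPF}}}{\dot{m}_{\mathrm{feed}}^{\mathrm{CEF}}}$,

    applied at matched feed chemistry and melter vapor-space temperature, and analogously for CO.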

  16. Liquid Argon Calorimeters Operation and Data Quality During the 2015 Proton Run

    CERN Document Server

    Camincher, Clement; The ATLAS collaboration

    2016-01-01

    In 2015 ATLAS operated with excellent efficiency, recording an integrated luminosity of 3.9 fb^{-1} at \\sqrt{s} = 13 TeV. The Liquid Argon (LAr) Calorimeter contributed to this effort by operating with a good data quality efficiency of 99.4%. This poster highlights the overall status, performance and data quality of the LAr Calorimeters during the first year of Run-2 operations.

  17. Online monitoring for the CDF Run II experiment and the remote operation facilities

    Energy Technology Data Exchange (ETDEWEB)

    Arisawa, T.; /Waseda U.; Fabiani, D.; /INFN, Pisa; Hirschbuehl, D.; /Karlsruhe U.; Ikado, K.; /Waseda U.; Kubo, T.; /KEK, Tsukuba; Kusakabe, Y.; /Waseda U.; Maeshima, K.; /UCLA; Naganoma, J.; Nakamura, K.; /Waseda U.; Plager, C.; /UCLA; Schmidt, E.; /Fermilab /INFN, Pisa /Karlsruhe U.

    2007-01-01

    The foundation of the CDF Run II online event monitoring framework, laid well before the start of the physics runs, allowed the development of coherent monitoring software across all the different subsystems, which consequently made maintenance and operation simple and efficient. Only one shift person is needed to monitor the entire CDF detector, including the trigger system. High data quality is assured in real time, and well-defined monitoring results are propagated coherently to the offline datasets used for physics analyses. We describe the CDF Run II online event monitoring system and operation, with emphasis on the remote monitoring shift operation, started in November 2006 with Pisa-INFN as the pilot institution and exploiting web-based access to the data.

  18. Model based control for run-of-river system. Part 1: Model implementation and tuning

    Directory of Open Access Journals (Sweden)

    Liubomyr Vytvytskyi

    2015-10-01

    Full Text Available Optimal operation and control of a run-of-river hydro power plant depends on good knowledge of the elements of the plant in the form of models. River reaches are often considered shallow channels with free surfaces. A typical model for such reaches is the Saint-Venant model, a 1D distributed model based on the mass and momentum balances. This combination of a free surface and the momentum balance makes the problem numerically challenging to solve. The finite volume method with a staggered grid was compared with the Kurganov-Petrova central upwind scheme, and was used to illustrate the dynamics of the river upstream from the Grønvollfoss run-of-river power plant in Telemark, Norway, operated by Skagerak Energi AS. In an experiment on the Grønvollfoss run-of-river power plant, a step was injected in the upstream inlet flow at Årlifoss, and the resulting change in level in front of the dam at the Grønvollfoss plant was logged. The results from the theoretical Saint-Venant model were then compared to the experimental results. Because of uncertainties in the geometry of the river reach (river bed slope, etc.), the slope and length of the varying slope parts were tuned manually to improve the fit. Then the friction factor, river width and height drop of the river were tuned by minimizing a least-squares criterion. The result is an improved model (numerically solved and tuned to experiments) that can be further used for control synthesis and analysis.
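
    For reference, one common form of the 1D Saint-Venant equations referred to above is (standard shallow-water notation, not necessarily the exact variables used in the paper):

      $\frac{\partial A}{\partial t} + \frac{\partial Q}{\partial x} = 0$, \qquad $\frac{\partial Q}{\partial t} + \frac{\partial}{\partial x}\!\left(\frac{Q^2}{A}\right) + gA\,\frac{\partial h}{\partial x} = gA\,(S_0 - S_f)$,

    with $A$ the wetted cross-section, $Q$ the discharge, $h$ the water depth, $S_0$ the bed slope and $S_f$ a friction slope (e.g. from Manning's formula).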

  19. Operation of the upgraded ATLAS Central Trigger Processor during the LHC Run 2

    CERN Document Server

    Bertelsen, H.; Deviveiros, P.O.; Eifert, T.; Galster, G.; Glatzer, J.; Haas, S.; Marzin, A.; Silva Oliveira, M.V.; Pauly, T.; Schmieden, K.; Spiwoks, R.; Stelzer, J.

    2016-01-01

    The ATLAS Central Trigger Processor (CTP) is responsible for forming the Level-1 trigger decision based on the information from the calorimeter and muon trigger processors. In order to cope with the increase of luminosity and physics cross-sections in Run 2, several components of this system have been upgraded. In particular, the number of usable trigger inputs and trigger items has been increased from 160 to 512 and from 256 to 512, respectively. The upgraded CTP also provides extended monitoring capabilities and allows up to three independent combinations of sub-detectors to be operated simultaneously with full trigger functionality, which is particularly useful for commissioning, calibration and test runs. The software has also undergone a major upgrade to take advantage of all these new functionalities. An overview of the commissioning and the operation of the upgraded CTP during the LHC Run 2 is given.

  20. Arbitrary Symmetric Running Gait Generation for an Underactuated Biped Model.

    Directory of Open Access Journals (Sweden)

    Behnam Dadashzadeh

    Full Text Available This paper investigates generating symmetric trajectories for an underactuated biped during the stance phase of running. We use a point mass biped (PMB) model for gait analysis that consists of a prismatic force actuator on a massless leg. The significance of this model is its ability to generate more general and versatile running gaits than the spring-loaded inverted pendulum (SLIP) model, making it more suitable as a template for real robots. The algorithm plans the necessary leg actuator force to cause the robot center of mass to undergo arbitrary trajectories in stance with any arbitrary attack angle and velocity angle. The necessary actuator forces follow from the inverse kinematics and dynamics. Then these calculated forces become the control input to the dynamic model. We compare various center-of-mass trajectories, including a circular arc and polynomials of the degrees 2, 4 and 6. The cost of transport and maximum leg force are calculated for various attack angles and velocity angles. The results show that choosing the velocity angle as small as possible is beneficial, but the angle of attack has an optimum value. We also find a new result: there exist biped running gaits with double-hump ground reaction force profiles which result in less maximum leg force than single-hump profiles.

  1. Arbitrary Symmetric Running Gait Generation for an Underactuated Biped Model.

    Science.gov (United States)

    Dadashzadeh, Behnam; Esmaeili, Mohammad; Macnab, Chris

    2017-01-01

    This paper investigates generating symmetric trajectories for an underactuated biped during the stance phase of running. We use a point mass biped (PMB) model for gait analysis that consists of a prismatic force actuator on a massless leg. The significance of this model is its ability to generate more general and versatile running gaits than the spring-loaded inverted pendulum (SLIP) model, making it more suitable as a template for real robots. The algorithm plans the necessary leg actuator force to cause the robot center of mass to undergo arbitrary trajectories in stance with any arbitrary attack angle and velocity angle. The necessary actuator forces follow from the inverse kinematics and dynamics. Then these calculated forces become the control input to the dynamic model. We compare various center-of-mass trajectories, including a circular arc and polynomials of the degrees 2, 4 and 6. The cost of transport and maximum leg force are calculated for various attack angles and velocity angles. The results show that choosing the velocity angle as small as possible is beneficial, but the angle of attack has an optimum value. We also find a new result: there exist biped running gaits with double-hump ground reaction force profiles which result in less maximum leg force than single-hump profiles.
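
    A minimal sketch of the point-mass inverse-dynamics step described above, for a prescribed planar center-of-mass trajectory with the stance foot at the origin (the trajectory, mass and timing below are invented for illustration; this is a generic calculation, not the authors' code):

      import numpy as np

      m, g = 70.0, 9.81                # body mass [kg], gravity [m/s^2] (assumed)
      t = np.linspace(0.0, 0.3, 301)   # stance duration of 0.3 s (assumed)

      # Prescribed CoM trajectory: x sweeps from -0.2 m to +0.2 m while y dips
      # from 1.0 m to 0.95 m and back (a simple symmetric arc, for illustration).
      x = -0.2 + (0.4 / 0.3) * t
      y = 1.0 - 0.05 * np.sin(np.pi * t / 0.3) ** 2

      # Second derivatives of the CoM position (numerical differentiation).
      dt = t[1] - t[0]
      ax = np.gradient(np.gradient(x, dt), dt)
      ay = np.gradient(np.gradient(y, dt), dt)

      # Required total force on the point mass: F = m * (a - g_vec).
      Fx = m * ax
      Fy = m * (ay + g)

      # Component of that force along the leg (foot at the origin), i.e. the
      # axial load a prismatic leg actuator would have to carry.
      leg_len = np.hypot(x, y)
      F_axial = (Fx * x + Fy * y) / leg_len

      print(f"peak axial leg force: {F_axial.max():.0f} N")
      print(f"peak axial force / body weight: {F_axial.max() / (m * g):.2f}")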

  2. New Constraints on the running-mass inflation model

    OpenAIRE

    Covi, Laura; Lyth, David H.; Melchiorri, Alessandro

    2002-01-01

    We evaluate new observational constraints on the two-parameter scale-dependent spectral index predicted by the running-mass inflation model by combining the latest Cosmic Microwave Background (CMB) anisotropy measurements with the recent 2dFGRS data on the matter power spectrum, with Lyman $\\alpha $ forest data and finally with theoretical constraints on the reionization redshift. We find that present data still allow significant scale-dependence of $n$, which occurs in a physically reasonabl...

  3. Black hole constraints on the running-mass inflation model

    OpenAIRE

    Leach, Samuel M; Grivell, Ian J; Liddle, Andrew R

    2000-01-01

    The running-mass inflation model, which has strong motivation from particle physics, predicts density perturbations whose spectral index is strongly scale-dependent. For a large part of parameter space the spectrum rises sharply to short scales. In this paper we compute the production of primordial black holes, using both analytic and numerical calculation of the density perturbation spectra. Observational constraints from black hole production are shown to exclude a large region of otherwise...

  4. The running-mass inflation model and WMAP

    OpenAIRE

    Covi, Laura; Lyth, David H.; Melchiorri, Alessandro; Odman, Carolina J.

    2004-01-01

    We consider the observational constraints on the running-mass inflationary model, and in particular on the scale-dependence of the spectral index, from the new Cosmic Microwave Background (CMB) anisotropy measurements performed by WMAP and from new clustering data from the SLOAN survey. We find that the data strongly constrain a significant positive scale-dependence of $n$, and we translate the analysis into bounds on the physical parameters of the inflaton potential. Looking deeper into sp...

  5. Main improvements of LHC Cryogenics Operation during Run 2 (2015-2018)

    Science.gov (United States)

    Delprat, L.; Bradu, B.; Brodzinski, K.; Ferlin, G.; Hafi, K.; Herblin, L.; Rogez, E.; Suraci, A.

    2017-12-01

    After the successful Run 1 (2010-2012), the LHC entered its first Long Shutdown period (LS1, 2013-2014). During LS1 the LHC cryogenic system underwent a complete maintenance and consolidation program. The LHC resumed operation in 2015 with the beam energy increased from 4 TeV to 6.5 TeV. Prior to the new physics Run 2 (2015-2018), the LHC was progressively cooled down from ambient temperature to the 1.9 K operating temperature, and it resumed operation with beams in April 2015. Operational margins on the cryogenic capacity were reduced compared to Run 1, mainly due to the observed higher-than-expected electron-cloud heat load coming from increased beam energy and intensity. Maintaining and improving the cryogenic availability level required the implementation of a series of actions in order to deal with the observed heat loads. This paper describes the results from the process optimization and update of the control system, allowing the adjustment of the non-isothermal heat load at 4.5-20 K and the optimized dynamic behaviour of the cryogenic system versus the electron-cloud thermal load. Effects from the new regulation settings applied for operation on the electrical distribution feed-boxes and inner triplets will be discussed. The efficiency of the preventive and corrective maintenance, as well as the benefits and issues of the present cryogenic system configuration for the Run 2 operational scenario, will be described. Finally, the overall availability results and helium management of the LHC cryogenic system during the 2015-2016 operational period will be presented.

  6. Operational experience of the upgraded LHC injection kicker magnets during Run 2 and future plans

    Science.gov (United States)

    Barnes, M. J.; Adraktas, A.; Bregliozzi, G.; Goddard, B.; Ducimetière, L.; Salvant, B.; Sestak, J.; Vega Cid, L.; Weterings, W.; Vallgren, C. Yin

    2017-07-01

    During Run 1 of the LHC, one of the injection kicker magnets caused occasional operational delays due to beam-induced heating with high bunch intensity and short bunch lengths. In addition, there were sporadic issues with vacuum activity and electrical flashover of the injection kickers. An extensive program of studies was launched and significant upgrades were carried out during Long Shutdown 1 (LS1). These upgrades included a new design of beam screen to reduce both the beam coupling impedance of the kicker magnet and the electric field associated with the screen conductors, hence decreasing the probability of electrical breakdown in this region. This paper presents operational experience of the injection kicker magnets during the first years of Run 2 of the LHC, including a discussion of faults and kicker magnet issues that limited LHC operation. In addition, in light of these issues, plans for further upgrades are briefly discussed.

  7. 1-D blood flow modelling in a running human body.

    Science.gov (United States)

    Szabó, Viktor; Halász, Gábor

    2017-07-01

    In this paper an attempt was made to simulate blood flow in a mobile human arterial network, specifically, in a running human subject. In order to simulate the effect of motion, a previously published immobile 1-D model was modified by including an inertial force term in the momentum equation. To calculate the inertial force, gait analysis was performed at different levels of speed. Our results show that motion has a significant effect on the amplitudes of the blood pressure and flow rate but the average values are not affected significantly.
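    As a purely illustrative sketch of where such an inertial term enters a 1-D momentum balance (invented parameters and geometry, not the authors' model), a gait-derived axial acceleration can be added as a source term in an explicit update:

```python
import numpy as np

# Illustrative 1-D momentum update for flow rate Q in a vessel segment, with an extra
# body-force term A*a_body(t) standing in for the inertial load measured from gait
# analysis. All names and numbers are assumptions, not the authors' model.
rho, mu = 1050.0, 3.5e-3          # blood density [kg/m^3], viscosity [Pa s]
L, N, dt = 0.4, 200, 1e-4         # vessel length [m], grid points, time step [s]
dx = L / N
A = np.full(N, 3e-5)              # cross-sectional area [m^2], fixed in this toy model
Q = np.zeros(N)                   # flow rate [m^3/s]
p = np.linspace(12e3, 10e3, N)    # imposed pressure profile [Pa]

def a_body(t, f_step=3.0, amp=15.0):
    """Axial acceleration of the limb segment [m/s^2], a stand-in for gait-analysis data."""
    return amp * np.sin(2 * np.pi * f_step * t)

t = 0.0
for _ in range(1000):
    dpdx = np.gradient(p, dx)
    friction = 8 * np.pi * (mu / rho) * Q / A          # Poiseuille-type viscous loss
    # momentum balance: dQ/dt = -(A/rho) dp/dx - friction + A * a_body(t)
    Q += dt * (-(A / rho) * dpdx - friction + A * a_body(t))
    t += dt

print(f"mean flow after {t:.2f} s: {Q.mean()*1e6:.1f} ml/s")
```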

  8. New constraints on the running-mass inflation model

    International Nuclear Information System (INIS)

    Covi, L.; Lyth, D.H.; Melchiorri, A.

    2002-10-01

    We evaluate new observational constraints on the two-parameter scale-dependent spectral index predicted by the running-mass inflation model by combining the latest cosmic microwave background (CMB) anisotropy measurements with the recent 2dFGRS data on the matter power spectrum, with Lyman α forest data and finally with theoretical constraints on the reionization redshift. We find that present data still allow significant scale-dependence of n, which occurs in a physically reasonable regime of parameter space. (orig.)
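    For reference, the scale dependence discussed here is usually written with the standard two-parameter expansion about a pivot scale k_0 (a generic convention, not the running-mass-specific form of n(k)):

```latex
% Standard two-parameter expansion of a scale-dependent spectral index about a pivot k_0
n(k) = n(k_0) + \frac{\mathrm{d}n}{\mathrm{d}\ln k}\,\ln\frac{k}{k_0},
\qquad
\mathcal{P}_{\mathcal{R}}(k) = \mathcal{P}_{\mathcal{R}}(k_0)
\left(\frac{k}{k_0}\right)^{\,n(k_0)-1+\frac{1}{2}\frac{\mathrm{d}n}{\mathrm{d}\ln k}\ln(k/k_0)} .
```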

  9. PSB LLRF: new features for machine studies and operation in the PSB 2016 run

    CERN Document Server

    Angoletta, M E

    2017-01-01

    A new digital Low-Level RF (LLRF) system has been successfully deployed on the four PS Booster (PSB) rings in June 2014, after the Long Shutdown 1 (LS1). Although only recently deployed, several new features for machine studies and operation have already been required and implemented. This note provides an overview of the main features deployed for the 2016 PSB run and of their results.

  10. Operational Experience, Improvements, and Performance of the CDF Run II Silicon Vertex Detector

    CERN Document Server

    Aaltonen, T; Boveia, A.; Brau, B.; Bolla, G; Bortoletto, D; Calancha, C; Carron, S.; Cihangir, S.; Corbo, M.; Clark, D.; Di Ruzza, B.; Eusebi, R.; Fernandez, J.P.; Freeman, J.C.; Garcia, J.E.; Garcia-Sciveres, M.; Gonzalez, O.; Grinstein, S.; Hartz, M.; Herndon, M.; Hill, C.; Hocker, A.; Husemann, U.; Incandela, J.; Issever, C.; Jindariani, S.; Junk, T.R.; Knoepfel, K.; Lewis, J.D.; Martinez-Ballarin, R.; Mathis, M.; Mattson, M.; Merkel, P; Mondragon, M.N.; Moore, R.; Mumford, J.R.; Nahn, S.; Nielsen, J.; Nelson, T.K.; Pavlicek, V.; Pursley, J.; Redondo, I.; Roser, R.; Schultz, K.; Spalding, J.; Stancari, M.; Stanitzki, M.; Stuart, D.; Sukhanov, A.; Tesarek, R.; Treptow, K.; Wallny, R.; Worm, S.

    2013-01-01

    The Collider Detector at Fermilab (CDF) pursues a broad physics program at Fermilab's Tevatron collider. Between Run II commissioning in early 2001 and the end of operations in September 2011, the Tevatron delivered 12 fb-1 of integrated luminosity of p-pbar collisions at sqrt(s)=1.96 TeV. Many physics analyses undertaken by CDF require heavy flavor tagging with large charged particle tracking acceptance. To realize these goals, in 2001 CDF installed eight layers of silicon microstrip detectors around its interaction region. These detectors were designed for 2-5 years of operation, radiation doses up to 2 Mrad (0.02 MGy), and were expected to be replaced in 2004. The sensors were not replaced, and the Tevatron run was extended for several years beyond its design, exposing the sensors and electronics to much higher radiation doses than anticipated. In this paper we describe the operational challenges encountered over the past 10 years of running the CDF silicon detectors, the preventive measures undertaken, an...

  11. Modelling the long-run supply of coal

    International Nuclear Information System (INIS)

    Steenblik, R.P.

    1992-01-01

    There are many issues facing policy-makers in the fields of energy and the environment that require knowledge of coal supply and cost. Such questions arise in relation to decisions concerning, for example, the discontinuation of subsidies, or the effects of new environmental laws. The very complexity of these questions makes them suitable for analysis by models. Indeed, models have been used for analysing the behaviour of coal markets and the effects of public policies on them for many years. For estimating short-term responses econometric models are the most suitable. For estimating the supply of coal over the longer term, however - i.e., coal that would come from mines as yet not developed - depletion has to be taken into account. Underlying the normal supply curve relating cost to the rate of production is a curve that increases with cumulative production - what mineral economists refer to as the potential supply curve. To derive such a curve requires at some point in the analysis using process-oriented modelling techniques. Because coal supply curves can convey so succinctly information about the resource's long-run supply potential and costs, they have been influential in several major public debates on energy policy. And, within the coal industry itself, they have proved to be powerful tools for undertaking market research and long-range planning. The purpose of this paper is to describe in brief the various approaches that have been used to model long-run coal supply, to highlight their strengths, and to identify areas in which further progress is needed. (author)
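    A minimal sketch of the "potential supply curve" idea, with invented deposits and costs (not actual coal data), is to sort resources by unit cost and accumulate tonnage, so that marginal cost is a function of cumulative production:

```python
# Illustrative long-run ("potential") supply curve: unit cost versus cumulative production.
# The deposits and costs below are invented for illustration only.
deposits = [  # (recoverable tonnage in Mt, unit cost in $/t)
    (400, 28.0), (250, 35.0), (600, 41.0), (150, 55.0), (300, 62.0),
]
deposits.sort(key=lambda d: d[1])          # cheapest resources are depleted first

cum = 0.0
curve = []                                  # steps of (cumulative Mt, marginal cost $/t)
for tonnage, cost in deposits:
    cum += tonnage
    curve.append((cum, cost))

def marginal_cost(cumulative_mt):
    """Cost of the next tonne after `cumulative_mt` Mt have already been produced."""
    for cum_mt, cost in curve:
        if cumulative_mt < cum_mt:
            return cost
    return float("inf")                     # resource exhausted

print(marginal_cost(500.0))                 # -> 35.0 with the hypothetical deposits above
```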

  12. Integrating spatio-temporal environmental models for planning ski runs

    NARCIS (Netherlands)

    Pfeffer, Karin

    2003-01-01

    The establishment of ski runs and ski lifts, the action of skiing and maintenance of ski runs may cause considerable environmental impact. Clearly, for improvements to be made in the planning of ski runs in alpine terrain a good understanding of the environmental system and the response of

  13. Synthane Pilot Plant, Bruceton, Pa. Run report No. 1. Operating period: July--December 1976

    Energy Technology Data Exchange (ETDEWEB)

    1976-01-01

    Test Directive No. 1 provided the operating conditions and process requirements for the first coal to be gasified in the Synthane Pilot Plant. Rosebud coal, which is a western sub-bituminous coal, was chosen by DOE because of its non-caking properties and reactivity. This report summarizes and presents the data obtained. The pilot plant produced gas for a total of 228 hours and gasified 709 tons of Rosebud coal from July 7 to December 20, 1976. Most of this period was spent in achieving process reliability and learning how to operate and control the gasifier. A significant number of equipment and process changes were required to achieve successful operation of the coal grinding and handling facilities, the Petrocarb feed system, and the char handling facilities. A complete revision of all gasifier instrumentation was necessary to achieve good control. Twenty-one test runs were accomplished, the longest of which was 37 hours. During this run, carbon conversions of 57 to 60% were achieved at bed temperatures of 1450 to 1475 °F. Earlier attempts to operate the gasifier with bed temperatures of 1550 and 1650 °F resulted in clinker formation in the gasifier and the inability to remove char. Test Directive No. 1 was discontinued in January 1977, without meeting the directive's goals because the process conditions of free fall of coal feed into the Synthane gasifier resulted in excessive quantities of tar and fines carryover into the gas scrubbing area. Each time the gasifier was opened after a run, the internal cyclone dip leg was found to be plugged solidly with hard tar and fines. The gas scrubbing equipment was always badly fouled with char and tar requiring an extensive and difficult cleanout. Packing in the gas scrubber had to be completely changed twice due to extensive fouling.

  14. Bayesian operational risk models

    OpenAIRE

    Silvia Figini; Lijun Gao; Paolo Giudici

    2013-01-01

    Operational risk is hard to quantify, due to the presence of heavy tailed loss distributions. Extreme value distributions, used in this context, are very sensitive to the data, and this is a problem in the presence of rare loss data. Self risk assessment questionnaires, if properly modelled, may provide the missing piece of information that is necessary to adequately estimate operational risks. In this paper we propose to embody self risk assessment data into suitable prior distributions, and ...
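    The idea of embodying self-assessment data in a prior can be sketched with a toy conjugate update (invented numbers; lognormal severity with known log-scale, which is not necessarily the authors' model):

```python
import numpy as np

# Hypothetical sketch: a prior elicited from self risk assessment questionnaires is
# combined with scarce observed losses. Severity is lognormal and the questionnaire
# informs a normal prior on the log-mean mu (log-sd sigma assumed known). Numbers invented.
rng = np.random.default_rng(1)

sigma = 1.2                              # assumed known log-scale of the severity distribution
mu_prior, sd_prior = np.log(50e3), 0.8   # prior from self-assessment: typical loss ~ 50k

losses = rng.lognormal(mean=np.log(80e3), sigma=sigma, size=6)   # only 6 observed losses
y = np.log(losses)

# Conjugate normal-normal update for the posterior of mu
prec_post = 1 / sd_prior**2 + len(y) / sigma**2
mu_post = (mu_prior / sd_prior**2 + y.sum() / sigma**2) / prec_post
sd_post = prec_post ** -0.5

# Posterior predictive 99.9% quantile of a single loss, a crude "op-risk capital" proxy
draws = rng.lognormal(rng.normal(mu_post, sd_post, 100_000), sigma)
print(f"posterior mean log-loss: {mu_post:.2f}, 99.9% loss quantile: {np.quantile(draws, 0.999):,.0f}")
```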

  15. The ATLAS Level-1 Topological Trigger Design and Operation in Run-2

    CERN Document Server

    Igonkina, Olga; The ATLAS collaboration

    2018-01-01

    The ATLAS Level-1 Trigger system performs initial event selection using data from calorimeters and the muon spectrometer to reduce the LHC collision event rate down to about 100 kHz. Trigger decisions from the different sub-systems are combined in the Central Trigger Processor for the final Level-1 decision. A new FPGA-based AdvancedTCA sub-system was introduced to calculate complex kinematic observables in real time: the Topological Processor System. It was installed during the shutdown; commissioning started in 2015 and continued during 2016. The design and operation of the Level-1 Topological Trigger in Run-2 will be illustrated.
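    The kind of topological selection such a processor computes in real time can be illustrated schematically (invented thresholds and inputs; this is not the ATLAS firmware algorithm):

```python
import math

# Hypothetical topological selection on the two leading jets: an azimuthal-angle
# separation cut combined with an invariant-mass requirement.
def delta_phi(phi1, phi2):
    d = abs(phi1 - phi2)
    return 2 * math.pi - d if d > math.pi else d

def inv_mass(et1, eta1, phi1, et2, eta2, phi2):
    # massless approximation: m^2 = 2 ET1 ET2 (cosh(d_eta) - cos(d_phi))
    return math.sqrt(2 * et1 * et2 * (math.cosh(eta1 - eta2) - math.cos(phi1 - phi2)))

def topo_decision(jets, mjj_min=400.0, dphi_max=2.7):
    """jets: list of (ET [GeV], eta, phi), assumed ET-ordered."""
    if len(jets) < 2:
        return False
    (et1, eta1, phi1), (et2, eta2, phi2) = jets[0], jets[1]
    return (inv_mass(et1, eta1, phi1, et2, eta2, phi2) > mjj_min
            and delta_phi(phi1, phi2) < dphi_max)

print(topo_decision([(120.0, 0.5, 0.1), (80.0, -2.0, 2.0)]))
```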

  16. Minimum Bias Trigger Scintillators for ATLAS: Commissioning and Run 2 Initial Operation

    CERN Document Server

    Dano Hoffmann, Maria; The ATLAS collaboration

    2015-01-01

    The Minimum Bias Trigger Scintillators (MBTS) delivered the primary trigger for selecting events from low luminosity proton-proton, lead-lead and lead-proton collisions with the smallest possible bias during LHC Run 1 (2009-2013). Similarly, the MBTS will select events for the first Run 2 physics measurements, for instance charge multiplicity, proton-proton cross section, rapidity gap measurements, etc. at the unprecedented 13 TeV center of mass energy of proton-proton collisions. We will review the upgrades to the MBTS detector that have been implemented during the 2013-2014 shutdown. New scintillators have been installed to replace the radiation damaged ones, a modified optical readout scheme has been adopted to increase the light yield, and an improved data acquisition chain has been used to cope with the few issues observed during Run 1 operations. Since late 2014, the MBTS have been commissioned during cosmic data taking, first LHC beam splashes and single beam LHC fills. The goal is to have a fully commissi...

  17. Operations and Modeling Analysis

    Science.gov (United States)

    Ebeling, Charles

    2005-01-01

    The Reliability and Maintainability Analysis Tool (RMAT) provides NASA the capability to estimate reliability and maintainability (R&M) parameters and operational support requirements for proposed space vehicles based upon relationships established from both aircraft and Shuttle R&M data. RMAT has matured both in its underlying database and in its level of sophistication in extrapolating this historical data to satisfy proposed mission requirements, maintenance concepts and policies, and type of vehicle (i.e. ranging from aircraft like to shuttle like). However, a companion analysis tool, the Logistics Cost Model (LCM), has not reached the same level of maturity as RMAT due, in large part, to nonexistent or outdated cost estimating relationships and underlying cost databases, and its almost exclusive dependence on Shuttle operations and logistics cost input parameters. As a result, the full capability of the RMAT/LCM suite of analysis tools to take a conceptual vehicle and derive its operations and support requirements along with the resulting operating and support costs has not been realized.

  18. Operating Security System Support for Run-Time Security with a Trusted Execution Environment

    DEFF Research Database (Denmark)

    Gonzalez, Javier

    , it is safe to assume that any complex software is compromised. The problem is then to monitor and contain it when it executes in order to protect sensitive data and other sensitive assets. To really have an impact, any solution to this problem should be integrated in commodity operating systems...... in the Linux operating system. We are in the process of making this driver part of the mainline Linux kernel.......Software services have become an integral part of our daily life. Cyber-attacks have thus become a problem of increasing importance not only for the IT industry, but for society at large. A way to contain cyber-attacks is to guarantee the integrity of IT systems at run-time. Put differently...

  19. Operation and performance of the CMS Resistive Plate Chambers during LHC run II

    CERN Document Server

    Eysermans, Jan

    2017-01-01

    The Resistive Plate Chambers (RPC) at the Compact Muon Solenoid (CMS) experiment at the CERN Large Hadron Collider (LHC) provide redundancy to the Drift Tubes in the barrel and Cathode Strip Chambers in the endcap regions. The system consists of 1056 double-gap RPC chambers, whose main detector parameters and environmental conditions are carefully monitored during the data-taking period. At a center of mass energy of 13 TeV, the luminosity reached record levels, which was challenging from the operational and performance point of view. In this work, the main operational parameters are discussed and the overall performance of the RPC system is reported for the LHC run II data-taking period. With a small number of inactive chambers, good and stable detector performance was achieved with high efficiency.

  20. CMS operations for Run II preparation and commissioning of the offline infrastructure

    CERN Document Server

    Cerminara, Gianluca

    2016-01-01

    The restart of the LHC coincided with a period of intense activity for the CMS experiment. Both at the beginning of Run II in 2015 and at the restart of operations in 2016, the collaboration was engaged in an extensive re-commissioning of the CMS data-taking operations. After the long stop, the detector was fully aligned and calibrated. Data streams were redesigned to fit the priorities dictated by the physics program for 2015 and 2016. New reconstruction software (both online and offline) was commissioned with early collisions and further developed during the year. A massive campaign of Monte Carlo production was launched to assist physics analyses. This presentation reviews the main events of this commissioning journey and describes the status of CMS physics performance for 2016.

  1. Dynamical system approach to running Λ cosmological models

    International Nuclear Information System (INIS)

    Stachowski, Aleksander; Szydlowski, Marek

    2016-01-01

    We study the dynamics of cosmological models with a time-dependent cosmological term. We consider five classes of models: two with a non-covariant parametrization of the cosmological term Λ, namely Λ(H)CDM and Λ(a)CDM cosmologies, and three with a covariant parametrization of Λ: Λ(R)CDM cosmologies, where R(t) is the Ricci scalar, Λ(φ)-cosmologies with diffusion, and Λ(X)-cosmologies, where X = (1/2) g^{αβ}∇_α∇_β φ is the kinetic part of the density of the scalar field. We also consider the case of an emergent Λ(a) relation obtained from the behaviour of trajectories in a neighbourhood of an invariant submanifold. In the study of the dynamics we used dynamical system methods for investigating how an evolutionary scenario can depend on the choice of special initial conditions. We show that the methods of dynamical systems allow one to investigate all admissible solutions of a running Λ cosmology for all initial conditions. We interpret Alcaniz and Lima's approach as a scaling cosmology. We formulate the idea of an emergent cosmological term derived directly from an approximation of the exact dynamics. We show that some non-covariant parametrizations of the cosmological term, like Λ(a) and Λ(H), give rise to non-physical behaviour of trajectories in the phase space. This behaviour disappears if the term Λ(a) is emergent from the covariant parametrization. (orig.)
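    A toy version of such a dynamical-system treatment, for an assumed non-covariant parametrization Λ(H) = Λ0 + αH² (illustrative only, not the paper's analysis), can be integrated in a few lines using units with 8πG = c = 1:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Toy running-Lambda cosmology as an autonomous system in (H, rho_m).
# Assumed parametrization: Lambda(H) = L0 + alpha*H^2; matter absorbs the Lambda variation.
L0, alpha = 2.0, 0.1

def rhs(t, y):
    H, rho_m = y
    dH = -0.5 * rho_m                                  # dot(H) = -(rho + p)/2, pressureless matter
    drho = -3.0 * H * rho_m - 2.0 * alpha * H * dH     # continuity with energy exchange
    return [dH, drho]

# initial conditions chosen so the Friedmann constraint 3H^2 = rho_m + Lambda(H) holds
H0 = 1.5
rho0 = 3 * H0**2 - (L0 + alpha * H0**2)
sol = solve_ivp(rhs, (0.0, 10.0), [H0, rho0], rtol=1e-8)

H_end, rho_end = sol.y[:, -1]
print(f"H -> {H_end:.3f} (de Sitter fixed point sqrt(L0/(3-alpha)) = {np.sqrt(L0/(3-alpha)):.3f})")
```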

  2. The Run 2 ATLAS Analysis Event Data Model

    CERN Document Server

    SNYDER, S; The ATLAS collaboration; NOWAK, M; EIFERT, T; BUCKLEY, A; ELSING, M; GILLBERG, D; MOYSE, E; KOENEKE, K; KRASZNAHORKAY, A

    2014-01-01

    During the LHC's first Long Shutdown (LS1) ATLAS set out to establish a new analysis model, based on the experience gained during Run 1. A key component of this is a new Event Data Model (EDM), called the xAOD. This format, which is now in production, provides the following features: A separation of the EDM into interface classes that the user code directly interacts with, and data storage classes that hold the payload data. The user sees an Array of Structs (AoS) interface, while the data is stored in a Struct of Arrays (SoA) format in memory, thus making it possible to efficiently auto-vectorise reconstruction code. A simple way of augmenting and reducing the information saved for different data objects. This makes it possible to easily decorate objects with new properties during data analysis, and to remove properties that the analysis does not need. A persistent file format that can be explored directly with ROOT, either with or without loading any additional libraries. This allows fast interactive naviga...
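    The auxiliary-store idea (an object-like interface over columnar storage, with user decorations as extra columns) can be mimicked in a few lines of Python, purely as an analogy to the C++ design described above:

```python
import numpy as np

# "Struct of arrays" storage: one contiguous array per property, object-style access on top.
class ElectronContainer:
    def __init__(self, pt, eta, phi):
        self.aux = {"pt": np.asarray(pt), "eta": np.asarray(eta), "phi": np.asarray(phi)}

    def add_decoration(self, name, values):
        # augmenting objects with a new user-defined property is just adding a column
        self.aux[name] = np.asarray(values)

    def __getitem__(self, i):
        # "array of structs" view for the user: a lightweight proxy into the columns
        return {k: v[i] for k, v in self.aux.items()}

    def __len__(self):
        return len(self.aux["pt"])

els = ElectronContainer(pt=[45.2, 23.1, 81.0], eta=[0.3, -1.2, 2.1], phi=[1.0, -2.4, 0.2])
els.add_decoration("isSignal", [True, False, True])
print(els[0])                       # object-like access
print(els.aux["pt"].mean())         # columnar access, vectorised over the whole container
```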

  3. Statistical Design of an Adaptive Synthetic X̄ Control Chart with Run Rule on Service and Management Operation

    Directory of Open Access Journals (Sweden)

    Shucheng Yu

    2016-01-01

    Full Text Available An improved synthetic X̄ control chart based on hybrid adaptive scheme and run rule scheme is introduced to enhance the statistical performance of traditional synthetic X̄ control chart on service and management operation. The proposed scientific hybrid adaptive schemes consider both variable sampling interval and variable sample size scheme. The properties of the proposed chart are obtained using Markov chain approach. An extensive set of numerical results is presented to test the effectiveness of the proposed model in detecting small and moderate shifts in the process mean. The results show that the proposed chart is quicker than the standard synthetic X̄ chart and CUSUM chart in detecting small and moderate shifts in the process of service and management operation.
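    For intuition, the average run length (ARL) of a plain, non-adaptive synthetic X̄ chart can be estimated by Monte Carlo (illustrative parameters; the paper's design uses a Markov chain and adds the adaptive and run-rule features):

```python
import numpy as np

# Monte Carlo sketch of a basic synthetic X-bar chart: a sample is "nonconforming" if its
# mean falls outside +/- k*sigma/sqrt(n); the chart signals when two nonconforming samples
# occur within L samples of each other (with a virtual nonconforming sample at time 0).
rng = np.random.default_rng(7)

def run_length(shift, n=5, k=2.0, L=4, max_samples=100_000):
    """Samples until the chart signals, for a process mean shifted by `shift` sigma."""
    limit = k / np.sqrt(n)                 # X-bar control limits in units of sigma
    last_nc = 0                            # virtual nonconforming sample at time 0
    for i in range(1, max_samples + 1):
        xbar = rng.normal(shift, 1 / np.sqrt(n))
        if abs(xbar) > limit:              # nonconforming sample
            if i - last_nc <= L:
                return i                   # conforming run length <= L  ->  signal
            last_nc = i
    return max_samples

for shift in (0.0, 0.5, 1.0):
    arl = np.mean([run_length(shift) for _ in range(2000)])
    print(f"shift = {shift:.1f} sigma  ->  estimated ARL ~ {arl:.1f}")
```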

  4. Modelling of Muscle Force Distributions During Barefoot and Shod Running

    Directory of Open Access Journals (Sweden)

    Sinclair Jonathan

    2015-09-01

    Full Text Available Research interest in barefoot running has expanded considerably in recent years, based around the notion that running without shoes is associated with a reduced incidence of chronic injuries. The aim of the current investigation was to examine the differences in the forces produced by different skeletal muscles during barefoot and shod running. Fifteen male participants ran at 4.0 m·s-1 (± 5%). Kinematics were measured using an eight-camera motion analysis system alongside ground reaction force parameters. Differences in sagittal plane kinematics and muscle forces between footwear conditions were examined using repeated measures or Friedman’s ANOVA. The kinematic analysis showed that the shod condition was associated with significantly more hip flexion, whilst barefoot running was linked with significantly more flexion at the knee and plantarflexion at the ankle. The examination of muscle kinetics indicated that peak forces from the Rectus femoris, Vastus medialis, Vastus lateralis and Tibialis anterior were significantly larger in the shod condition whereas Gastrocnemius forces were significantly larger during barefoot running. These observations provide further insight into the mechanical alterations that runners make when running without shoes. Such findings may also deliver important information to runners regarding their susceptibility to chronic injuries in different footwear conditions.

  5. Effects of Yaw Error on Wind Turbine Running Characteristics Based on the Equivalent Wind Speed Model

    Directory of Open Access Journals (Sweden)

    Shuting Wan

    2015-06-01

    Full Text Available Natural wind is stochastic, being characterized by its speed and direction which change randomly and frequently. Because of the certain lag in control systems and the yaw body itself, wind turbines cannot be accurately aligned toward the wind direction when the wind speed and wind direction change frequently. Thus, wind turbines often suffer from a series of engineering issues during operation, including frequent yaw, vibration overruns and downtime. This paper aims to study the effects of yaw error on wind turbine running characteristics at different wind speeds and control stages by establishing a wind turbine model, yaw error model and the equivalent wind speed model that includes the wind shear and tower shadow effects. Formulas for the relevant effect coefficients Tc, Sc and Pc were derived. The simulation results indicate that the effects of the aerodynamic torque, rotor speed and power output due to yaw error at different running stages are different and that the effect rules for each coefficient are not identical when the yaw error varies. These results may provide theoretical support for optimizing the yaw control strategies for each stage to increase the running stability of wind turbines and the utilization rate of wind energy.
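    A rough sketch of an equivalent wind speed that averages shear and tower-shadow effects over the rotor, together with the commonly used cos³ approximation for yaw-error power loss, is shown below (all parameters and the sector gating are invented for illustration, not the paper's model or its coefficients Tc, Sc, Pc):

```python
import numpy as np

# Illustrative equivalent wind speed with wind shear (power law) and a potential-flow
# tower-shadow deficit, plus the common cos^3 yaw-loss approximation. Values are invented.
v_hub, h_hub, R = 10.0, 80.0, 40.0     # hub wind speed [m/s], hub height [m], rotor radius [m]
alpha = 0.2                            # wind-shear power-law exponent
a_t, x_t = 2.0, 5.0                    # tower radius [m], blade-tower longitudinal distance [m]

def point_speed(r, theta):
    """Wind speed at a blade element at radius r and azimuth theta (theta = 0: blade down)."""
    z = h_hub - r * np.cos(theta)                     # element height
    v = v_hub * (z / h_hub) ** alpha                  # wind shear (power law)
    if np.cos(theta) > 0.7:                           # element in the lower sector, near the tower
        y = r * np.sin(theta)                         # lateral distance from the tower axis
        v *= 1 - a_t**2 * (x_t**2 - y**2) / (x_t**2 + y**2) ** 2   # tower-shadow deficit
    return v

def equivalent_speed(theta, n_r=20):
    radii = np.linspace(0.1 * R, R, n_r)
    return np.mean([point_speed(r, theta) for r in radii])   # simple (not area-weighted) average

yaw_error = np.deg2rad(15.0)
v_eq = np.mean([equivalent_speed(th) for th in np.linspace(0.0, 2 * np.pi, 36)])
power_factor = np.cos(yaw_error) ** 3                 # widely used yaw-loss approximation
print(f"rotor-averaged equivalent speed ~ {v_eq:.2f} m/s, yaw power factor ~ {power_factor:.3f}")
```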

  6. W-026 integrated engineering cold run operational test report for balance of plant (BOP)

    Energy Technology Data Exchange (ETDEWEB)

    Kersten, J.K.

    1998-02-24

    This Cold Run test is designed to demonstrate the functionality of systems necessary to move waste drums throughout the plant using approved procedures, and the compatibility of these systems to function as an integrated process. This test excludes all internal functions of the gloveboxes. In the interest of efficiency and support of the facility schedule, the initial revision of the test (rev 0) was limited to the following: Receipt and storage of eight overpacked drums, four LLW and four TRU; Receipt, routing, and staging of eleven empty drums to the process area where they will be used later in this test; Receipt, processing, and shipping of two verification drums (Route 9); Receipt, processing, and shipping of two verification drums (Route 1). The above listed operations were tested using the rev 0 test document, through Section 5.4.25. The document was later revised to include movement of all staged drums to and from the LLW and TRU process and RWM gloveboxes. This testing was performed using Sections 5.5 through 5.11 of the rev 1 test document. The primary focus of this test is to prove the functionality of automatic operations for all mechanical and control processes listed. When necessary, the test demonstrates manual mode operations as well. Though the gloveboxes are listed, only waste and empty drum movement to, from, and between the gloveboxes was tested.

  7. W-026 integrated engineering cold run operational test report for balance of plant (BOP)

    International Nuclear Information System (INIS)

    Kersten, J.K.

    1998-01-01

    This Cold Run test is designed to demonstrate the functionality of systems necessary to move waste drums throughout the plant using approved procedures, and the compatibility of these systems to function as an integrated process. This test excludes all internal functions of the gloveboxes. In the interest of efficiency and support of the facility schedule, the initial revision of the test (rev 0) was limited to the following: Receipt and storage of eight overpacked drums, four LLW and four TRU; Receipt, routing, and staging of eleven empty drums to the process area where they will be used later in this test; Receipt, processing, and shipping of two verification drums (Route 9); Receipt, processing, and shipping of two verification drums (Route 1). The above listed operations were tested using the rev 0 test document, through Section 5.4.25. The document was later revised to include movement of all staged drums to and from the LLW and TRU process and RWM gloveboxes. This testing was performed using Sections 5.5 through 5.11 of the rev 1 test document. The primary focus of this test is to prove the functionality of automatic operations for all mechanical and control processes listed. When necessary, the test demonstrates manual mode operations as well. Though the gloveboxes are listed, only waste and empty drum movement to, from, and between the gloveboxes was tested.

  8. How Run-of-River Operation Affects Hydropower Generation and Value

    Science.gov (United States)

    Jager, Henriette I.; Bevelhimer, Mark S.

    2007-12-01

    Regulated rivers in the United States are required to support human water uses while preserving aquatic ecosystems. However, the effectiveness of hydropower license requirements nationwide has not been demonstrated. One requirement that has become more common is “run-of-river” (ROR) operation, which restores a natural flow regime. It is widely believed that ROR requirements (1) are mandated to protect aquatic biota, (2) decrease hydropower generation per unit flow, and (3) decrease energy revenue. We tested these three assumptions by reviewing hydropower projects with license-mandated changes from peaking to ROR operation. We found that ROR operation was often prescribed in states with strong water-quality certification requirements and migratory fish species. Although benefits to aquatic resources were frequently cited, changes were often motivated by other considerations. After controlling for climate, the overall change in annual generation efficiency across projects because of the change in operation was not significant. However, significant decreases were detected at one quarter of individual hydropower projects. As expected, we observed a decrease in flow during peak demand at 7 of 10 projects. At the remaining projects, diurnal fluctuations actually increased because of operation of upstream storage projects. The economic implications of these results, including both producer costs and ecologic benefits, are discussed. We conclude that regional-scale studies of hydropower regulation, such as this one, are long overdue. Public dissemination of flow data, license provisions, and monitoring data by way of on-line access would facilitate regional policy analysis while increasing regulatory transparency and providing feedback to decision makers.

  9. Academic Education Chain Operation Model

    OpenAIRE

    Ruskov, Petko; Ruskov, Andrey

    2007-01-01

    This paper presents an approach for modelling the educational processes as a value added chain. It is an attempt to use a business approach to interpret and compile existing business and educational processes towards reference models and suggest an Academic Education Chain Operation Model. The model can be used to develop an Academic Chain Operation Reference Model.

  10. Approaches in highly parameterized inversion - GENIE, a general model-independent TCP/IP run manager

    Science.gov (United States)

    Muffels, Christopher T.; Schreuder, Willem A.; Doherty, John E.; Karanovic, Marinko; Tonkin, Matthew J.; Hunt, Randall J.; Welter, David E.

    2012-01-01

    GENIE is a model-independent suite of programs that can be used to generally distribute, manage, and execute multiple model runs via the TCP/IP infrastructure. The suite consists of a file distribution interface, a run manager, a run executor, and a routine that can be compiled as part of a program and used to exchange model runs with the run manager. Because communication is via a standard protocol (TCP/IP), any computer connected to the Internet can serve in any of the capacities offered by this suite. Model independence is consistent with the existing template and instruction file protocols of the widely used PEST parameter estimation program. This report describes (1) the problem addressed; (2) the approach used by GENIE to queue, distribute, and retrieve model runs; and (3) user instructions, classes, and functions developed. It also includes (4) an example to illustrate the linking of GENIE with Parallel PEST using the interface routine.
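    The queue-and-distribute pattern can be sketched with plain TCP sockets (a hypothetical minimal protocol for illustration; this is not GENIE's actual protocol or API):

```python
import socket, threading, queue, time

# Hypothetical run manager: queue parameter sets, hand one model run to each worker that
# connects, and collect the result over the same connection.
runs = queue.Queue()
for run_id in range(4):
    runs.put(f"RUN {run_id} hk=1.{run_id}e-4\n")       # made-up parameter payloads

def manager(host="127.0.0.1", port=5599):
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind((host, port))
    srv.listen()
    while not runs.empty():
        conn, _ = srv.accept()
        with conn:
            conn.sendall(runs.get().encode())          # distribute one model run
            print("manager got:", conn.recv(4096).decode().strip())
    srv.close()

def worker(host="127.0.0.1", port=5599):
    with socket.create_connection((host, port)) as c:
        job = c.recv(4096).decode().split()
        # ... here the worker would execute the model run with the given parameters ...
        c.sendall(f"DONE {job[1]} objective=0.42\n".encode())

t = threading.Thread(target=manager)
t.start()
time.sleep(0.2)                                        # let the manager start listening
for _ in range(4):
    worker()
t.join()
```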

  11. mr: A C++ library for the matching and running of the Standard Model parameters

    Science.gov (United States)

    Kniehl, Bernd A.; Pikelner, Andrey F.; Veretin, Oleg L.

    2016-09-01

    We present the C++ program library mr that allows us to reliably calculate the values of the running parameters in the Standard Model at high energy scales. The initial conditions are obtained by relating the running parameters in the MS bar renormalization scheme to observables at lower energies with full two-loop precision. The evolution is then performed in accordance with the renormalization group equations with full three-loop precision. Pure QCD corrections to the matching and running are included through four loops. We also provide a Mathematica interface for this program library. Catalogue identifier: AFAI_v1_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/AFAI_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: GNU General Public License, version 3 No. of lines in distributed program, including test data, etc.: 517613 No. of bytes in distributed program, including test data, etc.: 2358729 Distribution format: tar.gz Programming language: C++. Computer: IBM PC. Operating system: Linux, Mac OS X. RAM: 1 GB Classification: 11.1. External routines: TSIL [1], OdeInt [2], boost [3] Nature of problem: The running parameters of the Standard Model renormalized in the MS bar scheme at some high renormalization scale, which is chosen by the user, are evaluated in perturbation theory as precisely as possible in two steps. First, the initial conditions at the electroweak energy scale are evaluated from the Fermi constant GF and the pole masses of the W, Z, and Higgs bosons and the bottom and top quarks including the full two-loop threshold corrections. Second, the evolution to the high energy scale is performed by numerically solving the renormalization group evolution equations through three loops. Pure QCD corrections to the matching and running are included through four loops. Solution method: Numerical integration of analytic expressions Additional comments: Available for download from URL
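    The flavour of the calculation, reduced to one-loop running of the three gauge couplings from MZ upwards (mr itself is a C++ library performing two-loop matching and three/four-loop running; this Python sketch uses approximate MZ-scale inputs purely for illustration), looks like this:

```python
import numpy as np
from scipy.integrate import solve_ivp

# One-loop SM renormalization group running of the gauge couplings (GUT-normalised g1).
b = np.array([41 / 10, -19 / 6, -7.0])          # one-loop beta-function coefficients

def rge(t, g):                                   # t = ln(mu / MZ)
    return b * g**3 / (16 * np.pi**2)

MZ = 91.1876                                     # GeV
g0 = np.array([0.462, 0.651, 1.218])             # approximate couplings at MZ

mu_high = 1.0e16                                 # GeV, target high scale
sol = solve_ivp(rge, (0.0, np.log(mu_high / MZ)), g0, rtol=1e-8)
g_high = sol.y[:, -1]
alphas = g_high**2 / (4 * np.pi)
print("g_i at 1e16 GeV:", np.round(g_high, 3), " alpha_i^-1:", np.round(1 / alphas, 1))
```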

  12. Short-run and Current Analysis Model in Statistics

    Directory of Open Access Journals (Sweden)

    Constantin Anghelache

    2006-01-01

    Full Text Available The use of short-run statistical indicators is a compulsory requirement of current analysis. A system of EUROSTAT short-run indicators has therefore been set up and is recommended for use by the member countries. On the basis of these indicators, regular (usually monthly) analyses are carried out covering: the dynamics of production; the volume of short-run investment; the development of the turnover; wage evolution; employment; price indexes and the consumer price index (inflation); and the volume of exports and imports, the extent to which imports are covered by exports, and the balance of trade. The EUROSTAT system of conjuncture indicators is conceived as an open system, so that it can be extended or restricted at any moment, allowing indicators to be amended or even removed, depending on the requirements of domestic users as well as on the specific requirements of harmonization and integration. For short-run analysis there is also the World Bank system of conjuncture indicators, which relies on the data sources offered by the World Bank, the World Institute for Resources and the statistics of other international organizations. That system comprises indicators of social and economic development and focuses on three fields: human resources, environment and economic performance. The paper ends with a case study on the situation of Romania, for which all these indicators were used.

  13. 10 km running performance predicted by a multiple linear regression model with allometrically adjusted variables.

    Science.gov (United States)

    Abad, Cesar C C; Barros, Ronaldo V; Bertuzzi, Romulo; Gagliardi, João F L; Lima-Silva, Adriano E; Lambert, Mike I; Pires, Flavio O

    2016-06-01

    The aim of this study was to verify the power of VO2max, peak treadmill running velocity (PTV), and running economy (RE), unadjusted or allometrically adjusted, in predicting 10 km running performance. Eighteen male endurance runners performed: 1) an incremental test to exhaustion to determine VO2max and PTV; 2) a constant submaximal run at 12 km·h-1 on an outdoor track for RE determination; and 3) a 10 km running race. Unadjusted (VO2max, PTV and RE) and adjusted variables (VO2max^0.72, PTV^0.72 and RE^0.60) were investigated through independent multiple regression models to predict 10 km running race time. There were no significant correlations between 10 km running time and either the adjusted or unadjusted VO2max. Significant correlations (p 0.84 and power > 0.88. The allometrically adjusted predictive model was composed of PTV^0.72 and RE^0.60 and explained 83% of the variance in 10 km running time with a standard error of the estimate (SEE) of 1.5 min. The unadjusted model, composed of a single PTV, accounted for 72% of the variance in 10 km running time (SEE of 1.9 min). Both regression models provided powerful estimates of 10 km running time; however, the unadjusted PTV may provide an uncomplicated estimation.
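    The modelling approach can be sketched with synthetic data, applying the quoted allometric exponents to the predictors before an ordinary least-squares fit (illustrative only; the exact adjustment and data in the paper may differ):

```python
import numpy as np

# Synthetic-data sketch: exponents 0.72 (PTV) and 0.60 (RE) applied to the predictors,
# then a multiple linear regression for 10 km time. All values are invented.
rng = np.random.default_rng(0)
n = 18
ptv = rng.normal(19.0, 1.5, n)                    # peak treadmill velocity [km/h]
re = rng.normal(210.0, 15.0, n)                   # running economy [ml/kg/min]
t10k = 75.0 - 1.6 * ptv + 0.05 * re + rng.normal(0.0, 1.2, n)   # invented 10 km time [min]

X = np.column_stack([np.ones(n), ptv**0.72, re**0.60])          # allometrically adjusted terms
coef, *_ = np.linalg.lstsq(X, t10k, rcond=None)
pred = X @ coef

r2 = 1 - np.sum((t10k - pred) ** 2) / np.sum((t10k - t10k.mean()) ** 2)
see = np.sqrt(np.sum((t10k - pred) ** 2) / (n - X.shape[1]))    # standard error of the estimate
print(f"R^2 = {r2:.2f}, SEE = {see:.2f} min")
```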

  14. Operational Risk Modeling

    OpenAIRE

    Gabriela ANGHELACHE; Ana Cornelia OLTEANU

    2011-01-01

    Operational risk losses result from a complex interaction between organizational factors, personnel and market participants that does not fit a simple classification scheme. Taking into account past losses (e.g. Barings, Daiwa, etc.) we can say that operational risk is a major source of financial losses in the banking sector, although until recently it has been underestimated and considered generally minor, even though it can put the survival of a bank at stake.

  15. Operational Risk Modeling

    Directory of Open Access Journals (Sweden)

    Gabriela ANGHELACHE

    2011-06-01

    Full Text Available Operational risk losses result from a complex interaction between organizational factors, personnel and market participants that does not fit a simple classification scheme. Taking into account past losses (e.g. Barings, Daiwa, etc.) we can say that operational risk is a major source of financial losses in the banking sector, although until recently it has been underestimated and considered generally minor, even though it can put the survival of a bank at stake.

  16. Systems-level computational modeling demonstrates fuel selection switching in high capacity running and low capacity running rats

    Science.gov (United States)

    Qi, Nathan R.

    2018-01-01

    High capacity and low capacity running rats, HCR and LCR respectively, have been bred to represent two extremes of running endurance and have recently demonstrated disparities in fuel usage during transient aerobic exercise. HCR rats can maintain fatty acid (FA) utilization throughout the course of transient aerobic exercise whereas LCR rats rely predominantly on glucose utilization. We hypothesized that the difference between HCR and LCR fuel utilization could be explained by a difference in mitochondrial density. To test this hypothesis and to investigate mechanisms of fuel selection, we used a constraint-based kinetic analysis of whole-body metabolism to analyze transient exercise data from these rats. Our model analysis used a thermodynamically constrained kinetic framework that accounts for glycolysis, the TCA cycle, and mitochondrial FA transport and oxidation. The model can effectively match the observed relative rates of oxidation of glucose versus FA, as a function of ATP demand. In searching for the minimal differences required to explain metabolic function in HCR versus LCR rats, it was determined that the whole-body metabolic phenotype of LCR, compared to the HCR, could be explained by a ~50% reduction in total mitochondrial activity with an additional 5-fold reduction in mitochondrial FA transport activity. Finally, we postulate that over sustained periods of exercise that LCR can partly overcome the initial deficit in FA catabolic activity by upregulating FA transport and/or oxidation processes. PMID:29474500

  17. Models of human operators

    International Nuclear Information System (INIS)

    Knee, H.E.; Schryver, J.C.

    1991-01-01

    Models of human behavior and cognition (HB and C) are necessary for understanding the total response of complex systems. Many such models have become available over the past thirty years for various applications. Unfortunately, many potential model users remain skeptical about their practicality, acceptability, and usefulness. Such hesitancy stems in part from disbelief in the ability to model complex cognitive processes, and from a belief that relevant human behavior can be adequately accounted for through the use of commonsense heuristics. This paper will highlight several models of HB and C and identify existing and potential applications in an attempt to dispel such notions. (author)

  18. Impaired voluntary wheel running behavior in the unilateral 6-hydroxydopamine rat model of Parkinson's disease.

    Science.gov (United States)

    Pan, Qi; Zhang, Wangming; Wang, Jinyan; Luo, Fei; Chang, Jingyu; Xu, Ruxiang

    2015-02-01

    The aim of this study was to investigate voluntary wheel running behavior in the unilateral 6-hydroxydopamine (6-OHDA) rat model. Male Sprague-Dawley rats were assigned to 2 groups : 6-OHDA group (n=17) and control group (n=8). The unilateral 6-OHDA rat model was induced by injection of 6-OHDA into unilateral medial forebrain bundle using a stereotaxic instrument. Voluntary wheel running activity was assessed per day in successfully lesioned rats (n=10) and control rats. Each behavioral test lasted an hour. The following parameters were investigated during behavioral tests : the number of running bouts, the distance moved in the wheel, average peak speed in running bouts and average duration from the running start to the peak speed. The number of running bouts and the distance moved in the wheel were significantly decreased in successfully lesioned rats compared with control rats. In addition, average peak speed in running bouts was decreased, and average duration from the running start to the peak speed was increased in lesioned animals, which might indicate motor deficits in these rats. These behavioral changes were still observed 42 days after lesion. Voluntary wheel running behavior is impaired in the unilateral 6-OHDA rat model and may represent a useful tool to quantify motor deficits in this model.

  19. Fast Running Urban Dispersion Model for Radiological Dispersal Device (RDD) Releases: Model Description and Validation

    Energy Technology Data Exchange (ETDEWEB)

    Gowardhan, Akshay [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States). National Atmospheric Release Advisory Center (NARAC); Neuscamman, Stephanie [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States). National Atmospheric Release Advisory Center (NARAC); Donetti, John [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States). National Atmospheric Release Advisory Center (NARAC); Walker, Hoyt [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States). National Atmospheric Release Advisory Center (NARAC); Belles, Rich [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States). National Atmospheric Release Advisory Center (NARAC); Eme, Bill [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States). National Atmospheric Release Advisory Center (NARAC); Homann, Steven [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States). National Atmospheric Release Advisory Center (NARAC); Simpson, Matthew [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States). National Atmospheric Release Advisory Center (NARAC); Nasstrom, John [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States). National Atmospheric Release Advisory Center (NARAC)

    2017-05-24

    Aeolus is an efficient three-dimensional computational fluid dynamics code based on the finite volume method, developed for predicting transport and dispersion of contaminants in a complex urban area. It solves the time-dependent incompressible Navier-Stokes equations on a regular Cartesian staggered grid using a fractional step method. It also solves a scalar transport equation for temperature, using the Boussinesq approximation. The model also includes a Lagrangian dispersion model for predicting the transport and dispersion of atmospheric contaminants. The model can be run in an efficient Reynolds-Averaged Navier-Stokes (RANS) mode with a run time of several minutes, or a more detailed Large Eddy Simulation (LES) mode with a run time of hours for a typical simulation. This report describes the model components, including details on the physics models used in the code, as well as several model validation efforts. Aeolus wind and dispersion predictions are compared to field data from the Joint Urban Field Trials 2003 conducted in Oklahoma City (Allwine et al 2004), including both continuous and instantaneous releases. Newly implemented Aeolus capabilities include a decay chain model and an explosive Radiological Dispersal Device (RDD) source term; these capabilities are described. Aeolus predictions using the buoyant explosive RDD source are validated against two experimental data sets: the Green Field explosive cloud rise experiments conducted in Israel (Sharon et al 2012) and the Full-Scale RDD Field Trials conducted in Canada (Green et al 2016).
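    The Lagrangian dispersion component can be illustrated with a minimal random-walk particle model on top of an assumed mean wind field (this is a generic sketch, not Aeolus source code, and all values are assumptions):

```python
import numpy as np

# Minimal Lagrangian dispersion step: particles advected by a mean wind plus a zero-mean
# turbulent fluctuation, with reflection at the ground.
rng = np.random.default_rng(3)

n_p = 10_000
pos = np.zeros((n_p, 3))                      # particles released at the origin [m]
dt, n_steps = 1.0, 600                        # 10 minutes of transport
sigma_u = np.array([0.8, 0.8, 0.3])           # assumed turbulent velocity scales [m/s]

def mean_wind(p):
    """Assumed mean wind field: 3 m/s along x, zero elsewhere."""
    u = np.zeros_like(p)
    u[:, 0] = 3.0
    return u

for _ in range(n_steps):
    turb = rng.normal(0.0, sigma_u, size=pos.shape)    # turbulent fluctuation per component
    pos += (mean_wind(pos) + turb) * dt
    pos[:, 2] = np.abs(pos[:, 2])                      # reflect particles at the ground

print("plume centre [m]:", pos.mean(axis=0).round(1))
print("lateral spread sigma_y [m]:", pos[:, 1].std().round(1))
```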

  20. The Trick Simulation Toolkit: A NASA/Opensource Framework for Running Time Based Physics Models

    Science.gov (United States)

    Penn, John M.

    2016-01-01

    The Trick Simulation Toolkit is a simulation development environment used to create high fidelity training and engineering simulations at the NASA Johnson Space Center and many other NASA facilities. Its purpose is to generate a simulation executable from a collection of user-supplied models and a simulation definition file. For each Trick-based simulation, Trick automatically provides job scheduling, numerical integration, the ability to write and restore human readable checkpoints, data recording, interactive variable manipulation, a run-time interpreter, and many other commonly needed capabilities. This allows simulation developers to concentrate on their domain expertise and the algorithms and equations of their models. Also included in Trick are tools for plotting recorded data and various other supporting utilities and libraries. Trick is written in C/C++ and Java and supports both Linux and MacOSX computer operating systems. This paper describes Trick's design and use at NASA Johnson Space Center.
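    The scheduling-plus-integration pattern such a framework provides can be sketched as follows (a hypothetical Python illustration; Trick's actual API is in C/C++ and Java):

```python
# User "jobs" run at their own rates around a fixed-step RK4 integrator of a toy model.
def deriv(state):                       # toy plant model: undamped mass-spring system
    x, v = state
    return [v, -4.0 * x]

def rk4(state, dt):
    def add(s, k, h):
        return [si + h * ki for si, ki in zip(s, k)]
    k1 = deriv(state)
    k2 = deriv(add(state, k1, dt / 2))
    k3 = deriv(add(state, k2, dt / 2))
    k4 = deriv(add(state, k3, dt))
    return [s + dt / 6 * (a + 2 * b + 2 * c + d)
            for s, a, b, c, d in zip(state, k1, k2, k3, k4)]

def logger(t, state):                   # a scheduled job that records data at its own rate
    print(f"t={t:5.2f}  x={state[0]:+.3f}")

jobs = [(0.25, logger)]                 # (period [s], callable)
state, dt, n_steps = [1.0, 0.0], 0.01, 100

for i in range(n_steps):
    t = i * dt
    for period, job in jobs:
        if i % round(period / dt) == 0:
            job(t, state)
    state = rk4(state, dt)
```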

  1. The ATLAS Run-2 Trigger Menu for higher luminosities: Design, Performance and Operational Aspects

    CERN Document Server

    Ruiz-Martinez, Aranzazu; The ATLAS collaboration

    2017-01-01

    The ATLAS experiment aims at recording about 1 kHz of physics collisions, starting with an LHC design bunch crossing rate of 40 MHz. To reduce the massive background rate while maintaining a high selection efficiency for rare physics events (such as beyond the Standard Model physics), a two-level trigger system is used. Events are selected based on physics signatures such as presence of energetic leptons, photons, jets or large missing energy. The trigger system exploits topological information, as well as multi-variate methods to carry out the necessary physics filtering. In total, the ATLAS online selection consists of thousands of different individual triggers. A trigger menu is a compilation of these triggers which specifies the physics algorithms to be used during data taking and the bandwidth a given trigger is allocated. Trigger menus reflect not only the physics goals of the collaboration for a given run, but also take into consideration the instantaneous luminosity of the LHC and limitations from the...

  2. The ATLAS Run-2 Trigger Menu for higher luminosities: Design, Performance and Operational Aspects

    CERN Document Server

    Torro Pastor, Emma; The ATLAS collaboration

    2018-01-01

    The ATLAS experiment aims at recording about 1 kHz of physics collisions, starting with an LHC design bunch crossing rate of 40 MHz. To reduce the massive background rate while maintaining a high selection efficiency for rare physics events (such as beyond the Standard Model physics), a two-level trigger system is used. Events are selected based on physics signatures such as presence of energetic leptons, photons, jets or large missing energy. The trigger system exploits topological information, as well as multi-variate methods to carry out the necessary physics filtering. In total, the ATLAS online selection consists of thousands of different individual triggers. A trigger menu is a compilation of these triggers which specifies the physics algorithms to be used during data taking and the bandwidth a given trigger is allocated. Trigger menus reflect not only the physics goals of the collaboration for a given run, but also take into consideration the instantaneous luminosity of the LHC and limitations from the...

  3. Modeling the Frequency of Cyclists’ Red-Light Running Behavior Using Bayesian PG Model and PLN Model

    Directory of Open Access Journals (Sweden)

    Yao Wu

    2016-01-01

    Full Text Available Red-light running behaviors of bicycles at signalized intersections lead to a large number of traffic conflicts and high collision potentials. The primary objective of this study is to model the cyclists’ red-light running frequency within the framework of Bayesian statistics. Data was collected at twenty-five approaches at seventeen signalized intersections. The Poisson-gamma (PG) and Poisson-lognormal (PLN) models were developed and compared. The models were validated using Bayesian p values based on posterior predictive checking indicators. It was found that the two models fit the observed cyclists’ red-light running frequency well. Furthermore, the PLN model outperformed the PG model. The model estimation results showed that the amount of cyclists’ red-light running is significantly influenced by bicycle flow, conflict traffic flow, pedestrian signal type, vehicle speed, and e-bike rate. The validation result demonstrated the reliability of the PLN model. The research results can help transportation professionals to predict the expected amount of the cyclists’ red-light running and develop effective guidelines or policies to reduce red-light running frequency of bicycles at signalized intersections.
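    The difference between the two mixed-Poisson count models can be seen in a quick simulation with matched means (invented numbers, not the paper's data or fitted estimates):

```python
import numpy as np

# Poisson-gamma (negative binomial) versus Poisson-lognormal counts with the same mean,
# illustrating the different overdispersion structures compared in the paper.
rng = np.random.default_rng(11)
n_sites, mean_rate = 25, 6.0          # e.g. red-light-running cyclists per site per hour

# Poisson-gamma: lambda ~ Gamma(shape=k, scale=mean/k)
k = 2.0
lam_pg = rng.gamma(shape=k, scale=mean_rate / k, size=n_sites)
y_pg = rng.poisson(lam_pg)

# Poisson-lognormal: lambda = exp(normal), log-mean chosen so that E[lambda] = mean_rate
sigma = 0.7
mu = np.log(mean_rate) - 0.5 * sigma**2
lam_pln = rng.lognormal(mu, sigma, size=n_sites)
y_pln = rng.poisson(lam_pln)

for name, y in (("PG ", y_pg), ("PLN", y_pln)):
    print(f"{name}: mean = {y.mean():.2f}, variance = {y.var():.2f}")
```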

  4. Reliability Analysis of the LHC Beam Dumping System Taking into Account the Operational Experience during LHC Run 1

    CERN Document Server

    Filippini, R; Magnin, N; Uythoven, J A

    2014-01-01

    The LHC beam dumping system operated reliably during the Run 1 period of the LHC (2009-2013). A number of internal failures of the beam dumping system occurred that, because of built-in safety features, resulted in a safe removal of the particle beams from the machine, so called “internal beam

  5. An overview of Booster and AGS Polarized Proton Operations during Run 17

    Energy Technology Data Exchange (ETDEWEB)

    Zeno, K. [Brookhaven National Lab. (BNL), Upton, NY (United States)

    2017-10-11

    There were only a few differences in the setup between this year’s Polarized Proton run and the previous one (Run 15). Consequently, this note will focus on these differences as well as a few more notable studies done during the course of the run. This year, the Booster input intensity was kept around 7e11 for the majority of the run whereas in Run 15 it was kept around 9e11. It was lowered because there was some indication that the source polarization was higher with this lower input. Some of the polarization measurements that motivated this change will be discussed. Both the emittance and polarization on the AGS flattop show intensity dependence, thought to be related to the peak current, especially early in the AGS acceleration ramp. In Run 15, the AGS Rf was configured for h=8, but in this run h=6 was used to reduce the peak current and also to allow for the possibility of using a dual harmonic to reduce it further. Eventually, a dual harmonic configuration was used for the first 100 ms or so of the AGS acceleration cycle. Two cavities were set to h=12 and phased differently than the other 8 to accomplish this. Quad pumping was also used at Booster extraction to make the bunch injected into the AGS wider in order to match the dual harmonic bucket right at injection. This configuration, which was used for the majority of the run, will be described. Measurements of the intensity dependence of the transverse emittance and polarization with and without it will be compared.

  6. Academic Education Chain Operation Model

    NARCIS (Netherlands)

    Ruskov, Petko; Ruskov, Andrey

    2007-01-01

    This paper presents an approach for modelling the educational processes as a value added chain. It is an attempt to use a business approach to interpret and compile existing business and educational processes towards reference models and suggest an Academic Education Chain Operation Model. The model

  7. Two-Higgs-doublet model of type II confronted with the LHC run I and run II data

    Science.gov (United States)

    Wang, Lei; Zhang, Feng; Han, Xiao-Fang

    2017-06-01

    We examine the parameter space of the two-Higgs-doublet model of type II after imposing the relevant theoretical and experimental constraints from the precision electroweak data, B-meson decays, and the LHC run I and run II data. We find that the searches for Higgs bosons via the τ+τ-, WW, ZZ, γγ, hh, hZ, HZ, and AZ channels can give strong constraints on the CP-odd Higgs A and the heavy CP-even Higgs H, and the parameter space excluded by each channel is respectively carved out in detail assuming that either mA or mH is fixed to 600 or 700 GeV in the scans. The surviving samples are discussed in two different regions. (i) In the standard model-like coupling region of the 125 GeV Higgs, mA is allowed to be as low as 350 GeV, and a strong upper limit is imposed on tan β. mH is allowed to be as low as 200 GeV for appropriate values of tan β, sin(β-α), and mA, but is required to be larger than 300 GeV for mA=700 GeV. (ii) In the wrong-sign Yukawa coupling region of the 125 GeV Higgs, the bb̄ → A/H → τ+τ- channel can impose upper limits on tan β and sin(β-α), and the A → hZ channel can give lower limits on tan β and sin(β-α). mA and mH are allowed to be as low as 60 and 200 GeV, respectively, but 320 GeV

  8. Exploiting CMS data popularity to model the evolution of data management for Run-2 and beyond

    International Nuclear Information System (INIS)

    Bonacorsi, D; Neri, M; Boccali, T; Giordano, D; Girone, M; Magini, N; Kuznetsov, V; Wildish, T

    2015-01-01

    During the LHC Run-1 data taking, all experiments collected large data volumes from proton-proton and heavy-ion collisions. The collisions data, together with massive volumes of simulated data, were replicated in multiple copies, transferred among various Tier levels, transformed/slimmed in format/content. These data were then accessed (both locally and remotely) by large groups of distributed analysis communities exploiting the WorldWide LHC Computing Grid infrastructure and services. While efficient data placement strategies - together with optimal data redistribution and deletions on demand - have become the core of static versus dynamic data management projects, little effort has so far been invested in understanding the detailed data-access patterns which surfaced in Run-1. These patterns, if understood, can be used as input to simulation of computing models at the LHC, to optimise existing systems by tuning their behaviour, and to explore next-generation CPU/storage/network co-scheduling solutions. This is of great importance, given that the scale of the computing problem will increase far faster than the resources available to the experiments, for Run-2 and beyond. Studying data-access patterns involves the validation of the quality of the monitoring data collected on the “popularity of each dataset, the analysis of the frequency and pattern of accesses to different datasets by analysis end-users, the exploration of different views of the popularity data (by physics activity, by region, by data type), the study of the evolution of Run-1 data exploitation over time, the evaluation of the impact of different data placement and distribution choices on the available network and storage resources and their impact on the computing operations. This work presents some insights from studies on the popularity data from the CMS experiment. We present the properties of a range of physics analysis activities as seen by the data popularity, and make recommendations for

  9. Reliability analysis and operator modelling

    International Nuclear Information System (INIS)

    Hollnagel, Erik

    1996-01-01

    The paper considers the state of operator modelling in reliability analysis. Operator models are needed in reliability analysis because operators are needed in process control systems. HRA methods must therefore be able to account both for human performance variability and for the dynamics of the interaction. A selected set of first generation HRA approaches is briefly described in terms of the operator model they use, their classification principle, and the actual method they propose. In addition, two examples of second generation methods are also considered. It is concluded that first generation HRA methods generally have very simplistic operator models, either referring to the time-reliability relationship or to elementary information processing concepts. It is argued that second generation HRA methods must recognise that cognition is embedded in a context, and be able to account for that in the way human reliability is analysed and assessed

  10. Operating Security System Support for Run-Time Security with a Trusted Execution Environment

    DEFF Research Database (Denmark)

    Gonzalez, Javier

    sensitive assets at run-time that we denote split-enforcement, and provide an implementation for ARM-powered devices using ARM TrustZone security extensions. We design, build, and evaluate a prototype Trusted Cell that provides trusted services. We also present the first generic TrustZone driver....... In this thesis we introduce run-time security primitives that enable a number of trusted services in the context of Linux. These primitives mediate any action involving sensitive data or sensitive assets in order to guarantee their integrity and confidentiality. We introduce a general mechanism to protect...

  11. Operation of the upgraded ATLAS Central Trigger Processor during the LHC Run 2

    DEFF Research Database (Denmark)

    Bertelsen, H.; Montoya, G. Carrillo; Deviveiros, P. O.

    2016-01-01

    The ATLAS Central Trigger Processor (CTP) is responsible for forming the Level-1 trigger decision based on the information from the calorimeter and muon trigger processors. In order to cope with the increase of luminosity and physics cross-sections in Run 2, several components of this system have...

  12. Flood Peak Estimation Using Rainfall Run off Models | Matondo ...

    African Journals Online (AJOL)

    The design of hydraulic structures such as road culverts, road bridges and dam spillways requires the determination of the design flood peak. Two approaches are available in the determination of the design flood peak and these are: flood frequency analysis and rainfall runoff models. Flood frequency analysis requires a ...

  13. A long run intertemporal model of the oil market with uncertainty and strategic interaction

    International Nuclear Information System (INIS)

    Lensberg, T.; Rasmussen, H.

    1991-06-01

    This paper describes a model of the long run price uncertainty in the oil market. The main feature of the model is that the uncertainty about OPEC's price strategy is assumed to be generated not by irrational behavior on the part of OPEC, but by uncertainty about OPEC's size and time preference. The control of OPEC's pricing decision is assumed to shift among a set of OPEC-types over time according to a stochastic process, with each type implementing that price strategy which best fits the interests of its supporters. The model is fully dynamic on the supply side in the sense that all oil producers are assumed to understand the working of OPEC and the oil market; in particular, the non-OPEC producers base their investment decisions on rational price expectations. On the demand side, we assume that the market insight is less developed on the average, and model it by means of a long run demand curve on current prices and a simple lag structure. The long run demand curve for crude oil is generated by a fairly detailed static long-run equilibrium model of the product markets. Preliminary experience with the model indicates that prices are likely to stay below 20 dollars in the foreseeable future, but that prices around 30 dollars may occur if the present long run time perspective of OPEC is abandoned in favor of a more short run one. 26 refs., 4 figs., 7 tabs

  14. Implementation of the ATLAS Run 2 event data model

    CERN Document Server

    Buckley, Andrew; Elsing, Markus; Gillberg, Dag Ingemar; Koeneke, Karsten; Krasznahorkay, Attila; Moyse, Edward; Nowak, Marcin; Snyder, Scott; van Gemmeren, Peter

    2015-01-01

    During the 2013--2014 shutdown of the Large Hadron Collider, ATLAS switched to a new event data model for analysis, called the xAOD. A key feature of this model is the separation of the object data from the objects themselves (the `auxiliary store'). Rather than being stored as member variables of the analysis classes, all object data are stored separately, as vectors of simple values. Thus, the data are stored in a `structure of arrays' format, while the user still can access it as an `array of structures'. This organization allows for on-demand partial reading of objects, the selective removal of object properties, and the addition of arbitrary user-defined properties in a uniform manner. It also improves performance by increasing the locality of memory references in typical analysis code. The resulting data structures can be written to ROOT files with data properties represented as simple ROOT tree branches. This talk will focus on the design and implementation of the auxiliary store and its interaction with RO...

  15. Implementation of the ATLAS Run 2 event data model

    Science.gov (United States)

    Buckley, A.; Eifert, T.; Elsing, M.; Gillberg, D.; Koeneke, K.; Krasznahorkay, A.; Moyse, E.; Nowak, M.; Snyder, S.; van Gemmeren, P.

    2015-12-01

    During the 2013-2014 shutdown of the Large Hadron Collider, ATLAS switched to a new event data model for analysis, called the xAOD. A key feature of this model is the separation of the object data from the objects themselves (the ‘auxiliary store’). Rather than being stored as member variables of the analysis classes, all object data are stored separately, as vectors of simple values. Thus, the data are stored in a ‘structure of arrays’ format, while the user still can access it as an ‘array of structures’. This organization allows for on-demand partial reading of objects, the selective removal of object properties, and the addition of arbitrary user-defined properties in a uniform manner. It also improves performance by increasing the locality of memory references in typical analysis code. The resulting data structures can be written to ROOT files with data properties represented as simple ROOT tree branches. This paper focuses on the design and implementation of the auxiliary store and its interaction with ROOT.
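
    The 'structure of arrays' idea described above can be illustrated with a small sketch. This is not the xAOD or ROOT API; AuxStore, ElectronProxy and the property names below are invented for illustration only.

```python
# Minimal sketch of a "structure of arrays" auxiliary store: object data live
# in per-property vectors, while a thin proxy still reads like an
# "array of structures".
class AuxStore:
    """Holds one flat list per property; new properties can be added at will."""
    def __init__(self):
        self._columns = {}

    def add_column(self, name, values):
        self._columns[name] = list(values)

    def get(self, name, index):
        return self._columns[name][index]


class ElectronProxy:
    """Lightweight view of one object; data are fetched on demand."""
    def __init__(self, store, index):
        self._store, self._index = store, index

    def __getattr__(self, name):
        return self._store.get(name, self._index)


store = AuxStore()
store.add_column("pt",  [41.2, 17.8, 66.0])
store.add_column("eta", [0.31, -1.20, 2.05])
# a user-defined decoration added later, without changing any class
store.add_column("my_isolation", [0.02, 0.15, 0.01])

electrons = [ElectronProxy(store, i) for i in range(3)]
print(electrons[0].pt, electrons[0].my_isolation)   # array-of-structures view
```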

  16. Modelling of Batch Process Operations

    DEFF Research Database (Denmark)

    Abdul Samad, Noor Asma Fazli; Cameron, Ian; Gani, Rafiqul

    2011-01-01

    Here a batch cooling crystalliser is modelled and simulated as is a batch distillation system. In the batch crystalliser four operational modes of the crystalliser are considered, namely: initial cooling, nucleation, crystal growth and product removal. A model generation procedure is shown that s...

  17. Operators Control of Railway Model

    Directory of Open Access Journals (Sweden)

    Roman PAVLAS

    2009-06-01

    This article describes digital control of trains on a model railway, including monitoring and control of the model by an operator through a graphical interface. The constituent components and their functions, the control system and its capabilities, and the software applications needed for the practical realization of the assignment are described. Based on knowledge of safe railway traffic [5], a program was created which controls the movement of trains and the setting of train routes and signalling safety components on the model [7]. A graphical interface is then described which shows the situation on the railway and allows the operator to set train routes and control train movement. A new way of controlling train movement was created.

  18. Run-time Assurance for Safe UAS Operations with Reduced Human Oversight, Phase I

    Data.gov (United States)

    National Aeronautics and Space Administration — Current Unmanned Aircraft Systems (UAS) operations in the National Airspace System (NAS) rely heavily on human oversight, with the majority of commercial operations...

  19. mr. A C++ library for the matching and running of the Standard Model parameters

    International Nuclear Information System (INIS)

    Kniehl, Bernd A.; Veretin, Oleg L.; Pikelner, Andrey F.; Joint Institute for Nuclear Research, Dubna

    2016-01-01

    We present the C++ program library mr that allows us to reliably calculate the values of the running parameters in the Standard Model at high energy scales. The initial conditions are obtained by relating the running parameters in the MS renormalization scheme to observables at lower energies with full two-loop precision. The evolution is then performed in accordance with the renormalization group equations with full three-loop precision. Pure QCD corrections to the matching and running are included through four loops. We also provide a Mathematica interface for this program library.
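
    As an illustration of what "running" means here (and not of the mr library interface, which is C++/Mathematica and implements the full two-loop matching with three- and four-loop evolution), a one-loop sketch of renormalization-group running for the strong coupling might look as follows; the numerical inputs are standard reference values and the number of flavours is held fixed for simplicity.

```python
# One-loop RG running of the strong coupling (illustrative only; thresholds
# and higher-loop terms are ignored).
import math

def alpha_s_one_loop(mu, mu0=91.1876, alpha0=0.1181, nf=5):
    """Evolve alpha_s from the scale mu0 (GeV) to mu (GeV) at one loop."""
    b0 = 11.0 - 2.0 * nf / 3.0
    return alpha0 / (1.0 + alpha0 * b0 / (2.0 * math.pi) * math.log(mu / mu0))

for scale in (91.1876, 1000.0, 1.0e4, 1.0e16):
    print(f"alpha_s({scale:.3g} GeV) ~ {alpha_s_one_loop(scale):.4f}")
```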

  20. Dynamic sensitivity analysis of long running landslide models through basis set expansion and meta-modelling

    Science.gov (United States)

    Rohmer, Jeremy

    2016-04-01

    Predicting the temporal evolution of landslides is typically supported by numerical modelling. Dynamic sensitivity analysis aims at assessing the influence of the landslide properties on the time-dependent predictions (e.g., time series of landslide displacements). Yet two major difficulties arise: 1. Global sensitivity analysis requires running the landslide model a large number of times (> 1000), which may become impracticable when the landslide model has a high computational cost (> several hours); 2. Landslide model outputs are not scalar, but functions of time, i.e. they are n-dimensional vectors with n usually ranging from 100 to 1000. In this article, I explore the use of a basis set expansion, such as principal component analysis, to reduce the output dimensionality to a few components, each of them being interpreted as a dominant mode of variation in the overall structure of the temporal evolution. The computationally intensive calculation of the Sobol' indices for each of these components is then achieved through meta-modelling, i.e. by replacing the landslide model by a "costless-to-evaluate" approximation (e.g., a projection pursuit regression model). The methodology combining "basis set expansion - meta-model - Sobol' indices" is then applied to the La Frasse landslide to investigate the dynamic sensitivity of the surface horizontal displacements to the slip surface properties during pore pressure changes. I show how to extract information on the sensitivity of each main mode of temporal behaviour using a limited number (a few tens) of long running simulations. In particular, I identify the parameters which trigger the occurrence of a turning point marking a shift between a regime of low landslide displacements and one of high displacements.
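
    A compact sketch of the "basis set expansion - meta-model - Sobol' indices" chain described above, using a toy stand-in for the expensive landslide simulator; the function, parameter ranges and emulator choice are all illustrative, not those of the La Frasse study.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)

def landslide_toy(x, t):
    """Stand-in for an expensive simulator: a displacement time series."""
    k, c = x
    return k * (1.0 - np.exp(-c * t))

t = np.linspace(0.0, 10.0, 200)
X_train = rng.uniform([0.5, 0.1], [2.0, 1.0], size=(40, 2))   # a few "long" runs
Y_train = np.array([landslide_toy(x, t) for x in X_train])    # shape (40, 200)

# 1. basis set expansion: reduce the 200-point output to two components
pca = PCA(n_components=2).fit(Y_train)
Z_train = pca.transform(Y_train)

# 2. one cheap meta-model per retained component
emulators = [GradientBoostingRegressor().fit(X_train, Z_train[:, j])
             for j in range(Z_train.shape[1])]

# 3. first-order Sobol' indices estimated on the emulator (pick-and-freeze)
def first_order_sobol(predict, lo, hi, n=20000):
    d = len(lo)
    A = rng.uniform(lo, hi, size=(n, d))
    B = rng.uniform(lo, hi, size=(n, d))
    yA = predict(A)
    indices = []
    for i in range(d):
        ABi = B.copy()
        ABi[:, i] = A[:, i]
        indices.append(np.mean(yA * (predict(ABi) - predict(B))) / yA.var())
    return np.array(indices)

lo, hi = np.array([0.5, 0.1]), np.array([2.0, 1.0])
for j, emulator in enumerate(emulators):
    print(f"component {j}: first-order Sobol' indices "
          f"{np.round(first_order_sobol(emulator.predict, lo, hi), 2)}")
```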

  1. Influence of running velocity on vertical, leg and joint stiffness : modelling and recommendations for future research.

    Science.gov (United States)

    Brughelli, Matt; Cronin, John

    2008-01-01

    Human running can be modelled as either a spring-mass model or multiple springs in series. A force is required to stretch or compress the spring, and thus stiffness, the variable of interest in this paper, can be calculated as the ratio of this force to the change in spring length. Given the link between force and length change, muscle stiffness and mechanical stiffness have been areas of interest to researchers, clinicians, and strength and conditioning practitioners for many years. This review focuses on mechanical stiffness, and in particular vertical, leg and joint stiffness, since these are the only stiffness types that have been directly calculated during human running. It has been established that as running velocity increases from slow to moderate values, leg stiffness remains constant while both vertical stiffness and joint stiffness increase. However, no studies have calculated vertical, leg or joint stiffness over a range from slow-to-moderate values up to maximum values in an athletic population. Therefore, the effects of faster running velocities on stiffness are relatively unexplored. Furthermore, no experimental research has examined the effects of training on vertical, leg or joint stiffness and the subsequent effects on running performance. Various methods of training (Olympic-style weightlifting, heavy resistance training, plyometrics, eccentric strength training) have been shown to be effective at improving running performance. However, the effects of these training methods on vertical, leg and joint stiffness are unknown. As a result, the true importance of stiffness to running performance remains unexplored, and the best practice for changing stiffness to optimize running performance is speculative at best. It is our hope that a better understanding of stiffness, and the influence of running speed on stiffness, will lead to greater interest and an increase in experimental research in this area.
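
    The stiffness measures discussed above are usually defined as simple ratios; in standard notation (these are the conventional definitions, not equations quoted from the review):

```latex
\[
  k_{\mathrm{vert}} = \frac{F_{\max}}{\Delta y}, \qquad
  k_{\mathrm{leg}}  = \frac{F_{\max}}{\Delta L}, \qquad
  k_{\mathrm{joint}} = \frac{\Delta M}{\Delta \theta},
\]
```

    where $F_{\max}$ is the peak vertical ground reaction force, $\Delta y$ the vertical displacement of the centre of mass, $\Delta L$ the compression of the stance leg, $\Delta M$ the change in joint moment and $\Delta\theta$ the change in joint angle.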

  2. Operation and Performance of the ATLAS Level-1 Calorimeter and Topological Triggers in Run 2

    CERN Document Server

    Weber, Sebastian Mario; The ATLAS collaboration

    2017-01-01

    In Run 2 at CERN's Large Hadron Collider, the ATLAS detector uses a two-level trigger system to reduce the event rate from the nominal collision rate of 40 MHz to the event storage rate of 1 kHz, while preserving interesting physics events. The first step of the trigger system, Level-1, reduces the event rate to 100 kHz within a latency of less than $2.5$ $\\mu\\text{s}$. One component of this system is the Level-1 Calorimeter Trigger (L1Calo), which uses coarse-granularity information from the electromagnetic and hadronic calorimeters to identify regions of interest corresponding to electrons, photons, taus, jets, and large amounts of transverse energy and missing transverse energy. In these proceedings, we discuss improved features and performance of the L1Calo system in the challenging, high-luminosity conditions provided by the LHC in Run 2. A new dynamic pedestal correction algorithm reduces pile-up effects and the use of variable thresholds and isolation criteria for electromagnetic objects allows for opt...

  3. RHIC polarized proton-proton operation at 100 GeV in Run 15

    International Nuclear Information System (INIS)

    Schoefer, V.; Aschenauer, E. C.; Atoian, G.; Blaskiewicz, M.; Brown, K. A.; Bruno, D.; Connolly, R.; D Ottavio, T.; Drees, K. A.; Dutheil, Y.; Fischer, W.; Gardner, C.; Gu, X.; Hayes, T.; Huang, H.; Laster, J.; Liu, C.; Luo, Y.; Makdisi, Y.; Marr, G.; Marusic, A.; Meot, F.; Mernick, K.; Michnoff, R.; Marusic, A.; Minty, M.; Montag, C.; Morris, J.; Narayan, G.; Nemesure, S.; Pile, P.; Poblaguev, A.; Ranjbar, V.; Robert-Demolaize, G.; Roser, T.; Schmidke, W. B.; Severino, F.; Shrey, T.; Smith, K.; Steski, D.; Tepikian, S.; Trbojevic, D.; Tsoupas, N.; Tuozzolo, J.; Wang, G.; White, S.; Yip, K.; Zaltsman, A.; Zelenski, A.; Zeno, K.; Zhang, S. Y.

    2015-01-01

    The first part of RHIC Run 15 consisted of ten weeks of polarized proton on proton collisions at a beam energy of 100 GeV at two interaction points. In this paper we discuss several of the upgrades to the collider complex that allowed for improved performance. The largest effort consisted of the commissioning of the electron lenses, one in each ring, which are designed to compensate for one of the two beam-beam interactions experienced by the proton bunches. The e-lenses raise the per-bunch intensity at which luminosity becomes beam-beam limited. A new lattice was designed to create the phase advances necessary for beam-beam compensation with the e-lens, which also has an improved off-momentum dynamic aperture relative to previous runs. In order to take advantage of the new, higher intensity limit without suffering intensity-driven emittance deterioration, other features were commissioned, including a continuous transverse bunch-by-bunch damper in RHIC and a double-harmonic RF capture scheme in the Booster. Other high-intensity protections include improvements to the abort system and the installation of masks to intercept beam lost due to abort kicker pre-fires.

  4. Characterisation of the responsive properties of two running-specific prosthetic models.

    Science.gov (United States)

    Grobler, Lara; Ferreira, Suzanne; Vanwanseele, Benedicte; Terblanche, Elmarie E

    2017-04-01

    The need for information regarding running-specific prosthetic properties has previously been voiced. Such information is necessary to assist in athletes' prostheses selection. This study aimed to describe the characteristics of two commercially available running-specific prostheses. The running-specific prostheses were tested (in an experimental setup) without the external interference of athlete performance variations. Four stiffness categories of each running-specific prosthetic model (Xtend™ and Xtreme™) were tested at seven alignment setups and three drop masses (28, 38 and 48 kg). Results for peak ground reaction force (GRFpeak), contact time (tc), flight time (tf), reactive strength index (RSI) and maximal compression (ΔL) were determined during controlled dropping of running-specific prostheses onto a force platform with different masses attached to the experimental setup. No statistically significant differences were found between the different setups of the running-specific prostheses. Statistically significant differences were found between the two models for all outcome variables (GRFpeak, Xtend > Xtreme; tc, Xtreme > Xtend; tf, Xtreme > Xtend; RSI, Xtend > Xtreme; ΔL, Xtreme > Xtend), and these differences are relevant to prosthetic choice. Physiologically and metabolically, a short sprint event (i.e. 100 m) places different demands on the athlete than a long sprint event (i.e. 400 m), and the RSP should match these performance demands.

  5. Making Deformable Template Models Operational

    DEFF Research Database (Denmark)

    Fisker, Rune

    2000-01-01

    Deformable template models are a very popular and powerful tool within the field of image processing and computer vision. This thesis treats this type of model extensively, with special focus on handling their common difficulties, i.e. model parameter selection, initialization and optimization. A proper handling of the common difficulties is essential for making the models operational by a non-expert user, which is a requirement for intensifying and commercializing the use of deformable template models. One contribution is a method for estimation of the model parameters, which applies a combination of a maximum likelihood and minimum distance criterion. Another contribution is a very fast search-based initialization algorithm using a filter interpretation of the likelihood model. These two methods can be applied to most deformable template models. The thesis is organized as a collection of the most important articles, which has been...

  6. Introducing MOZLEAP: an integrated long-run scenario model of the emerging energy sector of Mozambique

    NARCIS (Netherlands)

    Mahumane, G; Mulder, P.

    2016-01-01

    Mozambique has recently begun actively developing its large reserves of coal, natural gas and hydropower. Against this background, we present the first integrated long-run scenario model of the Mozambican energy sector. Our model, which we name MOZLEAP, is calibrated on the basis of recently

  7. Higher-order effects in asset-pricing models with long-run risks

    NARCIS (Netherlands)

    Pohl, W.; Schmedders, K.; Wilms, Ole

    This paper shows that the latest generation of asset pricing models with long-run risk exhibits economically significant nonlinearities, and thus the ubiquitous Campbell--Shiller log-linearization can generate large numerical errors. These errors in turn translate to considerable errors in the model
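
    For reference, the Campbell-Shiller approximation at issue is the standard log-linearization of returns around the mean log price-dividend ratio (standard form, not copied from the paper):

```latex
\[
  r_{t+1} \approx \kappa_0 + \kappa_1 z_{t+1} - z_t + \Delta d_{t+1},
  \qquad
  \kappa_1 = \frac{e^{\bar z}}{1 + e^{\bar z}},
\]
```

    where $z_t$ is the log price-dividend ratio and $\Delta d_{t+1}$ is log dividend growth; the nonlinearities reported above are the errors introduced by using this first-order expansion in long-run-risk models.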

  8. The Effect of the Accelerometer Operating Range on Biomechanical Parameters: Stride Length, Velocity, and Peak Tibial Acceleration during Running

    Directory of Open Access Journals (Sweden)

    Christian Mitschke

    2018-01-01

    Previous studies have used accelerometers with various operating ranges (ORs) when measuring biomechanical parameters. However, it is still unclear whether ORs influence the accuracy of running parameters, and whether the different stiffnesses of footwear midsoles influence this accuracy. The purpose of the present study was to systematically investigate the influence of OR on the accuracy of stride length, running velocity, and on peak tibial acceleration. Twenty-one recreational heel-strike runners ran on a 15-m indoor track at self-selected running speeds in three footwear conditions (low to high midsole stiffness). Runners were equipped with an inertial measurement unit (IMU) affixed to the heel cup of the right shoe and with a uniaxial accelerometer at the right tibia. Accelerometers (at the tibia and included in the IMU) with a high OR of ±70 g were used as the reference, and the data were cut at ±32, ±16, and at ±8 g in post-processing, before calculating parameters. The results show that the OR influenced the outcomes of all investigated parameters, which were not influenced by the tested footwear conditions. The lower ORs were associated with an underestimation error for all biomechanical parameters, which increased noticeably with a decreasing OR. It can be concluded that accelerometers with a minimum OR of ±32 g should be used to avoid inaccurate measurements.
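
    The effect of a limited operating range can be mimicked in a few lines of code: a synthetic impact peak is clipped at each OR and the measured peak is compared with the true one. The signal shape and the 45 g amplitude below are invented for illustration, not taken from the study.

```python
import numpy as np

fs = 1000.0                                   # sampling rate in Hz
t = np.arange(0.0, 0.1, 1.0 / fs)
true_signal = 45.0 * np.exp(-((t - 0.05) / 0.004) ** 2)   # impact-like peak

for operating_range in (70.0, 32.0, 16.0, 8.0):           # OR in g
    clipped = np.clip(true_signal, -operating_range, operating_range)
    print(f"OR ±{operating_range:4.0f} g -> measured peak {clipped.max():5.1f} g"
          f" (true peak {true_signal.max():.1f} g)")
```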

  9. Usefulness of running wheel for detection of congestive heart failure in dilated cardiomyopathy mouse model.

    Directory of Open Access Journals (Sweden)

    Masami Sugihara

    BACKGROUND: Inherited dilated cardiomyopathy (DCM) is a progressive disease that often results in death from congestive heart failure (CHF) or sudden cardiac death (SCD). Mouse models with human DCM mutations are useful to investigate the developmental mechanisms of CHF and SCD, but knowledge of the severity of CHF in live mice is necessary. We aimed to diagnose CHF in live DCM model mice by measuring voluntary exercise using a running wheel and to determine causes of death in these mice. METHODOLOGY/PRINCIPAL FINDINGS: A knock-in mouse with a mutation in cardiac troponin T (ΔK210) (DCM mouse), which results in frequent death with a t1/2 of 70 to 90 days, was used as a DCM model. Until 2 months of age, average wheel-running activity was similar between wild-type and DCM mice (approximately 7 km/day). At approximately 3 months, some DCM mice demonstrated low running activity (LO: <5 km/day). In the LO group, the lung weight/body weight ratio was much higher than that in the other groups, and the lungs were infiltrated with hemosiderin-loaded alveolar macrophages. Furthermore, echocardiography showed more severe ventricular dilation and a lower ejection fraction, whereas electrocardiography (ECG) revealed QRS widening. There were two patterns in the time courses of running activity before death in DCM mice: deaths with maintained activity and deaths with decreased activity. CONCLUSIONS/SIGNIFICANCE: Our results indicate that DCM mice with low running activity developed severe CHF and that running wheels are useful for detection of CHF in mouse models. We found that approximately half of ΔK210 DCM mice die suddenly before onset of CHF, whereas others develop CHF, deteriorate within 10 to 20 days, and die.

  10. Operational Street Pollution Model (OSPM)

    DEFF Research Database (Denmark)

    Kakosimos, k.E.; Hertel, Ole; Ketzel, Matthias

    2010-01-01

    Environmental context: Trafficked streets are air pollution hot spots where people experience high exposure to hazardous pollutants. Although monitoring networks provide crucial information about measured pollutant levels, the measurements are resource demanding and thus can be performed at only a few selected sites. Fast and easily applied street pollution models are therefore necessary tools to provide information about the loadings in streets without measurement activities. We evaluate the Operational Street Pollution Model, one of the most commonly applied models in air pollution management and research worldwide. Abstract: Traffic emissions constitute a major source of health-hazardous air pollution in urban areas. Models describing pollutant levels in urban streets are thus important tools in air pollution management as a supplement to measurements in routine monitoring programmes. A widely used...

  11. ASCHFLOW - A dynamic landslide run-out model for medium scale hazard analysis

    Czech Academy of Sciences Publication Activity Database

    Quan Luna, B.; Blahůt, Jan; van Asch, T.W.J.; van Westen, C.J.; Kappes, M.

    2016-01-01

    Vol. 3, 12 December (2016), article no. 29. E-ISSN 2197-8670. Institutional support: RVO:67985891. Keywords: landslides * run-out models * medium scale hazard analysis * quantitative risk assessment. Subject RIV: DE - Earth Magnetism, Geodesy, Geography

  12. Operation and Performance of a new microTCA-based CMS Calorimeter Trigger in LHC Run 2

    CERN Document Server

    Klabbers, Pamela Renee

    2016-01-01

    The Large Hadron Collider (LHC) at CERN is currently increasing the instantaneous luminosity for p-p collisions. In LHC Run 2, the center-of-mass energy has gone from 8 to 13 TeV and the instantaneous luminosity will approximately double for proton collisions. This will make it even more challenging to trigger on interesting events since the number of interactions per crossing (pileup) and the overall trigger rate will be significantly larger than in LHC Run 1. The Compact Muon Solenoid (CMS) experiment has installed the second stage of a two-stage upgrade to the Calorimeter Trigger to ensure that the trigger rates can be controlled and the thresholds kept low, so that physics data will not be compromised. The stage-1, which replaced the original CMS Global Calorimeter Trigger, operated successfully in 2015. The completely new stage-2 has replaced the entire calorimeter trigger in 2016 with AMC form-factor boards and optical links operating in a microTCA chassis. It required that updates to the calorimet...

  13. Running a distributed virtual observatory: U.S. Virtual Astronomical Observatory operations

    Science.gov (United States)

    McGlynn, Thomas A.; Hanisch, Robert J.; Berriman, G. Bruce; Thakar, Aniruddha R.

    2012-09-01

    Operation of the US Virtual Astronomical Observatory shares some issues with modern physical observatories, e.g., intimidating data volumes and rapid technological change, and must also address unique concerns like the lack of direct control of the underlying and scattered data resources, and the distributed nature of the observatory itself. In this paper we discuss how the VAO has addressed these challenges to provide the astronomical community with a coherent set of science-enabling tools and services. The distributed nature of our virtual observatory - with data and personnel spanning geographic, institutional and regime boundaries - is simultaneously a major operational headache and the primary science motivation for the VAO. Most astronomy today uses data from many resources. Facilitation of matching heterogeneous datasets is a fundamental reason for the virtual observatory. Key aspects of our approach include continuous monitoring and validation of VAO and VO services and the datasets provided by the community, monitoring of user requests to optimize access, caching for large datasets, and providing distributed storage services that allow users to collect results near large data repositories. Some elements are now fully implemented, while others are planned for subsequent years. The distributed nature of the VAO requires careful attention to what can be a straightforward operation at a conventional observatory, e.g., the organization of the web site or the collection and combined analysis of logs. Many of these strategies use and extend protocols developed by the international virtual observatory community. Our long-term challenge is working with the underlying data providers to ensure high quality implementation of VO data access protocols (new and better 'telescopes'), assisting astronomical developers to build robust integrating tools (new 'instruments'), and coordinating with the research community to maximize the science enabled.

  14. Lunar Landing Operational Risk Model

    Science.gov (United States)

    Mattenberger, Chris; Putney, Blake; Rust, Randy; Derkowski, Brian

    2010-01-01

    Characterizing the risk of spacecraft goes beyond simply modeling equipment reliability. Some portions of the mission require complex interactions between system elements that can lead to failure without an actual hardware fault. Landing risk is currently the least characterized aspect of the Altair lunar lander and appears to result from complex temporal interactions between pilot, sensors, surface characteristics and vehicle capabilities rather than hardware failures. The Lunar Landing Operational Risk Model (LLORM) seeks to provide rapid and flexible quantitative insight into the risks driving the landing event and to gauge sensitivities of the vehicle to changes in system configuration and mission operations. The LLORM takes a Monte Carlo based approach to estimate the operational risk of the Lunar Landing Event and calculates estimates of the risk of Loss of Mission (LOM: an abort is required and is successful), Loss of Crew (LOC: the vehicle crashes or cannot reach orbit), and Success. The LLORM is meant to be used during the conceptual design phase to inform decision makers transparently of the reliability impacts of design decisions, to identify areas of the design which may require additional robustness, and to aid in the development and flow-down of requirements.
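
    A Monte Carlo outcome model of this kind can be sketched in a few lines; the event structure and probabilities below are invented placeholders, not LLORM inputs.

```python
import random

def one_landing(rng):
    """One simulated landing; the outcomes and probabilities are invented."""
    if rng.random() < 0.02:             # pilot/sensor/terrain interaction fails
        return "LOM" if rng.random() < 0.8 else "LOC"   # abort succeeds or not
    if rng.random() < 0.005:            # hardware fault during final descent
        return "LOC"
    return "Success"

rng = random.Random(42)
n = 100_000
counts = {"Success": 0, "LOM": 0, "LOC": 0}
for _ in range(n):
    counts[one_landing(rng)] += 1

for outcome, count in counts.items():
    print(f"P({outcome}) ~ {count / n:.4f}")
```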

  15. Randomly curved runs interrupted by tumbling: A model for bacterial motion

    Science.gov (United States)

    Condat, C. A.; Jäckle, J.; Menchón, S. A.

    2005-08-01

    Small bacteria are strongly buffeted by Brownian forces that make completely straight runs impossible. A model for bacterial motion is formulated in which the effects of fluctuational forces and torques on the run phase are taken into account by using coupled Langevin equations. An integrated description of the motion, including runs and tumbles, is then obtained by the use of convolution and Laplace transforms. The properties of the velocity-velocity correlation function, of the mean displacement, and of the two relevant diffusion coefficients are examined in terms of the bacterial sizes and of the magnitude of the propelling forces. For bacteria smaller than E. coli, the integrated diffusion coefficient crosses over from a jump-dominated to a rotational-diffusion-dominated form.
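
    A minimal simulation in the spirit of this description couples a constant propulsion speed to rotational diffusion during runs and random reorientation at tumbles; all parameter values below are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(1)
dt, v, D_rot, tumble_rate = 1e-3, 20.0, 0.5, 1.0   # s, um/s, rad^2/s, 1/s
n_steps = 50_000                                    # 50 s of motion

x = np.zeros(n_steps)
y = np.zeros(n_steps)
theta = 0.0
for i in range(1, n_steps):
    # rotational noise curves the run; occasional tumbles pick a new direction
    theta += np.sqrt(2.0 * D_rot * dt) * rng.standard_normal()
    if rng.random() < tumble_rate * dt:
        theta = rng.uniform(0.0, 2.0 * np.pi)
    x[i] = x[i - 1] + v * np.cos(theta) * dt
    y[i] = y[i - 1] + v * np.sin(theta) * dt

print(f"squared displacement after 50 s: {x[-1]**2 + y[-1]**2:.1f} um^2")
```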

  16. Biomechanical modeling and sensitivity analysis of bipedal running ability. II. Extinct taxa.

    Science.gov (United States)

    Hutchinson, John R

    2004-10-01

    Using an inverse dynamics biomechanical analysis that was previously validated for extant bipeds, I calculated the minimum amount of actively contracting hindlimb extensor muscle that would have been needed for rapid bipedal running in several extinct dinosaur taxa. I analyzed models of nine theropod dinosaurs (including birds) covering over five orders of magnitude in size. My results uphold previous findings that large theropods such as Tyrannosaurus could not run very quickly, whereas smaller theropods (including some extinct birds) were adept runners. Furthermore, my results strengthen the contention that many nonavian theropods, especially larger individuals, used fairly upright limb orientations, which would have reduced required muscular force, and hence muscle mass. Additional sensitivity analysis of muscle fascicle lengths, moment arms, and limb orientation supports these conclusions and points out directions for future research on the musculoskeletal limits on running ability. Although ankle extensor muscle support is shown to have been important for all taxa, the ability of hip extensor muscles to support the body appears to be a crucial limit for running capacity in larger taxa. I discuss what speeds were possible for different theropod dinosaurs, and how running ability evolved in an inverse relationship to body size in archosaurs. 2004 Wiley-Liss, Inc.

  17. Modeling driver stop/run behavior at the onset of a yellow indication considering driver run tendency and roadway surface conditions.

    Science.gov (United States)

    Elhenawy, Mohammed; Jahangiri, Arash; Rakha, Hesham A; El-Shawarby, Ihab

    2015-10-01

    The ability to model driver stop/run behavior at signalized intersections considering the roadway surface condition is critical in the design of advanced driver assistance systems. Such systems can reduce intersection crashes and fatalities by predicting driver stop/run behavior. The research presented in this paper uses data collected from two controlled field experiments on the Smart Road at the Virginia Tech Transportation Institute (VTTI) to model driver stop/run behavior at the onset of a yellow indication for different roadway surface conditions. The paper offers two contributions. First, it introduces a new predictor related to driver aggressiveness and demonstrates that this measure enhances the modeling of driver stop/run behavior. Second, it applies well-known artificial intelligence techniques including adaptive boosting (AdaBoost), random forest, and support vector machine (SVM) algorithms, as well as traditional logistic regression techniques, on the data in order to develop a model that can be used by traffic signal controllers to predict driver stop/run decisions in a connected vehicle environment. The research demonstrates that by adding the proposed driver aggressiveness predictor to the model, there is a statistically significant increase in the model accuracy. Moreover, the false alarm rate is also reduced, although this reduction is not statistically significant. The study demonstrates that, for the subject data, the SVM machine learning algorithm performs the best in terms of optimum classification accuracy and false positive rates. However, the SVM model produces the best performance in terms of the classification accuracy only. Copyright © 2015 Elsevier Ltd. All rights reserved.
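
    The kind of classifier comparison described above can be sketched with scikit-learn; the features here are synthetic stand-ins (for speed, distance and time to the stop line, surface condition and the aggressiveness measure), not the VTTI Smart Road data.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier, RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

# synthetic stand-ins for the predictors; y = 1 means "run", 0 means "stop"
X, y = make_classification(n_samples=2000, n_features=5, n_informative=4,
                           n_redundant=1, random_state=0)

models = {
    "logistic regression": LogisticRegression(max_iter=1000),
    "AdaBoost": AdaBoostClassifier(),
    "random forest": RandomForestClassifier(),
    "SVM": SVC(),
}
for name, model in models.items():
    accuracy = cross_val_score(model, X, y, cv=5, scoring="accuracy").mean()
    print(f"{name:>20s}: mean cross-validated accuracy {accuracy:.3f}")
```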

  18. Statistical Emulation of Climate Model Projections Based on Precomputed GCM Runs*

    KAUST Repository

    Castruccio, Stefano

    2014-03-01

    The authors describe a new approach for emulating the output of a fully coupled climate model under arbitrary forcing scenarios that is based on a small set of precomputed runs from the model. Temperature and precipitation are expressed as simple functions of the past trajectory of atmospheric CO2 concentrations, and a statistical model is fit using a limited set of training runs. The approach is demonstrated to be a useful and computationally efficient alternative to pattern scaling and captures the nonlinear evolution of spatial patterns of climate anomalies inherent in transient climates. The approach does as well as pattern scaling in all circumstances and substantially better in many; it is not computationally demanding; and, once the statistical model is fit, it produces emulated climate output effectively instantaneously. It may therefore find wide application in climate impacts assessments and other policy analyses requiring rapid climate projections.
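
    In the same spirit, a toy emulator can be fitted to a handful of "training runs" and then used to project an unseen forcing scenario; the toy GCM, the scenarios and the single-predictor regression below are placeholders for the much richer statistical model described in the record.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(3)
years = np.arange(2000, 2101)

def smoothed_forcing(co2, window=10):
    """Log-CO2 forcing averaged over the past `window` years (toy lag model)."""
    f = np.log(co2 / 280.0)
    return np.array([f[max(0, i - window + 1): i + 1].mean() for i in range(len(f))])

def toy_gcm(co2):
    """Stand-in for a precomputed GCM run: anomaly responds to lagged forcing."""
    return 3.0 * smoothed_forcing(co2) + 0.1 * rng.standard_normal(len(co2))

train_scenarios = [280.0 * np.exp(r * (years - 2000)) for r in (0.003, 0.006, 0.009)]
X = np.concatenate([smoothed_forcing(c) for c in train_scenarios]).reshape(-1, 1)
y = np.concatenate([toy_gcm(c) for c in train_scenarios])

emulator = LinearRegression().fit(X, y)          # fit once on the training runs

new_co2 = 280.0 * np.exp(0.0075 * (years - 2000))   # unseen scenario
projection = emulator.predict(smoothed_forcing(new_co2).reshape(-1, 1))
print(f"emulated anomaly in 2100: {projection[-1]:.2f} K")
```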

  19. Impact of data assimilation of physical variables on the spring bloom from TOPAZ operational runs in the North Atlantic

    Directory of Open Access Journals (Sweden)

    A. Samuelsen

    2009-12-01

    A reanalysis of the North Atlantic spring bloom in 2007 was produced using the real-time analysis from the TOPAZ North Atlantic and Arctic forecasting system. The TOPAZ system uses a hybrid coordinate general circulation ocean model and assimilates physical observations: sea surface anomalies, sea surface temperatures, and sea-ice concentrations using the Ensemble Kalman Filter. This ocean model was coupled to an ecosystem model, NORWECOM (Norwegian Ecological Model System), and the TOPAZ-NORWECOM coupled model was run throughout the spring and summer of 2007. The ecosystem model was run online, restarting from analyzed physical fields (the result after data assimilation) every 7 days. Biological variables were not assimilated in the model. The main purpose of the study was to investigate the impact of physical data assimilation on the ecosystem model. This was determined by comparing the results to those from a model without assimilation of physical data. The regions of focus are the North Atlantic and the Arctic Ocean. Assimilation of physical variables does not affect the results from the ecosystem model significantly. The differences between the weekly mean values of chlorophyll are normally within 5–10% during the summer months, and the maximum difference of ~20% occurs in the Arctic, also during summer. Special attention was paid to the nutrient input from the North Atlantic to the Nordic Seas and the impact of ice-assimilation on the ecosystem. The ice-assimilation increased the phytoplankton concentration: because there was less ice in the assimilation run, this increased both the mixing of nutrients during winter and the area where production could occur during summer. The forecast was also compared to remotely sensed chlorophyll, climatological nutrients, and in-situ data. The results show that the model reproduces a realistic annual cycle, but the chlorophyll concentrations tend to be between 0.1 and 1.0 mg chla/m3 too

  20. Recent updates in the aerosol component of the C-IFS model run by ECMWF

    Science.gov (United States)

    Remy, Samuel; Boucher, Olivier; Hauglustaine, Didier; Kipling, Zak; Flemming, Johannes

    2017-04-01

    The Composition-Integrated Forecast System (C-IFS) is a global atmospheric composition forecasting tool, run by ECMWF within the framework of the Copernicus Atmospheric Monitoring Service (CAMS). The aerosol model of C-IFS is a simple bulk scheme that forecasts 5 species: dust, sea-salt, black carbon, organic matter and sulfate. Three bins represent the dust and sea-salt, for the super-coarse, coarse and fine mode of these species (Morcrette et al., 2009). This talk will present recent updates of the aerosol model, and also introduce forthcoming developments. It will also present the impact of these changes as measured by scores against AERONET Aerosol Optical Depth (AOD) and Airbase PM10 observations. The next cycle of C-IFS will include a mass fixer, because the semi-Lagrangian advection scheme used in C-IFS is not mass-conservative. C-IFS now offers the possibility to emit biomass-burning aerosols at an injection height that is provided by a new version of the Global Fire Assimilation System (GFAS). Secondary Organic Aerosol (SOA) production will be scaled on non-biomass-burning CO fluxes. This approach allows the anthropogenic contribution to SOA production to be represented; it brought a notable improvement in the skill of the model, especially over Europe. Lastly, the emissions of SO2 are now provided by the MACCity inventory instead of an older version of the EDGAR dataset. The seasonal and yearly variability of SO2 emissions is better captured by the MACCity dataset. Upcoming developments of the aerosol model of C-IFS consist mainly of the implementation of a nitrate and ammonium module, with 2 bins (fine and coarse) for nitrate. Nitrate and ammonium sulfate particle formation from gaseous precursors is represented following Hauglustaine et al. (2014); formation of coarse nitrate over pre-existing sea-salt or dust particles is also represented. This extension of the forward model improved scores over heavily populated areas such as Europe, China and Eastern

  1. Running the running

    OpenAIRE

    Cabass, Giovanni; Di Valentino, Eleonora; Melchiorri, Alessandro; Pajer, Enrico; Silk, Joseph

    2016-01-01

    We use the recent observations of Cosmic Microwave Background temperature and polarization anisotropies provided by the Planck satellite experiment to place constraints on the running $\alpha_\mathrm{s} = \mathrm{d}n_{\mathrm{s}} / \mathrm{d}\log k$ and the running of the running $\beta_{\mathrm{s}} = \mathrm{d}\alpha_{\mathrm{s}} / \mathrm{d}\log k$ of the spectral index $n_{\mathrm{s}}$ of primordial scalar fluctuations. We find $\alpha_\mathrm{s}=0.011\pm0.010$ and $\beta_\mathrm{s}=0.027...$
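
    The parameters constrained here enter the primordial power spectrum through the standard expansion (conventions may differ slightly from the paper):

```latex
\[
  \ln \mathcal{P}_{\zeta}(k) = \ln A_{\mathrm{s}}
  + (n_{\mathrm{s}} - 1)\,\ln\frac{k}{k_*}
  + \frac{\alpha_{\mathrm{s}}}{2}\,\ln^2\frac{k}{k_*}
  + \frac{\beta_{\mathrm{s}}}{6}\,\ln^3\frac{k}{k_*},
\]
```

    so that $\alpha_{\mathrm{s}}$ and $\beta_{\mathrm{s}}$ are the second and third logarithmic derivatives of $\ln \mathcal{P}_{\zeta}$ evaluated at the pivot scale $k_*$.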

  2. Nonhydrostatic and surfbeat model predictions of extreme wave run-up in fringing reef environments

    Science.gov (United States)

    Lashley, Christopher H.; Roelvink, Dano; van Dongeren, Ap R.; Buckley, Mark L.; Lowe, Ryan J.

    2018-01-01

    The accurate prediction of extreme wave run-up is important for effective coastal engineering design and coastal hazard management. While run-up processes on open sandy coasts have been reasonably well-studied, very few studies have focused on understanding and predicting wave run-up at coral reef-fronted coastlines. This paper applies the short-wave resolving, Nonhydrostatic (XB-NH) and short-wave averaged, Surfbeat (XB-SB) modes of the XBeach numerical model to validate run-up using data from two 1D (alongshore uniform) fringing-reef profiles without roughness elements, with two objectives: i) to provide insight into the physical processes governing run-up in such environments; and ii) to evaluate the performance of both modes in accurately predicting run-up over a wide range of conditions. XBeach was calibrated by optimizing the maximum wave steepness parameter (maxbrsteep) in XB-NH and the dissipation coefficient (alpha) in XB-SB using the first dataset, and then applied to the second dataset for validation. XB-NH and XB-SB predictions of extreme wave run-up (Rmax and R2%) and its components, infragravity- and sea-swell band swash (SIG and SSS) and shoreline setup, were compared to observations. XB-NH more accurately simulated wave transformation but under-predicted shoreline setup due to its exclusion of parameterized wave-roller dynamics. XB-SB under-predicted sea-swell band swash but overestimated shoreline setup due to an over-prediction of wave heights on the reef flat. Run-up (swash) spectra were dominated by infragravity motions, allowing the short-wave (but not wave group) averaged model (XB-SB) to perform comparably well to its more complete, short-wave resolving (XB-NH) counterpart. Despite their respective limitations, both modes were able to accurately predict Rmax and R2%.

  3. NASA SPoRT Initialization Datasets for Local Model Runs in the Environmental Modeling System

    Science.gov (United States)

    Case, Jonathan L.; LaFontaine, Frank J.; Molthan, Andrew L.; Carcione, Brian; Wood, Lance; Maloney, Joseph; Estupinan, Jeral; Medlin, Jeffrey M.; Blottman, Peter; Rozumalski, Robert A.

    2011-01-01

    The NASA Short-term Prediction Research and Transition (SPoRT) Center has developed several products for its National Weather Service (NWS) partners that can be used to initialize local model runs within the Weather Research and Forecasting (WRF) Environmental Modeling System (EMS). These real-time datasets consist of surface-based information updated at least once per day, and produced in a composite or gridded product that is easily incorporated into the WRF EMS. The primary goal for making these NASA datasets available to the WRF EMS community is to provide timely and high-quality information at a spatial resolution comparable to that used in the local model configurations (i.e., convection-allowing scales). The current suite of SPoRT products supported in the WRF EMS includes a Sea Surface Temperature (SST) composite, a Great Lakes sea-ice extent, a Greenness Vegetation Fraction (GVF) composite, and Land Information System (LIS) gridded output. The SPoRT SST composite is a blend of primarily the Moderate Resolution Imaging Spectroradiometer (MODIS) infrared and Advanced Microwave Scanning Radiometer for Earth Observing System data for non-precipitation coverage over the oceans at 2-km resolution. The composite includes a special lake surface temperature analysis over the Great Lakes using contributions from the Remote Sensing Systems temperature data. The Great Lakes Environmental Research Laboratory Ice Percentage product is used to create a sea-ice mask in the SPoRT SST composite. The sea-ice mask is produced daily (in-season) at 1.8-km resolution and identifies ice percentage from 0 to 100% in 10% increments, with values above 90% flagged as ice.

  4. The ATLAS Run-2 Trigger: Design, Menu, Performance and Operational Aspects

    CERN Document Server

    Martin, Tim; The ATLAS collaboration

    2016-01-01

    The LHC, at design capacity, has a bunch-crossing rate of 40 MHz whereas the ATLAS experiment at the LHC has an average recording rate of about 1000 Hz. To reduce the rate of events but still maintain a high efficiency of selecting rare events such as physics signals beyond the Standard Model, a two-level trigger system is used in ATLAS. Events are selected based on physics signatures such as presence of energetic leptons, photons, jets or large missing energy. Despite the limited time available for processing collision events, the trigger system is able to exploit topological information, as well as using multi-variate methods. In total, the ATLAS trigger system consists of thousands of different individual triggers. The ATLAS trigger menu specifies which triggers are used during data taking and how much rate a given trigger is allocated. This menu reflects not only the physics goals of the collaboration but also takes the instantaneous luminosity of the LHC, the design limits of the ATLAS detector and the o...

  5. The ATLAS Run-2 Trigger: Design, Menu, Performance and Operational Aspects

    CERN Document Server

    AUTHOR|(INSPIRE)INSPIRE-00219584; The ATLAS collaboration

    2016-01-01

    The LHC, at design capacity, has a bunch-crossing rate of 40 MHz whereas the ATLAS experiment has an average recording rate of about 1000 Hz. To reduce the rate of events but still maintain high efficiency of selecting rare events such as physics signals beyond the Standard Model, a two-level trigger system is used in ATLAS. Events are selected based on physics signatures such as presence of energetic leptons, photons, jets or large missing energy. Despite the limited time available for processing collision events, the trigger system is able to exploit topological information, as well as using multi-variate methods. In total, the ATLAS trigger system consists of thousands of different individual triggers. The ATLAS trigger menu specifies which triggers are used during data taking and how much rate a given trigger is allocated. This menu reflects not only the physics goals of the collaboration but also takes into consideration the instantaneous luminosity of the LHC and the design limits of the ATLAS detecto...

  6. The ATLAS Run-2 Trigger Menu for higher luminosities: Design, Performance and Operational Aspects

    CERN Document Server

    Montejo Berlingen, Javier; The ATLAS collaboration

    2017-01-01

    The LHC, at design capacity, has a bunch-crossing rate of 40 MHz whereas the ATLAS experiment has an average recording rate of about 1 kHz. To reduce the rate of events, but maintain high selection efficiency for rare events such as physics signals beyond the Standard Model, a two-level trigger system is used. Events are selected based on physics signatures such as presence of energetic leptons, photons, jets or large missing energy. Despite the limited time available for processing collision events, the trigger system is able to exploit topological information, as well as using multi-variate methods. In total, the ATLAS trigger system consists of thousands of different individual triggers. The ATLAS trigger menu specifies which triggers are used during data taking and how much rate a given trigger is allocated. This menu reflects not only the physics goals of the collaboration but also takes into consideration the instantaneous luminosity of the LHC and the design limits of the ATLAS detector and offline pro...

  7. The long-run forecasting of energy prices using the model of shifting trend

    International Nuclear Information System (INIS)

    Radchenko, Stanislav

    2005-01-01

    Developing models for accurate long-term energy price forecasting is an important problem because these forecasts should be useful in determining both supply and demand of energy. On the supply side, long-term forecasts determine investment decisions of energy-related companies. On the demand side, investments in physical capital and durable goods depend on price forecasts of a particular energy type. Forecasting long-run trend movements in energy prices is very important on the macroeconomic level for several developing countries because energy prices have large impacts on their real output, the balance of payments, fiscal policy, etc. Pindyck (1999) argues that the dynamics of real energy prices is mean-reverting to trend lines with slopes and levels that are shifting unpredictably over time. The hypothesis of shifting long-term trend lines was statistically tested by Benard et al. (2004). The authors find statistically significant instabilities for coal and natural gas prices. I continue the research of energy prices in the framework of continuously shifting levels and slopes of trend lines started by Pindyck (1999). The examined model offers both a parsimonious approach and a perspective on the developments in energy markets. Using the model of depletable resource production, Pindyck (1999) argued that the forecast of energy prices in the model is based on the long-run total marginal cost. Because the model of a shifting trend is based on competitive behavior, one may examine deviations of oil producers from competitive behavior by studying the difference between actual prices and long-term forecasts. To construct the long-run forecasts (10-year-ahead and 15-year-ahead) of energy prices, I modify the univariate shifting trends model of Pindyck (1999). I relax some assumptions on model parameters, the assumption of white noise error term, and propose a new Bayesian approach utilizing a Gibbs sampling algorithm to estimate the model with autocorrelation. To

  8. Operational models for forecasting Dst

    Science.gov (United States)

    Watanabe, S.; Sagawa, E.; Ohtaka, K.; Shimazu, H.

    We have constructed operational models for forecasting the geomagnetic storm index (Dst) two hours in advance from six parameters: the velocity and density of the solar wind, the magnitude of the interplanetary magnetic field (IMF), and the x, y, and z components of the IMF. Our models use an Elman-type neural network, and we forecast space weather by using real-time solar-wind data from the Advanced Composition Explorer spacecraft. The models have worked well since April of 1998 and the Dst values forecast using them have been made available to the public at http://www.crl.go.jp/uk/uk223/service/nnw/index.html. From February to October 1998 there were 11 storms with minimum Dst values below -80 nT, and for ten the difference between the forecast minimum Dst and the Dst calculated from data measured by ground stations was less than 23%. For the storm starting on 19 October, however, the difference was 40% because of the weak correlation between the ACE environment and the earth's environment during this event. The Dst depends on the orientation of the IMF relative to the solar magnetospheric x-y plane and seems to be relatively large when the y component of the IMF is positive and perhaps also when the x component is positive.
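
    An Elman-type network of the sort described here can be sketched with PyTorch, whose torch.nn.RNN implements the Elman recurrence; the input and hidden sizes, sequence length and random data below are illustrative only, not the operational CRL model.

```python
import torch
import torch.nn as nn

class DstForecaster(nn.Module):
    """Map a sequence of six solar-wind/IMF inputs to one Dst value."""
    def __init__(self, n_inputs=6, n_hidden=16):
        super().__init__()
        self.rnn = nn.RNN(n_inputs, n_hidden, batch_first=True)  # Elman cell
        self.head = nn.Linear(n_hidden, 1)

    def forward(self, x):                  # x: (batch, time, 6)
        out, _ = self.rnn(x)
        return self.head(out[:, -1, :])    # forecast from the last hidden state

model = DstForecaster()
solar_wind = torch.randn(8, 24, 6)         # 8 sequences of 24 hourly samples
print(model(solar_wind).shape)             # torch.Size([8, 1])
```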

  9. Constraints on running vacuum model with H(z) and fσ₈

    Energy Technology Data Exchange (ETDEWEB)

    Geng, Chao-Qiang [Chongqing University of Posts and Telecommunications, Chongqing, 400065 (China); Lee, Chung-Chi [DAMTP, Centre for Mathematical Sciences, University of Cambridge, Wilberforce Road, Cambridge CB3 0WA (United Kingdom); Yin, Lu, E-mail: geng@phys.nthu.edu.tw, E-mail: lee.chungchi16@gmail.com, E-mail: yinlumail@foxmail.com [Department of Physics, National Tsing Hua University, Hsinchu, 300 Taiwan (China)

    2017-08-01

    We examine the running vacuum model with Λ(H) = 3νH² + Λ₀, where ν is the model parameter and Λ₀ is the cosmological constant. From the data of the cosmic microwave background radiation, weak lensing and baryon acoustic oscillation along with the time dependent Hubble parameter H(z) and weighted linear growth f(z)σ₈(z) measurements, we find that ν = (1.37 +0.72/−0.95) × 10⁻⁴ with the best fitted χ² value slightly smaller than that in the ΛCDM model.

  10. RUN COORDINATION

    CERN Multimedia

    Christophe Delaere

    2013-01-01

    The focus of Run Coordination during LS1 is to monitor closely the advance of maintenance and upgrade activities, to smooth interactions between subsystems and to ensure that all are ready in time to resume operations in 2015 with a fully calibrated and understood detector. After electricity and cooling were restored to all equipment, at about the time of the last CMS week, recommissioning activities were resumed for all subsystems. On 7 October, DCS shifts began 24/7 to allow subsystems to remain on to facilitate operations. That culminated with the Global Run in November (GriN), which took place as scheduled during the week of 4 November. The GriN has been the first centrally managed operation since the beginning of LS1, and involved all subdetectors but the Pixel Tracker, presently in a lab upstairs. All nights were therefore dedicated to long stable runs with as many subdetectors as possible. Among the many achievements in that week, three items may be highlighted. First, the Strip...

  11. RUN COORDINATION

    CERN Multimedia

    C. Delaere

    2013-01-01

    Since the LHC ceased operations in February, a lot has been going on at Point 5, and Run Coordination continues to monitor closely the advance of maintenance and upgrade activities. In the last months, the Pixel detector was extracted and is now stored in the pixel lab in SX5; the beam pipe has been removed and ME1/1 removal has started. We regained access to the vactank and some work on the RBX of HB has started. Since mid-June, electricity and cooling are back in S1 and S2, allowing us to turn equipment back on, at least during the day. 24/7 shifts are not foreseen in the next weeks, and safety tours are mandatory to keep equipment on overnight, but re-commissioning activities are slowly being resumed. Given the (slight) delays accumulated in LS1, it was decided to merge the two global runs initially foreseen into a single exercise during the week of 4 November 2013. The aim of the global run is to check that we can run (parts of) CMS after several months switched off, with the new VME PCs installed, th...

  12. Exploiting CMS data popularity to model the evolution of data management for Run-2 and beyond

    CERN Document Server

    Bonacorsi, D; Giordano, D; Girone, M; Neri, M; Magini, N; Kuznetsov, V; Wildish, T

    2015-01-01

    During the LHC Run-1 data taking, all experiments collected large data volumes from proton-proton and heavy-ion collisions. The collisions data, together with massive volumes of simulated data, were replicated in multiple copies, transferred among various Tier levels, transformed/slimmed in format/content. These data were then accessed (both locally and remotely) by large groups of distributed analysis communities exploiting the WorldWide LHC Computing Grid infrastructure and services. While efficient data placement strategies - together with optimal data redistribution and deletions on demand - have become the core of static versus dynamic data management projects, little effort has so far been invested in understanding the detailed data-access patterns which surfaced in Run-1. These patterns, if understood, can be used as input to simulation of computing models at the LHC, to optimise existing systems by tuning their behaviour, and to explore next-generation CPU/storage/network co-scheduling solutions. This...

  13. A High-Speed Train Operation Plan Inspection Simulation Model

    Directory of Open Access Journals (Sweden)

    Yang Rui

    2018-01-01

    We developed a train operation simulation tool to inspect a train operation plan. Applying an improved Petri net, the train was regarded as a token and the line sections and stations as places, in accordance with high-speed train operation characteristics and network function. Location changes and the transfer of running information for the high-speed train were realized by customizing a variety of transitions. The model was built on the concept of component combination, considering random disturbances in the process of train running. The simulation framework can be generated quickly and the system operation completed according to different test requirements and the required network data. We tested the simulation tool on the real-world Wuhan to Guangzhou high-speed line. The results showed that the proposed model is practicable, that the simulation results basically coincide with objective reality, and that the tool can not only test the feasibility of a high-speed train operation plan but also serve as a support model for developing a simulation platform with more capabilities.
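
    The token/place/transition idea can be sketched directly; the net below (one train, one line section between two stations) is a toy illustration, not the Wuhan to Guangzhou model.

```python
# Minimal Petri net: the train is a token, stations and line sections are
# places, and transitions fire only when their input places hold tokens.
class PetriNet:
    def __init__(self, places, transitions):
        self.marking = dict(places)          # place -> number of tokens
        self.transitions = transitions       # name -> (input places, output places)

    def enabled(self, name):
        inputs, _ = self.transitions[name]
        return all(self.marking[p] >= 1 for p in inputs)

    def fire(self, name):
        inputs, outputs = self.transitions[name]
        if not self.enabled(name):
            raise RuntimeError(f"transition {name} is not enabled")
        for p in inputs:
            self.marking[p] -= 1
        for p in outputs:
            self.marking[p] += 1

net = PetriNet(
    places={"station_A": 1, "section_AB": 0, "station_B": 0, "AB_free": 1},
    transitions={
        "depart_A": (["station_A", "AB_free"], ["section_AB"]),
        "arrive_B": (["section_AB"], ["station_B", "AB_free"]),
    },
)
net.fire("depart_A")
net.fire("arrive_B")
print(net.marking)   # the token (the train) has moved to station_B
```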

  14. Effect of sucrose availability on wheel-running as an operant and as a reinforcing consequence on a multiple schedule: Additive effects of extrinsic and automatic reinforcement.

    Science.gov (United States)

    Belke, Terry W; Pierce, W David

    2015-07-01

    As a follow-up to Belke and Pierce's (2014) study, we assessed the effects of repeated presentation and removal of sucrose solution on the behavior of rats responding on a two-component multiple schedule. Rats completed 15 wheel turns (FR 15) for either 15% or 0% sucrose solution in the manipulated component and lever pressed 10 times on average (VR 10) for an opportunity to complete 15 wheel turns (FR 15) in the other component. In contrast to our earlier study, the components advanced based on time (every 8 min) rather than completed responses. Results showed that in the manipulated component wheel-running rates were higher and the latency to initiate running longer when sucrose was present (15%) compared to absent (0% or water); the number of obtained outcomes (sucrose/water), however, did not differ with the presentation and withdrawal of sucrose. For the wheel-running as reinforcement component, rates of wheel turns, overall lever-pressing rates, and obtained wheel-running reinforcements were higher, and postreinforcement pauses shorter, when sucrose was present (15%) than absent (0%) in the manipulated component. Overall, our findings suggest that wheel-running rate regardless of its function (operant or reinforcement) is maintained by automatically generated consequences (automatic reinforcement) and is increased as an operant by adding experimentally arranged sucrose reinforcement (extrinsic reinforcement). This additive effect on operant wheel-running generalizes through induction or arousal to the wheel-running as reinforcement component, increasing the rate of responding for opportunities to run and the rate of wheel-running per opportunity. Copyright © 2015 Elsevier B.V. All rights reserved.

  15. Singlet extensions of the standard model at LHC Run 2: benchmarks and comparison with the NMSSM

    Energy Technology Data Exchange (ETDEWEB)

    Costa, Raul [Centro de Física Teórica e Computacional, Faculdade de Ciências,Universidade de Lisboa, Campo Grande, Edifício C8 1749-016 Lisboa (Portugal); Departamento de Física da Universidade de Aveiro,Campus de Santiago, 3810-183 Aveiro (Portugal); Mühlleitner, Margarete [Institute for Theoretical Physics, Karlsruhe Institute of Technology,76128 Karlsruhe (Germany); Sampaio, Marco O.P. [Departamento de Física da Universidade de Aveiro,Campus de Santiago, 3810-183 Aveiro (Portugal); CIDMA - Center for Research Development in Mathematics and Applications,Campus de Santiago, 3810-183 Aveiro (Portugal); Santos, Rui [Centro de Física Teórica e Computacional, Faculdade de Ciências,Universidade de Lisboa, Campo Grande, Edifício C8 1749-016 Lisboa (Portugal); ISEL - Instituto Superior de Engenharia de Lisboa,Instituto Politécnico de Lisboa, 1959-007 Lisboa (Portugal)

    2016-06-07

    The Complex singlet extension of the Standard Model (CxSM) is the simplest extension that provides scenarios for Higgs pair production with different masses. The model has two interesting phases: the dark matter phase, with a Standard Model-like Higgs boson, a new scalar and a dark matter candidate; and the broken phase, with all three neutral scalars mixing. In the latter phase Higgs decays into a pair of two different Higgs bosons are possible. In this study we analyse Higgs-to-Higgs decays in the framework of singlet extensions of the Standard Model (SM), with focus on the CxSM. After demonstrating that scenarios with large rates for such chain decays are possible we perform a comparison between the NMSSM and the CxSM. We find that, based on Higgs-to-Higgs decays, the only possibility to distinguish the two models at the LHC run 2 is through final states with two different scalars. This conclusion builds a strong case for searches for final states with two different scalars at the LHC run 2. Finally, we propose a set of benchmark points for the real and complex singlet extensions to be tested at the LHC run 2. They have been chosen such that the discovery prospects of the involved scalars are maximised and they fulfil the dark matter constraints. Furthermore, for some of the points the theory is stable up to high energy scales. For the computation of the decay widths and branching ratios we developed the Fortran code sHDECAY, which is based on the implementation of the real and complex singlet extensions of the SM in HDECAY.

  16. The effect of treadmill running on passive avoidance learning in animal model of Alzheimer disease.

    Science.gov (United States)

    Hosseini, Nasrin; Alaei, Hojjatallah; Reisi, Parham; Radahmadi, Maryam

    2013-02-01

    Alzheimer's disease is a progressive neurodegenerative disorder of the elderly characterized by dementia and severe neuronal loss in some regions of the brain, such as the nucleus basalis magnocellularis, which plays an important role in brain functions such as learning and memory. Loss of cholinergic neurons of the nucleus basalis magnocellularis induced by ibotenic acid is commonly regarded as a suitable model of Alzheimer's disease. Previous studies reported that exercise training may slow the onset and progression of memory deficits in neurodegenerative disorders. This research investigates the effects of treadmill running on acquisition and retention of passive avoidance deficits induced by ibotenic acid lesion of the nucleus basalis magnocellularis. Male Wistar rats were randomly selected and divided into five groups as follows: control, sham, Alzheimer, exercise before Alzheimer, and exercise groups. Treadmill running lasted 21 days, and Alzheimer's disease was induced by bilateral injection of 5 μg/μl ibotenic acid into the nucleus basalis magnocellularis. Our results showed that ibotenic acid lesions significantly impaired passive avoidance acquisition, whereas exercise significantly (P < 0.001) improved passive avoidance learning in NBM-lesioned rats. Treadmill running has a potential role in the prevention of learning and memory impairments in NBM-lesioned rats.

  17. Modelling the basic error tendencies of human operators

    International Nuclear Information System (INIS)

    Reason, J.

    1988-01-01

    The paper outlines the primary structural features of human cognition: a limited, serial workspace interacting with a parallel distributed knowledge base. It is argued that the essential computational features of human cognition - to be captured by an adequate operator model - reside in the mechanisms by which stored knowledge structures are selected and brought into play. Two such computational 'primitives' are identified: similarity-matching and frequency-gambling. These two retrieval heuristics, it is argued, shape both the overall character of human performance (i.e. its heavy reliance on pattern-matching) and its basic error tendencies ('strong-but-wrong' responses, confirmation, similarity and frequency biases, and cognitive 'lock-up'). The various features of human cognition are integrated with a dynamic operator model capable of being represented in software form. This computer model, when run repeatedly with a variety of problem configurations, should produce a distribution of behaviours which, in total, simulate the general character of operator performance. (author)
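
    The similarity-matching and frequency-gambling heuristics described above can be sketched computationally. The following Python fragment is a hypothetical toy, not Reason's model: candidate knowledge structures are scored by cue overlap plus a frequency term, so a frequently encountered but wrong schema can win the retrieval (a "strong-but-wrong" response).

```python
import random

# Knowledge base of stored "schemata": cue sets plus how often each has been encountered
knowledge_base = [
    {"name": "pump cavitation", "cues": {"vibration", "low flow", "noise"}, "frequency": 12},
    {"name": "valve stuck closed", "cues": {"low flow", "high upstream pressure"}, "frequency": 30},
    {"name": "sensor drift", "cues": {"low flow"}, "frequency": 55},
]

def retrieve(observed_cues, knowledge_base, gamble_weight=1.0):
    """Similarity-matching plus frequency-gambling retrieval of a candidate diagnosis."""
    scores = []
    for schema in knowledge_base:
        similarity = len(observed_cues & schema["cues"]) / len(schema["cues"])
        scores.append(similarity + gamble_weight * schema["frequency"] / 100.0)
    # frequency-gambling: ties and near-ties tend to resolve toward the familiar answer
    pick = random.choices(knowledge_base, weights=scores)[0]
    return pick["name"], scores

random.seed(0)
observed = {"low flow", "vibration"}
diagnosis, scores = retrieve(observed, knowledge_base)
print("candidate scores:", [round(s, 2) for s in scores])
print("retrieved diagnosis:", diagnosis)   # the frequent but incomplete match can win
```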

  18. Modelling the basic error tendencies of human operators

    International Nuclear Information System (INIS)

    Reason, James

    1988-01-01

    The paper outlines the primary structural features of human cognition: a limited, serial workspace interacting with a parallel distributed knowledge base. It is argued that the essential computational features of human cognition - to be captured by an adequate operator model - reside in the mechanisms by which stored knowledge structures are selected and brought into play. Two such computational 'primitives' are identified: similarity-matching and frequency-gambling. These two retrieval heuristics, it is argued, shape both the overall character of human performance (i.e. its heavy reliance on pattern-matching) and its basic error tendencies ('strong-but-wrong' responses, confirmation, similarity and frequency biases, and cognitive 'lock-up'). The various features of human cognition are integrated with a dynamic operator model capable of being represented in software form. This computer model, when run repeatedly with a variety of problem configurations, should produce a distribution of behaviours which, in toto, simulate the general character of operator performance. (author)

  19. Overall Preference of Running Shoes Can Be Predicted by Suitable Perception Factors Using a Multiple Regression Model.

    Science.gov (United States)

    Tay, Cheryl Sihui; Sterzing, Thorsten; Lim, Chen Yen; Ding, Rui; Kong, Pui Wah

    2017-05-01

    This study examined (a) the strength of four individual footwear perception factors to influence the overall preference of running shoes and (b) whether these perception factors satisfied the nonmulticollinear assumption in a regression model. Running footwear must fulfill multiple functional criteria to satisfy its potential users. Footwear perception factors, such as fit and cushioning, are commonly used to guide shoe design and development, but it is unclear whether running-footwear users are able to differentiate one factor from another. One hundred casual runners assessed four running shoes on a 15-cm visual analogue scale for four footwear perception factors (fit, cushioning, arch support, and stability) as well as for overall preference during a treadmill running protocol. Diagnostic tests showed an absence of multicollinearity between factors, where values for tolerance ranged from .36 to .72, corresponding to variance inflation factors of 2.8 to 1.4. The multiple regression model of these four footwear perception variables accounted for 77.7% to 81.6% of variance in overall preference, with each factor explaining a unique part of the total variance. Casual runners were able to rate each footwear perception factor separately, thus assigning each factor a true potential to improve overall preference for the users. The results also support the use of a multiple regression model of footwear perception factors to predict overall running shoe preference. Regression modeling is a useful tool for running-shoe manufacturers to more precisely evaluate how individual factors contribute to the subjective assessment of running footwear.
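
    A minimal sketch of the diagnostic-plus-regression workflow described above, using synthetic data (the weights and noise levels are invented for illustration): tolerance and VIF are computed for each perception factor by regressing it on the others, then overall preference is regressed on all four factors.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for the study's data: 0-15 cm VAS ratings of four perception
# factors and an overall-preference response built from invented weights.
n = 100
X = rng.uniform(0, 15, size=(n, 4))           # fit, cushioning, arch support, stability
beta_true = np.array([0.4, 0.3, 0.1, 0.2])    # illustrative weights only
y = X @ beta_true + rng.normal(0, 1.0, n)     # overall preference

def r_squared(X, y):
    """Ordinary least squares with intercept; returns R^2 and coefficients."""
    Xd = np.column_stack([np.ones(len(y)), X])
    coef, *_ = np.linalg.lstsq(Xd, y, rcond=None)
    resid = y - Xd @ coef
    return 1 - resid.var() / y.var(), coef

# Variance inflation factor: regress each factor on the remaining ones
for j, name in enumerate(["fit", "cushioning", "arch support", "stability"]):
    others = np.delete(X, j, axis=1)
    r2, _ = r_squared(others, X[:, j])
    print(f"{name:13s} tolerance = {1 - r2:.2f}  VIF = {1 / (1 - r2):.2f}")

# Multiple regression of overall preference on the four factors
r2, coef = r_squared(X, y)
print(f"model R^2 = {r2:.3f}, coefficients = {np.round(coef[1:], 3)}")
```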

  20. Debris flow analysis with a one dimensional dynamic run-out model that incorporates entrained material

    Science.gov (United States)

    Luna, Byron Quan; Remaître, Alexandre; van Asch, Theo; Malet, Jean-Philippe; van Westen, Cees

    2010-05-01

    Estimating the magnitude and the intensity of rapid landslides like debris flows is fundamental to evaluate quantitatively the hazard in a specific location. Intensity varies through the travelled course of the flow and can be described by physical features such as deposited volume, velocities, height of the flow, impact forces and pressures. Dynamic run-out models are able to characterize the distribution of the material, its intensity and define the zone where the elements will experience an impact. These models can provide valuable inputs for vulnerability and risk calculations. However, most dynamic run-out models assume a constant volume during the motion of the flow, ignoring the important role of material entrained along its path. Consequently, they neglect that the increase of volume enhances the mobility of the flow and can significantly influence the size of the potential impact area. An appropriate erosion mechanism needs to be established in the analyses of debris flows that will improve the results of dynamic modeling and consequently the quantitative evaluation of risk. The objective is to present and test a simple 1D debris flow model with a material entrainment concept based on limit equilibrium considerations and the generation of excess pore water pressure through undrained loading of the in situ bed material. The debris flow propagation model is based on a one-dimensional finite difference solution of a depth-averaged form of the Navier-Stokes equations of fluid motions. The flow is treated as a laminar one-phase material whose behavior is controlled by a visco-plastic Coulomb-Bingham rheology. The model parameters are evaluated and the model performance is tested on a debris flow event that occurred in 2003 in the Faucon torrent (Southern French Alps).

  1. Hydrologic and water-quality characterization and modeling of the Chenoweth Run basin, Jefferson County, Kentucky

    Science.gov (United States)

    Martin, Gary R.; Zarriello, Phillip J.; Shipp, Allison A.

    2001-01-01

    Rainfall, streamflow, and water-quality data collected in the Chenoweth Run Basin during February 1996–January 1998, in combination with the available historical sampling data, were used to characterize hydrologic conditions and to develop and calibrate a Hydrological Simulation Program–Fortran (HSPF) model for continuous simulation of rainfall, streamflow, suspended-sediment, and total-orthophosphate (TPO4) transport relations. Study results provide an improved understanding of basin hydrology and a hydrologic-modeling framework with analytical tools for use in comprehensive water-resource planning and management. Chenoweth Run Basin, encompassing 16.5 mi² in suburban eastern Jefferson County, Kentucky, contains expanding urban development, particularly in the upper third of the basin. Historical water-quality problems have interfered with designated aquatic-life and recreation uses in the stream main channel (approximately 9 mi in length) and have been attributed to organic enrichment, nutrients, metals, and pathogens in urban runoff and wastewater inflows. Hydrologic conditions in Jefferson County are highly varied. In the Chenoweth Run Basin, as in much of the eastern third of the county, relief is moderately sloping to steep. Also, internal drainage in pervious areas is impeded by the shallow, fine-textured subsoils that contain abundant silts and clays. Thus, much of the precipitation here tends to move rapidly as overland flow and (or) shallow subsurface flow (interflow) to the stream channels. Data were collected at two streamflow-gaging stations, one rain gage, and four water-quality sampling sites in the basin. Precipitation, streamflow, and, consequently, constituent loads were above normal during the data-collection period of this study. Nonpoint sources contributed the largest portion of the sediment loads. However, the three wastewater-treatment plants (WWTPs) were the source of the majority of estimated total phosphorus (TP) and TPO4 transport

  2. Long-Run Effects in Large Heterogeneous Panel Data Models with Cross-Sectionally Correlated Errors

    OpenAIRE

    Chudik, Alexander; Mohaddes, Kamiar; Pesaran, M Hashem; Raissi, Mehdi

    2016-01-01

    This paper develops a cross-sectionally augmented distributed lag (CS-DL) approach to the estimation of long-run effects in large dynamic heterogeneous panel data models with cross-sectionally dependent errors. The asymptotic distribution of the CS-DL estimator is derived under coefficient heterogeneity in the case where the time dimension (T) and the cross-section dimension (N) are both large. The CS-DL approach is compared with more standard panel data estimators that are based on autoregre...

  3. Long-run effects in large heterogenous panel data models with cross-sectionally correlated errors

    OpenAIRE

    Chudik, Alexander; Mohaddes, Kamiar; Pesaran, M. Hashem; Raissi, Mehdi

    2015-01-01

    This paper develops a cross-sectionally augmented distributed lag (CS-DL) approach to the estimation of long-run effects in large dynamic heterogeneous panel data models with cross-sectionally dependent errors. The asymptotic distribution of the CS-DL estimator is derived under coefficient heterogeneity in the case where the time dimension (T) and the cross-section dimension (N) are both large. The CS-DL approach is compared with more standard panel data estimators that are based on autoregre...

  4. Finite element modelling of Plantar Fascia response during running on different surface types

    Science.gov (United States)

    Razak, A. H. A.; Basaruddin, K. S.; Salleh, A. F.; Rusli, W. M. R.; Hashim, M. S. M.; Daud, R.

    2017-10-01

    The plantar fascia is a ligament located beneath the skin of the human foot that functions to stabilize the longitudinal arch of the foot during standing and normal gait. Performing direct experiments on the plantar fascia is very difficult since the structure is located underneath the soft tissue. The aim of this study is to develop a finite element (FE) model of the foot with the plantar fascia and investigate the effect of surface hardness on the biomechanical response of the plantar fascia during running. The plantar fascia model was developed using Solidworks 2015 according to the bone structure of a foot model obtained from the Turbosquid database. Boundary conditions were set based on data obtained from experiments on the ground reaction force response during running on surfaces of different hardness. The finite element analysis was performed using Ansys 14. The results showed that the peaks of the stress and strain distributions occurred at the insertion of the plantar fascia into the bone, especially in the calcaneal area. The plantar fascia became stiffer as the Young's modulus value increased and was able to resist greater loads. Plantar fascia strain decreased as the Young's modulus increased under the same amount of loading.

  5. Building and Running the Yucca Mountain Total System Performance Model in a Quality Environment

    International Nuclear Information System (INIS)

    D.A. Kalinich; K.P. Lee; J.A. McNeish

    2005-01-01

    A Total System Performance Assessment (TSPA) model has been developed to support the Safety Analysis Report (SAR) for the Yucca Mountain High-Level Waste Repository. The TSPA model forecasts repository performance over a 20,000-year simulation period. It has a high degree of complexity due to the complexity of its underlying process and abstraction models. This is reflected in the size of the model (a 27,000-element GoldSim file), its use of dynamic-linked libraries (14 DLLs), the number and size of its input files (659 files totaling 4.7 GB), and the number of model input parameters (2541 input database entries). TSPA model development and subsequent simulations with the final version of the model were performed to a set of Quality Assurance (QA) procedures. Due to the complexity of the model, comments on previous TSPAs, and the number of analysts involved (22 analysts in seven cities across four time zones), additional controls for the entire life-cycle of the TSPA model, including management, physical, model change, and input controls, were developed and documented. These controls did not replace the QA procedures; rather, they provided guidance for implementing the requirements of the QA procedures with the specific intent of ensuring that the model development process and the simulations performed with the final version of the model had sufficient checking, traceability, and transparency. Management controls were developed to ensure that only management-approved changes were implemented into the TSPA model and that only management-approved model runs were performed. Physical controls were developed to track the use of prototype software and preliminary input files, and to ensure that only qualified software and inputs were used in the final version of the TSPA model. In addition, a system was developed to name, file, and track development versions of the TSPA model as well as simulations performed with the final version of the model.

  6. Hydrological Modeling in the Bull Run Watershed in Support of a Piloting Utility Modeling Applications (PUMA) Project

    Science.gov (United States)

    Nijssen, B.; Chiao, T. H.; Lettenmaier, D. P.; Vano, J. A.

    2016-12-01

    Hydrologic models with varying complexities and structures are commonly used to evaluate the impact of climate change on future hydrology. While the uncertainties in future climate projections are well documented, uncertainties in streamflow projections associated with hydrologic model structure and parameter estimation have received less attention. In this study, we implemented and calibrated three hydrologic models (the Distributed Hydrology Soil Vegetation Model (DHSVM), the Precipitation-Runoff Modeling System (PRMS), and the Variable Infiltration Capacity model (VIC)) for the Bull Run watershed in northern Oregon using consistent data sources and best practice calibration protocols. The project was part of a Piloting Utility Modeling Applications (PUMA) project with the Portland Water Bureau (PWB) under the umbrella of the Water Utility Climate Alliance (WUCA). Ultimately, PWB would use the model evaluation to select a model to perform in-house climate change analysis for the Bull Run watershed. This presentation focuses on the experimental design of the comparison project, project findings, and the collaboration between the teams at the University of Washington and at PWB. After calibration, the three models showed similar capability to reproduce seasonal and inter-annual variations in streamflow, but differed in their ability to capture extreme events. Furthermore, the annual and seasonal hydrologic sensitivities to changes in climate forcings differed among models, potentially attributable to different model representations of snow and vegetation processes.
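
    The abstract does not state which goodness-of-fit measure was used in the calibration; as one common choice for this kind of streamflow comparison, the sketch below computes the Nash-Sutcliffe efficiency for three hypothetical model outputs against a synthetic observed series. The model labels and error magnitudes are invented.

```python
import numpy as np

def nash_sutcliffe(obs, sim):
    """Nash-Sutcliffe efficiency: 1 is perfect, 0 means no better than the observed mean."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

# Illustrative daily flows (m^3/s) for one water year from three hypothetical models
rng = np.random.default_rng(1)
observed = 10 + 5 * np.sin(np.linspace(0, 2 * np.pi, 365)) + rng.normal(0, 1, 365)
for name, bias, noise in [("DHSVM-like", 0.5, 1.0), ("PRMS-like", -1.0, 1.5), ("VIC-like", 0.0, 2.0)]:
    simulated = observed + bias + rng.normal(0, noise, 365)
    print(f"{name:10s} NSE = {nash_sutcliffe(observed, simulated):.3f}")
```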

  7. The DIAMOND Model of Peace Support Operations

    National Research Council Canada - National Science Library

    Bailey, Peter

    2005-01-01

    DIAMOND (Diplomatic And Military Operations in a Non-warfighting Domain) is a high-level stochastic simulation developed at Dstl as a key centerpiece within the Peace Support Operations (PSO) 'modelling jigsaw...

  8. RUN COORDINATION

    CERN Multimedia

    M. Chamizo

    2012-01-01

      On 17th January, as soon as the services were restored after the technical stop, sub-systems started powering on. Since then, we have been running 24/7 with reduced shift crew — Shift Leader and DCS shifter — to allow sub-detectors to perform calibration, noise studies, test software upgrades, etc. On 15th and 16th February, we had the first Mid-Week Global Run (MWGR) with the participation of most sub-systems. The aim was to bring CMS back to operation and to ensure that we could run after the winter shutdown. All sub-systems participated in the readout and the trigger was provided by a fraction of the muon systems (CSC and the central RPC wheel). The calorimeter triggers were not available due to work on the optical link system. Initial checks of different distributions from Pixels, Strips, and CSC confirmed things look all right (signal/noise, number of tracks, phi distribution…). High-rate tests were done to test the new CSC firmware to cure the low efficiency ...

  9. Debris flow run-out simulation and analysis using a dynamic model

    Science.gov (United States)

    Melo, Raquel; van Asch, Theo; Zêzere, José L.

    2018-02-01

    Only two months after a huge forest fire occurred in the upper part of a valley located in central Portugal, several debris flows were triggered by intense rainfall. The event caused infrastructural and economic damage, although no lives were lost. The present research aims to simulate the run-out of two debris flows that occurred during the event as well as to calculate via back-analysis the rheological parameters and the excess rain involved. Thus, a dynamic model was used, which integrates surface runoff, concentrated erosion along the channels, propagation and deposition of flow material. Afterwards, the model was validated using 32 debris flows triggered during the same event that were not considered for calibration. The rheological and entrainment parameters obtained for the most accurate simulation were then used to perform three scenarios of debris flow run-out on the basin scale. The results were compared with the existing buildings exposed in the study area, and the worst-case scenario showed a potential inundation that may affect 345 buildings. In addition, six streams where debris flows occurred in the past and caused material damage and loss of life were identified.

  10. eWaterCycle: A global operational hydrological forecasting model

    Science.gov (United States)

    van de Giesen, Nick; Bierkens, Marc; Donchyts, Gennadii; Drost, Niels; Hut, Rolf; Sutanudjaja, Edwin

    2015-04-01

    Development of an operational hyper-resolution hydrological global model is a central goal of the eWaterCycle project (www.ewatercycle.org). This operational model includes ensemble forecasts (14 days) to predict water-related stress around the globe. Assimilation of near-real-time satellite data is part of the intended product that will be launched at EGU 2015. The challenges come from several directions. First, there are challenges that are mainly computer-science oriented but have direct practical hydrological implications. For example, we aim to make use as much as possible of existing standards and open-source software. In particular, different parts of our system are coupled through the Basic Model Interface (BMI) developed in the framework of the Community Surface Dynamics Modeling System (CSDMS). The PCR-GLOBWB model, built by Utrecht University, is the basic hydrological model that is the engine of the eWaterCycle project. Re-engineering of parts of the software was needed for it to run efficiently in a High Performance Computing (HPC) environment, to be able to interface using BMI, and to run on multiple compute nodes in parallel. The final aim is a spatial resolution of 1 km x 1 km; the current resolution is 10 km x 10 km. This high resolution is computationally not too demanding but very memory intensive. The memory bottleneck becomes especially apparent for data assimilation, for which we use OpenDA. OpenDA allows for different data assimilation techniques without the need to build these from scratch. We have developed a BMI adaptor for OpenDA, allowing OpenDA to use any BMI-compatible model. To circumvent memory shortages that would result from standard applications of the Ensemble Kalman Filter, we have developed a variant that does not need to keep all ensemble members in working memory. At EGU, we will present this variant and how it fits well in HPC environments. An important step in the eWaterCycle project was the coupling between the hydrological and
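
    One way to avoid holding every ensemble member in memory, in the spirit of the variant described above (but not the actual eWaterCycle/OpenDA implementation), is to stream members from disk and accumulate only the covariance terms needed for the Kalman gain. The sketch below uses a stand-in load_member function and random data; dimensions and the observation operator are invented.

```python
import numpy as np

rng = np.random.default_rng(2)
n_state, n_obs, n_ens = 1000, 5, 20
H = np.zeros((n_obs, n_state)); H[np.arange(n_obs), np.arange(n_obs) * 100] = 1.0
R = 0.1 * np.eye(n_obs)
y = rng.normal(size=n_obs)

def load_member(i):
    """Stand-in for reading one ensemble member from disk; only one is in memory at a time."""
    return np.random.default_rng(100 + i).normal(size=n_state)

# Pass 1: accumulate the ensemble mean one member at a time
x_mean = np.zeros(n_state)
for i in range(n_ens):
    x_mean += load_member(i) / n_ens

# Pass 2: accumulate the cross-covariance P H^T and the innovation covariance H P H^T
PHt = np.zeros((n_state, n_obs)); HPHt = np.zeros((n_obs, n_obs))
for i in range(n_ens):
    dx = load_member(i) - x_mean
    dhx = H @ dx
    PHt += np.outer(dx, dhx) / (n_ens - 1)
    HPHt += np.outer(dhx, dhx) / (n_ens - 1)

K = PHt @ np.linalg.solve(HPHt + R, np.eye(n_obs))   # Kalman gain

# Pass 3: update each member independently (perturbed observations) and write it back out
for i in range(n_ens):
    x = load_member(i)
    y_pert = y + rng.multivariate_normal(np.zeros(n_obs), R)
    x_analysis = x + K @ (y_pert - H @ x)
    if i == 0:
        print("first-member analysis increment norm:", float(np.linalg.norm(x_analysis - x)))
```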

  11. Tsunami generation, propagation, and run-up with a high-order Boussinesq model

    DEFF Research Database (Denmark)

    Fuhrman, David R.; Madsen, Per A.

    2009-01-01

    In this work we extend a high-order Boussinesq-type (finite difference) model, capable of simulating waves out to wavenumber times depth kh tsunamis. The extension is straightforward, requiring only...... show that the long-time (fully nonlinear) evolution of waves resulting from an upthrusted bottom can eventually result in true solitary waves, consistent with theoretical predictions. It is stressed, however, that the nonlinearity used far exceeds that typical of geophysical tsunamis in the open ocean....... The Boussinesq-type model is then used to simulate numerous tsunami-type events generated from submerged landslides, in both one and two horizontal dimensions. The results again compare well against previous experiments and/or numerical simulations. The new extension complements recently developed run...

  12. Minkowski space pion model inspired by lattice QCD running quark mass

    Directory of Open Access Journals (Sweden)

    Clayton S. Mello

    2017-03-01

    Full Text Available The pion structure in Minkowski space is described in terms of an analytic model of the Bethe–Salpeter amplitude combined with Euclidean Lattice QCD results. The model is physically motivated to take into account the running quark mass, which is fitted to Lattice QCD data. The pion pseudoscalar vertex is associated to the quark mass function, as dictated by dynamical chiral symmetry breaking requirements in the limit of vanishing current quark mass. The quark propagator is analyzed in terms of a spectral representation, and it shows a violation of the positivity constraints. The integral representation of the pion Bethe–Salpeter amplitude is also built. The pion space-like electromagnetic form factor is calculated with a quark electromagnetic current, which satisfies the Ward–Takahashi identity to ensure current conservation. The results for the form factor and weak decay constant are found to be consistent with the experimental data.

  13. Modelling of flexi-coil springs with rubber-metal pads in a locomotive running gear

    Directory of Open Access Journals (Sweden)

    Michálek T.

    2015-06-01

    Full Text Available Nowadays, flexi-coil springs are commonly used in the secondary suspension stage of railway vehicles. The lateral stiffness of these springs is influenced by their design parameters (number of coils, height, mean diameter of coils, wire diameter, etc.), and it is often suitable to modify this stiffness in such a way that the suspension shows different lateral stiffness in different directions (i.e., longitudinally vs. laterally in the vehicle-related coordinate system). Therefore, these springs are often supplemented with some kind of rubber-metal pads. This paper deals with the modelling of flexi-coil springs supplemented with rubber-metal tilting pads applied in the running gear of an electric locomotive, as well as with the consequences of applying that secondary suspension solution for the vehicle's running performance. This analysis is performed by means of multi-body simulations, and the description of the lateral stiffness characteristics of the springs is based on results of experimental measurements of these characteristics performed in the heavy laboratories of the Jan Perner Transport Faculty of the University of Pardubice.

  14. Hippocampal serotonin-1A receptor function in a mouse model of anxiety induced by long-term voluntary wheel running.

    Science.gov (United States)

    Fuss, Johannes; Vogt, Miriam A; Weber, Klaus-Josef; Burke, Teresa F; Gass, Peter; Hensler, Julie G

    2013-10-01

    We have recently demonstrated that, in C57/Bl6 mice, long-term voluntary wheel running is anxiogenic, and focal hippocampal irradiation prevents the increase in anxiety-like behaviors and neurobiological changes in the hippocampus induced by wheel running. Evidence supports a role of hippocampal 5-HT1A receptors in anxiety. Therefore, we investigated hippocampal binding and function of 5-HT1A receptors in this mouse model of anxiety. Four weeks of voluntary wheel running resulted in hippocampal subregion-specific changes in 5-HT1A receptor binding sites and function, as measured by autoradiography of [(3) H] 8-hydroxy-2-(di-n-propylamino)tetralin binding and agonist-stimulated binding of [(35) S]GTPγS to G proteins, respectively. In the dorsal CA1 region, 5-HT1A receptor binding and function were not altered by wheel running or irradiation. In the dorsal dentate gyrus and CA2/3 region, 5-HT1A receptor function was decreased by not only running but also irradiation. In the ventral pyramidal layer, wheel running resulted in a decrease of 5-HT1A receptor function, which was prevented by irradiation. Neither irradiation nor wheel running affected 5-HT1A receptors in medial prefrontal cortex or in the dorsal or median raphe nuclei. Our data indicate that downregulation of 5-HT1A receptor function in ventral pyramidal layer may play a role in anxiety-like behavior induced by wheel running. Copyright © 2013 Wiley Periodicals, Inc.

  15. A Scalable Version of the Navy Operational Global Atmospheric Prediction System Spectral Forecast Model

    Directory of Open Access Journals (Sweden)

    Thomas E. Rosmond

    2000-01-01

    Full Text Available The Navy Operational Global Atmospheric Prediction System (NOGAPS) includes a state-of-the-art spectral forecast model similar to models run at several major operational numerical weather prediction (NWP) centers around the world. The model, developed by the Naval Research Laboratory (NRL) in Monterey, California, has run operationally at the Fleet Numerical Meteorological and Oceanographic Center (FNMOC) since 1982, and most recently is being run on a Cray C90 in a multi-tasked configuration. Typically the multi-tasked code runs on 10 to 15 processors with overall parallel efficiency of about 90%. The operational resolution is T159L30, but other operational and research applications run at significantly lower resolutions. A scalable NOGAPS forecast model has been developed by NRL in anticipation of a FNMOC C90 replacement in about 2001, as well as for current NOGAPS research requirements to run on DOD High-Performance Computing (HPC) scalable systems. The model is designed to run with message passing (MPI). Model design criteria include bit reproducibility for different processor numbers and reasonably efficient performance on fully shared memory, distributed memory, and distributed shared memory systems for a wide range of model resolutions. Results for a wide range of processor numbers, model resolutions, and different vendor architectures are presented. Single-node performance has been disappointing on RISC-based systems, at least compared to vector processor performance. This is a common complaint, and will require careful re-examination of traditional numerical weather prediction (NWP) model software design and data organization to fully exploit future scalable architectures.
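
    Bit reproducibility across processor counts usually comes down to fixing the reduction order. The sketch below is a generic illustration of that design criterion, not NOGAPS code: a naive per-processor partial sum changes its last bits with the processor count, while a fixed-block-order sum does not.

```python
import numpy as np

rng = np.random.default_rng(6)
field = rng.normal(size=2**20)   # stand-in for a global model field

def naive_parallel_sum(x, n_procs):
    """Each 'processor' sums its slice, then partial sums are added: result depends on n_procs."""
    parts = np.array_split(x, n_procs)
    return sum(float(np.sum(p)) for p in parts)

def reproducible_sum(x, n_procs, block=4096):
    """Sum fixed-size blocks in a fixed order, regardless of how blocks map to processors."""
    partials = [float(np.sum(x[i:i + block])) for i in range(0, len(x), block)]
    return sum(partials)   # same order, same rounding, for any n_procs

for p in (1, 4, 16, 64):
    print(f"{p:3d} procs  naive = {naive_parallel_sum(field, p):.17f}  "
          f"fixed-order = {reproducible_sum(field, p):.17f}")
```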

  16. Why operational risk modelling creates inverse incentives

    NARCIS (Netherlands)

    Doff, R.

    2015-01-01

    Operational risk modelling has become commonplace in large international banks and is gaining popularity in the insurance industry as well. This is partly due to financial regulation (Basel II, Solvency II). This article argues that operational risk modelling is fundamentally flawed, despite efforts

  17. Towards a numerical run-out model for quick-clay slides

    Science.gov (United States)

    Issler, Dieter; L'Heureux, Jean-Sébastien; Cepeda, José M.; Luna, Byron Quan; Gebreslassie, Tesfahunegn A.

    2015-04-01

    Highly sensitive glacio-marine clays occur in many relatively low-lying areas near the coasts of eastern Canada, Scandinavia and northern Russia. If the load exceeds the yield stress of these clays, they quickly liquefy, with a reduction of the yield strength and the viscosity by several orders of magnitude. Leaching, fluvial erosion, earthquakes and man-made overloads, by themselves or combined, are the most frequent triggers of quick-clay slides, which are hard to predict and can attain catastrophic dimensions. The present contribution reports on two preparatory studies that were conducted with a view to creating a run-out model tailored to the characteristics of quick-clay slides. One study analyzed the connections between the morphological and geotechnical properties of more than 30 well-documented Norwegian quick-clay slides and their run-out behavior. The laboratory experiments by Locat and Demers (1988) suggest that the behavior of quick clays can be reasonably described by universal relations involving the liquidity index, plastic index, remolding energy, salinity and sensitivity. However, these tests should be repeated with Norwegian clays and analyzed in terms of a (shear-thinning) Herschel-Bulkley fluid rather than a Bingham fluid because the shear stress appears to grow in a sub-linear fashion with the shear rate. Further study is required to understand the discrepancy between the material parameters obtained in laboratory tests of material from observed slides and in back-calculations of the same slides with the simple model by Edgers & Karlsrud (1982). The second study assessed the capability of existing numerical flow models to capture the most important aspects of quick-clay slides by back-calculating three different, well documented events in Norway: Rissa (1978), Finneidfjord (1996) and Byneset (2012). The numerical codes were (i) BING, a quasi-two-dimensional visco-plastic model, (ii) DAN3D (2009 version), and (iii) MassMov2D. The latter two are
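
    The rheological distinction mentioned above is easy to see numerically. The following sketch, with placeholder parameter values rather than measured quick-clay properties, compares the linear Bingham stress growth with the sub-linear Herschel-Bulkley growth as the shear rate increases.

```python
import numpy as np

def bingham(gamma_dot, tau_y=100.0, mu=50.0):
    """Bingham fluid: linear growth of shear stress above the yield stress (Pa)."""
    return tau_y + mu * gamma_dot

def herschel_bulkley(gamma_dot, tau_y=100.0, K=80.0, n=0.4):
    """Herschel-Bulkley fluid: sub-linear (shear-thinning) growth when n < 1."""
    return tau_y + K * gamma_dot ** n

shear_rates = np.array([0.1, 0.5, 1.0, 5.0, 10.0])  # 1/s
for g in shear_rates:
    print(f"gamma_dot = {g:5.1f} 1/s   Bingham = {bingham(g):7.1f} Pa   "
          f"Herschel-Bulkley = {herschel_bulkley(g):7.1f} Pa")
```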

  18. Analysis of the traditional vehicle’s running cost and the electric vehicle’s running cost under car-following model

    Science.gov (United States)

    Tang, Tie-Qiao; Xu, Ke-Wei; Yang, Shi-Chun; Shang, Hua-Yan

    2016-03-01

    In this paper, we use car-following theory to study the traditional vehicle’s running cost and the electric vehicle’s running cost. The numerical results illustrate that the traditional vehicle’s running cost is larger than that of the electric vehicle and that the system’s total running cost drops with the increase of the electric vehicle’s proportion, which shows that the electric vehicle is better than the traditional vehicle from the perspective of the running cost.
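
    As a toy illustration of the idea (the paper's specific car-following model and cost functions are not reproduced here), the sketch below runs an optimal-velocity-type follower behind an oscillating leader and accumulates two invented running costs, one of which recovers part of the braking energy as an electric vehicle would.

```python
import numpy as np

def optimal_velocity(gap, v_max=30.0, gap_c=25.0):
    """Optimal-velocity function of the headway gap (m), a common car-following choice."""
    return 0.5 * v_max * (np.tanh(0.1 * (gap - gap_c)) + np.tanh(0.1 * gap_c))

# Two-vehicle platoon: a leader with a prescribed speed profile and one follower
dt, steps, alpha = 0.1, 600, 0.6
x_lead, v_lead = 100.0, 20.0
x_fol, v_fol = 0.0, 20.0
energy_ice, energy_ev = 0.0, 0.0   # arbitrary cost units, illustrative only

for k in range(steps):
    v_lead = 20.0 + 5.0 * np.sin(2 * np.pi * k * dt / 30.0)   # leader speeds up and slows down
    gap = x_lead - x_fol
    a_fol = alpha * (optimal_velocity(gap) - v_fol)            # follower acceleration
    # Toy running costs: both pay for traction; the EV recovers part of the braking energy
    traction = max(a_fol, 0.0) * v_fol
    braking = max(-a_fol, 0.0) * v_fol
    energy_ice += (traction + 0.05 * v_fol) * dt
    energy_ev += (traction - 0.5 * braking + 0.03 * v_fol) * dt
    x_lead += v_lead * dt
    x_fol += v_fol * dt
    v_fol += a_fol * dt

print(f"toy running cost, conventional vehicle: {energy_ice:8.1f}")
print(f"toy running cost, electric vehicle:     {energy_ev:8.1f}")
```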

  19. Development of a simulation model for compression ignition engine running with ignition improved blend

    Directory of Open Access Journals (Sweden)

    Sudeshkumar Ponnusamy Moranahalli

    2011-01-01

    Full Text Available The present work describes the thermodynamic and heat transfer models used in a computer program that simulates a direct injection compression ignition engine fuelled with a diesel fuel and ignition improver blend, in order to predict its combustion and emission characteristics using a classical two-zone approach. One zone consists of pure air, called the non-burning zone, and the other zone consists of fuel and combustion products, called the burning zone. The first law of thermodynamics and state equations are applied in each of the two zones to yield cylinder temperature and cylinder pressure histories. Using the two-zone combustion model, the combustion parameters and the chemical equilibrium composition were determined. To validate the model, an experimental investigation was conducted on a single-cylinder direct injection diesel engine fuelled with a blend of 12% by volume of 2-ethoxyethanol in diesel fuel. Addition of the ignition improver to diesel fuel decreases the exhaust smoke and increases the thermal efficiency across the power outputs. It was observed that there is good agreement between simulated and experimental results, and the proposed model requires low computational time for a complete run.

  20. RUN COORDINATION

    CERN Multimedia

    G. Rakness.

    2013-01-01

    After three years of running, in February 2013 the era of sub-10-TeV LHC collisions drew to an end. Recall, the 2012 run had been extended by about three months to achieve the full complement of high-energy and heavy-ion physics goals prior to the start of Long Shutdown 1 (LS1), which is now underway. The LHC performance during these exciting years was excellent, delivering a total of 23.3 fb–1 of proton-proton collisions at a centre-of-mass energy of 8 TeV, 6.2 fb–1 at 7 TeV, and 5.5 pb–1 at 2.76 TeV. They also delivered 170 μb–1 lead-lead collisions at 2.76 TeV/nucleon and 32 nb–1 proton-lead collisions at 5 TeV/nucleon. During these years the CMS operations teams and shift crews made tremendous strides to commission the detector, repeatedly stepping up to meet the challenges at every increase of instantaneous luminosity and energy. Although it does not fully cover the achievements of the teams, a way to quantify their success is the fact that that...

  1. Balancing hydropower production and river bed incision in operating a run-of-river hydropower scheme along the River Po

    Science.gov (United States)

    Denaro, Simona; Dinh, Quang; Bizzi, Simone; Bernardi, Dario; Pavan, Sara; Castelletti, Andrea; Schippa, Leonardo; Soncini-Sessa, Rodolfo

    2013-04-01

    Water management through dams and reservoirs is necessary worldwide to support key human-related activities ranging from hydropower production to water allocation and flood risk mitigation. Reservoir operations are commonly planned in order to maximize these objectives. However, reservoirs strongly influence river geomorphic processes, causing sediment deficit downstream and altering the flow regime, often leading to river bed incision: for instance, the variation of river cross sections over a few years can notably affect hydropower production, flood mitigation, water supply strategies and the eco-hydrological processes of the freshwater ecosystem. The river Po (a major Italian river) has experienced severe bed incision in the last decades. For this reason infrastructure stability has been negatively affected, the capacity to divert water has decreased, and navigation, fishing and tourism are suffering economic damage, not to mention the impact on the environment. Our case study analyzes the management of the Isola Serafini hydropower plant located on the main Po river course. The plant has a major impact on the geomorphic river processes downstream, affecting sediment supply, connectivity (stopping sediment upstream of the dam) and transport capacity (altering the flow regime). The current operation policy aims at maximizing hydropower production, neglecting the effects in terms of geomorphic processes. A new, improved policy should also consider controlling downstream river bed incision. The aim of this research is to find a suitable modeling framework to identify an operating policy for the Isola Serafini reservoir able to provide an optimal trade-off between these two conflicting objectives: hydropower production and river bed incision downstream. A multi-objective simulation-based optimization framework is adopted. The operating policy is parameterized as a piecewise linear function and the parameters optimized using an interactive response surface approach. Global and local
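
    A schematic of the policy parameterization and the two-objective trade-off might look like the Python fragment below. The breakpoints, the release values and, especially, the bed-incision proxy are placeholders, not the study's formulation; the point is only to show a piecewise linear policy being evaluated under different objective weights.

```python
import numpy as np

def piecewise_linear_policy(inflow, breakpoints, releases):
    """Turbined release (m^3/s) as a piecewise linear function of inflow (m^3/s)."""
    return np.interp(inflow, breakpoints, releases)

# Illustrative daily inflows and one candidate policy
rng = np.random.default_rng(7)
inflow = np.clip(rng.lognormal(mean=6.0, sigma=0.5, size=365), 100, 3000)
breakpoints = np.array([100.0, 500.0, 1000.0, 3000.0])
releases = np.array([100.0, 450.0, 800.0, 900.0])      # candidate policy parameters

release = piecewise_linear_policy(inflow, breakpoints, releases)
spill = np.maximum(inflow - release, 0.0)               # flow left to the downstream reach

# Two conflicting toy objectives: turbined volume vs a placeholder penalty for
# starving the downstream reach (used here as a crude bed-incision proxy)
hydropower = np.sum(release)
incision_proxy = np.sum(np.maximum(800.0 - spill, 0.0))

for w in (0.0, 0.5, 1.0):
    score = w * hydropower - (1 - w) * incision_proxy
    print(f"weight on hydropower = {w:.1f}  weighted score = {score:12.1f}")
```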

  2. Operational Models of Infrastructure Resilience

    Science.gov (United States)

    2015-01-01

    Wiemer S. A stochastic forecast of California earthquakes based on fault slip and smoothed seismicity. Bulletin of the Seismological Society of America...mean time between failures). For so-called rare events there is ongoing debate about how to model the frequencies with which disruptions occur (e.g...in a simple priority list of importance; however, the frequency with which a link appears in attack or defense solutions provides an indication of

  3. Verification of the NWP models operated at ICM, Poland

    Science.gov (United States)

    Melonek, Malgorzata

    2010-05-01

    The Interdisciplinary Centre for Mathematical and Computational Modelling, University of Warsaw (ICM) started its activity in the field of NWP in May 1997. Since that time, numerical weather forecasts covering Central Europe have been routinely published on our publicly available website. The first NWP model used at ICM was the hydrostatic Unified Model developed by the UK Meteorological Office. It was a mesoscale version with a horizontal resolution of 17 km and 31 levels in the vertical. At present, two non-hydrostatic NWP models are running in a quasi-operational regime. The main new UM model, with 4 km horizontal resolution, 38 levels in the vertical and a forecast range of 48 hours, is run four times a day. Second, the COAMPS model (Coupled Ocean/Atmosphere Mesoscale Prediction System) developed by the US Naval Research Laboratory, configured with three nested grids (with corresponding resolutions of 39 km, 13 km and 4.3 km, and 30 vertical levels), is run twice a day (for 00 and 12 UTC). The second grid covers Central Europe and has a forecast range of 84 hours. Results of both NWP models, i.e. COAMPS computed on the 13-km mesh and the UM, are verified against observations from Polish synoptic stations. Verification uses surface observations and nearest-grid-point forecasts. The following meteorological elements are verified: air temperature at 2 m, mean sea level pressure, wind speed and wind direction at 10 m, and 12-hour accumulated precipitation. Different statistical indices are presented. For continuous variables, the Mean Error (ME), Mean Absolute Error (MAE) and Root Mean Squared Error (RMSE) are computed in 6-hour intervals. In the case of precipitation, contingency tables for different thresholds are computed and some of the verification scores, such as FBI, ETS, POD and FAR, are graphically presented. The verification sample covers nearly one year.
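
    The verification scores named above can be computed from simple error statistics and a 2x2 contingency table. The sketch below uses invented station values; the threshold is arbitrary, and the formulas follow the standard definitions of FBI, POD, FAR and ETS.

```python
import numpy as np

def continuous_scores(forecast, observed):
    """Mean error, mean absolute error and root mean squared error."""
    err = np.asarray(forecast, float) - np.asarray(observed, float)
    return {"ME": err.mean(), "MAE": np.abs(err).mean(), "RMSE": np.sqrt((err ** 2).mean())}

def categorical_scores(forecast, observed, threshold):
    """Scores from the 2x2 contingency table for exceeding a precipitation threshold."""
    f = np.asarray(forecast) >= threshold
    o = np.asarray(observed) >= threshold
    hits = np.sum(f & o); misses = np.sum(~f & o)
    false_alarms = np.sum(f & ~o); correct_neg = np.sum(~f & ~o)
    n = hits + misses + false_alarms + correct_neg
    hits_random = (hits + misses) * (hits + false_alarms) / n   # expected hits by chance
    return {
        "FBI": (hits + false_alarms) / (hits + misses),          # frequency bias
        "POD": hits / (hits + misses),                           # probability of detection
        "FAR": false_alarms / (hits + false_alarms),             # false alarm ratio
        "ETS": (hits - hits_random) / (hits + misses + false_alarms - hits_random),
    }

# Illustrative 12-h accumulated precipitation (mm) at a handful of stations
obs = np.array([0.0, 0.2, 1.5, 4.0, 12.0, 0.0, 7.5, 0.4])
fct = np.array([0.1, 0.0, 2.5, 3.0, 8.0, 1.2, 9.0, 0.0])
print(continuous_scores(fct, obs))
print(categorical_scores(fct, obs, threshold=1.0))
```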

  4. Prosthetic model, but not stiffness or height, affects the metabolic cost of running for athletes with unilateral transtibial amputations.

    Science.gov (United States)

    Beck, Owen N; Taboga, Paolo; Grabowski, Alena M

    2017-07-01

    Running-specific prostheses enable athletes with lower limb amputations to run by emulating the spring-like function of biological legs. Current prosthetic stiffness and height recommendations aim to mitigate kinematic asymmetries for athletes with unilateral transtibial amputations. However, it is unclear how different prosthetic configurations influence the biomechanics and metabolic cost of running. Consequently, we investigated how prosthetic model, stiffness, and height affect the biomechanics and metabolic cost of running. Ten athletes with unilateral transtibial amputations each performed 15 running trials at 2.5 or 3.0 m/s while we measured ground reaction forces and metabolic rates. Athletes ran using three different prosthetic models with five different stiffness category and height combinations per model. Use of an Ottobock 1E90 Sprinter prosthesis reduced metabolic cost by 4.3 and 3.4% compared with use of Freedom Innovations Catapult [fixed effect (β) = -0.177; P < 0.001] and Össur Flex-Run (β = -0.139; P = 0.002) prostheses, respectively. Neither prosthetic stiffness ( P ≥ 0.180) nor height ( P = 0.062) affected the metabolic cost of running. The metabolic cost of running was related to lower peak (β = 0.649; P = 0.001) and stance average (β = 0.772; P = 0.018) vertical ground reaction forces, prolonged ground contact times (β = -4.349; P = 0.012), and decreased leg stiffness (β = 0.071; P < 0.001) averaged from both legs. Metabolic cost was reduced with more symmetric peak vertical ground reaction forces (β = 0.007; P = 0.003) but was unrelated to stride kinematic symmetry ( P ≥ 0.636). Therefore, prosthetic recommendations based on symmetric stride kinematics do not necessarily minimize the metabolic cost of running. Instead, an optimal prosthetic model, which improves overall biomechanics, minimizes the metabolic cost of running for athletes with unilateral transtibial amputations. NEW & NOTEWORTHY The metabolic cost of running for

  5. Modeling Operating Modes during Plant Life Cycle

    DEFF Research Database (Denmark)

    Jørgensen, Sten Bay; Lind, Morten

    2012-01-01

    of candidate control structures. The present contribution focuses on development of a model ensemble for a plant with an illustrative example for a bioreactor. Starting from a functional model a process plant may be conceptually designed and qualitative operating models may be developed to cover the different...... regions within the plant operating window, including transitions between operating regions. Subsequently qualitative functional models may be developed when the means for achieving the desired functionality are sufficiently specified during the design process. Quantitative mathematical models of plant physics can be used for detailed design and optimization. However the qualitative functional models already provide a systematic framework based on the notion of means-end abstraction hierarchies. Thereby functional modeling provides a scientific basis for managing complexity. A functional modelling

  6. Short-run analysis of fiscal policy and the current account in a finite horizon model

    OpenAIRE

    Heng-fu Zou

    1995-01-01

    This paper utilizes a technique developed by Judd to quantify the short-run effects of fiscal policies and income shocks on the current account in a small open economy. It is found that: (1) a future increase in government spending improves the short-run current account; (2) a future tax increase worsens the short-run current account; (3) a present increase in the government spending worsens the short-run current account dollar by dollar, while a present increase in the income improves the cu...

  7. RG running in a minimal UED model in light of recent LHC Higgs mass bounds

    International Nuclear Information System (INIS)

    Blennow, Mattias; Melbéus, Henrik; Ohlsson, Tommy; Zhang, He

    2012-01-01

    We study how the recent ATLAS and CMS Higgs mass bounds affect the renormalization group running of the physical parameters in universal extra dimensions. Using the running of the Higgs self-coupling constant, we derive bounds on the cutoff scale of the extra-dimensional theory itself. We show that the running of physical parameters, such as the fermion masses and the CKM mixing matrix, is significantly restricted by these bounds. In particular, we find that the running of the gauge couplings cannot be sufficient to allow gauge unification at the cutoff scale.

  8. Stereological Study on the Positive Effect of Running Exercise on the Capillaries in the Hippocampus in a Depression Model

    Directory of Open Access Journals (Sweden)

    Linmu Chen

    2017-11-01

    Full Text Available Running exercise is an effective method to improve depressive symptoms when combined with drugs. However, the underlying mechanisms are not fully clear. Cerebral blood flow perfusion in depressed patients is significantly lower in the hippocampus. Physical activity can achieve cerebrovascular benefits. The purpose of this study was to evaluate the impacts of running exercise on capillaries in the hippocampal CA1 and dentate gyrus (DG) regions. The chronic unpredictable stress (CUS) depression model was used in this study. CUS rats were given 4 weeks of running exercise from the fifth week to the eighth week (20 min every day from Monday to Friday each week). The sucrose consumption test was used to measure anhedonia. Furthermore, stereological methods were used to investigate the capillary changes among the control group, CUS/Standard group and CUS/Running group. Sucrose consumption significantly increased in the CUS/Running group. Running exercise has positive effects on capillary parameters in the hippocampal CA1 and DG regions, such as the total volume, total length and total surface area. These results demonstrated that the protection of capillaries in the hippocampal CA1 and DG by running exercise might be one of the structural bases for the exercise-induced treatment of depression-like behavior. These results suggest that drugs and behavior influence capillaries, which may be considered as a new means for depression treatment in the future.

  9. MODEL OF PRIORITY SERVICE FOR AIRDROME'S OPERATIONS

    Directory of Open Access Journals (Sweden)

    A. K. Mitrofanov

    2015-01-01

    Full Text Available A model of the priority-service process for departure and arrival operations is discussed. Priorities are assigned in real time by controller personnel according to the requirements of civil aviation authority documents and the situation developing on the ground and in the air around the airfield. Expressions for assessing the service properties of flights are derived.

  10. Comparing models of offensive cyber operations

    CSIR Research Space (South Africa)

    Grant, T

    2015-10-01

    Full Text Available would be needed by a Cyber Security Operations Centre in order to perform offensive cyber operations?". The analysis was performed, using as a springboard seven models of cyber-attack, and resulted in the development of what is described as a canonical...

  11. Stochastic Modelling and Analysis of Warehouse Operations

    NARCIS (Netherlands)

    Y. Gong (Yeming)

    2009-01-01

    This thesis has studied stochastic models and analysis of warehouse operations. After an overview of stochastic research in warehouse operations, we explore the following topics. Firstly, we search for optimal batch sizes in a parallel-aisle warehouse with online order arrivals. We employ a

  12. Modeling Control Situations in Power System Operations

    DEFF Research Database (Denmark)

    Saleem, Arshad; Lind, Morten; Singh, Sri Niwas

    2010-01-01

    Increased interconnection and loading of the power system along with deregulation has brought new challenges for electric power system operation, control and automation. Traditional power system models used in intelligent operation and control are highly dependent on the task purpose. Thus, a model...... of explicit principles for model construction. This paper presents a work on using explicit means-ends model based reasoning about complex control situations which results in maintaining consistent perspectives and selecting appropriate control action for goal driven agents. An example of power system...... for intelligent operation and control must represent system features, so that information from measurements can be related to possible system states and to control actions. These general modeling requirements are well understood, but it is, in general, difficult to translate them into a model because of the lack...

  13. Visualization study of operators' plant knowledge model

    International Nuclear Information System (INIS)

    Kanno, Tarou; Furuta, Kazuo; Yoshikawa, Shinji

    1999-03-01

    Nuclear plants are typically very complicated systems, and extremely high levels of safety are required in their operation. Since it is never possible to include all possible anomaly scenarios in the education/training curriculum, plant knowledge formation is desired for operators to enable them to act against unexpected anomalies based on knowledge-based decision making. The authors have conducted a study on operators' plant knowledge models for the purpose of supporting operators' efforts in forming this kind of plant knowledge. In this report, an integrated plant knowledge model consisting of configuration space, causality space, goal space and status space is proposed. The authors examined the appropriateness of this model and developed a prototype system to support knowledge formation by visualizing the operators' knowledge model and the decision-making process in knowledge-based actions with this model on a software system. Finally, the feasibility of this prototype as a supportive method in operator education/training to enhance operators' ability in knowledge-based performance has been evaluated. (author)

  14. Test and Evaluation of the Malicious Activity Simulation Tool (MAST) in a Local Area Network (LAN) Running the Common PC Operating System Environment (COMPOSE)

    Science.gov (United States)

    2013-09-01

    ...able to run its simulations on the same network without causing a reduction in the network's operational readiness or availability. We discuss this

  15. Operation and performance of the ATLAS Level-1 Calorimeter and Level-1 Topological Triggers in Run 2 at the LHC

    CERN Document Server

    Whalen, Kate; The ATLAS collaboration

    2017-01-01

    In Run 2 at CERN's Large Hadron Collider, the ATLAS detector uses a two-level trigger system to reduce the event rate from the nominal collision rate of 40 MHz to the event storage rate of 1 kHz, while preserving interesting physics events. The first step of the trigger system, Level-1, reduces the event rate to 100 kHz with a latency of less than 2.5 μs. One component of this system is the Level-1 Calorimeter Trigger (L1Calo), which uses coarse-granularity information from the electromagnetic and hadronic calorimeters to identify regions of interest corresponding to electrons, photons, taus, jets, and large amounts of transverse energy and missing transverse energy. In this talk, we will discuss the improved performance of the L1Calo system in the challenging, high-luminosity conditions provided by the LHC in Run 2. As the LHC exceeds its design luminosity, it is becoming even more critical to reduce event rates while preserving physics. A new feature of the ATLAS trigger system for Run 2 is the Level-1 Top...

  16. Study on modeling of operator's learning mechanism

    International Nuclear Information System (INIS)

    Yoshimura, Seichi; Hasegawa, Naoko

    1998-01-01

    One effective method to analyze the causes of human errors is to model human behavior and simulate it. The Central Research Institute of Electric Power Industry (CRIEPI) has developed an operator team behavior simulation system called SYBORG (Simulation System for the Behavior of an Operating Group) to analyze human errors and to establish countermeasures for them. As the operator behavior model that composes SYBORG has no learning mechanism and its plant knowledge is fixed, it cannot take suitable actions when unknown situations occur, nor can it learn anything from experience. However, considering actual operators, learning is an essential human factor for enhancing their ability to diagnose plant anomalies. In this paper, Q-learning with 1/f fluctuation was proposed as an operator learning mechanism, and a simulation using the mechanism was conducted. The results showed the effectiveness of the learning mechanism. (author)
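
    The abstract does not specify how the 1/f fluctuation enters the learning rule; one plausible reading, sketched below with a toy environment, is to let an approximate 1/f (pink) noise signal modulate the exploration of a tabular Q-learning agent. The environment, parameter values and noise generator are illustrative assumptions, not the SYBORG implementation.

```python
import numpy as np

rng = np.random.default_rng(3)

def pink_noise(n, n_octaves=8):
    """Approximate 1/f noise by summing white-noise sources held for doubling durations."""
    rows = rng.normal(size=(n_octaves, n))
    out = np.zeros(n)
    for k in range(n_octaves):
        held = np.repeat(rows[k, : n // 2 ** k + 1], 2 ** k)[:n]
        out += held
    return out / np.sqrt(n_octaves)

# Tiny deterministic environment: states 0..4 in a line, goal (reward) at state 4
n_states, n_actions = 5, 2          # actions: 0 = left, 1 = right
Q = np.zeros((n_states, n_actions))
alpha, gamma, steps = 0.1, 0.9, 5000
noise = pink_noise(steps)

state = 0
for t in range(steps):
    # 1/f-fluctuating exploration: the noise amplitude perturbs the greedy choice over time
    action = int(np.argmax(Q[state] + 0.5 * noise[t] * rng.normal(size=n_actions)))
    next_state = min(state + 1, n_states - 1) if action == 1 else max(state - 1, 0)
    reward = 1.0 if next_state == n_states - 1 else 0.0
    Q[state, action] += alpha * (reward + gamma * Q[next_state].max() - Q[state, action])
    state = 0 if next_state == n_states - 1 else next_state

print(np.round(Q, 2))   # right-moving actions should end up with the higher values
```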

  17. Relaxed memory models: an operational approach

    OpenAIRE

    Boudol , Gérard; Petri , Gustavo

    2009-01-01

    International audience; Memory models define an interface between programs written in some language and their implementation, determining which behaviour the memory (and thus a program) is allowed to have in a given model. A minimal guarantee memory models should provide to the programmer is that well-synchronized, that is, data-race free code has a standard semantics. Traditionally, memory models are defined axiomatically, setting constraints on the order in which memory operations are allow...

  18. The Launch Systems Operations Cost Model

    Science.gov (United States)

    Prince, Frank A.; Hamaker, Joseph W. (Technical Monitor)

    2001-01-01

    One of NASA's primary missions is to reduce the cost of access to space while simultaneously increasing safety. A key component, and one of the least understood, is the recurring operations and support cost for reusable launch systems. In order to predict these costs, NASA, under the leadership of the Independent Program Assessment Office (IPAO), has commissioned the development of a Launch Systems Operations Cost Model (LSOCM). LSOCM is a tool to predict the operations & support (O&S) cost of new and modified reusable (and partially reusable) launch systems. The requirements are to predict the non-recurring cost for the ground infrastructure and the recurring cost of maintaining that infrastructure, performing vehicle logistics, and performing the O&S actions to return the vehicle to flight. In addition, the model must estimate the time required to cycle the vehicle through all of the ground processing activities. The current version of LSOCM is an amalgamation of existing tools, leveraging our understanding of shuttle operations cost with a means of predicting how the maintenance burden will change as the vehicle becomes more aircraft-like. The use of the Conceptual Operations Manpower Estimating Tool/Operations Cost Model (COMET/OCM) provides a solid point of departure based on shuttle and expendable launch vehicle (ELV) experience. The incorporation of the Reliability and Maintainability Analysis Tool (RMAT) as expressed by a set of response surface model equations gives a method for estimating how changing launch system characteristics affects cost and cycle time as compared to today's shuttle system. Plans are being made to improve the model. The development team will be spending the next few months devising a structured methodology that will enable verified and validated algorithms to give accurate cost estimates. To assist in this endeavor, the LSOCM team is part of an Agency-wide effort to combine resources with other cost and operations professionals to

  19. An Ionospheric Metric Study Using Operational Models

    Science.gov (United States)

    Sojka, J. J.; Schunk, R. W.; Thompson, D. C.; Scherliess, L.; Harris, T. J.

    2006-12-01

    One of the outstanding challenges in upgrading ionospheric operational models is quantifying their improvement. This challenge is not necessarily an absolute accuracy one, but rather answering the question, "Is the newest operational model an improvement over its predecessor under operational scenarios?" There are few documented cases where ionospheric models are compared either with each other or against "ground truth". For example, a CEDAR workshop team, PRIMO, spent almost a decade carrying out a model comparison with ionosonde and incoherent scatter radar measurements from the Millstone Hill, Massachusetts location [Anderson et al., 1998]. The result of this study was that all models were different and specific conditions could be found when each was the "best" model. Similarly, a National Space Weather Metrics ionospheric challenge was held and results were presented at a National Space Weather meeting. The results were again found to be open to interpretation, and issues with the value of the specific metrics were raised (Fuller-Rowell, private communication, 2003). Hence, unlike the tropospheric weather community, who have established metrics and exercised them on new models over many decades to quantify improvement, the ionospheric community has not yet settled on a metric of both scientific and operational value. We report on a study in which metrics were used to compare various forms of the International Reference Ionosphere (IRI), the Ionospheric Forecast Model (IFM), and the Utah State University Global Assimilation of Ionospheric Measurements Model (USU-GAIM) models. The ground truth for this study was a group of 11 ionosonde data sets taken between 20 March and 19 April 2004. The metric parameter was the ionosphere's critical frequency. The metric was referenced to the IRI. Hence, the study addressed the specific question of what improvement the IFM and USU-GAIM models offer over the IRI. Both strengths (improvements) and weaknesses of these models are discussed.
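
    The record names the metric parameter (the critical frequency) and the reference model (the IRI) but not the metric formula itself. A minimal sketch, assuming a conventional skill score of the form 1 − RMSE(model)/RMSE(reference) evaluated against ionosonde observations, could look like the following; the foF2 values are hypothetical.

```python
import numpy as np

def rmse(pred, obs):
    pred, obs = np.asarray(pred, float), np.asarray(obs, float)
    return float(np.sqrt(np.mean((pred - obs) ** 2)))

def skill_vs_reference(model_fof2, reference_fof2, ionosonde_fof2):
    """Skill of a candidate model relative to a reference model (e.g. the IRI).

    Positive values mean the candidate beats the reference; negative means it is worse.
    """
    return 1.0 - rmse(model_fof2, ionosonde_fof2) / rmse(reference_fof2, ionosonde_fof2)

# Hypothetical hourly foF2 values (MHz) at one ionosonde station
obs  = [5.1, 5.4, 6.0, 6.8, 7.2, 6.9]
iri  = [4.8, 5.0, 5.7, 6.2, 6.8, 6.6]   # reference model
gaim = [5.0, 5.3, 5.9, 6.7, 7.1, 6.8]   # assimilative model being evaluated
print(f"skill relative to IRI: {skill_vs_reference(gaim, iri, obs):+.2f}")
```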

  20. Diel activity patterns of juvenile late fall-run Chinook salmon with implications for operation of a gated water diversion in the Sacramento–San Joaquin River Delta

    Science.gov (United States)

    Plumb, John M.; Adams, Noah S.; Perry, Russell W.; Holbrook, Christopher; Romine, Jason G.; Blake, Aaron R.; Burau, Jon R.

    2016-01-01

    In the Sacramento-San Joaquin River Delta, California, tidal forces that reverse river flows increase the proportion of water and juvenile late fall-run Chinook salmon diverted into a network of channels that were constructed to support agriculture and human consumption. This area is known as the interior delta, and it has been associated with poor fish survival. Under the rationale that the fish will be diverted in proportion to the amount of water that is diverted, the Delta Cross Channel (DCC) has been prescriptively closed during the winter out-migration to reduce fish entrainment and mortality into the interior delta. The fish are thought to migrate mostly at night, and so daytime operation of the DCC may allow for water diversion that minimizes fish entrainment and mortality. To assess this, the DCC gate was experimentally opened and closed while we released 2983 fish tagged with acoustic transmitters upstream of the DCC to monitor their arrival and entrainment into the DCC. We used logistic regression to model night-time arrival and entrainment probabilities with covariates that included the proportion of each diel period with upstream flow, flow, rate of change in flow and water temperature. The proportion of time with upstream flow was the most important driver of night-time arrival probability, yet river flow had the largest effect on fish entrainment into the DCC. Modelling results suggest that opening the DCC during daytime while keeping the DCC closed during night-time may allow for water diversion that minimizes fish entrainment into the interior delta.
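
    A minimal sketch of the kind of logistic regression described above, using statsmodels with synthetic tag-detection data; the column names, covariate values, and the daytime-opening scenario are hypothetical stand-ins for the study's acoustic-telemetry dataset.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Hypothetical tag-detection data: one row per tagged fish arriving at the DCC
rng = np.random.default_rng(1)
n = 500
df = pd.DataFrame({
    "entrained": rng.integers(0, 2, n),          # 1 = fish entered the DCC
    "prop_upstream_flow": rng.uniform(0, 1, n),  # proportion of diel period with reversed flow
    "flow": rng.normal(250, 60, n),              # river discharge (m^3/s)
    "dflow_dt": rng.normal(0, 10, n),            # rate of change in flow
    "temp": rng.normal(10, 2, n),                # water temperature (deg C)
})

X = sm.add_constant(df[["prop_upstream_flow", "flow", "dflow_dt", "temp"]])
model = sm.Logit(df["entrained"], X).fit(disp=False)
print(model.params)

# Predicted entrainment probability for a hypothetical daytime-opening scenario
scenario = pd.DataFrame({"const": [1.0], "prop_upstream_flow": [0.1],
                         "flow": [300.0], "dflow_dt": [0.0], "temp": [9.0]})
print(model.predict(scenario))
```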

  1. Run-of-River Impoundments Can Remain Unfilled While Transporting Gravel Bedload: Numerical Modeling Results

    Science.gov (United States)

    Pearson, A.; Pizzuto, J. E.

    2015-12-01

    Previous work at run-of-river (ROR) dams in northern Delaware has shown that bedload supplied to ROR impoundments can be transported over the dam when impoundments remain unfilled. Transport is facilitated by high levels of sand in the impoundment that lower the critical shear stresses for particle entrainment, and an inversely sloping sediment ramp connecting the impoundment bed (where the water depth is typically equal to the dam height) with the top of the dam (Pearson and Pizzuto, in press). We demonstrate with one-dimensional bed material transport modeling that bed material can move through impoundments and that equilibrium transport (i.e., a balance between supply to and export from the impoundment, with a constant bed elevation) is possible even when the bed elevation is below the top of the dam. Based on our field work and previous HEC-RAS modeling, we assess bed material transport capacity at the base of the sediment ramp (and ignore detailed processes carrying sediment up the ramp and over the dam). The hydraulics at the base of the ramp are computed using a weir equation, providing estimates of water depth, velocity, and friction, based on the discharge and sediment grain size distribution of the impoundment. Bedload transport rates are computed using the Wilcock-Crowe equation, and changes in the impoundment's bed elevation are determined by sediment continuity. Our results indicate that impoundments pass the gravel supplied from upstream with deep pools when gravel supply rate is low, gravel grain sizes are relatively small, sand supply is high, and discharge is high. Conversely, impoundments will tend to fill their pools when gravel supply rate is high, gravel grain sizes are relatively large, sand supply is low, and discharge is low. The rate of bedload supplied to an impoundment is the primary control on how fast equilibrium transport is reached, with discharge having almost no influence on the timing of equilibrium.
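
    A minimal sketch of the bed-elevation bookkeeping described above, with a weir relation for the hydraulics at the base of the ramp and sediment continuity (an Exner-type update) for the impoundment bed. For brevity it substitutes a Meyer-Peter and Mueller-type transport law for the Wilcock-Crowe equation used by the authors, and all parameter values are hypothetical; with the low supply rate chosen here the pool stays unfilled while passing the supplied load, whereas a supply exceeding the transport capacity at the pool bed would drive aggradation toward the crest.

```python
import numpy as np

def weir_depth(q_unit, dam_height, bed_elev, g=9.81, Cd=0.6):
    """Approximate flow depth at the base of the ramp from a broad-crested weir relation (sketch only)."""
    head = (q_unit / (Cd * np.sqrt(g))) ** (2.0 / 3.0)   # head above the dam crest
    return dam_height - bed_elev + head

def mpm_transport(depth, slope, d50, rho=1000.0, rho_s=2650.0, g=9.81):
    """Meyer-Peter & Mueller-type bedload rate per unit width (m^2/s).
    Used here in place of the Wilcock-Crowe relation for brevity."""
    tau = rho * g * depth * slope
    tau_star = tau / ((rho_s - rho) * g * d50)
    excess = max(tau_star - 0.047, 0.0)
    return 8.0 * excess ** 1.5 * np.sqrt((rho_s / rho - 1.0) * g * d50 ** 3)

# Hypothetical impoundment: march the bed elevation forward with the Exner equation
dam_height, length, porosity = 2.0, 100.0, 0.4
bed = 0.0            # fill thickness above the pre-dam pool floor (m)
qs_in = 2e-5         # upstream bedload supply per unit width (m^2/s)
dt = 3600.0
for _ in range(24 * 365):
    depth = weir_depth(q_unit=1.0, dam_height=dam_height, bed_elev=bed)
    capacity = mpm_transport(depth, slope=0.002, d50=0.02)
    qs_out = capacity if bed > 0.0 else min(capacity, qs_in)  # supply-limited when no fill is stored
    bed += dt * (qs_in - qs_out) / ((1.0 - porosity) * length)
    bed = min(max(bed, 0.0), dam_height)
print(f"fill thickness after one year: {bed:.2f} m (dam height {dam_height} m)")
```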

  2. EMMA model: an advanced operational mesoscale air quality model for urban and regional environments

    International Nuclear Information System (INIS)

    Jose, R.S.; Rodriguez, M.A.; Cortes, E.; Gonzalez, R.M.

    1999-01-01

    Mesoscale air quality models are an important tool to forecast and analyse the air quality in regional and urban areas. In recent years an increased interest has been shown by decision makers in these types of software tools. The complexity of such a model has grown exponentially with the increase of computer power. Nowadays, medium workstations can run operational versions of these modelling systems successfully. Presents a complex mesoscale air quality model which has been installed in the Environmental Office of the Madrid community (Spain) in order to forecast accurately the ozone, nitrogen dioxide and sulphur dioxide air concentrations in a 3D domain centred on Madrid city. Describes the challenging scientific matters to be solved in order to develop an operational version of the atmospheric mesoscale numerical pollution model for urban and regional areas (ANA). Some encouraging results have been achieved in the attempts to improve the accuracy of the predictions made by the version already installed. (Author)

  3. Multiple-step model-experiment matching allows precise definition of dynamical leg parameters in human running.

    Science.gov (United States)

    Ludwig, C; Grimmer, S; Seyfarth, A; Maus, H-M

    2012-09-21

    The spring-loaded inverted pendulum (SLIP) model is a well established model for describing bouncy gaits like human running. The notion of spring-like leg behavior has led many researchers to compute the corresponding parameters, predominantly stiffness, in various experimental setups and in various ways. However, different methods yield different results, making the comparison between studies difficult. Further, a model simulation with experimentally obtained leg parameters typically results in comparatively large differences between model and experimental center of mass trajectories. Here, we pursue the opposite approach, which is to calculate model parameters that allow the reproduction of an experimental sequence of steps. In addition, to capture energy fluctuations, an extension of the SLIP (ESLIP) is required and presented. The excellent match of the models with the experiment validates the description of human running by the SLIP with the obtained parameters which we hence call dynamical leg parameters. Copyright © 2012 Elsevier Ltd. All rights reserved.
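
    For readers unfamiliar with the template, a minimal sketch of the standard SLIP model (not the authors' ESLIP extension or their multiple-step fitting procedure) is given below: ballistic flight and a linear leg spring during stance, with event-based switching at touchdown and takeoff. The mass, stiffness, leg length, angle of attack, and initial state are illustrative and are not the fitted dynamical leg parameters of the paper; unstable combinations simply raise an error instead of producing a gait.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Hypothetical SLIP parameters (illustrative only)
m, k, L0, g = 80.0, 20000.0, 1.0, 9.81    # mass (kg), leg stiffness (N/m), rest leg length (m)
alpha = np.radians(68.0)                   # angle of attack at touchdown

def flight(t, y):
    x, z, vx, vz = y
    return [vx, vz, 0.0, -g]

def stance(t, y, x_foot):
    x, z, vx, vz = y
    dx, dz = x - x_foot, z
    L = np.hypot(dx, dz)
    a = k * (L0 - L) / m                   # spring acceleration directed along the leg
    return [vx, vz, a * dx / L, a * dz / L - g]

def touchdown(t, y):                       # hip height reaches L0*sin(alpha) while descending
    return y[1] - L0 * np.sin(alpha)
touchdown.terminal, touchdown.direction = True, -1

def takeoff(t, y, x_foot):                 # leg returns to its rest length
    return np.hypot(y[0] - x_foot, y[1]) - L0
takeoff.terminal, takeoff.direction = True, 1

def run_one_step(state):
    """One flight + stance cycle of the SLIP model; returns the state at takeoff."""
    fl = solve_ivp(flight, (0, 2), state, events=touchdown, max_step=1e-3)
    if fl.t_events[0].size == 0:
        raise RuntimeError("no touchdown: the model fell or the parameters are unstable")
    y_td = fl.y[:, -1]
    x_foot = y_td[0] + L0 * np.cos(alpha)
    st = solve_ivp(stance, (0, 2), y_td, args=(x_foot,), events=takeoff, max_step=1e-3)
    if st.t_events[0].size == 0:
        raise RuntimeError("no takeoff: the model fell or the parameters are unstable")
    return st.y[:, -1]

state = np.array([0.0, 1.0, 5.0, 0.0])     # x, z, vx, vz near apex
for i in range(3):
    state = run_one_step(state)
    print(f"step {i + 1}: takeoff at x = {state[0]:.2f} m, "
          f"speed = {np.hypot(state[2], state[3]):.2f} m/s")
```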

  4. Modeling for operational event risk assessment

    International Nuclear Information System (INIS)

    Sattison, M.B.

    1997-01-01

    The U.S. Nuclear Regulatory Commission has been using risk models to evaluate the risk significance of operational events in U.S. commercial nuclear power plants for more than seventeen years. During that time, the models have evolved in response to the advances in risk assessment technology and insights gained with experience. Evaluation techniques fall into two categories, initiating event assessments and condition assessments. The models used for these analyses have become uniquely specialized for just this purpose

  5. A simple running model with rolling contact and its role as a template for dynamic locomotion on a hexapod robot.

    Science.gov (United States)

    Huang, Ke-Jung; Huang, Chun-Kai; Lin, Pei-Chun

    2014-10-07

    We report on the development of a robot's dynamic locomotion based on a template which fits the robot's natural dynamics. The developed template is a low degree-of-freedom planar model for running with rolling contact, which we call rolling spring loaded inverted pendulum (R-SLIP). Originating from a reduced-order model of the RHex-style robot with compliant circular legs, the R-SLIP model also acts as the template for general dynamic running. The model has a torsional spring and a large circular arc as the distributed foot, so during locomotion it rolls on the ground with varied equivalent linear stiffness. This differs from the well-known spring loaded inverted pendulum (SLIP) model with fixed stiffness and ground contact points. Through dimensionless steps-to-fall and return map analysis, within a wide range of parameter spaces, the R-SLIP model is revealed to have self-stable gaits and a larger stability region than that of the SLIP model. The R-SLIP model is then embedded as the reduced-order 'template' in a more complex 'anchor', the RHex-style robot, via various mapping definitions between the template and the anchor. Experimental validation confirms that by merely deploying the stable running gaits of the R-SLIP model on the empirical robot with simple open-loop control strategy, the robot can easily initiate its dynamic running behaviors with a flight phase and can move with similar body state profiles to those of the model, in all five testing speeds. The robot, embedded with the SLIP model but performing walking locomotion, further confirms the importance of finding an adequate template of the robot for dynamic locomotion.

  6. A simple running model with rolling contact and its role as a template for dynamic locomotion on a hexapod robot

    International Nuclear Information System (INIS)

    Huang, Ke-Jung; Huang, Chun-Kai; Lin, Pei-Chun

    2014-01-01

    We report on the development of a robot’s dynamic locomotion based on a template which fits the robot’s natural dynamics. The developed template is a low degree-of-freedom planar model for running with rolling contact, which we call rolling spring loaded inverted pendulum (R-SLIP). Originating from a reduced-order model of the RHex-style robot with compliant circular legs, the R-SLIP model also acts as the template for general dynamic running. The model has a torsional spring and a large circular arc as the distributed foot, so during locomotion it rolls on the ground with varied equivalent linear stiffness. This differs from the well-known spring loaded inverted pendulum (SLIP) model with fixed stiffness and ground contact points. Through dimensionless steps-to-fall and return map analysis, within a wide range of parameter spaces, the R-SLIP model is revealed to have self-stable gaits and a larger stability region than that of the SLIP model. The R-SLIP model is then embedded as the reduced-order ‘template’ in a more complex ‘anchor’, the RHex-style robot, via various mapping definitions between the template and the anchor. Experimental validation confirms that by merely deploying the stable running gaits of the R-SLIP model on the empirical robot with simple open-loop control strategy, the robot can easily initiate its dynamic running behaviors with a flight phase and can move with similar body state profiles to those of the model, in all five testing speeds. The robot, embedded with the SLIP model but performing walking locomotion, further confirms the importance of finding an adequate template of the robot for dynamic locomotion. (paper)

  7. Renormalization Group Evolution of the Standard Model Dimension Six Operators I: Formalism and lambda Dependence

    CERN Document Server

    Jenkins, Elizabeth E; Trott, Michael

    2013-01-01

    We calculate the order λ, λ² and λy² terms of the 59 × 59 one-loop anomalous dimension matrix of dimension-six operators, where λ and y are the Standard Model Higgs self-coupling and a generic Yukawa coupling, respectively. The dimension-six operators modify the running of the Standard Model parameters themselves, and we compute the complete one-loop result for this. We discuss how there is mixing between operators for which no direct one-particle-irreducible diagram exists, due to operator replacements by the equations of motion.
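
    As a schematic reminder of the structure being computed (the explicit 59 × 59 entries and their λ coefficients are given in the paper itself, so only the generic one-loop form is reproduced here, with κ_i standing in for those coefficients):

```latex
% Generic one-loop renormalization group structure (schematic; entries from the paper)
\mu \frac{dC_i}{d\mu} = \frac{1}{16\pi^2}\,\gamma_{ij}\,C_j ,
\qquad
\gamma_{ij} = \lambda\,\gamma^{(\lambda)}_{ij}
            + \lambda^2\,\gamma^{(\lambda^2)}_{ij}
            + \lambda y^2\,\gamma^{(\lambda y^2)}_{ij} + \dots

% Dimension-six operators also feed back into the running of the SM parameters,
% schematically suppressed by 1/\Lambda^2:
\mu \frac{d\lambda}{d\mu}\bigg|_{\text{SMEFT}}
  = \mu \frac{d\lambda}{d\mu}\bigg|_{\text{SM}}
  + \frac{m_H^2}{16\pi^2\,\Lambda^2}\sum_i \kappa_i\,C_i .
```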

  8. Dark Matter Benchmark Models for Early LHC Run-2 Searches. Report of the ATLAS/CMS Dark Matter Forum

    Energy Technology Data Exchange (ETDEWEB)

    Abercrombie, Daniel [Massachusetts Inst. of Technology (MIT), Cambridge, MA (United States). et al.

    2015-07-06

    One of the guiding principles of this report is to channel the efforts of the ATLAS and CMS collaborations towards a minimal basis of dark matter models that should influence the design of the early Run-2 searches. At the same time, a thorough survey of realistic collider signals of Dark Matter is a crucial input to the overall design of the search program.

  9. Physically based dynamic run-out modelling for quantitative debris flow risk assessment: a case study in Tresenda, northern Italy

    Czech Academy of Sciences Publication Activity Database

    Quan Luna, B.; Blahůt, Jan; Camera, C.; Van Westen, C.; Apuani, T.; Jetten, V.; Sterlacchini, S.

    2014-01-01

    Roč. 72, č. 3 (2014), s. 645-661 ISSN 1866-6280 Institutional support: RVO:67985891 Keywords : debris flow * FLO-2D * run-out * quantitative hazard and risk assessment * vulnerability * numerical modelling Subject RIV: DB - Geology ; Mineralogy Impact factor: 1.765, year: 2014

  10. On the duality between long-run relations and common trends in the I(1) versus I(2) model

    DEFF Research Database (Denmark)

    Juselius, Katarina

    1994-01-01

    Long-run relations and common trends are discussed in terms of the multivariate cointegration model given in the autoregressive and the moving average form. The basic results needed for the analysis of I(1) and I(2) processes are reviewed and the results applied to Danish monetary data. The test...

  11. An integrated model to assess critical rain fall thresholds for the critical run-out distances of debris flows

    NARCIS (Netherlands)

    van Asch, Th.W.J.|info:eu-repo/dai/nl/304839558; Tang, C.; Alkema, D.; Zhu, J.; Zhou, W.

    2013-01-01

    A dramatic increase in debris flows occurred in the years after the 2008 Wenchuan earthquake in SW China due to the deposition of loose co-seismic landslide material. This paper proposes a preliminary integrated model, which describes the relationship between rain input and debris flow run-out in

  12. Modeling grain size adjustments in the downstream reach following run-of-river development

    Science.gov (United States)

    Fuller, Theodore K.; Venditti, Jeremy G.; Nelson, Peter A.; Palen, Wendy J.

    2016-04-01

    Disruptions to sediment supply continuity caused by run-of-river (RoR) hydropower development have the potential to cause downstream changes in surface sediment grain size which can influence the productivity of salmon habitat. The most common approach to understanding the impacts of RoR hydropower is to study channel changes in the years following project development, but by then, any impacts are manifest and difficult to reverse. Here we use a more proactive approach, focused on predicting impacts in the project planning stage. We use a one-dimensional morphodynamic model to test the hypothesis that the greatest risk of geomorphic change and impact to salmon habitat from a temporary sediment supply disruption exists where predevelopment sediment supply is high and project design creates substantial sediment storage volume. We focus on the potential impacts in the reach downstream of a powerhouse for a range of development scenarios that are typical of projects developed in the Pacific Northwest and British Columbia. Results indicate that increases in the median bed surface size (D50) are minor if development occurs on low sediment supply streams (<1 mm for supply rates of 1 × 10⁻⁵ m² s⁻¹ or lower), and substantial for development on high sediment supply streams (8-30 mm for supply rates between 5.5 × 10⁻⁴ and 1 × 10⁻³ m² s⁻¹). However, high sediment supply streams recover rapidly to the predevelopment surface D50 (~1 year) if sediment supply can be reestablished.

  13. Renormalizations and operator expansion in sigma model

    International Nuclear Information System (INIS)

    Terentyev, M.V.

    1988-01-01

    The operator expansion (OPE) is studied for the Green function at x² → 0 (n(x) is the dynamical field of the σ-model) in the framework of the two-dimensional σ-model with the O(N) symmetry group at large N. As a preliminary step we formulate the renormalization scheme which permits introduction of an arbitrary intermediate scale μ² in the framework of the 1/N expansion, and discuss the factorization (separation) of the small (p < μ) and large (p > μ) momentum regions. It is shown that the definition of composite local operators and coefficient functions figuring in the OPE is unambiguous only in the leading order of the 1/N expansion, when the solutions with an extremum of the action are dominant. Corrections of order f(μ²)/N (here f(μ²) is the effective interaction constant at the point μ²) in composite operators and coefficient functions essentially depend on the factorization method for the high and low momentum regions. It is also shown that contributions to the power corrections of order m²x²f(μ²)/N in the Green function (here m is the dynamical mass-scale factor of the σ-model) arise simultaneously from two sources: from the mean vacuum value of the composite operator n∂²n and from the hard-particle contributions in the coefficient function of the unit operator. Due to the analogy between the σ-model and QCD, the obtained result indicates theoretical limitations to the sum rule method in QCD. (author)

  14. Modeling Operations Costs for Human Exploration Architectures

    Science.gov (United States)

    Shishko, Robert

    2013-01-01

    Operations and support (O&S) costs for human spaceflight have not received the same attention in the cost estimating community as have development costs. This is unfortunate as O&S costs typically comprise a majority of life-cycle costs (LCC) in such programs as the International Space Station (ISS) and the now-cancelled Constellation Program. Recognizing this, the Constellation Program and NASA HQs supported the development of an O&S cost model specifically for human spaceflight. This model, known as the Exploration Architectures Operations Cost Model (ExAOCM), provided the operations cost estimates for a variety of alternative human missions to the moon, Mars, and Near-Earth Objects (NEOs) in architectural studies. ExAOCM is philosophically based on the DoD Architecture Framework (DoDAF) concepts of operational nodes, systems, operational functions, and milestones. This paper presents some of the historical background surrounding the development of the model, and discusses the underlying structure, its unusual user interface, and lastly, previous examples of its use in the aforementioned architectural studies.

  15. Search for non-standard model signatures in the WZ/ZZ final state at CDF run II

    Energy Technology Data Exchange (ETDEWEB)

    Norman, Matthew [Univ. of California, San Diego, CA (United States)

    2009-01-01

    This thesis discusses a search for non-Standard Model physics in heavy diboson production in the dilepton-dijet final state, using 1.9 fb⁻¹ of data from the CDF Run II detector. New limits are set on the anomalous coupling parameters for ZZ and WZ production based on limiting the production cross-section at high ŝ. Additionally, limits are set on the direct decay of new physics to ZZ and WZ diboson pairs. The nature and parameters of the CDF Run II detector are discussed, as are the influences that it has on the methods of our analysis.

  16. Running Away

    Science.gov (United States)

    ... the streets in the United States. Why Kids Run Away Remember how you felt the last time you got in ... how to express angry feelings without violence. Know how to calm yourself down after you're upset. Maybe you need to run around outside, listen to music, draw, or write ...

  17. A multi-state model for wind farms considering operational outage probability

    DEFF Research Database (Denmark)

    Cheng, Lin; Liu, Manjun; Sun, Yuanzhang

    2013-01-01

    developed by considering the following factors: running time, operating environment, operating conditions, and wind speed fluctuations. A multi-state model for wind farms is also established. Numerical results illustrate that the proposed model can be well applied to power system reliability assessment...... power penetration levels. Therefore, a more comprehensive analysis toward WECS as well as an appropriate reliability assessment model are essential for maintaining the reliable operation of power systems. In this paper, the impact of wind turbine outage probability on system reliability is firstly...

  18. An operator model-based filtering scheme

    International Nuclear Information System (INIS)

    Sawhney, R.S.; Dodds, H.L.; Schryer, J.C.

    1990-01-01

    This paper presents a diagnostic model developed at Oak Ridge National Laboratory (ORNL) for off-normal nuclear power plant events. The diagnostic model is intended to serve as an embedded module of a cognitive model of the human operator, one application of which could be to assist control room operators in correctly responding to off-normal events by providing a rapid and accurate assessment of alarm patterns and parameter trends. The sequential filter model comprises two distinct subsystems: an alarm analysis followed by an analysis of interpreted plant signals. During the alarm analysis phase, the alarm pattern is evaluated to generate hypotheses of possible initiating events in order of likelihood of occurrence. Each hypothesis is further evaluated through analysis of the current trends of state variables in order to validate or reject (in the form of an increased or decreased certainty factor) the given hypothesis. 7 refs., 4 figs
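
    A minimal sketch of the two-stage scheme described above: alarm-pattern matching ranks initiating-event hypotheses, and parameter trends then raise or lower each certainty factor. The alarm signatures, events, trend rules, and update step are hypothetical and far simpler than the ORNL model.

```python
# Hypothetical alarm signatures for candidate initiating events
EVENT_ALARMS = {
    "loss_of_feedwater": {"SG_LEVEL_LO", "FW_FLOW_LO"},
    "small_LOCA": {"PRZ_PRESS_LO", "CONT_SUMP_HI"},
    "turbine_trip": {"TURB_TRIP", "SG_PRESS_HI"},
}

# Hypothetical expected parameter trends for each event (+1 rising, -1 falling)
EVENT_TRENDS = {
    "loss_of_feedwater": {"sg_level": -1, "przr_pressure": +1},
    "small_LOCA": {"przr_pressure": -1, "containment_sump": +1},
    "turbine_trip": {"sg_pressure": +1, "przr_pressure": +1},
}

def rank_hypotheses(active_alarms):
    """Stage 1: score each initiating-event hypothesis by alarm-pattern overlap."""
    scores = {event: len(signature & active_alarms) / len(signature)
              for event, signature in EVENT_ALARMS.items()}
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

def update_certainty(hypotheses, observed_trends, step=0.2):
    """Stage 2: raise or lower each certainty factor as trends confirm or contradict it."""
    updated = []
    for event, cf in hypotheses:
        for param, expected in EVENT_TRENDS[event].items():
            if param in observed_trends:
                cf += step if observed_trends[param] == expected else -step
        updated.append((event, max(0.0, min(1.0, cf))))
    return sorted(updated, key=lambda kv: kv[1], reverse=True)

alarms = {"PRZ_PRESS_LO", "CONT_SUMP_HI", "SG_PRESS_HI"}
trends = {"przr_pressure": -1, "containment_sump": +1}
print(update_certainty(rank_hypotheses(alarms), trends))
```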

  19. Model Based Autonomy for Robust Mars Operations

    Science.gov (United States)

    Kurien, James A.; Nayak, P. Pandurang; Williams, Brian C.; Lau, Sonie (Technical Monitor)

    1998-01-01

    Space missions have historically relied upon a large ground staff, numbering in the hundreds for complex missions, to maintain routine operations. When an anomaly occurs, this small army of engineers attempts to identify and work around the problem. A piloted Mars mission, with its multiyear duration, cost pressures, half-hour communication delays and two-week blackouts, cannot be closely controlled by a battalion of engineers on Earth. Flight crew involvement in routine system operations must also be minimized to maximize science return. It also may be unrealistic to require that the crew have the expertise in each mission subsystem needed to diagnose a system failure and effect a timely repair, as engineers did for Apollo 13. Enter model-based autonomy, which allows complex systems to autonomously maintain operation despite failures or anomalous conditions, contributing to safe, robust, and minimally supervised operation of spacecraft, life support, In Situ Resource Utilization (ISRU) and power systems. Autonomous reasoning is central to the approach. A reasoning algorithm uses a logical or mathematical model of a system to infer how to operate the system, diagnose failures and generate appropriate behavior to repair or reconfigure the system in response. The 'plug and play' nature of the models enables low cost development of autonomy for multiple platforms. Declarative, reusable models capture relevant aspects of the behavior of simple devices (e.g. valves or thrusters). Reasoning algorithms combine device models to create a model of the system-wide interactions and behavior of a complex, unique artifact such as a spacecraft. Rather than requiring engineers to anticipate all possible interactions and failures at design time or to perform analysis during the mission, the reasoning engine generates the appropriate response to the current situation, taking into account its system-wide knowledge, the current state, and even sensor failures or unexpected behavior.

  20. Effective operator treatment of the Lipkin model

    International Nuclear Information System (INIS)

    Abraham, K.J.; Vary, J.P.

    2004-01-01

    We analyze the Lipkin model in the strong coupling limit using effective operator techniques. We present both analytical and numerical results for low energy effective Hamiltonians. We investigate the reliability of various approximations used to simplify the nuclear many body problem, such as the cluster approximation. We demonstrate, in explicit examples, certain limits to the validity of the cluster approximation but caution that these limits may be particular to this model where the interactions are of unlimited range

  1. From control to causation: Validating a 'complex systems model' of running-related injury development and prevention.

    Science.gov (United States)

    Hulme, A; Salmon, P M; Nielsen, R O; Read, G J M; Finch, C F

    2017-11-01

    There is a need for an ecological and complex systems approach for better understanding the development and prevention of running-related injury (RRI). In a previous article, we proposed a prototype model of the Australian recreational distance running system, which was based on the Systems Theoretic Accident Model and Processes (STAMP) method. That model included the influence of political, organisational, managerial, and sociocultural determinants alongside individual-level factors in relation to RRI development. The purpose of this study was to validate that prototype model by drawing on the expertise of both systems thinking and distance running experts. This study used a modified Delphi technique involving a series of online surveys (December 2016 to March 2017). The initial survey was divided into four sections containing a total of seven questions pertaining to different features associated with the prototype model. Consensus in opinion about the validity of the prototype model was reached when the number of experts who agreed or disagreed with a survey statement was ≥75% of the total number of respondents. A total of two Delphi rounds was needed to validate the prototype model. Out of a total of 51 experts who were initially contacted, 50.9% (n = 26) completed the first round of the Delphi, and 92.3% (n = 24) of those in the first round participated in the second. Most of the 24 full participants considered themselves to be a running expert (66.7%), and approximately a third indicated their expertise as a systems thinker (33.3%). After the second round, 91.7% of the experts agreed that the prototype model was a valid description of the Australian distance running system. This is the first study to formally examine the development and prevention of RRI from an ecological and complex systems perspective. The validated model of the Australian distance running system facilitates theoretical advancement in terms of identifying practical system

  2. Operation and modeling of the MOS transistor

    CERN Document Server

    Tsividis, Yannis

    2011-01-01

    Operation and Modeling of the MOS Transistor has become a standard in academia and industry. Extensively revised and updated, the third edition of this highly acclaimed text provides a thorough treatment of the MOS transistor - the key element of modern microelectronic chips.

  3. A practical model for sustainable operational performance

    International Nuclear Information System (INIS)

    Vlek, C.A.J.; Steg, E.M.; Feenstra, D.; Gerbens-Leenis, W.; Lindenberg, S.; Moll, H.; Schoot Uiterkamp, A.; Sijtsma, F.; Van Witteloostuijn, A.

    2002-01-01

    By means of a concrete model for sustainable operational performance, enterprises can report uniformly on the sustainability of their contributions to the economy, welfare and the environment. The development and design of a three-dimensional monitoring system is presented and discussed.

  4. Modeling Casualty Sustainment During Peacekeeping Operations

    Science.gov (United States)

    2003-10-09

    Naval Health Research Center, Report No. 03-21. Approved for public release; distribution unlimited.

  5. Business intelligence modeling in launch operations

    Science.gov (United States)

    Bardina, Jorge E.; Thirumalainambi, Rajkumar; Davis, Rodney D.

    2005-05-01

    The future of business intelligence in space exploration will focus on the intelligent system-of-systems real-time enterprise. In present business intelligence, a number of technologies that are most relevant to space exploration are experiencing the greatest change. Emerging patterns of set of processes rather than organizational units leading to end-to-end automation is becoming a major objective of enterprise information technology. The cost element is a leading factor of future exploration systems. This technology project is to advance an integrated Planning and Management Simulation Model for evaluation of risks, costs, and reliability of launch systems from Earth to Orbit for Space Exploration. The approach builds on research done in the NASA ARC/KSC developed Virtual Test Bed (VTB) to integrate architectural, operations process, and mission simulations for the purpose of evaluating enterprise level strategies to reduce cost, improve systems operability, and reduce mission risks. The objectives are to understand the interdependency of architecture and process on recurring launch cost of operations, provide management a tool for assessing systems safety and dependability versus cost, and leverage lessons learned and empirical models from Shuttle and International Space Station to validate models applied to Exploration. The systems-of-systems concept is built to balance the conflicting objectives of safety, reliability, and process strategy in order to achieve long term sustainability. A planning and analysis test bed is needed for evaluation of enterprise level options and strategies for transit and launch systems as well as surface and orbital systems. This environment can also support agency simulation based acquisition process objectives. The technology development approach is based on the collaborative effort set forth in the VTB's integrating operations, process models, systems and environment models, and cost models as a comprehensive disciplined

  6. Business Intelligence Modeling in Launch Operations

    Science.gov (United States)

    Bardina, Jorge E.; Thirumalainambi, Rajkumar; Davis, Rodney D.

    2005-01-01

    This technology project is to advance an integrated Planning and Management Simulation Model for evaluation of risks, costs, and reliability of launch systems from Earth to Orbit for Space Exploration. The approach builds on research done in the NASA ARC/KSC developed Virtual Test Bed (VTB) to integrate architectural, operations process, and mission simulations for the purpose of evaluating enterprise level strategies to reduce cost, improve systems operability, and reduce mission risks. The objectives are to understand the interdependency of architecture and process on recurring launch cost of operations, provide management a tool for assessing systems safety and dependability versus cost, and leverage lessons learned and empirical models from Shuttle and International Space Station to validate models applied to Exploration. The systems-of-systems concept is built to balance the conflicting objectives of safety, reliability, and process strategy in order to achieve long term sustainability. A planning and analysis test bed is needed for evaluation of enterprise level options and strategies for transit and launch systems as well as surface and orbital systems. This environment can also support agency simulation-based acquisition process objectives. The technology development approach is based on the collaborative effort set forth in the VTB's integrating operations, process models, systems and environment models, and cost models as a comprehensive disciplined enterprise analysis environment. Significant emphasis is being placed on adapting root cause from existing Shuttle operations to exploration. Technical challenges include cost model validation, integration of parametric models with discrete event process and systems simulations, and large-scale simulation integration. The enterprise architecture is required for coherent integration of systems models. It will also require a plan for evolution over the life of the program. The proposed technology will produce

  7. Performance and Operation Experience of the ATLAS SemiConductor Tracker in LHC Run 1 (2009-2012)

    CERN Document Server

    Robichaud-Veronneau, A; The ATLAS collaboration

    2013-01-01

    After more than 3 years of successful operation at the LHC, we report on the operation and performance of the Semi-Conductor Tracker (SCT) functioning in a high luminosity, high radiation environment. The SCT is part of the ATLAS experiment at CERN and is constructed of 4088 silicon detector modules for a total of 6.3 million strips. Each module is designed, constructed and tested to operate as a stand-alone unit, mechanically, electrically, optically and thermally. The modules are mounted into two types of structures: one barrel (4 cylinders) and two end-cap systems (9 disks on each end of the barrel). The SCT silicon micro-strip sensors are processed in the planar p-in-n technology. The signals are processed in the front-end ABCD3TA ASICs, which use a binary readout architecture. Data is transferred to the off-detector readout electronics via optical fibers. We find 99.3% of the SCT modules are operational, noise occupancy and hit efficiency exceed the design specifications; the alignment is very close to t...

  8. Launching applications on compute and service processors running under different operating systems in scalable network of processor boards with routers

    Science.gov (United States)

    Tomkins, James L [Albuquerque, NM; Camp, William J [Albuquerque, NM

    2009-03-17

    A multiple processor computing apparatus includes a physical interconnect structure that is flexibly configurable to support selective segregation of classified and unclassified users. The physical interconnect structure also permits easy physical scalability of the computing apparatus. The computing apparatus can include an emulator which permits applications from the same job to be launched on processors that use different operating systems.

  9. Dezenflasyon Sürecinde Türkiye’de Enflasyonun Uzun ve Kısa Dönem Dinamiklerinin Modellenmesi (Modelling The Long Run and The Short Run Dynamics of Inflation In The Disinflation Process In Turkey)

    Directory of Open Access Journals (Sweden)

    Macide ÇİÇEK

    2005-01-01

    In this study, the Expectations-Augmented Phillips Curve Model is employed to investigate the link between inflation and unit labor costs, the output gap (a proxy for demand shocks), the real exchange rate (a proxy for supply shocks) and price expectations for Turkey, using monthly data from 2000:01 to 2004:12. The methodology employed in this paper uses unit root tests, the Johansen cointegration test to examine the existence of possible long-run relationships among the variables included in the model, and a single-equation error correction model for the inflation equation, estimated by OLS, to examine the short-run dynamics of inflation. It is found that in the long run, the mark-up behaviour of output prices over unit labor costs is the main cause of inflation, the real exchange rate has a rather large impact on reducing inflation, and demand shocks do not lead to an increase in prices. The short-run dynamics of the inflation equation indicate that supply shocks are the determinant of inflation in the short run. It is also found that the exchange rate is the variable that triggers an inflation adjustment most rapidly in the short run.
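
    A minimal sketch of the two-step procedure described above (a Johansen trace test for the long run, then a single-equation error correction model estimated by OLS for the short run), using statsmodels on synthetic monthly series; the variables, the assumed long-run relation, and the lag structure are illustrative stand-ins for the Turkish data.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.tsa.vector_ar.vecm import coint_johansen

# Synthetic monthly stand-ins for the series used in the paper (log levels)
rng = np.random.default_rng(0)
n = 60
trend = np.cumsum(rng.normal(0.01, 0.02, n))           # shared stochastic trend
data = pd.DataFrame({
    "p":    trend + rng.normal(0, 0.01, n),            # price level
    "ulc":  trend + rng.normal(0, 0.01, n),            # unit labor costs
    "reer": -0.5 * trend + rng.normal(0, 0.01, n),     # real exchange rate
})

# Long run: Johansen trace test for the number of cointegrating relations
jres = coint_johansen(data, det_order=0, k_ar_diff=1)
print("trace statistics:", jres.lr1)
print("5% critical values:", jres.cvt[:, 1])

# Short run: single-equation error correction model for inflation, estimated by OLS
ect = data["p"] - data["ulc"]                          # assumed long-run relation (illustrative)
d = data.diff().dropna()
X = sm.add_constant(pd.DataFrame({
    "ect_lag": ect.shift(1).loc[d.index],              # error correction term at t-1
    "d_ulc": d["ulc"],
    "d_reer": d["reer"],
}))
ecm = sm.OLS(d["p"], X).fit()
print(ecm.params)
```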

  10. The national operational environment model (NOEM)

    Science.gov (United States)

    Salerno, John J.; Romano, Brian; Geiler, Warren

    2011-06-01

    The National Operational Environment Model (NOEM) is a strategic analysis/assessment tool that provides insight into the complex state space (as a system) that is today's modern operational environment. The NOEM supports baseline forecasts by generating plausible futures based on the current state. It supports what-if analysis by forecasting ramifications of potential "Blue" actions on the environment. The NOEM also supports sensitivity analysis by identifying possible pressure (leverage) points in support of the Commander that resolves forecasted instabilities, and by ranking sensitivities in a list for each leverage point and response. The NOEM can be used to assist Decision Makers, Analysts and Researchers with understanding the inter-workings of a region or nation state, the consequences of implementing specific policies, and the ability to plug in new operational environment theories/models as they mature. The NOEM is built upon an open-source, license-free set of capabilities, and aims to provide support for pluggable modules that make up a given model. The NOEM currently has an extensive number of modules (e.g. economic, security & social well-being pieces such as critical infrastructure) completed along with a number of tools to exercise them. The focus this year is on modeling the social and behavioral aspects of a populace within their environment, primarily the formation of various interest groups, their beliefs, their requirements, their grievances, their affinities, and the likelihood of a wide range of their actions, depending on their perceived level of security and happiness. As such, several research efforts are currently underway to model human behavior from a group perspective, in the pursuit of eventual integration and balance of populace needs/demands within their respective operational environment and the capacity to meet those demands. In this paper we will provide an overview of the NOEM, the need for and a description of its main components

  11. EMD-Based Methodology for the Identification of a High-Speed Train Running in a Gear Operating State

    Directory of Open Access Journals (Sweden)

    Alejandro Bustos

    2018-03-01

    Efficient maintenance is a key consideration in railway transport systems, especially in high-speed trains, in order to avoid accidents with catastrophic consequences. In this sense, having a method that allows for the early detection of defects in critical elements, such as the bogie mechanical components, is crucial for increasing the availability of rolling stock and reducing maintenance costs. The main contribution of this work is the proposal of a methodology that, based on classical signal processing techniques, provides a set of parameters for the fast identification of the operating state of a critical mechanical system. With this methodology, the vibratory behaviour of a very complex mechanical system is characterised through variable inputs, which will allow for the detection of possible changes in the mechanical elements. This methodology is applied to a real high-speed train in commercial service, with the aim of studying the vibratory behaviour of the train (specifically, the bogie) before and after a maintenance operation. The results obtained with this methodology demonstrated the usefulness of the new procedure and allowed for the disclosure of reductions between 15% and 45% in the spectral power of selected Intrinsic Mode Functions (IMFs) after the maintenance operation.
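
    A minimal sketch of the IMF spectral-power comparison described above, assuming the third-party PyEMD package (distributed as EMD-signal) for the empirical mode decomposition; the synthetic "bogie" signal, the frequencies, and the before/after defect levels are hypothetical, so the printed reductions only illustrate the bookkeeping, not the paper's 15-45% result.

```python
import numpy as np
from PyEMD import EMD   # assumes the third-party EMD-signal / PyEMD package is installed

fs = 1000.0                                   # sampling rate (Hz)
t = np.arange(0, 2.0, 1.0 / fs)

def synthetic_bogie_signal(defect_level):
    """Hypothetical vibration signal: gear-mesh tone + defect-related tone + noise."""
    rng = np.random.default_rng(0)
    return (np.sin(2 * np.pi * 35 * t)
            + defect_level * np.sin(2 * np.pi * 180 * t)
            + 0.3 * rng.standard_normal(t.size))

def imf_spectral_power(signal, n_imfs=4):
    """Decompose into IMFs and return the mean power of the first few
    (equal to integrated spectral power by Parseval's theorem)."""
    imfs = EMD().emd(signal)[:n_imfs]
    return np.array([np.mean(imf ** 2) for imf in imfs])

before = imf_spectral_power(synthetic_bogie_signal(defect_level=1.0))
after = imf_spectral_power(synthetic_bogie_signal(defect_level=0.5))
reduction = 100.0 * (before - after) / before
for i, r in enumerate(reduction, start=1):
    print(f"IMF {i}: spectral power change after 'maintenance' = {r:+.1f}%")
```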

  12. Multi-response optimization of diesel engine operating parameters running with water-in-diesel emulsion fuel

    Directory of Open Access Journals (Sweden)

    Vellaiyan Suresh

    2017-01-01

    Water-in-diesel emulsion fuel is a promising alternative diesel fuel, which has the potential to promote better performance and emission characteristics in an existing Diesel engine without engine modification and added cost. The key factor that has to be addressed when introducing such a fuel in an existing Diesel engine is the desired engine operating conditions. The present study attempts to address this issue with two phases of experiments. In the first phase, stable water-in-diesel emulsion fuels (5, 10, 15, and 20% water-in-diesel) are prepared and their stability period and physico-chemical properties are measured. In the second phase, experiments are conducted in a single-cylinder, 4-stroke Diesel engine with the prepared water-in-diesel emulsion fuel blends, based on the L16 orthogonal array suggested in Taguchi's quality control concept, to record the output responses (performance and emission levels). Based on the signal-to-noise ratio and grey relational analysis, the optimal levels of the operating factors are determined to obtain a better response and are verified through confirmation experiments. A statistical analysis of variance is applied to measure the significance of individual operating parameters on overall engine performance. Results indicate that the emulsion fuel prepared with Sorbitan monolaurate surfactant at high stirrer speed provides better emulsion stability and acceptable variation in physico-chemical properties. Results of this study also reveal that the optimal parametric setting effectively improves the combustion, performance, and emission characteristics of the Diesel engine.
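
    A minimal sketch of the response evaluation described above: Taguchi signal-to-noise ratios and a grey relational grade computed over a few trials. The response columns (thermal efficiency, NOx, smoke), their values, and the distinguishing coefficient ζ = 0.5 are hypothetical; the full study uses the complete L16 array plus an analysis of variance on top of this.

```python
import numpy as np

def sn_larger_better(y):
    """Taguchi S/N ratio for a larger-the-better response."""
    return -10.0 * np.log10(np.mean(1.0 / np.asarray(y, float) ** 2))

def sn_smaller_better(y):
    """Taguchi S/N ratio for a smaller-the-better response."""
    return -10.0 * np.log10(np.mean(np.asarray(y, float) ** 2))

def grey_relational_grade(responses, larger_better, zeta=0.5):
    """Grey relational analysis: normalize responses, compute coefficients, average per trial."""
    R = np.asarray(responses, float)                       # rows: trials, cols: responses
    norm = np.empty_like(R)
    for j in range(R.shape[1]):
        col = R[:, j]
        if larger_better[j]:
            norm[:, j] = (col - col.min()) / (col.max() - col.min())
        else:
            norm[:, j] = (col.max() - col) / (col.max() - col.min())
    delta = 1.0 - norm                                     # deviation from the ideal sequence
    coeff = (delta.min() + zeta * delta.max()) / (delta + zeta * delta.max())
    return coeff.mean(axis=1)

# Hypothetical responses for 4 of the 16 L16 trials: [thermal efficiency %, NOx ppm, smoke %]
trials = [[30.1, 820, 42.0],
          [31.4, 760, 38.5],
          [29.5, 900, 45.2],
          [32.0, 700, 35.0]]
grades = grey_relational_grade(trials, larger_better=[True, False, False])
print("grey relational grades:", np.round(grades, 3))
print("best trial (highest grade):", int(np.argmax(grades)) + 1)
print("S/N (thermal efficiency, larger-the-better):",
      [round(sn_larger_better([y[0]]), 2) for y in trials])
```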

  13. EMD-Based Methodology for the Identification of a High-Speed Train Running in a Gear Operating State.

    Science.gov (United States)

    Bustos, Alejandro; Rubio, Higinio; Castejón, Cristina; García-Prada, Juan Carlos

    2018-03-06

    Efficient maintenance is a key consideration in railway transport systems, especially in high-speed trains, in order to avoid accidents with catastrophic consequences. In this sense, having a method that allows for the early detection of defects in critical elements, such as the bogie mechanical components, is crucial for increasing the availability of rolling stock and reducing maintenance costs. The main contribution of this work is the proposal of a methodology that, based on classical signal processing techniques, provides a set of parameters for the fast identification of the operating state of a critical mechanical system. With this methodology, the vibratory behaviour of a very complex mechanical system is characterised through variable inputs, which will allow for the detection of possible changes in the mechanical elements. This methodology is applied to a real high-speed train in commercial service, with the aim of studying the vibratory behaviour of the train (specifically, the bogie) before and after a maintenance operation. The results obtained with this methodology demonstrated the usefulness of the new procedure and allowed for the disclosure of reductions between 15% and 45% in the spectral power of selected Intrinsic Mode Functions (IMFs) after the maintenance operation.

  14. Configuring a Graphical User Interface for Managing Local HYSPLIT Model Runs Through AWIPS

    Science.gov (United States)

    Wheeler, mark M.; Blottman, Peter F.; Sharp, David W.; Hoeth, Brian; VanSpeybroeck, Kurt M.

    2009-01-01

    Responding to incidents involving the release of harmful airborne pollutants is a continual challenge for Weather Forecast Offices in the National Weather Service. When such incidents occur, current protocol recommends forecaster-initiated requests of NOAA's Hybrid Single-Particle Lagrangian Integrated Trajectory (HYSPLIT) model output through the National Centers for Environmental Prediction to obtain critical dispersion guidance. Individual requests are submitted manually through a secured web site, with desired multiple requests submitted in sequence, for the purpose of obtaining useful trajectory and concentration forecasts associated with the significant release of harmful chemical gases, radiation, wildfire smoke, etc., into the local atmosphere. To help manage local HYSPLIT runs for both routine and emergency use, a graphical user interface was designed for operational efficiency. The interface allows forecasters to quickly determine the current HYSPLIT configuration for the list of predefined sites (e.g., fixed sites and floating sites), and to make any necessary adjustments to key parameters such as Input Model, Number of Forecast Hours, etc. When using the interface, forecasters will obtain desired output more confidently and without the danger of corrupting essential configuration files.
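
    The following is only a sketch of how predefined sites and the key parameters named in the abstract (Input Model, Number of Forecast Hours) might be stored and adjusted programmatically; it does not reproduce the actual AWIPS interface or HYSPLIT's input file format, and every field name and value is hypothetical.

```python
from dataclasses import dataclass, asdict, replace
import json

@dataclass(frozen=True)
class HysplitRunConfig:
    site_name: str            # fixed or floating release site
    latitude: float
    longitude: float
    input_model: str          # e.g. "NAM" or "GFS" (hypothetical values)
    forecast_hours: int
    release_height_m: float = 10.0

# Predefined sites a forecaster could review and adjust before submitting a request
SITES = {
    "fixed_plant_A": HysplitRunConfig("fixed_plant_A", 28.57, -80.65, "NAM", 24),
    "floating_incident": HysplitRunConfig("floating_incident", 28.10, -80.60, "GFS", 48),
}

def adjust(name, **changes):
    """Return an updated copy of a site's configuration (the original stays untouched)."""
    return replace(SITES[name], **changes)

cfg = adjust("fixed_plant_A", forecast_hours=36, input_model="GFS")
print(json.dumps(asdict(cfg), indent=2))
```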

  15. Operator formulation of the droplet model

    International Nuclear Information System (INIS)

    Lee, B.W.

    1987-01-01

    We study in detail the implications of the operator formulation of the droplet model. The picture of high-energy scattering that emerges from this model attributes the interaction between two colliding particles at high energies to an instantaneous, multiple exchange between two extended charge distributions. Thus the study of charge correlation functions becomes the most important problem in the droplet model. We find that in order for the elastic cross section to have a finite limit at infinite energy, the charge must be a conserved one. In quantum electrodynamics the charge in question is the electric charge. In hadronic physics, we conjecture, it is the baryonic charge. Various arguments for and implications of this hypothesis are presented. We study formal properties of the charge correlation functions that follow from microcausality, T, C, P invariances, and charge conservation. Perturbation expansion of the correlation functions is studied, and their cluster properties are deduced. A cluster expansion of the high-energy T matrix is developed, and the exponentiation of the interaction potential in this scheme is noted. The operator droplet model is put to the test of reproducing the high-energy limit of elastic scattering in quantum electrodynamics found by Cheng and Wu in perturbation theory. We find that the droplet model reproduces exactly the results of Cheng and Wu as to the impact factor. In fact, the "impact picture" of Cheng and Wu is completely equivalent to the droplet model in the operator version. An appraisal is made of the possible limitation of the model. (author). 13 refs.

  16. Batch vs continuous-feeding operational mode for the removal of pesticides from agricultural run-off by microalgae systems: A laboratory scale study

    Energy Technology Data Exchange (ETDEWEB)

    Matamoros, Víctor, E-mail: victor.matamoros@idaea.csic.es; Rodríguez, Yolanda

    2016-05-15

    Highlights: • The effect of microalgae on the removal of pesticides has been evaluated. • Continuous feeding operational mode is more efficient for removing pesticides. • Microalgae increased the removal of some pesticides. • Pesticide TPs confirmed that biodegradation was relevant. - Abstract: Microalgae-based water treatment technologies have been used in recent years to treat different water effluents, but their effectiveness for removing pesticides from agricultural run-off has not yet been addressed. This paper assesses the effect of microalgae in pesticide removal, as well as the influence of different operation strategies (continuous vs batch feeding). The following pesticides were studied: mecoprop, atrazine, simazine, diazinone, alachlor, chlorfenvinphos, lindane, malathion, pentachlorobenzene, chlorpyrifos, endosulfan and clofibric acid (tracer). 2 L batch reactors and 5 L continuous reactors were spiked to 10 μg L⁻¹ of each pesticide. Additionally, three different hydraulic retention times (HRTs) were assessed (2, 4 and 8 days) in the continuous feeding reactors. The batch-feeding experiments demonstrated that the presence of microalgae increased the efficiency of lindane, alachlor and chlorpyrifos by 50%. The continuous feeding reactors had higher removal efficiencies than the batch reactors for pentachlorobenzene, chlorpyrifos and lindane. Whilst longer HRTs increased the technology's effectiveness, a low HRT of 2 days was capable of removing malathion, pentachlorobenzene, chlorpyrifos, and endosulfan by up to 70%. This study suggests that microalgae-based treatment technologies can be an effective alternative for removing pesticides from agricultural run-off.
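
    As a point of comparison for the batch versus continuous results above, the sketch below works out the idealized case of first-order removal kinetics with hypothetical rate constants: a batch reactor follows C/C0 = exp(−kt), while a continuously fed, well-mixed reactor at steady state follows C/C0 = 1/(1 + k·HRT). Under this idealization batch removal is actually the higher of the two at equal contact time, so the better continuous-mode performance reported above has to come from mechanisms outside this simple picture (for example, sustained biomass acclimation).

```python
import numpy as np

def batch_residual_fraction(k, t_days):
    """First-order decay in a batch reactor: C/C0 = exp(-k t)."""
    return np.exp(-k * t_days)

def cstr_residual_fraction(k, hrt_days):
    """Steady state of a continuously fed, well-mixed reactor: C/C0 = 1 / (1 + k * HRT)."""
    return 1.0 / (1.0 + k * hrt_days)

# Hypothetical first-order removal constants (1/day) for two of the studied pesticides
rates = {"chlorpyrifos": 0.9, "atrazine": 0.1}
for hrt in (2, 4, 8):
    for name, k in rates.items():
        batch = 100.0 * (1.0 - batch_residual_fraction(k, hrt))
        cstr = 100.0 * (1.0 - cstr_residual_fraction(k, hrt))
        print(f"HRT {hrt} d, {name}: batch removal {batch:.0f}%, continuous removal {cstr:.0f}%")
```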

  17. Effects of Degree of Superheat on the Running Performance of an Organic Rankine Cycle (ORC Waste Heat Recovery System for Diesel Engines under Various Operating Conditions

    Directory of Open Access Journals (Sweden)

    Kai Yang

    2014-04-01

    This study analyzed how engine exhaust energy varies under various operating conditions in order to improve the thermal efficiency and fuel economy of diesel engines. An organic Rankine cycle (ORC) waste heat recovery system with an internal heat exchanger (IHE) was designed to recover waste heat from the diesel engine exhaust. The zeotropic mixture R416A was used as the working fluid for the ORC. Three evaluation indexes were presented as follows: waste heat recovery efficiency (WHRE), engine thermal efficiency increasing ratio (ETEIR), and output energy density of working fluid (OEDWF). Through theoretical calculations, this study investigated the variation tendencies of the running performance of the ORC waste heat recovery system and the effects of the degree of superheat on that performance across the various operating conditions of the diesel engine. The research findings showed that the net power output, WHRE, and ETEIR of the ORC waste heat recovery system reach their maxima when the degree of superheat is 40 K, engine speed is 2200 r/min, and engine torque is 1200 N·m. OEDWF gradually increases with the increase in the degree of superheat, which indicates that the required mass flow rate of R416A decreases for a certain net power output, thereby significantly decreasing the risk of environmental pollution.

  18. Disease prediction models and operational readiness.

    Directory of Open Access Journals (Sweden)

    Courtney D Corley

    The objective of this manuscript is to present a systematic review of biosurveillance models that operate on select agents and can forecast the occurrence of a disease event. We define a disease event to be a biological event with focus on the One Health paradigm. These events are characterized by evidence of infection and/or disease condition. We reviewed models that attempted to predict a disease event, not merely its transmission dynamics, and we considered models involving pathogens of concern as determined by the US National Select Agent Registry (as of June 2011). We searched commercial and government databases and harvested Google search results for eligible models, using terms and phrases provided by public health analysts relating to biosurveillance, remote sensing, risk assessments, spatial epidemiology, and ecological niche modeling. After removal of duplications and extraneous material, a core collection of 6,524 items was established, and these publications along with their abstracts are presented in a semantic wiki at http://BioCat.pnnl.gov. As a result, we systematically reviewed 44 papers, and the results are presented in this analysis. We identified 44 models, classified as one or more of the following: event prediction (4), spatial (26), ecological niche (28), diagnostic or clinical (6), spread or response (9), and reviews (3). The model parameters (e.g., etiology, climatic, spatial, cultural) and data sources (e.g., remote sensing, non-governmental organizations, expert opinion, epidemiological) were recorded and reviewed. A component of this review is the identification of verification and validation (V&V) methods applied to each model, if any V&V method was reported. All models were classified as either having undergone Some Verification or Validation method, or No Verification or Validation. We close by outlining an initial set of operational readiness level guidelines for disease prediction models based upon established Technology

  19. Disease Prediction Models and Operational Readiness

    Energy Technology Data Exchange (ETDEWEB)

    Corley, Courtney D.; Pullum, Laura L.; Hartley, David M.; Benedum, Corey M.; Noonan, Christine F.; Rabinowitz, Peter M.; Lancaster, Mary J.

    2014-03-19

    INTRODUCTION: The objective of this manuscript is to present a systematic review of biosurveillance models that operate on select agents and can forecast the occurrence of a disease event. One of the primary goals of this research was to characterize the viability of biosurveillance models to provide operationally relevant information for decision makers and to identify areas for future research. Two critical characteristics differentiate this work from other infectious disease modeling reviews. First, we reviewed models that attempted to predict the disease event, not merely its transmission dynamics. Second, we considered models involving pathogens of concern as determined by the US National Select Agent Registry (as of June 2011). METHODS: We searched dozens of commercial and government databases and harvested Google search results for eligible models utilizing terms and phrases provided by public health analysts relating to biosurveillance, remote sensing, risk assessments, spatial epidemiology, and ecological niche modeling. The publication dates of the search results returned are bounded by the dates of coverage of each database and the date on which the search was performed; however, all searching was completed by December 31, 2010. This returned 13,767 webpages and 12,152 citations. After de-duplication and removal of extraneous material, a core collection of 6,503 items was established and these publications along with their abstracts are presented in a semantic wiki at http://BioCat.pnnl.gov. Next, PNNL's IN-SPIRE visual analytics software was used to cross-correlate these publications with the definition of a biosurveillance model, resulting in the selection of 54 documents that matched the criteria. Ten of these documents, however, dealt purely with disease spread models, inactivation of bacteria, or the modeling of human immune system responses to pathogens rather than predicting disease events. As a result, we systematically reviewed 44 papers and the

  20. Voluntary Running Attenuates Memory Loss, Decreases Neuropathological Changes and Induces Neurogenesis in a Mouse Model of Alzheimer's Disease.

    Science.gov (United States)

    Tapia-Rojas, Cheril; Aranguiz, Florencia; Varela-Nallar, Lorena; Inestrosa, Nibaldo C

    2016-01-01

    Alzheimer's disease (AD) is a neurodegenerative disorder characterized by loss of memory and cognitive abilities, and the appearance of amyloid plaques composed of the amyloid-β peptide (Aβ) and neurofibrillary tangles formed of tau protein. It has been suggested that exercise might ameliorate the disease; here, we evaluated the effect of voluntary running on several aspects of AD including amyloid deposition, tau phosphorylation, inflammatory reaction, neurogenesis and spatial memory in the double transgenic APPswe/PS1ΔE9 mouse model of AD. We report that voluntary wheel running for 10 weeks decreased Aβ burden, Thioflavin-S-positive plaques and Aβ oligomers in the hippocampus. In addition, runner APPswe/PS1ΔE9 mice showed less phosphorylated tau protein and decreased astrogliosis, evidenced by lower staining of GFAP. Further, runner APPswe/PS1ΔE9 mice showed an increased number of neurons in the hippocampus and exhibited increased cell proliferation and generation of cells positive for the immature neuronal protein doublecortin, indicating that running increased neurogenesis. Finally, runner APPswe/PS1ΔE9 mice showed improved spatial memory performance in the Morris water maze. Altogether, our findings indicate that in APPswe/PS1ΔE9 mice, voluntary running reduced all the neuropathological hallmarks of AD studied, reduced neuronal loss, increased hippocampal neurogenesis and reduced spatial memory loss. These findings support that voluntary exercise might have therapeutic value in AD. © 2015 International Society of Neuropathology.

  1. Dual-use tools and systematics-aware analysis workflows in the ATLAS Run-2 analysis model

    CERN Document Server

    FARRELL, Steven; The ATLAS collaboration; Calafiura, Paolo; Delsart, Pierre-Antoine; Elsing, Markus; Koeneke, Karsten; Krasznahorkay, Attila; Krumnack, Nils; Lancon, Eric; Lavrijsen, Wim; Laycock, Paul; Lei, Xiaowen; Strandberg, Sara Kristina; Verkerke, Wouter; Vivarelli, Iacopo; Woudstra, Martin

    2015-01-01

    The ATLAS analysis model has been overhauled for the upcoming run of data collection in 2015 at 13 TeV. One key component of this upgrade was the Event Data Model (EDM), which now allows for greater flexibility in the choice of analysis software framework and provides powerful new features that can be exploited by analysis software tools. A second key component of the upgrade is the introduction of a dual-use tool technology, which provides abstract interfaces for analysis software tools to run in either the Athena framework or a ROOT-based framework. The tool interfaces, including a new interface for handling systematic uncertainties, have been standardized for the development of improved analysis workflows and consolidation of high-level analysis tools. This paper will cover the details of the dual-use tool functionality, the systematics interface, and how these features fit into a centrally supported analysis environment.

  2. Dual-use tools and systematics-aware analysis workflows in the ATLAS Run-II analysis model

    CERN Document Server

    FARRELL, Steven; The ATLAS collaboration

    2015-01-01

    The ATLAS analysis model has been overhauled for the upcoming run of data collection in 2015 at 13 TeV. One key component of this upgrade was the Event Data Model (EDM), which now allows for greater flexibility in the choice of analysis software framework and provides powerful new features that can be exploited by analysis software tools. A second key component of the upgrade is the introduction of a dual-use tool technology, which provides abstract interfaces for analysis software tools to run in either the Athena framework or a ROOT-based framework. The tool interfaces, including a new interface for handling systematic uncertainties, have been standardized for the development of improved analysis workflows and consolidation of high-level analysis tools. This presentation will cover the details of the dual-use tool functionality, the systematics interface, and how these features fit into a centrally supported analysis environment.

  3. Modelling Team Adversarial Actions in UAV Operations

    Science.gov (United States)

    2007-11-01

    ...organization and behaviour of a (possibly large and heterogeneous) network of assets, and its ability to perform tasks over a region and time of interest... the number of requests fulfilled, and the time taken to satisfy requests [13]. Here "theatre" is used to denote a specific geographic region within...

  4. Dark Matter Benchmark Models for Early LHC Run-2 Searches: Report of the ATLAS/CMS Dark Matter Forum

    CERN Document Server

    Abercrombie, Daniel; Akilli, Ece; Alcaraz Maestre, Juan; Allen, Brandon; Alvarez Gonzalez, Barbara; Andrea, Jeremy; Arbey, Alexandre; Azuelos, Georges; Azzi, Patrizia; Backovic, Mihailo; Bai, Yang; Banerjee, Swagato; Beacham, James; Belyaev, Alexander; Boveia, Antonio; Brennan, Amelia Jean; Buchmueller, Oliver; Buckley, Matthew R.; Busoni, Giorgio; Buttignol, Michael; Cacciapaglia, Giacomo; Caputo, Regina; Carpenter, Linda; Filipe Castro, Nuno; Gomez Ceballos, Guillelmo; Cheng, Yangyang; Chou, John Paul; Cortes Gonzalez, Arely; Cowden, Chris; D'Eramo, Francesco; De Cosa, Annapaola; De Gruttola, Michele; De Roeck, Albert; De Simone, Andrea; Deandrea, Aldo; Demiragli, Zeynep; DiFranzo, Anthony; Doglioni, Caterina; du Pree, Tristan; Erbacher, Robin; Erdmann, Johannes; Fischer, Cora; Flaecher, Henning; Fox, Patrick J.; Fuks, Benjamin; Genest, Marie-Helene; Gomber, Bhawna; Goudelis, Andreas; Gramling, Johanna; Gunion, John; Hahn, Kristian; Haisch, Ulrich; Harnik, Roni; Harris, Philip C.; Hoepfner, Kerstin; Hoh, Siew Yan; Hsu, Dylan George; Hsu, Shih-Chieh; Iiyama, Yutaro; Ippolito, Valerio; Jacques, Thomas; Ju, Xiangyang; Kahlhoefer, Felix; Kalogeropoulos, Alexis; Kaplan, Laser Seymour; Kashif, Lashkar; Khoze, Valentin V.; Khurana, Raman; Kotov, Khristian; Kovalskyi, Dmytro; Kulkarni, Suchita; Kunori, Shuichi; Kutzner, Viktor; Lee, Hyun Min; Lee, Sung-Won; Liew, Seng Pei; Lin, Tongyan; Lowette, Steven; Madar, Romain; Malik, Sarah; Maltoni, Fabio; Martinez Perez, Mario; Mattelaer, Olivier; Mawatari, Kentarou; McCabe, Christopher; Megy, Theo; Morgante, Enrico; Mrenna, Stephen; Narayanan, Siddharth M.; Nelson, Andy; Novaes, Sergio F.; Padeken, Klaas Ole; Pani, Priscilla; Papucci, Michele; Paulini, Manfred; Paus, Christoph; Pazzini, Jacopo; Penning, Bjorn; Peskin, Michael E.; Pinna, Deborah; Procura, Massimiliano; Qazi, Shamona F.; Racco, Davide; Re, Emanuele; Riotto, Antonio; Rizzo, Thomas G.; Roehrig, Rainer; Salek, David; Sanchez Pineda, Arturo; Sarkar, Subir; Schmidt, Alexander; Schramm, Steven Randolph; Shepherd, William; Singh, Gurpreet; Soffi, Livia; Srimanobhas, Norraphat; Sung, Kevin; Tait, Tim M.P.; Theveneaux-Pelzer, Timothee; Thomas, Marc; Tosi, Mia; Trocino, Daniele; Undleeb, Sonaina; Vichi, Alessandro; Wang, Fuquan; Wang, Lian-Tao; Wang, Ren-Jie; Whallon, Nikola; Worm, Steven; Wu, Mengqing; Wu, Sau Lan; Yang, Hongtao; Yang, Yong; Yu, Shin-Shan; Zaldivar, Bryan; Zanetti, Marco; Zhang, Zhiqing; Zucchetta, Alberto

    2015-01-01

    This document is the final report of the ATLAS-CMS Dark Matter Forum, a forum organized by the ATLAS and CMS collaborations with the participation of experts on theories of Dark Matter, to select a minimal basis set of dark matter simplified models that should support the design of the early LHC Run-2 searches. A prioritized, compact set of benchmark models is proposed, accompanied by studies of the parameter space of these models and a repository of generator implementations. This report also addresses how to apply the Effective Field Theory formalism for collider searches and present the results of such interpretations.

  5. Snow model design for operational purposes

    Science.gov (United States)

    Kolberg, Sjur

    2017-04-01

    A parsimonious distributed energy balance snow model intended for operational use is evaluated using discharge, snow covered area and grain size; the latter two as observed from the MODIS sensor. The snow model is an improvement of the existing GamSnow model, which is a part of the Enki modelling framework. Core requirements for the new version have been: 1. Reduction of calibration freedom, motivated by previous experience of non-identifiable parameters in the existing version; 2. Improvement of process representation based on recent advances in physically based snow modelling; 3. Limiting the sensitivity to forcing data which are poorly known over the spatial domain of interest (often in mountainous areas); 4. Preference for observable states, and the ability to improve from updates. The albedo calculation is completely revised, now based on grain size through an emulation of the SNICAR model (Flanner and Zender, 2006; Gardner and Sharp, 2010). The number of calibration parameters in the albedo model is reduced from 6 to 2. The wind function governing turbulent energy fluxes has been reduced from 2 to 1 parameter. Following Raleigh et al (2011), snow surface radiant temperature is split from the top-layer thermodynamic temperature, using bias-corrected wet-bulb temperature to model the former. Analyses are ongoing, and the poster will present evaluation results from 16 years of MODIS observations and more than 25 catchments in southern Norway.

  6. Up and running with AutoCAD 2014 2D and 3D drawing and modeling

    CERN Document Server

    Gindis, Elliot

    2013-01-01

    Get ""Up and Running"" with AutoCAD using Gindis's combination of step-by-step instruction, examples, and insightful explanations. The emphasis from the beginning is on core concepts and practical application of AutoCAD in architecture, engineering and design. Equally useful in instructor-led classroom training, self-study, or as a professional reference, the book is written with the user in mind by a long-time AutoCAD professional and instructor based on what works in the industry and the classroom. Strips away complexities, both real and perceived, and reduces AutoCAD t

  7. Numerical Model Metrics Tools in Support of Navy Operations

    Science.gov (United States)

    Dykes, J. D.; Fanguy, P.

    2017-12-01

    Increasing demands for accurate ocean forecasts that are relevant to Navy mission decision makers call for tools that quickly provide relevant numerical model metrics to the forecasters. Increasing modelling capabilities with ever-higher resolution domains, including coupled and ensemble systems, as well as the increasing volume of observations and other data sources against which to compare the model output, require more tools for the forecaster to enable doing more with less. These data can be appropriately handled in a geographic information system (GIS) and fused together to provide useful information and analyses, and ultimately a better understanding of how the pertinent model performs based on ground truth. Oceanographic measurements like surface elevation, profiles of temperature and salinity, and wave height can all be incorporated into a set of layers correlated to geographic information such as bathymetry and topography. In addition, an automated system that runs concurrently with the models on high performance machines matches routinely available observations to modelled values to form a database of matchups with which statistics can be calculated and displayed, to facilitate validation of forecast state and derived variables. ArcMAP, developed by Environmental Systems Research Institute, is a GIS application used by the Naval Research Laboratory (NRL) and naval operational meteorological and oceanographic centers to analyse the environment in support of a range of Navy missions. For example, acoustic propagation in the ocean is described with a three-dimensional analysis of sound speed that depends on profiles of temperature, pressure and salinity predicted by the Navy Coastal Ocean Model. The data and model output must include geo-referencing information suitable for accurately placing the data within the ArcMAP framework. NRL has developed tools that facilitate merging these geophysical data and their analyses, including intercomparisons between model
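
    The abstract describes an automated system that matches observations to modelled values and computes statistics from the matchups. The following is a minimal, hypothetical Python sketch of that matchup step; the function and variable names are illustrative and are not NRL's actual tooling:

```python
import numpy as np

def matchup_stats(model_vals, obs_vals):
    """Basic verification metrics from co-located model/observation pairs."""
    model = np.asarray(model_vals, dtype=float)
    obs = np.asarray(obs_vals, dtype=float)
    diff = model - obs
    return {
        "n": diff.size,
        "bias": diff.mean(),                    # mean error
        "rmse": np.sqrt((diff ** 2).mean()),    # root-mean-square error
        "corr": np.corrcoef(model, obs)[0, 1],  # linear correlation
    }

# Example: modelled vs. observed significant wave height (m), invented numbers
print(matchup_stats([1.2, 1.5, 2.1, 1.8], [1.0, 1.6, 2.0, 2.2]))
```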

  8. Inverse dynamic modelling of jumping in the red-legged running frog, Kassina maculata.

    Science.gov (United States)

    Porro, Laura B; Collings, Amber J; Eberhard, Enrico A; Chadwick, Kyle P; Richards, Christopher T

    2017-05-15

    Although the red-legged running frog, Kassina maculata, is secondarily a walker/runner, it retains the capacity for multiple locomotor modes, including jumping at a wide range of angles (nearly 70 deg). Using simultaneous hind limb kinematics and single-foot ground reaction forces (GRF), we performed inverse dynamics analyses to calculate moment arms and torques about the hind limb joints during jumping at different angles in K. maculata. We show that forward thrust is generated primarily at the hip and ankle, while body elevation is primarily driven by the ankle. Steeper jumps are achieved by increased thrust at the hip and ankle and greater downward rotation of the distal limb segments. Because of its proximity to the GRF vector, knee posture appears to be important in controlling torque directions about this joint and, potentially, torque magnitudes at more distal joints. Other factors correlated with higher jump angles include increased body angle in the preparatory phase, faster joint openings and increased joint excursion, higher ventrally directed force, and greater acceleration and velocity. Finally, we demonstrate that jumping performance in K. maculata does not appear to be compromised by presumed adaptation to walking/running. Our results provide new insights into how frogs engage in a wide range of locomotor behaviours and the multi-functionality of anuran limbs. © 2017. Published by The Company of Biologists Ltd.
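
    The core inverse-dynamics step referred to above is the computation of joint torques from the measured ground reaction force and limb kinematics. Below is a minimal two-dimensional Python sketch, assuming a known joint centre and centre of pressure; it is illustrative only and not the authors' full 3D workflow:

```python
import numpy as np

def grf_joint_torque(joint_xy, cop_xy, grf_xy):
    """2D torque about a joint due to the ground reaction force (GRF).

    joint_xy : joint centre position (m)
    cop_xy   : centre of pressure where the GRF acts (m)
    grf_xy   : ground reaction force vector (N)
    Returns the z-component of r x F (N*m); the sign gives the torque direction.
    """
    r = np.asarray(cop_xy) - np.asarray(joint_xy)   # moment arm vector
    f = np.asarray(grf_xy)
    return r[0] * f[1] - r[1] * f[0]                # 2D cross product

# Illustrative numbers only: an ankle joint with a small up/forward GRF
print(grf_joint_torque(joint_xy=(0.02, 0.05), cop_xy=(0.0, 0.0), grf_xy=(0.5, 3.0)))
```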

  9. The effect of two training models on the average changes in running speed in 2400m races

    Directory of Open Access Journals (Sweden)

    Bolas Nikolaos

    2014-01-01

    Full Text Available Running at an even pace is, in both physical and tactical aspects, an essential factor when achieving good results in middle and long distance races. The appropriate strategy for running a tactically effective race starts by selecting the optimal running speed. Two models of training lasting for six weeks were applied on a group of subjects (N=43) composed of students from the Faculty of Sport and Physical Education, University of Belgrade. The aim of the study was to determine how the applied models of training would affect the deviations of running speed from the mean values in 2400m races when running for the best result and also how the applied models of training would affect the improvement of aerobic capacities, shown through maximal oxygen uptake. The analysis of the obtained results showed that no statistically significant differences in the average deviations of running speed from the mean values in 2400m races were recorded in any of the experimental groups either in the initial (G1=2.44±1.74 % and G2=1±0.75 %) or the final measurements (G1=3.72±3.69 % and G2=4.57±3.63 %). Although there were no statistically significant differences after the training stimulus in either final measurement, the subjects achieved better results, that is, they improved the running speed in the final (G1=4.12±0.48 m/s and G2=4.23±0.31 m/s) as compared with the initial measurement (G1=3.7±0.36 m/s and G2=3.84±0.38 m/s). The results of the study showed that in both groups there was a statistically significant improvement in the final measurement (G1=56.05±6.91 ml/kg/min and G2=59.55±6.95 ml/kg/min) as compared to the initial measurement (G1=53.71±7.23 ml/kg/min and G2=54.58±6.49 ml/kg/min) regarding the maximal oxygen uptake, so that both training models have a significant effect on this variable. The results obtained could have a significant contribution when working with students and school population, assuming that in the lessons of theory and
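
    The pacing measure discussed here is the deviation of segment running speeds from the mean race speed. A small illustrative Python sketch, with invented split speeds:

```python
def pacing_deviation(segment_speeds):
    """Mean absolute percentage deviation of segment speeds from the mean race speed."""
    mean_speed = sum(segment_speeds) / len(segment_speeds)
    deviations = [abs(v - mean_speed) / mean_speed * 100 for v in segment_speeds]
    return sum(deviations) / len(deviations)

# Six 400 m splits of a 2400 m race (m/s), hypothetical values
print(round(pacing_deviation([4.3, 4.1, 4.0, 3.9, 4.0, 4.4]), 2))
```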

  10. Evaluation of low-frequency operational limit of proposed ITER low-field-side reflectometer waveguide run including miter bends

    Energy Technology Data Exchange (ETDEWEB)

    Wang, Guiding [Univ. of California, Los Angeles, CA (United States). Dept. of Physics and Astronomy and Plasma Science and Technology Inst. (PSTI); Peebles, W. A. [Univ. of California, Los Angeles, CA (United States). Dept. of Physics and Astronomy and Plasma Science and Technology Inst. (PSTI); Doyle, E. J. [Univ. of California, Los Angeles, CA (United States). Dept. of Physics and Astronomy and Plasma Science and Technology Inst. (PSTI); Crocker, N. A. [Univ. of California, Los Angeles, CA (United States). Dept. of Physics and Astronomy and Plasma Science and Technology Inst. (PSTI); Wannberg, C. [Univ. of California, Los Angeles, CA (United States). Dept. of Physics and Astronomy and Plasma Science and Technology Inst. (PSTI); Lau, Cornwall H. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Hanson, Gregory R. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Doane, John L. [General Atomics, San Diego, CA (United States)

    2017-10-19

    The present design concept for the ITER low-field-side reflectometer transmission line (TL) consists of an ~40 m long, 6.35 cm diameter helically corrugated waveguide (WG) together with ten 90° miter bends. This paper presents an evaluation of the TL performance at low frequencies (33-50 GHz) where the predicted HE11 mode ohmic and mode conversion losses start to increase significantly. Quasi-optical techniques were used to form a near Gaussian beam to efficiently couple radiation in this frequency range into the WG. We observed that the output beams from the guide remained linearly polarized with cross-polarization power levels of ~1.5%-3%. The polarization rotation due to the helical corrugations was in the range ~1°-3°. The radiated beam power profiles typically show excellent Gaussian propagation characteristics at distances >20 cm from the final exit aperture. The round trip propagation loss was found to be ~2.5 dB at 50 GHz and ~6.5 dB at 35 GHz, increasing as the frequency decreases. This was consistent with updated calculations of miter bend and ohmic losses. At low frequencies (33-50 GHz), the mode purity remained very good at the exit of the waveguide, and the losses are perfectly acceptable for operation in ITER. Finally, the primary challenge may come from the future addition of a Gaussian telescope and other filter components within the corrugated guide, which will likely introduce additional perturbations to the beam profile and an increase in mode-conversion loss.

  11. Nordic Model of Subregional Co-Operation

    Directory of Open Access Journals (Sweden)

    Grzela Joanna

    2017-12-01

    Full Text Available Nordic co-operation is renowned throughout the world and perceived as the collaboration of a group of countries which are similar in their views and activities. The main pillars of the Nordic model of co-operation are the tradition of constitutional principles, activity of public movements and organisations, freedom of speech, equality, solidarity, and respect for the natural environment. In connection with labour and entrepreneurship, these elements are the features of a society which favours efficiency, a sense of security and balance between an individual and a group. Currently, the collaboration is a complex process, including many national, governmental and institutional connections which form the “Nordic family”.

  12. Modeling decisions information fusion and aggregation operators

    CERN Document Server

    Torra, Vicenc

    2007-01-01

    Information fusion techniques and aggregation operators produce the most comprehensive, specific datum about an entity using data supplied from different sources, thus enabling us to reduce noise, increase accuracy, summarize and extract information, and make decisions. These techniques are applied in fields such as economics, biology and education, while in computer science they are particularly used in fields such as knowledge-based systems, robotics, and data mining. This book covers the underlying science and application issues related to aggregation operators, focusing on tools used in practical applications that involve numerical information. Starting with detailed introductions to information fusion and integration, measurement and probability theory, fuzzy sets, and functional equations, the authors then cover the following topics in detail: synthesis of judgements, fuzzy measures, weighted means and fuzzy integrals, indices and evaluation methods, model selection, and parameter extraction. The method...

  13. Dipole operator constraints on composite Higgs models.

    Science.gov (United States)

    König, Matthias; Neubert, Matthias; Straub, David M

    Flavour- and CP-violating electromagnetic or chromomagnetic dipole operators in the quark sector are generated in a large class of new physics models and are strongly constrained by measurements of the neutron electric dipole moment and observables sensitive to flavour-changing neutral currents, such as the B → X_sγ branching ratio and ε'/ε. After a model-independent discussion of the relevant constraints, we analyze these effects in models with partial compositeness, where the quarks get their masses by mixing with vector-like composite fermions. These scenarios can be seen as the low-energy limit of composite Higgs or warped extra dimensional models. We study different choices for the electroweak representations of the composite fermions motivated by electroweak precision tests as well as different flavour structures, including flavour anarchy and U(3)^3 or U(2)^3 flavour symmetries in the strong sector. In models with "wrong-chirality" Yukawa couplings, we find a strong bound from the neutron electric dipole moment, irrespective of the flavour structure. In the case of flavour anarchy, we also find strong bounds from flavour-violating dipoles, while these constraints are mild in the flavour-symmetric models.

  14. Dipole operator constraints on composite Higgs models

    Energy Technology Data Exchange (ETDEWEB)

    Koenig, Matthias [Johannes Gutenberg University, PRISMA Cluster of Excellence and Mainz Institute for Theoretical Physics, Mainz (Germany); Neubert, Matthias [Johannes Gutenberg University, PRISMA Cluster of Excellence and Mainz Institute for Theoretical Physics, Mainz (Germany); Cornell University, Department of Physics, LEPP, Ithaca, NY (United States); Straub, David M. [Excellence Cluster Universe, Technische Universitaet Muenchen, Garching (Germany)

    2014-07-15

    Flavour- and CP-violating electromagnetic or chromomagnetic dipole operators in the quark sector are generated in a large class of new physics models and are strongly constrained by measurements of the neutron electric dipole moment and observables sensitive to flavour-changing neutral currents, such as the B → X_sγ branching ratio and ε'/ε. After a model-independent discussion of the relevant constraints, we analyze these effects in models with partial compositeness, where the quarks get their masses by mixing with vector-like composite fermions. These scenarios can be seen as the low-energy limit of composite Higgs or warped extra dimensional models. We study different choices for the electroweak representations of the composite fermions motivated by electroweak precision tests as well as different flavour structures, including flavour anarchy and U(3)^3 or U(2)^3 flavour symmetries in the strong sector. In models with "wrong-chirality" Yukawa couplings, we find a strong bound from the neutron electric dipole moment, irrespective of the flavour structure. In the case of flavour anarchy, we also find strong bounds from flavour-violating dipoles, while these constraints are mild in the flavour-symmetric models. (orig.)

  15. Cost Model for Risk Assessment of Company Operation in Audit

    Directory of Open Access Journals (Sweden)

    S. V.

    2017-12-01

    Full Text Available This article explores an approach to assessing the risk of termination of company activities by building a cost model. This model gives auditors information on managers’ understanding of the factors influencing changes in the value of assets and liabilities, and on more effective and reliable methods to identify them. Based on this information, the auditor can assess the adequacy of the use of the assumption of continuity of company operation by management personnel when preparing financial statements. Financial uncertainty entails real manifestations of factors creating risks of the occurrence of costs and revenue losses due to their manifestations, which in the long run can be a reason for termination of company operation and therefore need to be foreseen in the auditor’s assessment of the adequacy of use of the continuity assumption when preparing financial statements by company management. The purpose of the study is to explore and develop a methodology for the use of cost models to assess the risk of termination of company operation in audit. The issue of a methodology for assessing the audit risk through analyzing methods for company valuation has not been dealt with. The review of methodologies for assessing the risks of termination of company operation in the course of an audit gives grounds for the conclusion that the use of cost models can be an effective methodology for identification and assessment of such risks. The analysis of the above methods gives an understanding of the existing system for company valuation, integrated into the management system, and the consequences of its use, i.e. comparison of the asset price data with the accounting data and the market value of the asset data. Overvalued or undervalued company assets may be a sign of future sale or liquidation of a company, which may signal a high probability of termination of company operation. A wrong choice or application of valuation methods can be indicative of the risk of non

  16. Evaluation and comparison of the operational Bristol Channel Model storm surge suite

    OpenAIRE

    Williams, J.A.; Horsburgh, K.J.

    2013-01-01

    Due to its exceptional tidal range, complex geometry, and exposure to flood risk, the operational storm surge modelling system for the Bristol Channel (running four times each day as part of the UK Coastal Monitoring and Forecasting service) comprises a number of nested hydrodynamic models. Forecasts for the region are available from the shelf-wide storm surge model, CS3X, as well as from two finer-scale models of the Bristol Channel itself (the Bristol Channel model, BCM, and the Severn River...

  17. Renormalization Group Equations of d=6 Operators in the Standard Model Effective Field Theory

    CERN Multimedia

    CERN. Geneva

    2015-01-01

    The one-loop renormalization group equations for the Standard Model (SM) Effective Field Theory (EFT) including dimension-six operators are calculated. The complete 2499 × 2499 one-loop anomalous dimension matrix of the d=6 Lagrangian is obtained, as well as the contribution of d=6 operators to the running of the parameters of the renormalizable SM Lagrangian. The presence of higher-dimension operators has implications for the flavor problem of the SM. An approximate holomorphy of the one-loop anomalous dimension matrix is found, even though the SM EFT is not a supersymmetric theory.
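
    As a compact illustration of the quantity being computed in this record (and not the full 2499 × 2499 result), the generic form of the one-loop renormalization group equation for the dimension-six Wilson coefficients is:

```latex
\mu \frac{\mathrm{d} C_i(\mu)}{\mathrm{d}\mu} = \frac{1}{16\pi^{2}} \sum_{j} \gamma_{ij}\, C_j(\mu)
```

    Here the C_i are the Wilson coefficients of the d=6 operators and γ_ij is the one-loop anomalous dimension matrix; the operator mixing under running and the approximate holomorphy mentioned in the abstract are statements about the structure of γ_ij.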

  18. REAL STOCK PRICES AND THE LONG-RUN MONEY DEMAND FUNCTION IN MALAYSIA: Evidence from Error Correction Model

    Directory of Open Access Journals (Sweden)

    Naziruddin Abdullah

    2004-06-01

    Full Text Available This study adopts the error correction model to empirically investigate the role of real stock prices in the long-run money demand in the Malaysian financial or money market for the period 1977:Q1-1997:Q2. Specifically, an attempt is made to check whether the real narrow money (M1/P) is cointegrated with selected variables like the industrial production index (IPI), one-year T-Bill rates (TB12), and real stock prices (RSP). If a cointegration between the variables, i.e., the dependent and independent variables, is found to be the case, it may imply that there exists a long-run co-movement among these variables in the Malaysian money market. From the empirical results it is found that the cointegration between money demand and real stock prices (RSP) is positive, implying that in the long run there is a positive association between real stock prices (RSP) and demand for real narrow money (M1/P). The policy implication that can be extracted from this study is that an increase in stock prices is likely to necessitate an expansionary monetary policy to prevent nominal income or inflation target from undershooting.
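
    The error correction specification implied by this abstract can be sketched as follows; the coefficient names and lag structure are illustrative rather than the paper's exact equation:

```latex
\Delta \ln\!\left(\tfrac{M_1}{P}\right)_t =
  \alpha \Big[ \ln\!\left(\tfrac{M_1}{P}\right)_{t-1}
  - \beta_1\,\mathrm{IPI}_{t-1} - \beta_2\,\mathrm{TB12}_{t-1} - \beta_3\,\mathrm{RSP}_{t-1} \Big]
  + \sum_{k} \phi_k\, \Delta x_{t-k} + \varepsilon_t
```

    A negative and significant adjustment coefficient α indicates reversion towards the cointegrating relationship, which is the sense in which real stock prices enter the long-run money demand function; a positive β_3 corresponds to the positive association reported above.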

  19. Modeling lift operations with SAS® Simulation Studio

    Science.gov (United States)

    Kar, Leow Soo

    2016-10-01

    Lifts or elevators are an essential part of multistorey buildings, providing vertical transportation for their occupants. In large and high-rise apartment buildings, the occupants are permanent, while in buildings like hospitals or office blocks the occupants are temporary visitors or users of the buildings. They come in to work or to visit, and thus the population of such buildings is much higher than that of residential apartments. It is common these days for large office blocks or hospitals to have at least 8 to 10 lifts serving their population. In order to optimize the level of service performance, different transportation schemes are devised to control the lift operations. For example, one lift may be assigned to solely service the even floors and another solely the odd floors, etc. In this paper, a basic lift system is modelled using SAS Simulation Studio to study the effect of factors such as the number of floors, capacity of the lift car, arrival rate and exit rate of passengers at each floor, and peak and off-peak periods on the system performance. The simulation is applied to a real lift operation in Sunway College's North Building to validate the model.
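
    The article models the lift system in SAS Simulation Studio. Purely as an analogous toy in Python, the discrete-event structure being simulated (random passenger arrivals queuing for a shared lift car) might look like the sketch below, using the simpy library with invented rates:

```python
import random
import simpy  # generic discrete-event library, standing in here for SAS Simulation Studio

def passenger(env, name, lift):
    arrive = env.now
    with lift.request() as req:                      # queue for the lift car
        yield req
        yield env.timeout(random.uniform(0.5, 2.0))  # ride time in minutes
    print(f"{name} spent {env.now - arrive:.2f} min waiting and riding")

def arrivals(env, lift, rate_per_min=2.0):
    i = 0
    while True:
        yield env.timeout(random.expovariate(rate_per_min))  # Poisson arrivals
        i += 1
        env.process(passenger(env, f"p{i}", lift))

random.seed(1)
env = simpy.Environment()
lift = simpy.Resource(env, capacity=1)   # a single lift car; capacity is a modelling choice
env.process(arrivals(env, lift))
env.run(until=30)                        # simulate 30 minutes of a peak period
```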

  20. Running Club

    CERN Multimedia

    Running Club

    2011-01-01

    The cross country running season has started well this autumn with two events: the traditional CERN Road Race organized by the Running Club, which took place on Tuesday 5th October, followed by the ‘Cross Interentreprises’, a team event at the Evaux Sports Center, which took place on Saturday 8th October. The participation at the CERN Road Race was slightly down on last year, with 65 runners, however the participants maintained the tradition of a competitive yet friendly atmosphere. An ample supply of refreshments before the prize giving was appreciated by all after the race. Many thanks to all the runners and volunteers who ensured another successful race. The results can be found here: https://espace.cern.ch/Running-Club/default.aspx CERN participated successfully at the cross interentreprises with very good results. The teams succeeded in obtaining 2nd and 6th place in the Mens category, and 2nd place in the Mixed category. Congratulations to all. See results here: http://www.c...

  1. An improved cellular automata model for train operation simulation with dynamic acceleration

    Science.gov (United States)

    Li, Wen-Jun; Nie, Lei

    2018-03-01

    Urban rail transit plays an important role in urban public transport because of its advantages of high speed, large transport capacity, high safety, reliability and low pollution. This study proposes an improved cellular automaton (CA) model that accounts for the dynamic characteristics of train acceleration in order to analyze energy consumption and train running time. Constructing an effective model for calculating energy consumption to aid train operation improvement is the basis for studying and analyzing energy-saving measures for urban rail transit system operation.
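
    A minimal sketch of the kind of update rule such a CA uses, combining a bounded (dynamic) acceleration with a safe-gap constraint; the specific rule and numbers below are illustrative, not the authors' model:

```python
def ca_step(speed, gap, v_max, a_max):
    """One update of a simplified train cellular automaton (speeds in cells/step).

    The train accelerates by at most a_max cells/step per step towards the line
    speed v_max, but never advances further than the free gap ahead of it.
    """
    speed = min(speed + a_max, v_max)   # bounded (dynamic) acceleration
    speed = min(speed, gap)             # safety rule: never overrun the gap
    return speed

# Toy usage: a train approaching a red signal 12 cells ahead
pos, v, signal_at = 0, 3, 12
for step in range(5):
    v = ca_step(v, gap=max(signal_at - pos, 0), v_max=8, a_max=2)
    pos += v
    print(step, pos, v)   # the train accelerates, then stops at the signal
```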

  2. Parallel runs of a large air pollution model on a grid of Sun computers

    DEFF Research Database (Denmark)

    Alexandrov, V.N.; Owczarz, W.; Thomsen, Per Grove

    2004-01-01

    Large-scale air pollution models can successfully be used in different environmental studies. These models are described mathematically by systems of partial differential equations. Splitting procedures followed by discretization of the spatial derivatives lead to several large systems of ordin...

  3. Analysis of the Automobile Market : Modeling the Long-Run Determinants of the Demand for Automobiles : Volume 2. Simulation Analysis Using the Wharton EFA Automobile Demand Model

    Science.gov (United States)

    1979-12-01

    An econometric model is developed which provides long-run policy analysis and forecasting of annual trends, for U.S. auto stock, new sales, and their composition by auto size-class. The concept of "desired" (equilibrium) stock is introduced. "Desired...

  4. Analysis of the Automobile Market : Modeling the Long-Run Determinants of the Demand for Automobiles : Volume 1. The Wharton EFA Automobile Demand Model

    Science.gov (United States)

    1979-12-01

    An econometric model is developed which provides long-run policy analysis and forecasting of annual trends, for U.S. auto stock, new sales, and their composition by auto size-class. The concept of "desired" (equilibrium) stock is introduced. "Desired...

  5. Analysis of the Automobile Market : Modeling the Long-Run Determinants of the Demand for Automobiles : Volume 3. Appendices to the Wharton EFA Automobile Demand Model

    Science.gov (United States)

    1979-12-01

    An econometric model is developed which provides long-run policy analysis and forecasting of annual trends, for U.S. auto stock, new sales, and their composition by auto size-class. The concept of "desired" (equilibrium) stock is introduced. "Desired...

  6. Modelling Energy Loss Mechanisms and a Determination of the Electron Energy Scale for the CDF Run II W Mass Measurement

    Energy Technology Data Exchange (ETDEWEB)

    Riddick, Thomas [Univ. College London, Bloomsbury (United Kingdom)

    2012-06-15

    The calibration of the calorimeter energy scale is vital to measuring the mass of the W boson at CDF Run II. For the second measurement of the W boson mass at CDF Run II, two independent simulations were developed. This thesis presents a detailed description of the modification and validation of Bremsstrahlung and pair production modelling in one of these simulations, UCL Fast Simulation, comparing to both geant4 and real data where appropriate. The total systematic uncertainty on the measurement of the W boson mass in the W → eν_e channel from residual inaccuracies in Bremsstrahlung modelling is estimated as 6.2 ± 3.2 MeV/c², and the total systematic uncertainty from residual inaccuracies in pair production modelling is estimated as 2.8 ± 2.7 MeV/c². Two independent methods are used to calibrate the calorimeter energy scale in UCL Fast Simulation; the results of these two methods are compared to produce a measurement of the Z boson mass as a cross-check on the accuracy of the simulation.

  7. An approach to modeling operator's cognitive behavior using artificial intelligence techniques in emergency operating event sequences

    International Nuclear Information System (INIS)

    Cheon, Se Woo; Sur, Sang Moon; Lee, Yong Hee; Park, Young Taeck; Moon, Sang Joon

    1994-01-01

    Computer modeling of an operator's cognitive behavior is a promising approach for human factors studies and man-machine systems assessment. In this paper, the state of the art in modeling operator behavior and the current status of developing an operator model (MINERVA-NPP) are presented. The model is constructed as a knowledge-based system within a blackboard framework and is simulated based on emergency operating procedures

  8. An Operational Model for the Prediction of Jet Blast

    Science.gov (United States)

    2012-01-09

    This paper presents an operational model for the prediction of jet blast. The model was developed based upon three modules including a jet exhaust model, a jet centerline decay model and an aircraft motion model. The final analysis was compared with d...

  9. Running multilevel models in MLwiN from within Stata: runmlwin

    OpenAIRE

    George Leckie; Chris Charlton

    2011-01-01

    Multilevel analysis is the statistical modeling of hierarchical and nonhierarchical clustered data. These data structures are common in social and medical sciences. Stata provides the xtmixed, xtmelogit, and xtmepoisson commands for fitting multilevel models, but these are only relevant for univariate continuous, binary, and count response variables, respectively. A much wider range of multilevel models can be fit using the user-written gllamm command, but gllamm can be computationally slow f...
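
    runmlwin itself is a Stata command that calls MLwiN. Purely to illustrate the model class it fits, and keeping to one language for the examples in this document, here is a two-level random-intercept model on simulated data using Python's statsmodels (not the article's software):

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulated two-level data: pupils (rows) nested in schools (groups)
rng = np.random.default_rng(0)
n_schools, pupils_per_school = 20, 30
school = np.repeat(np.arange(n_schools), pupils_per_school)
u = rng.normal(0, 2, n_schools)[school]          # school-level random intercepts
x = rng.normal(size=school.size)
y = 1.0 + 0.5 * x + u + rng.normal(size=school.size)
df = pd.DataFrame({"y": y, "x": x, "school": school})

# Random-intercept model: y ~ x with a random effect for each school
result = smf.mixedlm("y ~ x", df, groups=df["school"]).fit()
print(result.summary())
```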

  10. The short-run dynamics of optimal growyh models with delays

    OpenAIRE

    Collard, Fabrice; Licandro, Omar; Puch, Luis A.

    2003-01-01

    Differential equations with advanced and delayed time arguments may arise in the optimality conditions of simple growth models with delays. Models with investment gestation lags (time-to-build), consumption gestation lags (habit formation) or learning by using lie in this category. In this paper, we propose a shooting method to deal with leads and lags in the Euler system associated to dynamic general equilibrium models in continuous time. We introduce the discussion describing the dynamic...

  11. A Secure Operational Model for Mobile Payments

    Directory of Open Access Journals (Sweden)

    Tao-Ku Chang

    2014-01-01

    Full Text Available Instead of paying by cash, check, or credit cards, customers can now also use their mobile devices to pay for a wide range of services and both digital and physical goods. However, customers’ security concerns are a major barrier to the broad adoption and use of mobile payments. In this paper we present the design of a secure operational model for mobile payments in which access control is based on a service-oriented architecture. A customer uses his/her mobile device to get authorization from a remote server and generate a two-dimensional barcode as the payment certificate. This payment certificate has a time limit and can be used once only. The system also provides the ability to remotely lock and disable the mobile payment service.
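
    The authors' design is service-oriented and barcode-based; the sketch below is a generic, hypothetical Python illustration of the two properties emphasised in the abstract, a time limit and single use, and is not the paper's protocol:

```python
import hmac, hashlib, time, secrets

SERVER_KEY = secrets.token_bytes(32)   # held by the authorization server
_used = set()                          # server-side record of redeemed certificates

def issue_certificate(account_id, amount, ttl_s=120):
    """Issue a time-limited payment certificate (this string would be encoded as a 2-D barcode)."""
    nonce = secrets.token_hex(8)
    expires = int(time.time()) + ttl_s
    payload = f"{account_id}|{amount}|{expires}|{nonce}"
    sig = hmac.new(SERVER_KEY, payload.encode(), hashlib.sha256).hexdigest()
    return f"{payload}|{sig}"

def redeem_certificate(cert):
    """Verify signature, expiry and single-use property before accepting payment."""
    payload, sig = cert.rsplit("|", 1)
    expected = hmac.new(SERVER_KEY, payload.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        return "rejected: bad signature"
    account_id, amount, expires, nonce = payload.split("|")
    if time.time() > int(expires):
        return "rejected: expired"
    if nonce in _used:
        return "rejected: already used"
    _used.add(nonce)
    return f"accepted: {amount} charged to {account_id}"

cert = issue_certificate("customer-42", "9.99")
print(redeem_certificate(cert))   # accepted
print(redeem_certificate(cert))   # rejected: already used
```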

  12. A secure operational model for mobile payments.

    Science.gov (United States)

    Chang, Tao-Ku

    2014-01-01

    Instead of paying by cash, check, or credit cards, customers can now also use their mobile devices to pay for a wide range of services and both digital and physical goods. However, customers' security concerns are a major barrier to the broad adoption and use of mobile payments. In this paper we present the design of a secure operational model for mobile payments in which access control is based on a service-oriented architecture. A customer uses his/her mobile device to get authorization from a remote server and generate a two-dimensional barcode as the payment certificate. This payment certificate has a time limit and can be used once only. The system also provides the ability to remotely lock and disable the mobile payment service.

  13. A Secure Operational Model for Mobile Payments

    Science.gov (United States)

    2014-01-01

    Instead of paying by cash, check, or credit cards, customers can now also use their mobile devices to pay for a wide range of services and both digital and physical goods. However, customers' security concerns are a major barrier to the broad adoption and use of mobile payments. In this paper we present the design of a secure operational model for mobile payments in which access control is based on a service-oriented architecture. A customer uses his/her mobile device to get authorization from a remote server and generate a two-dimensional barcode as the payment certificate. This payment certificate has a time limit and can be used once only. The system also provides the ability to remotely lock and disable the mobile payment service. PMID:25386607

  14. Fuzzy rule-based macroinvertebrate habitat suitability models for running waters

    NARCIS (Netherlands)

    Broekhoven, Van E.; Adriaenssens, V.; Baets, De B.; Verdonschot, P.F.M.

    2006-01-01

    A fuzzy rule-based approach was applied to a macroinvertebrate habitat suitability modelling problem. The model design was based on a knowledge base summarising the preferences and tolerances of 86 macroinvertebrate species for four variables describing river sites in springs up to small rivers in
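
    As a toy illustration of the fuzzy rule-based idea (membership functions and rule firing strengths), the following Python sketch uses two invented rules and thresholds; it is not the knowledge base of the 86-species model:

```python
def trapezoid(x, a, b, c, d):
    """Trapezoidal membership function: 0 outside [a, d], 1 on [b, c]."""
    if x <= a or x >= d:
        return 0.0
    if b <= x <= c:
        return 1.0
    return (x - a) / (b - a) if x < b else (d - x) / (d - c)

def habitat_suitability(flow_velocity, depth):
    """Toy two-rule fuzzy system: returns the firing strength of each rule."""
    slow = trapezoid(flow_velocity, 0.0, 0.0, 0.2, 0.5)   # m/s
    shallow = trapezoid(depth, 0.0, 0.0, 0.3, 0.8)        # m
    # Rule 1: IF velocity is slow AND depth is shallow THEN suitability is high
    rule_high = min(slow, shallow)
    # Rule 2: IF velocity is NOT slow THEN suitability is low
    rule_low = 1.0 - slow
    return {"high": rule_high, "low": rule_low}

print(habitat_suitability(flow_velocity=0.3, depth=0.4))
```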

  15. The Monetary Exchange Rate Model as a Long-Run Phenomenon

    NARCIS (Netherlands)

    J.J.J. Groen (Jan)

    1998-01-01

    textabstractPure time series-based tests fail to find empirical support for monetary exchange rate models. In this paper we apply pooled time series estimation on a forward-looking monetary model, resulting in parameter estimates which are in compliance with the underlying theory. Based on a panel

  16. Implementation of an operator model with error mechanisms for nuclear power plant control room operation

    International Nuclear Information System (INIS)

    Suh, Sang Moon; Cheon, Se Woo; Lee, Yong Hee; Lee, Jung Woon; Park, Young Taek

    1996-01-01

    SACOM (Simulation Analyser with Cognitive Operator Model) is being developed at the Korea Atomic Energy Research Institute to simulate human operators' cognitive characteristics during emergency situations in nuclear power plants. An operator model with error mechanisms has been developed and combined into SACOM to simulate the human operator's cognitive information processing based on Rasmussen's decision ladder model. The operational logic for five different cognitive activities (Agents), the operator's attentional control (Controller), short-term memory (Blackboard), and long-term memory (Knowledge Base) has been developed and implemented on a blackboard architecture. A trial simulation with a scenario for emergency operation has been performed to verify the operational logic. It was found that the operator model with error mechanisms is suitable for the simulation of operators' cognitive behavior in emergency situations
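
    A minimal, hypothetical Python sketch of the blackboard pattern named in this abstract (Blackboard, Agents, Controller); it is not the SACOM implementation, and the agents' rules are invented:

```python
class Blackboard:
    """Shared short-term memory that agents read from and write to."""
    def __init__(self):
        self.data = {"alarms": ["high pressurizer pressure"], "hypotheses": [], "actions": []}

class DetectionAgent:
    def can_run(self, bb):  return bool(bb.data["alarms"]) and not bb.data["hypotheses"]
    def run(self, bb):      bb.data["hypotheses"].append("possible loss of coolant")

class PlanningAgent:
    def can_run(self, bb):  return bool(bb.data["hypotheses"]) and not bb.data["actions"]
    def run(self, bb):      bb.data["actions"].append("enter emergency operating procedure")

class Controller:
    """Attentional control: picks one runnable agent per cycle."""
    def __init__(self, agents): self.agents = agents
    def cycle(self, bb):
        for agent in self.agents:
            if agent.can_run(bb):
                agent.run(bb)
                return agent.__class__.__name__
        return None

bb = Blackboard()
controller = Controller([DetectionAgent(), PlanningAgent()])
while (fired := controller.cycle(bb)):
    print(fired, "->", bb.data)
```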

  17. The ATLAS jet trigger performance in LHC Run I and Run II updates

    CERN Document Server

    Shimizu, Shima; The ATLAS collaboration

    2015-01-01

    The Large Hadron Collider (LHC) provides proton-proton collisions at a maximum rate of 40 MHz, and the ATLAS trigger performs the first event selections online during data-taking. The ATLAS jet trigger is an important element of the ATLAS trigger, selecting collision events with jets of high transverse energy, and provides data samples for studies of Standard Model physics and searches for new physics at the LHC. During LHC Run I, the first LHC operation period from 2010 to 2012, the ATLAS jet trigger system was improved as experience was gained with triggering in a high-luminosity, high-pileup environment. For the next LHC operation period, Run II, the system is being updated for further improved performance and stability. In this contribution, the performance and improvements of the ATLAS jet trigger in Run I are presented. Updates for Run II are also shown.

  18. Radionuclide transport in running waters, sensitivity analysis of bed-load, channel geometry and model discretisation

    International Nuclear Information System (INIS)

    Jonsson, Karin; Elert, Mark

    2006-08-01

    In this report, further investigations of the model concept for radionuclide transport in streams, developed in the SKB report TR-05-03, are presented. Three issues in particular have been the focus of the model investigations. The first issue was to investigate the influence of the assumed channel geometry on the simulation results, the second was to reconsider the applicability of the equation for bed-load transport in the stream model, and the last was to investigate how the model discretisation influences the simulation results. The simulations showed that there were relatively small differences in results when applying different cross-sections in the model. The inclusion of the exact shape of the cross-section in the model is therefore not crucial; however, if cross-sectional data exist, the overall shape of the cross-section should be used in the model formulation. This could e.g. be accomplished by using measured values of the stream width and depth in the middle of the stream and by assuming a triangular shape. The bed-load transport was in this study determined for different sediment characteristics, which can be used as an order-of-magnitude estimation if no exact determinations of the bed-load are available. The difference in the calculated bed-load transport for the different materials was, however, found to be limited. The investigation of model discretisation showed that a fine model discretisation to account for numerical effects is probably not important for the performed simulations. However, it can be necessary to account for different conditions along a stream. For example, the application of mean slopes instead of individual values in the different stream reaches can result in very different predicted concentrations

  19. Damage Propagation Modeling for Aircraft Engine Run-to-Failure Simulation

    Data.gov (United States)

    National Aeronautics and Space Administration — This paper describes how damage propagation can be modeled within the modules of aircraft gas turbine engines. To that end, response surfaces of all sensors are...

  20. Improving traffic signal management and operations : a basic service model.

    Science.gov (United States)

    2009-12-01

    This report provides a guide for achieving a basic service model for traffic signal management and operations. The basic service model is based on simply stated and defensible operational objectives that consider the staffing level, expertise and...

  1. Sea Ice Trends in Climate Models Only Accurate in Runs with Biased Global Warming

    Science.gov (United States)

    Rosenblum, Erica; Eisenman, Ian

    2017-08-01

    Observations indicate that the Arctic sea ice cover is rapidly retreating while the Antarctic sea ice cover is steadily expanding. State-of-the-art climate models, by contrast, typically simulate a moderate decrease in both the Arctic and Antarctic sea ice covers. However, in each hemisphere there is a small subset of model simulations that have sea ice trends similar to the observations. Based on this, a number of recent studies have suggested that the models are consistent with the observations in each hemisphere when simulated internal climate variability is taken into account. Here we examine sea ice changes during 1979-2013 in simulations from the most recent Coupled Model Intercomparison Project (CMIP5) as well as the Community Earth System Model Large Ensemble (CESM-LE), drawing on previous work that found a close relationship in climate models between global-mean surface temperature and sea ice extent. We find that all of the simulations with 1979-2013 Arctic sea ice retreat as fast as observed have considerably more global warming than observations during this time period. Using two separate methods to estimate the sea ice retreat that would occur under the observed level of global warming in each simulation in both ensembles, we find that simulated Arctic sea ice retreat as fast as observed would occur less than 1% of the time. This implies that the models are not consistent with the observations. In the Antarctic, we find that simulated sea ice expansion as fast as observed typically corresponds with too little global warming, although these results are more equivocal. We show that because of this, the simulations do not capture the observed asymmetry between Arctic and Antarctic sea ice trends. This suggests that the models may be getting the right sea ice trends for the wrong reasons in both polar regions.
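
    One plausible way to implement the scaling step described above (estimating the sea-ice retreat a simulation would show under the observed level of global warming) is to regress simulated sea-ice extent on simulated global-mean temperature and multiply the resulting sensitivity by the observed warming trend; the Python sketch below uses invented numbers and is not the authors' exact method:

```python
import numpy as np

def ice_trend_at_observed_warming(sim_temp, sim_ice, observed_warming_trend):
    """Scale a simulation's sea-ice trend to the observed level of global warming.

    Regress simulated sea-ice extent on simulated global-mean temperature to obtain a
    sensitivity (million km^2 per degree), then multiply by the observed warming trend.
    """
    sensitivity = np.polyfit(sim_temp, sim_ice, 1)[0]   # d(extent)/d(temperature)
    return sensitivity * observed_warming_trend

# Hypothetical 35-year series of simulated warming and Arctic sea-ice extent
years = np.arange(35)
sim_temp = 0.03 * years + 0.1 * np.random.default_rng(0).normal(size=35)
sim_ice = 7.0 - 2.5 * sim_temp + 0.2 * np.random.default_rng(1).normal(size=35)
print(ice_trend_at_observed_warming(sim_temp, sim_ice, observed_warming_trend=0.17))
```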

  2. Domain-Level Assessment of the Weather Running Estimate-Nowcast (WREN) Model

    Science.gov (United States)

    2016-11-01

    ...contaminant concentration fields resulting from atmospheric boundary layer depth uncertainty... Weather Impacts Decision Aid. WRF is maintained by the National Center for Atmospheric Research, which has developed a suite of Model Evaluation Tools (MET) to evaluate the accuracy of WRF forecasts.

  3. Comparing the performance of SIMD computers by running large air pollution models

    DEFF Research Database (Denmark)

    Brown, J.; Hansen, Per Christian; Wasniewski, J.

    1996-01-01

    To compare the performance and use of three massively parallel SIMD computers, we implemented a large air pollution model on these computers. Using a realistic large-scale model, we gained detailed insight into the performance of the computers involved when used to solve large-scale scientific problems that involve several types of numerical computations. The computers used in our study are the Connection Machines CM-200 and CM-5, and the MasPar MP-2216

  4. Daily operation optimisation of hybrid stand-alone system by model predictive control considering ageing model

    International Nuclear Information System (INIS)

    Dufo-López, Rodolfo; Fernández-Jiménez, L. Alfredo; Ramírez-Rosado, Ignacio J.; Artal-Sevil, J. Sergio; Domínguez-Navarro, José A.; Bernal-Agustín, José L.

    2017-01-01

    Highlights: • Method for optimising the daily operation of photovoltaic-wind-diesel-battery systems. • Weather forecasts of hourly wind speed, irradiation, temperature and load are used. • Each day five control variables are optimised for the control of the system. • Operating cost includes real ageing of the batteries and the diesel generator. • Results show that the optimal control strategy used for each day led to cost savings. - Abstract: This article presents a method for optimising the daily operation (minimising the total operating cost) of a hybrid photovoltaic-wind-diesel-battery system using model predictive control. The model uses actual weather forecasts of hourly values of wind speed, irradiation, temperature and load. Five control variables are optimised, and thus their optimal set points values determine the optimal control strategy for each day. This involves the use of an accurate model for estimating the degradation of the batteries by considering the capacity loss due to corrosion and degradation. The model considers the extra costs of maintaining and replacing the diesel generator due to running out of its optimal conditions. The optimisation is carried out by means of genetic algorithms. An example of application compares the total operating cost obtained using the optimal control strategy for each day with the cost of using the optimal control strategy found for the whole year, obtaining savings of up to 7.8%. Also the comparison with the cost of using the “load following” control strategy is analysed, obtaining savings of up to 37.7%.

  5. Impact of treadmill running and sex on hippocampal neurogenesis in the mouse model of amyotrophic lateral sclerosis.

    Directory of Open Access Journals (Sweden)

    Xiaoxing Ma

    Full Text Available Hippocampal neurogenesis in the subgranular zone (SGZ) of the dentate gyrus (DG) occurs throughout life and is regulated by pathological and physiological processes. The role of oxidative stress in hippocampal neurogenesis and its response to exercise or neurodegenerative diseases remains controversial. The present study was designed to investigate the impact of oxidative stress, treadmill exercise and sex on hippocampal neurogenesis in a murine model of heightened oxidative stress (G93A mice). G93A and wild type (WT) mice were randomized to a treadmill running (EX) or a sedentary (SED) group for 1 or 4 wk. Immunohistochemistry was used to detect bromodeoxyuridine (BrdU)-labeled proliferating cells, surviving cells, and their phenotype, as well as for determination of oxidative stress (3-NT; 8-OHdG). BDNF and IGF1 mRNA expression was assessed by in situ hybridization. Results showed that: (1) G93A-SED mice had greater hippocampal neurogenesis, BDNF mRNA, and 3-NT, as compared to WT-SED mice. (2) Treadmill running promoted hippocampal neurogenesis and BDNF mRNA content and lowered DNA oxidative damage (8-OHdG) in WT mice. (3) Male G93A mice showed significantly higher cell proliferation but a lower level of survival vs. female G93A mice. We conclude that G93A mice show higher hippocampal neurogenesis, in association with higher BDNF expression, yet running did not further enhance these phenomena in G93A mice, probably due to a 'ceiling effect' of already heightened basal levels of hippocampal neurogenesis and BDNF expression.

  6. A description of the FAMOUS (version XDBUA) climate model and control run

    Directory of Open Access Journals (Sweden)

    A. Osprey

    2008-12-01

    Full Text Available FAMOUS is an ocean-atmosphere general circulation model of low resolution, capable of simulating approximately 120 years of model climate per wallclock day using current high performance computing facilities. It uses most of the same code as HadCM3, a widely used climate model of higher resolution and computational cost, and has been tuned to reproduce the same climate reasonably well. FAMOUS is useful for climate simulations where the computational cost makes the application of HadCM3 unfeasible, either because of the length of simulation or the size of the ensemble desired. We document a number of scientific and technical improvements to the original version of FAMOUS. These improvements include changes to the parameterisations of ozone and sea-ice which alleviate a significant cold bias from high northern latitudes and the upper troposphere, and the elimination of volume-averaged drifts in ocean tracers. A simple model of the marine carbon cycle has also been included. A particular goal of FAMOUS is to conduct millennial-scale paleoclimate simulations of Quaternary ice ages; to this end, a number of useful changes to the model infrastructure have been made.

  7. Semi-Automated Landslide Mapping by Using an Expert Based Module Running on GIS Environment: Netcad Architect M-AHP Operator

    Science.gov (United States)

    Kukul, Elvan; Yilmaz, Ezgi; Nefeslioglu, Hakan A.; Sezer, Ebru A.; Toptas, Tunc E.; Celik, Deniz; Orhun, Koray; Osna, Turgay; Ak, Serdar; Gokceoglu, Candan

    2014-05-01

    In the present study, semi-automated landslide mapping of an area located between the cities of Afyon and Usak (western Turkey) was evaluated by using an expert-based modelling operator developed in the Netcad Architect environment. The area suffers considerably from landslides. The main public concern in this respect is the high-speed railway route which will connect Ankara and Izmir. The study was carried out in three main stages: (i) data production, (ii) modelling for semi-automated landslide mapping, and (iii) validation of the constructed models by using the actual landslides observed in the region. The altitude, slope gradient, slope aspect and the second derivative of topography in terms of the topographical wetness index, geology in terms of lithology type, and the normalized difference vegetation index as an environmental factor were evaluated to be the main conditioning factors of the active and old landslides observed in the area. Two expert-based models for mapping active and old landslides were constructed by using the M-AHP operator of the Netcad Architect environment. The resultant maps represent both possible active and old landslide areas which could be encountered throughout the region. According to the results of the modelling stages, almost 7% of the area was found to be active landslide area, and almost 13% of the area constitutes possible old failures in the region. The validation of the constructed models was performed by using the ROC (Receiver Operating Characteristics) curve operator, which was also developed in the Netcad Architect environment. The areas under the ROC curves for the models were calculated to be 0.674 and 0.728 for active and old landslides, respectively. Considering the expert-based nature of the constructed models, these results are promising, and could be used in route selection assessments and suitable site selection for settlement.
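
    The validation step described above reduces to computing the area under the ROC curve from mapped landslide/non-landslide cells and the model's susceptibility scores. A brief Python illustration with hypothetical data (the paper's own AUC values were 0.674 and 0.728):

```python
import numpy as np
from sklearn.metrics import roc_auc_score, roc_curve

# Hypothetical validation data: 1 = mapped landslide cell, 0 = stable cell,
# with the model's landslide-susceptibility score for each cell.
truth = np.array([1, 1, 1, 0, 0, 0, 1, 0, 0, 1])
score = np.array([0.9, 0.7, 0.4, 0.3, 0.2, 0.6, 0.8, 0.1, 0.5, 0.65])

auc = roc_auc_score(truth, score)
fpr, tpr, thresholds = roc_curve(truth, score)
print(f"Area under ROC curve: {auc:.3f}")
```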

  8. Measuring Short- and Long-run Promotional Effectiveness on Scanner Data Using Persistence Modeling

    NARCIS (Netherlands)

    M.G. Dekimpe (Marnik); D.M. Hanssens (Dominique); V.R. Nijs; J-B.E.M. Steenkamp (Jan-Benedict)

    2003-01-01

    The use of price promotions to stimulate brand and firm performance is increasing. We discuss how (i) the availability of longer scanner data time series, and (ii) persistence modeling, have led to greater insights into the dynamic effects of price promotions, as one can now quantify

  9. Renormalization group running of fermion observables in an extended non-supersymmetric SO(10) model

    Energy Technology Data Exchange (ETDEWEB)

    Meloni, Davide [Dipartimento di Matematica e Fisica, Università di Roma Tre,Via della Vasca Navale 84, 00146 Rome (Italy); Ohlsson, Tommy; Riad, Stella [Department of Physics, School of Engineering Sciences,KTH Royal Institute of Technology - AlbaNova University Center,Roslagstullsbacken 21, 106 91 Stockholm (Sweden)

    2017-03-08

    We investigate the renormalization group evolution of fermion masses, mixings and quartic scalar Higgs self-couplings in an extended non-supersymmetric SO(10) model, where the Higgs sector contains the 10_H, 120_H, and 126_H representations. The group SO(10) is spontaneously broken at the GUT scale to the Pati-Salam group and subsequently to the Standard Model (SM) at an intermediate scale M_I. We explicitly take into account the effects of the change of gauge groups in the evolution. In particular, we derive the renormalization group equations for the different Yukawa couplings. We find that the computed physical fermion observables can be successfully matched to the experimentally measured values at the electroweak scale. Using the same Yukawa couplings at the GUT scale, the measured values of the fermion observables cannot be reproduced with a SM-like evolution, leading to differences in the numerical values of up to around 80%. Furthermore, a similar evolution can be performed for a minimal SO(10) model, where the Higgs sector consists of the 10_H and 126_H representations only, showing an equally good potential to describe the low-energy fermion observables. Finally, for both the extended and the minimal SO(10) models, we present predictions for the three Dirac and Majorana CP-violating phases as well as three effective neutrino mass parameters.

  10. Addressing Thermal Model Run Time Concerns of the Wide Field Infrared Survey Telescope using Astrophysics Focused Telescope Assets (WFIRST-AFTA)

    Science.gov (United States)

    Peabody, Hume; Guerrero, Sergio; Hawk, John; Rodriguez, Juan; McDonald, Carson; Jackson, Cliff

    2016-01-01

    The Wide Field Infrared Survey Telescope using Astrophysics Focused Telescope Assets (WFIRST-AFTA) utilizes an existing 2.4 m diameter Hubble sized telescope donated from elsewhere in the federal government for near-infrared sky surveys and Exoplanet searches to answer crucial questions about the universe and dark energy. The WFIRST design continues to increase in maturity, detail, and complexity with each design cycle leading to a Mission Concept Review and entrance to the Mission Formulation Phase. Each cycle has required a Structural-Thermal-Optical-Performance (STOP) analysis to ensure the design can meet the stringent pointing and stability requirements. As such, the models have also grown in size and complexity leading to increased model run time. This paper addresses efforts to reduce the run time while still maintaining sufficient accuracy for STOP analyses. A technique was developed to identify slews between observing orientations that were sufficiently different to warrant recalculation of the environmental fluxes, so as to reduce the total number of radiation calculation points. The inclusion of a cryocooler fluid loop in the model also forced smaller time-steps than desired, which greatly increased the overall run time. The analysis of this fluid model required mitigation to drive the run time down by solving portions of the model at different time scales. Lastly, investigations were made into the impact of the removal of small radiation couplings on run time and accuracy. Use of these techniques allowed the models to produce meaningful results within reasonable run times to meet project schedule deadlines.

  11. DEVELOPMENT OF A COLORFUL BALLS RUN GAME LEARNING MODEL FOR MOTION REACTION IN MENTALLY DISABLED CHILDREN AT SLB NEGERI SEMARANG IN 2015

    Directory of Open Access Journals (Sweden)

    Rahadian Yodha Bhakti

    2016-02-01

    Full Text Available The purpose of this study was to determine the product of the development of the Colorful Balls Run learning game for motion reaction of grade V mentally disabled children of SLB Negeri Semarang in the 2015 academic year. This is research and development (R & D) work, consisting of 10 research steps: identifying the potential and problems, data collection, product design, design validation, design revision, product testing, product revision, trial use, final product testing, and mass production. The average score obtained from the physical education teaching experts was 80% (good) and from the learning experts 92% (very good). The results of trial I (small group) were 83.53% (good) for cognitive aspects, 82.10% (good) for affective aspects, and 81.39% (good) for psychomotor aspects, with an average of 82.34% (good). The results of trial II (large group) were 85.14% (good) for cognitive aspects, 83.76% (good) for affective aspects, and 83.07% (good) for psychomotor aspects, with an average of 83.99% (good). It was concluded that the developed Colorful Balls Run game model can be used as an alternative way to learn sport, especially small-ball games, for grade V students of SLB Negeri Semarang.

  12. Preliminary Findings of the South Africa Power System Capacity Expansion and Operational Modelling Study: Preprint

    Energy Technology Data Exchange (ETDEWEB)

    Reber, Timothy J [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Chartan, Erol Kevin [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Brinkman, Gregory L [National Renewable Energy Laboratory (NREL), Golden, CO (United States)

    2018-01-01

    Wind and solar power contract prices have recently become cheaper than many conventional new-build alternatives in South Africa and trends suggest a continued increase in the share of variable renewable energy (vRE) on South Africa's power system with coal technology seeing the greatest reduction in capacity, see 'Figure 6: Percentage share by Installed Capacity (MW)' in [1]. Hence it is essential to perform a state-of-the-art grid integration study examining the effects of these high penetrations of vRE on South Africa's power system. Under the 21st Century Power Partnership (21CPP), funded by the U.S. Department of Energy, the National Renewable Energy Laboratory (NREL) has significantly augmented existing models of the South African power system to investigate future vRE scenarios. NREL, in collaboration with Eskom's Planning Department, further developed, tested and ran a combined capacity expansion and operational model of the South African power system including spatially disaggregated detail and geographical representation of system resources. New software to visualize and interpret modelling outputs has been developed, and scenario analysis of stepwise vRE build targets reveals new insight into associated planning and operational impacts and costs. The model, built using PLEXOS, is split into two components, firstly a capacity expansion model and secondly a unit commitment and economic dispatch model. The capacity expansion model optimizes new generation decisions to achieve the lowest cost, with a full understanding of capital cost and an approximated understanding of operational costs. The operational model has a greater set of detailed operational constraints and is run at daily resolutions. Both are run from 2017 through 2050. This investigation suggests that running both models in tandem may be the most effective means to plan the least cost South African power system as build plans seen to be more expensive than optimal by the

  13. Quark flavour observables in the Littlest Higgs model with T-parity after LHC Run 1.

    Science.gov (United States)

    Blanke, Monika; Buras, Andrzej J; Recksiegel, Stefan

    2016-01-01

    The Littlest Higgs model with T-parity (LHT) belongs to the simplest new physics scenarios with new sources of flavour and CP violation. The latter originate in the interactions of ordinary quarks and leptons with heavy mirror quarks and leptons that are mediated by new heavy gauge bosons. Also a heavy fermionic top partner is present in this model which communicates with the SM fermions by means of standard [Formula: see text] and [Formula: see text] gauge bosons. We present a new analysis of quark flavour observables in the LHT model in view of the oncoming flavour precision era. We use all available information on the CKM parameters, lattice QCD input and experimental data on quark flavour observables and corresponding theoretical calculations, taking into account new lower bounds on the symmetry breaking scale and the mirror quark masses from the LHC. We investigate by how much the branching ratios for a number of rare K and B decays are still allowed to depart from their SM values. This includes [Formula: see text], [Formula: see text], [Formula: see text], [Formula: see text], [Formula: see text], [Formula: see text], [Formula: see text], and [Formula: see text]. Taking into account the constraints from [Formula: see text] processes, significant departures from the SM predictions for [Formula: see text] and [Formula: see text] are possible, while the effects in B decays are much smaller. In particular, the LHT model favours [Formula: see text], which is not supported by the data, and the present anomalies in [Formula: see text] decays cannot be explained in this model. With the recent lattice and large N input the imposition of the [Formula: see text] constraint implies a significant suppression of the branching ratio for [Formula: see text] with respect to its SM value while allowing only for small modifications of [Formula: see text]. Finally, we investigate how the LHT physics could be distinguished from other models by means of indirect measurements and

  14. Quark flavour observables in the Littlest Higgs model with T-parity after LHC Run 1

    CERN Document Server

    Blanke, Monika; Recksiegel, Stefan

    2016-04-02

    The Littlest Higgs Model with T-parity (LHT) belongs to the simplest new physics scenarios with new sources of flavour and CP violation. We present a new analysis of quark observables in the LHT model in view of the oncoming flavour precision era. We use all available information on the CKM parameters, lattice QCD input and experimental data on quark flavour observables and corresponding theoretical calculations, taking into account new lower bounds on the symmetry breaking scale and the mirror quark masses from the LHC. We investigate by how much the branching ratios for a number of rare $K$ and $B$ decays are still allowed to depart from their SM values. This includes $K^+\\to\\pi^+\

  15. The effect of treadmill running on passive avoidance learning in animal model of Alzheimer disease

    OpenAIRE

    Nasrin Hosseini; Hojjatallah Alaei; Parham Reisi; Maryam Radahmadi

    2013-01-01

    Background: Alzheimer's disease is known as a progressive neurodegenerative disorder in the elderly and is characterized by dementia and severe neuronal loss in some regions of the brain, such as the nucleus basalis magnocellularis, which plays an important role in brain functions such as learning and memory. Loss of the cholinergic neurons of the nucleus basalis magnocellularis by ibotenic acid is commonly regarded as a suitable model of Alzheimer's disease. Previous studies reported that exercise...

  16. Design of ProjectRun21

    DEFF Research Database (Denmark)

    Damsted, Camma; Parner, Erik Thorlund; Sørensen, Henrik

    2017-01-01

    training, the runners' running experience and pace abilities can be used as estimates for load capacity. Since no evidence-based knowledge exists on how to plan appropriate half-marathon running schedules considering the level of running experience and running pace, the aim of ProjectRun21 is to investigate...... of three half-marathon running schedules developed for the study. Running data will be collected objectively by GPS. Injury will be based on the consensus-based time-loss definition by Yamato et al.: "Running-related (training or competition) musculoskeletal pain in the lower limbs that causes...... the exposure to running is pre-fixed in the running schedules and thereby conditioned by design. Time-to-event models will be used for analytical purposes. DISCUSSION: ProjectRun21 will examine if particular subgroups of runners with certain running experiences and running paces seem to sustain more running...

  17. Influential factors of red-light running at signalized intersection and prediction using a rare events logistic regression model.

    Science.gov (United States)

    Ren, Yilong; Wang, Yunpeng; Wu, Xinkai; Yu, Guizhen; Ding, Chuan

    2016-10-01

    Red-light running (RLR) has become a major safety concern at signalized intersections. To prevent RLR-related crashes, it is critical to identify the factors that significantly affect drivers' RLR behaviors and to predict potential RLR in real time. In this research, nine months of RLR events extracted from high-resolution traffic data collected by loop detectors at three signalized intersections were used to identify the factors that significantly affect RLR behaviors. The data analysis indicated that occupancy time, time gap, used yellow time, time left to yellow start, whether the preceding vehicle runs through the intersection during yellow, and whether there is a vehicle passing through the intersection in the adjacent lane were significant factors for RLR behaviors. Furthermore, due to the rare-events nature of RLR, a modified rare-events logistic regression model was developed for RLR prediction. The rare-events logistic regression method has been applied in many fields for rare-events studies and shows impressive performance, but so far no previous research has applied this method to study RLR. The results showed that the rare-events logistic regression model performed significantly better than the standard logistic regression model. More importantly, the proposed RLR prediction method is based purely on data from a single advance loop detector located 400 feet upstream of the stop bar. This brings great potential for future field applications of the proposed method, since loops have been widely implemented at many intersections and can collect data in real time. This research is expected to contribute significantly to the improvement of intersection safety. Copyright © 2016 Elsevier Ltd. All rights reserved.
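    The abstract does not give the authors' exact modification, so the sketch below illustrates one common rare-events treatment (a King-Zeng style prior correction of the intercept after fitting on an event-enriched subsample), applied to synthetic data; feature names, coefficients and the 5% sampling fraction are all invented.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)

# Hypothetical per-vehicle features at yellow onset: occupancy time (s),
# time gap (s), time left to yellow start (s), preceding-vehicle-ran flag.
n = 20000
X = np.column_stack([
    rng.exponential(0.5, n),
    rng.exponential(2.0, n),
    rng.uniform(0.0, 5.0, n),
    rng.integers(0, 2, n),
])
logit = -5.0 + 1.2 * X[:, 0] - 0.6 * X[:, 1] - 0.3 * X[:, 2] + 0.8 * X[:, 3]
y = rng.uniform(size=n) < 1.0 / (1.0 + np.exp(-logit))   # rare outcome

tau = y.mean()                         # population event rate (rare)

# Case-control style subsample: keep all events plus a fraction of non-events,
# fit an ordinary (regularised) logistic regression, then correct the intercept
# for the distorted base rate (prior correction).
keep = y | (rng.uniform(size=n) < 0.05)
Xs, ys = X[keep], y[keep]
model = LogisticRegression(max_iter=1000).fit(Xs, ys)

ybar = ys.mean()                       # event rate in the subsample
correction = np.log(((1 - tau) / tau) * (ybar / (1 - ybar)))
b0_corr = model.intercept_[0] - correction

def predict_rlr_probability(x_row):
    z = b0_corr + model.coef_[0] @ x_row
    return 1.0 / (1.0 + np.exp(-z))

print("sample rate:", round(float(ybar), 4), "population rate:", round(float(tau), 4))
print("P(RLR) for one new vehicle:", round(predict_rlr_probability(X[0]), 4))
```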

  18. On the energetics of quadrupedal running: predicting the metabolic cost of transport via a flexible-torso model.

    Science.gov (United States)

    Cao, Qu; Poulakakis, Ioannis

    2015-09-03

    In this paper, the effect of torso flexibility on the energetics of quadrupedal bounding is examined in a template setting. Two reductive sagittal-plane models, one with a rigid, non-deformable torso and one with a flexible, unactuated torso are proposed. Both models feature non-trivial leg mass and inertia to capture the energy associated with repositioning the legs after liftoff as well as the energy lost due to impacts. Bounding motions that minimize the cost of transport are generated for both models via a simple controller that coordinates leg recirculation. Comparisons reveal that torso compliance promotes locomotion efficiency by facilitating leg recirculation in anticipation of touchdown at speeds that are sufficiently high. Furthermore, by considering non-ideal torque generating and compliant elements with biologically reasonable efficiency values, it is shown that the flexible-torso model can predict the metabolic cost of transport for different animals, estimated using measurements of oxygen consumption. This way, the proposed model offers a means for approximating the energetic cost of transport of running quadrupeds in a simple and direct fashion.

  19. Fast Atmosphere-Ocean Model Runs with Large Changes in CO2

    Science.gov (United States)

    Russell, Gary L.; Lacis, Andrew A.; Rind, David H.; Colose, Christopher; Opstbaum, Roger F.

    2013-01-01

    How does climate sensitivity vary with the magnitude of climate forcing? This question was investigated with the use of a modified coupled atmosphere-ocean model, whose stability was improved so that the model would accommodate large radiative forcings yet be fast enough to reach equilibrium rapidly. Experiments were performed in which atmospheric CO2 was multiplied by powers of 2, from 1/64 to 256 times the 1950 value. From 8 to 32 times the 1950 CO2, climate sensitivity for doubling CO2 reaches 8°C due to increases in water vapor absorption and cloud top height and to reductions in low-level cloud cover. As the CO2 amount increases further, sensitivity drops as cloud cover and planetary albedo stabilize. No water-vapor-induced runaway greenhouse caused by increased CO2 was found for the range of CO2 examined. With CO2 at or below 1/8 of the 1950 value, runaway sea ice does occur as the planet cascades to a snowball Earth climate with fully ice-covered oceans and global mean surface temperatures near −30°C.
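    The coupled model resolves feedbacks explicitly, which is exactly why its sensitivity varies with the forcing; for contrast, the sketch below tabulates the widely used simplified logarithmic CO2 forcing (ΔF ≈ 5.35 ln(C/C0) W/m², Myhre et al.) with a fixed, assumed sensitivity parameter over the same 1/64x to 256x ladder. The 1950 concentration and the λ value are placeholders, and the constant-λ scaling is precisely what the experiments show breaks down.

```python
import numpy as np

# Simplified logarithmic CO2 forcing: dF = 5.35 * ln(C/C0) W/m^2.
C0 = 310.0          # approximate 1950 CO2 concentration, ppm (assumed)
lam = 0.8           # illustrative fixed sensitivity parameter, K per W/m^2

for exponent in range(-6, 9):           # 1/64x .. 256x the 1950 value
    ratio = 2.0 ** exponent
    forcing = 5.35 * np.log(ratio)      # W/m^2 relative to 1950
    delta_t = lam * forcing             # equilibrium warming under constant lambda
    print(f"{ratio:8.4f} x CO2  ->  dF = {forcing:7.2f} W/m^2,  dT ~ {delta_t:6.1f} K")
```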

  20. Task network modeling of human operators in nuclear power plant control rooms

    International Nuclear Information System (INIS)

    Laughery, K.R.; Plott, C.

    1990-01-01

    Studying nuclear power plant operators is expensive and often impossible. There are a limited number of operators, whose time is in great demand, and a limited number of simulators or plants in which experimentation could be conducted. Yet we often need to make predictions about operator behavior in the control room. Whenever the control room changes (e.g., panel or system modifications) or the plant procedures change, there is an essential need to predict how these changes will impact operator performance and, ultimately, plant performance. The question is, if we can't study real operators, what are the alternatives? The human engineering community within the nuclear industry needs to develop predictive methods. The approaches proposed in this paper, task network modeling and Micro SAINT, present promising opportunities. First, they represent a logical outgrowth of current technologies such as task analysis and probabilistic risk analysis. Second, there are data from which these models can be constructed. However, before any extended investment is made in this technology, it should be proven. A modest study could be run to directly address the issues defining the utility of this technology: to determine whether we can collect the required data to build the models and whether the models can be used to accurately predict changes in operator performance. From the answers to these questions and the lessons learned in addressing them, future control-room operator modeling research and development can be better directed.
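    In the spirit of a Micro SAINT task network, the sketch below Monte Carlo simulates a tiny operator procedure with stochastic task durations and probabilistic branching; the task names, durations and branch probabilities are invented and do not come from the paper or from Micro SAINT itself.

```python
import random

# Toy task network: each task has a noisy duration and probabilistic successors.
NETWORK = {
    "detect_alarm":   {"mean": 2.0,  "sd": 0.5, "next": [("diagnose", 1.0)]},
    "diagnose":       {"mean": 15.0, "sd": 5.0, "next": [("correct_action", 0.9),
                                                         ("wrong_action", 0.1)]},
    "wrong_action":   {"mean": 8.0,  "sd": 2.0, "next": [("diagnose", 1.0)]},
    "correct_action": {"mean": 5.0,  "sd": 1.0, "next": []},
}

def sample_duration(task):
    # Truncate at a small positive value so durations stay physical.
    return max(0.1, random.gauss(task["mean"], task["sd"]))

def run_once(start="detect_alarm"):
    t, current = 0.0, start
    while current:
        task = NETWORK[current]
        t += sample_duration(task)
        r, cumulative, current = random.random(), 0.0, None
        for successor, p in task["next"]:
            cumulative += p
            if r < cumulative:
                current = successor
                break
    return t

random.seed(42)
times = sorted(run_once() for _ in range(10000))
print("median completion time:", round(times[len(times) // 2], 1), "s")
print("95th percentile:       ", round(times[int(0.95 * len(times))], 1), "s")
```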

  1. Search for the Trilepton Signal of the Minimal Supergravity Model in D0 Run II

    Energy Technology Data Exchange (ETDEWEB)

    Binder, Meta [Munich Univ. (Germany)

    2005-06-01

    A search for associated chargino-neutralino pair production is performed in the trilepton decay channel $q\bar{q} \to \tilde{\chi}^{\pm}_{1}\tilde{\chi}^{0}_{2} \to \ell^{\pm}\nu\tilde{\chi}^{0}_{1}\,\mu^{\pm}\mu^{\mp}\tilde{\chi}^{0}_{1}$, using data collected with the D0 detector at a center-of-mass energy of 1.96 TeV at the Fermilab Tevatron Collider. The data sample corresponds to an integrated luminosity of ~300 pb$^{-1}$. A dedicated event selection is applied to all samples, including the data sample and the Monte Carlo simulated samples for the Standard Model background and the Supersymmetry signal. Events with two muons plus an additional isolated track, replacing the requirement of a third charged lepton in the event, are analyzed. Additionally, selected events must have a large amount of missing transverse energy due to the neutrino and the two $\tilde{\chi}^{0}_{1}$. After all selection cuts are applied, 2 data events are found, with an expected number of background events of 1.75 ± 0.34 (stat.) ± 0.46 (syst.). No evidence for Supersymmetry is found and limits on the production cross section times leptonic branching fraction are set. When the presented analysis is considered in combination with three other decay channels, no evidence for Supersymmetry is found and limits on the production cross section times leptonic branching fraction are set. A lower chargino mass limit of 117 GeV at 95% CL is then derived for the mSUGRA model in a region of parameter space with enhanced leptonic branching fractions.

  2. Running Club

    CERN Multimedia

    Running Club

    2010-01-01

    The 2010 edition of the annual CERN Road Race will be held on Wednesday 29th September at 18h. The 5.5km race takes place over 3 laps of a 1.8 km circuit in the West Area of the Meyrin site, and is open to everyone working at CERN and their families. There are runners of all speeds, with times ranging from under 17 to over 34 minutes, and the race is run on a handicap basis, by staggering the starting times so that (in theory) all runners finish together. Children (< 15 years) have their own race over 1 lap of 1.8km. As usual, there will be a “best family” challenge (judged on best parent + best child). Trophies are awarded in the usual men’s, women’s and veterans’ categories, and there is a challenge for the best age/performance. Every adult will receive a souvenir prize, financed by a registration fee of 10 CHF. Children enter free (each child will receive a medal). More information, and the online entry form, can be found at http://cern.ch/club...

  3. RUN COORDINATION

    CERN Multimedia

    Christophe Delaere

    2012-01-01

    On Wednesday 14 March, the machine group successfully injected beams into the LHC for the first time this year. Within 48 hours they managed to ramp the beams to 4 TeV and proceeded to squeeze to β*=0.6 m, settings that have been used routinely since then. This brought to an end the CMS Cosmic Run at ~Four Tesla (CRAFT), during which we collected 800k cosmic ray events with a track crossing the central Tracker. That sample has since been topped up to two million, allowing further refinements of the Tracker Alignment. The LHC started delivering the first collisions on 5 April with two bunches colliding in CMS, giving a pile-up of ~27 interactions per crossing at the beginning of the fill. Since then the machine has increased the number of colliding bunches to reach 1380 bunches and peak instantaneous luminosities around 6.5×10³³ at the beginning of fills. The average bunch charges reached ~1.5×10¹¹ protons per bunch, which results in an initial pile-up of ~30 interactions per crossing. During the ...

  4. RUN COORDINATION

    CERN Multimedia

    C. Delaere

    2012-01-01

    With the analysis of the first 5 fb⁻¹ culminating in the announcement of the observation of a new particle with mass of around 126 GeV/c², the CERN directorate decided to extend the LHC run until February 2013. This adds three months to the original schedule. Since then the LHC has continued to perform extremely well, and the total luminosity delivered so far this year is 22 fb⁻¹. CMS also continues to perform excellently, recording data with efficiency higher than 95% for fills with the magnetic field at nominal value. The highest instantaneous luminosity achieved by LHC to date is 7.6×10³³ cm⁻²s⁻¹, which translates into 35 interactions per crossing. On the CMS side there has been a lot of work to handle these extreme conditions, such as a new DAQ computer farm and trigger menus to handle the pile-up, automation of recovery procedures to minimise the lost luminosity, better training for the shift crews, etc. We did suffer from a couple of infrastructure ...

  5. Developing Operator Models for UAV Search Scheduling

    NARCIS (Netherlands)

    Bertuccelli, L.F.; Beckers, N.W.M.; Cummings, M.L.

    2010-01-01

    With the increased use of Unmanned Aerial Vehicles (UAVs), it is envisioned that UAV operators will become high level mission supervisors, responsible for information management and task planning. In the context of search missions, operators supervising a large number of UAVs can become overwhelmed

  6. Regional on-road vehicle running emissions modeling and evaluation for conventional and alternative vehicle technologies.

    Science.gov (United States)

    Frey, H Christopher; Zhai, Haibo; Rouphail, Nagui M

    2009-11-01

    This study presents a methodology for estimating high-resolution, regional on-road vehicle emissions and the associated reductions in air pollutant emissions from vehicles that utilize alternative fuels or propulsion technologies. The fuels considered are gasoline, diesel, ethanol, biodiesel, compressed natural gas, hydrogen, and electricity. The technologies considered are internal combustion or compression engines, hybrids, fuel cell, and electric. Road link-based emission models are developed using modal fuel use and emission rates applied to facility- and speed-specific driving cycles. For an urban case study, passenger cars were found to be the largest sources of HC, CO, and CO2 emissions, whereas trucks contributed the largest share of NOx emissions. When alternative fuel and propulsion technologies were introduced in the fleet at a modest market penetration level of 27%, their emission reductions were found to be 3-14%. Emissions for all pollutants generally decreased with an increase in the market share of alternative vehicle technologies. Turnover of the light duty fleet to newer Tier 2 vehicles reduced emissions of HC, CO, and NOx substantially. However, modest improvements in fuel economy may be offset by VMT growth and reductions in overall average speed.
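    A minimal version of the link-based calculation described above: modal emission rates are applied to the seconds a link-specific driving cycle spends in each mode and scaled by traffic volume. All rates, cycle splits and volumes below are invented placeholders, not values from the study.

```python
# Toy road-link NOx estimate from modal emission rates (g/s) and a driving cycle.
MODAL_NOX_RATE = {"idle": 0.002, "cruise": 0.010, "accel": 0.035, "decel": 0.004}  # g/s

def link_emissions_grams(cycle_seconds_by_mode, vehicles_per_hour, hours=1.0):
    per_vehicle = sum(MODAL_NOX_RATE[m] * s for m, s in cycle_seconds_by_mode.items())
    return per_vehicle * vehicles_per_hour * hours

# A 60 s traversal of an arterial link under moderate congestion (assumed split).
arterial_cycle = {"idle": 12, "cruise": 30, "accel": 10, "decel": 8}
print("NOx on link:", round(link_emissions_grams(arterial_cycle, 900), 1), "g/h")
```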

  7. Algebraic modeling and thermodynamic design of fan-supplied tube-fin evaporators running under frosting conditions

    International Nuclear Information System (INIS)

    Ribeiro, Rafael S.; Hermes, Christian J.L.

    2014-01-01

    In this study, the method of entropy generation minimization (i.e., design aimed at facilitating heat, mass and fluid flows) is used to assess the evaporator design (aspect ratio and fin density) considering the thermodynamic losses due to heat and mass transfer and viscous flow processes. A fully algebraic model was put forward to simulate the thermal-hydraulic behavior of tube-fin evaporator coils running under frosting conditions. The model predictions were validated against experimental data, showing good agreement between calculated and measured counterparts. The optimization exercise has pointed out that high-aspect-ratio heat exchanger designs lead to lower entropy generation in cases of fixed cooling capacity and air flow rate constrained by the characteristic curve of the fan. - Highlights: • An algebraic model for frost accumulation on tube-fin heat exchangers was advanced. • Model predictions for cooling capacity and air flow rate were compared with experimental data, with errors within a ±5% band. • The minimum entropy generation criterion was used to optimize the evaporator geometry. • Thermodynamic analysis led to slender designs for fixed cooling capacity and fan characteristics

  8. Development of operator thinking model and its application to nuclear reactor plant operation system

    International Nuclear Information System (INIS)

    Miki, Tetsushi; Endou, Akira; Himeno, Yoshiaki

    1992-01-01

    First, this paper presents the method used to develop an operator thinking model and the outline of the developed model. Next, it describes the nuclear reactor plant operation system which has been developed based on this model. Finally, it is confirmed that the method described in this paper is very effective for constructing expert systems which replace the reactor operator's role with AI (artificial intelligence) systems. (author)

  9. Queueing Network Models for Parallel Processing of Task Systems: an Operational Approach

    Science.gov (United States)

    Mak, Victor W. K.

    1986-01-01

    Computer performance modeling of possibly complex computations running on highly concurrent systems is considered. Earlier works in this area either dealt with a very simple program structure or resulted in methods with exponential complexity. An efficient procedure is developed to compute the performance measures for series-parallel-reducible task systems using queueing network models. The procedure is based on the concept of hierarchical decomposition and a new operational approach. Numerical results for three test cases are presented and compared to those of simulations.
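    The paper's efficient procedure for series-parallel-reducible task systems is not reproduced here; the sketch below only illustrates the elementary operational laws (the Utilization law and Little's law, plus a single-server response-time approximation) that such an operational approach builds on, applied to a hypothetical two-device model with invented service demands.

```python
# Operational-analysis sketch for a hypothetical two-device system.

def device_metrics(throughput, service_demand):
    """throughput: jobs/s reaching the device; service_demand: s of work per job."""
    utilization = throughput * service_demand            # Utilization law: U = X * D
    if utilization >= 1.0:
        raise ValueError("device saturated")
    response = service_demand / (1.0 - utilization)      # open single-server approximation
    queue_len = throughput * response                    # Little's law: N = X * R
    return utilization, response, queue_len

system_throughput = 8.0                                  # jobs/s offered to the system
for name, demand in [("cpu", 0.05), ("disk", 0.10)]:
    u, r, n = device_metrics(system_throughput, demand)
    print(f"{name}: U={u:.2f}  R={r*1000:.1f} ms  N={n:.2f}")
```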

  10. Tactical Medical Logistics Planning Tool: Modeling Operational Risk Assessment

    National Research Council Canada - National Science Library

    Konoske, Paula

    2004-01-01

    ...) models the patient flow from the point of injury through more definitive care, and (2) supports operations research and systems analysis studies, operational risk assessment, and field medical services planning. TML+...

  11. MODEL OF AIRCRAFT ELECTRICAL POWER SUPPLY SYSTEM CHANNEL OF ALTERNATIVE CURRENT RUNNING ON A GENERALIZED UNBALANCED THREE-PHASE LOAD

    Directory of Open Access Journals (Sweden)

    Aleksej Gennad'evich Demchenko

    2017-01-01

    Full Text Available This article is devoted to mathematical modeling of the channel of an AC on-board power supply system (PSS) running on a static active-inductive load connected in "wye with neutral" and "delta" configurations. The mathematical models of the aircraft synchronous generator, the electricity distribution system, and the three-phase static active-inductive load are considered. For the mathematical description, the author used the equations for the voltages of the windings and the flux linkages of the stator and rotor circuits of the generator in a stationary system of phase coordinates "ABC". For the mathematical model of the distribution system, equations were used that take into account the voltage drops on the active and inductive resistances of the distribution system power wires. For the mathematical models of the three-phase static loads connected in "wye with neutral" and "delta", equations were used that take into account the voltage drops on the active and inductive resistances of the loads. The system of matrix equations of the AC PSS channel running on a generalized three-phase static active-inductive load was obtained. The three-phase static load connected according to the "delta" scheme was converted to "wye" to simplify the solution of the system of matrix circuit equations of the AC PSS channel. The phase coordinate system "ABC" was chosen for the mathematical description of the generator, the distribution system and the static load because of its advantage over the coordinate system "dq": equations written in phase coordinates are valid for both symmetric and asymmetric modes of the generator, while equations written in the coordinate system "dq" are valid only for symmetric modes. As a result of the joint solution of the equations of the generator, the distribution system and the three-phase static loads, formulae were obtained for the generator stator winding phases, the generator phase currents, and the voltage drops on the load

  12. Performance of the ATLAS Tau Trigger in Run-II

    CERN Document Server

    Ikai, Takashi; The ATLAS collaboration

    2016-01-01

    As proton-proton collisions at the LHC reach instantaneous luminosities of over 10^{34} cm^{-2} s^{-1}, tau trigger operation becomes more challenging. The hadronic tau trigger plays an important role and is used to measure the Yukawa coupling constant and to search for physics beyond the Standard Model. This contribution presents the tau trigger system, its operation and performance in Run 2, and the strategy for the future.

  13. Modelling field scale spatial variation in water run-off, soil moisture, N2O emissions and herbage biomass of a grazed pasture using the SPACSYS model.

    Science.gov (United States)

    Liu, Yi; Li, Yuefen; Harris, Paul; Cardenas, Laura M; Dunn, Robert M; Sint, Hadewij; Murray, Phil J; Lee, Michael R F; Wu, Lianhai

    2018-04-01

    In this study, we evaluated the ability of the SPACSYS model to simulate water run-off, soil moisture, N2O fluxes and grass growth using data generated from a field of the North Wyke Farm Platform. The field-scale model is adapted via a linked and grid-based approach (grid-to-grid) to account for not only temporal dynamics but also the within-field spatial variation in these key ecosystem indicators. Spatial variability in nutrient and water presence at the field-scale is a key source of uncertainty when quantifying nutrient cycling and water movement in an agricultural system. Results demonstrated that the new spatially distributed version of SPACSYS provided a worthy improvement in accuracy over the standard (single-point) version for biomass productivity. No difference in model prediction performance was observed for water run-off, reflecting the closed-system nature of this variable. Similarly, no difference in model prediction performance was found for N2O fluxes, but here the N2O predictions were noticeably poor in both cases. Further developmental work, informed by this study's findings, is proposed to improve model predictions for N2O. Soil moisture results with the spatially distributed version appeared promising but this promise could not be objectively verified.

  14. Ubuntu Up and Running

    CERN Document Server

    Nixon, Robin

    2010-01-01

    Ubuntu for everyone! This popular Linux-based operating system is perfect for people with little technical background. It's simple to install, and easy to use -- with a strong focus on security. Ubuntu: Up and Running shows you the ins and outs of this system with a complete hands-on tour. You'll learn how Ubuntu works, how to quickly configure and maintain Ubuntu 10.04, and how to use this unique operating system for networking, business, and home entertainment. This book includes a DVD with the complete Ubuntu system and several specialized editions -- including the Mythbuntu multimedia re

  15. Data Scaling for Operational Risk Modelling

    NARCIS (Netherlands)

    H.S. Na; L. Couto Miranda; J.H. van den Berg (Jan); M. Leipoldt

    2006-01-01

    In 2004, the Basel Committee on Banking Supervision defined Operational Risk (OR) as the risk of loss resulting from inadequate or failed internal processes, people and systems or from external events. After publication of the new capital accord containing this definition, statistical

  16. Modeling the Coordinated Operation between Bus Rapid Transit and Bus

    Directory of Open Access Journals (Sweden)

    Jiaqing Wu

    2015-01-01

    Full Text Available The coordination between bus rapid transit (BRT) and feeder bus service is helpful in improving the operational efficiency and service level of an urban public transport system. Therefore, a coordinated operation model of BRT and bus is developed in this paper. The total costs are formulated and optimized by a genetic algorithm. Moreover, skip-stop BRT operation is considered when building the coordinated operation model. A case of the existing bus network in Beijing is studied, the proposed coordinated operation model of BRT and bus is applied, and the optimized headways and costs are obtained. The results show that the coordinated operation model could effectively decrease the total costs of the transit system and the transfer time of passengers. The results also suggest that coordination between skip-stop BRT and bus during peak hours is more effective than non-coordinated operation.
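    The sketch below illustrates the kind of cost trade-off being optimized, with an exhaustive grid search standing in for the paper's genetic algorithm and its richer formulation (including skip-stop BRT). Demands, cost coefficients and the crude timed-transfer discount are all invented.

```python
import itertools

BRT_DEMAND, BUS_DEMAND, TRANSFER_DEMAND = 3000, 1200, 600   # passengers per hour
VALUE_OF_TIME = 10.0 / 3600.0                               # $ per passenger-second
VEH_COST_BRT, VEH_COST_BUS = 120.0, 60.0                    # $ per vehicle-hour
CYCLE_BRT, CYCLE_BUS = 3600.0, 2700.0                       # round-trip times, s

def total_cost(h_brt, h_bus):
    # Operator cost: fleet size = cycle time / headway.
    operator = (CYCLE_BRT / h_brt) * VEH_COST_BRT + (CYCLE_BUS / h_bus) * VEH_COST_BUS
    # Passenger waiting cost at first boarding (mean wait = half a headway).
    waiting = VALUE_OF_TIME * (BRT_DEMAND * h_brt + BUS_DEMAND * h_bus) / 2.0
    # Feeder-to-BRT transfer wait; headways in an integer ratio are rewarded
    # with a crude timed-transfer discount.
    timed = (h_bus % h_brt == 0) or (h_brt % h_bus == 0)
    transfer = VALUE_OF_TIME * TRANSFER_DEMAND * (0.2 if timed else 0.5) * h_brt
    return operator + waiting + transfer

headways = [60, 90, 120, 150, 180, 240, 300, 360]           # candidate headways, s
best = min(itertools.product(headways, headways), key=lambda h: total_cost(*h))
print("best (BRT, bus) headways [s]:", best,
      "-> total cost $%.0f per hour" % total_cost(*best))
```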

  17. Evaluation of operational on-line-coupled regional air quality models over Europe and North America in the context of AQMEII phase 2. Part I: Ozone

    NARCIS (Netherlands)

    Im, U.; Bianconi, R.; Solazzo, E.; Kioutsioukis, I.; Badia, A.; Balzarini, A.; Baró, R.; Bellasio, R.; Brunner, D.; Chemel, C.; Curci, G.; Flemming, J.; Forkel, R.; Giordano, L.; Jiménez-Guerrero, P.; Hirtl, M.; Hodzic, A.; Honzak, L.; Jorba, O.; Knote, C.; Kuenen, J.J.P.; Makar, P.A.; Manders-Groot, A.; Neal, L.; Pérez, J.L.; Pirovano, G.; Pouliot, G.; San Jose, R.; Savage, N.; Schroder, W.; Sokhi, R.S.; Syrakov, D.; Torian, A.; Tuccella, P.; Werhahn, J.; Wolke, R.; Yahya, K.; Zabkar, R.; Zhang, Y.; Zhang, J.; Hogrefe, C.; Galmarini, S.

    2015-01-01

    The second phase of the Air Quality Model Evaluation International Initiative (AQMEII) brought together sixteen modeling groups from Europe and North America, running eight operational online-coupled air quality models over Europe and North America on common emissions and boundary conditions. With

  18. Evaluation of operational online-coupled regional air quality models over Europe and North America in the context of AQMEII phase 2. Part II: Particulate matter

    NARCIS (Netherlands)

    Im, U.; Bianconi, R.; Solazzo, E.; Kioutsioukis, I.; Badia, A.; Balzarini, A.; Baro, R.; Bellasio, R.; Brunner, D.; Chemel, C.; Curci, G.; Denier van der Gon, H.A.C.; Flemming, J.; Forkel, R.; Giordano, L.; Jimenez-Guerrero, P.; Hirtl, M.; Hodzic, A.; Honzak, L.; Jorba, O.; Knote, C.; Makar, P.A.; Manders-Groot, A.M.M.; Neal, L.; Perez, J.L.; Pirovano, G.; Pouliot, G.; San Jose, R.; Savage, N.; Schroder, W.; Sokhi, R.S.; Syrakov, D.; Torian, A.; Tucella, P.; Wang, K.; Werhahn, J.; Wolke, R.; Zabkar, R.; Zhang, Y.; Zhang, J.; Hogrefe, C.; Galmarini, S.

    2014-01-01

    The second phase of the Air Quality Model Evaluation International Initiative (AQMEII) brought together seventeen modeling groups from Europe and North America, running eight operational online-coupled air quality models over Europe and North America using common emissions and boundary conditions.

  19. Voluntary Wheel Running in Mice.

    Science.gov (United States)

    Goh, Jorming; Ladiges, Warren

    2015-12-02

    Voluntary wheel running in the mouse is used to assess physical performance and endurance and to model exercise training as a way to enhance health. Wheel running is a voluntary activity in contrast to other experimental exercise models in mice, which rely on aversive stimuli to force active movement. This protocol consists of allowing mice to run freely on the open surface of a slanted, plastic saucer-shaped wheel placed inside a standard mouse cage. Rotations are electronically transmitted to a USB hub so that frequency and rate of running can be captured via a software program for data storage and analysis for variable time periods. Mice are individually housed so that accurate recordings can be made for each animal. Factors such as mouse strain, gender, age, and individual motivation, which affect running activity, must be considered in the design of experiments using voluntary wheel running. Copyright © 2015 John Wiley & Sons, Inc.
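    A small sketch of how logged rotation events can be turned into per-bin distance and speed summaries; the wheel circumference and the timestamps are made-up values, and real setups record rotations through the USB hub into vendor software rather than a script like this.

```python
# Convert wheel-rotation timestamps into distance and mean speed per time bin.
WHEEL_CIRCUMFERENCE_M = 0.38          # assumed effective running circumference

def summarize(rotation_times_s, bin_s=600):
    """Group rotation timestamps into bins and report counts, metres and m/s."""
    bins = {}
    for t in rotation_times_s:
        bins[int(t // bin_s)] = bins.get(int(t // bin_s), 0) + 1
    for b in sorted(bins):
        count = bins[b]
        metres = count * WHEEL_CIRCUMFERENCE_M
        yield b, count, metres, metres / bin_s

# One simulated mouse: ~1.1 rotations per second for the first ~6 minutes.
timestamps = [i * 0.9 for i in range(400)]
for b, count, metres, speed in summarize(timestamps):
    print(f"bin {b}: {count} rotations, {metres:.1f} m, mean speed {speed:.3f} m/s")
```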

  20. Mapping Relational Operations onto Hypergraph Model

    Directory of Open Access Journals (Sweden)

    2010-10-01

    However, the hypergraph model is non-tabular and thus loses the simplicity of the relational model. In this study, we consider the means to convert a relational model into a hypergraph model in two layers. At the bottom layer, each relational tuple can be considered as a star graph in which the primary-key node at the centre is surrounded by the non-primary-key attributes. At the top layer, each tuple is a hypernode, and a relation is a set of hypernodes. We present a reference implementation of the relational operators (project, rename, select, inner join, natural join, left join, right join, outer join and Cartesian join) on a hypergraph model. Using a simple example, we demonstrate that a relation and relational operators can be implemented on this hypergraph model.
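    A minimal sketch of the two-layer encoding described above, with tuples stored as star graphs and two of the listed operators (select and project) implemented on them; attribute names and rows are invented and nothing here reproduces the paper's reference implementation.

```python
def to_star(pk, row):
    """Bottom layer: one tuple becomes a star graph centred on its primary key."""
    centre = (pk, row[pk])
    return {"centre": centre,
            "edges": {frozenset([centre, (a, v)]) for a, v in row.items() if a != pk}}

def as_row(node):
    """Flatten a star graph back into an attribute dictionary."""
    attrs = dict([node["centre"]])
    for edge in node["edges"]:
        for a, v in edge:
            attrs[a] = v
    return attrs

def select(hypernodes, predicate):
    return [n for n in hypernodes if predicate(as_row(n))]

def project(hypernodes, keep):
    """Drop edges whose leaf attribute is not requested; the key stays as centre."""
    out = []
    for n in hypernodes:
        kept = set()
        for e in n["edges"]:
            leaf_attr = next(a for a, v in e if (a, v) != n["centre"])
            if leaf_attr in keep:
                kept.add(e)
        out.append({"centre": n["centre"], "edges": kept})
    return out

# Top layer: a relation is simply the set (here, list) of hypernodes.
employees = [{"emp_id": 1, "name": "Ada", "dept": "R&D"},
             {"emp_id": 2, "name": "Sam", "dept": "Ops"}]
hg = [to_star("emp_id", r) for r in employees]
print([as_row(n) for n in select(hg, lambda r: r["dept"] == "R&D")])
print([as_row(n) for n in project(hg, {"name"})])
```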

  1. A knowledge-Induced Operator Model

    Directory of Open Access Journals (Sweden)

    M.A. Choudhury

    2007-06-01

    Full Text Available Learning systems are in the forefront of analytical investigation in the sciences. In the social sciences they occupy the study of complexity and strongly interactive world-systems. Sometimes they are diversely referred to as symbiotics and semiotics when studied in conjunction with logical expressions. In the mathematical sciences the methodology underlying learning systems with complex behavior is based on formal logic or systems analysis. In this paper relationally learning systems are shown to transcend the space-time domain of scientific investigation into the knowledge dimension. Such a knowledge domain is explained by pervasive interaction leading to integration and followed by continuous evolution as complementary processes existing between entities and systemic domains in world-systems, thus the abbreviation IIE-processes. This paper establishes a mathematical characterization of the properties of knowledge-induced process-based world-systems in the light of the epistemology of unity of knowledge signified in this paper by extensive complementarities caused by the epistemic and ontological foundation of the text of unity of knowledge, the prime example of which is the realm of the divine laws. The result is formalism in mathematical generalization of the learning phenomenon by means of an operator. This operator summarizes the properties of interaction, integration and evolution (IIE in the continuum domain of knowledge formation signified by universal complementarities across entities, systems and sub-systems in unifying world-systems. The opposite case of ‘de-knowledge’ and its operator is also briefly formalized.

  2. How Fast Should an Animal Run When Escaping? An Optimality Model Based on the Trade-Off Between Speed and Accuracy.

    Science.gov (United States)

    Wheatley, Rebecca; Angilletta, Michael J; Niehaus, Amanda C; Wilson, Robbie S

    2015-12-01

    How fast should animals move when trying to survive? Although many studies have examined how fast animals can move, the fastest speed is not always best. For example, an individual escaping from a predator must run fast enough to escape, but not so fast that it slips and falls. To explore this idea, we developed a simple mathematical model that predicts the optimal speed for an individual running from a predator along a straight beam. A beam was used as a proxy for straight-line running with severe consequences for missteps. We assumed that success, defined as reaching the end of the beam, had two broad requirements: (1) running fast enough to escape a predator, and (2) minimizing the probability of making a mistake that would compromise speed. Our model can be tailored to different systems by revising the predator's maximal speed, the prey's stride length and motor coordination, and the dimensions of the beam. Our model predicts that animals should run slower when the beam is narrower or when coordination is worse. © The Author 2015. Published by Oxford University Press on behalf of the Society for Integrative and Comparative Biology. All rights reserved. For permissions please email: journals.permissions@oup.com.
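    The sketch below is a toy version of the trade-off, not the published model: success is the product of an assumed logistic probability of outrunning the predator and the probability of completing every stride without a misstep, and the optimum is found by scanning candidate speeds. The beam dimensions, stride length, predator speed and the SLIP_SPEED parameter are all invented.

```python
import numpy as np

BEAM_LENGTH = 2.0        # m
STRIDE_LENGTH = 0.15     # m
PREDATOR_SPEED = 1.6     # m/s
SLIP_SPEED = 4.0         # m/s at which a misstep becomes a coin flip (assumed)

def p_success(v):
    n_strides = BEAM_LENGTH / STRIDE_LENGTH
    p_misstep = 1.0 / (1.0 + np.exp(-(v - SLIP_SPEED)))            # rises with speed
    p_clean_run = (1.0 - p_misstep) ** n_strides                   # every stride must succeed
    p_escape = 1.0 / (1.0 + np.exp(-8.0 * (v - PREDATOR_SPEED)))   # outrun the predator
    return p_clean_run * p_escape

speeds = np.linspace(0.5, 4.0, 200)
best = speeds[np.argmax([p_success(v) for v in speeds])]
print(f"optimal running speed ~ {best:.2f} m/s, P(success) = {p_success(best):.2f}")
```

    Under these assumed parameters the optimum sits just above the predator's speed; lowering SLIP_SPEED (worse coordination) or narrowing the implied margin for error shifts the optimum downward, in line with the model's qualitative prediction that animals should run slower on narrower beams or with poorer coordination.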

  3. Perturbative Power Counting, Lowest-Index Operators and Their Renormalization in Standard Model Effective Field Theory

    Science.gov (United States)

    Liao, Yi; Ma, Xiao-Dong

    2018-03-01

    We study two aspects of higher dimensional operators in standard model effective field theory. We first introduce a perturbative power counting rule for the entries in the anomalous dimension matrix of operators with equal mass dimension. The power counting is determined by the number of loops and the difference of the indices of the two operators involved, which in turn is defined by assuming that all terms in the standard model Lagrangian have an equal perturbative power. Then we show that the operators with the lowest index are unique at each mass dimension d, i.e., $(H^\dagger H)^{d/2}$ for even $d \ge 4$, and $(L^T\epsilon H)C(L^T\epsilon H)^T(H^\dagger H)^{(d-5)/2}$ for odd $d \ge 5$. Here $H$, $L$ are the Higgs and lepton doublet, and $\epsilon$, $C$ the antisymmetric matrix of rank two and the charge conjugation matrix, respectively. The renormalization group running of these operators can be studied separately from other operators of equal mass dimension at the leading order in power counting. We compute their anomalous dimensions at one loop for general d and find that they are enhanced quadratically in d due to combinatorics. We also make connections with classification of operators in terms of their holomorphic and anti-holomorphic weights. Supported by the National Natural Science Foundation of China under Grant Nos. 11025525, 11575089, and by the CAS Center for Excellence in Particle Physics (CCEPP)

  4. Spectral decomposition of model operators in de Branges spaces

    International Nuclear Information System (INIS)

    Gubreev, Gennady M; Tarasenko, Anna A

    2011-01-01

    The paper is devoted to studying a class of completely continuous nonselfadjoint operators in de Branges spaces of entire functions. Among other results, a class of unconditional bases of de Branges spaces consisting of values of their reproducing kernels is constructed. The operators that are studied are model operators in the class of completely continuous non-dissipative operators with two-dimensional imaginary parts. Bibliography: 22 titles.

  5. Motivation dimensions for running a marathon: A new model emerging from the Motivation of Marathon Scale (MOMS

    Directory of Open Access Journals (Sweden)

    Sima Zach

    2017-09-01

    Conclusion: This study provides a sound and solid framework for studying motivation for physically demanding tasks such as marathon runs, and needs to be similarly applied and tested in studies incorporating physical tasks which vary in mental demands.

  6. A model of the gas analysis system operation process

    Science.gov (United States)

    Yakimenko, I. V.; Kanishchev, O. A.; Lyamets, L. L.; Volkova, I. V.

    2017-12-01

    The characteristic features of modeling the gas-analysis measurement system operation process on the basis of the semi-Markov process theory are discussed. The model of the measuring gas analysis system operation process is proposed, which makes it possible to take into account the influence of the replacement interval, the level of reliability and maintainability and to evaluate the product reliability.

  7. Deterministic operations research models and methods in linear optimization

    CERN Document Server

    Rader, David J

    2013-01-01

    Uniquely blends mathematical theory and algorithm design for understanding and modeling real-world problems. Optimization modeling and algorithms are key components to problem-solving across various fields of research, from operations research and mathematics to computer science and engineering. Addressing the importance of the algorithm design process, Deterministic Operations Research focuses on the design of solution methods for both continuous and discrete linear optimization problems. The result is a clear-cut resource for understanding three cornerstones of deterministic operations resear

  8. Wheel running from a juvenile age delays onset of specific motor deficits but does not alter protein aggregate density in a mouse model of Huntington's disease

    Directory of Open Access Journals (Sweden)

    Spires Tara L

    2008-04-01

    Full Text Available Abstract Background Huntington's disease (HD) is a neurodegenerative disorder predominantly affecting the cerebral cortex and striatum. Transgenic mice (R6/1 line, expressing a CAG repeat encoding an expanded polyglutamine tract in the N-terminus of the huntingtin protein) closely model HD. We have previously shown that environmental enrichment of these HD mice delays the onset of motor deficits. Furthermore, wheel running initiated in adulthood ameliorates the rear-paw clasping motor sign, but not an accelerating rotarod deficit. Results We have now examined the effects of enhanced physical activity via wheel running, commenced at a juvenile age (4 weeks), with respect to the onset of various behavioral deficits and their neuropathological correlates in R6/1 HD mice. HD mice housed post-weaning with running wheels only, to enhance voluntary physical exercise, have delayed onset of a motor co-ordination deficit on the static horizontal rod, as well as rear-paw clasping, although the accelerating rotarod deficit remains unaffected. Both wheel running and environmental enrichment rescued HD-induced abnormal habituation of locomotor activity and exploratory behavior in the open field. We have found that neither environmental enrichment nor wheel running ameliorates the shrinkage of the striatum and anterior cingulate cortex (ACC) in HD mice, nor the overall decrease in brain weight, measured at 9 months of age. At this age, the density of ubiquitinated protein aggregates in the striatum and ACC is also not significantly ameliorated by environmental enrichment or wheel running. Conclusion These results indicate that enhanced voluntary physical activity, commenced at an early presymptomatic stage, contributes to the positive effects of environmental enrichment. However, sensory and cognitive stimulation, as well as motor stimulation not associated with running, may constitute major components of the therapeutic benefits associated with enrichment

  9. Modeling the short-run effect of fiscal stimuli on GDP : A new semi-closed input-output model

    NARCIS (Netherlands)

    Chen, Quanrun; Dietzenbacher, Erik; Los, Bart; Yang, Cuihong

    2016-01-01

    In this study, we propose a new semi-closed input-output model, which reconciles input-output analysis with modern consumption theories. It can simulate changes in household consumption behavior when exogenous stimulus policies lead to higher disposable income levels. It is useful for quantifying
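    For orientation, the sketch below contrasts the standard open Leontief model with a simple model closed with respect to households (a labor-income row and a consumption column added to the coefficient matrix). The two-sector coefficients and the fixed consumption propensities are invented; the paper's semi-closed model replaces such fixed propensities with responses derived from modern consumption theory.

```python
import numpy as np

A = np.array([[0.20, 0.30],            # inter-industry coefficients (invented)
              [0.25, 0.10]])
labor = np.array([0.30, 0.40])         # labor income per unit of output
consume = np.array([0.35, 0.45])       # consumption per unit of household income
stimulus = np.array([10.0, 5.0])       # exogenous final-demand shock

# Open model: household consumption exogenous.
x_open = np.linalg.solve(np.eye(2) - A, stimulus)

# Closed with respect to households: border A with the consumption column and labor row.
A_closed = np.block([[A, consume.reshape(2, 1)],
                     [labor.reshape(1, 2), np.zeros((1, 1))]])
f_closed = np.append(stimulus, 0.0)
x_closed = np.linalg.solve(np.eye(3) - A_closed, f_closed)

print("output response, open model:  ", np.round(x_open, 2))
print("output response, closed model:", np.round(x_closed[:2], 2),
      " induced household income:", round(float(x_closed[2]), 2))
```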

  10. Modeling Grinding Processes as Micro-Machining Operation ...

    African Journals Online (AJOL)

    A computation-based model for the surface grinding process as a micro-machining operation has been developed. In this model, grinding forces are made up of the chip formation force and the sliding force. Mathematical expressions for modeling the tangential grinding force and the normal grinding force were obtained. The model was ...

  11. Computer-aided operations engineering with integrated models of systems and operations

    Science.gov (United States)

    Malin, Jane T.; Ryan, Dan; Fleming, Land

    1994-01-01

    CONFIG 3 is a prototype software tool that supports integrated conceptual design evaluation from early in the product life cycle, by supporting isolated or integrated modeling, simulation, and analysis of the function, structure, behavior, failures and operation of system designs. Integration and reuse of models is supported in an object-oriented environment providing capabilities for graph analysis and discrete event simulation. Integration is supported among diverse modeling approaches (component view, configuration or flow path view, and procedure view) and diverse simulation and analysis approaches. Support is provided for integrated engineering in diverse design domains, including mechanical and electro-mechanical systems, distributed computer systems, and chemical processing and transport systems. CONFIG supports abstracted qualitative and symbolic modeling, for early conceptual design. System models are component structure models with operating modes, with embedded time-related behavior models. CONFIG supports failure modeling and modeling of state or configuration changes that result in dynamic changes in dependencies among components. Operations and procedure models are activity structure models that interact with system models. CONFIG is designed to support evaluation of system operability, diagnosability and fault tolerance, and analysis of the development of system effects of problems over time, including faults, failures, and procedural or environmental difficulties.

  12. Comparing models of offensive cyber operations

    CSIR Research Space (South Africa)

    Grant, T

    2012-03-01

    Full Text Available system, few represent target selection, attack planning, and Denial of Service attacks, and none specifically represent attack coordination within distributed groups. Finally, a canonical model has been constructed by rational reconstruction (Habermas... logical form? (Habermas, 1976). RR has been applied in computing research to redesign a seminal expert system (Cendrowski & Bramer, 1984) and to formalise Boyd's (1996) Observe-Orient-Decide-Act (OODA) loop (Grant & Kooter, 2005). In the research...

  13. Modeling Optimal Scheduling for Pumping System to Minimize Operation Cost and Enhance Operation Reliability

    Directory of Open Access Journals (Sweden)

    Yin Luo

    2012-01-01

    Full Text Available Traditional pump scheduling models neglect operational reliability, which directly relates to the unscheduled maintenance cost and the wear cost during operation. For this reason, and based on the assumption that vibration directly relates to operational reliability and the degree of wear, operational reliability can be expressed as the normalized vibration level. The characteristic of the vibration with respect to the operating point was studied, and it can be concluded that the idealized flow-versus-vibration plot has a distinct bathtub shape. In this shape there is a narrow sweet spot (80 to 100 percent of BEP) in which low vibration levels are obtained, and, in the absence of resonance phenomena, the vibration also scales with the square of the rotation speed. Operational reliability can therefore be modeled as a function of the capacity and rotation speed of the pump, and this function is added to the traditional model to form the new one. In contrast with the traditional method, the results show that the new model corrects the schedule produced by the traditional one and makes the pump operate at low vibration, so that operational reliability increases and the maintenance cost decreases.
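    A toy version of the idea: an assumed bathtub-shaped vibration curve around the best-efficiency point (BEP), scaled by the square of the relative speed, is normalised into a reliability term and added to an energy proxy before the operating speed is chosen. The curve shapes, weights and the affinity-law shortcut for the BEP shift are all assumptions, not the paper's formulation.

```python
import numpy as np

Q_BEP_NOMINAL = 100.0       # m^3/h at nominal speed (assumed)

def vibration(q, n):
    q_bep = Q_BEP_NOMINAL * n                  # BEP flow shifts roughly with speed
    frac = q / q_bep                           # fraction of BEP
    bathtub = 1.0 + 12.0 * (frac - 0.9) ** 2   # lowest in the 80-100 % BEP sweet spot
    return bathtub * n ** 2                    # grows with the square of the speed

def schedule_cost(q, n, w_energy=1.0, w_reliability=2.0):
    v = vibration(q, n)
    reliability = 1.0 - min(1.0, v / 4.0)      # normalised vibration level
    energy = (q / Q_BEP_NOMINAL) * n ** 2      # crude hydraulic-power proxy
    return w_energy * energy + w_reliability * (1.0 - reliability)

q_demand = 85.0                                # required flow, m^3/h
speeds = np.linspace(0.6, 1.2, 61)
best_n = min(speeds, key=lambda n: schedule_cost(q_demand, n))
print(f"demand {q_demand} m^3/h -> run at {best_n:.2f} x nominal speed, "
      f"vibration index {vibration(q_demand, best_n):.2f}")
```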

  14. A proposal for operator team behavior model and operator's thinking mechanism

    International Nuclear Information System (INIS)

    Yoshimura, Seiichi; Takano, Kenichi; Sasou, Kunihide

    1995-01-01

    The operating environment in huge systems like nuclear power plants or airplanes is changing rapidly with the advance of computer technology. It is necessary to elucidate the thinking process of operators and the decision-making process of an operator team in abnormal situations, in order to prevent human errors under such an environment. The Central Research Institute of Electric Power Industry is promoting a research project to establish human error prevention countermeasures by modeling and simulating the thinking process of operators and the decision-making process of an operator team. In the previous paper, the application of multilevel flow modeling to a mental model which conducts future prediction and cause identification was proposed, and its characteristics were verified by experienced plant operators. In this paper, an operator team behavior model and a fundamental operator's thinking mechanism, especially 'situation understanding', are proposed, and the proposals are evaluated by experiments using a full-scale simulator. The results reveal that some assumptions such as 'communication is done between a leader and a follower' are almost appropriate and that the situation understanding can be represented by 'probable candidates for cause, determination of a parameter which changes when an event occurs, determination of parameters which are influenced by the change of the previous parameter, determination of a principal parameter and future prediction of the principal parameter'. (author)

  15. Source fault model of the 2011 off the pacific coast of Tohoku Earthquake, estimated from the detailed distribution of tsunami run-up heights

    International Nuclear Information System (INIS)

    Matsuta, Nobuhisa; Suzuki, Yasuhiro; Sugito, Nobuhiko; Nakata, Takashi; Watanabe, Mitsuhisa

    2015-01-01

    The distribution of tsunami run-up heights generally has spatial variations, because run-up heights are controlled by coastal topography including local-scale landforms such as natural levees, in addition to land use. Focusing on relationships among coastal topography, land conditions, and tsunami run-up heights of historical tsunamis—Meiji Sanriku (1896 A.D.), Syowa Sanriku (1933 A.D.), and Chilean Sanriku (1960 A.D.) tsunamis—along the Sanriku coast, it is found that the wavelength of a tsunami determines inundation areas as well as run-up heights. Small bays facing the Pacific Ocean are sensitive to short wavelength tsunamis, and large bays are sensitive to long wavelength tsunamis. The tsunami observed off Kamaishi during the 2011 off the Pacific coast of Tohoku Earthquake was composed of both short and long wavelength components. We examined run-up heights of the Tohoku tsunami, and found that: (1) coastal areas north of Kamaishi and south of Yamamoto were mainly attacked by short wavelength tsunamis; and (2) no evidence of short wavelength tsunamis was observed from Ofunato to the Oshika Peninsula. This observation coincides with the geomorphologically proposed source fault model, and indicates that the extraordinarily large slip along the shallow part of the plate boundary off Sendai, proposed by seismological and geodetic analyses, is not needed to explain the run-up heights of the Tohoku tsunami. To better understand spatial variations of tsunami run-up heights, submarine crustal movements, and source faults, a detailed analysis is required of coastal topography, land conditions, and submarine tectonic landforms from the perspective of geomorphology. (author)

  16. Comparison of extracorporeal shock wave lithotripsy running models between outsourcing cooperation and rental cooperation conducted in Taiwan.

    Science.gov (United States)

    Liu, Chih-Kuang; Ko, Ming-Chung; Chen, Shiou-Sheng; Lee, Wen-Kai; Shia, Ben-Chang; Chiang, Han-Sun

    2015-02-01

    We conducted a retrospective study to compare the cost and effectiveness of two different running models for extracorporeal shock wave lithotripsy (SWL): the outsourcing cooperation model (OC) and the rental cooperation model (RC). Between January 1999 and December 2005, we implemented OC for SWL, and from January 2006 to October 2011, RC was utilized. With OC, the cooperative company provided a machine and shared a variable payment with the hospital according to treatment sessions. With RC, the cooperative company provided a machine and received a fixed rent from the hospital. We calculated the cost of each treatment session and evaluated the break-even point to estimate the lowest number of treatment sessions per month at which revenue balances cost. Effectiveness parameters, including the stone-free rate, the retreatment rate, and the rates of additional procedures and complications, were evaluated. There were significantly fewer treatment sessions per month for RC than for OC (42.6±7.8 vs. 36.8±6.5, p=0.01). The cost of each treatment session was significantly higher for OC than for RC (751.6±20.0 USD vs. 684.7±16.7 USD, p=0.01). The break-even point for the hospital was 27.5 treatment sessions/month for OC when the hospital obtained 40% of the payment, and it could be reduced if the hospital obtained a greater percentage. The break-even point for the hospital was 27.3 treatment sessions/month for RC. No significant differences were observed in the stone-free rate, the retreatment rate, or the rates of additional procedures and complications. Our study revealed that RC had a lower cost for every treatment session and fewer SWL treatment sessions per month than OC. The study may provide a managerial implication for healthcare organization managers when they face high-priced equipment investments. Copyright © 2012. Published by Elsevier B.V.
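
    A minimal sketch of the break-even arithmetic behind these figures, assuming a simple contract structure: under OC the hospital keeps only a share of each session fee, while under RC it keeps the full fee but pays a fixed monthly machine rent. The function names, the fee, and the fixed-cost and rent figures below are invented so that the outputs land near the reported break-even points; they are not taken from the study.

        # Break-even sketch for the two SWL running models; all figures are hypothetical.
        def break_even_oc(fixed_monthly_cost, fee_per_session, hospital_share):
            """Sessions/month at which the hospital's share of fees covers its fixed costs."""
            return fixed_monthly_cost / (fee_per_session * hospital_share)

        def break_even_rc(fixed_monthly_cost, machine_rent, fee_per_session):
            """Sessions/month at which full fees cover fixed costs plus the machine rent."""
            return (fixed_monthly_cost + machine_rent) / fee_per_session

        fee = 1000.0   # hypothetical reimbursement per SWL session (USD)
        print(break_even_oc(fixed_monthly_cost=11000.0, fee_per_session=fee, hospital_share=0.4))   # 27.5
        print(break_even_rc(fixed_monthly_cost=11000.0, machine_rent=16300.0, fee_per_session=fee))  # 27.3

    As the abstract notes, a larger hospital share of the payment lowers the OC break-even point, because it raises the contribution of every session toward the fixed costs.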

  17. Operational risk quantification and modelling within Romanian insurance industry

    Directory of Open Access Journals (Sweden)

    Tudor Răzvan

    2017-07-01

    Full Text Available This paper aims at covering and describing the shortcomings of various models used to quantify and model operational risk within the insurance industry, with a particular focus on Romanian specific regulation: Norm 6/2015 concerning the operational risk issued by IT systems. While most local insurers are focusing on implementing the standard model to compute the operational risk solvency capital requirement, the local regulator has issued a norm that requires identifying and assessing IT-based operational risks from an ISO 27001 perspective. The challenges raised by the correlations assumed in the standard model are substantially increased by this new regulation, which requires only the identification and quantification of IT operational risks. The solvency capital requirement stipulated by the implementation of Solvency II does not recommend a model or formula for integrating the newly identified risks into the operational risk capital requirements. In this context, we assess the academic and practitioners' understanding of the frequency-severity approach, Bayesian estimation techniques, scenario analysis, and risk accounting based on risk units, and how they could support the modelling of IT-based operational risks. Developing an internal model only for the operational risk capital requirement has proved, so far, costly and not necessarily beneficial for local insurers. As the IT component will play a key role in the future of the insurance industry, the result of this analysis provides a specific approach to operational risk modelling that can be implemented in the context of Solvency II, in the particular situation where internal or external operational risk databases are scarce or not available.

  18. Modelling of material handling operations using controlled traffic

    DEFF Research Database (Denmark)

    Bochtis, Dionysis; Sørensen, Claus Aage Grøn; Jørgensen, Rasmus Nyholm

    2009-01-01

    and maintaining permanent traffic lanes within the fields. Furthermore, field efficiency is affected by CTF due to significant increases in idle time of in-field transport and the way the fields are traversed in material handling operations. During fertilisation, when tramline length and the driving distance......, makes existing models inadequate for evaluating field efficiency. In this paper, the development of a discrete-event model for the prediction of travelled distances of a machine operating in material handling operations using the concept of CTF is presented. The model is based on the mathematical...

  19. Running-in of rolling contacts

    NARCIS (Netherlands)

    Jamari, Jamari

    2006-01-01

    This thesis deals with running-in of the pure rolling contact situation operating in the boundary lubrication regime, so that normal plastic deformation due to the contact between asperities is the main aspect. The change of the surface topography during the running-in process and the run-in

  20. Automatic Voice Pathology Detection With Running Speech by Using Estimation of Auditory Spectrum and Cepstral Coefficients Based on the All-Pole Model.

    Science.gov (United States)

    Ali, Zulfiqar; Elamvazuthi, Irraivan; Alsulaiman, Mansour; Muhammad, Ghulam

    2016-11-01

    Automatic voice pathology detection using sustained vowels has been widely explored. Because of the stationary nature of the speech waveform, pathology detection with a sustained vowel is a comparatively easier task than that using running speech. Some disorder detection systems with running speech have also been developed, although most of them are based on voice activity detection (VAD), which is itself a challenging task. Pathology detection with running speech needs more investigation, and systems with good accuracy (ACC) are required. Furthermore, pathology classification systems with running speech have not received any attention from the research community. In this article, automatic pathology detection and classification systems are developed using text-dependent running speech without adding a VAD module. A set of three psychophysics conditions of hearing (critical band spectral estimation, the equal loudness hearing curve, and the intensity loudness power law of hearing) is used to estimate the auditory spectrum. The auditory spectrum and all-pole models of the auditory spectrum are computed, analyzed, and used in a Gaussian mixture model for an automatic decision. In the experiments using the Massachusetts Eye & Ear Infirmary database, an ACC of 99.56% is obtained for pathology detection, and an ACC of 93.33% is obtained for the pathology classification system. The results of the proposed systems outperform the existing running-speech-based systems. The developed system can effectively be used in voice pathology detection and classification systems, and the proposed features can visually differentiate between normal and pathological samples. Copyright © 2015 The Voice Foundation. Published by Elsevier Inc. All rights reserved.
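
    For illustration only, the sketch below implements the general shape of such a pipeline: frame the recording, fit an all-pole (LPC) model per frame, and compare per-class Gaussian mixture model likelihoods. A plain autocorrelation-method LPC stands in for the paper's psychoacoustic auditory-spectrum front end, and the frame size, model order, and mixture size are assumptions rather than the authors' configuration (numpy and scikit-learn assumed available).

        # Sketch of an all-pole-feature + GMM detector for running speech; the auditory
        # spectrum front end described above is replaced here by plain LPC features.
        import numpy as np
        from sklearn.mixture import GaussianMixture

        def lpc(frame, order=12):
            """All-pole coefficients via the autocorrelation method (Levinson-Durbin)."""
            r = np.correlate(frame, frame, mode="full")[len(frame) - 1:len(frame) + order]
            a = np.zeros(order + 1)
            a[0], err = 1.0, r[0] + 1e-12
            for i in range(1, order + 1):
                k = -(r[i] + np.dot(a[1:i], r[i - 1:0:-1])) / err
                a_prev = a.copy()
                for j in range(1, i):
                    a[j] = a_prev[j] + k * a_prev[i - j]
                a[i] = k
                err *= (1.0 - k * k)
            return a[1:]

        def features(signal, size=400, hop=200, order=12):
            """Frame the signal and return one LPC coefficient vector per frame."""
            window = np.hamming(size)
            n = max(0, (len(signal) - size) // hop + 1)
            return np.array([lpc(signal[i * hop:i * hop + size] * window, order)
                             for i in range(n)])

        def train_gmm(signals, components=8):
            """Fit one GMM on the pooled frame features of one class (normal or pathological)."""
            return GaussianMixture(n_components=components, covariance_type="diag").fit(
                np.vstack([features(s) for s in signals]))

        def is_pathological(signal, gmm_normal, gmm_pathological):
            """Decide by comparing average per-frame log-likelihoods under the two GMMs."""
            x = features(signal)
            return gmm_pathological.score(x) > gmm_normal.score(x)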

  1. On the duality between long-run relations and common trends in the I(1) versus I(2) model

    DEFF Research Database (Denmark)

    Juselius, Katarina

    1994-01-01

    procedures reveal that nominal money stock is essentially I(2). Long-run price homogeneity is supported by the data and imposed on the system. It is found that the bond rate is weakly exogenous for the long-run parameters and therefore acts as a driving trend. Using the nonstationarity property of the data......, "excess money" is estimated and its effect on the other determinants of the system is investigated. In particular, it is found that "excess money" has no effect on price inflation...

  2. Operation quality assessment model for video conference system

    Science.gov (United States)

    Du, Bangshi; Qi, Feng; Shao, Sujie; Wang, Ying; Li, Weijian

    2018-01-01

    Video conference systems have become an important support platform for smart grid operation and management, and their operation quality is of growing concern to grid enterprises. First, an evaluation indicator system covering network, business, and operation-maintenance aspects was established on the basis of the video conference system's operation statistics. Then, an operation quality assessment model combining a genetic algorithm with a regularized BP neural network was proposed, which outputs the operation quality level of the system within a time period and provides company managers with optimization advice. The simulation results show that the proposed evaluation model offers fast convergence and high prediction accuracy compared with a regularized BP neural network alone, and that its generalization ability is superior to LM-BP and Bayesian BP neural networks.
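
    As a rough illustration of the regularized-BP component alone (the genetic-algorithm search over initial weights described above is omitted), the sketch below trains a small L2-regularized feed-forward network on synthetic indicator vectors. The indicator names and the synthetic quality scores are invented placeholders, not the paper's indicator system (scikit-learn assumed available).

        # Regularized-BP sketch only; the GA initialization step from the paper is not included.
        import numpy as np
        from sklearn.neural_network import MLPRegressor

        rng = np.random.default_rng(0)
        # Hypothetical indicators: packet loss, jitter, MOS, open tickets, MTTR, uptime.
        X = rng.random((200, 6))
        # Synthetic "true" quality level in [0, 1], for demonstration only.
        y = np.clip(0.9 - 0.5 * X[:, 0] - 0.3 * X[:, 1] + 0.2 * X[:, 5]
                    + 0.05 * rng.standard_normal(200), 0.0, 1.0)

        model = MLPRegressor(hidden_layer_sizes=(12,), alpha=1e-3,   # alpha is the L2 penalty
                             max_iter=5000, random_state=0).fit(X, y)
        print("predicted quality levels:", model.predict(X[:3]))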

  3. Modeling Changes in Bed Surface Texture and Aquatic Habitat Caused by Run-of-River Hydropower Development

    Science.gov (United States)

    Fuller, T. K.; Venditti, J. G.; Nelson, P. A.; Popescu, V.; Palen, W.

    2014-12-01

    Run-of-river (RoR) hydropower has emerged as an important alternative to large reservoir-based dams in the renewable energy portfolios of China, India, Canada, and other areas around the globe. RoR projects generate electricity by diverting a portion of the channel discharge through a large pipe for several kilometers downhill, where it is used to drive turbines before being returned to the channel. Individual RoR projects are thought to be less disruptive to local ecosystems than large hydropower because they involve minimal water storage, more closely match the natural hydrograph downstream of the project, and are capable of bypassing trapped sediment. However, there is concern that temporary sediment supply disruption may degrade the productivity of salmon spawning habitat downstream of the dam by causing changes in the grain size distribution of bed surface sediment. We hypothesize that salmon populations will be most susceptible to disruptions in sediment supply in channels where: (1) sediment supply is high relative to transport capacity prior to RoR development, and (2) project design creates substantial sediment storage volume. Determining the geomorphic effect of RoR development on aquatic habitat requires many years of field data collection, and even then it can be difficult to link geomorphic change to RoR development alone. As an alternative, we used a one-dimensional morphodynamic model to test our hypothesis across a range of pre-development sediment supply conditions and sediment storage volumes. Our results confirm that coarsening of the median surface grain-size is greatest in cases where pre-development sediment supply was highest and sediment storage volumes were large enough to disrupt supply over the course of the annual hydrograph or longer. In cases where the pre-development sediment supply is low, coarsening of the median surface grain-size is less than 2 mm over a multiple-year disruption period. When sediment supply is restored, our results
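
    The mechanism discussed above can be illustrated with a generic one-dimensional Exner-equation sketch, which is not the authors' model: transport capacity is taken proportional to the local bed slope, so cutting the upstream sediment supply at an intake sends a degradation wave downstream. A real analysis of surface coarsening would require multiple grain-size fractions; every coefficient below is an illustrative assumption.

        # Generic 1-D Exner bed-evolution sketch (illustrative only, not the study's model).
        import numpy as np

        def exner_step(eta, qs_in, dx, dt, k=1e-3, porosity=0.4):
            """Explicit Exner update; eta is bed elevation per cell, qs_in the supplied flux."""
            slope = np.maximum((eta[:-1] - eta[1:]) / dx, 0.0)          # interior interface slopes
            qs = np.concatenate(([qs_in], k * slope, [k * slope[-1]]))  # fluxes at the N+1 faces
            return eta - dt * np.diff(qs) / ((1.0 - porosity) * dx)

        n, dx, dt, slope0 = 50, 100.0, 86400.0, 0.005
        eta = slope0 * dx * np.arange(n)[::-1]            # initially a uniform 0.5% slope
        q_equilibrium = 1e-3 * slope0                     # supply that matches transport capacity
        for day in range(365):
            supply = 0.0 if day < 120 else q_equilibrium  # intake traps sediment for 120 days
            eta = exner_step(eta, supply, dx, dt)
        print("degradation at the upstream cell after one year (m):",
              slope0 * dx * (n - 1) - eta[0])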

  4. Spectra of operators in large N tensor models

    Science.gov (United States)

    Bulycheva, Ksenia; Klebanov, Igor R.; Milekhin, Alexey; Tarnopolsky, Grigory

    2018-01-01

    We study the operators in the large N tensor models, focusing mostly on the fermionic quantum mechanics with O(N)^3 symmetry which may be either global or gauged. In the model with global symmetry, we study the spectra of bilinear operators, which are in either the symmetric traceless or the antisymmetric representation of one of the O(N) groups. In the symmetric traceless case, the spectrum of scaling dimensions is the same as in the Sachdev-Ye-Kitaev (SYK) model with real fermions; it includes the h = 2 zero mode. For the operators antisymmetric in the two indices, the scaling dimensions are the same as in the additional sector found in the complex tensor and SYK models; the lowest h = 0 eigenvalue corresponds to the conserved O(N) charges. A class of singlet operators may be constructed from contracted combinations of m symmetric traceless or antisymmetric two-particle operators. Their two-point functions receive contributions from m melonic ladders. Such multiple ladders are a new phenomenon in the tensor model, which does not seem to be present in the SYK model. The more typical 2k-particle operators do not receive any ladder corrections and have quantized large N scaling dimensions k/2. We construct pictorial representations of various singlet operators with low k. For larger k, we use available techniques to count the operators and show that their number grows as 2^k k!. As a consequence, the theory has a Hagedorn phase transition at the temperature which approaches zero in the large N limit. We also study the large N spectrum of low-lying operators in the Gurau-Witten model, which has O(N)^6 symmetry. We argue that it corresponds to one of the generalized SYK models constructed by Gross and Rosenhaus. Our paper also includes studies of the invariants in large N tensor integrals with various symmetries.

  5. Designing visual displays and system models for safe reactor operations

    Energy Technology Data Exchange (ETDEWEB)

    Brown-VanHoozer, S.A.

    1995-12-31

    The material presented in this paper is based on two studies involving the design of visual displays and the user's prospective model of a system. The studies involve a methodology known as Neuro-Linguistic Programming and its use in expanding design choices from the operator's perspective image. The content of this paper focuses on these studies and how they are applicable to the safety of operating reactors.

  6. Simulation Modeling of a Facility Layout in Operations Management Classes

    Science.gov (United States)

    Yazici, Hulya Julie

    2006-01-01

    Teaching quantitative courses can be challenging. Similarly, layout modeling and lean production concepts can be difficult to grasp in an introductory OM (operations management) class. This article describes a simulation model developed in PROMODEL to facilitate the learning of layout modeling and lean manufacturing. Simulation allows for the…

  7. Cognitive-Operative Model of Intelligent Learning Systems Behavior

    Science.gov (United States)

    Laureano-Cruces, Ana Lilia; Ramirez-Rodriguez, Javier; Mora-Torres, Martha; de Arriaga, Fernando; Escarela-Perez, Rafael

    2010-01-01

    In this paper behavior during the teaching-learning process is modeled by means of a fuzzy cognitive map. The elements used to model such behavior are part of a generic didactic model, which emphasizes the use of cognitive and operative strategies as part of the student-tutor interaction. Examples of possible initial scenarios for the…

  8. A model of CCTV surveillance operator performance | Donald ...

    African Journals Online (AJOL)

    cognitive processes involved in visual search and monitoring – key activities of operators. The aim of this paper was to integrate the factors into a holistic theoretical model of performance for CCTV operators, drawing on areas such as vigilance, ...

  9. Quantitative modelling in design and operation of food supply systems

    NARCIS (Netherlands)

    Beek, van P.

    2004-01-01

    During the last two decades, food supply systems have attracted interest not only from food technologists but also from the field of Operations Research and Management Science. Operations Research (OR) is concerned with quantitative modelling and can be used to gain insight into the optimal configuration and

  10. Design and modeling of reservoir operation strategies for sediment management

    NARCIS (Netherlands)

    Sloff, C.J.; Omer, A.Y.A.; Heynert, K.V.; Mohamed, Y.A.

    2015-01-01

    Appropriate operation strategies that allow for sediment flushing and sluicing (sediment routing) can reduce rapid storage losses of (hydropower and water-supply) reservoirs. In this study we have shown, using field observations and computational models, that the efficiency of these operations

  11. Search for gravitational waves from Scorpius X-1 in the first Advanced LIGO observing run with a hidden Markov model

    NARCIS (Netherlands)

    Abbott, B. P.; Abbott, R.; Abbott, D.; Acernese, F.; Ackley, K.; Adams, C.; Phythian-Adams, A.T.; Addesso, P.; Adhikari, R. X.; Adya, V. B.; Affeldt, C.; Afrough, M.; Agarwal, B.; Agatsuma, K.; Aggarwal, N.T.; Aguiar, O. D.; Aiello, L.; Ain, A.; Ajith, P.; Allen, B.; Allen, G; Allocca, A.; Almoubayyed, H.; Altin, P. A.; Amato, A.; Ananyeva, A.; Anderson, S. B.; Anderson, W. G.; Antier, S.; Appert, S.; Arai, K.; Araya, M. C.; Areeda, J. S.; Arnaud, N.; Arun, K. G.; Ascenzi, S.; Ashton, G.; Ast, M.; Aston, S. M.; Astone, P.; Aufmuth, P.; Aulbert, C.; AultONeal, K.; Avila-Alvarez, A.; Babak, S.; Bacon, P.; Bader, M. K. M.; Bae, S.; Baker, P. T.; Baldaccini, F.; Ballardin, G.; Ballmer, S. W.; Banagiri, S.; Barayoga, J. C.; Barclay, S. E.; Barish, B. C.; Barker, R.D.; Barone, F.; Barr, B.; Barsotti, L.; Barsuglia, M.; Barta, D.; Bartlett, J.; Bartos, I.; Bassiri, R.; Basti, A.; Batch, J. C.; Baune, C.; Bawaj, M.; Bazzan, M.; Becsy, B.; Beer, C.; Bejger, M.; Belahcene, I.; Bell, A. S.; Berger, B. K.; Bergmann, G.; Berry, C. P. L.; Bersanetti, D.; Bertolini, A.; Etienne, Z. B.; Betzwieser, J.; Bhagwat, S.; Bhandare, R.; Bilenko, I. A.; Billingsley, G.; Billman, C. R.; Birch, D J; Birney, R.; Birnholtz, O.; Biscans, S.; Bisht, A.; Bitossi, M.; Biwer, C.; Bizouard, M. A.; Blackburn, J. K.; Blackman, J.; Blair, C. D.; Blari, D. G.; Blair, R. M.; Bloemen, S.; Bock, O.; Bode, N.; Boer, M.; Bogaert, J.G.; Bohe, A.; Bondu, F.; Bonnand, R.; Boom, B. A.; Bork, R.; Boschi, V.; Bose, S.; Bouffanais, Y.; Bozzi, A.; Bradaschia, C.; Brady, P. R.; Braginsky, V. B.; Branchesi, M.; Brau, J. E.; Briant, T.; Brillet, A.; Brinkmann, M.; Brisson, V.; Brockill, P.; Broida, J. E.; Brooks, A. F.; Brown, A.D.; Brown, D.; Brown, N. M.; Brunett, S.; Buchanan, C. C.; Buikema, A.; Bulik, T.; Bulten, H. J.; Buonanno, A.; Buskulic, D.; Buy, C.; Byer, R. L.; Cabero, M.; Cadonati, L.; Cagnoli, G.; Cahillane, C.; Bustillo, J. Calderon; Callister, T. A.; Calloni, E.; Camp, J. B.; Canepa, M.; Canizares, P.; Cannon, K. C.; Cao, H.; Cao, J.; Capano, C. D.; Capocasa, E.; Carbognani, F.; Caride, S.; Carney, M. F.; Diaz, J. Casanueva; Casentini, C.; Caudill, S.; Cavaglia, M.; Cavalier, F.; Cavalieri, R.; Cella, G.; Cepeda, C. B.; Baiardi, L. Cerboni; Cerretani, G.; Cesarini, E.; Chamberlin, S. J.; Chan, M.; Chao, D. S.; Charlton, P.; Chassande-Mottin, E.; Chatterjee, D.; Cheeseboro, B. D.; Chen, H. Y.; Chen, Y; Cheng, H. -P.; Chincarini, A.; Chiummo, A.; Chmiel, T.; Cho, H. S.; Cho, M.; Chow, J. H.; Christensen, N.; Chu, Q.; Chua, A. J. K.; Chua, S. S. Y.; Chung, A. K. W.; Chung, S.; Ciani, G.; Ciolfi, R.; Cirelli, C. E.; Cirone, A.; Clara, F.; Clark, J. A.; Cleva, F.; Cocchieri, C.; Coccia, E.; Cohadon, P. -F.; Colla, A.; Collette, C. G.; Cominsky, L. R.; Constancio, M., Jr.; Conti, L.; Cooper, S. J.; Corban, P.; Corbitt, T. R.; Corley, K. R.; Cornish, N.; Corsi, A.; Cortese, S.; Costa, C. A.; Coughlin, M. W.; Coughlin, S. B.; Coulon, J. -P.; Countryman, S. T.; Couvares, P.; Covas, P. B.; Cowan, E. E.; Coward, D. M.; Cowart, M. J.; Coyne, D. C.; Coyne, R.; Creighton, J. D. E.; Creighton, T. D.; Cripe, J.; Crowder, S. G.; Cullen, T. J.; Cumming, A.; Cunningham, Laura; Cuoco, E.; Dal Canton, T.; Danilishin, S. L.; D'Antonio, S.; Danzmann, K.; Dasgupta, A.; Costa, C. F. Da Silva; Dattilo, V.; Dave, I.; Davier, M.; Davies, G. S.; Davis, D.; Daw, E. J.; Day, B.; De, S.; Debra, D.; Deelman, E; Degallaix, J.; De laurentis, M.; Deleglise, S.; Del Pozzo, W.; Denker, T.; Dent, T.; Dergachev, V.A.; Rosa, R.; DeRosa, R. 
T.; DeSalvo, R.; Devenson, J.; Devine, R. C.; Dhurandhar, S.; Diaz, M. C.; Di Fiore, L.; Giovanni, M. Di; Di Girolamo, T.; Di Lieto, A.; Di Pace, S.; Di Palma, I.; Di Renzo, F.; Doctor, Z.; Dolique, V.; Donovan, F.; Dooley, K. L.; Doravari, S.; Dorrington, I.; Douglas, R.; Alvarez, M. Dovale; Downes, T. P.; Drago, M.; Drever, R. W. P.; Driggers, J. C.; Du, Z.; Ducrot, M.; Duncan, J.; Dwyer, S. E.; Edo, T. B.; Edwards, M. C.; Effler, A.; Eggenstein, H. -B.; Ehrens, P.; Eichholz, J.; Eikenberry, S. S.; Essick, R. C.; Etzel, T.; Evans, M.; Evans, T. M.; Factourovich, M.; Fafone, V.; Fair, H.; Fairhurst, S.; Fan, X.; Farinon, S.; Farr, B.; Farr, W. M.; Fauchon-Jones, E. J.; Favata, M.; Fays, M.; Fehrmann, H.; Feicht, J.; Fejer, M. M.; Fernandez-Galiana, A.; Ferrante, I.; Ferreira, E. C.; Ferrini, F.; Fidecaro, F.; Fiori, I.; Fiorucci, D.; Fisher, R. P.; Flaminio, R.; Fletcher, M; Fong, H.; Forsyth, P. W. F.; Forsyth, S. S.; Fournier, J. -D.; Frasca, S.; Frasconi, F.; Frei, Z.; Freise, A.; Frey, R.; Frey, V.; Fries, E. M.; Fritschel, P.; Frolov, V. V.; Fulda, P.; Fyffe, M.; Gabbard, H.; Gabel, M.; Gadre, B. U.; Gaebel, S. M.; Gair, J. R.; Gammaitoni, L.; Ganija, M. R.; Gaonkar, S. G.; Garufi, F.; Gaudio, S.; Gaur, G.; Gayathri, V.; Gehrels, N.; Gemme, G.; Genin, E.; Gennai, A.; George, D.J.; George, J.; Gergely, L.; Germain, V.; Ghonge, S.; Ghosh, Abhirup; Ghosh, Archisman; Ghosh, S.; Giaime, J. A.; Giardina, K. D.; Giazotto, A.; Gill, K.P.; Glover, L.; Goetz, E.; Goetz, R.; Gomes, A.S.P.; Gonzalez, Idelmis G.; Castro, J. M. Gonzalez; Gopakumar, A.; Gorodetsky, M. L.; Gossan, S. E.; Lee-Gosselin, M.; Gouaty, R.; Grado, A.; Graef, C.; Granata, M.; Grant, A.; Gras, S.; Gray, C.M.; Greco, G.; Green, A. C.; Groot, P.; Grote, H.; Grunewald, S.; Gruning, P.; Guidi, G. M.; Guo, X.; Gupta, A.; Gupta, M. K.; Gushwa, K. E.; Gustafson, E. K.; Gustafson, R.; Hall, B. R.; Hall, E. D.; Hammond, G.; Haney, M.; Hanke, M. M.; Hanks, J.; Hanna, C.; Hannuksela, O. A.; Hanson, J.; Hardwick, T.; Harms, J.; Harry, G. M.; Harry, I. W.; Hart, M. J.; Haster, C. -J.; Haughian, K.; Healy, J.; Heidmann, A.; Heintze, M. C.; Heitmann, H.; Hello, P.; Hemming, G.; Hendry, M.; Heng, I. S.; Hennig, J.; Henry, J.A.; Heptonstall, A. W.; Heurs, M.; Hild, S.; Hoak, D.; Hofman, D.; Holt, K.; Holz, D. E.; Hopkins, P.; Horst, C.; Hough, J.; Houston, E. A.; Howell, E. J.; Hu, Y. M.; Huerta, E. A.; Huet, D.; Hughey, B.; Husa, S.; Huttner, S. H.; Huynh-Dinh, T.; Indik, N.; Ingram, D. R.; Inta, R.; Intini, G.; Isa, H. N.; Isac, J. -M.; Isi, M.; Iyer, B. R.; Izumi, K.; Jacqmin, T.; Jani, K.; Jaranowski, P.; Jawahar, S.; Jimenez-Forteza, F.; Johnson, W.; Jones, I.D.; Jones, R.; Jonker, R. J. G.; Ju, L.; Junker, J.; Kalaghatgi, C. V.; Kalogera, V.; Kandhasamy, S.; Kang, G.; Kanner, J. B.; Karki, S.; Karvinen, K. S.; Kasprzack, M.; Katolik, M.; Katsavounidis, E.; Katzman, W.; Kaufer, S.; Kawabe, K.; Kefelian, F.; Keitel, D.; Kemball, A. J.; Kennedy, R.E.; Kent, C.; Key, J. S.; Khalili, F. Y.; Khan, I.; Khan., S.; Khan, Z.; Khazanov, E. A.; Kijbunchoo, N.; Kim, Chunglee; Kim, J. C.; Kim, W.; Kim, S.W.; Kim, Y.M.; Kimbrell, S. J.; King, E. J.; King, P. J.; Kirchhoff, R.; Kissel, J. S.; Kleybolte, L.; Klimenko, S.; Koch, P.; Koehlenbeck, S. M.; Koley, S.; Kondrashov, V.; Kontos, A.; Korobko, M.; Korth, W. Z.; Kowalska, I.; Kozak, D. B.; Kramer, C.; Kringel, V.; Krishnan, B.; Krolak, A.; Kuehn, G.; Kumar, P.; Kumar, R.; Kumar, S.; Kuo, L.; Kutynia, A.; Kwang-Cheol, S.; Lackey, B. D.; Lai, K. H.; Landry, M.; Lang, R. 
N.; Lange, J.; Lantz, B.; Lanza, R. K.; Lartaux-Vollard, A.; Lasky, P. D.; Laxen, M.; Lazzarini, A.; Lazzaro, C.; Leaci, P.; Leavey, S.; Lee, C.H.; Lee, K.H.; Lee, M.H.; Lee, W. H.; Lee, K.; Lehmann, J.; Lenon, A.; Leonardi, M.; Leroy, N.; Letendre, N.; Levin, Y.; Li, T. G. F.; Libson, A.; Littenberg, T. B.; Liu, J.; Lockerbie, N. A.; London, L. T.; Lord, J. E.; Lorenzini, M.; Loriette, V.; Lormand, M.; Losurdo, G.; Lough, J. D.; Lovelace, G.; Luck, H.; Lumaca, D.; Lundgren, A. P.; Lynch, R.; Ma, Y.; Macfoy, S.; Machenschalk, B.; MacInnis, M.; Macleod, D. M.; Magana Hernandez, I.; Magana-Sandoval, F.; Magana Zertuche, L.; Magee, R. M.; Majorana, E.; Maksimovic, I.; Man, N.; Mandic, V.; Mangano, V.; Mansell, G. L.; Manske, M.; Mantovani, M.; Marchesoni, F.; Marion, F.; Marka, S.; Marka, Z.; Markakis, C.; Markosyan, A. S.; Maros, E.; Martelli, F.; Martellini, L.; Martin, I. W.; Martynov, D. V.; Marx, J. N.; Mason, K.; Masserot, A.; Massinger, T. J.; Masso-Reid, M.; Mastrogiovanni, S.; Matas, A.; Matichard, F.; Matone, L.; Mavalvala, N.; Mayani, R.; Mazumder, N.; McCarthy, R.; McClelland, D. E.; McCormick, S.; McCuller, L.; McGuire, S. C.; McIntyre, G.; McIver, J.; McManus, D. J.; McRae, T.; McWilliams, S. T.; Meacher, D.; Meadors, G. D.; Meidam, J.; Mejuto-Villa, E.; Melatos, A.; Mendell, G.; Mercer, R. A.; Merilh, E. L.; Merzougui, M.; Meshkov, S.; Messenger, C.; Messick, C.; Metzdorff, R.; Meyers, P. M.; Mezzani, F.; Miao, H.; Michel, C.; Middleton, H.; Mikhailov, E. E.; Milano, L.; Miller, A. L.; Miller, A.; Miller, B. B.; Miller, J.; Millhouse, M.; Minazzoli, O.; Minenkov, Y.; Ming, J.; Mishra, C.; Mitra, S.; Mitrofanov, V. P.; Mitselmakher, G.; Mittleman, R.; Moggi, A.; Mohan, M.; Mohapatra, S. R. P.; Montani, M.; Moore, B.C.; Moore, C. J.; Moraru, D.; Moreno, G.; Morriss, S. R.; Mours, B.; Mow-Lowry, C. M.; Mueller, G.; Muir, A. W.; Mukherjee, Arunava; Mukherjee, S.D.; Mukherjee, S.; Mukund, N.; Mullavey, A.; Munch, J.; Muniz, E. A. M.; Murray, P.G.; Napier, K.; Nardecchia, I.; Naticchioni, L.; Nayak, R. K.; Nelemans, G.; Nelson, T. J. N.; Gutierrez-Neri, M.; Nery, M.; Neunzert, A.; Newport, J. M.; Newton, G.; Ng, K. K. Y.; Nguyen, T. T.; Nichols, D.; Nielsen, A. B.; Nissanke, S.; Nitz, A.; Noack, A.; Nocera, F.; Nolting, D.; Normandin, M. E. N.; Nuttall, L. K.; Oberling, J.; Ochsner, E.; Oelker, E.; Ogin, G. H.; Oh, J. J.; Oh, S. H.; Ohme, F.; Oliver, M.; Oppermann, P.; Oram, Richard J.; O'Reilly, B.; Ormiston, R.; Ortega, L. F.; O'Shaughnessy, R.; Ottaway, D. J.; Overmier, H.; Owen, B. J.; Pace, A. E.; Page, J.; Page, M. A.; Pai, A.; Pai, S. A.; Palamos, J. R.; Palashov, O.; Palomba, C.; Pal-Singh, A.; Pan, H.; Pang, B.; Pang, P. T. H.; Pankow, C.; Pannarale, F.; Pant, B. C.; Paoletti, F.; Paoli, A.; Papa, M. A.; Paris, H. R.; Parker, W.S; Pascucci, D.; Pasqualetti, A.; Passaquieti, R.; Passuello, D.; Patricelli, B.; Pearlstone, B. L.; Pedraza, M.; Pedurand, R.; Pekowsky, L.; Pele, A.; Penn, S.; Castro-Perez, J.; Perreca, A.; Perri, L. M.; Pfeiffer, H. P.; Phelps, M.; Piccinni, O. J.; Pichot, M.; Piergiovanni, F.; Pierro, V.; Pillant, G.; Pinard, L.; Pinto, I. M.; Pitkin, M.; Poggiani, R.; Popolizio, P.; Porter, E. K.; Post, A.; Powell, J.; Prasad, J.; Pratt, J. W. W.; Predoi, V.; Prestegard, T.; Prijatelj, M.; Principe, M.; Privitera, S.; Prix, R.; Prodi, G. A.; Prokhorov, L. G.; Puncken, O.; Punturo, M.; Puppo, P.; Purrer, M.; Qi, H.; Qin, J.; Qiu, S.; Quetschke, V.; Quintero, E. A.; Quitzow-James, R.; Raab, F. J.; Rabeling, D. 
S.; Radkins, H.; Raffai, P.; Raja, S.; Rajan, C.; Rakhmanov, M.; Ramirez, K. E.; Rapagnani, P.; Raymond, V.; Razzano, M.; Read, J.; Regimbau, T.; Rei, L.; Reid, S.; Reitze, D. H.; Rew, H.; Reyes, S. D.; Ricci, F.; Ricker, P. M.; Rieger, S.; Riles, K.; Rizzo, M.; Robertson, N. A.; Robie, R.; Robinet, F.; Rocchi, A.; Rolland, L.; Rollins, J. G.; Roma, V. J.; Romano, R.; Romel, C. L.; Romie, J. H.; Rosinska, D.; Ross, M. P.; Rowan, S.; Rudiger, A.; Ruggi, P.; Ryan, K.; Rynge, M.; Sachdev, Perminder S; Sadecki, T.; Sadeghian, L.; Sakellariadou, M.; Salconi, L.; Saleem, M.; Salemi, F.; Samajdar, A.; Sammut, L.; Sampson, L. M.; Sanchez, E. J.; Sandberg, V.; Sandeen, B.; Sanders, J. R.; Sassolas, B.; Sathyaprakash, B. S.; Saulson, P. R.; Sauter, O.; Savage, R. L.; Sawadsky, A.; Schale, P.; Scheuer, J.; Schmidt, E.; Schmidt, J; Schmidt, P.; Schnabel, R.B.; Schofield, R. M. S.; Schonbeck, A.; Schreiber, K.E.C.; Schuette, D.; Schulte, B. W.; Schutz, B. F.; Schwalbe, S. G.; Scott, J.; Scott, S. M.; Seidel, E.; Sellers, D.; Sengupta, A. S.; Sentenac, D.; Sequino, V.; Sergeev, A.; Shaddock, D. A.; Shaffer, T. J.; Shah, A.; Shahriar, M. S.; Shao, L.P.; Shapiro, B.; Shawhan, P.; Sheperd, A.; Shoemaker, D. H.; Shoemaker, D. M.; Siellez, K.; Siemens, X.; Sieniawska, M.; Sigg, D.; Silva, António Dias da; Singer, A; Singer, L. P.; Singh, A.; Singh, R.; Singhal, A.; Sintes, A. M.; Slagmolen, B. J. J.; Smith, B.; Smith, R. J. E.; Smith, R. J. E.; Son, E. J.; Sonnenberg, J. A.; Sorazu, B.; Sorrentino, F.; Souradeep, T.; Spencer, A. P.; Srivastava, A. K.; Staley, A.; Steinke, M.; Steinlechner, J.; Steinlechner, S.; Steinmeyer, D.; Stephens, B. C.; Stone, R.; Strain, K. A.; Stratta, G.; Strigin, S. E.; Sturani, R.; Stuver, A. L.; Summerscales, T. Z.; Sun, L.; Sunil, S.; Sutton, P. J.; Swinkels, B. L.; Szczepanczyk, M. J.; Tacca, M.; Talukder, D.; Tanner, D. B.; Tapai, M.; Taracchini, A.; Taylor, J. A.; Taylor, W.R.; Theeg, T.; Thomas, E. G.; Thomas, M.; Thomas, P.; Thorne, K. A.; Thorne, K. S.; Thrane, E.; Tiwari, S.; Tiwari, V.; Tokmakov, K. V.; Toland, K.; Tonelli, M.; Tornasi, Z.; Torrie, C. I.; Toyra, D.; Travasso, F.; Traylor, G.; Trifiro, D.; Trinastic, J.; Tringali, M. C.; Trozzo, L.; Tsang, K. W.; Tse, M.; Tso, R.; Tuyenbayev, D.; Ueno, K.; Ugolini, D.; Unnikrishnan, C. S.; Urban, A. L.; Usman, S. A.; Vahi, K.; Vahlbruch, H.; Vajente, G.; Valdes, G.; van Bakel, N.; Van Beuzekom, Martin; van den Brand, J. F. J.; Van Den Broeck, C.F.F.; Vander-Hyde, D. C.; van der Schaaf, L.; van Heijningen, J. V.; van Veggel, A. A.; Vardaro, M.; Varma, V.; Vass, S.; Vasuth, M.; Vecchio, A.; Vedovato, G.; Veitch, J.; Veitch, P.J.; Venkateswara, K.; Venugopalan, G.; Verkindt, D.; Vetrano, F.; Vicere, A.; Viets, A. D.; Vinciguerra, S.; Vine, D. J.; Vinet, J. -Y.; Vitale, S.; Vo, T.; Vocca, H.; Vorvick, C.; Voss, D. V.; Vousden, W. D.; Vyatchanin, S. P.; Wade, A. R.; Wade, L. E.; Wade, MT; Walet, R.; Walker, M.; Wallace, L.; Walsh, S.; Wang, G.; Wang, H.; Wang, J. Z.; Wang, M.; Wang, Y. -F.; Wang, Y. -F.; Ward, L.; Warner, J.; Was, M.; Watchi, J.; Weaver, B.; Wei, L. -W.; Weinert, M.; Weinstein, A. J.; Weiss, R.; Wen, L.; Wessel, E. K.; Wessels, P.; Westphal, T.; Wette, K.; Whelan, J. T.; Whiting, B. F.; Whittle, C.; Williams, D.; Williams, D.R.; Williamson, A. R.; Willis, J. L.; Willke, B.; Wimmer, M. H.; Winkler, W.; Wipf, C. C.; Wittel, H.; Woan, G.; Woehler, J.; Wofford, J.; Wong, G.W.K.; Worden, J.; Wright, J.L.; Wu, D.S.; Wu, G.; Yam, W.; Yamamoto, H.; Yancey, C. C.; Yap, M. 
J.; Yu, Hang; Yu, Haocun; Yvert, M.; Zadrozny, A.; Zanolin, M.; Zelenova, T.; Zendri, J. -P.; Zevin, M.; Zhang, L.; Zhang, M.; Zhang, T.; Zhang, Y. -H.; Zhao, C.; Zhou, M.; Zhou, Z.; Zhu, X. J.; Zucker, M. E.; Zweizig, J.; Suvorova, S.; Moran, W.; Evans, J.R.

    2017-01-01

    Results are presented from a semicoherent search for continuous gravitational waves from the brightest low-mass X-ray binary, Scorpius X-1, using data collected during the first Advanced LIGO observing run. The search combines a frequency domain matched filter (Bessel-weighted F-statistic) with a

  12. Modeling Methodologies for Representing Urban Cultural Geographies in Stability Operations

    National Research Council Canada - National Science Library

    Ferris, Todd P

    2008-01-01

    ... 2.0.0, in an effort to provide modeling methodologies for a single simulation tool capable of exploring the complex world of urban cultural geographies undergoing Stability Operations in an irregular warfare (IW) environment...

  13. JELO: A Model of Joint Expeditionary Logistics Operations

    National Research Council Canada - National Science Library

    Boensel, Matthew

    2004-01-01

    JELO is an Excel spreadsheet model of joint expeditionary logistics operations and allows end-to-end analysis of the options for closing forces from CONUS, through the sea base, to objectives ashore...

  14. Aviation Shipboard Operations Modeling and Simulation (ASOMS) Laboratory

    Data.gov (United States)

    Federal Laboratory Consortium — Purpose: It is the mission of the Aviation Shipboard Operations Modeling and Simulation (ASOMS) Laboratory to provide a means by which to virtually duplicate products...

  15. Aerial Search Optimization Model (ASOM) for UAVs in Special Operations

    National Research Council Canada - National Science Library

    Kress, Moshe; Royset, Johannes O

    2007-01-01

    .... The special operations team is equipped with short-range surveillance UAVs. We combine intelligence regarding the targets with availability and capability of UAVs in an integer linear programming model...

  16. Advancing reservoir operation description in physically based hydrological models

    Science.gov (United States)

    Anghileri, Daniela; Giudici, Federico; Castelletti, Andrea; Burlando, Paolo

    2016-04-01

    Recent decades have seen significant advances in our capacity to characterize and reproduce hydrological processes within physically based models. Yet, when the human component is considered (e.g. reservoirs, water distribution systems), the associated decisions are generally modeled with very simplistic rules, which might underperform in reproducing the actual operators' behaviour on a daily or sub-daily basis. For example, reservoir operations are usually described by a target-level rule curve, which represents the level that the reservoir should track during normal operating conditions. The associated release decision is determined by the current state of the reservoir relative to the rule curve. This modeling approach can reasonably reproduce the seasonal water volume shift due to reservoir operation. Still, it cannot capture more complex decision making processes in response to, e.g., fluctuations in energy prices and demands, the temporary unavailability of power plants, or the varying amount of snow accumulated in the basin. In this work, we link a physically explicit hydrological model with detailed hydropower behavioural models describing the decision making process of the dam operator. In particular, we consider two categories of behavioural models: explicit or rule-based behavioural models, where reservoir operating rules are empirically inferred from observational data, and implicit or optimization-based behavioural models, where, following a normative economic approach, the decision maker is represented as a rational agent maximising a utility function. We compare these two alternative modelling approaches on the real-world water system of the Lake Como catchment in the Italian Alps. The water system is characterized by the presence of 18 artificial hydropower reservoirs generating almost 13% of the Italian hydropower production. Results show to what extent the hydrological regime in the catchment is affected by different behavioural models and reservoir
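
    A minimal sketch of the "explicit", rule-based end of that spectrum, i.e. a target-level rule curve: the release follows the inflow plus a correction proportional to the deviation of the current level from a seasonal target. The target shape, gain, and release bounds below are invented for illustration and are not the Lake Como operating rules.

        # Target-level rule-curve sketch; all levels, bounds and the gain are hypothetical.
        import math

        def target_level(day_of_year, low=180.0, high=185.0):
            """Hypothetical seasonal target level (m a.s.l.): low in winter, high in summer."""
            return low + (high - low) * 0.5 * (1.0 - math.cos(2.0 * math.pi * day_of_year / 365.0))

        def rule_curve_release(level, day_of_year, inflow,
                               gain=50.0, release_min=5.0, release_max=300.0):
            """Release (m3/s): pass the inflow plus a correction toward the target level."""
            deviation = level - target_level(day_of_year)
            return min(max(inflow + gain * deviation, release_min), release_max)

        print(rule_curve_release(level=184.2, day_of_year=200, inflow=120.0))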

  17. Bayesian network modeling of operator's state recognition process

    International Nuclear Information System (INIS)

    Hatakeyama, Naoki; Furuta, Kazuo

    2000-01-01

    Nowadays we face the difficult problem of establishing a good relationship between humans and machines. To solve this problem, we suppose that machine systems need to have a model of human behavior. In this study we model the state cognition process of a PWR plant operator as an example. We use a Bayesian network as an inference engine. We incorporate the knowledge hierarchy in the Bayesian network and confirm its validity using the example of a PWR plant operator. (author)
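
    A toy sketch of the inference step in such a model: a hidden plant state generates binary indications, assumed conditionally independent given the state, and the posterior over states is obtained by enumeration. The states, indications, and every probability below are invented for illustration and are unrelated to the PWR knowledge base used in the paper.

        # Toy Bayesian-network sketch of operator-style state recognition (all numbers invented).
        PRIOR = {"normal": 0.90, "sg_tube_leak": 0.05, "loss_of_feedwater": 0.05}

        # P(indication "on" | state); indications assumed conditionally independent given the state.
        LIKELIHOOD = {
            "normal":            {"pressurizer_level_low": 0.02, "sg_level_low": 0.02, "radiation_high": 0.01},
            "sg_tube_leak":      {"pressurizer_level_low": 0.80, "sg_level_low": 0.30, "radiation_high": 0.90},
            "loss_of_feedwater": {"pressurizer_level_low": 0.10, "sg_level_low": 0.95, "radiation_high": 0.05},
        }

        def posterior(observations):
            """observations: dict indication -> bool; returns P(state | observations)."""
            scores = {}
            for state, prior in PRIOR.items():
                p = prior
                for indication, seen in observations.items():
                    p_on = LIKELIHOOD[state][indication]
                    p *= p_on if seen else (1.0 - p_on)
                scores[state] = p
            total = sum(scores.values())
            return {state: score / total for state, score in scores.items()}

        print(posterior({"pressurizer_level_low": True, "sg_level_low": False, "radiation_high": True}))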

  18. Sex differences in the effect of wheel running on subsequent nicotine-seeking in a rat adolescent-onset self-administration model.

    Science.gov (United States)

    Sanchez, Victoria; Moore, Catherine F; Brunzell, Darlene H; Lynch, Wendy J

    2014-04-01

    Wheel running attenuates nicotine-seeking in male adolescent rats; however, it is not known if this effect extends to females. To determine if wheel running during abstinence would differentially attenuate subsequent nicotine-seeking in male and female rats that had extended access to nicotine self-administration during adolescence. Male (n = 49) and female (n = 43) adolescent rats self-administered saline or nicotine (5 μg/kg) under an extended access (23-h) paradigm. Following the last self-administration session, rats were moved to polycarbonate cages for an abstinence period where they either had access to a locked or unlocked running wheel for 2 h/day. Subsequently, nicotine-seeking was examined under a within-session extinction/cue-induced reinstatement paradigm. Due to low levels of nicotine-seeking in females in both wheel groups, additional groups were included that were housed without access to a running wheel during abstinence. Females self-administered more nicotine as compared to males; however, within males and females, intake did not differ between groups prior to wheel assignment. Compared to saline controls, males and females that self-administered nicotine showed a significant increase in drug-seeking during extinction. Wheel running during abstinence attenuated nicotine-seeking during extinction in males. In females, access to either locked or unlocked wheels attenuated nicotine-seeking during extinction. While responding was reinstated by cues in both males and females, levels were modest and not significantly affected by exercise in this adolescent-onset model. While wheel running reduced subsequent nicotine-seeking in males, access to a wheel, either locked or unlocked, was sufficient to suppress nicotine-seeking in females.

  19. The LHC Tier1 at PIC: experience from first LHC run

    Directory of Open Access Journals (Sweden)

    Flix J.

    2013-11-01

    Full Text Available This paper summarizes the operational experience of the Tier1 computer center at Port d’Informació Científica (PIC) supporting the commissioning and first run (Run1) of the Large Hadron Collider (LHC). The evolution of the experiment computing models resulting from the higher amounts of data expected after the restart of the LHC is also described.

  20. A Coupled Snow Operations-Skier Demand Model for the Ontario (Canada) Ski Region

    Science.gov (United States)

    Pons, Marc; Scott, Daniel; Steiger, Robert; Rutty, Michelle; Johnson, Peter; Vilella, Marc

    2016-04-01

    The multi-billion dollar global ski industry is one of the tourism subsectors most directly impacted by climate variability and change. In the decades ahead, the scholarly literature consistently projects decreased reliability of natural snow cover, shortened and more variable ski seasons, as well as increased reliance on snowmaking with associated increases in operational costs. In order to develop the coupled snow, ski operations and demand model for the Ontario ski region (which represents approximately 18% of Canada's ski market), the research utilized multiple methods, including: an in situ survey of over 2400 skiers, daily operations data from ski resorts over the last 10 years, climate station data (1981-2013), a climate change scenario ensemble (AR5 - RCP 8.5), an updated SkiSim model (building on Scott et al. 2003; Steiger 2010), and an agent-based model (building on Pons et al. 2014). Daily snow and ski operations for all ski areas in southern Ontario were modeled with the updated SkiSim model, which utilized the current differential snowmaking capacity of individual resorts, as determined from daily ski area operations data. Snowmaking capacities and decision rules were informed by interviews with ski area managers and daily operations data. Model outputs were validated with local climate station and ski operations data. The coupled SkiSim-ABM model was run with historical weather data for seasons representative of an average winter for the 1981-2010 period, as well as an anomalously cold winter (2012-13) and the record warm winter in the region (2011-12). The impact on total skier visits and revenues, and the geographic and temporal distribution of skier visits, were compared. The implications of further climate adaptation (i.e., improving the snowmaking capacity of all ski areas to the level of leading resorts in the region) were also explored. This research advances system modelling, especially improving the integration of snow and ski operations models with

  1. Changes in running economy following downhill running.

    Science.gov (United States)

    Chen, Trevor C; Nosaka, Kazunori; Tu, Jui-Hung

    2007-01-01

    In this study, we examined the time course of changes in running economy following a 30-min downhill (-15%) run at 70% peak aerobic power (VO2peak). Ten young men performed level running at 65, 75, and 85% VO2peak (5 min for each intensity) before, immediately after, and 1 - 5 days after the downhill run, at which times oxygen consumption (VO2), minute ventilation, the respiratory exchange ratio (RER), heart rate, ratings of perceived exertion (RPE), and blood lactate concentration were measured. Stride length, stride frequency, and range of motion of the ankle, knee, and hip joints during the level runs were analysed using high-speed (120-Hz) video images. Downhill running induced increases (7 - 21%, P < 0.05) in oxygen consumption, heart rate, minute ventilation, RER, RPE, blood lactate concentration, and stride frequency, as well as reductions in stride length and range of motion of the ankle and knee. The results suggest that changes in running form and compromised muscle function due to muscle damage contribute to the reduction in running economy for 3 days after downhill running.

  2. Running Parallel Discrete Event Simulators on Sierra

    Energy Technology Data Exchange (ETDEWEB)

    Barnes, P. D. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Jefferson, D. R. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2015-12-03

    In this proposal we consider porting the ROSS/Charm++ simulator and the discrete event models that run under its control so that they run on the Sierra architecture and make efficient use of the Volta GPUs.

  3. Effect of long-term voluntary exercise wheel running on susceptibility to bacterial pulmonary infections in a mouse model

    DEFF Research Database (Denmark)

    van de Weert-van Leeuwen, Pauline B; de Vrankrijker, Angélica M M; Fentz, Joachim

    2013-01-01

    moderate exercise has many health benefits, healthy mice showed increased bacterial (P. aeruginosa) load and symptoms, after regular voluntary exercise, with perseverance of the phagocytic capacity of monocytes and neutrophils. Whether patients, suffering from bacterial infectious diseases, should......Regular moderate exercise has been suggested to exert anti-inflammatory effects and improve immune effector functions, resulting in reduced disease incidence and viral infection susceptibility. Whether regular exercise also affects bacterial infection susceptibility is unknown. The aim...... of this study was to investigate whether regular voluntary exercise wheel running prior to a pulmonary infection with bacteria (P. aeruginosa) affects lung bacteriology, sickness severity and phagocyte immune function in mice. Balb/c mice were randomly placed in a cage with or without a running wheel. After 28...

  4. INTELLECTUAL MODEL FORMATION OF RAILWAY STATION WORK DURING THE TRAIN OPERATION EXECUTION

    Directory of Open Access Journals (Sweden)

    O. V. Lavrukhin

    2014-11-01

    Full Text Available Purpose. The aim of this research is to develop an intelligent technology for determining the optimal route for freight train handling on the basis of technical and technological parameters, allowing the station duty officer to take informed operational decisions regarding train operation within the railway station. Methodology. The main elements of the research are the technical and technological parameters of the train station during train operation. Neural network methods, used to form a self-teaching automated system, underlie the developed model of train operation execution. Findings. The presented model of train operation execution at the railway station is realized on the basis of artificial neural networks, using a learning algorithm with a 'teacher' in the Matlab environment. Matlab is also used for the immediate implementation of the intelligent automated control system of train operation, designed for integration into the automated workplace of the station duty officer. The developed system is also useful for integration into the workplace of the traffic controller; this proposal is viable where centralized traffic control is available on a separate section of railway track. Originality. A model of train station operation during train operation execution with elements of artificial intelligence was formed. It allows the station duty officer to take informed decisions concerning the choice of a rational and safe option for receiving and passing trains without stopping, with the ability to self-learn and adapt to changing conditions. This is achieved by the principles of neural network functioning. Practical value. A model of the intelligent management system for process control, determining the optimal reception route for different categories of trains, was formed. In the operational mode it offers the possibility

  5. A model to predict productivity of different chipping operations ...

    African Journals Online (AJOL)

    Additional international case studies from North America, South America, and central and northern Europe were used to test the accuracy of the model, in which 15 studies confirmed the model's validity and two failed to pass the test. Keywords: average piece size, chipper, power, sensitivity analysis, type of operation, unit ...

  6. A Model for Resource Allocation Using Operational Knowledge Assets

    Science.gov (United States)

    Andreou, Andreas N.; Bontis, Nick

    2007-01-01

    Purpose: The paper seeks to develop a business model that shows the impact of operational knowledge assets on intellectual capital (IC) components and business performance and use the model to show how knowledge assets can be prioritized in driving resource allocation decisions. Design/methodology/approach: Quantitative data were collected from 84…

  7. The development of a model of control room operator cognition

    International Nuclear Information System (INIS)

    Harrison, C. Felicity

    1998-01-01

    The nuclear generating station CRO is one of the main contributors to plant performance and safety. In the past, studies of operator behaviour have been made under emergency or abnormal situations, with little consideration given to the more routine aspects of plant operation. One of the tasks of the operator is to detect the early signs of a problem and to take steps to prevent a transition to an abnormal plant state. In order to do this, the CRO must determine that plant indications are no longer in the normal range and take action to prevent a further move away from normal. This task is made more difficult by the extreme complexity of the control room and by the many hindrances that the operator must face. It would therefore be of great benefit to understand CRO cognitive performance, especially under normal operating conditions. Through research carried out at several Canadian nuclear facilities, we were able to develop a deeper understanding of CRO monitoring of highly automated systems during normal operations, and specifically to investigate the contributions of cognitive skills to monitoring performance. The overall objective of this research was to develop and validate a model of CRO monitoring. The findings of this research have practical implications for systems integration, training, and interface design. The result of this work was a model of operator monitoring activities. (author)

  8. Estimation of pump operational state with model-based methods

    International Nuclear Information System (INIS)

    Ahonen, Tero; Tamminen, Jussi; Ahola, Jero; Viholainen, Juha; Aranto, Niina; Kestilae, Juha

    2010-01-01

    Pumps are widely used in industry, and they account for 20% of the industrial electricity consumption. Since the speed variation is often the most energy-efficient method to control the head and flow rate of a centrifugal pump, frequency converters are used with induction motor-driven pumps. Although a frequency converter can estimate the operational state of an induction motor without external measurements, the state of a centrifugal pump or other load machine is not typically considered. The pump is, however, usually controlled on the basis of the required flow rate or output pressure. As the pump operational state can be estimated with a general model having adjustable parameters, external flow rate or pressure measurements are not necessary to determine the pump flow rate or output pressure. Hence, external measurements could be replaced with an adjustable model for the pump that uses estimates of the motor operational state. Besides control purposes, modelling the pump operation can provide useful information for energy auditing and optimization purposes. In this paper, two model-based methods for pump operation estimation are presented. Factors affecting the accuracy of the estimation methods are analyzed. The applicability of the methods is verified by laboratory measurements and tests in two pilot installations. Test results indicate that the estimation methods can be applied to the analysis and control of pump operation. The accuracy of the methods is sufficient for auditing purposes, and the methods can inform the user if the pump is driven inefficiently.
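
    A minimal sketch of the idea, using one common model-based variant (a QP/QH curve method) with placeholder curve coefficients rather than a real pump's datasheet: the frequency converter's speed and shaft-power estimates are referred to the nominal speed with the affinity laws, the published power curve is inverted to estimate the flow rate, and the head curve is then evaluated at that flow.

        # Model-based pump state estimation sketch (QP/QH curve method, placeholder coefficients).
        import numpy as np

        N_NOM = 1450.0                              # nominal speed (rpm) of the published curves
        P_CURVE = np.array([0.0005, 0.15, 3.0])     # shaft power (kW) vs flow (l/s) at N_NOM
        H_CURVE = np.array([-0.01, 0.05, 32.0])     # head (m) vs flow (l/s) at N_NOM

        def estimate_flow(power_kw, speed_rpm):
            """Invert the affinity-scaled QP curve: P scales with (n/N)^3, Q with n/N."""
            ratio = speed_rpm / N_NOM
            p_nominal = power_kw / ratio ** 3              # refer measured power to nominal speed
            a, b, c = P_CURVE
            roots = np.roots([a, b, c - p_nominal])        # solve P_curve(Q_nominal) = p_nominal
            q_nominal = max(r.real for r in roots if abs(r.imag) < 1e-9 and r.real >= 0)
            return q_nominal * ratio                       # scale the flow back to actual speed

        def estimate_head(flow_ls, speed_rpm):
            """Affinity-scaled QH curve: H scales with (n/N)^2, Q with n/N."""
            ratio = speed_rpm / N_NOM
            return ratio ** 2 * np.polyval(H_CURVE, flow_ls / ratio)

        q = estimate_flow(power_kw=5.5, speed_rpm=1300.0)
        print("estimated flow (l/s):", q, " estimated head (m):", estimate_head(q, 1300.0))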

  9. Run-Time Control For Software Defined Radio

    NARCIS (Netherlands)

    Smit, L.T.; Smit, Gerardus Johannes Maria; Havinga, Paul J.M.; Hurink, Johann L.; Broersma, Haitze J.

    2002-01-01

    A control system is presented, which adapts at run-time a software defined radio to the dynamic external environment. The goal is to operate with minimized use of resources and energy consumption, while satisfying an adequate quality of service. The control system is based on a model, which selects

  10. Leadership and characteristics of nonprofit mental health peer-run organizations nationwide.

    Science.gov (United States)

    Ostrow, Laysha; Hayes, Stephania L

    2015-04-01

    Mental health peer-run organizations are nonprofits providing venues for support and advocacy among people diagnosed as having mental disorders. It has been proposed that consumer involvement is essential to their operations. This study reported organizational characteristics of peer-run organizations nationwide and how these organizations differ by degree of consumer control. Data were from the 2012 National Survey of Peer-Run Organizations. The analyses described the characteristics of the organizations (N=380) on five domains of nonprofit research, comparing results for organizations grouped by degree of involvement by consumers in the board of directors. Peer-run organizations provided a range of supports and educational and advocacy activities and varied in their capacity and resources. Some variation was explained by the degree of consumer control. These organizations seemed to be operating consistently with evidence on peer-run models. The reach of peer-run organizations, and the need for in-depth research, continues to grow.

  11. Koopman Operator Framework for Time Series Modeling and Analysis

    Science.gov (United States)

    Surana, Amit

    2018-01-01

    We propose an interdisciplinary framework for time series classification, forecasting, and anomaly detection by combining concepts from Koopman operator theory, machine learning, and linear systems and control theory. At the core of this framework is nonlinear dynamic generative modeling of time series using the Koopman operator which is an infinite-dimensional but linear operator. Rather than working with the underlying nonlinear model, we propose two simpler linear representations or model forms based on Koopman spectral properties. We show that these model forms are invariants of the generative model and can be readily identified directly from data using techniques for computing Koopman spectral properties without requiring the explicit knowledge of the generative model. We also introduce different notions of distances on the space of such model forms which is essential for model comparison/clustering. We employ the space of Koopman model forms equipped with distance in conjunction with classical machine learning techniques to develop a framework for automatic feature generation for time series classification. The forecasting/anomaly detection framework is based on using Koopman model forms along with classical linear systems and control approaches. We demonstrate the proposed framework for human activity classification, and for time series forecasting/anomaly detection in power grid application.
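
    One standard, data-driven way to approximate such Koopman spectral properties is dynamic mode decomposition (DMD); the numpy sketch below recovers the oscillation frequencies of a synthetic multivariate signal from snapshot pairs. It illustrates the general identification step only and is not the specific model forms or distances proposed in the paper.

        # Minimal exact-DMD sketch: approximate Koopman eigenvalues/modes from snapshot pairs.
        import numpy as np

        def dmd(X, Y, rank=None):
            """X, Y: snapshot matrices with Y[:, k] the successor of X[:, k]."""
            U, s, Vh = np.linalg.svd(X, full_matrices=False)
            if rank is not None:
                U, s, Vh = U[:, :rank], s[:rank], Vh[:rank]
            A_tilde = U.conj().T @ Y @ Vh.conj().T @ np.diag(1.0 / s)   # reduced linear operator
            eigvals, W = np.linalg.eig(A_tilde)
            modes = Y @ Vh.conj().T @ np.diag(1.0 / s) @ W              # exact DMD modes
            return eigvals, modes

        # Demo: a noisy two-frequency latent signal observed through 10 random channels.
        rng = np.random.default_rng(0)
        dt = 0.1
        t = np.arange(200) * dt
        latent = np.vstack([np.sin(1.3 * t), np.cos(1.3 * t), np.sin(0.7 * t), np.cos(0.7 * t)])
        data = rng.standard_normal((10, 4)) @ latent + 0.01 * rng.standard_normal((10, t.size))
        eigvals, _ = dmd(data[:, :-1], data[:, 1:], rank=4)
        print("recovered frequencies (rad/s):", np.sort(np.abs(np.log(eigvals).imag / dt)))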

  12. BSM-MBR: a benchmark simulation model to compare control and operational strategies for membrane bioreactors.

    Science.gov (United States)

    Maere, Thomas; Verrecht, Bart; Moerenhout, Stefanie; Judd, Simon; Nopens, Ingmar

    2011-03-01

    A benchmark simulation model for membrane bioreactors (BSM-MBR) was developed to evaluate operational and control strategies in terms of effluent quality and operational costs. The configuration of the existing BSM1 for conventional wastewater treatment plants was adapted using reactor volumes, pumped sludge flows and membrane filtration for the water-sludge separation. The BSM1 performance criteria were extended for an MBR taking into account additional pumping requirements for permeate production and aeration requirements for membrane fouling prevention. To incorporate the effects of elevated sludge concentrations on aeration efficiency and costs, a dedicated aeration model was adopted. Steady-state and dynamic simulations revealed that BSM-MBR, as expected, outperforms BSM1 in effluent quality, mainly due to complete retention of solids and improved ammonium removal from extensive aeration combined with higher biomass levels. However, this was at the expense of significantly higher operational costs. A comparison with three large-scale MBRs showed BSM-MBR energy costs to be realistic. The membrane aeration costs for the open-loop simulations were rather high, attributed to non-optimization of BSM-MBR. As a proof of concept, two closed-loop simulations were run to demonstrate the usefulness of BSM-MBR for identifying control strategies to lower operational costs without compromising effluent quality. Copyright © 2011 Elsevier Ltd. All rights reserved.

  13. Categorical model of structural operational semantics for imperative language

    Directory of Open Access Journals (Sweden)

    William Steingartner

    2016-12-01

    Full Text Available The definition of programming languages consists of the formal definition of syntax and semantics. One of the most popular semantic methods used at various stages of software engineering is structural operational semantics. It describes program behavior in the form of state changes after the execution of elementary steps of the program. This feature makes structural operational semantics useful for the implementation of programming languages and also for verification purposes. In our paper we present a new approach to structural operational semantics. We model the behavior of programs in a category of states, where objects are states (an abstraction of computer memory) and morphisms model state changes (the execution of a program in elementary steps). The advantage of using a categorical model is its exact mathematical structure, with many useful proved properties, and its graphical illustration of program behavior as a path, i.e. a composition of morphisms. Our approach is able to accentuate the dynamics of structural operational semantics. For simplicity, we assume that data are intuitively typed. The visualization and facility of our model not only constitute a new model of the structural operational semantics of imperative programming languages but can also serve educational purposes.
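
    A small illustration of this view in executable form (Python standing in for the paper's notation): states are objects, here dictionaries mapping variable names to values; elementary execution steps are morphisms, here state-to-state functions; and executing a program corresponds to composing those morphisms into a path.

        # States as objects (dicts), elementary steps as morphisms, a program run as composition.
        def assign(var, expr):
            """Morphism for the statement `var := expr(state)`."""
            return lambda state: {**state, var: expr(state)}

        def compose(*morphisms):
            """Compose morphisms: apply the elementary steps from left to right."""
            def path(state):
                for step in morphisms:
                    state = step(state)
                return state
            return path

        # The program  x := 3; y := x + 1; x := x * y  as a path in the category of states.
        program = compose(
            assign("x", lambda s: 3),
            assign("y", lambda s: s["x"] + 1),
            assign("x", lambda s: s["x"] * s["y"]),
        )
        print(program({}))   # {'x': 12, 'y': 4}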

  14. VERIFICATION OF GEAR DYNAMIC MODEL IN DIFFERENT OPERATING CONDITIONS

    Directory of Open Access Journals (Sweden)

    Grzegorz PERUŃ

    2014-09-01

    Full Text Available The article presents the results of the verification of a drive system dynamic model with a gear. Tests were carried out on the real object in different operating conditions. Simulation studies were also carried out for the same assumed conditions. Comparison of the results obtained from those two series of tests helped determine the suitability of the model and verify the possibility of replacing experimental research with simulations using the dynamic model.

  15. Modelling and optimal operation of a small-scale integrated energy based district heating and cooling system

    International Nuclear Information System (INIS)

    Jing, Z.X.; Jiang, X.S.; Wu, Q.H.; Tang, W.H.; Hua, B.

    2014-01-01

    This paper presents a comprehensive model of a small-scale integrated energy based district heating and cooling (DHC) system located in a residential area of hot-summer and cold-winter zone, which makes joint use of wind energy, solar energy, natural gas and electric energy. The model includes an off-grid wind turbine generator, heat producers, chillers, a water supply network and terminal loads. This research also investigates an optimal operating strategy based on Group Search Optimizer (GSO), through which the daily running cost of the system is optimized in both the heating and cooling modes. The strategy can be used to find the optimal number of operating chillers, optimal outlet water temperature set points of boilers and optimal water flow set points of pumps, taking into account cost functions and various operating constraints. In order to verify the model and the optimal operating strategy, performance tests have been undertaken using MATLAB. The simulation results prove the validity of the model and show that the strategy is able to minimize the system operation cost. The proposed system is evaluated in comparison with a conventional separation production (SP) system. The feasibility of investment for the DHC system is also discussed. The comparative results demonstrate the investment feasibility, the significant energy saving and the cost reduction, achieved in daily operation in an environment, where there are varying heating loads, cooling loads, wind speeds, solar radiations and electricity prices. - Highlights: • A model of a small-scale integrated energy based DHC system is presented. • An off-grid wind generator used for water heating is embedded in the model. • An optimal control strategy is studied to optimize the running cost of the system. • The designed system is proved to be energy efficient and cost effective in operation
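
    As a rough illustration of the optimization task described above (and not the paper's Group Search Optimizer), the sketch below uses a plain random search over invented decision variables, cost coefficients and constraints, only to show how candidate set points can be scored by a daily running cost under operating constraints.

```python
import random

# Hedged sketch: the paper uses Group Search Optimizer; here a plain random
# search stands in for it, only to show the shape of the problem (decision
# variables, operating constraints, daily running cost). All numbers are made up.

def daily_cost(n_chillers, boiler_temp_c, pump_flow_m3h,
               elec_price=0.12, gas_price=0.05):
    chiller_energy = 80.0 * n_chillers                 # kWh per chiller per day
    boiler_gas     = 4.0 * max(boiler_temp_c - 50.0, 0.0)
    pump_energy    = 0.02 * pump_flow_m3h ** 1.5       # pump law, illustrative
    return elec_price * (chiller_energy + pump_energy) + gas_price * boiler_gas

def feasible(n_chillers, boiler_temp_c, pump_flow_m3h):
    return 1 <= n_chillers <= 4 and 60 <= boiler_temp_c <= 90 and 50 <= pump_flow_m3h <= 400

best = None
for _ in range(10000):
    candidate = (random.randint(1, 4), random.uniform(60, 90), random.uniform(50, 400))
    if feasible(*candidate):
        cost = daily_cost(*candidate)
        if best is None or cost < best[0]:
            best = (cost, candidate)

print("best daily cost:", round(best[0], 2), "with set points", best[1])
```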

  16. Testing and Implementation of the Navy's Operational Circulation Model for the Mediterranean Sea

    Science.gov (United States)

    Farrar, P. D.; Mask, A. C.

    2012-04-01

    The US Naval Oceanographic Office (NAVOCEANO) has the responsibility for running ocean models in support of Navy operations. NAVOCEANO delivers Navy-relevant global, regional, and coastal ocean forecast products on a 24 hour/7 day a week schedule. In 2011, NAVOCEANO implemented an operational version of the RNCOM (Regional Navy Coastal Ocean Model) for the Mediterranean Sea (MedSea), replacing an older variation of the Princeton Ocean Model originally set up for this area in the mid-1990s. RNCOM is a gridded model that assimilates both satellite data and in situ profile data in near real time. This 3 km MedSea RNCOM is nested within a lower resolution global NCOM in the Atlantic at 12.5 degrees West longitude. Before being accepted as a source of operational products, a Navy ocean model must pass a series of validation tests, and once in service its skill is monitored by software and regional specialists. This presentation will provide a brief summary of the initial evaluation results. Because of the oceanographic peculiarities of this basin, the MedSea implementation posed a set of new problems for an RNCOM operation. One problem was that the present Navy satellite altimetry assimilation techniques do not improve Mediterranean NCOM forecasts, so this assimilation has been turned off pending improvements. Another problem was temporal aliasing: since most in situ observations were profiling floats with short five-day profiling intervals, comparing these observations to the NCOM predictions was problematic. Because of the time and spatial correlations in the MedSea and in the model, the observation/model comparisons would give an unrealistically optimistic estimate of model accuracy for the Mediterranean's temperature/salinity structure. Careful pre-selection of profiles for comparison during the evaluation stage, based on spatial distribution and novelty, was used to minimize this effect. NAVOCEANO's operational customers are interested primarily in

  17. Tourism Operator Sustainability Predictive Model in Marine Park

    OpenAIRE

    Mohamad, Zaleha; Ramli, Nurhafizah; Muslim, Aidy Mohamed Shawal M.; Hii, Yii Siang

    2017-01-01

    Sustainable tourism is the concept of visiting a place as a tourist and trying to make only a positive impact on the environment, society and economy. Tourism can involve primary transportation to the general location, local transportation, accommodations, entertainment, recreation, nourishment and shopping. In this context, the research studies tourism operators in recreation. This study analyzed a sustainability predictive model for tourism operators in a marine park. The research...

  18. Designing visual displays and system models for safe reactor operations

    International Nuclear Information System (INIS)

    Brown-VanHoozer, S.A.

    1995-01-01

    The material presented in this paper is based on two studies involving the design of visual displays and the user's perspective model of a system. The studies involve a methodology known as Neuro-Linguistic Programming and its use in expanding design choices from the operator's perspective image. The content of this paper focuses on the studies and how they are applicable to the safety of operating reactors

  19. Model of environmental life cycle assessment for coal mining operations

    Energy Technology Data Exchange (ETDEWEB)

    Burchart-Korol, Dorota, E-mail: dburchart@gig.eu; Fugiel, Agata, E-mail: afugiel@gig.eu; Czaplicka-Kolarz, Krystyna, E-mail: kczaplicka@gig.eu; Turek, Marian, E-mail: mturek@gig.eu

    2016-08-15

    This paper presents a novel approach to environmental assessment of coal mining operations, which enables assessment of the factors that are both directly and indirectly affecting the environment and are associated with the production of raw materials and energy used in processes. The primary novelty of the paper is the development of a computational environmental life cycle assessment (LCA) model for coal mining operations and the application of the model for coal mining operations in Poland. The LCA model enables the assessment of environmental indicators for all identified unit processes in hard coal mines with the life cycle approach. The proposed model enables the assessment of greenhouse gas emissions (GHGs) based on the IPCC method and the assessment of damage categories, such as human health, ecosystems and resources based on the ReCiPe method. The model enables the assessment of GHGs for hard coal mining operations in three time frames: 20, 100 and 500 years. The model was used to evaluate the coal mines in Poland. It was demonstrated that the largest environmental impacts in damage categories were associated with the use of fossil fuels, methane emissions and the use of electricity, processing of wastes, heat, and steel supports. It was concluded that an environmental assessment of coal mining operations, apart from direct influence from processing waste, methane emissions and drainage water, should include the use of electricity, heat and steel, particularly for steel supports. Because the model allows the comparison of environmental impact assessment for various unit processes, it can be used for all hard coal mines, not only in Poland but also in the world. This development is an important step forward in the study of the impacts of fossil fuels on the environment with the potential to mitigate the impact of the coal industry on the environment. - Highlights: • A computational LCA model for assessment of coal mining operations • Identification of

  20. Model of environmental life cycle assessment for coal mining operations

    International Nuclear Information System (INIS)

    Burchart-Korol, Dorota; Fugiel, Agata; Czaplicka-Kolarz, Krystyna; Turek, Marian

    2016-01-01

    This paper presents a novel approach to environmental assessment of coal mining operations, which enables assessment of the factors that are both directly and indirectly affecting the environment and are associated with the production of raw materials and energy used in processes. The primary novelty of the paper is the development of a computational environmental life cycle assessment (LCA) model for coal mining operations and the application of the model for coal mining operations in Poland. The LCA model enables the assessment of environmental indicators for all identified unit processes in hard coal mines with the life cycle approach. The proposed model enables the assessment of greenhouse gas emissions (GHGs) based on the IPCC method and the assessment of damage categories, such as human health, ecosystems and resources based on the ReCiPe method. The model enables the assessment of GHGs for hard coal mining operations in three time frames: 20, 100 and 500 years. The model was used to evaluate the coal mines in Poland. It was demonstrated that the largest environmental impacts in damage categories were associated with the use of fossil fuels, methane emissions and the use of electricity, processing of wastes, heat, and steel supports. It was concluded that an environmental assessment of coal mining operations, apart from direct influence from processing waste, methane emissions and drainage water, should include the use of electricity, heat and steel, particularly for steel supports. Because the model allows the comparison of environmental impact assessment for various unit processes, it can be used for all hard coal mines, not only in Poland but also in the world. This development is an important step forward in the study of the impacts of fossil fuels on the environment with the potential to mitigate the impact of the coal industry on the environment. - Highlights: • A computational LCA model for assessment of coal mining operations • Identification of

  1. Hysteresis modeling based on saturation operator without constraints

    International Nuclear Information System (INIS)

    Park, Y.W.; Seok, Y.T.; Park, H.J.; Chung, J.Y.

    2007-01-01

    This paper proposes a simple way to model complex hysteresis in a magnetostrictive actuator by employing saturation operators without constraints. Having no constraints causes a singularity problem, i.e. the inverse matrix cannot be obtained when calculating the weights. To overcome this, a pseudoinverse concept is introduced. Simulation results are compared with the experimental data, based on a Terfenol-D actuator. It is clear that the proposed model is much closer to the experimental data than the modified PI model. The relative error is calculated as 12% and less than 1% with the modified PI model and the proposed model, respectively
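
    As a minimal sketch of the pseudoinverse step mentioned above (not the paper's model), the example below identifies operator weights by least squares when the matrix of operator outputs is rank-deficient, so the ordinary inverse of the normal equations is unavailable but the Moore-Penrose pseudoinverse still yields a solution; the operator outputs and data are random placeholders.

```python
import numpy as np

# Sketch of the weight-identification step described above: when the matrix of
# operator outputs is singular (no constraints on the operators), the ordinary
# inverse fails, but the Moore-Penrose pseudoinverse still gives least-squares
# weights. The operator outputs and measured data below are made-up placeholders.

rng = np.random.default_rng(0)
n_samples, n_operators = 200, 10

# Columns = outputs of individual saturation operators evaluated on the input
# history; two columns are made identical so the normal equations are singular.
Phi = rng.standard_normal((n_samples, n_operators))
Phi[:, -1] = Phi[:, 0]

true_w = rng.standard_normal(n_operators)
y = Phi @ true_w + 0.01 * rng.standard_normal(n_samples)   # measured hysteresis output

# np.linalg.inv(Phi.T @ Phi) would fail or blow up here; the pseudoinverse does not.
w = np.linalg.pinv(Phi) @ y
print("relative fit error:", np.linalg.norm(Phi @ w - y) / np.linalg.norm(y))
```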

  2. FLUKA predictions of the absorbed dose in the HCAL Endcap scintillators using a Run1 (2012) CMS FLUKA model

    CERN Document Server

    CMS Collaboration

    2016-01-01

    Estimates of absorbed dose in the HCAL Endcap (HE) region as predicted by the FLUKA Monte Carlo code. Dose is calculated in an R-phi-Z grid overlaying the HE region, with a resolution of 1 cm in R, 1 mm in Z, and a single 360 degree bin in phi. This allows calculation of absorbed dose within a single 4 mm thick scintillator layer without including other regions or materials. This note shows estimates of the cumulative dose in scintillator layers 1 and 7 during the 2012 run.

  3. Marine Vessel Models in Changing Operational Conditions - A Tutorial

    DEFF Research Database (Denmark)

    Perez, Tristan; Sørensen, Asgeir; Blanke, Mogens

    2006-01-01

    This tutorial paper provides an introduction, from a systems perspective, to the topic of ship motion dynamics of surface ships. It presents a classification of parametric models currently used for monitoring and control of marine vessels. These models are valid for certain vessel operational conditions (VOC). However, since marine systems operate in changing VOCs, there is a need to adapt the models. To date, there is no theory available to describe a general model valid across different VOCs due to the complexity of the hydrodynamics involved. It is believed that system identification could provide a significant contribution towards obtaining such a general model. Therefore, the main aim of the paper is to highlight the essential characteristics of marine system dynamics so as to provide a background for practitioners who would attempt future application of system identification techniques...

  4. Effect of long-term voluntary exercise wheel running on susceptibility to bacterial pulmonary infections in a mouse model.

    Directory of Open Access Journals (Sweden)

    Pauline B van de Weert-van Leeuwen

    Full Text Available Regular moderate exercise has been suggested to exert anti-inflammatory effects and improve immune effector functions, resulting in reduced disease incidence and viral infection susceptibility. Whether regular exercise also affects bacterial infection susceptibility is unknown. The aim of this study was to investigate whether regular voluntary exercise wheel running prior to a pulmonary infection with bacteria (P. aeruginosa) affects lung bacteriology, sickness severity and phagocyte immune function in mice. Balb/c mice were randomly placed in a cage with or without a running wheel. After 28 days, mice were intranasally infected with P. aeruginosa. Our study showed that regular exercise resulted in a higher sickness severity score and higher bacterial (P. aeruginosa) loads in the lungs. The phagocytic capacity of monocytes and neutrophils from spleen and lungs was not affected. Although regular moderate exercise has many health benefits, healthy mice showed an increased bacterial (P. aeruginosa) load and more symptoms after regular voluntary exercise, with preservation of the phagocytic capacity of monocytes and neutrophils. Whether patients suffering from bacterial infectious diseases should be encouraged to engage in exercise and physical activity, or should do so with caution, requires further research.

  5. A Novel Technique for Running the NASA Legacy Code LAPIN Synchronously With Simulations Developed Using Simulink

    Science.gov (United States)

    Vrnak, Daniel R.; Stueber, Thomas J.; Le, Dzu K.

    2012-01-01

    This report presents a method for running a dynamic legacy inlet simulation in concert with another dynamic simulation that uses a graphical interface. The legacy code, NASA's LArge Perturbation INlet (LAPIN) model, was coded using the FORTRAN 77 (The Portland Group, Lake Oswego, OR) programming language to run in a command shell similar to other applications that used the Microsoft Disk Operating System (MS-DOS) (Microsoft Corporation, Redmond, WA). Simulink (MathWorks, Natick, MA) is a dynamic simulation that runs on a modern graphical operating system. The product of this work has both simulations, LAPIN and Simulink, running synchronously on the same computer with periodic data exchanges. Implementing the method described in this paper avoided extensive changes to the legacy code and preserved its basic operating procedure. This paper presents a novel method that promotes inter-task data communication between the synchronously running processes.
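
    As a rough illustration of the coupling idea described above (and not the actual LAPIN/Simulink interface), the sketch below runs two toy simulations in lock-step and exchanges data once per time step through a pair of queues, so neither side can run ahead of the other; the "inlet" dynamics and set points are invented.

```python
import threading, queue

# Hedged sketch of synchronous co-simulation with periodic data exchange:
# two simulations advance in lock-step and trade data once per time step
# through a pair of queues, so neither runs ahead of the other.

to_legacy, to_graphical = queue.Queue(), queue.Queue()
N_STEPS = 5

def legacy_sim():
    pressure = 100.0
    for step in range(N_STEPS):
        setpoint = to_legacy.get()               # wait for the other simulation
        pressure += 0.5 * (setpoint - pressure)  # toy "inlet" dynamics
        to_graphical.put(pressure)

def graphical_sim():
    setpoint = 120.0
    for step in range(N_STEPS):
        to_legacy.put(setpoint)                  # send this step's input
        pressure = to_graphical.get()            # receive this step's output
        print(f"step {step}: setpoint={setpoint:.1f}, pressure={pressure:.2f}")

threads = [threading.Thread(target=legacy_sim), threading.Thread(target=graphical_sim)]
for t in threads: t.start()
for t in threads: t.join()
```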

  6. High-resolution modeling of tsunami run-up flooding: a case study of flooding in Kamaishi city, Japan, induced by the 2011 Tohoku tsunami

    Directory of Open Access Journals (Sweden)

    R. Akoh

    2017-11-01

    Full Text Available Run-up processes of the 2011 Tohoku tsunami into the city of Kamaishi, Japan, were simulated numerically using 2-D shallow water equations with a new treatment of building footprints. The model imposes an internal hydraulic condition of permeable and impermeable walls at the building footprint outline on unstructured triangular meshes. Digital data of the building footprint approximated by polygons were overlaid on a 1.0 m resolution terrain model. The hydraulic boundary conditions were ascertained using conventional tsunami propagation calculation from the seismic center to nearshore areas. Run-up flow calculations were conducted under the same hydraulic conditions for several cases having different building permeabilities. Comparison of computation results with field data suggests that the case with a small amount of wall permeability gives better agreement than the case with impermeable condition. Spatial mapping of an indicator for run-up flow intensity (IF = (hU²)max, where h and U respectively denote the inundation depth and flow velocity during the flood) shows fairly good correlation with the distribution of houses destroyed by flooding. As a possible mitigation measure, the influence of the buildings on the flow was assessed using a numerical experiment for solid buildings arrayed alternately in two lines along the coast. Results show that the buildings can prevent seawater from flowing straight to the city center while maintaining access to the sea.
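
    The indicator quoted above is simply the maximum over the flood duration of h·U², evaluated per location; the short sketch below computes it from a depth and velocity time series, with the sample numbers invented for illustration.

```python
# Sketch of the flow-intensity indicator used above: IF = max over the flood
# duration of h * U**2, computed per grid cell from depth and velocity series.
# The sample time series below are invented for illustration.

def flow_intensity(depths_m, velocities_ms):
    """IF = (h * U^2)_max over the time series of one cell."""
    return max(h * u * u for h, u in zip(depths_m, velocities_ms))

h_series = [0.0, 0.8, 2.1, 2.6, 1.9, 0.7]      # inundation depth h [m]
u_series = [0.0, 1.5, 3.2, 2.4, 1.1, 0.4]      # flow velocity U [m/s]
print("IF =", round(flow_intensity(h_series, u_series), 2), "m^3/s^2")
```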

  7. High-resolution modeling of tsunami run-up flooding: a case study of flooding in Kamaishi city, Japan, induced by the 2011 Tohoku tsunami

    Science.gov (United States)

    Akoh, Ryosuke; Ishikawa, Tadaharu; Kojima, Takashi; Tomaru, Mahito; Maeno, Shiro

    2017-11-01

    Run-up processes of the 2011 Tohoku tsunami into the city of Kamaishi, Japan, were simulated numerically using 2-D shallow water equations with a new treatment of building footprints. The model imposes an internal hydraulic condition of permeable and impermeable walls at the building footprint outline on unstructured triangular meshes. Digital data of the building footprint approximated by polygons were overlaid on a 1.0 m resolution terrain model. The hydraulic boundary conditions were ascertained using conventional tsunami propagation calculation from the seismic center to nearshore areas. Run-up flow calculations were conducted under the same hydraulic conditions for several cases having different building permeabilities. Comparison of computation results with field data suggests that the case with a small amount of wall permeability gives better agreement than the case with impermeable condition. Spatial mapping of an indicator for run-up flow intensity (IF = (hU²)max, where h and U respectively denote the inundation depth and flow velocity during the flood) shows fairly good correlation with the distribution of houses destroyed by flooding. As a possible mitigation measure, the influence of the buildings on the flow was assessed using a numerical experiment for solid buildings arrayed alternately in two lines along the coast. Results show that the buildings can prevent seawater from flowing straight to the city center while maintaining access to the sea.

  8. Making the error-controlling algorithm of observable operator models constructive.

    Science.gov (United States)

    Zhao, Ming-Jie; Jaeger, Herbert; Thon, Michael

    2009-12-01

    Observable operator models (OOMs) are a class of models for stochastic processes that properly subsumes the class that can be modeled by finite-dimensional hidden Markov models (HMMs). One of the main advantages of OOMs over HMMs is that they admit asymptotically correct learning algorithms. A series of learning algorithms has been developed, with increasing computational and statistical efficiency, whose recent culmination was the error-controlling (EC) algorithm developed by the first author. The EC algorithm is an iterative, asymptotically correct algorithm that yields (and minimizes) an assured upper bound on the modeling error. The run time is faster by at least one order of magnitude than EM-based HMM learning algorithms and yields significantly more accurate models than the latter. Here we present a significant improvement of the EC algorithm: the constructive error-controlling (CEC) algorithm. CEC inherits from EC the main idea of minimizing an upper bound on the modeling error but is constructive where EC needs iterations. As a consequence, we obtain further gains in learning speed without loss in modeling accuracy.
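
    For orientation, an m-dimensional OOM consists of observable operators τ_a together with an initial state w0, and assigns a sequence probability P(a1…an) = 1^T τ_{an} ··· τ_{a1} w0; the sketch below (not the EC/CEC learning algorithm itself) evaluates this product for a small hand-made OOM, written down from an HMM so that it is valid by construction.

```python
import numpy as np

# Sketch of what an OOM is (not the EC/CEC learning algorithm): an m-dimensional
# OOM is a set of observable operators tau_a plus an initial state w0, and the
# probability of a sequence a1..an is  P(a1..an) = 1^T tau_an ... tau_a1 w0.
# The two-symbol example below is an HMM written in OOM form, so it is valid.

T = np.array([[0.7, 0.3],
              [0.4, 0.6]])          # T[i, j] = P(next state j | state i)
E = np.array([[0.9, 0.1],
              [0.2, 0.8]])          # E[i, a] = P(symbol a | state i)
w0 = np.array([0.5, 0.5])           # initial state distribution

tau = {a: T.T @ np.diag(E[:, a]) for a in (0, 1)}

def sequence_probability(seq):
    w = w0
    for a in seq:
        w = tau[a] @ w
    return w.sum()                  # 1^T w

print(sequence_probability([0, 1, 1]))
# Probabilities of all length-3 sequences sum to 1:
print(sum(sequence_probability([a, b, c]) for a in (0, 1) for b in (0, 1) for c in (0, 1)))
```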

  9. Neural Networks for Hydrological Modeling Tool for Operational Purposes

    Science.gov (United States)

    Bhatt, Divya; Jain, Ashu

    2010-05-01

    Hydrological models are useful in many water resources applications such as flood control, irrigation and drainage, hydro power generation, water supply, erosion and sediment control, etc. Estimates of runoff are needed in many water resources planning, design, development, operation and maintenance activities. Runoff is generally computed using rainfall-runoff models. Computer-based hydrologic models have become popular for obtaining hydrological forecasts and for managing water systems. The Rainfall-Runoff Library (RRL) is computer software developed by the Cooperative Research Centre for Catchment Hydrology (CRCCH), Australia, consisting of five different conceptual rainfall-runoff models, and has been in operation in many water resources applications in Australia. Recently, soft artificial intelligence tools such as Artificial Neural Networks (ANNs) have become popular for research purposes but have not been adopted in operational hydrological forecasts. There is a strong need to develop ANN models based on real catchment data and compare them with the conceptual models actually in use in real catchments. In this paper, the results from an investigation on the use of the RRL and ANNs are presented. Out of the five conceptual models in the RRL toolkit, the SimHyd model has been used. A Genetic Algorithm has been used as an optimizer in the RRL to calibrate the SimHyd model. Trial-and-error procedures were employed to arrive at the best values of the various parameters involved in the GA optimizer to develop the SimHyd model. The results obtained from the best configuration of the SimHyd model are presented here. A feed-forward neural network model structure trained by the back-propagation training algorithm has been adopted here to develop the ANN models. The daily rainfall and runoff data derived from Bird Creek Basin, Oklahoma, USA have been employed to develop all the models included here. A wide range of error statistics have been used to evaluate the performance of all the models
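
    As a toy version of the ANN approach described above (not the Bird Creek models; the data are synthetic and the network is deliberately tiny), the sketch below trains a one-hidden-layer feed-forward network by back-propagation to map recent rainfall depths to runoff.

```python
import numpy as np

# Hedged sketch of the ANN approach: a one-hidden-layer feed-forward network
# trained by back-propagation to map recent rainfall to runoff. Data are synthetic.

rng = np.random.default_rng(1)

# Synthetic "catchment": runoff is a noisy nonlinear function of 3 rainfall lags.
rain = rng.uniform(0.0, 20.0, size=(500, 3))                  # rainfall at t, t-1, t-2 [mm]
runoff = (0.3 * rain[:, 0] + 0.2 * rain[:, 1] + 0.1 * rain[:, 2]) ** 1.2
runoff = (runoff + rng.normal(0.0, 0.5, size=runoff.shape)).reshape(-1, 1)

# Standardize inputs and target so plain gradient descent behaves well.
X = (rain - rain.mean(0)) / rain.std(0)
y_mean, y_std = runoff.mean(), runoff.std()
y = (runoff - y_mean) / y_std

# Network: 3 inputs -> 8 tanh units -> 1 linear output, trained by back-propagation.
W1 = rng.normal(0, 0.5, (3, 8)); b1 = np.zeros(8)
W2 = rng.normal(0, 0.5, (8, 1)); b2 = np.zeros(1)
lr = 0.05

for epoch in range(3000):
    H = np.tanh(X @ W1 + b1)                 # hidden activations
    pred = H @ W2 + b2                       # linear output
    err = pred - y
    dW2 = H.T @ err / len(X); db2 = err.mean(axis=0)
    dH = err @ W2.T * (1.0 - H ** 2)         # back-propagate through tanh
    dW1 = X.T @ dH / len(X); db1 = dH.mean(axis=0)
    W1 -= lr * dW1; b1 -= lr * db1; W2 -= lr * dW2; b2 -= lr * db2

pred_mm = (np.tanh(X @ W1 + b1) @ W2 + b2) * y_std + y_mean
print("training RMSE [mm]:", float(np.sqrt(np.mean((pred_mm - runoff) ** 2))))
```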

  10. Model of environmental life cycle assessment for coal mining operations.

    Science.gov (United States)

    Burchart-Korol, Dorota; Fugiel, Agata; Czaplicka-Kolarz, Krystyna; Turek, Marian

    2016-08-15

    This paper presents a novel approach to environmental assessment of coal mining operations, which enables assessment of the factors that are both directly and indirectly affecting the environment and are associated with the production of raw materials and energy used in processes. The primary novelty of the paper is the development of a computational environmental life cycle assessment (LCA) model for coal mining operations and the application of the model for coal mining operations in Poland. The LCA model enables the assessment of environmental indicators for all identified unit processes in hard coal mines with the life cycle approach. The proposed model enables the assessment of greenhouse gas emissions (GHGs) based on the IPCC method and the assessment of damage categories, such as human health, ecosystems and resources based on the ReCiPe method. The model enables the assessment of GHGs for hard coal mining operations in three time frames: 20, 100 and 500 years. The model was used to evaluate the coal mines in Poland. It was demonstrated that the largest environmental impacts in damage categories were associated with the use of fossil fuels, methane emissions and the use of electricity, processing of wastes, heat, and steel supports. It was concluded that an environmental assessment of coal mining operations, apart from direct influence from processing waste, methane emissions and drainage water, should include the use of electricity, heat and steel, particularly for steel supports. Because the model allows the comparison of environmental impact assessment for various unit processes, it can be used for all hard coal mines, not only in Poland but also in the world. This development is an important step forward in the study of the impacts of fossil fuels on the environment with the potential to mitigate the impact of the coal industry on the environment. Copyright © 2016 Elsevier B.V. All rights reserved.

  11. Modelling safety of gantry crane operations using Petri nets.

    Science.gov (United States)

    Singh, Karmveer; Raj, Navneet; Sahu, S K; Behera, R K; Sarkar, Sobhan; Maiti, J

    2017-03-01

    Being a powerful tool in modelling industrial and service operations, the Petri net (PN) has been used extensively in different domains, but its application in safety studies is limited. In this study, we model the gantry crane operations used for industrial activities using generalized stochastic PNs. The complete cycle of operations of the gantry crane is split into three parts, namely inspection and loading, movement of load, and unloading of load. PN models are developed for all three parts and for the whole system as well. The developed PN models capture the safety issues through the reachability tree. The hazardous states are identified, and it is demonstrated how they ultimately lead to unwanted accidents. The possibilities of the load falling and of failure of the hook, sling, attachment and hoist rope are identified. Suggestions based on the study are presented for redesign of the system; for example, mechanical stoppage of operations in the case of a loosely connected load and a warning system against the use of wrong buttons are tested using modified models.
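
    As an illustration of the reachability idea only (the paper uses a richer generalized stochastic PN), the sketch below encodes a much simplified lift cycle as an ordinary one-token-per-place Petri net and enumerates its reachable markings to check whether a hazardous marking (load being lifted while the hook has failed) can be reached; all place and transition names are invented.

```python
from collections import deque

# Hedged sketch: an ordinary (untimed, non-stochastic) Petri net for a much
# simplified lift cycle, with a brute-force reachability search that checks
# whether a hazardous marking ("lifting while hook failed") is reachable.

places = ["idle", "load_attached", "lifting", "hook_ok", "hook_failed"]
# transition name: (consumed places, produced places)
transitions = {
    "attach_load":  ({"idle", "hook_ok"},          {"load_attached", "hook_ok"}),
    "start_lift":   ({"load_attached", "hook_ok"}, {"lifting", "hook_ok"}),
    "finish_lift":  ({"lifting", "hook_ok"},       {"idle", "hook_ok"}),
    "hook_failure": ({"hook_ok"},                  {"hook_failed"}),
}

def fire(marking, pre, post):
    if pre <= marking:                       # transition enabled
        return frozenset((marking - pre) | post)
    return None

def hazard(marking):
    return "lifting" in marking and "hook_failed" in marking

initial = frozenset({"idle", "hook_ok"})
seen, queue = {initial}, deque([initial])
while queue:
    m = queue.popleft()
    for name, (pre, post) in transitions.items():
        nxt = fire(m, pre, post)
        if nxt is not None and nxt not in seen:
            seen.add(nxt)
            queue.append(nxt)

print("reachable markings:", len(seen))
print("hazardous marking reachable:", any(hazard(m) for m in seen))
```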

  12. Modeling of reservoir operation in UNH global hydrological model

    Science.gov (United States)

    Shiklomanov, Alexander; Prusevich, Alexander; Frolking, Steve; Glidden, Stanley; Lammers, Richard; Wisser, Dominik

    2015-04-01

    Climate is changing, and river flow is an integrated characteristic reflecting numerous environmental processes and their changes aggregated over large areas. Anthropogenic impacts on river flow, however, can significantly exceed the changes associated with climate variability. Besides irrigation, reservoirs and dams are among the major anthropogenic factors affecting streamflow. They distort the hydrological regime of many rivers by trapping freshwater runoff, modifying the timing of river discharge and increasing the evaporation rate. Thus, reservoirs are an integral part of the global hydrological system, and their impacts on rivers have to be taken into account for better quantification and understanding of hydrological changes. We developed a new technique, which was incorporated into the WBM-TrANS model (Water Balance Model-Transport from Anthropogenic and Natural Systems), to simulate river routing through large reservoirs and natural lakes based on information available from freely accessible databases such as GRanD (the Global Reservoir and Dam database) or NID (National Inventory of Dams for the US). Different formulations were applied for unregulated spillway dams and lakes, and for 4 types of regulated reservoirs, which were subdivided based on main purpose, including generic (multipurpose), hydropower generation, irrigation and water supply, and flood control. We also incorporated rules for reservoir fill-up and draining at the times of construction and decommission based on available data. The model was tested for many reservoirs of different size and type located in various climatic conditions, using several gridded meteorological data sets as model input and observed daily and monthly discharge data from the GRDC (Global Runoff Data Center), USGS Water Data (US Geological Survey), and UNH archives. The best results, with Nash-Sutcliffe model efficiency coefficients in the range of 0.5-0.9, were obtained for the temperate zone of the Northern Hemisphere where most of large
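
    For reference, the Nash-Sutcliffe efficiency quoted above is NSE = 1 - Σ(obs - sim)² / Σ(obs - mean(obs))²; the sketch below computes it for an invented monthly discharge series.

```python
import numpy as np

# The Nash-Sutcliffe efficiency quoted above (0.5-0.9 for the best cases) is
#   NSE = 1 - sum((obs - sim)^2) / sum((obs - mean(obs))^2).
# The short monthly-discharge series below is invented for illustration.

def nash_sutcliffe(observed, simulated):
    observed, simulated = np.asarray(observed, float), np.asarray(simulated, float)
    return 1.0 - np.sum((observed - simulated) ** 2) / np.sum((observed - observed.mean()) ** 2)

obs = [320, 410, 980, 1500, 1200, 640, 380, 300, 280, 310, 350, 330]   # m^3/s
sim = [300, 450, 900, 1350, 1300, 700, 360, 280, 300, 330, 340, 320]
print("NSE =", round(nash_sutcliffe(obs, sim), 3))
```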

  13. A forced running wheel system with a microcontroller that provides high-intensity exercise training in an animal ischemic stroke model

    Energy Technology Data Exchange (ETDEWEB)

    Chen, C.C. [Department of Electrical Engineering, National Cheng-Kung University, Tainan, Taiwan (China); Chang, M.W. [Department of Electrical Engineering, Southern Taiwan University of Science and Technology, Tainan, Taiwan (China); Chang, C.P. [Department of Biotechnology, Southern Taiwan University of Science and Technology, Tainan, Taiwan (China); Chan, S.C.; Chang, W.Y.; Yang, C.L. [Department of Electrical Engineering, National Cheng-Kung University, Tainan, Taiwan (China); Lin, M.T. [Department of Medical Research, Chi Mei Medical Center, Tainan, Taiwan (China)

    2014-08-15

    We developed a forced non-electric-shock running wheel (FNESRW) system that provides rats with high-intensity exercise training using automatic exercise training patterns that are controlled by a microcontroller. The proposed system successfully makes a breakthrough in the traditional motorized running wheel to allow rats to perform high-intensity training and to enable comparisons with the treadmill at the same exercise intensity without any electric shock. A polyvinyl chloride runway with a rough rubber surface was coated on the periphery of the wheel so as to permit automatic acceleration training, and which allowed the rats to run consistently at high speeds (30 m/min for 1 h). An animal ischemic stroke model was used to validate the proposed system. FNESRW, treadmill, control, and sham groups were studied. The FNESRW and treadmill groups underwent 3 weeks of endurance running training. After 3 weeks, the experiments of middle cerebral artery occlusion, the modified neurological severity score (mNSS), an inclined plane test, and triphenyltetrazolium chloride were performed to evaluate the effectiveness of the proposed platform. The proposed platform showed that enhancement of motor function, mNSS, and infarct volumes was significantly stronger in the FNESRW group than the control group (P<0.05) and similar to the treadmill group. The experimental data demonstrated that the proposed platform can be applied to test the benefit of exercise-preconditioning-induced neuroprotection using the animal stroke model. Additional advantages of the FNESRW system include stand-alone capability, independence of subjective human adjustment, and ease of use.

  14. A forced running wheel system with a microcontroller that provides high-intensity exercise training in an animal ischemic stroke model

    Directory of Open Access Journals (Sweden)

    C.C. Chen

    2014-10-01

    Full Text Available We developed a forced non-electric-shock running wheel (FNESRW) system that provides rats with high-intensity exercise training using automatic exercise training patterns that are controlled by a microcontroller. The proposed system successfully makes a breakthrough in the traditional motorized running wheel to allow rats to perform high-intensity training and to enable comparisons with the treadmill at the same exercise intensity without any electric shock. A polyvinyl chloride runway with a rough rubber surface was coated on the periphery of the wheel so as to permit automatic acceleration training, and which allowed the rats to run consistently at high speeds (30 m/min for 1 h). An animal ischemic stroke model was used to validate the proposed system. FNESRW, treadmill, control, and sham groups were studied. The FNESRW and treadmill groups underwent 3 weeks of endurance running training. After 3 weeks, the experiments of middle cerebral artery occlusion, the modified neurological severity score (mNSS), an inclined plane test, and triphenyltetrazolium chloride were performed to evaluate the effectiveness of the proposed platform. The proposed platform showed that enhancement of motor function, mNSS, and infarct volumes was significantly stronger in the FNESRW group than the control group (P<0.05) and similar to the treadmill group. The experimental data demonstrated that the proposed platform can be applied to test the benefit of exercise-preconditioning-induced neuroprotection using the animal stroke model. Additional advantages of the FNESRW system include stand-alone capability, independence of subjective human adjustment, and ease of use.

  15. ATLAS detector performance in Run1: Calorimeters

    CERN Document Server

    Burghgrave, B; The ATLAS collaboration

    2014-01-01

    ATLAS operated with an excellent efficiency during the Run 1 data taking period, recording an integrated luminosity of 5.3 fb-1 at √s = 7 TeV in 2011 and 21.6 fb-1 at √s = 8 TeV in 2012. The Liquid Argon and Tile Calorimeters contributed to this effort by operating with a good data quality efficiency, improving over the whole of Run 1. This poster presents the Run 1 overall status and performance, LS1 works and preparations for Run 2.

  16. Validation of Fatigue Modeling Predictions in Aviation Operations

    Science.gov (United States)

    Gregory, Kevin; Martinez, Siera; Flynn-Evans, Erin

    2017-01-01

    Bio-mathematical fatigue models that predict levels of alertness and performance are one potential tool for use within integrated fatigue risk management approaches. A number of models have been developed that provide predictions based on acute and chronic sleep loss, circadian desynchronization, and sleep inertia. Some are publicly available and gaining traction in settings such as commercial aviation as a means of evaluating flight crew schedules for potential fatigue-related risks. Yet most models have not been rigorously evaluated and independently validated for the operations to which they are being applied, and many users are not fully aware of the limitations within which model results should be interpreted and applied.

  17. Dynamic and adaptive policy models for coalition operations

    Science.gov (United States)

    Verma, Dinesh; Calo, Seraphin; Chakraborty, Supriyo; Bertino, Elisa; Williams, Chris; Tucker, Jeremy; Rivera, Brian; de Mel, Geeth R.

    2017-05-01

    It is envisioned that the success of future military operations depends on better integration, organizationally and operationally, among allies, coalition members, inter-agency partners, and so forth. However, this leads to a challenging and complex environment where the heterogeneity and dynamism of the operating environment intertwine with the evolving situational factors that affect the decision-making life cycle of the war fighter. Therefore, users in such environments need secure, accessible, and resilient information infrastructures where policy-based mechanisms adapt the behaviours of the systems to meet end-user goals. By specifying and enforcing a policy-based model and framework for operations and security which accommodates heterogeneous coalitions, high levels of agility can be enabled to allow rapid assembly and restructuring of system and information resources. However, current prevalent policy models (e.g., the rule-based event-condition-action model and its variants) are not sufficient to deal with the highly dynamic and plausibly non-deterministic nature of these environments. Therefore, to address the above challenges, in this paper we present a new approach for policies which enables managed systems to take more autonomic decisions regarding their operations.

  18. Modeling and Simulating Airport Surface Operations with Gate Conflicts

    Science.gov (United States)

    Zelinski, Shannon; Windhorst, Robert

    2017-01-01

    The Surface Operations Simulator and Scheduler (SOSS) is a fast-time simulation platform used to develop and test future surface scheduling concepts such as NASA's Air Traffic Demonstration 2 (ATD2) of time-based surface metering at Charlotte Douglas International Airport (CLT). Challenges associated with CLT surface operations have driven much of SOSS development. Recently, SOSS functionality for modeling hardstand operations was developed to address gate conflicts, which occur when an arrival and a departure wish to occupy the same gate at the same time. Because surface metering concepts such as ATD2 have the potential to increase gate conflicts as departures are held at their gates, it is important to study the interaction between surface metering and gate conflict management. Several approaches to managing gate conflicts with and without the use of hardstands were simulated, and their effects on surface operations and scheduler performance were compared.
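
    The core notion of a gate conflict used above is simply an arrival and a departure whose planned occupancy of the same gate overlaps in time; the sketch below detects such overlaps, with all flight identifiers, gates and times invented for illustration.

```python
from collections import namedtuple

# Hedged sketch of gate-conflict detection: a conflict is an arrival and a
# departure whose planned occupancy of the same gate overlaps in time.
# The flight records below are invented.

Flight = namedtuple("Flight", "ident gate start end")   # times in minutes

departures = [Flight("DAL101", "A3", 0, 35), Flight("AAL210", "B1", 10, 50)]
arrivals   = [Flight("UAL877", "A3", 20, 60), Flight("JBU455", "C2", 5, 40)]

def gate_conflicts(arrivals, departures):
    conflicts = []
    for a in arrivals:
        for d in departures:
            if a.gate == d.gate and a.start < d.end and d.start < a.end:
                conflicts.append((a.ident, d.ident, a.gate))
    return conflicts

print(gate_conflicts(arrivals, departures))   # [('UAL877', 'DAL101', 'A3')]
```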

  19. Comparison of operation optimization methods in energy system modelling

    DEFF Research Database (Denmark)

    Ommen, Torben Schmidt; Markussen, Wiebke Brix; Elmegaard, Brian

    2013-01-01

    In areas with large shares of Combined Heat and Power (CHP) production, significant introduction of intermittent renewable power production may lead to an increased number of operational constraints. As the operation pattern of each utility plant is determined by optimization of economics......, possibilities for decoupling production constraints may be valuable. Introduction of heat pumps in the district heating network may provide this ability. In order to evaluate if the introduction of heat pumps is economically viable, we develop calculation methods for the operation patterns of each of the used...... operation constraints, while the third approach uses nonlinear programming. In the present case the non-linearity occurs in the boiler efficiency of power plants and the cv-value of an extraction plant. The linear programming model is used as a benchmark, as this type is frequently used, and has the lowest...

  20. Chiefly Symmetric: Results on the Scalability of Probabilistic Model Checking for Operating-System Code

    Directory of Open Access Journals (Sweden)

    Marcus Völp

    2012-11-01

    Full Text Available Reliability in terms of functional properties from the safety-liveness spectrum is an indispensable requirement of low-level operating-system (OS) code. However, with ever more complex and thus less predictable hardware, quantitative and probabilistic guarantees become more and more important. Probabilistic model checking is one technique to automatically obtain these guarantees. First experiences with the automated quantitative analysis of low-level operating-system code confirm the expectation that the naive probabilistic model checking approach rapidly reaches its limits when increasing the number of processes. This paper reports on our work-in-progress to tackle the state explosion problem for low-level OS code caused by the exponential blow-up of the model size when the number of processes grows. We studied the symmetry reduction approach and carried out our experiments with a simple test-and-test-and-set lock case study as a representative example for a wide range of protocols with natural inter-process dependencies and long-run properties. We quickly see a state-space explosion for scenarios where inter-process dependencies are insignificant. However, once inter-process dependencies dominate the picture, models with a hundred and more processes can be constructed and analysed.
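
    To give a flavour of why symmetry helps (this is an untimed toy, not the probabilistic model from the paper): for N identical processes one can record only how many processes sit in each local state rather than which ones, so the abstract state count grows polynomially in N. The sketch below does this for a simple test-and-test-and-set lock and also checks mutual exclusion over the reachable abstract states; the local-state names are invented.

```python
from collections import deque

# Hedged sketch of the counting (symmetry-reduction) idea for N identical
# processes running a test-and-test-and-set lock. A global state is the lock
# bit plus *how many* processes sit in each local state, not *which* ones.
# This is an untimed abstraction, not the probabilistic model from the paper.

LOCAL = ("idle", "read", "try", "crit")

def successors(state):
    lock, counts = state
    counts = dict(zip(LOCAL, counts))
    out = []
    def moved(src, dst, new_lock):
        c = dict(counts); c[src] -= 1; c[dst] += 1
        return (new_lock, tuple(c[s] for s in LOCAL))
    if counts["idle"] > 0:                      # start acquiring
        out.append(moved("idle", "read", lock))
    if counts["read"] > 0 and not lock:         # saw the lock free -> attempt TAS
        out.append(moved("read", "try", lock))
    if counts["try"] > 0:                       # atomic test-and-set
        out.append(moved("try", "crit", True) if not lock else moved("try", "read", lock))
    if counts["crit"] > 0:                      # release the lock
        out.append(moved("crit", "idle", False))
    return out

def reachable(n_procs):
    init = (False, (n_procs, 0, 0, 0))
    seen, queue = {init}, deque([init])
    while queue:
        s = queue.popleft()
        for nxt in successors(s):
            if nxt not in seen:
                seen.add(nxt); queue.append(nxt)
    return seen

for n in (2, 10, 50):
    states = reachable(n)
    mutual_exclusion = all(c[LOCAL.index("crit")] <= 1 for _, c in states)
    print(f"N={n:3d}: {len(states):6d} abstract states, mutual exclusion holds: {mutual_exclusion}")
```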

  1. Model and Adaptive Operations of an Adaptive Component

    Science.gov (United States)

    Wei, Le; Zhao, Qiuyun; Shu, Hongping

    To keep up with the dynamic and open Internet environment, an adaptive component model based on an event mechanism and policy binding is proposed. Components of the model can sense external changes and give an explicit description of the external environment. According to preset policies, components can also take adaptive operations such as adding, deleting, replacing and updating when necessary, and adjust the behavior and structure of the internetware to provide better services.

  2. Cognitive model of the power unit operator activity

    International Nuclear Information System (INIS)

    Chachko, S.A.

    1992-01-01

    Basic notions making it possible to study and simulate the peculiarities of man-operator activity, in particular the operator's way of thinking, are considered. Special attention is paid to cognitive models based on the concept of the decisive role of knowledge (its acquisition, storage and application) in human mental processes and activity. The models are based on three basic notions: the professional world image, the activity strategy and spontaneous decisions.

  3. MAESTRO -- A Model and Expert System Tuning Resource for Operators

    International Nuclear Information System (INIS)

    Lager, D.L.; Brand, H.R.; Maurer, W.J.; Coffield, F.E.; Chambers, F.

    1989-01-01

    We have developed MAESTRO, a Model And Expert System Tuning Resource for Operators. It provides a unified software environment for optimizing the performance of large, complex machines, in particular the Advanced Test Accelerator and Experimental Test Accelerator at Lawrence Livermore National Laboratory. The system incorporates three approaches to tuning: a mouse-based manual interface to select and control magnets and to view displays of machine performance; an automation based on "cloning the operator" by implementing the strategies and reasoning used by the operator; an automation based on a simulator model which, when accurately matched to the machine, allows downloading of optimal sets of parameters and permits diagnosing errors in the beamline. The latter two approaches are based on the Artificial Intelligence technique known as Expert Systems. 4 refs., 4 figs

  4. Automated particulate sampler field test model operations guide

    Energy Technology Data Exchange (ETDEWEB)

    Bowyer, S.M.; Miley, H.S.

    1996-10-01

    The Automated Particulate Sampler Field Test Model Operations Guide is a collection of documents which provides a complete picture of the Automated Particulate Sampler (APS) and the Field Test in which it was evaluated. The Pacific Northwest National Laboratory (PNNL) Automated Particulate Sampler was developed for the purpose of radionuclide particulate monitoring for use under the Comprehensive Test Ban Treaty (CTBT). Its design was directed by anticipated requirements of small size, low power consumption, low noise level, fully automatic operation, and most predominantly the sensitivity requirements of the Conference on Disarmament Working Paper 224 (CDWP224). This guide is intended to serve as both a reference document for the APS and to provide detailed instructions on how to operate the sampler. This document provides a complete description of the APS Field Test Model and all the activity related to its evaluation and progression.

  5. Mathematical modelling of unglazed solar collectors under extreme operating conditions

    DEFF Research Database (Denmark)

    Bunea, M.; Perers, Bengt; Eicher, S.

    2015-01-01

    Combined heat pumps and solar collectors have received renewed interest on the heating system market worldwide. Connected to the heat pump evaporator, unglazed solar collectors can considerably increase their efficiency, but they also raise the coefficient of performance of the heat pump with higher average temperature levels at the evaporator. Simulation of these systems requires a collector model that can take into account operation at very low temperatures (below freezing) and under various weather conditions, particularly operation without solar irradiation. A solar collector mathematical model...... was found due to the condensation phenomenon and up to 40% due to frost under no solar irradiation. This work also points out the influence of the operating conditions on the collector's characteristics. Based on experiments carried out at a test facility, every heat flux on the absorber was separately...

  6. DESIGN IMPROVEMENT OF THE LOCOMOTIVE RUNNING GEARS

    Directory of Open Access Journals (Sweden)

    S. V. Myamlin

    2013-09-01

    Full Text Available Purpose. To determine the dynamic qualities of mainline freight locomotives characterizing safe motion in tangent and curved track sections at all operational speeds, a whole set of studies is needed, which includes selection of the design scheme, development of the corresponding mathematical model of the locomotive spatial fluctuations, construction of the computer calculation program, and theoretical and then experimental studies of the new designs; in this case, the results should be compared with existing designs. One of the necessary conditions for the qualitative improvement of traction rolling stock is to define the parameters of its running gears. Among the issues related to this problem, an important place is occupied by the task of determining the locomotive dynamic properties at the design stage, taking into account the selected technical solutions in the running gear design. Methodology. The mathematical modeling studies are carried out by numerical integration of the dynamic loading of the mainline locomotive using the software package «Dynamics of Rail Vehicles» («DYNRAIL»). Findings. The research on improving locomotive running gear design shows that the creation of a modern locomotive requires engineers and scientists to realize scientific and technical solutions that enhance design speed while simultaneously improving traction, braking and dynamic qualities; provide a simple and reliable design, especially of the running gear; reduce the costs of maintenance and repair; keep the initial cost and the operating costs for the whole service life low; give a high traction force when starting, as close as possible to the ultimate force of adhesion; allow operation in multiple traction mode; and ensure sufficient design speed. Practical Value. The generalization of theoretical, scientific and methodological, experimental studies aimed

  7. Effects of Obstacles on the Dynamics of Kinesins, Including Velocity and Run Length, Predicted by a Model of Two Dimensional Motion.

    Directory of Open Access Journals (Sweden)

    Woochul Nam

    Full Text Available Kinesins are molecular motors which walk along microtubules by moving their heads to different binding sites. The motion of kinesin is realized by a conformational change in the structure of the kinesin molecule and by a diffusion of one of its two heads. In this study, a novel model is developed to account for the 2D diffusion of kinesin heads to several neighboring binding sites (near the surface of microtubules). To determine the direction of the next step of a kinesin molecule, this model considers the extension in the neck linkers of kinesin and the dynamic behavior of the coiled-coil structure of the kinesin neck. Also, the mechanical interference between kinesins and obstacles anchored on the microtubules is characterized. The model predicts that both the kinesin velocity and run length (i.e., the walking distance before detaching from the microtubule) are reduced by static obstacles. The run length is decreased more significantly by static obstacles than the velocity. Moreover, our model is able to predict the motion of kinesin when other (several) motors also move along the same microtubule. Furthermore, it suggests that the effect of mechanical interaction/interference between motors is much weaker than the effect of static obstacles. Our newly developed model can be used to address unanswered questions regarding degraded transport caused by the presence of excessive tau proteins on microtubules.

  8. Water operator partnerships as a model to achieve the Millenium ...

    African Journals Online (AJOL)

    In the void left by the declining popularity of public-private partnerships, the concept of 'water operator partnerships' (WOPs) has increasingly been promoted as an alternative for improving water services provision in developing countries. This paper assesses the potential of such partnerships as a 'model' for contributing to ...

  9. Dynamic modeling of temperature change in outdoor operated tubular photobioreactors.

    Science.gov (United States)

    Androga, Dominic Deo; Uyar, Basar; Koku, Harun; Eroglu, Inci

    2017-07-01

    In this study, a one-dimensional transient model was developed to analyze the temperature variation of tubular photobioreactors operated outdoors and the validity of the model was tested by comparing the predictions of the model with the experimental data. The model included the effects of convection and radiative heat exchange on the reactor temperature throughout the day. The temperatures in the reactors increased with increasing solar radiation and air temperatures, and the predicted reactor temperatures corresponded well to the measured experimental values. The heat transferred to the reactor was mainly through radiation: the radiative heat absorbed by the reactor medium, ground radiation, air radiation, and solar (direct and diffuse) radiation, while heat loss was mainly through the heat transfer to the cooling water and forced convection. The amount of heat transferred by reflected radiation and metabolic activities of the bacteria and pump work was negligible. Counter-current cooling was more effective in controlling reactor temperature than co-current cooling. The model developed identifies major heat transfer mechanisms in outdoor operated tubular photobioreactors, and accurately predicts temperature changes in these systems. This is useful in determining cooling duty under transient conditions and scaling up photobioreactors. The photobioreactor design and the thermal modeling were carried out and experimental results obtained for the case study of photofermentative hydrogen production by Rhodobacter capsulatus, but the approach is applicable to photobiological systems that are to be operated under outdoor conditions with significant cooling demands.
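
    As a hedged sketch of the kind of lumped transient energy balance described above (every coefficient, area and forcing profile is a placeholder, not a value from the study), the example below integrates dT/dt = (Q_solar + Q_radiation + Q_convection - Q_cooling)/(m·cp) with an explicit Euler step over one day.

```python
import math

# Hedged sketch of a lumped transient energy balance:
#   dT/dt = (Q_solar + Q_radiation + Q_convection - Q_cooling) / (m * cp),
# stepped forward with explicit Euler over one day. Every coefficient, area
# and forcing profile below is a placeholder, not a value from the study.

m_cp = 50.0 * 4186.0            # 50 kg of culture, J/K
area = 1.2                      # irradiated tube area, m^2
sigma, eps = 5.67e-8, 0.9       # Stefan-Boltzmann constant, emissivity
h_conv, h_cool = 10.0, 80.0     # convection / cooling-coil coefficients, W/m^2/K

def simulate(dt=60.0, hours=24):
    T = 293.15                                    # reactor temperature, K
    out = []
    for step in range(int(hours * 3600 / dt)):
        t_h = step * dt / 3600.0
        solar = max(0.0, 800.0 * math.sin(math.pi * (t_h - 6.0) / 12.0))   # W/m^2
        T_air = 288.15 + 8.0 * math.sin(math.pi * (t_h - 8.0) / 12.0)
        q_solar = 0.8 * solar * area
        q_rad   = eps * sigma * area * (T_air ** 4 - T ** 4)
        q_conv  = h_conv * area * (T_air - T)
        q_cool  = h_cool * area * max(0.0, T - 303.15)   # cool only above 30 C
        T += dt * (q_solar + q_rad + q_conv - q_cool) / m_cp
        out.append(T)
    return out

temps = simulate()
print(f"min {min(temps)-273.15:.1f} C, max {max(temps)-273.15:.1f} C")
```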

  10. Modeling operational risks of the nuclear industry with Bayesian networks

    International Nuclear Information System (INIS)

    Wieland, Patricia; Lustosa, Leonardo J.

    2009-01-01

    Basically, planning a new industrial plant requires information on the industrial management, regulations, site selection, definition of initial and planned capacity, and the estimation of the potential demand. However, this is far from enough to assure the success of an industrial enterprise. Unexpected and extremely damaging events may occur that deviate from the original plan. The so-called operational risks are not only system, equipment, process or human (technical or managerial) failures. They are also intentional events such as frauds and sabotage, or extreme events like terrorist attacks or radiological accidents, and even public reaction to perceived environmental or future-generation impacts. For the nuclear industry, it is a challenge to identify and to assess the operational risks and their various sources. Early identification of operational risks can help in preparing contingency plans, or in delaying the decision to invest in or to approve a project that can, at an extreme, affect the public perception of nuclear energy. A major problem in modeling operational risk losses is the lack of internal data, which are essential, for example, to apply the loss distribution approach. As an alternative, methods that consider qualitative and subjective information can be applied, for example, fuzzy logic, neural networks, system dynamics or Bayesian networks. An advantage of applying Bayesian networks to model operational risk is the possibility to include expert opinions and variables of interest, to structure the model via causal dependencies among these variables, and to specify subjective prior and conditional probability distributions at each step or network node. This paper suggests a classification of operational risks in industry and discusses the benefits and obstacles of the Bayesian network approach to model those risks. (author)
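
    As a toy illustration of the Bayesian-network idea (the variables, priors and conditional probabilities below are invented, and real studies would use a dedicated BN library), the sketch computes a marginal loss probability and a diagnostic query by brute-force enumeration over two expert-elicited risk drivers.

```python
from itertools import product

# Hedged sketch of the Bayesian-network idea: two invented risk drivers
# ("weak safety culture", "equipment degradation") with expert-elicited priors
# and a conditional probability table for an operational-loss event. Inference
# here is brute-force enumeration; real models would use a BN library.

p_culture = {True: 0.2, False: 0.8}          # P(weak safety culture)
p_degrade = {True: 0.3, False: 0.7}          # P(equipment degradation)
p_loss = {                                   # P(loss event | culture, degradation)
    (True, True): 0.40, (True, False): 0.15,
    (False, True): 0.10, (False, False): 0.02,
}

def joint(c, d, l):
    pl = p_loss[(c, d)] if l else 1.0 - p_loss[(c, d)]
    return p_culture[c] * p_degrade[d] * pl

# Marginal probability of a loss event.
p_l = sum(joint(c, d, True) for c, d in product((True, False), repeat=2))
print("P(loss) =", round(p_l, 4))

# Diagnostic query: given that a loss occurred, how likely is weak safety culture?
p_c_given_l = sum(joint(True, d, True) for d in (True, False)) / p_l
print("P(weak culture | loss) =", round(p_c_given_l, 3))
```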

  11. Demand-based maintenance and operators support based on process models; Behovsstyrt underhaall och operatoersstoed baserat paa process modeller

    Energy Technology Data Exchange (ETDEWEB)

    Dahlquist, Erik; Widarsson, Bjoern; Tomas-Aparicio, Elena

    2012-02-15

    There is a strong demand for systems that can give early warnings of upcoming problems in process performance or sensor measurements. In this project we have developed and implemented such a system on-line. The goal of the system is to give warnings about faults needing urgent action, as well as to give advice on roughly when service may be needed for specific functions. The use of process simulation models on-line can offer a significant tool for operators and process engineers to analyse the performance of the process and make the most correct and fastest decision when problems arise. In this project, physical simulation models are used in combination with decision support tools. By using a physical model it is possible to compare the measured data to the data obtained from the simulation and feed these deviations into a decision support tool based on Bayesian networks (BN), which yields information about the probability of wrong measurements in the instruments, process problems and maintenance needs. The application has been implemented in a CFB boiler at Maelarenergi AB. After tuning the model, the system was used online during September - October 2010 and May - October 2011, showing that the system works on-line with respect to running the simulation model, but with batch runs with respect to the BN. Examples have been made for several variables where trends of the deviation between simulation results and measured data have been used as input to a BN, where the probability of different faults has been calculated. Combustion up in the separator/cyclones has been detected several times, as have problems with fuel feed on both sides of the boiler, a moisture sensor not functioning as it should, and suspected malfunctioning temperature meters. Deeper investigations of the true cause of the problems have been used as input to tune the BN

  12. Predicting third molar surgery operative time: a validated model.

    Science.gov (United States)

    Susarla, Srinivas M; Dodson, Thomas B

    2013-01-01

    The purpose of the present study was to develop and validate a statistical model to predict third molar (M3) operative time. This was a prospective cohort study consisting of a sample of subjects presenting for M3 removal. The demographic, anatomic, and operative variables were recorded for each subject. Using an index sample of randomly selected subjects, a multiple linear regression model was generated to predict the operating time. A nonoverlapping group of randomly selected subjects (validation sample) was used to assess model accuracy. P≤.05 was considered significant. The sample was composed of 150 subjects (n) who had 450 (k) M3s removed. The index sample (n=100 subjects, k=313 M3s extracted) had a mean age of 25.4±10.0 years. The mean extraction time was 6.4±7.0 minutes. The multiple linear regression model included M3 location, Winter's classification, tooth morphology, number of teeth extracted, procedure type, and surgical experience (R2=0.58). No statistically significant differences were seen between the index sample and the validation sample (n=50, k=137) for any of the study variables. Compared with the index model, the β-coefficients of the validation model were similar in direction and magnitude for most variables. Compared with the observed extraction time for all teeth in the sample, the predicted extraction time was not significantly different (P=.16). Fair agreement was seen between the β-coefficients for our multiple models in the index and validation populations, with no significant difference in the predicted and observed operating times. Copyright © 2013 American Association of Oral and Maxillofacial Surgeons. Published by Elsevier Inc. All rights reserved.
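
    As a schematic of the study design only (the predictors, coefficients and data below are synthetic stand-ins, not the study's variables or results), the sketch fits a multiple linear regression for operative time on an index subsample and then checks its predictions on a held-out validation subsample.

```python
import numpy as np

# Hedged sketch of the design: fit a multiple linear regression for operative
# time on an index sample, then check predictions on a held-out validation
# sample. Predictors and data are synthetic stand-ins.

rng = np.random.default_rng(42)
n = 450
X = np.column_stack([
    np.ones(n),                       # intercept
    rng.integers(0, 2, n),            # e.g. a binary location indicator
    rng.integers(1, 5, n),            # e.g. an impaction classification score
    rng.integers(1, 5, n),            # e.g. number of teeth extracted
])
beta_true = np.array([2.0, 1.5, 1.0, 0.8])
y = X @ beta_true + rng.normal(0, 2.0, n)      # operative time, minutes

index, valid = slice(0, 313), slice(313, 450)
beta, *_ = np.linalg.lstsq(X[index], y[index], rcond=None)

pred = X[valid] @ beta
print("fitted coefficients:", np.round(beta, 2))
print("validation mean error [min]:", round(float(np.mean(pred - y[valid])), 2))
```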

  13. GASIFICATION TEST RUN TC06

    Energy Technology Data Exchange (ETDEWEB)

    Southern Company Services, Inc.

    2003-08-01

    This report discusses test campaign TC06 of the Kellogg Brown & Root, Inc. (KBR) Transport Reactor train with a Siemens Westinghouse Power Corporation (Siemens Westinghouse) particle filter system at the Power Systems Development Facility (PSDF) located in Wilsonville, Alabama. The Transport Reactor is an advanced circulating fluidized-bed reactor designed to operate as either a combustor or a gasifier using a particulate control device (PCD). The Transport Reactor was operated as a pressurized gasifier during TC06. Test run TC06 was started on July 4, 2001, and completed on September 24, 2001, with an interruption in service between July 25, 2001, and August 19, 2001, due to a filter element failure in the PCD caused by abnormal operating conditions while tuning the main air compressor. The reactor temperature was varied between 1,725 and 1,825 F at pressures from 190 to 230 psig. In TC06, 1,214 hours of solid circulation and 1,025 hours of coal feed were attained with 797 hours of coal feed after the filter element failure. Both reactor and PCD operations were stable during the test run with a stable baseline pressure drop. Due to its length and stability, the TC06 test run provided valuable data necessary to analyze long-term reactor operations and to identify necessary modifications to improve equipment and process performance as well as progressing the goal of many thousands of hours of filter element exposure.

  14. Operational ocean models in the Adriatic Sea: a skill assessment

    Directory of Open Access Journals (Sweden)

    J. Chiggiato

    2008-02-01

    In the framework of the Mediterranean Forecasting System (MFS) project, the performance of regional numerical ocean forecasting systems is assessed by means of model-model and model-data comparison. The three operational systems considered in this study are the Adriatic REGional Model (AREG), the Adriatic Regional Ocean Modelling System (AdriaROMS) and the Mediterranean Forecasting System General Circulation Model (MFS-GCM). AREG and AdriaROMS are regional implementations (with some dedicated variations) of POM and ROMS, respectively, while MFS-GCM is an OPA-based system. The assessment is done through standard scores. In situ and remote sensing data are used to evaluate system performance. In particular, a set of CTD measurements collected in the whole western Adriatic during January 2006 and one year of satellite-derived sea surface temperature measurements (SST) allow us to assess a full three-dimensional picture of the quality of the operational forecasting systems during January 2006 and to draw some preliminary considerations on the temporal fluctuation of scores estimated on surface quantities between summer 2005 and summer 2006.

    The regional systems share a negative bias in simulated temperature and salinity. Nonetheless, they outperform the MFS-GCM in the shallowest locations. Results on amplitude and phase errors are improved in areas shallower than 50 m, while degraded in deeper locations, where the major model deficiencies are related to the overestimation of vertical mixing. In a basin-wide overview, the two regional models show differences in the local displacement of errors. In addition, in locations where the regional models are mutually correlated, the aggregated mean squared error was found to be smaller, which is a useful outcome of having several operational systems in the same region.
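
    The kind of standard scores mentioned above (bias, root-mean-square error) can be computed in a few lines; the arrays below stand in for collocated model and observed SST values and are purely illustrative, not data from the study.

```python
import numpy as np

def bias(model, obs):
    """Mean model-minus-observation difference."""
    return float(np.mean(model - obs))

def rmse(model, obs):
    """Root-mean-square error between model and observations."""
    return float(np.sqrt(np.mean((model - obs) ** 2)))

# Hypothetical collocated January SST values (deg C): observations vs one model.
obs     = np.array([12.1, 11.8, 12.4, 13.0, 12.7])
modeled = np.array([11.6, 11.2, 12.0, 12.5, 12.1])   # placeholder regional-model values

print("bias:", round(bias(modeled, obs), 2))   # negative, as reported for the regional systems
print("rmse:", round(rmse(modeled, obs), 2))
```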

  15. Modeling the Environmental Impact of Air Traffic Operations

    Science.gov (United States)

    Chen, Neil

    2011-01-01

    There is increased interest in understanding and mitigating the impacts of air traffic on the climate, since greenhouse gases, nitrogen oxides, and contrails generated by air traffic can adversely affect it. The models described in this presentation are useful for quantifying these impacts and for studying alternative environmentally aware operational concepts. They have been developed by leveraging and building upon existing simulation and optimization techniques developed for the design of efficient traffic flow management strategies. Specific enhancements to the existing simulation and optimization techniques include new models that simulate aircraft fuel flow, emissions and contrails. To ensure that these new models are beneficial to the larger climate research community, their outputs are compatible with existing global climate modeling tools such as the FAA's Aviation Environmental Design Tool.

  16. Analysis of the operational model of Colombian electronic invoicing

    Directory of Open Access Journals (Sweden)

    Sérgio Roberto da Silva

    2016-06-01

    Colombia has been one of the first countries to introduce electronic billing on a voluntary basis, moving from a traditional to a digital version. In this context, the article analyzes the electronic billing process implemented in Colombia and its advantages. The research is applied, qualitative, descriptive and documentary: the regulatory framework and the conceptualization of the model are identified, the process of adoption of electronic billing is analyzed, and finally the advantages and disadvantages of its implementation are assessed. The findings indicate that the model applied in Colombia for issuing electronic bills, in terms of the sending and receiving process, is not complex, but it requires a small, adequate infrastructure and trained personnel in order to reach all sectors, especially the micro and small enterprises that form the largest business network in the country.

  17. Sol-Terra - AN Operational Space Weather Forecasting Model Framework

    Science.gov (United States)

    Bisi, M. M.; Lawrence, G.; Pidgeon, A.; Reid, S.; Hapgood, M. A.; Bogdanova, Y.; Byrne, J.; Marsh, M. S.; Jackson, D.; Gibbs, M.

    2015-12-01

    The SOL-TERRA project is a collaboration between RHEA Tech, the Met Office, and RAL Space funded by the UK Space Agency. The goal of the SOL-TERRA project is to produce a Roadmap for a future coupled Sun-to-Earth operational space weather forecasting system covering domains from the Sun down to the magnetosphere-ionosphere-thermosphere and neutral atmosphere. The first stage of SOL-TERRA is underway and involves reviewing current models that could potentially contribute to such a system. Within a given domain, the various space weather models will be assessed on how they could contribute to such a coupled system. This will be done both by reviewing peer-reviewed papers and via direct input from the model developers to provide further insight. Once the models have been reviewed, the optimal set of models for use in support of forecast-based SWE modelling will be selected, and a Roadmap for the implementation of an operational forecast-based SWE modelling framework will be prepared. The Roadmap will address the current modelling capability, knowledge gaps and further work required, as well as the implementation and maintenance of the overall architecture and environment that the models will operate within. The SOL-TERRA project will engage with external stakeholders in order to ensure independently that the project remains on track to meet its original objectives. A group of key external stakeholders has been invited to provide their domain-specific expertise in reviewing the SOL-TERRA project at critical stages of Roadmap preparation, namely at the Mid-Term Review and prior to submission of the Final Report. This stakeholder input will ensure that the SOL-TERRA Roadmap is enhanced directly through the input of modellers and end-users. The overall goal of the SOL-TERRA project is to develop a Roadmap for an operational forecast-based SWE modelling framework which can be implemented within a larger subsequent activity. The SOL-TERRA project is supported within

  18. An improved numerical scheme with the fully-implicit two-fluid model for a fast-running system code

    International Nuclear Information System (INIS)

    Jeong, J.J.; No, H.C.

    1987-01-01

    A new computational method is implemented in the FISA-2 (Fully-Implicit Safety Analysis-2) code to simulate the thermal-hydraulic response to hypothetical accidents in nuclear power plants. The basic field equations of FISA-2 consist of the mixture continuity equation, a void propagation equation, two phasic momentum equations, and two phasic energy equations. The fully-implicit scheme is used to eliminate the time step limitation, and the computation time per time step is minimized as much as possible by reducing the size of the matrix to be solved. The phasic energy equations, written in nonconservation form, are solved after they are set up to be decoupled from the other field equations. The void propagation equation is solved to obtain the void fraction. Spatial acceleration terms in the phasic momentum equations are manipulated with the phasic continuity equations so that the pseudo-phasic mass flux may be expressed in terms of pressure only. Substituting the pseudo-phasic mass flux into the mixture continuity equation, we obtain linear equations with the pressure variables as the only unknowns. By solving these linear equations, the pressures at all nodes are obtained, and in turn the other variables are obtained by back-substitution. This procedure is repeated until the convergence criterion is satisfied. Reasonable accuracy, no stability limitation, and fast running are confirmed by comparing results from FISA-2 with experimental data and with results from other codes. (orig.)
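
    The solution sequence described above (assemble a linear system with nodal pressures as the only unknowns, solve it, back-substitute for the other variables, and repeat until converged) can be illustrated with a generic fixed-point loop. The toy three-node system below is purely illustrative and is not the FISA-2 discretization.

```python
import numpy as np

def implicit_step(p_old, assemble, back_substitute, tol=1e-8, max_iter=50):
    """Generic outer iteration: build the pressure system from the current guess,
    solve it, recover the other variables, and repeat until converged."""
    p = p_old.copy()
    for _ in range(max_iter):
        A, b = assemble(p)                 # linear system with nodal pressures as unknowns
        p_new = np.linalg.solve(A, b)
        if np.max(np.abs(p_new - p)) < tol:
            return p_new, back_substitute(p_new)
        p = p_new
    raise RuntimeError("pressure iteration did not converge")

# Toy 3-node example: a fixed tridiagonal operator standing in for the assembled system.
def assemble(p):
    A = np.array([[ 2.0, -1.0,  0.0],
                  [-1.0,  2.0, -1.0],
                  [ 0.0, -1.0,  2.0]])
    b = np.array([1.0, 0.0, 1.0]) + 1e-3 * p   # weak dependence on the previous iterate
    return A, b

back_substitute = lambda p: {"mass_flux": np.diff(p)}   # placeholder recovery of other variables

p, extras = implicit_step(np.zeros(3), assemble, back_substitute)
print(p, extras["mass_flux"])
```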

  19. A Stochastic Operational Planning Model for Smart Power Systems

    Directory of Open Access Journals (Sweden)

    Sh. Jadid

    2014-12-01

    Smart grids are the result of utilizing novel technologies, such as distributed energy resources and communication technologies, in the power system to compensate for some of its shortcomings. Various power resources provide benefits in the operation domain; however, the power system operator should use a powerful methodology to manage them. Renewable resources and loads add uncertainty to the problem, so the independent system operator (ISO) should use a stochastic method to manage them. A stochastic unit commitment is presented in this paper to schedule various power resources such as distributed generation units, conventional thermal generation units, wind and PV farms, and demand response resources. Demand response resources, interruptible loads, distributed generation units, and conventional thermal generation units are used to provide the reserve required to compensate for the stochastic nature of the various resources and loads. In the presented model, resources connected to the distribution network can participate in the wholesale market through aggregators. Moreover, a novel three-program model which can be used by aggregators is presented in this article. Loads and distributed generation can contract with aggregators through these programs. A three-bus test system and the IEEE RTS are used to illustrate the usefulness of the presented model. The results show that the ISO can manage the system effectively by using this model.
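
    A full stochastic unit commitment is beyond a short example, but the flavour of the dispatch-with-reserve subproblem can be sketched as a small linear program. The unit data, the demand, and the reserve requirement below are invented and bear no relation to the paper's test systems.

```python
import numpy as np
from scipy.optimize import linprog

# Two thermal units plus a demand-response resource; all numbers are hypothetical.
cost    = np.array([20.0, 35.0, 60.0])   # $/MWh for unit 1, unit 2, demand response
p_max   = np.array([100.0, 80.0, 30.0])  # MW
demand  = 150.0                           # MW
reserve = 20.0                            # MW of spare capacity required

# Variables: p1, p2, p_dr. Minimize cost subject to
#   p1 + p2 + p_dr = demand
#   spare capacity (p_max - p).sum() >= reserve, i.e. sum(p) <= sum(p_max) - reserve
res = linprog(
    c=cost,
    A_ub=[[1.0, 1.0, 1.0]], b_ub=[p_max.sum() - reserve],
    A_eq=[[1.0, 1.0, 1.0]], b_eq=[demand],
    bounds=[(0.0, pm) for pm in p_max],
    method="highs",
)
print("dispatch [MW]:", np.round(res.x, 1), "cost [$/h]:", round(res.fun, 1))
```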

  20. Preliminary Exploration of Adaptive State Predictor Based Human Operator Modeling

    Science.gov (United States)

    Trujillo, Anna C.; Gregory, Irene M.

    2012-01-01

    Control-theoretic modeling of the human operator's dynamic behavior in manual control tasks has a long and rich history. In the last two decades, there has been a renewed interest in modeling the human operator. There has also been significant work on techniques used to identify the pilot model of a given structure. The purpose of this research is to attempt to go beyond pilot identification based on collected experimental data and to develop a predictor of pilot behavior. An experiment was conducted to quantify the effects of changing aircraft dynamics on an operator's ability to track a signal in order to eventually model a pilot adapting to changing aircraft dynamics. A gradient descent estimator and a least squares estimator with exponential forgetting used these data to predict pilot stick input. The results indicate that individual pilot characteristics and vehicle dynamics did not affect the accuracy of either estimator method in estimating pilot stick input. These methods were also able to predict pilot stick input during changing aircraft dynamics, and they may have the capability to detect a change in a subject due to workload, engagement, etc., or the effects of changes in vehicle dynamics on the pilot.
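
    One common form of a "least squares estimator with exponential forgetting" is recursive least squares; the sketch below predicts the next stick input from a few past samples of tracking error and stick position. The regressor choice, forgetting factor, and synthetic signals are assumptions for illustration only, not the study's configuration.

```python
import numpy as np

class RLSForgetting:
    """Recursive least squares with exponential forgetting factor lam."""
    def __init__(self, n_params, lam=0.98):
        self.w = np.zeros(n_params)
        self.P = np.eye(n_params) * 1e3
        self.lam = lam

    def update(self, x, y):
        x = np.asarray(x, dtype=float)
        Px = self.P @ x
        k = Px / (self.lam + x @ Px)          # gain vector
        self.w += k * (y - x @ self.w)        # prediction-error correction
        self.P = (self.P - np.outer(k, Px)) / self.lam
        return x @ self.w                     # one-step-ahead prediction

# Synthetic tracking-error signal and a stick response that lags it (illustrative only).
rng = np.random.default_rng(2)
err = np.sin(np.linspace(0, 20, 400)) + 0.05 * rng.normal(size=400)
stick = 0.7 * np.roll(err, 3) + 0.02 * rng.normal(size=400)

est = RLSForgetting(n_params=4)
preds = [est.update([err[t], err[t - 1], err[t - 2], stick[t - 1]], stick[t])
         for t in range(3, 400)]
print("final weights:", np.round(est.w, 2))
```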

  1. Optimizing Biorefinery Design and Operations via Linear Programming Models

    Energy Technology Data Exchange (ETDEWEB)

    Talmadge, Michael; Batan, Liaw; Lamers, Patrick; Hartley, Damon; Biddy, Mary; Tao, Ling; Tan, Eric

    2017-03-28

    The ability to assess and optimize economics of biomass resource utilization for the production of fuels, chemicals and power is essential for the ultimate success of a bioenergy industry. The team of authors, consisting of members from the National Renewable Energy Laboratory (NREL) and the Idaho National Laboratory (INL), has developed simple biorefinery linear programming (LP) models to enable the optimization of theoretical or existing biorefineries. The goal of this analysis is to demonstrate how such models can benefit the developing biorefining industry. It focuses on a theoretical multi-pathway, thermochemical biorefinery configuration and demonstrates how the biorefinery can use LP models for operations planning and optimization in comparable ways to the petroleum refining industry. Using LP modeling tools developed under U.S. Department of Energy's Bioenergy Technologies Office (DOE-BETO) funded efforts, the authors investigate optimization challenges for the theoretical biorefineries such as (1) optimal feedstock slate based on available biomass and prices, (2) breakeven price analysis for available feedstocks, (3) impact analysis for changes in feedstock costs and product prices, (4) optimal biorefinery operations during unit shutdowns / turnarounds, and (5) incentives for increased processing capacity. These biorefinery examples are comparable to crude oil purchasing and operational optimization studies that petroleum refiners perform routinely using LPs and other optimization models. It is important to note that the analyses presented in this article are strictly theoretical and they are not based on current energy market prices. The pricing structure assigned for this demonstrative analysis is consistent with $4 per gallon gasoline, which clearly assumes an economic environment that would favor the construction and operation of biorefineries. The analysis approach and examples provide valuable insights into the usefulness of analysis tools for
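
    To illustrate the "optimal feedstock slate" type of question mentioned above, a toy blending LP is shown below. The feedstock prices, yields, availability, and the capacity limit are made up for the example and are not taken from the NREL/INL models or from real market data.

```python
from scipy.optimize import linprog

# Feedstocks: pine residue, corn stover, switchgrass (all numbers hypothetical).
price     = [55.0, 70.0, 80.0]      # $/dry ton delivered
yield_gge = [78.0, 72.0, 75.0]      # gallons gasoline-equivalent per dry ton
available = [400.0, 600.0, 500.0]   # dry tons/day available
capacity  = 800.0                   # dry tons/day plant throughput
fuel_price = 3.0                    # $/GGE product price

# Maximize profit = sum of (fuel_price * yield - price) * tons per feedstock,
# i.e. minimize its negative, subject to throughput and availability limits.
c = [-(fuel_price * y - p) for p, y in zip(price, yield_gge)]
res = linprog(c, A_ub=[[1.0, 1.0, 1.0]], b_ub=[capacity],
              bounds=[(0.0, a) for a in available], method="highs")
print("feedstock slate [t/day]:", [round(x, 1) for x in res.x])
print("profit [$/day]:", round(-res.fun, 0))
```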

  2. Application of online modeling to the operation of SLC

    International Nuclear Information System (INIS)

    Woodley, M.D.; Sanchez-Chopitea, L.; Shoaee, H.

    1987-01-01

    Online computer models of first-order beam optics have been developed for the commissioning, control and operation of the entire SLC, including the Damping Rings, Linac, Positron Return Line and Collider Arcs. A generalized online environment utilizing these models provides the capability for interactive selection of a desired optics configuration and for the study of its properties. Automated procedures have been developed which calculate and load beamline component set-points and which can scale magnet strengths to achieve desired beam properties for any Linac energy profile. Graphic displays facilitate comparison of design, desired and actual optical characteristics of the beamlines. Measured beam properties, such as beam emittance and dispersion, can be incorporated interactively into the models and used for beam matching and optimization of injection and extraction efficiencies and beam transmissions. The online optics modeling facility also serves as the foundation for many model-driven applications such as autosteering, calculation of beam launch parameters, emittance measurement and dispersion correction

  3. A model technology transfer program for independent operators

    Energy Technology Data Exchange (ETDEWEB)

    Schoeling, L.G.

    1996-08-01

    In August 1992, the Energy Research Center (ERC) at the University of Kansas was awarded a contract by the US Department of Energy (DOE) to develop a technology transfer regional model. This report describes the development and testing of the Kansas Technology Transfer Model (KTTM) which is to be utilized as a regional model for the development of other technology transfer programs for independent operators throughout oil-producing regions in the US. It describes the linkage of the regional model with a proposed national technology transfer plan, an evaluation technique for improving and assessing the model, and the methodology which makes it adaptable on a regional basis. The report also describes management concepts helpful in managing a technology transfer program.

  4. Transparent settlement model between mobile network operator and mobile voice over Internet protocol operator

    Directory of Open Access Journals (Sweden)

    Luzango Pangani Mfupe

    2014-12-01

    Full Text Available Advances in technology have enabled network-less mobile voice over internet protocol operator (MVoIPO to offer data services (i.e. voice, text and video to mobile network operator's (MNO's subscribers through an application enabled on subscriber's user equipment using MNO's packet-based cellular network infrastructure. However, this raises the problem of how to handle interconnection settlements between the two types of operators, particularly how to deal with users who now have the ability to make ‘free’ on-net MVoIP calls among themselves within the MNO's network. This study proposes a service level agreement-based transparent settlement model (TSM to solve this problem. The model is based on concepts of achievement and reward, not violation and punishment. The TSM calculates the MVoIPO's throughput distribution by monitoring the variations of peaks and troughs at the edge of a network. This facilitates the determination of conformance and non-conformance levels to the pre-set throughput thresholds and, subsequently, the issuing of compensation to the MVoIPO by the MNO as a result of generating an economically acceptable volume of data traffic.
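
    The conformance check described above amounts to comparing measured throughput against pre-set thresholds and converting the achievement level into a compensation figure. The sketch below is a minimal, hypothetical version of that bookkeeping; the threshold values and compensation rate are invented.

```python
import numpy as np

def settlement(throughput_mbps, floor=5.0, target=20.0, rate_per_hour=0.5):
    """Count periods in which the MVoIPO traffic at the network edge meets the
    agreed thresholds and compute the MNO-to-MVoIPO compensation.
    Thresholds and the compensation rate are illustrative placeholders."""
    t = np.asarray(throughput_mbps)
    conforming = t >= target                 # full achievement of the agreed volume
    partial    = (t >= floor) & ~conforming  # above the floor but below the target
    comp = rate_per_hour * (conforming.sum() + 0.5 * partial.sum())
    return conforming.mean(), comp

hourly = np.array([22, 25, 18, 6, 30, 4, 21, 19])   # hypothetical hourly averages, Mbps
level, payout = settlement(hourly)
print(f"conformance level: {level:.0%}, compensation: {payout:.2f} currency units")
```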

  5. Standard model baryogenesis through four-fermion operators in braneworlds

    International Nuclear Information System (INIS)

    Chung, Daniel J.H.; Dent, Thomas

    2002-01-01

    We study a new baryogenesis scenario in a class of braneworld models with low fundamental scale, which typically have difficulty with baryogenesis. The scenario is characterized by its minimal nature: the field content is that of the standard model and all interactions consistent with the gauge symmetry are admitted. Baryon number is violated via a dimension-6 proton decay operator, suppressed today by the mechanism of quark-lepton separation in extra dimensions; we assume that this operator was unsuppressed in the early Universe due to a time-dependent quark-lepton separation. The source of CP violation is the CKM matrix, in combination with the dimension-6 operators. We find that almost independently of cosmology, sufficient baryogenesis is nearly impossible in such a scenario if the fundamental scale is above 100 TeV, as required by an unsuppressed neutron-antineutron oscillation operator. The only exception producing sufficient baryon asymmetry is a scenario involving out-of-equilibrium c quarks interacting with equilibrium b quarks

  6. Space Weather Forecasting at NOAA with Michigan's Geospace Model: Results from the First Year in Real-Time Operations

    Science.gov (United States)

    Cash, M. D.; Singer, H. J.; Millward, G. H.; Balch, C. C.; Toth, G.; Welling, D. T.

    2017-12-01

    In October 2016, the first version of the Geospace model was transitioned into real-time operations at NOAA Space Weather Prediction Center (SWPC). The Geospace model is a part of the Space Weather Modeling Framework (SWMF) developed at the University of Michigan, and the model simulates the full time-dependent 3D Geospace environment (Earth's magnetosphere, ring current and ionosphere) and predicts global space weather parameters such as induced magnetic perturbations in space and on Earth's surface. The current version of the Geospace model uses three coupled components of SWMF: the BATS-R-US global magnetosphere model, the Rice Convection Model (RCM) of the inner magnetosphere, and the Ridley Ionosphere electrodynamics Model (RIM). In the operational mode, SWMF/Geospace runs continually in real-time as long as there is new solar wind data arriving from a satellite at L1, either DSCOVR or ACE. We present an analysis of the overall performance of the Geospace model during the first year of real-time operations. Evaluation metrics include Kp, Dst, as well as regional magnetometer stations. We will also present initial results from new products, such as the AE index, available with the recent upgrade to the Geospace model.

  7. Analysis of Operating Principles with S-system Models

    Science.gov (United States)

    Lee, Yun; Chen, Po-Wei; Voit, Eberhard O.

    2011-01-01

    Operating principles address general questions regarding the response dynamics of biological systems as we observe or hypothesize them, in comparison to a priori equally valid alternatives. In analogy to design principles, the question arises: Why are some operating strategies encountered more frequently than others, and in what sense might they be superior? It is at this point impossible to study operating principles in complete generality, but the work here discusses the important situation where a biological system must shift operation from its normal steady state to a new steady state. This situation is quite common and includes many stress responses. We present two distinct methods for determining different solutions to this task of achieving a new target steady state. Both methods utilize the property of S-system models within Biochemical Systems Theory (BST) that steady states can be explicitly represented as systems of linear algebraic equations. The first method uses matrix inversion, a pseudo-inverse, or regression to characterize the entire admissible solution space. Operations on the basis of the solution space permit modest alterations of the transients toward the target steady state. The second method uses standard or mixed-integer linear programming to determine admissible solutions that satisfy criteria of functional effectiveness, which are specified beforehand. As an illustration, we use both methods to characterize alternative response patterns of yeast subjected to heat stress, and compare them with observations from the literature. PMID:21377479
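
    The first method above relies on the fact that an S-system steady state, expressed in logarithmic coordinates, is a linear algebraic system that can be solved with a matrix inverse or pseudo-inverse. The sketch below uses invented kinetic-order coefficients and is not the yeast heat-stress model from the paper.

```python
import numpy as np

# At steady state an S-system reduces, in logarithmic coordinates y = ln(x), to a
# linear system  A_D y_D = b - A_I y_I, where A collects differences of kinetic
# orders and b the log rate-constant ratios.  The numbers below are illustrative.
A_D = np.array([[0.5, -0.8],
                [0.3,  0.6]])            # dependent-variable coefficients
A_I = np.array([[0.2],
                [-0.4]])                 # independent-variable coefficients
b   = np.array([0.1, -0.2])              # ln(beta_i / alpha_i)

def target_steady_state(y_I):
    """Dependent steady-state (log) concentrations for a given independent input."""
    rhs = b - A_I @ np.atleast_1d(y_I)
    return np.linalg.pinv(A_D) @ rhs      # pseudo-inverse also covers non-square cases

y_D_normal = target_steady_state(0.0)     # normal condition
y_D_stress = target_steady_state(1.0)     # e.g. an elevated stress input
print("fold changes:", np.round(np.exp(y_D_stress - y_D_normal), 2))
```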

  8. A practical guide for operational validation of discrete simulation models

    Directory of Open Access Journals (Sweden)

    Fabiano Leal

    2011-04-01

    As the number of simulation experiments increases, the necessity for validation and verification of these models demands special attention on the part of simulation practitioners. Analyzing the current scientific literature, it is observed that the descriptions of operational validation presented in many papers do not agree on the importance assigned to this process or on the techniques applied, whether subjective or objective. With the aim of orienting professionals, researchers and students in simulation, this article elaborates a practical guide through the compilation of statistical techniques for the operational validation of discrete simulation models. Finally, the guide's applicability was evaluated using two study objects, which represent two manufacturing cells, one from the automobile industry and the other from a Brazilian tech company. For each application, the guide identified distinct steps, due to the different aspects that characterize the analyzed distributions.

  9. Aerosol modelling in MOCAGE and operational dust forecasting at Meteo-France

    International Nuclear Information System (INIS)

    Martet, M; Peuch, V-H

    2009-01-01

    MOCAGE is the multiscale 3D Chemistry-Transport Model of Meteo-France. It is run operationally for air quality, UV and dust forecasting, daily up to 96 h. Meteorological forcings are provided by our NWP suites, ARPEGE and ALADIN. Forecasts are uploaded to the French platform Prev'Air (http://www.prevair.org). MOCAGE is also a research tool, with over 25 publications, and it is used in several European projects, like GEMS or AMMA. Finally, a specific version will soon become operational for emergency response, in support of our responsibilities as an RSMC and VAAC. In operations, three domains (two-way nesting) are used: globe (2 deg.), Europe (0.5 deg.) and France (0.1 deg.). On the vertical, MOCAGE extends from the surface up to 5 hPa (L47) with hybrid (sigma-pressure) coordinates. A semi-Lagrangian scheme is used for advection, while turbulent diffusion and convection are parameterized, using the Louis and Bechtold schemes respectively. The representation of dust is based upon a sectional approach with 5 bins. Dust emissions are computed using the scheme of Marticorena and Bergametti over the Saharan and Chinese deserts. Wet deposition, sedimentation and dry deposition are also taken into account to compute concentrations. Various diagnostics are available daily: mass concentrations on different altitude levels, columns, emissions, AOD and separate sink terms. We present an overview of the system and its validation studies.

  10. Modeling Reservoir-River Networks in Support of Optimizing Seasonal-Scale Reservoir Operations

    Science.gov (United States)

    Villa, D. L.; Lowry, T. S.; Bier, A.; Barco, J.; Sun, A.

    2011-12-01

    each timestep and minimize computational overhead. Power generation for each reservoir is estimated using a 2-dimensional regression that accounts for both the available head and turbine efficiency. The object-oriented architecture makes run configuration easy to update. The dynamic model inputs include inflow and meteorological forecasts, while static inputs include bathymetry data, reservoir and power generation characteristics, and topological descriptors. Ensemble forecasts of hydrological and meteorological conditions are supplied in real-time by Pacific Northwest National Laboratory and are used as a proxy for uncertainty, which is carried through the simulation and optimization process to produce output that describes the probability that different operational scenarios will be optimal. The full toolset, which includes HydroSCOPE, is currently being tested on the Feather River system in Northern California and the Upper Colorado Storage Project.

  11. Fires involving radioactive materials : transference model; operative recommendations

    International Nuclear Information System (INIS)

    Rodriguez, C.E.; Puntarulo, L.J.; Canibano, J.A.

    1988-01-01

    In all aspects related to the nuclear activity, the occurrence of an explosion, fire or burst type accident, with or without victims, is directly related to the characteristics of the site. The present work analyses the different parameters involved, describing a transference model and recommendations for evaluation and control of the radiological risk for firemen. Special emphasis is placed on the measurement of the variables existing in this kind of operations

  12. Modelling of Reservoir Operations using Fuzzy Logic and ANNs

    Science.gov (United States)

    Van De Giesen, N.; Coerver, B.; Rutten, M.

    2015-12-01

    Today, almost 40,000 large reservoirs, containing approximately 6,000 km3 of water and inundating an area of almost 400,000 km2, can be found on Earth. Since these reservoirs have a storage capacity of almost one-sixth of the global annual river discharge, they have a large impact on the timing, volume and peaks of river discharges. Global Hydrological Models (GHMs) are thus significantly influenced by these anthropogenic changes in river flows. We developed a parametrically parsimonious method to extract operational rules based on historical reservoir storage and inflow time series. Managing a reservoir is an imprecise and vague undertaking. Operators always face uncertainties about inflows, evaporation, seepage losses and the various water demands to be met. They often base their decisions on experience and on available information, like reservoir storage and the previous period's inflow. We modeled this decision-making process through a combination of fuzzy logic and artificial neural networks in an Adaptive-Network-based Fuzzy Inference System (ANFIS). In a sensitivity analysis, we compared results for reservoirs in Vietnam, Central Asia and the USA. ANFIS can indeed capture reservoir operations adequately when fed with a historical monthly time series of inflows and storage. It was shown that, using ANFIS, operational rules of existing reservoirs can be derived without much prior knowledge about the reservoirs. Their validity was tested by comparing actual and simulated releases with each other. For the eleven reservoirs modelled, the normalised outflow was predicted with an MSE of 0.002 to 0.044. The rules can be incorporated into GHMs. After a network for a specific reservoir has been trained, the inflow calculated by the hydrological model can be combined with the release and initial storage to calculate the storage for the next time step using a mass balance. Subsequently, the release can be predicted one time step ahead using the inflow and storage.
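
    The mass-balance coupling described in the last sentences can be written directly. In the sketch below, the release prediction is a stand-in linear rule, not a trained ANFIS network, and the normalised inflow and storage values are invented.

```python
def step_reservoir(storage, inflow, predict_release, s_min=0.0, s_max=1.0):
    """One monthly time step: predict the (normalised) release from inflow and
    storage, then update storage with a simple mass balance."""
    release = predict_release(inflow, storage)
    storage_next = min(max(storage + inflow - release, s_min), s_max)
    return release, storage_next

# Placeholder rule standing in for a trained ANFIS surface.
predict_release = lambda inflow, storage: 0.6 * inflow + 0.1 * storage

storage = 0.5                       # normalised initial storage
for inflow in [0.20, 0.35, 0.10]:   # normalised monthly inflows from a hydrological model
    release, storage = step_reservoir(storage, inflow, predict_release)
    print(f"release={release:.2f}  storage={storage:.2f}")
```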

  13. Modelling of innovative SANEX process mal-operations

    Energy Technology Data Exchange (ETDEWEB)

    McLachlan, F. [National Nuclear Laboratory, Building D5, Culham Science Centre, Abingdon, Oxfordshire OX14 3DB (United Kingdom); Taylor, R.; Whittaker, D.; Woodhead, D. [National Nuclear Laboratory, Central Laboratory, Sellafield, Seascale, Cumbria CA20 1PG (United Kingdom); Geist, A. [Karlsruhe Institute of Technology - KIT, 76021, Karlsruhe (Germany)

    2016-07-01

    The innovative (i-) SANEX process for the separation of minor actinides from PUREX highly active raffinate is expected to employ a solvent phase comprising 0.2 M TODGA with 5 v/v% 1-octanol in an inert diluent. An initial extract / scrub section would be used to extract trivalent actinides and lanthanides from the feed whilst leaving other fission products in the aqueous phase, before the loaded solvent is contacted with a low acidity aqueous phase containing a sulphonated bis-triazinyl pyridine ligand (BTP) to effect a selective strip of the actinides, so yielding separate actinide (An) and lanthanide (Ln) product streams. This process has been demonstrated in lab scale trials at Juelich (FZJ). The SACSESS (Safety of Actinide Separation processes) project is focused on the evaluation and improvement of the safety of such future systems. A key element of this is the development of an understanding of the response of a process to upsets (mal-operations). It is only practical to study a small subset of possible mal-operations experimentally and consideration of the majority of mal-operations entails the use of a validated dynamic model of the process. Distribution algorithms for HNO3, Am, Cm and the lanthanides have been developed and incorporated into a dynamic flowsheet model that has, so far, been configured to correspond to the extract-scrub section of the i-SANEX flowsheet trial undertaken at FZJ in 2013. Comparison is made between the steady state model results and experimental results. Results from modelling of low acidity and high temperature mal-operations are presented. (authors)

  14. Running the EGS4 Monte Carlo code with Fortran 90 on a pentium computer

    Energy Technology Data Exchange (ETDEWEB)

    Caon, M. (Flinders University of South Australia, Bedford Park, SA (Australia); University of South Australia, SA (Australia)); Bibbo, G. (Women's and Children's Hospital, SA (Australia)); Pattison, J. (University of South Australia, SA (Australia))

    1996-09-01

    The possibility of running the EGS4 Monte Carlo radiation transport code system for medical radiation modelling on a microcomputer is discussed. This has been done using a Fortran 77 compiler with a 32-bit memory addressing system running under a memory-extender operating system. In addition, a virtual memory manager such as QEMM386 was required. It has successfully run on a SUN Sparcstation2. In 1995 faster Pentium-based microcomputers became available, as did the Windows 95 operating system, which can handle 32-bit programs and multitasking and provides its own virtual memory management. The paper describes how, with simple modifications to the batch files, it was possible to run EGS4 on a Pentium under Fortran 90 and Windows 95. This combination of software and hardware is cheaper and faster than running it on a SUN Sparcstation2. 8 refs., 1 tab.

  15. Running the EGS4 Monte Carlo code with Fortran 90 on a pentium computer

    International Nuclear Information System (INIS)

    Caon, M.; Bibbo, G.; Pattison, J.

    1996-01-01

    The possibility of running the EGS4 Monte Carlo radiation transport code system for medical radiation modelling on a microcomputer is discussed. This has been done using a Fortran 77 compiler with a 32-bit memory addressing system running under a memory-extender operating system. In addition, a virtual memory manager such as QEMM386 was required. It has successfully run on a SUN Sparcstation2. In 1995 faster Pentium-based microcomputers became available, as did the Windows 95 operating system, which can handle 32-bit programs and multitasking and provides its own virtual memory management. The paper describes how, with simple modifications to the batch files, it was possible to run EGS4 on a Pentium under Fortran 90 and Windows 95. This combination of software and hardware is cheaper and faster than running it on a SUN Sparcstation2. 8 refs., 1 tab

  16. LHCf completes its first run

    CERN Multimedia

    CERN Bulletin

    2010-01-01

    LHCf, one of the three smaller experiments at the LHC, has completed its first run. The detectors were removed last week and the analysis of data is continuing. The first results will be ready by the end of the year.   One of the two LHCf detectors during the removal operations inside the LHC tunnel. LHCf is made up of two independent detectors located in the tunnel 140 m either side of the ATLAS collision point. The experiment studies the secondary particles created during the head-on collisions in the LHC because they are similar to those created in a cosmic ray shower produced when a cosmic particle hits the Earth's atmosphere. The focus of the experiment is to compare the various shower models used to estimate the primary energy of ultra-high-energy cosmic rays. The energy of proton-proton collisions at the LHC will be equivalent to a cosmic ray of 10^17 eV hitting the atmosphere, very close to the highest energies observed in the sky. "We have now completed the fir...

  17. Data Envelopment Analysis (DEA) Model in Operation Management

    Science.gov (United States)

    Malik, Meilisa; Efendi, Syahril; Zarlis, Muhammad

    2018-01-01

    Quality management is an effective system in operations management for developing, maintaining, and improving quality across groups within a company, allowing marketing, production, and service at the most economical level while ensuring customer satisfaction. Many companies practice quality management to improve their business performance. One form of performance measurement is the measurement of efficiency. One of the tools that can be used to assess the efficiency of company performance is Data Envelopment Analysis (DEA). The aim of this paper is to use Data Envelopment Analysis (DEA) models to assess the efficiency of quality management. The CCR, BCC, and SBM models for assessing the efficiency of quality management are explained.
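
    As a sketch of the CCR model mentioned above, the input-oriented envelopment form for one decision-making unit (DMU) can be solved as a linear program. The input/output data below are invented, and the formulation is a generic textbook CCR setup rather than the paper's specific application.

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical data: 4 DMUs, 2 inputs (staff, budget), 1 output (orders fulfilled).
X = np.array([[4.0, 3.0, 5.0, 6.0],       # inputs, shape (m, n)
              [2.0, 5.0, 4.0, 7.0]])
Y = np.array([[10.0, 12.0, 11.0, 15.0]])  # outputs, shape (s, n)

def ccr_efficiency(j0):
    """Input-oriented CCR efficiency of DMU j0 (envelopment form):
    minimize theta s.t. X @ lam <= theta * X[:, j0], Y @ lam >= Y[:, j0], lam >= 0."""
    m, n = X.shape
    s = Y.shape[0]
    c = np.r_[1.0, np.zeros(n)]                      # variables: [theta, lam_1..lam_n]
    A_in  = np.c_[-X[:, [j0]], X]                    # sum_j lam_j x_ij - theta x_i,j0 <= 0
    A_out = np.c_[np.zeros((s, 1)), -Y]              # -sum_j lam_j y_rj <= -y_r,j0
    res = linprog(c, A_ub=np.vstack([A_in, A_out]),
                  b_ub=np.r_[np.zeros(m), -Y[:, j0]],
                  bounds=[(None, None)] + [(0, None)] * n, method="highs")
    return res.x[0]

for j in range(X.shape[1]):
    print(f"DMU {j}: efficiency = {ccr_efficiency(j):.2f}")
```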

  18. Communicating Sustainability: An Operational Model for Evaluating Corporate Websites

    Directory of Open Access Journals (Sweden)

    Alfonso Siano

    2016-09-01

    The interest in corporate sustainability has increased rapidly in recent years and has encouraged organizations to adopt appropriate digital communication strategies, in which the corporate website plays a key role. Despite this growing attention in both the academic and business communities, models for the analysis and evaluation of online sustainability communication have not been developed to date. This paper aims to develop an operational model to identify and assess the requirements of sustainability communication in corporate websites. It has been developed from a literature review on corporate sustainability and digital communication and the analysis of the websites of the organizations included in the "Global CSR RepTrak 2015" by the Reputation Institute. The model identifies the core dimensions of online sustainability communication (orientation, structure, ergonomics, content: OSEC), sub-dimensions, such as stakeholder engagement and governance tools, communication principles, and measurable items (e.g., presence of the materiality matrix, interactive graphs). A pilot study on the websites of the energy and utilities companies included in the Dow Jones Sustainability World Index 2015 confirms the applicability of the OSEC framework. Thus, the model can provide managers and digital communication consultants with an operational tool that is useful for developing an industry ranking and assessing best practices. The model can also help practitioners to identify corrective actions in the critical areas of digital sustainability communication and avoid greenwashing.

  19. Stability of the matrix model in operator interpretation

    Directory of Open Access Journals (Sweden)

    Katsuta Sakai

    2017-12-01

    The IIB matrix model is one of the candidates for nonperturbative formulation of string theory, and it is believed that the model contains gravitational degrees of freedom in some manner. In some preceding works, it was proposed that the matrix model describes the curved space where the matrices represent differential operators that are defined on a principal bundle. In this paper, we study the dynamics of the model in this interpretation, and point out the necessity of the principal bundle from the viewpoint of the stability and diffeomorphism invariance. We also compute the one-loop correction which yields a mass term for each field due to the principal bundle. We find that the stability is not violated.

  20. Yanqing solar field: Dynamic optical model and operational safety analysis

    International Nuclear Information System (INIS)

    Zhao, Dongming; Wang, Zhifeng; Xu, Ershu; Zhu, Lingzhi; Lei, Dongqiang; Xu, Li; Yuan, Guofeng

    2017-01-01

    Highlights: • A dynamic optical model of the Yanqing solar field was built. • Tracking angle characteristics were studied with different SCA layouts and time. • The average energy flux was simulated across four clear days. • Influences of defocus angles for energy flux were analyzed. - Abstract: A dynamic optical model was established for the Yanqing solar field at the parabolic trough solar thermal power plant, and a simulation was conducted on four separate days of clear weather (March 3rd, June 2nd, September 25th, December 17th). The solar collector assembly (SCA) layout comprised a North-South and an East-West arrangement. The model consisted of the following modules: DNI, SCA operational, and SCA optical. The tracking angle characteristics were analyzed and the results showed that the East-West layout of the tracking system was the most viable. The average energy flux was simulated for a given time period and different SCA layouts, yielding an average flux of 6 kW/m2, which was then used as the design and operational standard of the Yanqing parabolic trough plant. The mass flow of the North-South layout was relatively stable. The influences of the defocus angles on both the average energy flux and the circumferential flux distribution were also studied. The results provided a theoretical basis for the following components: solar field design, mass flow control of the heat transfer fluid, design and operation of the tracking system, operational safety of SCAs, and power production prediction in the Yanqing 1 MW parabolic trough plant.

  1. Operator realization of the SU(2) WZNW model

    International Nuclear Information System (INIS)

    Furlan, P.; Todorov, I.T.

    1995-12-01

    Decoupling the chiral dynamics in the canonical approach to the WZNW model requires an extended phase space that includes left and right monodromy variables M and M-bar. Earlier work on the subject, which traced the quantum group symmetry of the model back to the Lie-Poisson symmetry of the chiral symplectic form, left some open questions: How to reconcile the necessity to set M M-bar^-1 = 1 (in order to recover the monodromy invariance of the local 2D group-valued field g = u u-bar) with the fact that M and M-bar obey different exchange relations? What is the status of the quantum symmetry in the 2D theory in which the chiral fields u(x-t) and u-bar(x+t) commute? Is there a consistent operator formalism in the chiral (and the extended 2D) theory in the continuum limit? We propose a constructive affirmative answer to these questions for G = SU(2) by presenting the quantum fields u and u-bar as sums of products of chiral vertex operators and q-Bose creation and annihilation operators. (author). 17 refs.

  2. Fuzzy expert systems models for operations research and management science

    Science.gov (United States)

    Turksen, I. B.

    1993-12-01

    Fuzzy expert systems can be developed for the effective use of management within the domains of concern associated with Operations Research and Management Science. These models are designed with: (1) expressive powers of representation embedded in linguistic variables and their linguistic values in natural language expressions, and (2) improved methods of inference based on fuzzy logic, which is a generalization of multi-valued logic with fuzzy quantifiers. The results of these fuzzy expert system models are either (1) approximately as good as those of their classical counterparts, or (2) much better than their counterparts. Moreover, for fuzzy expert system models it is only necessary to obtain ordinal-scale data, whereas for their classical counterparts it is generally required that data be at least on a ratio or absolute scale in order to guarantee the additivity and multiplicativity assumptions.

  3. Groundwater flow modelling of the excavation and operational phases - Laxemar

    Energy Technology Data Exchange (ETDEWEB)

    Svensson, Urban (Computer-aided Fluid Engineering AB, Lyckeby (Sweden)); Rhen, Ingvar (SWECO Environment AB, Falun (Sweden))

    2010-12-15

    As a part of the license application for a final repository for spent nuclear fuel at Forsmark, the Swedish Nuclear Fuel and Waste Management Company (SKB) has undertaken a series of groundwater flow modelling studies. These represent time periods with different hydraulic conditions and the simulations carried out contribute to the overall evaluation of the repository design and long-term radiological safety. The modelling study reported here presents calculated inflow rates, drawdown of the groundwater table and upconing of deep saline water for different levels of grouting efficiency during the excavation and operational phases of a final repository at Laxemar. The inflow calculations were accompanied by a sensitivity study, which among other matters handled the impact of different deposition hole rejection criteria. The report also presents tentative modelling results for the duration of the saturation phase, which starts once the used parts of the repository are being backfilled

  4. Ergonomic evaluation model of operational room based on team performance

    Directory of Open Access Journals (Sweden)

    YANG Zhiyi

    2017-05-01

    A theoretical calculation model based on the ergonomic evaluation of team performance was proposed in order to carry out the ergonomic evaluation of the layout design schemes of the action stations in a multitasking operational room. This model was constructed to calculate and compare the theoretical value of team performance for multiple layout schemes by considering such substantial influential factors as frequency of communication, distance, angle, importance, human cognitive characteristics and so on. An experiment was finally conducted to verify the proposed model under the criteria of completion time and accuracy rating. As illustrated by the experimental results, the proposed approach is conducive to the prediction and ergonomic evaluation of the layout design schemes of the action stations during early design stages, and provides a new theoretical method for the ergonomic evaluation, selection and optimization of layout design schemes.

  5. 'Outrunning' the running ear

    African Journals Online (AJOL)

    Chantel

    acute purulent otitis media should be considered when evaluating a patient with a running ear. These are listed in Table I. To outrun the running ear, all these facts should be kept in mind when evaluating a patient. HISTORY. Some important questions to ask are: • Family history • Cystic fibrosis • Allergies - nasal, chest and...

  6. Overcoming the "Run" Response

    Science.gov (United States)

    Swanson, Patricia E.

    2013-01-01

    Recent research suggests that it is not simply experiencing anxiety that affects mathematics performance but also how one responds to and regulates that anxiety (Lyons and Beilock 2011). Most people have faced mathematics problems that have triggered their "run response." The issue is not whether one wants to run, but rather…

  7. Overuse injuries in running

    DEFF Research Database (Denmark)

    Larsen, Lars Henrik; Rasmussen, Sten; Jørgensen, Jens Erik

    2016-01-01

    What is an overuse injury in running? This question is a cornerstone of clinical documentation and research-based evidence.

  8. An operational phenological model for numerical pollen prediction

    Science.gov (United States)

    Scheifinger, Helfried

    2010-05-01

    The general prevalence of seasonal allergic rhinitis is estimated to be about 15% in Europe, and it is still increasing. Pre-emptive measures require both the reliable assessment of the production and release of various pollen species and the forecasting of their atmospheric dispersion. For this purpose, numerical pollen prediction schemes are being developed by a number of European weather services in order to supplement and improve the qualitative pollen prediction systems with state-of-the-art instruments. Pollen emission is spatially and temporally highly variable throughout the vegetation period and not directly observed, which precludes a straightforward application of dispersion models to simulate pollen transport. Even the beginning and end of flowering, which indicate the time period of potential pollen emission, are not (yet) available in real time. One way to create a proxy for the beginning, the course and the end of the pollen emission is to simulate it as a function of real-time temperature observations. In this work the European phenological data set of the COST725 initiative forms the basis for modelling the beginning of flowering of 15 species, some of which emit allergenic pollen. In order to keep the problem as simple as possible for the sake of spatial interpolation, a 3-parameter temperature sum model was implemented in a real-time operational procedure, which calculates the spatial distribution of the entry dates for the current day and 24, 48 and 72 hours in advance. As a stand-alone phenological model, and combined with back trajectories, it is intended to support the qualitative pollen prediction scheme at the Austrian national weather service. Apart from that, it is planned to incorporate it into a numerical pollen dispersion model. More details, open questions and first results of the operational phenological model will be discussed and presented.
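
    A 3-parameter temperature-sum model of the kind referred to above typically uses a starting day, a base temperature, and a critical heat sum. The sketch below is a generic implementation with invented parameter values and synthetic temperatures; it is not the COST725 fit.

```python
import numpy as np

def flowering_onset(daily_mean_temp, start_doy=32, t_base=4.0, f_crit=120.0):
    """Return the day of year on which the heat sum (growing degree days above
    t_base, accumulated from start_doy) first exceeds f_crit.
    Parameter values are illustrative placeholders."""
    temps = np.asarray(daily_mean_temp, dtype=float)
    forcing = np.maximum(temps[start_doy - 1:] - t_base, 0.0)
    heat_sum = np.cumsum(forcing)
    if heat_sum[-1] < f_crit:
        return None                      # threshold never reached this year
    onset = np.argmax(heat_sum >= f_crit)
    return start_doy + int(onset)

# Synthetic daily mean temperatures for one year (deg C).
doy = np.arange(1, 366)
temps = 10.0 + 10.0 * np.sin(2 * np.pi * (doy - 105) / 365)
print("predicted onset (day of year):", flowering_onset(temps))
```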

  9. Running Injury Development

    DEFF Research Database (Denmark)

    Krogh Johansen, Karen; Hulme, Adam; Damsted, Camma

    2017-01-01

    Background: Behavioral science methods have rarely been used in running injury research. Therefore, the attitudes amongst runners and their coaches regarding factors leading to running injuries warrant formal investigation. Purpose: To investigate the attitudes of middle- and long-distance runners able to compete in national championships and their coaches about factors associated with running injury development. Methods: A link to an online survey was distributed to middle- and long-distance runners and their coaches across 25 Danish Athletics Clubs. The main research question was: "Which..." ...%]) to be associated with injury, while half of the runners found "insufficient recovery between running sessions" (53% [95%CI: 47%; 71%]) important. Conclusion: Runners and their coaches emphasize ignoring pain as a factor associated with injury development. The question remains how much running, if any at all...

  10. RUNNING INJURY DEVELOPMENT

    DEFF Research Database (Denmark)

    Johansen, Karen Krogh; Hulme, Adam; Damsted, Camma

    2017-01-01

    BACKGROUND: Behavioral science methods have rarely been used in running injury research. Therefore, the attitudes amongst runners and their coaches regarding factors leading to running injuries warrant formal investigation. PURPOSE: To investigate the attitudes of middle- and long-distance runners able to compete in national championships and their coaches about factors associated with running injury development. METHODS: A link to an online survey was distributed to middle- and long-distance runners and their coaches across 25 Danish Athletics Clubs. The main research question was: "Which..." ...%]) to be associated with injury, while half of the runners found "insufficient recovery between running sessions" (53% [95%CI: 47%; 71%]) important. CONCLUSION: Runners and their coaches emphasize ignoring pain as a factor associated with injury development. The question remains how much running, if any at all...

  11. A Final Approach Trajectory Model for Current Operations

    Science.gov (United States)

    Gong, Chester; Sadovsky, Alexander

    2010-01-01

    Predicting accurate trajectories with limited intent information is a challenge faced by air traffic management decision support tools in operation today. One such tool is the FAA's Terminal Proximity Alert system which is intended to assist controllers in maintaining safe separation of arrival aircraft during final approach. In an effort to improve the performance of such tools, two final approach trajectory models are proposed; one based on polynomial interpolation, the other on the Fourier transform. These models were tested against actual traffic data and used to study effects of the key final approach trajectory modeling parameters of wind, aircraft type, and weight class, on trajectory prediction accuracy. Using only the limited intent data available to today's ATM system, both the polynomial interpolation and Fourier transform models showed improved trajectory prediction accuracy over a baseline dead reckoning model. Analysis of actual arrival traffic showed that this improved trajectory prediction accuracy leads to improved inter-arrival separation prediction accuracy for longer look ahead times. The difference in mean inter-arrival separation prediction error between the Fourier transform and dead reckoning models was 0.2 nmi for a look ahead time of 120 sec, a 33 percent improvement, with a corresponding 32 percent improvement in standard deviation.
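
    As a rough sketch of the polynomial-interpolation idea, the example below fits a low-order polynomial to recent track history and extrapolates ahead, comparing it with a constant-velocity (dead reckoning) prediction. The track data, sampling interval, and polynomial order are invented for illustration and are not the paper's models or data.

```python
import numpy as np

# Synthetic along-track distance-to-threshold history (nmi) sampled every 10 s,
# with the aircraft gradually decelerating on final approach.
t_hist = np.arange(0, 130, 10.0)
speed = 160.0 - 0.25 * t_hist                 # knots, decelerating
dist = 12.0 - np.cumsum(speed * 10.0 / 3600.0)

def predict_poly(t_hist, dist, t_ahead, order=2):
    """Fit a low-order polynomial to the recent track and extrapolate ahead."""
    coeffs = np.polyfit(t_hist, dist, order)
    return np.polyval(coeffs, t_hist[-1] + t_ahead)

def predict_dead_reckoning(t_hist, dist, t_ahead):
    """Constant closure rate estimated from the last two samples."""
    rate = (dist[-1] - dist[-2]) / (t_hist[-1] - t_hist[-2])
    return dist[-1] + rate * t_ahead

for t_ahead in (60.0, 120.0):
    print(t_ahead, "s ahead:",
          round(predict_poly(t_hist, dist, t_ahead), 2), "nmi (poly) vs",
          round(predict_dead_reckoning(t_hist, dist, t_ahead), 2), "nmi (DR)")
```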

  12. Modeling Characteristics of an Operational Probabilistic Safety Assessment (PSA)

    International Nuclear Information System (INIS)

    Anoba, Richard C.; Khalil, Yehia; Fluehr, J.J. III; Kellogg, Richard; Hackerott, Alan

    2002-01-01

    Probabilistic Safety Assessments (PSAs) are increasingly being used as a tool for supporting the acceptability of design, procurement, construction, operation, and maintenance activities at nuclear power plants. Since the issuance of Generic Letter 88-20 and the subsequent Individual Plant Examinations (IPEs)/Individual Plant Examinations for External Events (IPEEEs), the NRC has issued several Regulatory Guides, such as RG 1.182, to describe the use of PSA in risk-informed regulation activities. The PSA models developed for the IPEs were typically based on a 'snapshot' of the risk profile at the nuclear power plant. The IPE models contain implicit assumptions and simplifications that limit the ability to realistically assess current issues. For example, IPE modeling assumptions related to plant configuration limit the ability to perform online equipment out-of-service assessments. The lack of model symmetry results in skewed risk results. IPE model simplifications related to initiating events have resulted in non-conservative estimates of risk impacts when equipment is removed from service. The IPE models also do not explicitly address all external events that are potentially risk significant as equipment is removed from service. (authors)

  13. Another Look at the Relationship Between Accident- and Encroachment-Based Approaches to Run-Off-the-Road Accidents Modeling

    Energy Technology Data Exchange (ETDEWEB)

    Miaou, Shaw-Pin

    1997-08-01

    The purpose of this study was to look for ways to combine the strengths of both approaches in roadside safety research. The specific objectives were (1) to present the encroachment-based approach in a more systematic and coherent way so that its limitations and strengths can be better understood from both statistical and engineering standpoints, and (2) to apply the analytical and engineering strengths of the encroachment-based thinking to the formulation of mean functions in accident-based models.

  14. Application of a Fuzzy Verification Technique for Assessment of the Weather Running Estimate-Nowcast (WRE-N) Model

    Science.gov (United States)

    2016-10-01

    filtered by the application of a threshold. Fuzzy methods have been developed in recent years to overcome limitations encountered when applying... [Report front matter: Fig. 1 shows the triple-nested model domains, with coincident domain center points; Table 1 gives the WRE-N triple-nested domain dimensions in kilometers; Table 2 gives the WRE-N configuration.]

  15. Using modeling to understand how athletes in different disciplines solve the same problem: swimming versus running versus speed skating.

    Science.gov (United States)

    de Koning, Jos J; Foster, Carl; Lucia, Alejandro; Bobbert, Maarten F; Hettinga, Florentina J; Porcari, John P

    2011-06-01

    Every new competitive season offers excellent examples of human locomotor abilities, regardless of the sport. As a natural consequence of competitions, world records are broken every now and then. World record races not only offer spectators the pleasure of watching very talented and highly trained athletes performing muscular tasks with remarkable skill, but also represent natural models of the ultimate expression of human integrated muscle biology, through strength, speed, or endurance performances. Given that humans may be approaching our species limit for muscular power output, interest in how athletes improve on world records has led to interest in the strategy of how limited energetic resources are best expended over a race. World record performances may also shed light on how athletes in different events solve exactly the same problem-minimizing the time required to reach the finish line. We have previously applied mathematical modeling to the understanding of world record performances in terms of improvements in facilities/equipment and improvements in the athletes' physical capacities. In this commentary, we attempt to demonstrate that differences in world record performances in various sports can be explained using a very simple modeling process.

  16. LHCb silicon detectors: the Run 1 to Run 2 transition and first experience of Run 2

    CERN Document Server

    Rinnert, Kurt

    2015-01-01

    LHCb is a dedicated experiment to study New Physics in the decays of heavy hadrons at the Large Hadron Collider (LHC) at CERN. The detector includes a high-precision tracking system consisting of a silicon-strip vertex detector (VELO) surrounding the pp interaction region, a large-area silicon-strip detector located upstream of a dipole magnet (TT), and three stations of silicon-strip detectors (IT) and straw drift tubes placed downstream (OT). The operational transition of the silicon detectors VELO, TT and IT from LHC Run 1 to Run 2 and first Run 2 experiences will be presented. During the long shutdown of the LHC the silicon detectors have been maintained in a safe state and operated regularly to validate changes in the control infrastructure, new operational procedures, updates to the alarm systems and monitoring software. In addition, there have been some infrastructure-related challenges due to maintenance performed in the vicinity of the silicon detectors that will be discussed. The LHCb silicon dete...

  17. Object-oriented process dose modeling for glovebox operations

    International Nuclear Information System (INIS)

    Boerigter, S.T.; Fasel, J.H.; Kornreich, D.E.

    1999-01-01

    The Plutonium Facility at Los Alamos National Laboratory supports several defense and nondefense-related missions for the country by performing fabrication, surveillance, and research and development for materials and components that contain plutonium. Most operations occur in rooms with one or more arrays of gloveboxes connected to each other via trolley gloveboxes. Minimizing the effective dose equivalent (EDE) is a growing concern as a result of steadily declining allowable dose limits and a growing general awareness of safety in the workplace. In general, the authors distinguish three components of a worker's total EDE: the primary EDE, the secondary EDE, and the background EDE. A particular background source of interest is the nuclear materials vault. The distinction between sources inside and outside of a particular room is arbitrary, with the underlying assumption that building walls and floors provide significant shielding to justify including sources in other rooms in the background category. Los Alamos has developed the Process Modeling System (ProMoS) primarily for performing process analyses of nuclear operations. ProMoS is an object-oriented, discrete-event simulation package that has been used to analyze operations at Los Alamos and proposed facilities such as the new fabrication facilities for the Complex-21 effort. In the past, crude estimates of the process dose (the EDE received when a particular process occurred), room dose (the EDE received when a particular process occurred in a given room), and facility dose (the EDE received when a particular process occurred in the facility) were used to obtain an integrated EDE for a given process. Modifications were made to the ProMoS package to utilize secondary dose information so that dose modeling can enhance the process modeling efforts

  18. Operational derivation of Boltzmann distribution with Maxwell's demon model.

    Science.gov (United States)

    Hosoya, Akio; Maruyama, Koji; Shikano, Yutaka

    2015-11-24

    The resolution of the Maxwell's demon paradox linked thermodynamics with information theory through the information erasure principle. By considering a demon endowed with a Turing machine consisting of a memory tape and a processor, we attempt to explore the link towards the foundations of statistical mechanics and to derive results therein in an operational manner. Here, we present a derivation of the Boltzmann distribution in equilibrium as an example, without hypothesizing the principle of maximum entropy. Further, since the model can in principle be applied to non-equilibrium processes, we demonstrate the dissipation-fluctuation relation to show the possibility in this direction.
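
    For reference, the equilibrium distribution the abstract derives is the standard Boltzmann form (stated here in its textbook notation, not via the authors' operational route):

```latex
p_i = \frac{e^{-\beta E_i}}{Z}, \qquad Z = \sum_j e^{-\beta E_j}, \qquad \beta = \frac{1}{k_{\mathrm{B}} T}
```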

  19. The Long-Run Impact on Population and Income of Open Access to Land in a Model with Parental Altruism

    OpenAIRE

    Jon D. Harford

    2000-01-01

    Steady state levels of population and per capita income are examined using a Becker-Barro (1988) style of model of an economy with identical altruistic parents bearing costly children who receive bequests of capital and land. Inspired by the work of North (1981) and others, the problem of open access land with ancillary negative effects on private (but not public) productivity of capital is examined. It is seen that open access to land can lead to overpopulation in a ceteris paribus sense, an...

  20. CMS software and computing for LHC Run 2

    CERN Document Server

    INSPIRE-00067576

    2016-11-09

    The CMS offline software and computing system has successfully met the challenge of LHC Run 2. In this presentation, we will discuss how the entire system was improved in anticipation of increased trigger output rate, increased rate of pileup interactions and the evolution of computing technology. The primary goals behind these changes were to increase the flexibility of computing facilities wherever possible, so as to increase our operational efficiency, and to decrease the computing resources needed to accomplish the primary offline computing workflows. These changes have resulted in a new approach to distributed computing in CMS for Run 2 and for the future as the LHC luminosity should continue to increase. We will discuss changes and plans to our data federation, which was one of the key changes towards a more flexible computing model for Run 2. Our software framework and algorithms also underwent significant changes. We will summarize our experience with a new multi-threaded framework as deployed on ou...

  1. A Validated Set of MIDAS V5 Task Network Model Scenarios to Evaluate Nextgen Closely Spaced Parallel Operations Concepts

    Science.gov (United States)

    Gore, Brian Francis; Hooey, Becky Lee; Haan, Nancy; Socash, Connie; Mahlstedt, Eric; Foyle, David C.

    2013-01-01

    The Closely Spaced Parallel Operations (CSPO) scenario is a complex human performance model scenario that tested alternate operator roles and responsibilities in response to a series of off-nominal operations on approach and landing (see Gore, Hooey, Mahlstedt, Foyle, 2013). The model links together the procedures, equipment, crewstation, and external environment to produce predictions of operator performance in response to Next Generation system designs, like those expected in the National Airspace's NextGen concepts. The task analysis that is contained in the present report comes from the task analysis window in the MIDAS software. These tasks link definitions and states for equipment components, environmental features, as well as operational contexts. The current task analysis culminated in 3300 tasks that included over 1000 Subject Matter Expert (SME)-vetted, re-usable procedural sets for three critical phases of flight: the Descent, Approach, and Land procedural sets (see Gore et al., 2011 for a description of the development of the tasks included in the model; Gore, Hooey, Mahlstedt, Foyle, 2013 for a description of the model and its results; Hooey, Gore, Mahlstedt, Foyle, 2013 for a description of the guidelines that were generated from the model's results; Gore, Hooey, Foyle, 2012 for a description of the model's implementation and its settings). The rollout, after-landing checks, taxi-to-gate and arrive-at-gate procedures illustrated in Figure 1 were not used in the approach and divert scenarios exercised. The other networks in Figure 1 set up appropriate context settings for the flight deck. The current report presents the model's task decomposition from the top (highest) level and decomposes it to finer-grained levels. The first task that is completed by the model is to set all of the initial settings for the scenario runs included in the model (network 75 in Figure 1). This initialization process also resets the CAD graphic files contained within MIDAS, as well as the embedded

  2. Long-Run Neutrality and Superneutrality in an ARIMA Framework.

    OpenAIRE

    Fisher, Mark E; Seater, John J

    1993-01-01

    The authors formalize long-run neutrality and long-run superneutrality in the context of a bivariate ARIMA model; show how the restrictions implied by long-run neutrality and long-run superneutrality depend on the orders of integration of the variables; apply their analysis to previous work, showing how that work is related to long-run neutrality and long-run superneutrality; and provide some new evidence on long-run neutrality and long-run superneutrality. Copyright 1993 by American Economic...

  3. Integration of field data into operational snowmelt-runoff models

    International Nuclear Information System (INIS)

    Brandt, M.; Bergström, S.

    1994-01-01

    Conceptual runoff models have become standard tools for operational hydrological forecasting in Scandinavia. These models are normally based on observations from the national climatological networks, but in mountainous areas the stations are few and sometimes not representative. Due to the great economic importance of good hydrological forecasts for the hydro-power industry, attempts have been made to improve the model simulations by support from field observations of the snowpack. The snowpack has been mapped by several methods: airborne gamma-spectrometry, airborne georadars, satellites, and conventional snow courses. The studies cover more than ten years of work in Sweden. The conclusion is that field observations of the snow cover have a potential for improvement of the forecasts of inflow to the reservoirs in the mountainous part of the country, where the climatological data coverage is poor. This is pronounced during years with unusual snow distribution. The potential for model improvement is smaller in the climatologically more homogeneous forested lowlands, where the climatological network is denser. The costs of introduction of airborne observations into the modelling procedure are high and can only be justified in areas of great hydropower potential. (author)

  4. Full Spectrum Operations: A Running Start

    Science.gov (United States)

    2009-03-31

    conversion using a package of advanced enzymes, yeast, and commercial-grade antibiotics. This package ferments the sugar and starch liquid solution for 12... application of heat to break down organic materials anaerobically, produces a fuel gas composed of methane, hydrocarbons, hydrogen, and carbon dioxide... forms of producing landfill gas using various bacteria in airtight containers called digesters. Fermentation is a form of biological WTE

  5. The Stochastic Multicloud Model as part of an operational convection parameterisation in a comprehensive GCM

    Science.gov (United States)

    Peters, Karsten; Jakob, Christian; Möbis, Benjamin

    2015-04-01

    An adequate representation of convective processes in numerical models of the atmospheric circulation (general circulation models, GCMs) remains one of the grand challenges in atmospheric science. In particular, the models struggle with correctly representing the spatial distribution and high variability of tropical convection. It is thought that this model deficiency partly results from formulating current convection parameterisation schemes in a purely deterministic manner. Here, we use observations of tropical convection to inform the design of a novel convection parameterisation with stochastic elements. The novel scheme is built around the Stochastic MultiCloud Model (SMCM; Khouider et al. 2010). We present the progress made in utilising SMCM-based estimates of updraft area fractions at cloud base as part of the deep convection scheme of a GCM. The updraft area fractions are used to yield one part of the cloud base mass-flux used in the closure assumption of convective mass-flux schemes. The closure thus receives a stochastic component, potentially improving modeled convective variability and coherence. For initial investigations, we apply the above methodology to the operational convective parameterisation of the ECHAM6 GCM. We perform 5-year AMIP simulations, i.e. with prescribed observed SSTs. We find that with the SMCM, convection is weaker and more coherent and continuous from timestep to timestep compared to the standard model. Total global precipitation is reduced in the SMCM run, but this reduces (i) the overall error compared to observed global precipitation (GPCP) and (ii) tropical mid-tropospheric temperature biases compared to ERA-Interim. Hovmoeller diagrams indicate a slightly higher degree of convective organisation compared to the base case and Wheeler-Kiladis frequency wavenumber diagrams indicate slightly more spectral power in the MJO range.

  6. Operational modal analysis modeling, Bayesian inference, uncertainty laws

    CERN Document Server

    Au, Siu-Kui

    2017-01-01

    This book presents operational modal analysis (OMA), employing a coherent and comprehensive Bayesian framework for modal identification and covering stochastic modeling, theoretical formulations, computational algorithms, and practical applications. Mathematical similarities and philosophical differences between Bayesian and classical statistical approaches to system identification are discussed, allowing their mathematical tools to be shared and their results correctly interpreted. Many chapters can be used as lecture notes for the general topic they cover beyond the OMA context. After an introductory chapter (1), Chapters 2–7 present the general theory of stochastic modeling and analysis of ambient vibrations. Readers are first introduced to the spectral analysis of deterministic time series (2) and structural dynamics (3), which do not require the use of probability concepts. The concepts and techniques in these chapters are subsequently extended to a probabilistic context in Chapter 4 (on stochastic pro...

  7. Operational Testing of Satellite based Hydrological Model (SHM)

    Science.gov (United States)

    Gaur, Srishti; Paul, Pranesh Kumar; Singh, Rajendra; Mishra, Ashok; Gupta, Praveen Kumar; Singh, Raghavendra P.

    2017-04-01

    Incorporation of the concept of transposability in model testing is one of the prominent ways to check the credibility of a hydrological model. Successful testing ensures the ability of hydrological models to deal with changing conditions, along with their extrapolation capacity. For a newly developed model, a number of contradictions arise regarding its applicability; therefore, testing the credibility of the model is essential to proficiently assess its strengths and limitations. This concept emphasizes performing 'Hierarchical Operational Testing' of the Satellite based Hydrological Model (SHM), a newly developed surface water-groundwater coupled model, under the PRACRITI-2 program initiated by the Space Application Centre (SAC), Ahmedabad. SHM aims at sustainable water resources management using remote sensing data from Indian satellites. It consists of grid cells of 5 km x 5 km resolution and comprises five modules, namely: Surface Water (SW), Forest (F), Snow (S), Groundwater (GW) and Routing (ROU). The SW module (which functions in the grid cells with land cover other than forest and snow) deals with estimation of surface runoff, soil moisture and evapotranspiration by using the NRCS-CN method, water balance and the Hargreaves method, respectively. The hydrology of the F module depends entirely on sub-surface processes, and its water balance is calculated based on them. The GW module generates baseflow (depending on water table variation with the level of water in streams) using the Boussinesq equation. The ROU module is based on a cell-to-cell routing technique following the principle of the Time Variant Spatially Distributed Direct Runoff Hydrograph (SDDH) to route the runoff and baseflow generated by the different modules up to the outlet. For this study the Subarnarekha river basin, a flood-prone zone of eastern India, has been chosen for the hierarchical operational testing scheme, which includes tests under stationary as well as transitory conditions. For this, the basin has been divided into three sub-basins using three flow
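
    The abstract names the NRCS-CN (curve number) method for surface runoff; as a point of reference, here is a minimal sketch of that standard calculation (textbook form with the common 0.2·S initial abstraction). The grid looping, soil-moisture accounting and other SHM modules are not represented, and the CN and rainfall values are illustrative:

```python
def nrcs_cn_runoff(p_mm: float, cn: float) -> float:
    """Direct runoff depth (mm) from event rainfall p_mm using the NRCS curve number method."""
    s = 25400.0 / cn - 254.0      # potential maximum retention (mm)
    ia = 0.2 * s                  # initial abstraction (common default ratio)
    if p_mm <= ia:
        return 0.0                # all rainfall absorbed before runoff starts
    return (p_mm - ia) ** 2 / (p_mm - ia + s)

# Example: 60 mm storm on a cell with CN = 75
print(round(nrcs_cn_runoff(60.0, 75.0), 1))  # ~14.5 mm of direct runoff
```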

  8. Metric versus observable operator representation, higher spin models

    Science.gov (United States)

    Fring, Andreas; Frith, Thomas

    2018-02-01

    We elaborate further on the metric representation that is obtained by transferring the time-dependence from a Hermitian Hamiltonian to the metric operator in a related non-Hermitian system. We provide further insight into the procedure on how to employ the time-dependent Dyson relation and the quasi-Hermiticity relation to solve time-dependent Hermitian Hamiltonian systems. By solving both equations separately we argue here that it is in general easier to solve the former. We solve the mutually related time-dependent Schrödinger equation for a Hermitian and non-Hermitian spin 1/2, 1 and 3/2 model with time-independent and time-dependent metric, respectively. In all models the overdetermined coupled system of equations for the Dyson map can be decoupled by algebraic manipulations and reduced to simple linear differential equations and an equation that can be converted into the non-linear Ermakov-Pinney equation.

  9. Modeling of crushed ore agglomeration for heap leach operations

    Science.gov (United States)

    Dhawan, Nikhil

    agglomeration, specifically crushed ore agglomeration. The experimental difficulties and how to overcome them are described. An empirical model that is readily useful for plant heap leach operations is shown in detail. The analysis of constituent particles within agglomerate size class is done with a partition model. The guest and host nature of particles, thus delineated, helps one to anticipate the nature of agglomerates that would be formed with a given ore size distribution. Thus, all aspects of batch agglomeration are addressed in this work.

  10. Run-Time Control For Software Defined Radio

    OpenAIRE

    Smit, L.T.; Smit, Gerardus Johannes Maria; Havinga, Paul J.M.; Hurink, Johann L.; Broersma, Haitze J.

    2002-01-01

    A control system is presented, which adapts at run-time a software defined radio to the dynamic external environment. The goal is to operate with minimized use of resources and energy consumption, while satisfying an adequate quality of service. The control system is based on a model, which selects the optimal configuration based on off-line gathered information and on-line measurements.

  11. Modelling operator cognitive interactions in nuclear power plant safety evaluation

    International Nuclear Information System (INIS)

    Senders, J.W.; Moray, N.; Smiley, A.; Sellen, A.

    1985-08-01

    The overall objectives of the study were to review methods which are applicable to the analysis of control room operator cognitive interactions in nuclear plant safety evaluations and to indicate where future research effort in this area should be directed. This report is based on an exhaustive search and review of the literature on NPP (Nuclear Power Plant) operator error, human error, human cognitive function, and on human performance. A number of methods which have been proposed for the estimation of data for probabilistic risk analysis have been examined and have been found wanting. None addresses the problem of diagnosis error per se. Virtually all are concerned with the more easily detected and identified errors of action. None addresses underlying cause and mechanism. It is these mechanisms which must be understood if diagnosis errors and other cognitive errors are to be controlled and predicted. We have attempted to overcome the deficiencies of earlier work and have constructed a model/taxonomy, EXHUME, which we consider to be exhaustive. This construct has proved to be fruitful in organizing our thinking about the kinds of error that can occur and the nature of self-correcting mechanisms, and has guided our thinking in suggesting a research program which can provide the data needed for quantification of cognitive error rates and of the effects of mitigating efforts. In addition a preliminary outline of EMBED, a causal model of error, is given based on general behavioural research into perception, attention, memory, and decision making. 184 refs

  12. Understanding the Impact of Reservoir Operations on Temperature Hydrodynamics at Shasta Lake through 2D and 3D Modeling

    Science.gov (United States)

    Hallnan, R.; Busby, D.; Saito, L.; Daniels, M.; Danner, E.; Tyler, S.

    2016-12-01

    Stress on California's salmon fisheries as a result of recent drought highlights a need for effective temperature management in the Sacramento River. Cool temperatures are required for Chinook salmon spawning and rearing. At Shasta Dam in northern California, managers use selective reservoir withdrawals to meet downstream temperature thresholds set for Chinook salmon populations. Shasta Dam is equipped with a temperature control device (TCD) that allows for water withdrawals at different reservoir depths. A two-dimensional CE-QUAL-W2 (W2) model of Shasta Reservoir has been used to understand the impacts of TCD operations on reservoir and discharge dynamics at Shasta. W2 models the entire reservoir based on hydrologic and meteorological inputs, and therefore can be used to simulate various hydroclimatic conditions, reservoir operations, and resulting reservoir conditions. A limitation of the W2 model is that it only captures reservoir conditions in two dimensions (length and depth), which may not represent local hydrodynamic effects of TCD operations that could affect simulation of discharge temperatures. Thus, a three-dimensional (3D) model of the TCD and the immediately adjacent upstream reservoir has been constructed using computational fluid dynamics (CFD) in ANSYS Fluent. This 3D model provides additional insight into the mixing effects of different TCD operations, and resulting reservoir outflow temperatures. The drought conditions of 2015 provide a valuable dataset for assessing the efficacy of modeling the temperature profile of Shasta Reservoir under very low inflow volumes, so the W2 and CFD models are compared for model performance in late 2015. To assist with this assessment, data from a distributed temperature sensing (DTS) deployment at Shasta Lake since August 2015 are used. This presentation describes model results from both the W2 and the CFD model runs during late 2015, and discusses their efficacy for modeling drought conditions.

  13. Comparison of two different running models for the shock wave lithotripsy machine in Taipei City Hospital: self-support versus outsourcing cooperation.

    Science.gov (United States)

    Huang, Chi-Yi; Chen, Shiou-Sheng; Chen, Li-Kuei

    2009-10-01

    To compare two different running models including self-support and outsourcing cooperation for the extracorporeal shock wave lithotripsy (SWL) machine in Taipei City Hospital, we conducted a retrospective study. Self-support means that the hospital has to buy an SWL machine and receives all the payment from SWL. In outsourcing cooperation, the cooperative company provides an SWL machine and shares the payment with the hospital. Between January 2002 and December 2006, we used self-support for the SWL machine, and from January 2007 to December 2008, we used outsourcing cooperation. We used the method of full costing to calculate the cost of SWL, and the break-even point was the lowest number of treatment sessions of SWL needed to balance payments and costs every month. Quality parameters including stone-free rate, retreatment rate, additional procedures and complication rate were evaluated. When outsourcing cooperation was used, there were significantly more treatment sessions of SWL every month than when utilizing self-support (36.3 +/- 5.1 vs. 48.1 +/- 8.4, P = 0.03). The cost of SWL for every treatment session was significantly higher using self-support than with outsourcing cooperation (25027.5 +/- 1789.8 NT$ vs. 21367.4 +/- 201.0 NT$). The break-even point was 28.3 (treatment sessions) for self-support, and 28.4 for outsourcing cooperation, when the hospital got 40% of the payment, which would decrease if the percentage increased. No significant differences were noticed for stone-free rate, retreatment rate, additional procedures and complication rate of SWL between the two running models. In addition, outsourcing cooperation had a lower cost per treatment session, but a greater number of treatment sessions of SWL every month, than self-support.
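
    A minimal sketch of the break-even idea described above, assuming a simple split of the full costing into a monthly fixed cost and a variable cost per session (the figures are hypothetical and are not taken from the study):

```python
def break_even_sessions(monthly_fixed_cost: float,
                        revenue_per_session: float,
                        variable_cost_per_session: float) -> float:
    """Lowest number of monthly SWL sessions at which payments balance costs."""
    contribution = revenue_per_session - variable_cost_per_session
    if contribution <= 0:
        raise ValueError("revenue per session must exceed variable cost")
    return monthly_fixed_cost / contribution

# Illustrative values in NT$ (hypothetical, not taken from the study)
print(round(break_even_sessions(400_000, 20_000, 6_000), 1))  # 28.6 sessions/month
```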

  14. Models for filtration during drilling, completion and stimulation operations

    Science.gov (United States)

    Xie, Jing

    Filtration of solid suspensions is encountered in many operations during drilling, completing and stimulating oil and gas wells. Filtration of drilling muds, completion and fracturing fluids, gravel packing slurries are a few examples. Most of these applications involve the filtration of non-Newtonian fluids into a porous medium containing compressible fluids. Internal and external compressible filter cakes can form under static or dynamic filtration conditions. Models for static filtration of solid-laden polymer fluids have been developed. These models solve the basic filtration equations to obtain the depth of invasion of solids and polymer into the formation. The buildup of an external filter cake is modeled after a transition time is reached when no more additional particles invade the formation. It is shown that a square root of time dependence is obtained during external filtration of polymer fluids. During the spurt loss period (internal filtration) the model allows us to calculate the extent of solids and filtrate invasion and the duration of spurt loss. The model for the first time presents a formulation where the spurt loss can be obtained from the model directly. Fluid compressibility effects as well as cake compressibility can be accounted for in the model. The results of the model allow us to better interpret leak-off data during the period in which the polymer is being squeezed into the formation. Comparisons with experiments show that fluid leak-off during the spurt loss period can be accurately estimated with the equations presented. During drilling or when a fracture is created in a frac-and-pack operation, fluid leak-off occurs by a dynamic filtration process. In this process, particles are constantly sheared away by the flow of the polymer slurry parallel to the face of the fracture with fluid leak-off occurring into the rock. A new model for dynamic filtration has been developed which takes into account the particle size distribution of the wall
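
    For context, the square-root-of-time behaviour noted above during external filter cake buildup corresponds to the classic Carter-type leak-off expression (a generic reference form, not the specific formulation derived in this work):

```latex
\frac{V_L(t)}{A} \;=\; S_p \;+\; 2\,C_L\,\sqrt{t - t_{sp}}, \qquad t \ge t_{sp}
```

    Here S_p is the spurt-loss volume per unit area accumulated during the internal-filtration period ending at t_sp, and C_L is the leak-off coefficient.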

  15. ADHydro: A Parallel Implementation of a Large-scale High-Resolution Multi-Physics Distributed Water Resources Model Using the Charm++ Run Time System

    Science.gov (United States)

    Steinke, R. C.; Ogden, F. L.; Lai, W.; Moreno, H. A.; Pureza, L. G.

    2014-12-01

    Physics-based watershed models are useful tools for hydrologic studies, water resources management and economic analyses in the contexts of climate, land-use, and water-use changes. This poster presents a parallel implementation of a quasi 3-dimensional, physics-based, high-resolution, distributed water resources model suitable for simulating large watersheds in a massively parallel computing environment. Developing this model is one of the objectives of the NSF EPSCoR RII Track II CI-WATER project, which is joint between Wyoming and Utah EPSCoR jurisdictions. The model, which we call ADHydro, is aimed at simulating important processes in the Rocky Mountain west, including: rainfall and infiltration, snowfall and snowmelt in complex terrain, vegetation and evapotranspiration, soil heat flux and freezing, overland flow, channel flow, groundwater flow, water management and irrigation. Model forcing is provided by the Weather Research and Forecasting (WRF) model, and ADHydro is coupled with the NOAH-MP land-surface scheme for calculating fluxes between the land and atmosphere. The ADHydro implementation uses the Charm++ parallel run time system. Charm++ is based on location transparent message passing between migrateable C++ objects. Each object represents an entity in the model such as a mesh element. These objects can be migrated between processors or serialized to disk allowing the Charm++ system to automatically provide capabilities such as load balancing and checkpointing. Objects interact with each other by passing messages that the Charm++ system routes to the correct destination object regardless of its current location. This poster discusses the algorithms, communication patterns, and caching strategies used to implement ADHydro with Charm++. The ADHydro model code will be released to the hydrologic community in late 2014.

  16. Proposal for operator's mental model using the concept of multilevel flow modeling

    International Nuclear Information System (INIS)

    Yoshimura, Seiichi; Takano, Kenichi; Sasou, Kunihide

    1995-01-01

    It is necessary to analyze an operator's thinking process and an operator team's intention-forming process for preventing human errors in a highly advanced, huge system like a nuclear power plant. The Central Research Institute of Electric Power Industry is promoting a research project to establish human error prevention countermeasures by modeling the thinking and intention-forming process. The important issues are future prediction and cause identification when abnormal situations occur in a nuclear power plant. The concept of Multilevel Flow Modeling (MFM) seems to be effective as an operator's mental model which performs the future prediction and the cause identification. MFM is a concept which qualitatively describes the plant functions by energy and mass flows and also describes the plant status by hierarchically breaking down the targets that a plant should achieve. In this paper, an operator's mental model using the concept of MFM was proposed and a nuclear power plant diagnosis support system using MFM was developed. The system evaluation test by personnel who have operational experience in nuclear power plants revealed that MFM was superior in the future prediction and the cause identification to a traditional nuclear power plant status display system which used mimics and trends. MFM proved to be useful as an operator's mental model by the test. (author)

  17. Modelling the Turbocharger Cut Off Application Due to Slow Steaming Operation 12RTA96C-B Engine

    Directory of Open Access Journals (Sweden)

    Karsten Wehner

    2017-09-01

    Full Text Available Out of the total operational costs of a ship, fuel costs account for by far the highest proportion. In view of the global economic situation and the rising oil prices, shipowners and charterers are looking for solutions to cut costs by reducing fuel consumption. Low load operation, also well known as “slow steaming”, represents the currently most effective and popular measure to cut fuel costs and, in consequence, the total operational costs for increased competitiveness in the market. Low load operation is possible and there is an increasing trend to operate in these very low engine load ranges. As the engines were not designed for this operational condition, various retrofit modifications to the engine can compensate for this. With low load operation, the reduction in RPM causes problems when sailing at low speed. A turbocharger (TC) compresses the inlet air to a high pressure, and after cooling, this compressed air results in a higher mass of air in the cylinder. But when running at a low power load, this air reaches temperatures that are too low for an optimal combustion process. One solution comes from the company Wärtsilä, which installs so-called “low steam engine kits”. When this kit is installed, it allows the engine operators to cut off one turbocharger of the engine; this results in a higher RPM for the operating turbochargers. When the remaining TCs have a higher RPM, their efficiency improves and gives the engine more air for combustion. The goal of this Bachelor thesis is to build a calculation model and show that switching off one or more turbochargers will improve efficiency in slow steaming operation. In addition, this thesis aims to estimate the performance of the engine in both operating conditions.

  18. LHC Run 2: Results and challenges

    CERN Document Server

    AUTHOR|(CDS)2068843; Arduini, Gianluigi; Bartosik, Hannes; De Maria, Riccardo; Giovannozzi, Massimo; Iadarola, Giovanni; Jowett, John; Li, Kevin Shing Bruce; Lamont, Mike; Lechner, Anton; Metral, Elias; Mirarchi, Daniele; Pieloni, Tatiana; Redaelli, Stefano; Rumolo, Giovanni; Salvant, Benoit; Tomas Garcia, Rogelio; Wenninger, Jorg

    2016-01-01

    The first proton run of the LHC was very successful and resulted in important physics discoveries. It was followed by a two-year shutdown where a large number of improvements were carried out. In 2015, the LHC was restarted and this second run aims at further exploring the physics of the standard model and beyond at an increased beam energy. This article gives a review of the performance achieved so far and the limitations encountered, as well as the future challenges for the CERN accelerators to maximize the data delivered to the LHC experiments in Run 2. Furthermore, the status of the 2016 LHC run and commissioning is discussed.

  19. Modeling and simulation of the USAVRE network and radiology operations

    Science.gov (United States)

    Martinez, Ralph; Bradford, Daniel Q.; Hatch, Jay; Sochan, John; Chimiak, William J.

    1998-07-01

    There are three levels to the model: (1) Network model of the Cable Bundling Initiative (CBI) network and base networks (CUITIN), (2) Protocol model, including network, transport, and middleware protocols, such as TCP/IP and Common Object Request Broker Architecture (CORBA) protocols, and (3) USAVRE Application layer model, including database archive systems, acquisition equipment, viewing workstations, and operations and management. The Network layer of the model contains the ATM-based backbone network provided by the CBI, interfaces into the RMC regional networks and the PACS networks at the medical centers and RMC sites. The CBI network currently is a DS-3 (45 Mbps) backbone consisting of three major hubs, at Ft. Leavenworth, KS, Ft. Belvoir, VA, and Ft. McPherson, GA. The medical center PACS networks are 100 Mbps and 1 Gbps networks. The RMC site networks run at 100 Mbps. The model is very beneficial in studying the multimedia transfer and operations characteristics of the USAVRE before it is completely built and deployed.

  20. Hybrid System Modeling and Full Cycle Operation Analysis of a Two-Stroke Free-Piston Linear Generator

    Directory of Open Access Journals (Sweden)

    Peng Sun

    2017-02-01

    Full Text Available Free-piston linear generators (FPLGs) have attractive application prospects for hybrid electric vehicles (HEVs) owing to their high efficiency, low emissions and multi-fuel flexibility. In order to achieve long-term stable operation, the hybrid system design and full-cycle operation strategy are essential factors that should be considered. A 25 kW FPLG consisting of an internal combustion engine (ICE), a linear electric machine (LEM) and a gas spring (GS) is designed. To improve the power density and generating efficiency, the LEM is assembled with two modular flat-type double-sided PM LEM units, which sandwich a common moving-magnet plate supported by a middle keel beam and bilateral slide guide rails to enhance the stiffness of the moving plate. For convenience of analyzing the operation processes, the coupled hybrid system is modeled mathematically and a full-cycle simulation model is established. Top-level systemic control strategies, including the starting, stable operating, fault recovering and stopping strategies, are analyzed and discussed. The analysis results validate that the system can run stably and robustly with the proposed full-cycle operation strategy. The effective electric output power can reach 26.36 kW with an overall system efficiency of 36.32%.

  1. Chiral condensate in the Schwinger model with matrix product operators

    Energy Technology Data Exchange (ETDEWEB)

    Banuls, Mari Carmen [Max-Planck-Institut fuer Quantenoptik (MPQ), Garching (Germany); Cichy, Krzysztof [Frankfurt Univ. (Germany). Inst. fuer Theoretische Physik; Poznan Univ. (Poland). Faculty of Physics; Jansen, Karl [Deutsches Elektronen-Synchrotron (DESY), Zeuthen (Germany). John von Neumann-Inst. fuer Computing NIC; Saito, Hana [Tsukuba Univ. (Japan). Center for Computational Sciences

    2016-03-15

    Tensor network (TN) methods, in particular the Matrix Product States (MPS) ansatz, have proven to be a useful tool in analyzing the properties of lattice gauge theories. They allow for a very good precision, much better than standard Monte Carlo (MC) techniques for the models that have been studied so far, due to the possibility of reaching much smaller lattice spacings. The real reason for the interest in the TN approach, however, is its ability, shown so far in several condensed matter models, to deal with theories which exhibit the notorious sign problem in MC simulations. This makes it a promising approach for dealing with the non-zero chemical potential in QCD and other lattice gauge theories, as well as with real-time simulations. In this paper, using matrix product operators, we extend our analysis of the Schwinger model at zero temperature to show the feasibility of this approach also at finite temperature. This is an important step on the way to deal with the sign problem of QCD. We analyze in detail the chiral symmetry breaking in the massless and massive cases and show that the method works very well and gives good control over a broad range of temperatures, essentially from zero to infinite temperature.

  2. Groundwater flow modelling of the excavation and operational phases - Forsmark

    International Nuclear Information System (INIS)

    Svensson, Urban; Follin, Sven

    2010-07-01

    As a part of the license application for a final repository for spent nuclear fuel at Forsmark, the Swedish Nuclear Fuel and Waste Management Company (SKB) has undertaken a series of groundwater flow modelling studies. These represent time periods with different climate conditions and the simulations carried out contribute to the overall evaluation of the repository design and long-term radiological safety. The modelling study reported here presents calculated inflow rates, drawdown of the groundwater table and upconing of deep saline water for different levels of grouting efficiency during the excavation and operational phases of a final repository at Forsmark. The inflow calculations are accompanied by a sensitivity study, which among other matters handles the impact of parameter heterogeneity, different deposition hole rejection criteria, and the SFR facility (the repository for short-lived radioactive waste located approximately 1 km to the north of the investigated candidate area for a final repository at Forsmark). The report also presents tentative modelling results for the duration of the saturation phase, which starts once the used parts of the repository are being backfilled

  3. Groundwater flow modelling of the excavation and operational phases - Forsmark

    Energy Technology Data Exchange (ETDEWEB)

    Svensson, Urban (Computer-aided Fluid Engineering AB, Lyckeby (Sweden)); Follin, Sven (SF GeoLogic AB, Taeby (Sweden))

    2010-07-15

    As a part of the license application for a final repository for spent nuclear fuel at Forsmark, the Swedish Nuclear Fuel and Waste Management Company (SKB) has undertaken a series of groundwater flow modelling studies. These represent time periods with different climate conditions and the simulations carried out contribute to the overall evaluation of the repository design and long-term radiological safety. The modelling study reported here presents calculated inflow rates, drawdown of the groundwater table and upconing of deep saline water for different levels of grouting efficiency during the excavation and operational phases of a final repository at Forsmark. The inflow calculations are accompanied by a sensitivity study, which among other matters handles the impact of parameter heterogeneity, different deposition hole rejection criteria, and the SFR facility (the repository for short-lived radioactive waste located approximately 1 km to the north of the investigated candidate area for a final repository at Forsmark). The report also presents tentative modelling results for the duration of the saturation phase, which starts once the used parts of the repository are being backfilled.

  4. Making Risk Models Operational for Situational Awareness and Decision Support

    Energy Technology Data Exchange (ETDEWEB)

    Paulson, Patrick R.; Coles, Garill A.; Shoemaker, Steven V.

    2012-06-12

    Modernization of nuclear power operations control systems, in particular the move to digital control systems, creates an opportunity to modernize existing legacy infrastructure and extend plant life. We describe here decision support tools that allow the assessment of different facets of risk and support the optimization of available resources to reduce risk as plants are upgraded and maintained. This methodology could become an integrated part of the design review process and a part of the operations management systems. The methodology can be applied to the design of new reactors such as small nuclear reactors (SMR), and be helpful in assessing the risks of different configurations of the reactors. Our tool provides a low-cost evaluation of alternative configurations and an expanded safety analysis by considering scenarios early in the implementation cycle, where cost impacts can be minimized. The effects of failures can be modeled and thoroughly vetted to understand their potential impact on risk. The process and tools presented here allow for an integrated assessment of risk by supporting traditional defense in depth approaches while taking into consideration the insertion of new digital instrument and control systems.

  5. EnergyPlus Run Time Analysis

    Energy Technology Data Exchange (ETDEWEB)

    Hong, Tianzhen; Buhl, Fred; Haves, Philip

    2008-09-20

    EnergyPlus is a new generation building performance simulation program offering many new modeling capabilities and more accurate performance calculations integrating building components in sub-hourly time steps. However, EnergyPlus runs much slower than the current generation simulation programs. This has become a major barrier to its widespread adoption by the industry. This paper analyzed EnergyPlus run time from comprehensive perspectives to identify key issues and challenges of speeding up EnergyPlus: studying the historical trends of EnergyPlus run time based on the advancement of computers and code improvements to EnergyPlus, comparing EnergyPlus with DOE-2 to understand and quantify the run time differences, identifying key simulation settings and model features that have significant impacts on run time, and performing code profiling to identify which EnergyPlus subroutines consume the most run time. This paper provides recommendations to improve EnergyPlus run time from the modeler's perspective and on adequate computing platforms. Suggestions of software code and architecture changes to improve EnergyPlus run time based on the code profiling results are also discussed.

  6. The Second Student-Run Homeless Shelter

    Science.gov (United States)

    Seider, Scott C.

    2012-01-01

    From 1983-2011, the Harvard Square Homeless Shelter (HSHS) in Cambridge, Massachusetts, was the only student-run homeless shelter in the United States. However, college students at Villanova, Temple, Drexel, the University of Pennsylvania, and Swarthmore drew upon the HSHS model to open their own student-run homeless shelter in Philadelphia,…

  7. Teaching Bank Runs with Classroom Experiments

    Science.gov (United States)

    Balkenborg, Dieter; Kaplan, Todd; Miller, Timothy

    2011-01-01

    Once relegated to cinema or history lectures, bank runs have become a modern phenomenon that captures the interest of students. In this article, the authors explain a simple classroom experiment based on the Diamond-Dybvig model (1983) to demonstrate how a bank run--a seemingly irrational event--can occur rationally. They then present possible…
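
    A minimal numerical sketch of the Diamond-Dybvig logic such an experiment demonstrates, under stylized assumptions (illustrative parameter values, not the authors' classroom design): the payoff to waiting depends on how many other depositors withdraw early, which is what makes both the no-run and the run outcomes individually rational.

```python
def late_withdrawer_payoff(run_fraction, deposits=100.0, r1=1.1, R=1.5):
    """Payoff per unit deposited to a patient depositor who waits, as a function
    of the fraction of depositors withdrawing early. r1 is the early-withdrawal
    payment per unit deposited; R is the long-run return on non-liquidated funds."""
    paid_out_early = run_fraction * deposits * r1
    remaining = max(deposits - paid_out_early, 0.0)
    patient_share = (1.0 - run_fraction) * deposits
    if patient_share == 0:
        return 0.0
    return remaining * R / patient_share

# If only the truly impatient 25% withdraw early, waiting pays more than r1:
print(round(late_withdrawer_payoff(0.25), 2))   # ~1.45 > 1.1 -> waiting is rational
# If 80% run, the bank is nearly drained and waiting pays less than withdrawing:
print(round(late_withdrawer_payoff(0.80), 2))   # ~0.90 < 1.1 -> running is rational
```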

  8. Performance evaluation and financial market runs

    NARCIS (Netherlands)

    Wagner, W.B.

    2013-01-01

    This paper develops a model in which performance evaluation causes runs by fund managers and results in asset fire sales. Performance evaluation nonetheless is efficient as it disciplines managers. Optimal performance evaluation combines absolute and relative components in order to make runs less

  9. Long Run Relationship Between Agricultural Production And ...

    African Journals Online (AJOL)

    The study sought to estimate the impact of agricultural production on long-run economic growth in Nigeria using the Vector Error Correction methodology. The results show that a long-run relationship exists between agricultural production and economic growth in Nigeria. Among the variables in the model, crop production ...
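
    A minimal sketch of the Vector Error Correction estimation step referred to above, using the statsmodels VECM implementation on synthetic data (the study's actual Nigerian series and specification are not reproduced here):

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.vector_ar.vecm import VECM

# Synthetic cointegrated pair standing in for log GDP and log agricultural output
rng = np.random.default_rng(0)
n = 200
common_trend = np.cumsum(rng.normal(size=n))
gdp = common_trend + rng.normal(scale=0.3, size=n)
agric = 0.8 * common_trend + rng.normal(scale=0.3, size=n)
data = pd.DataFrame({"log_gdp": gdp, "log_agric": agric})

# One cointegrating relation, one lagged difference, constant inside the relation
model = VECM(data, k_ar_diff=1, coint_rank=1, deterministic="ci")
res = model.fit()
print(res.alpha)  # adjustment speeds toward the long-run relationship
print(res.beta)   # cointegrating (long-run) vector
```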

  10. Thermal evolution of the Schwinger model with matrix product operators

    Energy Technology Data Exchange (ETDEWEB)

    Banuls, M.C.; Cirac, J.I. [Max-Planck-Institut fuer Quantenoptik, Garching (Germany); Cichy, K. [Frankfurt am Main Univ. (Germany). Inst. fuer Theoretische Physik; Poznan Univ. (Poland). Faculty of Physics; DESY Zeuthen (Germany). John von Neumann-Institut fuer Computing (NIC); Jansen, K.; Saito, H. [DESY Zeuthen (Germany). John von Neumann-Institut fuer Computing (NIC)

    2015-10-15

    We demonstrate the suitability of tensor network techniques for describing the thermal evolution of lattice gauge theories. As a benchmark case, we have studied the temperature dependence of the chiral condensate in the Schwinger model, using matrix product operators to approximate the thermal equilibrium states for finite system sizes with non-zero lattice spacings. We show how these techniques allow for reliable extrapolations in bond dimension, step width, system size and lattice spacing, and for a systematic estimation and control of all error sources involved in the calculation. The reached values of the lattice spacing are small enough to capture the most challenging region of high temperatures and the final results are consistent with the analytical prediction by Sachs and Wipf over a broad temperature range.

  11. An elite runner with cerebral palsy: cost of running determines ...

    African Journals Online (AJOL)

    Background: Running performance is widely understood in terms of the Joyner model (VO2max, %VO2max at ventilatory threshold (VT), and running economy, often measured as cost of running (CR) in ml.kg‑1.km‑1 of VO2). Objective: To test the Joyner model by evaluating a runner in whom one element of the Joyner model ...
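
    A minimal sketch of the Joyner-style prediction described above, using illustrative values rather than the subject's data: the sustainable speed is the VO2 that can be held (VO2max times the fraction at VT) divided by the cost of running.

```python
def sustainable_speed_kmh(vo2max, frac_at_vt, cr_ml_per_kg_km):
    """Predicted sustainable running speed (km/h) from the three Joyner-model elements.
    vo2max and the sustained VO2 are in ml.kg-1.min-1; CR is in ml.kg-1.km-1."""
    sustained_vo2 = vo2max * frac_at_vt            # ml/kg/min that can be held
    speed_km_per_min = sustained_vo2 / cr_ml_per_kg_km
    return speed_km_per_min * 60.0

# Illustrative elite-level values: VO2max 70, 85% at VT, CR 200 ml/kg/km
speed = sustainable_speed_kmh(70.0, 0.85, 200.0)
print(round(speed, 1))                  # ~17.9 km/h
print(round(42.195 / speed, 2), "h")    # ~2.36 h marathon, roughly 2:22
```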

  12. Better in the long run

    CERN Multimedia

    CERN Bulletin

    2010-01-01

    Last week, the Chamonix workshop once again proved its worth as a place where all the stakeholders in the LHC can come together, take difficult decisions and reach a consensus on important issues for the future of particle physics. The most important decision we reached last week is to run the LHC for 18 to 24 months at a collision energy of 7 TeV (3.5 TeV per beam). After that, we’ll go into a long shutdown in which we’ll do all the necessary work to allow us to reach the LHC’s design collision energy of 14 TeV for the next run. This means that when beams go back into the LHC later this month, we’ll be entering the longest phase of accelerator operation in CERN’s history, scheduled to take us into summer or autumn 2011. What led us to this conclusion? Firstly, the LHC is unlike any previous CERN machine. Because it is a cryogenic facility, each run is accompanied by lengthy cool-down and warm-up phases. For that reason, CERN’s traditional &...

  13. Modelling error distribution in the ground reaction force during an induced-acceleration analysis of running in rear-foot strikers.

    Science.gov (United States)

    Koike, Sekiya; Nakaya, Seigo; Mori, Hiroto; Ishikawa, Tatsuya; Willmott, Alexander P

    2017-06-22

    The objective of this study was to develop and evaluate a methodology for quantifying the contributions of modelling error terms, as well as individual joint torque, gravitational force and motion-dependent terms, to the generation of ground reaction force (GRF), whose true value can be measured with high accuracy using a force platform. Dynamic contributions to the GRF were derived from the combination of (1) the equations of motion for the individual segments, (2) the equations for constraint conditions arising from the connection of adjacent segments at joints, and (3) the equations for anatomical constraint axes at certain joints. The contribution of the error term was divided into four components caused by fluctuation of segment lengths, geometric variation in the constraint joint axes, and residual joint force and moment errors. The proposed methodology was applied to the running motion of thirteen rear-foot strikers at a constant speed of 3.3 m/s. Modelling errors arose primarily from fluctuations in support leg segment lengths and rapid movement of the virtual joint between the foot and ground during the first 20% of stance phase. The magnitudes of these error contributions to the vertical and anterior/posterior components of the GRF are presented alongside the non-error contributions, of which the joint torque term was the largest.

  14. Running Boot Camp

    CERN Document Server

    Toporek, Chuck

    2008-01-01

    When Steve Jobs jumped on stage at Macworld San Francisco 2006 and announced the new Intel-based Macs, the question wasn't if, but when someone would figure out a hack to get Windows XP running on these new "Mactels." Enter Boot Camp, a new system utility that helps you partition and install Windows XP on your Intel Mac. Boot Camp does all the heavy lifting for you. You won't need to open the Terminal and hack on system files or wave a chicken bone over your iMac to get XP running. This free program makes it easy for anyone to turn their Mac into a dual-boot Windows/OS X machine. Running Bo

  15. Evaluation of Physiologically-Based Artificial Neural Network Models to Detect Operator Workload in Remotely Piloted Aircraft Operations

    Science.gov (United States)

    2016-07-13

    AFRL-RH-WP-TR-2016-0075. Interim Report, 1 August 2015 – 8 July 2016. One proposal to accomplish this is to allow operators to control multiple aircraft simultaneously (Rose, Arnold, & Howse, 2013). However, piloting...

  16. Combining operational models and data into a dynamic vessel risk assessment tool for coastal regions

    Science.gov (United States)

    Fernandes, R.; Braunschweig, F.; Lourenço, F.; Neves, R.

    2016-02-01

    The technological evolution in terms of computational capacity, data acquisition systems, numerical modelling and operational oceanography is providing opportunities for designing and building holistic approaches and complex tools for newer and more efficient management (planning, prevention and response) of coastal water pollution risk events. A combined methodology to dynamically estimate time and space variable individual vessel accident risk levels and shoreline contamination risk from ships has been developed, integrating numerical metocean forecasts and oil spill simulations with automatic identification system (AIS) vessel tracking. The risk rating combines the likelihood of an oil spill occurring from a vessel navigating in a study area - the Portuguese continental shelf - with the assessed consequences to the shoreline. The spill likelihood is based on dynamic marine weather conditions and statistical information from previous accidents. The shoreline consequences reflect the virtual spilled oil amount reaching shoreline and its environmental and socio-economic vulnerabilities. The oil reaching shoreline is quantified with an oil spill fate and behaviour model running multiple virtual spills from vessels over time or, as an alternative, with a correction factor based on vessel distance from the coast. Shoreline risks can be computed in real time or from previously obtained data. Results show the ability of the proposed methodology to estimate risk in a way that is properly sensitive to dynamic metocean conditions and to oil transport behaviour. The integration of meteo-oceanic + oil spill models with coastal vulnerability and AIS data in the quantification of risk enhances maritime situational awareness and the decision support model, providing a more realistic approach in the assessment of shoreline impacts. The risk assessment from historical data can help find typical risk patterns ("hot spots") or develop sensitivity analyses for specific conditions, whereas real
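
    A minimal sketch of the likelihood-times-consequence rating described above (the function arguments, scaling and weights are illustrative assumptions, not the operational tool's actual scheme):

```python
def vessel_shoreline_risk(spill_likelihood: float,
                          oil_reaching_shore_t: float,
                          shoreline_vulnerability: float,
                          max_plausible_spill_t: float = 1000.0) -> float:
    """Dimensionless 0-1 risk rating: likelihood of a spill from the vessel
    times the assessed shoreline consequence (amount reaching shore scaled
    by environmental/socio-economic vulnerability)."""
    consequence = min(oil_reaching_shore_t / max_plausible_spill_t, 1.0) * shoreline_vulnerability
    return spill_likelihood * consequence

# A tanker in rough weather; simulated spill of 300 t reaching a highly vulnerable coast
print(round(vessel_shoreline_risk(0.02, 300.0, 0.9), 4))  # 0.0054
```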

  17. The Effect of Initial Margin on Long-run and Short-run Volatilities in Japan

    Directory of Open Access Journals (Sweden)

    Sangbae Kim

    2013-09-01

    Full Text Available This paper examines the effect of initial margin requirements on long-run and short-run volatilities in the Japanese stock market using the Component GARCH model. Our empirical results show that when we do not divide the margin requirement into positive and negative changes, increasing the margin requirement is effective for reducing long-run volatility, but not short-run volatility. However, separating the positive and negative changes in margin requirements reveals that negative changes decrease long-run volatility, while higher margin requirements increase short-run volatility in the Japanese stock market. This suggests that if the Japanese financial authorities increase the margin level intending to reduce volatility, short-run volatility would, unexpectedly, be even higher.
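
    For reference, one common parameterisation of the Component GARCH split into long-run (permanent) and short-run (transitory) volatility components, in the spirit of Engle and Lee; the paper's exact specification, including the margin-requirement terms, may differ:

```latex
\sigma_t^{2} = q_t + \alpha\left(\varepsilon_{t-1}^{2} - q_{t-1}\right) + \beta\left(\sigma_{t-1}^{2} - q_{t-1}\right)
q_t = \omega + \rho\left(q_{t-1} - \omega\right) + \phi\left(\varepsilon_{t-1}^{2} - \sigma_{t-1}^{2}\right)
```

    Here q_t is the slowly evolving long-run (permanent) variance component, and sigma_t^2 - q_t is the short-run (transitory) component.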

  18. Modeling of a dependence between human operators in advanced main control rooms

    International Nuclear Information System (INIS)

    Lee, Seung Jun; Kim, Jaewhan; Jang, Seung-Cheol; Shin, Yeong Cheol

    2009-01-01

    For the human reliability analysis of main control room (MCR) operations, not only parameters such as the given situation and capability of the operators but also the dependence between the actions of the operators should be considered because MCR operations are team operations. The dependence between operators might be more prevalent in an advanced MCR in which operators share the same information using a computerized monitoring system or a computerized procedure system. Therefore, this work focused on the computerized operation environment of advanced MCRs and proposed a model to consider the dependence representing the recovery possibility of an operator error by another operator. The proposed model estimates human error probability values by considering adjustment values for a situation and dependence values for operators during the same operation using independent event trees. This work can be used to quantitatively calculate a more reliable operation failure probability for an advanced MCR. (author)

  19. Remote Sensing and Modeling for Improving Operational Aquatic Plant Management

    Science.gov (United States)

    Bubenheim, Dave

    2016-01-01

    The California Sacramento-San Joaquin River Delta is the hub for California's water supply, conveying water from Northern to Southern California agriculture and communities while supporting important ecosystem services, agriculture, and communities in the Delta. Changes in climate, long-term drought, water quality changes, and expansion of invasive aquatic plants threaten ecosystems, impede ecosystem restoration, and are economically, environmentally, and sociologically detrimental to the San Francisco Bay/California Delta complex. NASA Ames Research Center and the USDA-ARS partnered with the State of California and local governments to develop science-based, adaptive-management strategies for the Sacramento-San Joaquin Delta. The project combines science, operations, and economics related to integrated management scenarios for aquatic weeds to help land and waterway managers make science-informed decisions regarding management and outcomes. The team provides a comprehensive understanding of agricultural and urban land use in the Delta and the major watersheds (San Joaquin/Sacramento) supplying the Delta, and their interaction with drought and climate impacts on the environment, water quality, and weed growth. The team recommends conservation and modified land-use practices and aids local Delta stakeholders in developing management strategies. New remote sensing tools have been developed to enhance the ability to assess conditions, inform decision support tools, and monitor management practices. Science gaps in understanding how native and invasive plants respond to altered environmental conditions are being filled and provide critical biological response parameters for Delta-SWAT simulation modeling. Operational agencies such as the California Department of Boating and Waterways provide testing and act as initial adopters of decision support tools. Methods developed by the project can become routine land and water management tools in complex river delta systems.

  20. Modeling Operating Modes for the Monju Nuclear Power Plant

    DEFF Research Database (Denmark)

    Lind, Morten; Yoshikawa, Hidekazu; Jørgensen, Sten Bay

    2012-01-01

    The specification of supervision and control tasks in complex processes requires definition of plant states on various levels of abstraction related to plant operation in start-up, normal operation and shut-down. Modes of plant operation are often specified in relation to a plant decomposition in...

  1. Split-phase motor running as capacitor starts motor and as capacitor run motor

    Directory of Open Access Journals (Sweden)

    Yahaya Asizehi ENESI

    2016-07-01

    Full Text Available In this paper, the input parameters of a single-phase split-phase induction motor are taken to investigate and study the output performance characteristics of capacitor start and capacitor run induction motors. The values of these input parameters are used in the design characteristics of the capacitor run and capacitor start motor, with each motor connected to a rated or standard capacitor in series with the auxiliary winding or starting winding, respectively, for the normal operational condition. The magnitudes of the capacitors that will develop maximum torque in the capacitor start motor and the capacitor run motor are investigated and determined by simulation. Each of these capacitors is connected to the auxiliary winding of the split-phase motor, thereby transforming it into a capacitor start or capacitor run motor. The starting current and starting torque of the split-phase motor (SPM), capacitor run motor (CRM) and capacitor start motor (CSM) are compared for their suitability in terms of operational performance and applications.

  2. Operational model updating of spinning finite element models for HAWT blades

    Science.gov (United States)

    Velazquez, Antonio; Swartz, R. Andrew; Loh, Kenneth J.; Zhao, Yingjun; La Saponara, Valeria; Kamisky, Robert J.; van Dam, Cornelis P.

    2014-04-01

    Structural health monitoring (SHM) relies on the collection and interrogation of operational data from the monitored structure. To make these data meaningful, a means of understanding how damage-sensitive data features relate to the physical condition of the structure is required. Model-driven SHM applications achieve this goal through model updating. This study proposes a novel approach for updating aero-elastic turbine blade vibration models for operational horizontal-axis wind turbines (HAWTs). The proposed approach updates estimates of modal properties for spinning HAWT blades intended for use in SHM and load estimation of these structures. Spinning structures present additional challenges for model updating due to spinning effects, dependence of modal properties on rotational velocity, and gyroscopic effects that lead to complex mode shapes. A cyclo-stationary stochastic-based eigensystem realization algorithm (ERA) is applied to operational turbine data to identify data-driven modal properties, including frequencies and mode shapes. Model-driven modal properties are derived through modal condensation of spinning finite element models with variable physical parameters. Complex modes are converted into equivalent real modes through a reduction transformation. Model updating is achieved through an adaptive simulated annealing search, guided by the Modal Assurance Criterion (MAC) with complex-conjugate modes, to find the physical parameters that best match the experimentally derived data.
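
    As a small illustration of the matching step, the sketch below (not the authors' code) computes a MAC matrix between identified and model-predicted mode shapes; the conjugate transpose handles complex-valued shapes, and the mode-shape matrices are randomly generated stand-ins.

    # A minimal sketch of the Modal Assurance Criterion used to pair modes
    # identified from operational data with model-predicted modes.
    import numpy as np

    def mac(phi, psi):
        """MAC between two (possibly complex) mode-shape vectors."""
        num = abs(np.vdot(phi, psi)) ** 2
        den = np.real(np.vdot(phi, phi)) * np.real(np.vdot(psi, psi))
        return num / den

    # Assumed example shapes: columns are modes identified from operational data
    # (e.g. via ERA) and modes condensed from a spinning finite element model.
    rng = np.random.default_rng(0)
    identified = rng.standard_normal((12, 3)) + 1j * rng.standard_normal((12, 3))
    predicted = identified @ np.diag([1.0, 0.9, 1.1]) + 0.05 * rng.standard_normal((12, 3))

    mac_matrix = np.array([[mac(identified[:, i], predicted[:, j])
                            for j in range(predicted.shape[1])]
                           for i in range(identified.shape[1])])
    print(np.round(mac_matrix, 3))  # values near 1 on the diagonal indicate a good match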

  3. Electroweak processes at Run 2

    CERN Document Server

    Spalla, Margherita; Sestini, Lorenzo

    2016-01-01

    We present a summary of studies of the electroweak sector of the Standard Model at the LHC after the first year of Run 2 data taking, focusing on results achievable with the analysis of the full 2015 and 2016 data. We discuss measurements of W and Z boson production, with particular attention to the precision determination of basic Standard Model parameters, and the study of multi-boson interactions through the analysis of boson-boson final states. This work is the result of a collaboration between scientists from the ATLAS, CMS and LHCb experiments.

  4. An Executable Architecture Tool for the Modeling and Simulation of Operational Process Models

    Science.gov (United States)

    2015-03-16

    ... such as models based on neural networks [14]-[16] or genetic algorithms [17] to represent activities in the process flow. Furthermore, since the model ... particularly relevant to experiments and exercises. The operational views provide a logical description of the activities and information exchanged

  5. Application of a procedure oriented crew model to modelling nuclear plant operation

    International Nuclear Information System (INIS)

    Baron, S.

    1986-01-01

    PROCRU (PROCEDURE-ORIENTED CREW MODEL) is a model developed to analyze flight crew procedures in a commercial ILS approach-to-landing. The model builds on earlier, validated control-theoretic models for human estimation and control behavior, but incorporates features appropriate to analyzing supervisory control in multi-task environments. In this paper, the basic ideas underlying the PROCRU model, and the generalization of these ideas to provide a supervisory control model of wider applicability are discussed. The potential application of this supervisory control model to nuclear power plant operations is considered. The range of problems that can be addressed, the kinds of data that will be needed and the nature of the results that might be expected from such an application are indicated

  6. Artificial Systems and Models for Risk Covering Operations

    Directory of Open Access Journals (Sweden)

    Laurenţiu Mihai Treapăt

    2017-04-01

    Full Text Available This paper focuses on the role of artificial intelligence based systems in risk-covering operations, offering theoretical explanations built on real-life examples and applications, together with a broader discussion of the subject. The paper reviews volatility estimation models and the correlations between various time series, and presents the RiskMetrics methodology through a case study. The advantage of VaR estimation lies in its ability to express quantitatively, as a single number, the risk level of a portfolio at a given moment in time, as well as the risk of an open position (in securities, FX, commodities or granted loans) held by an economic agent or an individual; hence its role in more efficient capital allocation, in delimiting the assumed risk, and as a performance measurement instrument. In the case study that completes this work, we aim to show how considerable losses, and even bankruptcies, can be prevented if VaR is known and applied properly. For this reason, universities in Romania should expand their curricula with the study of the VaR model as an artificial intelligence tool. The simplicity of the presented case study is arguably the strongest argument of this work, because it can also be understood by readers who are not experienced in the risk management field.
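
    As a concrete illustration of the RiskMetrics-style calculation discussed above, the sketch below estimates a one-day 99% parametric VaR from an EWMA volatility with the standard decay factor of 0.94. The return series and portfolio value are simulated placeholders, not figures from the case study.

    # A minimal sketch, assuming daily returns, of a RiskMetrics-style VaR:
    # EWMA volatility (lambda = 0.94) and a parametric normal quantile.
    import numpy as np

    rng = np.random.default_rng(1)
    returns = rng.normal(0.0, 0.012, size=500)   # assumed daily return series

    lam = 0.94                                   # RiskMetrics decay factor
    var_t = returns[0] ** 2
    for r in returns[1:]:
        var_t = lam * var_t + (1 - lam) * r ** 2 # EWMA variance update
    sigma = np.sqrt(var_t)

    position_value = 1_000_000                   # assumed portfolio value
    z_99 = 2.326                                 # one-sided 99% normal quantile
    one_day_var = z_99 * sigma * position_value
    print(f"1-day 99% VaR: {one_day_var:,.0f} (same currency as the position)")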

  7. Analyzing the Effects of the Iranian Energy Subsidy Reform Plan on Short- Run Marginal Generation Cost of Electricity Using Extended Input-Output Price Model

    Directory of Open Access Journals (Sweden)

    Zohreh Salimian

    2012-01-01

    Full Text Available Subsidizing energy in Iran has imposed high costs on the country's economy, so revising energy prices on the basis of a subsidy reform plan is a vital remedy to boost the economy. While the direct consequence of cutting subsidies on electricity generation costs can be determined in a simple way, identifying indirect effects, which reflect higher costs for input factors such as labor, is a challenging problem. In this paper, variables such as compensation of employees and private consumption are endogenized in an extended Input-Output (I-O) price model to evaluate the direct and indirect effects of electricity and fuel price increases on economic subsectors. The main goal of the paper is to determine the short-run marginal generation cost of electricity using the I-O technique while taking the effects of the Iranian targeted subsidy plan into account. The marginal cost of electricity, under various energy price adjustment scenarios, is estimated for three conventional categories of thermal power plants. Our results show that raising the price of energy increases electricity production costs. Accordingly, production costs will be higher than 1000 Rials per kWh until 2014, as predicted at the beginning of the reform plan by electricity suppliers.
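
    The mechanics of a cost-push I-O price model of this kind can be sketched in a few lines. The example below uses an illustrative three-sector technical coefficient matrix and an assumed energy cost shock; it is not the paper's extended model (which additionally endogenizes wages and private consumption), only the standard Leontief price calculation that such a model builds on.

    # A minimal sketch of a cost-push input-output price model: sectoral price
    # changes implied by an exogenous increase in energy costs.
    import numpy as np

    # Technical coefficient matrix A: A[i, j] = input from sector i per unit of
    # output of sector j (sectors: energy, industry, services) -- assumed values.
    A = np.array([[0.05, 0.15, 0.05],
                  [0.10, 0.25, 0.10],
                  [0.05, 0.10, 0.15]])

    # Exogenous increase in primary-input cost per unit of output, expressed as
    # a fraction of the base price -- assumed shock hitting the energy sector.
    dv = np.array([0.80, 0.00, 0.00])

    # Cost-push price model: dp' = dv' (I - A)^(-1), i.e. solve (I - A)^T dp = dv.
    dp = np.linalg.solve((np.eye(3) - A).T, dv)
    for name, change in zip(["energy", "industry", "services"], dp):
        print(f"{name:9s}: price rises by {100 * change:.1f}%")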

  8. Operative and diagnostic hysteroscopy: A novel learning model combining new animal models and virtual reality simulation.

    Science.gov (United States)

    Bassil, Alfred; Rubod, Chrystèle; Borghesi, Yves; Kerbage, Yohan; Schreiber, Elie Servan; Azaïs, Henri; Garabedian, Charles

    2017-04-01

    Hysteroscopy is one of the most common gynaecological procedures. Training for diagnostic and operative hysteroscopy can be achieved through numerous previously described models, such as animal models or virtual reality simulation. We present a novel combined model associating virtual reality with bovine uteruses and bladders. Final-year residents in obstetrics and gynaecology attended a full-day workshop. The workshop was divided into theoretical courses given by senior surgeons and hands-on training in operative hysteroscopy and virtual reality Essure® procedures using the EssureSim™ and Pelvicsim™ simulators with multiple scenarios. Theoretical and operative knowledge was evaluated before and after the workshop, and General Point Averages (GPAs) were calculated and compared using a Student's t-test. GPAs were significantly higher after the workshop was completed. The biggest difference was observed in operative knowledge (0.28 GPA before the workshop versus 0.55 after the workshop). The combination of animal models and virtual reality simulation is an efficient training model not described before. Copyright © 2017 Elsevier B.V. All rights reserved.

  9. Model Structure Analysis of Model-based Operation of Petroleum Reservoirs

    NARCIS (Netherlands)

    Van Doren, J.F.M.

    2010-01-01

    The demand for petroleum is expected to increase in the coming decades, while the production of petroleum from subsurface reservoirs is becoming increasingly complex. To meet the demand petroleum reservoirs should be operated more efficiently. Physics-based petroleum reservoir models that describe

  10. Relaxed Operational Semantics of Concurrent Programming Languages

    Directory of Open Access Journals (Sweden)

    Gustavo Petri

    2012-08-01

    Full Text Available We propose a novel, operational framework to formally describe the semantics of concurrent programs running within the context of a relaxed memory model. Our framework features a "temporary store" where the memory operations issued by the threads are recorded, in program order. A memory model then specifies the conditions under which a pending operation from this sequence is allowed to be globally performed, possibly out of order. The memory model also involves a "write grain," accounting for architectures where a thread may read a write that is not yet globally visible. Our formal model is supported by a software simulator, allowing us to run litmus tests in our semantics.
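
    To make the "temporary store" idea concrete, the sketch below (a toy illustration, not the authors' formal framework or simulator) buffers each thread's writes in program order, lets a thread read its own pending write, and flushes writes to shared memory later; the chosen interleaving yields the store-buffering outcome r0 = r1 = 0 that sequential consistency forbids.

    # A toy store-buffering illustration: per-thread pending writes are recorded
    # in program order and only later performed globally, so loads can read
    # stale values from shared memory.
    from collections import deque

    memory = {"x": 0, "y": 0}
    buffers = {0: deque(), 1: deque()}   # per-thread temporary stores

    def write(tid, var, val):
        buffers[tid].append((var, val))  # recorded, not yet globally visible

    def read(tid, var):
        # a thread sees its own most recent buffered write first, else memory
        for v, val in reversed(buffers[tid]):
            if v == var:
                return val
        return memory[var]

    def flush_one(tid):
        if buffers[tid]:
            var, val = buffers[tid].popleft()   # globally perform oldest write
            memory[var] = val

    # Thread 0: x = 1; r0 = y        Thread 1: y = 1; r1 = x
    write(0, "x", 1)
    write(1, "y", 1)
    r0 = read(0, "y")   # both loads run before either buffer is flushed
    r1 = read(1, "x")
    flush_one(0)
    flush_one(1)
    print(r0, r1)       # prints "0 0" -- impossible under sequential consistency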

  11. Latest Community Coordinated Modeling Center (CCMC) services and innovative tools supporting the space weather research and operational communities.

    Science.gov (United States)

    Mendoza, A. M. M.; Rastaetter, L.; Kuznetsova, M. M.; Mays, M. L.; Chulaki, A.; Shim, J. S.; MacNeice, P. J.; Taktakishvili, A.; Collado-Vega, Y. M.; Weigand, C.; Zheng, Y.; Mullinix, R.; Patel, K.; Pembroke, A. D.; Pulkkinen, A. A.; Boblitt, J. M.; Bakshi, S. S.; Tsui, T.

    2017-12-01

    The Community Coordinated Modeling Center (CCMC), with the fundamental goal of aiding the transition of modern space science models into space weather forecasting while supporting space science research, has served as an integral hub for over 15 years, providing invaluable resources to both the space weather scientific and operational communities. CCMC has developed and provided innovative web-based points of access for the scientific community, including the Runs-On-Request system, which provides unprecedented global access to the largest collection of state-of-the-art solar and space physics models; the Integrated Space Weather Analysis (iSWA) system, a powerful dissemination system for space weather information; advanced online visualization and analysis tools for more accurate interpretation of model results; standard data formats for simulation data downloads; and mobile apps for viewing space weather data anywhere. In addition to supporting research and performing model evaluations, CCMC also supports space science education by hosting summer students through local universities. In this poster, we showcase CCMC's latest innovative tools and services, including those that have changed the way we do research and improved our operational space weather capabilities. CCMC's free tools and resources are all publicly available online (http://ccmc.gsfc.nasa.gov).

  12. Modeling and Control for Islanding Operation of Active Distribution Systems

    DEFF Research Database (Denmark)

    Cha, Seung-Tae; Wu, Qiuwei; Saleem, Arshad

    2011-01-01

    Along with the increasing penetration of distributed generation (DG) in distribution systems, there are more resources for system operators to improve the operation and control of the whole system and enhance the reliability of electricity supply to customers. The distribution systems with DG...... are able to operate in islanding operation mode intentionally or unintentionally. In order to smooth the transition from grid connected operation to islanding operation for distribution systems with DG, a multi-agent based controller is proposed to utilize different resources in the distribution systems...... to stabilize the frequency. Different agents are defined to represent different resources in the distribution systems. A test platform with a real time digital simulator (RTDS), an OPen Connectivity (OPC) protocol server and the multi-agent based intelligent controller is established to test the proposed multi...
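
    The abstract is truncated before the controller details, so the sketch below is only a generic illustration of how agents representing DG units might share a frequency-stabilising response after islanding, each applying a simple droop rule; the droop gains, headrooms and frequency dip are assumed values and do not come from the paper.

    # A generic illustration (not the paper's multi-agent controller): each agent
    # represents one DG unit and raises output via a droop rule when frequency sags.
    f_nominal = 50.0          # Hz
    f_measured = 49.6         # Hz, frequency dip right after islanding (assumed)
    delta_f = f_measured - f_nominal

    agents = [
        {"name": "CHP unit",   "droop_mw_per_hz": 2.0, "headroom_mw": 0.5},
        {"name": "Battery",    "droop_mw_per_hz": 4.0, "headroom_mw": 1.2},
        {"name": "Diesel gen", "droop_mw_per_hz": 1.5, "headroom_mw": 0.8},
    ]

    for agent in agents:
        # raise output when frequency is low, limited by the unit's headroom
        response = min(-agent["droop_mw_per_hz"] * delta_f, agent["headroom_mw"])
        print(f'{agent["name"]:10s} increases output by {response:.2f} MW')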

  13. Operations model for utilities using wind-generator arrays

    Science.gov (United States)

    Schlueter, R. A.; Park, G. L.; Dorsey, J.; Lotfalian, M.; Shayanfar, A.

    1981-05-01

    The effects that various combinations of wind regime, array configuration and penetrations, and system characteristics have on system variables such as area control error, frequency, interchange power and spinning reserve are discussed. The characteristics of the combinations causing system operating stress or operating problems are denoted and methods for estimating effects on a simplified and on a detailed simulation basis are reported. Methods for reducing operating problems are suggested and involve array configurations, penetration, unit commitment and dispatch changes, and wind generator controls.
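
    One of the system variables named above, area control error (ACE), can be illustrated with a short calculation. The sketch below uses the conventional tie-line-bias form of ACE with an assumed frequency bias, tie-line schedule and wind fluctuation trace; it is a generic example, not the study's simulation.

    # A generic sketch of area control error (ACE) under fluctuating wind-array
    # output. All numbers and the crude frequency coupling are assumed.
    import numpy as np

    B = -25.0                        # frequency bias, MW per 0.1 Hz (negative by convention; assumed)
    p_tie_sched = 100.0              # scheduled tie-line interchange, MW (assumed)

    rng = np.random.default_rng(2)
    wind_fluct = rng.normal(0.0, 8.0, size=10)          # MW deviations from forecast (assumed)
    freq_dev = 0.002 * wind_fluct                        # crude frequency coupling, Hz (assumed)
    p_tie_actual = p_tie_sched + wind_fluct              # surplus wind exported on the tie line

    ace = (p_tie_actual - p_tie_sched) - 10.0 * B * freq_dev
    print(np.round(ace, 1))          # MW; large excursions indicate operating stress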

  14. Optimizing Warehouse Logistics Operations Through Site Selection Models: Istanbul, Turkey

    National Research Council Canada - National Science Library

    Erdemir, Ugur

    2003-01-01

    .... Given the dynamic environment surrounding the military operations, logistic sustainability requirements, rapid information technology developments, and budget-constrained Turkish DoD acquisition...

  15. Wave Run-Up on Rubble Breakwaters

    DEFF Research Database (Denmark)

    Van de Walle, Bjorn; De Rouck, Julien; Troch, Peter

    2005-01-01

    Seven sets of data for wave run-up on a rubble mound breakwater were combined and re-analysed, with full-scale, large-scale and small-scale model test results being taken into account. The dimensionless wave run-up value Ru2%/Hm0 was considered, where Ru2% is the wave run-up height exceeded by 2% of the incoming waves.... Differences in wave run-up results between the various data sets could be explained by a difference in spectral width observed within these data sets. A multi-regression formula was fitted to all wave run-up data. The formula is valid for permeable rubble mound breakwaters covered with either grooved cubes...
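
    The multi-regression formula itself is not reproduced in this record, so the sketch below only shows how Ru2%/Hm0 is typically parameterised in an Iribarren-number based, EurOtop-type relation with a roughness factor; the wave height, period, slope and roughness value are assumed, and the coefficients are not those fitted in this study.

    # A hedged sketch of a generic EurOtop-type run-up relation -- NOT the
    # multi-regression formula from this study. Inputs are assumed values.
    import math

    Hm0 = 2.0        # spectral significant wave height (m), assumed
    Tm10 = 6.0       # spectral wave period T_{m-1,0} (s), assumed
    slope = 1 / 2    # structure slope tan(alpha), assumed
    gamma_f = 0.45   # roughness factor for a rough permeable armour layer, assumed

    L0 = 9.81 * Tm10 ** 2 / (2 * math.pi)          # deep-water wavelength
    xi = slope / math.sqrt(Hm0 / L0)               # surf-similarity (Iribarren) number

    # linear in xi for plunging waves, capped for surging waves
    ru2_over_hm0 = min(1.65 * gamma_f * xi, gamma_f * (4.0 - 1.5 / math.sqrt(xi)))
    print(f"xi = {xi:.2f},  Ru2%/Hm0 = {ru2_over_hm0:.2f},  Ru2% = {ru2_over_hm0 * Hm0:.2f} m")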

  16. Modeling Operating Modes for the Monju Nuclear Power Plant

    DEFF Research Database (Denmark)

    Lind, Morten; Yoshikawa, H.; Jørgensen, Sten Bay

    2012-01-01

    of the process plant, its function and its structural elements. The paper explains how the means-end concepts of MFM can be used to provide formalized definitions of plant operation modes. The paper will introduce the mode types defined by MFM and show how selected operation modes can be represented...

  17. Incorporating Worker-Specific Factors in Operations Management Models

    NARCIS (Netherlands)

    J.A. Larco Martinelli (Jose)

    2010-01-01

    To add value, manufacturing and service operations depend on workers to do the job. As a result, the performance of these operations is ultimately dependent on the performance of individual workers. Simultaneously, workers are major stakeholders of the firm. Workers spend a

  18. Simulation of nuclear plant operation into a stochastic energy production model

    International Nuclear Information System (INIS)

    Pacheco, R.L.

    1983-04-01

    A simulation model of nuclear plant operation is developed to fit into a stochastic energy production model. In order to improve the stochastic model and to reduce the computational time added by incorporating the nuclear plant operation model, a study of tail truncation of the unsupplied demand distribution function has been performed. (E.G.) [pt
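
    The tail-truncation idea can be illustrated in a few lines. The sketch below is a generic example, not the report's method: it discretises an assumed unsupplied-demand distribution, drops tail points whose probability falls below a small threshold, renormalises the remaining mass, and compares the expected value before and after.

    # A minimal, generic illustration of truncating the tail of a discretised
    # unsupplied-demand distribution to save computation. Values are assumed.
    import numpy as np

    demand_mw = np.arange(0, 1001, 50)                 # unsupplied demand levels (MW)
    prob = np.exp(-demand_mw / 120.0)                  # assumed decaying tail shape
    prob /= prob.sum()

    eps = 1e-3                                         # truncation threshold
    keep = prob >= eps
    truncated_demand = demand_mw[keep]
    truncated_prob = prob[keep] / prob[keep].sum()     # renormalise retained mass

    print(f"points kept: {keep.sum()} of {len(prob)}")
    print(f"expected unsupplied demand, full vs truncated: "
          f"{np.dot(demand_mw, prob):.1f} vs {np.dot(truncated_demand, truncated_prob):.1f} MW")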

  19. Understanding the T2 traffic in CMS during Run-1

    CERN Document Server

    T, Wildish

    2015-01-01

    In the run-up to Run-1, CMS was operating its facilities according to the MONARC model, where data transfers were strictly hierarchical in nature. Direct transfers between Tier-2 nodes were excluded, being perceived as operationally intensive and risky in an era where the network was expected to be a major source of errors. By the end of Run-1, wide-area networks were more capable and stable than originally anticipated. The original data-placement model was relaxed, and traffic was allowed between Tier-2 nodes. Tier-2 to Tier-2 traffic in 2012 already exceeded the amount of Tier-2 to Tier-1 traffic, so it clearly has the potential to become important in the future. Moreover, while Tier-2 to Tier-1 traffic is mostly upload of Monte Carlo data, Tier-2 to Tier-2 traffic represents data moved in direct response to requests from the physics analysis community. As such, problems or delays there are more likely to have a direct impact on the user community. Tier-2 to Tier-2 traffic may also traverse parts of the WAN ...

  20. "A model co-operative country": Irish-Finnish co-operative contacts at the turn of the twentieth century

    DEFF Research Database (Denmark)

    Hilson, Mary

    2017-01-01

    the Pellervo Society, to promote rural cooperation, in 1899. He noted that Ireland’s ‘tragic history’, its struggle for national self-determination and the introduction of co-operative dairies to tackle rural poverty, seemed to offer a useful example for Finnish reformers. This article explores the exchanges...... that even before the First World War it was Finland, not Ireland, that had begun to be regarded as ‘a model co-operative country’....