WorldWideScience

Sample records for connectivity parameterization approach

  1. Theoretical aspects of the internal element connectivity parameterization approach for topology optimization

    DEFF Research Database (Denmark)

    Yoon, Gil Ho; Kim, Y.Y.; Langelaar, M.

    2008-01-01

    The internal element connectivity parameterization (I-ECP) method is an alternative approach to overcome numerical instabilities associated with low-stiffness element states in non-linear problems. In I-ECP, elements are connected by zero-length links while their link stiffness values are varied. ... Therefore, it is important to interpolate link stiffness properly to obtain stably converging results. The main objective of this work is two-fold: (1) the investigation of the relationship between the link stiffness and the stiffness of a domain-discretizing patch by using a discrete model and a homogenized ...

  2. Cloud parameterizations, cloud physics, and their connections: An overview

    International Nuclear Information System (INIS)

    Liu, Y.; Daum, P.H.; Chai, S.K.; Liu, F.

    2002-01-01

    This paper consists of three parts. The first part is concerned with the parameterization of cloud microphysics in climate models. We demonstrate the crucial importance of spectral dispersion of the cloud droplet size distribution in determining radiative properties of clouds (e.g., effective radius), and underline the necessity of specifying spectral dispersion in the parameterization of cloud microphysics. It is argued that the inclusion of spectral dispersion makes the issue of cloud parameterization essentially equivalent to that of the droplet size distribution function, bringing cloud parameterization to the forefront of cloud physics. The second part is concerned with theoretical investigations into the spectral shape of droplet size distributions in cloud physics. After briefly reviewing the mainstream theories (including entrainment and mixing theories, and stochastic theories), we discuss their deficiencies and the need for a paradigm shift from reductionist approaches to systems approaches. A systems theory that has recently been formulated by utilizing ideas from statistical physics and information theory is discussed, along with the major results derived from it. It is shown that the systems formalism not only easily explains many puzzles that have been frustrating the mainstream theories, but also reveals such new phenomena as scale-dependence of cloud droplet size distributions. The third part is concerned with the potential applications of the systems theory to the specification of spectral dispersion in terms of predictable variables and scale-dependence under different fluctuating environments.

  3. Parameterization of Fuel-Optimal Synchronous Approach Trajectories to Tumbling Targets

    Directory of Open Access Journals (Sweden)

    David Charles Sternberg

    2018-04-01

    Full Text Available Docking with potentially tumbling Targets is a common element of many mission architectures, including on-orbit servicing and active debris removal. This paper studies synchronized docking trajectories as a way to ensure the Chaser satellite remains on the docking axis of the tumbling Target, thereby reducing collision risks and enabling persistent onboard sensing of the docking location. Chaser satellites have limited computational power available to them and the time allowed for the determination of a fuel optimal trajectory may be limited. Consequently, parameterized trajectories that approximate the fuel optimal trajectory while following synchronous approaches may be used to provide a computationally efficient means of determining near optimal trajectories to a tumbling Target. This paper presents a method of balancing the computation cost with the added fuel expenditure required for parameterization, including the selection of a parameterization scheme, the number of parameters in the parameterization, and a means of incorporating the dynamics of a tumbling satellite into the parameterization process. Comparisons of the parameterized trajectories are made with the fuel optimal trajectory, which is computed through the numerical propagation of Euler’s equations. Additionally, various tumble types are considered to demonstrate the efficacy of the presented computation scheme. With this parameterized trajectory determination method, Chaser satellites may perform terminal approach and docking maneuvers with both fuel and computational efficiency.

  4. A novel approach for introducing cloud spatial structure into cloud radiative transfer parameterizations

    International Nuclear Information System (INIS)

    Huang, Dong; Liu, Yangang

    2014-01-01

    Subgrid-scale variability is one of the main reasons why parameterizations are needed in large-scale models. Although some parameterizations started to address the issue of subgrid variability by introducing a subgrid probability distribution function for relevant quantities, the spatial structure has been typically ignored and thus the subgrid-scale interactions cannot be accounted for physically. Here we present a new statistical-physics-like approach whereby the spatial autocorrelation function can be used to physically capture the net effects of subgrid cloud interaction with radiation. The new approach is able to faithfully reproduce the Monte Carlo 3D simulation results with several orders less computational cost, allowing for more realistic representation of cloud radiation interactions in large-scale models. (letter)
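The central object in the abstract above, a spatial autocorrelation function summarizing subgrid cloud structure, can be sketched generically. This is an illustrative toy (the random field and the FFT-based estimator are my own choices, not the authors' method):

```python
import numpy as np

def spatial_autocorrelation(field):
    """Normalized spatial autocorrelation of a 2D field (periodic lags, via FFT)."""
    f = field - field.mean()
    F = np.fft.fft2(f)
    acf = np.fft.ifft2(F * np.conj(F)).real   # circular autocovariance
    return acf / acf[0, 0]                    # unit correlation at zero lag

# Toy stand-in for a cloud field: random values on a 32 x 32 grid.
rng = np.random.default_rng(1)
field = rng.standard_normal((32, 32))
acf = spatial_autocorrelation(field)          # acf[i, j] = correlation at lag (i, j)
```

A smooth, spatially organized field would show slowly decaying correlations away from zero lag, which is the structural information a purely pointwise subgrid PDF discards.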

  5. A novel approach for introducing cloud spatial structure into cloud radiative transfer parameterizations

    Science.gov (United States)

    Huang, Dong; Liu, Yangang

    2014-12-01

    Subgrid-scale variability is one of the main reasons why parameterizations are needed in large-scale models. Although some parameterizations started to address the issue of subgrid variability by introducing a subgrid probability distribution function for relevant quantities, the spatial structure has been typically ignored and thus the subgrid-scale interactions cannot be accounted for physically. Here we present a new statistical-physics-like approach whereby the spatial autocorrelation function can be used to physically capture the net effects of subgrid cloud interaction with radiation. The new approach is able to faithfully reproduce the Monte Carlo 3D simulation results with several orders less computational cost, allowing for more realistic representation of cloud radiation interactions in large-scale models.

  6. Robust H∞ Control for Singular Time-Delay Systems via Parameterized Lyapunov Functional Approach

    Directory of Open Access Journals (Sweden)

    Li-li Liu

    2014-01-01

    Full Text Available A new version of the delay-dependent bounded real lemma for singular systems with state delay is established by a parameterized Lyapunov-Krasovskii functional approach. In order to avoid generating nonconvex problem formulations in control design, a strategy that introduces slack matrices and decouples the system matrices from the Lyapunov-Krasovskii parameter matrices is used. Examples are provided to demonstrate that the results in this paper are less conservative than the existing corresponding ones in the literature.

  7. Approaches to highly parameterized inversion: A guide to using PEST for groundwater-model calibration

    Science.gov (United States)

    Doherty, John E.; Hunt, Randall J.

    2010-01-01

    Highly parameterized groundwater models can create calibration difficulties. Regularized inversion, the combined use of large numbers of parameters with mathematical approaches for stable parameter estimation, is becoming a common approach to address these difficulties and enhance the transfer of information contained in field measurements to the parameters used to model that system. Though commonly used in other industries, regularized inversion is imperfectly understood in the groundwater field. There is concern that this unfamiliarity can lead to underuse, and misuse, of the methodology. This document is constructed to facilitate the appropriate use of regularized inversion for calibrating highly parameterized groundwater models. The presentation is directed at an intermediate- to advanced-level modeler, and it focuses on the PEST software suite, a frequently used tool for highly parameterized model calibration and one that is widely supported by commercial graphical user interfaces. A brief overview of the regularized inversion approach is provided, and techniques for mathematical regularization offered by PEST are outlined, including Tikhonov, subspace, and hybrid schemes. Guidelines for applying regularized inversion techniques are presented after a logical progression of steps for building suitable PEST input. The discussion starts with the use of pilot points as a parameterization device and the processing/grouping of observations to form multicomponent objective functions. A description of potential parameter solution methodologies and resources available through the PEST software and its supporting utility programs follows. Directing the parameter-estimation process through PEST control variables is then discussed, including guidance for monitoring and optimizing the performance of PEST. Comprehensive listings of PEST control variables, and of the roles performed by PEST utility support programs, are presented in the appendixes.
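Tikhonov regularization, one of the schemes the guide covers, can be illustrated in a generic toy form. This sketch is not PEST itself; the matrices and the simple penalty on parameter magnitude are hypothetical choices for demonstration:

```python
import numpy as np

def tikhonov_solve(J, d, alpha):
    """Solve min_p ||J p - d||^2 + alpha^2 ||p||^2 via the normal equations."""
    n = J.shape[1]
    return np.linalg.solve(J.T @ J + alpha**2 * np.eye(n), J.T @ d)

# Ill-posed toy problem: far more parameters (20) than observations (5),
# so plain least squares is non-unique; regularization picks a stable solution.
rng = np.random.default_rng(0)
J = rng.standard_normal((5, 20))       # sensitivity (Jacobian) matrix
p_true = np.zeros(20)
p_true[3] = 1.0
d = J @ p_true                         # synthetic observations
p_est = tikhonov_solve(J, d, alpha=0.1)
```

The regularization weight alpha trades fit against parameter stability, the same tension the guide's discussion of fit versus regularization addresses at full scale.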

  8. Functional connectivity analysis of fMRI data using parameterized regions-of-interest.

    NARCIS (Netherlands)

    Weeda, W.D.; Waldorp, L.J.; Grasman, R.P.P.P.; van Gaal, S.; Huizenga, H.M.

    2011-01-01

    Connectivity analysis of fMRI data requires correct specification of regions-of-interest (ROIs). Selection of ROIs based on outcomes of a GLM analysis may be hindered by conservativeness of the multiple comparison correction, while selection based on brain anatomy may be biased due to inconsistent ...

  9. Toward Rigorous Parameterization of Underconstrained Neural Network Models Through Interactive Visualization and Steering of Connectivity Generation

    Directory of Open Access Journals (Sweden)

    Christian Nowke

    2018-06-01

    Full Text Available Simulation models in many scientific fields can have non-unique solutions or unique solutions which can be difficult to find. Moreover, in evolving systems, unique final state solutions can be reached by multiple different trajectories. Neuroscience is no exception. Often, neural network models are subject to parameter fitting to obtain desirable output comparable to experimental data. Parameter fitting without sufficient constraints and a systematic exploration of the possible solution space can lead to conclusions valid only around local minima or around non-minima. To address this issue, we have developed an interactive tool for visualizing and steering parameters in neural network simulation models. In this work, we focus particularly on connectivity generation, since finding suitable connectivity configurations for neural network models constitutes a complex parameter search scenario. The development of the tool has been guided by several use cases—the tool allows researchers to steer the parameters of the connectivity generation during the simulation, thus quickly growing networks composed of multiple populations with a targeted mean activity. The flexibility of the software allows scientists to explore other connectivity and neuron variables apart from the ones presented as use cases. With this tool, we enable an interactive exploration of parameter spaces and a better understanding of neural network models, and grapple with the crucial problem of non-unique network solutions and trajectories. In addition, we observe a reduction in turnaround times for the assessment of these models, due to interactive visualization while the simulation is computed.

  10. Fast engineering optimization: A novel highly effective control parameterization approach for industrial dynamic processes.

    Science.gov (United States)

    Liu, Ping; Li, Guodong; Liu, Xinggao

    2015-09-01

    Control vector parameterization (CVP) is an important approach to engineering optimization for industrial dynamic processes. However, its major defect, the low optimization efficiency caused by repeatedly solving the relevant differential equations in the generated nonlinear programming (NLP) problem, limits its wide application in engineering optimization for industrial dynamic processes. A novel, highly effective control parameterization approach, fast-CVP, is first proposed to improve the optimization efficiency for industrial dynamic processes, where the costate gradient formula is employed and a fast approximate scheme is presented to solve the differential equations in dynamic process simulation. Three well-known engineering optimization benchmark problems of industrial dynamic processes are demonstrated as illustrations. The research results show that the proposed fast approach achieves fine performance: at least 90% of the computation time can be saved in contrast to the traditional CVP method, which reveals the effectiveness of the proposed fast engineering optimization approach for industrial dynamic processes. Copyright © 2015 ISA. Published by Elsevier Ltd. All rights reserved.
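The general flavor of control vector parameterization, discretizing a continuous control profile into a handful of decision variables and optimizing a simulated cost, can be sketched as follows. The scalar plant model, segment count, and cost weights are hypothetical illustrations, not the paper's fast-CVP scheme:

```python
import numpy as np
from scipy.optimize import minimize

def tracking_cost(u_params, T=1.0, n_steps=100):
    """Simulate x' = -x + u with piecewise-constant control; return a tracking cost."""
    n_seg = len(u_params)
    dt = T / n_steps
    x, cost = 0.0, 0.0
    for k in range(n_steps):
        u = u_params[min(k * n_seg // n_steps, n_seg - 1)]  # active control segment
        x += dt * (-x + u)                                  # forward-Euler step
        cost += dt * ((x - 1.0) ** 2 + 0.01 * u**2)         # track x = 1, penalize effort
    return cost

# Optimize the 4 segment values of the control profile.
res = minimize(tracking_cost, np.zeros(4), method="Nelder-Mead")
```

Each evaluation of the objective re-integrates the dynamics, which is exactly the repeated differential-equation cost that fast-CVP's approximate scheme targets.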

  11. The Topology Optimization of Three-dimensional Cooling Fins by the Internal Element Connectivity Parameterization Method

    International Nuclear Information System (INIS)

    Yoo, Sung Min; Kim, Yoon Young

    2007-01-01

    This work is concerned with the topology optimization of three-dimensional cooling fins or heat sinks. Motivated by the earlier success of the Internal Element Connectivity Parameterization (I-ECP) method in two-dimensional problems, the extension of I-ECP to three-dimensional problems is carried out. The main effort was to maintain the numerically trouble-free characteristics of I-ECP for full three-dimensional problems; a serious numerical problem appearing in thermal topology optimization is erroneous temperature undershooting. The effectiveness of the present implementation was checked through the design optimization of three-dimensional fins.

  12. Resolving kinematic redundancy with constraints using the FSP (Full Space Parameterization) approach

    International Nuclear Information System (INIS)

    Pin, F.G.; Tulloch, F.A.

    1996-01-01

    A solution method is presented for the motion planning and control of kinematically redundant serial-link manipulators in the presence of motion constraints such as joint limits or obstacles. Given a trajectory for the end-effector, the approach utilizes the recently proposed Full Space Parameterization (FSP) method to generate a parameterized expression for the entire space of solutions of the unconstrained system. At each time step, a constrained optimization technique is then used to analytically find the specific joint motion solution that satisfies the desired task objective and all the constraints active during the time step. The method is applicable to systems operating in a priori known environments or in unknown environments with sensor-based obstacle detection. The derivation of the analytical solution is first presented for a general type of kinematic constraint and is then applied to the problem of motion planning for redundant manipulators with joint limits and obstacle avoidance. Sample results using planar and 3-D manipulators with various degrees of redundancy are presented to illustrate the efficiency and wide applicability of constrained motion planning using the FSP approach

  13. Trajectory parameterization: A new approach to the study of the cosmic ray penumbra

    International Nuclear Information System (INIS)

    Cooke, D.J.

    1982-01-01

    A new approach to the examination of the structure of the cosmic ray penumbra has been developed which, while utilizing the speed, efficiency, and "real" geomagnetic field modeling capabilities of the digital computer, yields an analytical insight equivalent to that of the earlier and elegant approaches of Störmer, and of Lemaître and Vallarta. The method involves assigning parameters to trajectory features in order to allow the structure and properties of the penumbra to be systematically related to trajectory configuration by means of an automated set of computer procedures. Initial use of this "trajectory parameterization" technique to explore the form and stability of the penumbra has yielded new and important knowledge of the penumbra and its phenomenology. Among the new findings is the discovery of isolated forbidden penumbral "islands."

  14. Approaches in highly parameterized inversion: bgaPEST, a Bayesian geostatistical approach implementation with PEST: documentation and instructions

    Science.gov (United States)

    Fienen, Michael N.; D'Oria, Marco; Doherty, John E.; Hunt, Randall J.

    2013-01-01

    The application bgaPEST is a highly parameterized inversion software package implementing the Bayesian Geostatistical Approach in a framework compatible with the parameter estimation suite PEST. Highly parameterized inversion refers to cases in which parameters are distributed in space or time and are correlated with one another. The Bayesian aspect of bgaPEST is related to Bayesian probability theory, in which prior information about parameters is formally revised on the basis of the calibration dataset used for the inversion. Conceptually, this approach formalizes the conditionality of estimated parameters on the specific data and model available. The geostatistical component of the method refers to the way in which prior information about the parameters is used. A geostatistical autocorrelation function is used to enforce structure on the parameters to avoid overfitting and unrealistic results. The Bayesian Geostatistical Approach is designed to provide the smoothest solution that is consistent with the data. Optionally, users can specify a level of fit or estimate a balance between fit and model complexity informed by the data. Groundwater and surface-water applications are used as examples in this text, but the possible uses of bgaPEST extend to any distributed-parameter application.
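The geostatistical autocorrelation prior described above can be illustrated with a generic exponential covariance model over parameter locations. The sill and correlation length here are arbitrary illustrative values, not bgaPEST defaults:

```python
import numpy as np

def exponential_covariance(coords, sill=1.0, corr_len=10.0):
    """Prior covariance with exponential autocorrelation between parameter locations."""
    dist = np.abs(coords[:, None] - coords[None, :])   # pairwise 1D distances
    return sill * np.exp(-dist / corr_len)

# 20 pilot-point-like parameter locations along a 100 m transect.
coords = np.linspace(0.0, 100.0, 20)
Q = exponential_covariance(coords)   # nearby parameters are strongly correlated
```

A prior covariance of this form penalizes spatially rough parameter fields, which is how the geostatistical structure discourages overfitting.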

  15. Modulation wave approach to the structural parameterization and Rietveld refinement of low carnegieite

    International Nuclear Information System (INIS)

    Withers, R.L.; Thompson, J.G.

    1993-01-01

    The crystal structure of low carnegieite, NaAlSiO₄ [Mr = 142.05, orthorhombic, Pb2₁a, a = 10.261(1), b = 14.030(2), c = 5.1566(6) Å, Dx = 2.542 g cm⁻³, Z = 4, Cu Kα₁, λ = 1.5406 Å, μ = 77.52 cm⁻¹, F(000) = 559.85], is determined via Rietveld refinement from powder data, Rp = 0.057, Rwp = 0.076, RBragg = 0.050. Given that there are far too many parameters to be determined via unconstrained Rietveld refinement, a group theoretical or modulation wave approach is used in order to parameterize the structural deviation of low carnegieite from its underlying C9 aristotype. Appropriate crystal chemical constraints are applied in order to provide two distinct plausible starting models for the structure of the aluminosilicate framework. The correct starting model for the aluminosilicate framework as well as the ordering and positions of the non-framework Na atoms are then determined via Rietveld refinement. At all stages, chemical plausibility is checked via the use of the bond-length-bond-valence formalism. The JCPDS file number for low carnegieite is 44-1496. (orig.)

  16. Aerodynamic Shape Optimization Design of Wing-Body Configuration Using a Hybrid FFD-RBF Parameterization Approach

    Science.gov (United States)

    Liu, Yuefeng; Duan, Zhuoyi; Chen, Song

    2017-10-01

    Aerodynamic shape optimization design aiming at improving the efficiency of an aircraft has always been a challenging task, especially when the configuration is complex. In this paper, a hybrid FFD-RBF surface parameterization approach is proposed for designing a civil transport wing-body configuration. This approach is simple and efficient, with the FFD technique used to parameterize the wing shape and the RBF interpolation approach used to update the wing-body junction region. Furthermore, combined with the Cuckoo Search algorithm and a Kriging surrogate model with an expected-improvement adaptive sampling criterion, an aerodynamic shape optimization design system has been established. Finally, aerodynamic shape optimization design of the DLR F4 wing-body configuration has been carried out as a study case, and the result shows that the proposed approach is effective.
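Using RBF interpolation to propagate surface deformations to neighboring mesh points, as in the junction-updating step above, can be sketched with SciPy's RBFInterpolator. The point sets and displacement field here are synthetic placeholders, not the paper's configuration:

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

# Synthetic "wing surface" nodes with a prescribed smooth displacement field.
rng = np.random.default_rng(2)
surface_pts = rng.uniform(0.0, 1.0, size=(30, 3))
surface_disp = 0.05 * np.sin(2.0 * np.pi * surface_pts[:, :1])   # (30, 1) displacements

# Fit an RBF to the surface displacements, then evaluate it at nearby
# "junction" mesh nodes so the junction mesh deforms smoothly with the surface.
rbf = RBFInterpolator(surface_pts, surface_disp, kernel="thin_plate_spline")
junction_pts = rng.uniform(0.0, 1.0, size=(100, 3))
junction_disp = rbf(junction_pts)
```

Because a thin-plate-spline RBF interpolates the surface data exactly and varies smoothly in between, the deformed junction mesh stays consistent with the FFD-deformed wing surface.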

  17. A Symbolic Computation Approach to Parameterizing Controller for Polynomial Hamiltonian Systems

    Directory of Open Access Journals (Sweden)

    Zhong Cao

    2014-01-01

    Full Text Available This paper considers a controller parameterization method for H∞ control of polynomial Hamiltonian systems (PHSs), which involves internal stability and external disturbance attenuation. The aims of this paper are to design a controller with parameters to ensure that the systems are H∞ stable and to propose an algorithm for solving the parameters of the controller with symbolic computation. The proposed parameterization method avoids solving Hamilton-Jacobi-Isaacs equations, and thus the obtained controllers with parameters are relatively simple in form and easy to operate. Simulation with a numerical example shows that the controller is effective, as it can optimize H∞ control by adjusting parameters. All these results are expected to be of use in the study of H∞ control for nonlinear systems with perturbations.

  18. On The Importance of Connecting Laboratory Measurements of Ice Crystal Growth with Model Parameterizations: Predicting Ice Particle Properties

    Science.gov (United States)

    Harrington, J. Y.

    2017-12-01

    Parameterizing the growth of ice particles in numerical models is at an interesting crossroads. Most parameterizations developed in the past, including some that I have developed, parse model ice into numerous categories based primarily on the growth mode of the particle. Models routinely possess smaller ice, snow crystals, aggregates, graupel, and hail. The snow and ice categories in some models are further split into subcategories to account for the various shapes of ice. There has been a relatively recent shift towards a new class of microphysical models that predict the properties of ice particles instead of using multiple categories and subcategories. Particle property models predict the physical characteristics of ice, such as aspect ratio, maximum dimension, effective density, rime density, effective area, and so forth. These models are attractive in the sense that particle characteristics evolve naturally in time and space without the need for numerous (and somewhat artificial) transitions among pre-defined classes. However, particle property models often require fundamental parameters that are typically derived from laboratory measurements. For instance, the evolution of particle shape during vapor depositional growth requires knowledge of the growth efficiencies for the various axes of the crystals, which in turn depend on surface parameters that can only be determined in the laboratory. The evolution of particle shapes and density during riming, aggregation, and melting requires data on the redistribution of mass across a crystal's axes as that crystal collects water drops, collects ice crystals, or melts. Predicting the evolution of particle properties based on laboratory-determined parameters has a substantial influence on the evolution of some cloud systems. Radiatively-driven cirrus clouds show a broader range of competition between heterogeneous nucleation and homogeneous freezing when ice crystal properties are predicted. Even strongly convective squall ...

  19. A novel non-uniform control vector parameterization approach with time grid refinement for flight level tracking optimal control problems.

    Science.gov (United States)

    Liu, Ping; Li, Guodong; Liu, Xinggao; Xiao, Long; Wang, Yalin; Yang, Chunhua; Gui, Weihua

    2018-02-01

    A high-quality control method is essential for the implementation of an aircraft autopilot system. An optimal control problem model considering the safe aerodynamic envelope is therefore established to improve the control quality of aircraft flight level tracking. A novel non-uniform control vector parameterization (CVP) method with time grid refinement is then proposed for solving the optimal control problem. By introducing Hilbert-Huang transform (HHT) analysis, an efficient time grid refinement approach is presented and an adaptive time grid is automatically obtained. With this refinement, the proposed method needs fewer optimization parameters to achieve better control quality than the uniform-refinement CVP method, while the computational cost is lower. Two well-known flight level altitude tracking problems and one minimum time cost problem are tested as illustrations, with the uniform-refinement control vector parameterization method adopted as the comparative base. Numerical results show that the proposed method achieves better performance in terms of optimization accuracy and computation cost; meanwhile, the control quality is efficiently improved. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.

  20. Application of the dual Youla parameterization

    DEFF Research Database (Denmark)

    Niemann, Hans Henrik

    1999-01-01

    Different applications of the parameterization of all systems stabilized by a given controller, i.e. the dual Youla parameterization, are considered in this paper. It will be shown how the parameterization can be applied in connection with controller design, adaptive controllers, model validation...

  1. Approaches in highly parameterized inversion - PEST++, a Parameter ESTimation code optimized for large environmental models

    Science.gov (United States)

    Welter, David E.; Doherty, John E.; Hunt, Randall J.; Muffels, Christopher T.; Tonkin, Matthew J.; Schreuder, Willem A.

    2012-01-01

    An object-oriented parameter estimation code was developed to incorporate the benefits of object-oriented programming techniques for solving large parameter estimation modeling problems. The code is written in C++ and is a formulation and expansion of the algorithms included in PEST, a widely used parameter estimation code written in Fortran. The new code is called PEST++ and is designed to lower the barriers to entry for users and developers while providing efficient algorithms that can accommodate large, highly parameterized problems. This effort has focused on (1) implementing the most popular features of PEST in a fashion that is easy for novice or experienced modelers to use and (2) creating a software design that is easy to extend; that is, this effort provides a documented object-oriented framework designed from the ground up to be modular and extensible. In addition, all PEST++ source code and its associated libraries, as well as the general run manager source code, have been integrated in the Microsoft Visual Studio® 2010 integrated development environment. The PEST++ code is designed to provide a foundation for an open-source development environment capable of producing robust and efficient parameter estimation tools for the environmental modeling community into the future.

  2. The Explicit-Cloud Parameterized-Pollutant hybrid approach for aerosol-cloud interactions in multiscale modeling framework models: tracer transport results

    International Nuclear Information System (INIS)

    Gustafson, William I, Jr; Berg, Larry K; Easter, Richard C; Ghan, Steven J

    2008-01-01

    All estimates of aerosol indirect effects on the global energy balance have either completely neglected the influence of aerosol on convective clouds or treated the influence in a highly parameterized manner. Embedding cloud-resolving models (CRMs) within each grid cell of a global model provides a multiscale modeling framework for treating both the influence of aerosols on convective as well as stratiform clouds and the influence of clouds on the aerosol, but treating the interactions explicitly by simulating all aerosol processes in the CRM is computationally prohibitive. An alternate approach is to use horizontal statistics (e.g., cloud mass flux, cloud fraction, and precipitation) from the CRM simulation to drive a single-column parameterization of cloud effects on the aerosol and then use the aerosol profile to simulate aerosol effects on clouds within the CRM. Here, we present results from the first component of the Explicit-Cloud Parameterized-Pollutant parameterization to be developed, which handles vertical transport of tracers by clouds. A CRM with explicit tracer transport serves as a benchmark. We show that this parameterization, driven by the CRM's cloud mass fluxes, reproduces the CRM tracer transport significantly better than a single-column model that uses a conventional convective cloud parameterization.

  3. The Explicit-Cloud Parameterized-Pollutant hybrid approach for aerosol-cloud interactions in multiscale modeling framework models: tracer transport results

    Energy Technology Data Exchange (ETDEWEB)

    Gustafson, William I, Jr; Berg, Larry K; Easter, Richard C; Ghan, Steven J [Atmospheric Science and Global Change Division, Pacific Northwest National Laboratory, PO Box 999, MSIN K9-30, Richland, WA (United States)], E-mail: William.Gustafson@pnl.gov

    2008-04-15

    All estimates of aerosol indirect effects on the global energy balance have either completely neglected the influence of aerosol on convective clouds or treated the influence in a highly parameterized manner. Embedding cloud-resolving models (CRMs) within each grid cell of a global model provides a multiscale modeling framework for treating both the influence of aerosols on convective as well as stratiform clouds and the influence of clouds on the aerosol, but treating the interactions explicitly by simulating all aerosol processes in the CRM is computationally prohibitive. An alternate approach is to use horizontal statistics (e.g., cloud mass flux, cloud fraction, and precipitation) from the CRM simulation to drive a single-column parameterization of cloud effects on the aerosol and then use the aerosol profile to simulate aerosol effects on clouds within the CRM. Here, we present results from the first component of the Explicit-Cloud Parameterized-Pollutant parameterization to be developed, which handles vertical transport of tracers by clouds. A CRM with explicit tracer transport serves as a benchmark. We show that this parameterization, driven by the CRM's cloud mass fluxes, reproduces the CRM tracer transport significantly better than a single-column model that uses a conventional convective cloud parameterization.

  4. Space-time trajectories of wind power generation: Parameterized precision matrices under a Gaussian copula approach

    DEFF Research Database (Denmark)

    Tastu, Julija; Pinson, Pierre; Madsen, Henrik

    2015-01-01

    Emphasis is placed on generating space-time trajectories of wind power generation, consisting of paths sampled from high-dimensional joint predictive densities, describing wind power generation at a number of contiguous locations and successive lead times. A modelling approach taking advantage ... -correlations. Estimation is performed in a maximum likelihood framework. Based on a test case application in Denmark, with spatial dependencies over 15 areas and temporal ones for 43 hourly lead times (hence, for a dimension of n = 645), it is shown that accounting for space-time effects is crucial for generating skilful ...

  5. Tuning controllers using the dual Youla parameterization

    DEFF Research Database (Denmark)

    Niemann, Hans Henrik; Stoustrup, Jakob

    2000-01-01

    This paper describes the application of the Youla parameterization of all stabilizing controllers and the dual Youla parameterization of all systems stabilized by a given controller in connection with tuning of controllers. In the uncertain case, it is shown that the use of the Youla parameterization […]

  6. Radiological approach to systemic connective tissue diseases

    International Nuclear Information System (INIS)

    Wiesmann, W.; Schneider, M.

    1988-01-01

    Systemic lupus erythematosus (SLE) and progressive systemic sclerosis (PSS) represent the most frequent manifestations of systemic connective tissue diseases (collagen diseases). Radiological examinations are employed to estimate the extension and degree of the pathological process. In addition, progression of the disease can be verified. In both of the above collagen diseases, specific radiological findings can be observed that permit them to be differentiated from other entities. An algorithm for the adequate radiological work-up of collagen diseases is presented. (orig.)

  7. Radiological approach to systemic connective tissue diseases

    Energy Technology Data Exchange (ETDEWEB)

    Wiesmann, W; Schneider, M

    1988-07-01

    Systemic lupus erythematosus (SLE) and progressive systemic sclerosis (PSS) represent the most frequent manifestations of systemic connective tissue diseases (collagen diseases). Radiological examinations are employed to estimate the extension and degree of the pathological process. In addition, progression of the disease can be verified. In both of the above collagen diseases, specific radiological findings can be observed that permit them to be differentiated from other entities. An algorithm for the adequate radiological work-up of collagen diseases is presented.

  8. Infrared radiation parameterizations in numerical climate models

    Science.gov (United States)

    Chou, Ming-Dah; Kratz, David P.; Ridgway, William

    1991-01-01

    This study presents various approaches to parameterizing the broadband transmission functions for utilization in numerical climate models. One-parameter scaling is applied to approximate a nonhomogeneous path with an equivalent homogeneous path, and the diffuse transmittances are either interpolated from precomputed tables or fit by analytical functions. Two-parameter scaling is applied to parameterizing the carbon dioxide and ozone transmission functions in both the lower and middle atmosphere. Parameterizations are given for the nitrous oxide and methane diffuse transmission functions.

  9. Parameterized examination in econometrics

    Science.gov (United States)

    Malinova, Anna; Kyurkchiev, Vesselin; Spasov, Georgi

    2018-01-01

    The paper presents a parameterization of basic types of exam questions in Econometrics. This algorithm is used to automate and facilitate the process of examination, assessment and self-preparation of a large number of students. The proposed parameterization of testing questions reduces the time required to author tests and course assignments. It enables tutors to generate a large number of different but equivalent dynamic questions (with dynamic answers) on a certain topic, which are automatically assessed. The presented methods are implemented in DisPeL (Distributed Platform for e-Learning) and provide questions in the areas of filtering and smoothing of time-series data, forecasting, building and analysis of single-equation econometric models. Questions also cover elasticity, average and marginal characteristics, product and cost functions, measurement of monopoly power, supply, demand and equilibrium price, consumer and product surplus, etc. Several approaches are used to enable the required numerical computations in DisPeL - integration of third-party mathematical libraries, developing our own procedures from scratch, and wrapping our legacy math codes in order to modernize and reuse them.
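A dynamic question of the kind described might be generated as in the sketch below; the demand-function template, parameter ranges and grading tolerance are invented for illustration and are not the actual DisPeL templates.

```python
import random

def make_elasticity_question(seed):
    """Generate one parameterized question with its auto-gradable answer."""
    rng = random.Random(seed)          # same seed -> same question
    a = rng.randint(100, 150)          # demand intercept
    b = rng.randint(2, 5)              # demand slope
    p = rng.randint(3, 10)             # evaluation price (b*p < a)
    q = a - b * p                      # linear demand Q = a - b*P
    elasticity = -b * p / q            # point price elasticity
    text = (f"Demand is Q = {a} - {b}P. Compute the point price "
            f"elasticity of demand at P = {p}.")
    return text, round(elasticity, 3)

def grade(answer, correct, tol=1e-3):
    """Accept numerically close answers."""
    return abs(answer - correct) <= tol
```

Because the answer is computed alongside the question, a tutor can generate arbitrarily many equivalent variants that are assessed automatically.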

  10. Transhepatic approach for extracardiac inferior cavopulmonary connection stent fenestration.

    LENUS (Irish Health Repository)

    Kenny, Damien

    2012-02-01

    We report on a 3-year-old male who underwent transcatheter stent fenestration of the inferior portion of an extracardiac total cavopulmonary connection in the setting of hypoplastic left heart syndrome. A transhepatic approach, following an unsuccessful attempt from the femoral vein, facilitated delivery of a diabolo-shaped stent.

  11. Approaches to highly parameterized inversion: A guide to using PEST for model-parameter and predictive-uncertainty analysis

    Science.gov (United States)

    Doherty, John E.; Hunt, Randall J.; Tonkin, Matthew J.

    2010-01-01

    Analysis of the uncertainty associated with parameters used by a numerical model, and with predictions that depend on those parameters, is fundamental to the use of modeling in support of decisionmaking. Unfortunately, predictive uncertainty analysis with regard to models can be very computationally demanding, due in part to complex constraints on parameters that arise from expert knowledge of system properties on the one hand (knowledge constraints) and from the necessity for the model parameters to assume values that allow the model to reproduce historical system behavior on the other hand (calibration constraints). Enforcement of knowledge and calibration constraints on parameters used by a model does not eliminate the uncertainty in those parameters. In fact, in many cases, enforcement of calibration constraints simply reduces the uncertainties associated with a number of broad-scale combinations of model parameters that collectively describe spatially averaged system properties. The uncertainties associated with other combinations of parameters, especially those that pertain to small-scale parameter heterogeneity, may not be reduced through the calibration process. To the extent that a prediction depends on system-property detail, its postcalibration variability may be reduced very little, if at all, by applying calibration constraints; knowledge constraints remain the only limits on the variability of predictions that depend on such detail. Regrettably, in many common modeling applications, these constraints are weak. Though the PEST software suite was initially developed as a tool for model calibration, recent developments have focused on the evaluation of model-parameter and predictive uncertainty. As a complement to functionality that it provides for highly parameterized inversion (calibration) by means of formal mathematical regularization techniques, the PEST suite provides utilities for linear and nonlinear error-variance and uncertainty analysis in
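The core of such highly parameterized inversion can be illustrated, in heavily simplified form, by a single Tikhonov-regularized Gauss-Newton step; this is a sketch of the general idea, not PEST's actual algorithm, which adds adaptive regularization weights, subspace methods and much else.

```python
import numpy as np

def regularized_step(J, residual, p, p_pref, lam):
    """One Gauss-Newton step minimizing
    ||J dp - residual||^2 + lam * ||(p + dp) - p_pref||^2,
    where p_pref encodes expert-knowledge (preferred) parameter values.
    """
    n = J.shape[1]
    A = J.T @ J + lam * np.eye(n)            # regularized normal matrix
    b = J.T @ residual + lam * (p_pref - p)  # pull toward p_pref
    return p + np.linalg.solve(A, b)
```

The regularization term is one way of expressing the "knowledge constraints" discussed above: it holds poorly informed parameter combinations near their preferred values while the calibration data constrain the well-informed ones.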

  12. Using graph approach for managing connectivity in integrative landscape modelling

    Science.gov (United States)

    Rabotin, Michael; Fabre, Jean-Christophe; Libres, Aline; Lagacherie, Philippe; Crevoisier, David; Moussa, Roger

    2013-04-01

    In cultivated landscapes, many landscape elements such as field boundaries, ditches or banks strongly impact water flows and mass and energy fluxes. At the watershed scale, these impacts are strongly conditioned by the connectivity of these landscape elements. An accurate representation of these elements and of their complex spatial arrangements is therefore of great importance for modelling and predicting these impacts. We developed, in the framework of the OpenFLUID platform (Software Environment for Modelling Fluxes in Landscapes), a digital landscape representation that takes into account the spatial variabilities and connectivities of diverse landscape elements through the application of graph theory concepts. The proposed landscape representation considers spatial units connected together to represent flux exchanges or any other information exchanges. Each spatial unit of the landscape is represented as a node of a graph and relations between units as graph connections. The connections are of two types, parent-child connections and up/downstream connections, which allows OpenFLUID to handle hierarchical graphs. Connections can also carry information, and the graph can evolve during simulation (modification of connections or elements). This graph approach allows greater genericity in landscape representation and management of complex connections, and facilitates the development of new landscape representation algorithms. Graph management is fully operational in OpenFLUID for developers and modelers, and several graph tools are available, such as graph traversal algorithms and graph displays. Graph representation can be managed (i) manually by the user (for example in simple catchments) through XML-based files in an easily editable and readable format, or (ii) by using methods of the OpenFLUID-landr library, an OpenFLUID library relying on common open-source spatial libraries (the OGR vector, GEOS topologic vector and GDAL raster libraries). Open

  13. Consensus clustering approach to group brain connectivity matrices

    Directory of Open Access Journals (Sweden)

    Javier Rasero

    2017-10-01

    A novel approach rooted in the notion of consensus clustering, a strategy developed for community detection in complex networks, is proposed to cope with the heterogeneity that characterizes connectivity matrices in health and disease. The method can be summarized as follows: (a) define, for each node, a distance matrix for the set of subjects by comparing the connectivity pattern of that node in all pairs of subjects; (b) cluster the distance matrix for each node; (c) build the consensus network from the corresponding partitions; and (d) extract groups of subjects by finding the communities of the consensus network thus obtained. Different from previous implementations of consensus clustering, we thus propose to use the consensus strategy to combine the information arising from the connectivity patterns of each node. The proposed approach may be seen either as an exploratory technique or as an unsupervised pretraining step to help the subsequent construction of a supervised classifier. Applications to a toy model and two real datasets show the effectiveness of the proposed methodology, which represents the heterogeneity of a set of subjects in terms of a weighted network, the consensus matrix.
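The four steps (a)-(d) can be sketched directly. The concrete choices below (average-linkage hierarchical clustering, a fixed number of clusters per node, a 0.5 consensus threshold, components instead of modularity communities) are illustrative stand-ins, not the paper's actual settings.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import pdist

def consensus_groups(conn, k=2, threshold=0.5):
    """Group subjects from connectivity matrices of shape
    (n_subjects, n_nodes, n_nodes)."""
    n_subj, n_nodes, _ = conn.shape
    consensus = np.zeros((n_subj, n_subj))
    for node in range(n_nodes):
        patterns = conn[:, node, :]                       # (a) node's pattern per subject
        labels = fcluster(linkage(pdist(patterns), 'average'),
                          k, criterion='maxclust')        # (b) cluster subjects
        consensus += labels[:, None] == labels[None, :]   # (c) consensus network
    adj = consensus / n_nodes > threshold
    # (d) groups = connected components of the thresholded consensus graph
    groups, seen = [], set()
    for s in range(n_subj):
        if s in seen:
            continue
        stack, comp = [s], set()
        while stack:
            v = stack.pop()
            if v in comp:
                continue
            comp.add(v)
            stack.extend(np.flatnonzero(adj[v]))
        seen |= comp
        groups.append(sorted(int(x) for x in comp))
    return groups
```

A fuller implementation would run modularity-based community detection on the weighted consensus matrix rather than taking components of a thresholded graph.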

  14. Gain scheduling using the Youla parameterization

    DEFF Research Database (Denmark)

    Niemann, Hans Henrik; Stoustrup, Jakob

    1999-01-01

    Gain scheduling controllers are considered in this paper. The gain scheduling problem where the scheduling parameter vector cannot be measured directly, but needs to be estimated, is considered. An estimate of the scheduling vector has been derived by using the Youla parameterization. The use […] in connection with H_inf gain scheduling controllers.

  15. Parameterized post-Newtonian cosmology

    International Nuclear Information System (INIS)

    Sanghai, Viraj A A; Clifton, Timothy

    2017-01-01

    Einstein’s theory of gravity has been extensively tested on solar system scales, and for isolated astrophysical systems, using the perturbative framework known as the parameterized post-Newtonian (PPN) formalism. This framework is designed for use in the weak-field and slow-motion limit of gravity, and can be used to constrain a large class of metric theories of gravity with data collected from the aforementioned systems. Given the potential of future surveys to probe cosmological scales to high precision, it is a topic of much contemporary interest to construct a similar framework to link Einstein’s theory of gravity and its alternatives to observations on cosmological scales. Our approach to this problem is to adapt and extend the existing PPN formalism for use in cosmology. We derive a set of equations that use the same parameters to consistently model both weak fields and cosmology. This allows us to parameterize a large class of modified theories of gravity and dark energy models on cosmological scales, using just four functions of time. These four functions can be directly linked to the background expansion of the universe, first-order cosmological perturbations, and the weak-field limit of the theory. They also reduce to the standard PPN parameters on solar system scales. We illustrate how dark energy models and scalar-tensor and vector-tensor theories of gravity fit into this framework, which we refer to as ‘parameterized post-Newtonian cosmology’ (PPNC). (paper)

  16. Parameterized post-Newtonian cosmology

    Science.gov (United States)

    Sanghai, Viraj A. A.; Clifton, Timothy

    2017-03-01

    Einstein’s theory of gravity has been extensively tested on solar system scales, and for isolated astrophysical systems, using the perturbative framework known as the parameterized post-Newtonian (PPN) formalism. This framework is designed for use in the weak-field and slow-motion limit of gravity, and can be used to constrain a large class of metric theories of gravity with data collected from the aforementioned systems. Given the potential of future surveys to probe cosmological scales to high precision, it is a topic of much contemporary interest to construct a similar framework to link Einstein’s theory of gravity and its alternatives to observations on cosmological scales. Our approach to this problem is to adapt and extend the existing PPN formalism for use in cosmology. We derive a set of equations that use the same parameters to consistently model both weak fields and cosmology. This allows us to parameterize a large class of modified theories of gravity and dark energy models on cosmological scales, using just four functions of time. These four functions can be directly linked to the background expansion of the universe, first-order cosmological perturbations, and the weak-field limit of the theory. They also reduce to the standard PPN parameters on solar system scales. We illustrate how dark energy models and scalar-tensor and vector-tensor theories of gravity fit into this framework, which we refer to as ‘parameterized post-Newtonian cosmology’ (PPNC).
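For orientation, the solar-system limit to which the PPNC functions must reduce is the standard weak-field PPN metric; keeping only the two most-constrained parameters, β and γ, it reads

```latex
g_{00} = -1 + 2U - 2\beta U^{2}, \qquad
g_{ij} = \left(1 + 2\gamma U\right)\delta_{ij},
```

where U is the Newtonian potential and general relativity corresponds to β = γ = 1 (this is the textbook form, quoted here for reference rather than taken from the paper).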

  17. Approaches in highly parameterized inversion-PESTCommander, a graphical user interface for file and run management across networks

    Science.gov (United States)

    Karanovic, Marinko; Muffels, Christopher T.; Tonkin, Matthew J.; Hunt, Randall J.

    2012-01-01

    Models of environmental systems have become increasingly complex, incorporating increasingly large numbers of parameters in an effort to represent physical processes on a scale approaching that at which they occur in nature. Consequently, the inverse problem of parameter estimation (specifically, model calibration) and subsequent uncertainty analysis have become increasingly computation-intensive endeavors. Fortunately, advances in computing have made computational power equivalent to that of dozens to hundreds of desktop computers accessible through a variety of alternate means: modelers have various possibilities, ranging from traditional Local Area Networks (LANs) to cloud computing. Commonly used parameter estimation software is well suited to take advantage of the availability of such increased computing power. Unfortunately, logistical issues become increasingly important as an increasing number and variety of computers are brought to bear on the inverse problem. To facilitate efficient access to disparate computer resources, the PESTCommander program documented herein has been developed to provide a Graphical User Interface (GUI) that facilitates the management of model files ("file management") and remote launching and termination of "slave" computers across a distributed network of computers ("run management"). In version 1.0 described here, PESTCommander can access and ascertain resources across traditional Windows LANs: however, the architecture of PESTCommander has been developed with the intent that future releases will be able to access computing resources (1) via trusted domains established in Wide Area Networks (WANs) in multiple remote locations and (2) via heterogeneous networks of Windows- and Unix-based operating systems. The design of PESTCommander also makes it suitable for extension to other computational resources, such as those that are available via cloud computing. Version 1.0 of PESTCommander was developed primarily to work with the

  18. A Novel Synchronization-Based Approach for Functional Connectivity Analysis

    Directory of Open Access Journals (Sweden)

    Angela Lombardi

    2017-01-01

    Complex network analysis has become a gold standard for investigating functional connectivity in the human brain. Popular approaches for quantifying functional coupling between fMRI time series are linear zero-lag correlation methods; however, they might reveal only partial aspects of the functional links between brain areas. In this work, we propose a novel approach for assessing functional coupling between fMRI time series and constructing functional brain networks. A phase space framework is used to map couples of signals, exploiting their cross recurrence plots (CRPs) to compare the trajectories of the interacting systems. A synchronization metric is extracted from the CRP to assess the coupling behavior of the time series. Since the functional communities of a healthy population are expected to be highly consistent for the same task, we defined functional networks of task-related fMRI data of a cohort of healthy subjects and applied a modularity algorithm in order to determine the community structures of the networks. The within-group similarity of communities is evaluated to verify whether the new metric is robust enough against noise. The synchronization metric is also compared with Pearson's correlation coefficient, and the detected communities seem to better reflect the functional brain organization during the specific task.
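A minimal version of the CRP construction can be sketched as follows; the recurrence rate computed here is a simple stand-in for the paper's synchronization metric, and the embedding parameters are illustrative.

```python
import numpy as np

def delay_embed(x, dim=2, tau=1):
    """Time-delay embedding of a scalar series into phase space."""
    n = len(x) - (dim - 1) * tau
    return np.column_stack([x[i * tau:i * tau + n] for i in range(dim)])

def crp_recurrence_rate(x, y, eps, dim=2, tau=1):
    """Cross recurrence plot of two series and its recurrence rate,
    used as a crude coupling score between the two trajectories."""
    X, Y = delay_embed(x, dim, tau), delay_embed(y, dim, tau)
    m = min(len(X), len(Y))
    d = np.linalg.norm(X[:m, None, :] - Y[None, :m, :], axis=-1)
    return float((d < eps).mean())
```

Two strongly coupled signals visit neighboring phase-space regions often, so their CRP is dense; an uncoupled pair yields a sparse CRP and a low rate.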

  19. Inheritance versus parameterization

    DEFF Research Database (Denmark)

    Ernst, Erik

    2013-01-01

    This position paper argues that inheritance and parameterization differ in their fundamental structure, even though they may emulate each other in many ways. Based on this, we claim that certain mechanisms, e.g., final classes, are in conflict with the nature of inheritance, and hence cause...

  20. Remodeling Functional Connectivity in Multiple Sclerosis: A Challenging Therapeutic Approach.

    Science.gov (United States)

    Stampanoni Bassi, Mario; Gilio, Luana; Buttari, Fabio; Maffei, Pierpaolo; Marfia, Girolama A; Restivo, Domenico A; Centonze, Diego; Iezzi, Ennio

    2017-01-01

    Neurons in the central nervous system are organized in functional units interconnected to form complex networks. Acute and chronic brain damage disrupts brain connectivity, producing neurological signs and/or symptoms. In several neurological diseases, particularly in Multiple Sclerosis (MS), structural imaging studies cannot always demonstrate a clear association between lesion site and clinical disability, originating the "clinico-radiological paradox." The discrepancy between structural damage and disability can be explained by a complex network perspective. Both brain network architecture and synaptic plasticity may play important roles in modulating brain network efficiency after brain damage. In particular, long-term potentiation (LTP) may occur in surviving neurons to compensate for network disconnection. In MS, inflammatory cytokines dramatically interfere with synaptic transmission and plasticity. Importantly, in addition to acute and chronic structural damage, inflammation could contribute to reducing brain network efficiency in MS, leading to worse clinical recovery after a relapse and worse disease progression. This evidence suggests that removing inflammation should represent the main therapeutic target in MS; moreover, as synaptic plasticity is particularly altered by inflammation, specific strategies aimed at promoting LTP mechanisms could be effective for enhancing clinical recovery. Modulation of plasticity with different non-invasive brain stimulation (NIBS) techniques has been used to promote recovery of MS symptoms. Better knowledge of the features inducing brain disconnection in MS is crucial to design specific strategies to promote recovery and use NIBS with an increasingly tailored approach.

  1. Remodeling Functional Connectivity in Multiple Sclerosis: A Challenging Therapeutic Approach

    Directory of Open Access Journals (Sweden)

    Mario Stampanoni Bassi

    2017-12-01

    Neurons in the central nervous system are organized in functional units interconnected to form complex networks. Acute and chronic brain damage disrupts brain connectivity, producing neurological signs and/or symptoms. In several neurological diseases, particularly in Multiple Sclerosis (MS), structural imaging studies cannot always demonstrate a clear association between lesion site and clinical disability, originating the “clinico-radiological paradox.” The discrepancy between structural damage and disability can be explained by a complex network perspective. Both brain network architecture and synaptic plasticity may play important roles in modulating brain network efficiency after brain damage. In particular, long-term potentiation (LTP) may occur in surviving neurons to compensate for network disconnection. In MS, inflammatory cytokines dramatically interfere with synaptic transmission and plasticity. Importantly, in addition to acute and chronic structural damage, inflammation could contribute to reducing brain network efficiency in MS, leading to worse clinical recovery after a relapse and worse disease progression. This evidence suggests that removing inflammation should represent the main therapeutic target in MS; moreover, as synaptic plasticity is particularly altered by inflammation, specific strategies aimed at promoting LTP mechanisms could be effective for enhancing clinical recovery. Modulation of plasticity with different non-invasive brain stimulation (NIBS) techniques has been used to promote recovery of MS symptoms. Better knowledge of the features inducing brain disconnection in MS is crucial to design specific strategies to promote recovery and use NIBS with an increasingly tailored approach.

  2. Weighing neutrinos in the scenario of vacuum energy interacting with cold dark matter: application of the parameterized post-Friedmann approach

    Energy Technology Data Exchange (ETDEWEB)

    Guo, Rui-Yun; Li, Yun-He; Zhang, Jing-Fei; Zhang, Xin, E-mail: guoruiyun110@163.com, E-mail: liyh19881206@126.com, E-mail: jfzhang@mail.neu.edu.cn, E-mail: zhangxin@mail.neu.edu.cn [Department of Physics, College of Sciences, Northeastern University, Shenyang 110004 (China)

    2017-05-01

    We constrain the neutrino mass in the scenario of vacuum energy interacting with cold dark matter by using current cosmological observations. To avoid the large-scale instability problem in interacting dark energy models, we employ the parameterized post-Friedmann (PPF) approach to calculate the perturbation evolution, for the Q = βHρ_c and Q = βHρ_Λ models. The current observational data sets used in this work include Planck (cosmic microwave background), BSH (baryon acoustic oscillations, type Ia supernovae, and Hubble constant), and LSS (redshift space distortions and weak lensing). According to the constraint results, we find that β > 0 at more than the 1σ level for the Q = βHρ_c model, which indicates that cold dark matter decays into vacuum energy, while β = 0 is consistent with the current data at the 1σ level for the Q = βHρ_Λ model. Taking the ΛCDM model as a baseline model, we find that a smaller upper limit, ∑m_ν < 0.11 eV (2σ), is induced by the latest BAO BOSS DR12 data and the Hubble constant measurement H_0 = 73.00 ± 1.75 km s^−1 Mpc^−1. For the Q = βHρ_c model, we obtain ∑m_ν < 0.20 eV (2σ) from Planck+BSH. For the Q = βHρ_Λ model, ∑m_ν < 0.10 eV (2σ) and ∑m_ν < 0.14 eV (2σ) are derived from Planck+BSH and Planck+BSH+LSS, respectively. We show that these smaller upper limits on ∑m_ν are affected more or less by the tension between H_0 and other observational data.

  3. ConnectViz: Accelerated Approach for Brain Structural Connectivity Using Delaunay Triangulation.

    Science.gov (United States)

    Adeshina, A M; Hashim, R

    2016-03-01

    Stroke is a cardiovascular disease with high mortality and long-term disability in the world. Normal functioning of the brain is dependent on the adequate supply of oxygen and nutrients to the brain's complex network through the blood vessels. A hemorrhagic stroke, ischemia or another blood vessel dysfunction can affect patients during a cerebrovascular incident. Structurally, the left and the right carotid arteries, and the right and the left vertebral arteries, are responsible for supplying blood to the brain, scalp and face. However, a number of impairments in the function of the frontal lobes may occur as a result of any decrease in the flow of blood through one of the internal carotid arteries. Such impairment commonly results in numbness, weakness or paralysis. Recently, the concept of the brain's wiring representation, the connectome, was introduced. However, construction and visualization of such a brain network requires tremendous computation. Consequently, previously proposed approaches have been identified with common problems of high memory consumption and slow execution. Furthermore, interactivity in the previously proposed frameworks for brain networks is also an outstanding issue. This study proposes an accelerated approach for brain connectomic visualization based on the graph theory paradigm using the Compute Unified Device Architecture (CUDA), extending the previously proposed SurLens Visualization and computer-aided hepatocellular carcinoma frameworks. The accelerated brain structural connectivity framework was evaluated with stripped brain datasets from the Department of Surgery, University of North Carolina, Chapel Hill, USA. Significantly, our proposed framework is able to generate and extract points and edges of datasets, displays nodes and edges in the datasets in the form of a network and clearly maps data volume to the corresponding brain surface. Moreover, with the framework, surfaces of the dataset were simultaneously displayed with the

  4. Oral manifestations of connective tissue disease and novel therapeutic approaches.

    Science.gov (United States)

    Heath, Kenisha R; Rogers, Roy S; Fazel, Nasim

    2015-10-16

    Connective tissue diseases such as systemic lupus erythematosus (SLE), systemic sclerosis (SSc), and Sjögren syndrome (SS) have presented many difficulties both in their diagnosis and treatment. Known causes for this difficulty include uncertainty of disease etiology, the multitude of clinical presentations, the unpredictable disease course, and the variable cell types, soluble mediators, and tissue factors that are believed to play a role in the pathogenesis of connective tissue diseases. The characteristic oral findings seen with these specific connective tissue diseases may assist with more swift diagnostic capability. Additionally, the recent use of biologics may redefine the success rate in the treatment and management of the disease. In this review we describe the oral manifestations associated with SLE, SSc, and SS and review the novel biologic drugs used to treat these conditions.

  5. An IoT Based Predictive Connected Car Maintenance Approach

    Directory of Open Access Journals (Sweden)

    Rohit Dhall

    2017-03-01

    Internet of Things (IoT) is fast emerging and becoming an almost basic necessity in general life. The concept of using technology in our daily life is not new, but with the advancements in technology, the impact of technology on the daily activities of a person can be seen in almost all aspects of life. Today, all aspects of our daily life, be it the health of a person, his location, movement, etc., can be monitored and analyzed using information captured from various connected devices. This paper discusses one such use case, which can be implemented by the automobile industry using technological advancements in the areas of IoT and analytics. 'Connected car' is a term often associated with cars and other passenger vehicles which are capable of internet connectivity and of sharing various kinds of data with backend applications. The data being shared can be about the location and speed of the car, the status of various parts/lubricants of the car, and whether the car needs urgent service or not. Once data are transmitted to the backend services, various workflows can be created to take necessary actions, e.g. scheduling a service with the car service provider, or, if a large number of cars are in the same location, the traffic management system can take necessary action. 'Connected cars' can also communicate with each other, and can send alerts to each other in certain scenarios, like a possible crash. This paper talks about how the concept of 'connected cars' can be used to perform predictive car maintenance. It also discusses how certain technology components, i.e., Eclipse Mosquitto and Eclipse Paho, can be used to implement a predictive connected car use case.
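The decision step of such a workflow can be sketched as a small rule engine. In a deployment the readings would arrive over MQTT (e.g., an Eclipse Mosquitto broker with an Eclipse Paho client) and the alerts would trigger backend workflows; the sketch below keeps only the rule logic, and the field names and limits are invented.

```python
def maintenance_alerts(telemetry, thresholds=None):
    """Evaluate one telemetry reading against simple predictive rules."""
    thresholds = thresholds or {
        "oil_life_pct": 15,     # below -> schedule an oil service
        "coolant_temp_c": 110,  # above -> overheating risk
        "battery_v": 11.8,      # below -> battery degradation
    }
    alerts = []
    if telemetry.get("oil_life_pct", 100) < thresholds["oil_life_pct"]:
        alerts.append("schedule_oil_service")
    if telemetry.get("coolant_temp_c", 0) > thresholds["coolant_temp_c"]:
        alerts.append("overheating_risk")
    if telemetry.get("battery_v", 14) < thresholds["battery_v"]:
        alerts.append("battery_check")
    return alerts
```

A genuinely predictive system would replace the fixed thresholds with models trained on fleet history, but the surrounding plumbing (telemetry in, workflow triggers out) stays the same.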

  6. Understanding ecohydrological connectivity in savannas: A system dynamics modeling approach

    Science.gov (United States)

    Ecohydrological connectivity is a system-level property that results from the linkages in the networks of water transport through ecosystems, by which feedback effects and other emergent system behaviors may be generated. We created a systems dynamic model that represents primary ecohydrological net...

  7. Capturing the Interplay of Dynamics and Networks through Parameterizations of Laplacian Operators

    Science.gov (United States)

    2016-08-24

    …we describe an umbrella framework that unifies some of the well-known measures, connecting the ideas of centrality, communities and dynamical processes… change of basis. Parameterized centrality also leads to the definition of parameterized volume for subsets of vertices. Parameterized conductance… behind this definition is to establish a direct connection between centrality and community measures, as we will later demonstrate with the notion of…

  8. An IoT Based Predictive Connected Car Maintenance Approach

    OpenAIRE

    Rohit Dhall; Vijender Kumar Solanki

    2017-01-01

    Internet of Things (IoT) is fast emerging and becoming an almost basic necessity in general life. The concept of using technology in our daily life is not new, but with the advancements in technology, the impact of technology on the daily activities of a person can be seen in almost all aspects of life. Today, all aspects of our daily life, be it the health of a person, his location, movement, etc., can be monitored and analyzed using information captured from various connected devices. This pape...

  9. Focal species and landscape "naturalness" corridor models offer complementary approaches for connectivity conservation planning

    Science.gov (United States)

    Meade Krosby; Ian Breckheimer; D. John Pierce; Peter H. Singleton; Sonia A. Hall; Karl C. Halupka; William L. Gaines; Robert A. Long; Brad H. McRae; Brian L. Cosentino; Joanne P. Schuett-Hames

    2015-01-01

    Context: The dual threats of habitat fragmentation and climate change have led to a proliferation of approaches for connectivity conservation planning. Corridor analyses have traditionally taken a focal species approach, but the landscape "naturalness" approach of modeling connectivity among areas of low human modification has gained popularity...

  10. PHOTOGRAMMETRIC APPROACH IN DETERMINING BEAM-COLUMN CONNECTION DEFORMATIONS

    Directory of Open Access Journals (Sweden)

    Ali Koken

    In accordance with the advances in technology, displacement calculation techniques are ever developing. Photogrammetry has become preferable in some new disciplines with the advances in image processing methods. In this study, the authors used two different measurement techniques to determine the angles of rotation in beam-column connections subjected to reversible cyclic loading. The first is the conventional method widely used in structural mechanics experiments, where Linear Variable Differential Transformers (LVDTs) are utilized; the second is the photogrammetric measurement technique. The rotation angles were determined using these techniques in a total of ten steel beam-column connection experiments. After discussing the test procedures of the aforementioned methods, the results are presented. The rotation angles measured by the two methods were very close to each other, leading to the conclusion that the photogrammetric measurement technique can be used as an alternative to conventional methods employing electronic LVDTs.
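
    The core computation above, recovering a rotation angle from tracked marker coordinates, can be sketched as a least-squares 2-D rigid rotation estimate (the markers below are synthetic; this is not the authors' photogrammetric pipeline):

```python
import math

def rotation_angle(before, after):
    """Least-squares rigid rotation (radians) between two 2-D point sets."""
    n = len(before)
    cbx = sum(p[0] for p in before) / n
    cby = sum(p[1] for p in before) / n
    cax = sum(p[0] for p in after) / n
    cay = sum(p[1] for p in after) / n
    num = den = 0.0
    for (xb, yb), (xa, ya) in zip(before, after):
        xb, yb, xa, ya = xb - cbx, yb - cby, xa - cax, ya - cay
        num += xb * ya - yb * xa   # cross terms -> sin component
        den += xb * xa + yb * ya   # dot terms   -> cos component
    return math.atan2(num, den)

# synthetic check: three tracked markers rotated by 2 degrees
theta = math.radians(2.0)
pts = [(0.0, 0.0), (1.0, 0.0), (0.0, 2.0)]
rot = [(x * math.cos(theta) - y * math.sin(theta),
        x * math.sin(theta) + y * math.cos(theta)) for x, y in pts]
print(math.degrees(rotation_angle(pts, rot)))  # ~2.0
```

    Centering both point sets first removes any translation between frames, so only the rotation component is estimated.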

  11. Synaptic connectivity and spatial memory: a topological approach

    Science.gov (United States)

    Milton, Russell; Babichev, Andrey; Dabaghian, Yuri

    2015-03-01

    In the hippocampus, a network of place cells generates a cognitive map of space, in which each cell is responsive to a particular area of the environment - its place field. The peak response of each cell and the size of each place field have considerable variability. Experimental evidence suggests that place cells encode a topological map of space that serves as a basis of spatial memory and spatial awareness. Using a computational model based on Persistent Homology Theory we demonstrate that if the parameters of the place cells spiking activity fall inside of the physiological range, the network correctly encodes the topological features of the environment. We next introduce parameters of synaptic connectivity into the model and demonstrate that failures in synapses that detect coincident neuronal activity lead to spatial learning deficiencies similar to the ones that are observed in rodent models of neurodegenerative diseases. Moreover, we show that these learning deficiencies may be mitigated by increasing the number of active cells and/or by increasing their firing rate, suggesting the existence of a compensatory mechanism inherent to the cognitive map.
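
    A toy illustration of the idea that network connectivity determines the encoded topology: counting connected components (the zeroth Betti number) of a place-cell coactivity graph with union-find. The real analysis uses persistent homology over a filtration; the cell count and pair lists here are invented:

```python
def betti0(n_cells, coactive_pairs):
    """Number of connected components (Betti-0) of a coactivity graph, via union-find."""
    parent = list(range(n_cells))
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path halving
            i = parent[i]
        return i
    for u, v in coactive_pairs:
        parent[find(u)] = find(v)
    return len({find(i) for i in range(n_cells)})

# six place cells; a failed coincidence-detecting synapse removes the (2, 3) link
intact = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 5)]
damaged = [(0, 1), (1, 2), (3, 4), (4, 5)]
print(betti0(6, intact), betti0(6, damaged))  # 1 2
```

    The damaged network fragments the cognitive map into two pieces, the kind of defect that adding active cells or spikes can repair in the full model.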

  12. The Mind-Body Connection - Complementary and Alternative Approaches to Health

    Science.gov (United States)

    ... Navigation Bar Home Current Issue Past Issues The Mind-Body Connection Complementary and Alternative Approaches to Health ... Also, a 2007 study found that Tai Chi boosts resistance to the shingles virus in older adults." ...

  13. Hydrological connectivity in the karst critical zone: an integrated approach

    Science.gov (United States)

    Chen, X.; Zhang, Z.; Soulsby, C.; Cheng, Q.; Binley, A. M.; Tao, M.

    2017-12-01

    Spatial heterogeneity in the subsurface is high, evidenced by specific landform features (sinkholes, caves, etc.) and resulting in high variability of hydrological processes in space and time. This includes complex exchange of various flow sources (e.g. hillslope springs and depression aquifers) and fast conduit flow and slow fracture flow. In this paper we integrate various "state-of-the-art" methods to understand the structure and function of this understudied critical zone environment. Geophysical, hydrometric and hydrogeochemical tools are used to characterize the hydrological connectivity of the cockpit karst critical zone in a small catchment of Chenqi, Guizhou province, China. Geophysical surveys, using electrical resistivity tomography (ERT), identified the complex conduit networks that link flows between hillslopes and depressions. Statistical time series analysis of water tables and discharge responses at hillslope springs and in depression wells and underground channels showed different threshold responses of hillslope and depression flows. This reflected the differing relative contribution of fast and slow flow paths during rainfall events of varying magnitude in the hillslope epikarst and depression aquifer in dry and wet periods. This showed that the hillslope epikarst receives a high proportion of rainfall recharge and is thus a main water resource in the catchment during the drought period. In contrast, the depression aquifer receives fast, concentrated hillslope flows during large rainfall events during the wet period, resulting in the filling of depression conduits and frequent flooding. Hydrological tracer studies using water temperatures and stable water isotopes (δD and δ18O) corroborated this and provided quantitative information on the mixing proportions of various flow sources and insights into water travel times. This revealed how higher contributions of event "new" water (from hillslope springs and depression conduits) displaces "old" pre
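
    The two-component hydrograph separation underlying such tracer analysis can be written in a few lines (the δ values below are illustrative, not the Chenqi data):

```python
def new_water_fraction(d_sample, d_new, d_old):
    """Two-component hydrograph separation from a conservative tracer (e.g. d18O)."""
    return (d_sample - d_old) / (d_new - d_old)

# illustrative permil values: event rain -10.0, pre-event groundwater -6.0
f = new_water_fraction(d_sample=-8.5, d_new=-10.0, d_old=-6.0)
print(round(f, 3))  # 0.625 -> 62.5% event "new" water in the sampled flow
```

    The formula is just a mass balance on the tracer, assuming the two end-members are distinct and conservative.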

  14. Chebyshev-Taylor Parameterization of Stable/Unstable Manifolds for Periodic Orbits: Implementation and Applications

    Science.gov (United States)

    Mireles James, J. D.; Murray, Maxime

    2017-12-01

    This paper develops a Chebyshev-Taylor spectral method for studying stable/unstable manifolds attached to periodic solutions of differential equations. The work exploits the parameterization method — a general functional analytic framework for studying invariant manifolds. Useful features of the parameterization method include the fact that it can follow folds in the embedding, recovers the dynamics on the manifold through a simple conjugacy, and admits a natural notion of a posteriori error analysis. Our approach begins by deriving a recursive system of linear differential equations describing the Taylor coefficients of the invariant manifold. We represent periodic solutions of these equations as solutions of coupled systems of boundary value problems. We discuss the implementation and performance of the method for the Lorenz system, and for the planar circular restricted three- and four-body problems. We also illustrate the use of the method as a tool for computing cycle-to-cycle connecting orbits.
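
    A minimal Taylor-only sketch of the parameterization method, applied to the simpler case of a one-dimensional unstable manifold of a fixed point of the Hénon map (the paper's Chebyshev-Taylor machinery for periodic orbits is considerably more involved). The conjugacy f(P(θ)) = P(λθ) is solved order by order, and the a posteriori error is the conjugacy residual:

```python
import math

# classic Henon map f(x, y) = (1 + y - a*x^2, b*x)
a, b = 1.4, 0.3
x0 = (-(1 - b) + math.sqrt((1 - b) ** 2 + 4 * a)) / (2 * a)  # fixed point (x0, b*x0)
lam = -a * x0 - math.sqrt((a * x0) ** 2 + b)                 # unstable eigenvalue, |lam| > 1

N, scale = 30, 0.1
X = [x0, scale]   # Taylor coefficients of the x-component of P
# Matching coefficients of f(P(theta)) = P(lam*theta):
#   second component gives Y_n = b*X_n/lam^n; the first gives
#   b*X_n/lam^n - a*(2*X_0*X_n + S_n) = lam^n * X_n,  S_n = sum_{0<k<n} X_k*X_{n-k}
for n in range(2, N + 1):
    S = sum(X[k] * X[n - k] for k in range(1, n))
    X.append(a * S / (b / lam ** n - 2 * a * x0 - lam ** n))
Y = [b * X[n] / lam ** n for n in range(N + 1)]

def P(theta):
    return (sum(X[n] * theta ** n for n in range(N + 1)),
            sum(Y[n] * theta ** n for n in range(N + 1)))

# a posteriori error: conjugacy residual sampled on the chart [-1, 1]
res = max(max(abs((1 + P(t)[1] - a * P(t)[0] ** 2) - P(lam * t)[0]),
              abs(b * P(t)[0] - P(lam * t)[1]))
          for t in [i / 10 for i in range(-10, 11)])
print(res)  # small residual -> a validated local chart of the unstable manifold
```

    The `scale` of the first-order coefficient controls how fast the series decays; shrinking it trades chart size for accuracy, exactly the knob the a posteriori residual lets one tune.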

  15. Cell-based and biomaterial approaches to connective tissue repair

    Science.gov (United States)

    Stalling, Simone Suzette

    Connective tissue injuries of skin, tendon and ligament, heal by a reparative process in adults, filling the wound site with fibrotic, disorganized scar tissue that poorly reflects normal tissue architecture or function. Conversely, fetal skin and tendon have been shown to heal scarlessly. Complete regeneration is not intrinsically ubiquitous to all fetal tissues; fetal diaphragmatic and gastrointestinal injuries form scars. In vivo studies suggest that the presence of fetal fibroblasts is essential for scarless healing. In the orthopaedic setting, adult anterior cruciate ligament (ACL) heals poorly; however, little is known about the regenerative capacity of fetal ACL or fetal ACL fibroblasts. We characterized in vitro wound healing properties of fetal and adult ACL fibroblasts demonstrating that fetal ACL fibroblasts migrate faster and elaborate greater quantities of type I collagen, suggesting the healing potential of the fetal ACL may not be intrinsically poor. Similar to fetal ACL fibroblasts, fetal dermal fibroblasts also exhibit robust cellular properties. We investigated the age-dependent effects of dermal fibroblasts on tendon-to-bone healing in rat supraspinatus tendon injuries, a reparative injury model. We hypothesized delivery of fetal dermal fibroblasts would increase tissue organization and mechanical properties in comparison to adult dermal fibroblasts. However, at 1 and 8 weeks, the presence of dermal fibroblasts, either adult or fetal, had no significant effect on tissue histology or mechanical properties. There was a decreasing trend in cross-sectional area of repaired tendons treated with fetal dermal fibroblasts in comparison to adult, but this finding was not significant in comparison to controls. Finally, we synthesized a novel polysaccharide, methacrylated methylcellulose (MA-MC), and fabricated hydrogels using a well-established photopolymerization technique. We characterized the physical and mechanical properties of MA-MC hydrogels in

  16. Building bridges connections and challenges in modern approaches to numerical partial differential equations

    CERN Document Server

    Brezzi, Franco; Cangiani, Andrea; Georgoulis, Emmanuil

    2016-01-01

    This volume contains contributed survey papers from the main speakers at the LMS/EPSRC Symposium “Building bridges: connections and challenges in modern approaches to numerical partial differential equations”. This meeting took place in July 8-16, 2014, and its main purpose was to gather specialists in emerging areas of numerical PDEs, and explore the connections between the different approaches. The type of contributions ranges from the theoretical foundations of these new techniques, to the applications of them, to new general frameworks and unified approaches that can cover one, or more than one, of these emerging techniques.

  17. Approaches in highly parameterized inversion—PEST++ Version 3, a Parameter ESTimation and uncertainty analysis software suite optimized for large environmental models

    Science.gov (United States)

    Welter, David E.; White, Jeremy T.; Hunt, Randall J.; Doherty, John E.

    2015-09-18

    The PEST++ Version 1 object-oriented parameter estimation code is here extended to Version 3 to incorporate additional algorithms and tools to further improve support for large and complex environmental modeling problems. PEST++ Version 3 includes the Gauss-Marquardt-Levenberg (GML) algorithm for nonlinear parameter estimation, Tikhonov regularization, integrated linear-based uncertainty quantification, options of integrated TCP/IP based parallel run management or external independent run management by use of a Version 2 update of the GENIE Version 1 software code, and utilities for global sensitivity analyses. The Version 3 code design is consistent with PEST++ Version 1 and continues to be designed to lower the barriers of entry for users as well as developers while providing efficient and optimized algorithms capable of accommodating large, highly parameterized inverse problems. As such, this effort continues the original focus of (1) implementing the most popular and powerful features of the PEST software suite in a fashion that is easy for novice or experienced modelers to use and (2) developing a software framework that is easy to extend.
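
    A sketch of the algorithm class PEST++ implements: a Gauss-Marquardt-Levenberg iteration with a Tikhonov penalty pulling toward prior parameter values, here for a toy two-parameter exponential model (this is not PEST++ code; the model, prior, and damping constants are invented):

```python
import math

def model(p, t):
    a, k = p
    return [a * math.exp(-k * ti) for ti in t]

def gml_tikhonov(t, obs, p, prior, mu=1e-2, beta=1e-4, iters=100):
    """Damped Gauss-Newton (GML) iteration with a Tikhonov pull toward `prior`."""
    for _ in range(iters):
        m0 = model(p, t)
        r = [o - m for o, m in zip(obs, m0)]
        cols = []                                   # finite-difference Jacobian columns
        for j in range(2):
            q = list(p)
            h = 1e-7
            q[j] += h
            cols.append([(mq - m) / h for mq, m in zip(model(q, t), m0)])
        # 2x2 normal equations: (J^T J + (mu + beta) I) d = J^T r + beta (prior - p)
        a11 = sum(c * c for c in cols[0]) + mu + beta
        a22 = sum(c * c for c in cols[1]) + mu + beta
        a12 = sum(x * y for x, y in zip(cols[0], cols[1]))
        b1 = sum(c * ri for c, ri in zip(cols[0], r)) + beta * (prior[0] - p[0])
        b2 = sum(c * ri for c, ri in zip(cols[1], r)) + beta * (prior[1] - p[1])
        det = a11 * a22 - a12 * a12
        p = [p[0] + (b1 * a22 - b2 * a12) / det,
             p[1] + (a11 * b2 - a12 * b1) / det]
    return p

t = [0.1 * i for i in range(20)]
obs = model([2.0, 1.5], t)                          # noise-free synthetic observations
est = gml_tikhonov(t, obs, [1.0, 1.0], prior=[1.0, 1.0])
print(est)  # close to [2.0, 1.5]
```

    With informative data the Tikhonov term barely biases the solution; when data are weak it keeps the inversion well-posed by anchoring parameters to the prior.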

  18. Parameterized Linear Longitudinal Airship Model

    Science.gov (United States)

    Kulczycki, Eric; Elfes, Alberto; Bayard, David; Quadrelli, Marco; Johnson, Joseph

    2010-01-01

    A parameterized linear mathematical model of the longitudinal dynamics of an airship is undergoing development. This model is intended to be used in designing control systems for future airships that would operate in the atmospheres of Earth and remote planets. Heretofore, the development of linearized models of the longitudinal dynamics of airships has been costly in that it has been necessary to perform extensive flight testing and to use system-identification techniques to construct models that fit the flight-test data. The present model is a generic one that can be relatively easily specialized to approximate the dynamics of specific airships at specific operating points, without need for further system identification, and with significantly less flight testing. The approach taken in the present development is to merge the linearized dynamical equations of an airship with techniques for estimation of aircraft stability derivatives, and to thereby make it possible to construct a linearized dynamical model of the longitudinal dynamics of a specific airship from geometric and aerodynamic data pertaining to that airship. (It is also planned to develop a model of the lateral dynamics by use of the same methods.) All of the aerodynamic data needed to construct the model of a specific airship can be obtained from wind-tunnel testing and computational fluid dynamics

  19. Dual Youla parameterization

    DEFF Research Database (Denmark)

    Niemann, Hans Henrik

    2003-01-01

    A different aspect of using the parameterisation of all systems stabilised by a given controller, i.e. the dual Youla parameterisation, is considered. The relation between system change and the dual Youla parameter is derived in explicit form. A number of standard uncertain model descriptions are considered and the relation with the dual Youla parameter given. Some applications of the dual Youla parameterisation are considered in connection with the design of controllers and model/performance validation.

  20. A QCQP Approach for OPF in Multiphase Radial Networks with Wye and Delta Connections: Preprint

    Energy Technology Data Exchange (ETDEWEB)

    Zamzam, Ahmed S.; Zhao, Changhong; Dall'Anese, Emiliano; Sidiropoulos, Nicholas D.

    2017-06-27

    This paper examines the AC Optimal Power Flow (OPF) problem for multiphase distribution networks featuring renewable energy resources (RESs). We start by outlining a power flow model for radial multiphase systems that accommodates wye-connected and delta-connected RESs and non-controllable energy assets. We then formalize an AC OPF problem that accounts for both types of connections. Similar to various AC OPF renditions, the resultant problem is a nonconvex quadratically constrained quadratic program (QCQP). However, the so-called Feasible Point Pursuit-Successive Convex Approximation algorithm is leveraged to obtain a feasible and yet locally optimal solution. The merits of the proposed solution approach are demonstrated using two unbalanced multiphase distribution feeders with both wye and delta connections.

  1. Role of connected mobility concept for twenty-first-century cities—Trial approach for conceptualization of connected mobility through case studies

    Directory of Open Access Journals (Sweden)

    Fumihiko Nakamura

    2014-07-01

    The author defined the idea of “mobility design” in the scope of urban transportation and explored the concept of connected mobility through case studies that the author has been involved in or researched. Although many important connections in and approaches to urban transportation have come to light, the process of actually working on such projects has uncovered many issues to address such as sharing and social capital. The ability to design mobility as a connected entity and pursue our research topics from that perspective will be vital to overcoming the issues highlighted above and helping the concept of connected mobility flourish.

  2. Structural and parametric uncertainty quantification in cloud microphysics parameterization schemes

    Science.gov (United States)

    van Lier-Walqui, M.; Morrison, H.; Kumjian, M. R.; Prat, O. P.; Martinkus, C.

    2017-12-01

    Atmospheric model parameterization schemes employ approximations to represent the effects of unresolved processes. These approximations are a source of forecast error, caused in part by considerable uncertainty about the optimal values of parameters within each scheme -- parametric uncertainty. Furthermore, there is uncertainty regarding the best choice of the overarching structure of the parameterization scheme -- structural uncertainty. Parameter estimation can constrain the first, but may struggle with the second because structural choices are typically discrete. We address this problem in the context of cloud microphysics parameterization schemes by creating a flexible framework wherein structural and parametric uncertainties can be simultaneously constrained. Our scheme makes no assumptions about drop size distribution shape or the functional form of parameterized process rate terms. Instead, these uncertainties are constrained by observations using a Markov chain Monte Carlo sampler within a Bayesian inference framework. Our scheme, the Bayesian Observationally-constrained Statistical-physical Scheme (BOSS), has the flexibility to predict various sets of prognostic drop size distribution moments as well as varying complexity of process rate formulations. We compare idealized probabilistic forecasts from versions of BOSS with varying levels of structural complexity. This work has applications in ensemble forecasts with model physics uncertainty, data assimilation, and cloud microphysics process studies.
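
    The parameter-estimation half of such a framework can be sketched with a random-walk Metropolis sampler constraining an assumed power-law process rate R = a·M^b against synthetic observations (the rate form, noise level, and flat priors are invented for illustration; BOSS itself is far more general):

```python
import math, random

random.seed(1)
true_a, true_b = 0.5, 1.8
M = [0.2 * i + 0.1 for i in range(1, 21)]            # predictor (moment) values
obs = [true_a * m ** true_b + random.gauss(0, 0.1) for m in M]

def log_post(a, b):
    """Flat priors, Gaussian likelihood for the assumed rate law R = a*M^b."""
    if a <= 0:
        return -float('inf')
    sse = sum((o - a * m ** b) ** 2 for o, m in zip(obs, M))
    return -sse / (2 * 0.1 ** 2)

a, b, lp = 1.0, 1.0, log_post(1.0, 1.0)
samples = []
for i in range(30000):
    na, nb = a + random.gauss(0, 0.02), b + random.gauss(0, 0.02)
    nlp = log_post(na, nb)
    if nlp - lp > math.log(random.random()):         # Metropolis acceptance
        a, b, lp = na, nb, nlp
    if i >= 15000:                                   # keep post burn-in samples
        samples.append((a, b))

mean_a = sum(s[0] for s in samples) / len(samples)
mean_b = sum(s[1] for s in samples) / len(samples)
print(mean_a, mean_b)  # posterior means near the true 0.5 and 1.8
```

    The posterior spread, not shown here, is what carries the parametric uncertainty into ensemble forecasts.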

  3. Constructing IGA-suitable planar parameterization from complex CAD boundary by domain partition and global/local optimization

    Science.gov (United States)

    Xu, Gang; Li, Ming; Mourrain, Bernard; Rabczuk, Timon; Xu, Jinlan; Bordas, Stéphane P. A.

    2018-01-01

    In this paper, we propose a general framework for constructing IGA-suitable planar B-spline parameterizations from given complex CAD boundaries consisting of a set of B-spline curves. Instead of forming the computational domain by a simple boundary, planar domains with high genus and more complex boundary curves are considered. Firstly, some pre-processing operations including Bézier extraction and subdivision are performed on each boundary curve in order to generate a high-quality planar parameterization; then a robust planar domain partition framework is proposed to construct high-quality patch-meshing results with few singularities from the discrete boundary formed by connecting the end points of the resulting boundary segments. After the topology information generation of quadrilateral decomposition, the optimal placement of interior Bézier curves corresponding to the interior edges of the quadrangulation is constructed by a global optimization method to achieve a patch partition with high quality. Finally, after the imposition of C1/G1-continuity constraints on the interface of neighboring Bézier patches with respect to each quad in the quadrangulation, the high-quality Bézier patch parameterization is obtained by a C1-constrained local optimization method to achieve uniform and orthogonal iso-parametric structures while keeping the continuity conditions between patches. The efficiency and robustness of the proposed method are demonstrated by several examples which are compared to results obtained by the skeleton-based parameterization approach.

  4. A Multimodal Approach for Determining Brain Networks by Jointly Modeling Functional and Structural Connectivity

    Directory of Open Access Journals (Sweden)

    Wenqiong eXue

    2015-02-01

    Recent innovations in neuroimaging technology have provided opportunities for researchers to investigate connectivity in the human brain by examining the anatomical circuitry as well as functional relationships between brain regions. Existing statistical approaches for connectivity generally examine resting-state or task-related functional connectivity (FC) between brain regions or separately examine structural linkages. As a means to determine brain networks, we present a unified Bayesian framework for analyzing FC utilizing the knowledge of associated structural connections, which extends an approach by Patel et al. (2006a) that considers only functional data. We introduce an FC measure that rests upon assessments of functional coherence between regional brain activity identified from functional magnetic resonance imaging (fMRI) data. Our structural connectivity (SC) information is drawn from diffusion tensor imaging (DTI) data, which is used to quantify probabilities of SC between brain regions. We formulate a prior distribution for FC that depends upon the probability of SC between brain regions, with this dependence adhering to structural-functional links revealed by our fMRI and DTI data. We further characterize the functional hierarchy of functionally connected brain regions by defining an ascendancy measure that compares the marginal probabilities of elevated activity between regions. In addition, we describe topological properties of the network, which is composed of connected region pairs, by performing graph theoretic analyses. We demonstrate the use of our Bayesian model using fMRI and DTI data from a study of auditory processing. We further illustrate the advantages of our method by comparisons to methods that only incorporate functional information.
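
    The flavor of an SC-informed prior can be shown with a toy Bayes update for a single region pair: the prior probability of a functional connection increases with structural-connection probability, and an observed correlation then updates it (all likelihood shapes and constants here are invented, not the paper's model):

```python
import math

def posterior_fc(corr, sc_prob):
    """Toy posterior probability of a functional connection, given an observed
    correlation and a structural-connection probability (illustrative constants)."""
    prior = 1 / (1 + math.exp(-(4 * sc_prob - 2)))        # logistic in SC probability
    # assumed Gaussian likelihoods of the correlation under FC=1 vs FC=0
    like1 = math.exp(-((corr - 0.6) ** 2) / (2 * 0.2 ** 2))
    like0 = math.exp(-((corr - 0.0) ** 2) / (2 * 0.2 ** 2))
    return prior * like1 / (prior * like1 + (1 - prior) * like0)

# same observed correlation, stronger belief when a structural link is probable
print(round(posterior_fc(0.4, sc_prob=0.9), 3),
      round(posterior_fc(0.4, sc_prob=0.1), 3))  # 0.957 0.475
```

    The asymmetry between the two outputs is the point: identical functional evidence is weighed differently depending on the anatomical prior.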

  5. A probabilistic approach to quantifying spatial patterns of flow regimes and network-scale connectivity

    Science.gov (United States)

    Garbin, Silvia; Alessi Celegon, Elisa; Fanton, Pietro; Botter, Gianluca

    2017-04-01

    The temporal variability of river flow regime is a key feature structuring and controlling fluvial ecological communities and ecosystem processes. In particular, streamflow variability induced by climate/landscape heterogeneities or other anthropogenic factors significantly affects the connectivity between streams, with notable implications for river fragmentation. Hydrologic connectivity is a fundamental property that guarantees species persistence and ecosystem integrity in riverine systems. In riverine landscapes, most ecological transitions are flow-dependent and the structure of flow regimes may affect ecological functions of endemic biota (i.e., fish spawning or grazing of invertebrate species). Therefore, minimum flow thresholds must be guaranteed to support specific ecosystem services, like fish migration, aquatic biodiversity and habitat suitability. In this contribution, we present a probabilistic approach aiming at a spatially explicit, quantitative assessment of hydrologic connectivity at the network scale as derived from river flow variability. Dynamics of daily streamflows are estimated based on catchment-scale climatic and morphological features, integrating a stochastic, physically based approach that accounts for the stochasticity of rainfall with a water balance model and a geomorphic recession flow model. The non-exceedance probability of ecologically meaningful flow thresholds is used to evaluate the fragmentation of individual stream reaches, and the ensuing network-scale connectivity metrics. A multi-dimensional Poisson process for the stochastic generation of rainfall is used to evaluate the impact of climate signature on reach-scale and catchment-scale connectivity. The analysis shows that streamflow patterns and network-scale connectivity are influenced by the topology of the river network and the spatial variability of climatic properties (rainfall, evapotranspiration). The framework offers a robust basis for the prediction of the impact of
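
    The non-exceedance-probability idea can be sketched with a stochastic daily water balance: Bernoulli storm arrivals as a discrete stand-in for a Poisson process, exponential storm depths, and a linear-reservoir recession (all parameter values are illustrative):

```python
import random

random.seed(42)
freq, mean_depth = 0.3, 10.0   # storm probability per day, mean storm depth [mm]
k, et = 0.05, 0.2              # linear-reservoir constant [1/day], daily loss [mm]

def simulate(days=20000):
    """Daily water balance driving linear-reservoir streamflow Q = k*S."""
    S, flows = 0.0, []
    for _ in range(days):
        if random.random() < freq:                 # storm arrival
            S += random.expovariate(1.0 / mean_depth)
        S = max(S - et, 0.0)                       # evapotranspiration-like loss
        q = k * S                                  # linear reservoir outflow
        S -= q
        flows.append(q)
    return flows

q = simulate()
threshold = 0.5                 # ecologically meaningful flow threshold [mm/day]
p_below = sum(f < threshold for f in q) / len(q)
print(p_below)                  # estimated non-exceedance probability for this reach
```

    Repeating the estimate reach by reach, with climate parameters varying in space, yields the network-scale fragmentation picture the abstract describes.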

  6. An individual-based modelling approach to estimate landscape connectivity for bighorn sheep (Ovis canadensis)

    Directory of Open Access Journals (Sweden)

    Corrie H. Allen

    2016-05-01

    Background. Preserving connectivity, or the ability of a landscape to support species movement, is among the most commonly recommended strategies to reduce the negative effects of climate change and human land use development on species. Connectivity analyses have traditionally used a corridor-based approach and rely heavily on least-cost path modeling and circuit theory to delineate corridors. Individual-based models are gaining popularity as a potentially more ecologically realistic method of estimating landscape connectivity. However, this remains a relatively unexplored approach. We sought to explore the utility of a simple, individual-based model as a land-use management support tool in identifying and implementing landscape connectivity. Methods. We created an individual-based model of bighorn sheep (Ovis canadensis) that simulates a bighorn sheep traversing a landscape by following simple movement rules. The model was calibrated for bighorn sheep in the Okanagan Valley, British Columbia, Canada, a region containing isolated herds that are vital to conservation of the species in its northern range. Simulations were run to determine baseline connectivity between subpopulations in the study area. We then applied the model to explore the effect of two land management scenarios on simulated connectivity: restoring natural fire regimes and identifying appropriate sites for interventions that would increase road permeability for bighorn sheep. Results. This model suggests there are no continuous areas of good habitat between current subpopulations of sheep in the study area; however, a series of stepping-stones or circuitous routes could facilitate movement between subpopulations and into currently unoccupied, yet suitable, bighorn habitat. Restoring natural fire regimes or mimicking fire with prescribed burns and tree removal could considerably increase bighorn connectivity in this area. Moreover, several key road crossing sites that could benefit from
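
    A minimal individual-based connectivity estimate in the same spirit: agents follow a simple habitat-weighted movement rule on a grid, and connectivity is the fraction of walkers that reach the target patch (the grid, weights, and step budget are invented, far simpler than the calibrated bighorn model):

```python
import random

random.seed(7)
# 1 = good habitat, 0 = poor; start patch top-left, target patch top-right
grid = [
    [1, 1, 0, 0, 1, 1],
    [1, 1, 1, 0, 1, 1],
    [0, 0, 1, 1, 1, 0],
    [0, 0, 0, 1, 0, 0],
]
ROWS, COLS = len(grid), len(grid[0])
target = (0, 5)

def walk(start, steps=200):
    """One agent on a habitat-weighted random walk; True if it reaches the target."""
    r, c = start
    for _ in range(steps):
        moves = [(r + dr, c + dc) for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1))
                 if 0 <= r + dr < ROWS and 0 <= c + dc < COLS]
        weights = [3 if grid[nr][nc] else 1 for nr, nc in moves]  # prefer good habitat 3:1
        r, c = random.choices(moves, weights=weights)[0]
        if (r, c) == target:
            return True
    return False

trials = 500
reached = sum(walk((0, 0)) for _ in range(trials))
print(reached / trials)   # estimated start-to-target connectivity
```

    Management scenarios (e.g. converting a poor cell to good habitat to mimic a prescribed burn) can then be compared by re-running the same simulation on the altered grid.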

  7. Finite element analysis of an extended end-plate connection using the T-stub approach

    Energy Technology Data Exchange (ETDEWEB)

    Muresan, Ioana Cristina; Balc, Roxana [Technical University of Cluj-Napoca, Faculty of Civil Engineering. 15 C Daicoviciu Str., 400020, Cluj-Napoca (Romania)

    2015-03-10

    Beam-to-column end-plate bolted connections are usually used as moment-resistant connections in steel framed structures. For this joint type, the deformability is governed by the deformation capacity of the column flange and end-plate under tension and elongation of the bolts. All these elements around the beam tension flange form the tension region of the joint, which can be modeled by means of equivalent T-stubs. In this paper a beam-to-column end-plate bolted connection is substituted with a T-stub of appropriate effective length and it is analyzed using the commercially available finite element software ABAQUS. The performance of the model is validated by comparing the behavior of the T-stub from the numerical simulation with the behavior of the connection as a whole. The moment-rotation curve of the T-stub obtained from the numerical simulation is compared with the behavior of the whole extended end-plate connection, obtained by numerical simulation, experimental tests and analytical approach.

  8. Rural postman parameterized by the number of components of required edges

    DEFF Research Database (Denmark)

    Gutin, Gregory; Wahlström, Magnus; Yeo, Anders

    2017-01-01

    In the Directed Rural Postman Problem (DRPP), given a strongly connected directed multigraph D=(V,A) with nonnegative integral weights on the arcs, a subset R of required arcs and a nonnegative integer ℓ, decide whether D has a closed directed walk containing every arc of R and of weight at most ℓ. Let k be the number of weakly connected components in the subgraph of D induced by R. Sorge et al. [30] asked whether the DRPP is fixed-parameter tractable (FPT) when parameterized by k, i.e., whether there is an algorithm of running time O⁎(f(k)) where f is a function of k only and the O⁎ notation suppresses polynomial factors. Using an algebraic approach, we prove that DRPP has a randomized algorithm of running time O⁎(2^k) when ℓ is bounded by a polynomial in the number of vertices in D. The same result holds for the undirected version of DRPP.
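
    The O⁎(2^k) algebraic algorithm itself is involved, but the way run time scales with the number k of required components can be illustrated with a Held-Karp dynamic program over subsets of components, here on an abstract inter-component cost matrix (an illustration of parameterization by k, not the paper's algorithm):

```python
def cheapest_tour(cost):
    """Held-Karp dynamic program over subsets: O(2^k * k^2) work for k components."""
    k = len(cost)
    INF = float('inf')
    dp = [[INF] * k for _ in range(1 << k)]   # dp[S][j]: best walk covering set S, ending at j
    dp[1][0] = 0.0
    for S in range(1 << k):
        for j in range(k):
            if dp[S][j] == INF or not (S >> j) & 1:
                continue
            for nxt in range(k):
                if (S >> nxt) & 1:
                    continue
                T = S | (1 << nxt)
                cand = dp[S][j] + cost[j][nxt]
                if cand < dp[T][nxt]:
                    dp[T][nxt] = cand
    full = (1 << k) - 1
    return min(dp[full][j] + cost[j][0] for j in range(k))

# hypothetical traversal costs between 4 required components (asymmetric)
cost = [[0, 2, 9, 10],
        [1, 0, 6, 4],
        [15, 7, 0, 8],
        [6, 3, 12, 0]]
print(cheapest_tour(cost))  # 21.0
```

    The exponential factor lives entirely in the 2^k subset table, which is what makes the component count, rather than the graph size, the natural parameter.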

  9. Reliable control using the primary and dual Youla parameterizations

    DEFF Research Database (Denmark)

    Niemann, Hans Henrik; Stoustrup, J.

    2002-01-01

    Different aspects of modeling faults in dynamic systems are considered in connection with reliable control (RC). The fault models include models with additive faults, multiplicative faults and structural changes in the models due to faults in the systems. These descriptions are considered in connection with reliable control and feedback control with fault rejection. The main emphasis is on fault modeling. A number of fault diagnosis problems, reliable control problems, and feedback control with fault rejection problems are formulated/considered, again, mainly from a fault modeling point of view. Reliability is introduced by means of the (primary) Youla parameterization of all stabilizing controllers, where an additional loop is closed around a diagnostic signal. In order to quantify the level of reliability, the dual Youla parameterization is introduced which can be used to analyze how large faults...

  10. Characterizing the Surface Connectivity of Depressional Wetlands: Linking Remote Sensing and Hydrologic Modeling Approaches

    Science.gov (United States)

    Christensen, J.; Evenson, G. R.; Vanderhoof, M.; Wu, Q.; Golden, H. E.; Lane, C.

    2017-12-01

    Surface connectivity of wetlands in the 700,000 km2 Prairie Pothole Region of North America (PPR) can occur through fill-spill and fill-merge mechanisms, with some wetlands eventually spilling into stream/river systems. These wetland-to-wetland and wetland-to-stream connections vary both spatially and temporally in PPR watersheds and are important to understanding hydrologic and biogeochemical processes in the landscape. To explore how to best characterize spatial and temporal variability in aquatic connectivity, we compared three approaches, 1) hydrological modeling alone, 2) remotely-sensed data alone, and 3) integrating remotely-sensed data into a hydrological model. These approaches were tested in the Pipestem Creek Watershed, North Dakota across a drought to deluge cycle (1990-2011). A Soil and Water Assessment Tool (SWAT) model was modified to include the water storage capacity of individual non-floodplain wetlands identified in the National Wetland Inventory (NWI) dataset. The SWAT-NWI model simulated the water balance and storage of each wetland and the temporal variability of their hydrologic connections between wetlands during the 21-year study period. However, SWAT-NWI only accounted for fill-spill, and did not allow for the expansion and merging of wetlands situated within larger depressions. Alternatively, we assessed the occurrence of fill-merge mechanisms using inundation maps derived from Landsat images on 19 cloud-free days during the 21 years. We found fill-merge mechanisms to be prevalent across the Pipestem watershed during times of deluge. The SWAT-NWI model was then modified to use LiDAR-derived depressions that account for the potential maximum depression extent, including the merging of smaller wetlands. The inundation maps were used to evaluate the ability of the SWAT-depression model to simulate fill-merge dynamics in addition to fill-spill dynamics throughout the study watershed. Ultimately, using remote sensing to inform and validate
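
    The fill-spill mechanism that SWAT-NWI captures (but fill-merge does not) can be sketched as water routed through a chain of depressional wetlands (capacities and inflows are illustrative):

```python
def fill_spill(capacities, inflow):
    """Route water through a chain of depressional wetlands (fill-spill only).
    Returns per-wetland storage and the volume spilled onward to the stream."""
    storages = []
    water = inflow
    for cap in capacities:
        stored = min(cap, water)   # wetland fills to capacity
        storages.append(stored)
        water -= stored            # surplus spills to the next wetland downstream
    return storages, water         # leftover water reaches the stream

caps = [5.0, 3.0, 8.0]
print(fill_spill(caps, 10.0))   # ([5.0, 3.0, 2.0], 0.0) -- no stream connection yet
print(fill_spill(caps, 20.0))   # ([5.0, 3.0, 8.0], 4.0) -- wetlands spill to the stream
```

    Representing fill-merge would instead require the merged, LiDAR-derived depression extents, since adjacent wetlands coalesce into one larger water body rather than spilling in sequence.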

  11. A Unified Current Loop Tuning Approach for Grid-Connected Photovoltaic Inverters

    Directory of Open Access Journals (Sweden)

    Weiyi Zhang

    2016-09-01

    High-level penetration of renewable energy sources has reshaped modern electrical grids. For the future grid, distributed renewable power generation plants can be integrated on a larger scale. Control of grid-connected converters is required to achieve fast power reference tracking and further to provide grid-supporting and fault ride-through performance. Among all aspects of converter control, the inner current loop of grid-connected converters shapes system performance considerably. This paper proposes a unified current loop tuning approach for grid-connected converters that is generally applicable in different cases. A direct discrete-time domain tuning procedure is used and, in particular, the selection of the phase margin and crossover frequency is analyzed, which is the main difference compared with existing studies. As a general method, approximation in the modeling of the controller and grid filter is avoided. The effectiveness of the tuning approach is validated in both simulation and experimental results with respect to power reference tracking, frequency and voltage supporting.
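
    A continuous-time sketch of tuning a current-loop PI controller from a chosen crossover frequency and phase margin, for an assumed L-R grid filter (the paper tunes directly in the discrete-time domain; all values here are invented):

```python
import cmath, math

L, R = 2e-3, 0.1               # assumed grid-filter inductance [H] and resistance [ohm]
f_c, pm = 500.0, 60.0          # chosen crossover frequency [Hz] and phase margin [deg]
wc = 2 * math.pi * f_c

G = lambda w: 1 / (1j * w * L + R)                       # plant: L-R filter admittance

# phase condition at wc: angle(C) + angle(G) = -180 deg + pm
phi_c = math.radians(pm - 180) - cmath.phase(G(wc))      # required (negative) PI phase
Ti = 1 / (wc * math.tan(-phi_c))                         # from angle(C) = -atan(1/(Ti*w))
C = lambda w, Kp: Kp * (1 + 1 / (1j * w * Ti))           # PI controller
Kp = 1 / abs(C(wc, 1.0) * G(wc))                         # magnitude condition |C*G| = 1

loop = C(wc, Kp) * G(wc)
print(abs(loop), math.degrees(cmath.phase(loop)) + 180)  # ~1.0, ~60.0
```

    The two design targets map one-to-one onto the two PI degrees of freedom: the phase condition fixes the integral time Ti, and the magnitude condition then fixes the gain Kp.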

  12. An approach to the conversion of the power generated by an offshore wind power farm connected into seawave power generator

    Energy Technology Data Exchange (ETDEWEB)

    Franzitta, Vicenzo; Messineo, Antonio; Trapanese, Marco

    2011-07-01

    Renewable energy systems have been under development for decades, but the sea-wave energy resource remains under-utilized. This under-utilization has several reasons: the energy concentration in sea waves is low, extracting this energy requires leading-edge technologies, and converting it into electrical energy is difficult. This study compares two methods of connecting sea-wave generators to the network and to an offshore wind power farm. The first is a decentralized approach: each generator is connected to the grid through its own AC converter. The second is a partially centralized approach: a rectifier is connected to each generator, all generators are then connected to a common DC bus, and the power is converted to AC for grid connection. The study shows that the partially centralized approach is more reliable and efficient than the decentralized approach.

  13. Connected Green function approach to symmetry breaking in Φ⁴-theory in 1+1 dimensions

    International Nuclear Information System (INIS)

    Haeuser, J.M.; Cassing, W.; Peter, A.; Thoma, M.H.

    1995-01-01

    Using the cluster expansions for n-point Green functions we derive a closed set of dynamical equations of motion for connected equal-time Green functions by neglecting all connected functions higher than fourth order for the λΦ⁴-theory in 1+1 dimensions. We apply the equations to the investigation of spontaneous symmetry breaking, i.e. to the evaluation of the effective potential at temperature T=0. Within our momentum space discretization we obtain a second order phase transition (in agreement with the Simon-Griffith theorem) and a critical coupling of λ_crit/4m² = 2.446, as compared to a first order phase transition and λ_crit/4m² = 2.568 from the Gaussian effective potential approach. (orig.)

  14. Parameterization of mixing by secondary circulation in estuaries

    Science.gov (United States)

    Basdurak, N. B.; Huguenard, K. D.; Valle-Levinson, A.; Li, M.; Chant, R. J.

    2017-07-01

    Eddy viscosity parameterizations that depend on a gradient Richardson number Ri have been most pertinent to the open ocean. Parameterizations applicable to stratified coastal regions typically require implementation of a numerical model. Two novel parameterizations of the vertical eddy viscosity, based on Ri, are proposed here for coastal waters. One turbulence closure considers temporal changes in stratification and bottom stress and is coined the "regular fit." The alternative approach, named the "lateral fit," incorporates variability of lateral flows that are prevalent in estuaries. The two turbulence parameterization schemes are tested using data from a Self-Contained Autonomous Microstructure Profiler (SCAMP) and an Acoustic Doppler Current Profiler (ADCP) collected in the James River Estuary. The "regular fit" compares favorably to SCAMP-derived vertical eddy viscosity values but only at relatively small values of gradient Ri. On the other hand, the "lateral fit" succeeds at describing the lateral variability of eddy viscosity over a wide range of Ri. The modifications proposed to Ri-dependent eddy viscosity parameterizations allow applicability to stratified coastal regions, particularly in wide estuaries, without requiring implementation of a numerical model.
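The structure of an Ri-dependent eddy viscosity closure can be sketched with the classic Munk-Anderson form; this is shown only to illustrate how stratification suppresses mixing, and is not the "regular fit" or "lateral fit" of the study, whose coefficients the abstract does not give.

```python
def eddy_viscosity(nu0, ri, alpha=10.0, n=0.5):
    """Ri-damped eddy viscosity, nu = nu0 * (1 + alpha*Ri)**(-n).

    The classic Munk-Anderson form: stratification (Ri > 0) suppresses
    vertical mixing, and Ri = 0 recovers the neutral value nu0. The
    coefficients alpha and n are the traditional choices, not fitted ones.
    """
    ri = max(ri, 0.0)   # treat unstable (negative) Ri as neutral in this sketch
    return nu0 * (1.0 + alpha * ri) ** (-n)

neutral = eddy_viscosity(1e-2, 0.0)      # no stratification: nu = nu0
stratified = eddy_viscosity(1e-2, 1.0)   # strong stratification: nu < nu0
```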

  15. Evaluating the effectiveness of restoring longitudinal connectivity for stream fish communities: towards a more holistic approach.

    Science.gov (United States)

    Tummers, Jeroen S; Hudson, Steve; Lucas, Martyn C

    2016-11-01

    A more holistic approach towards testing longitudinal connectivity restoration is needed in order to establish that the intended ecological functions of such restoration are achieved. We illustrate the use of a multi-method scheme to evaluate the effectiveness of 'nature-like' connectivity restoration for stream fish communities in the River Deerness, NE England. Electric-fishing, capture-mark-recapture, PIT telemetry and radio-telemetry were used to measure fish community composition, dispersal, fishway efficiency and upstream migration, respectively. For measuring passage and dispersal, our rationale was to evaluate a wide size range of strong swimmers (exemplified by brown trout Salmo trutta) and weak swimmers (exemplified by bullhead Cottus perifretum) in situ in the stream ecosystem. Radio-tracking of adult trout during the spawning migration showed that passage efficiency at each of five connectivity-restored sites was 81.3-100%. Unaltered (experimental control) structures on the migration route had a bottleneck effect on upstream migration, especially during low flows. However, even during low flows, displaced PIT-tagged juvenile trout (total n=153) exhibited a passage efficiency of 70.1-93.1% at two nature-like passes. In mark-recapture experiments, juvenile brown trout and bullhead tagged (total n=5303) succeeded in dispersing upstream more often at most structures following obstacle modification, but not at the two control sites, based on a Laplace kernel modelling approach of observed dispersal distance and barrier traverses. Medium-term post-restoration data (2-3 years) showed that the fish assemblage remained similar at five of six connectivity-restored sites and two control sites, but at one connectivity-restored headwater site previously inhabited by trout only, three native non-salmonid species colonized. We conclude that stream habitat reconnection should support free movement of a wide range of species and life stages, wherever retention of such

  16. Stable Kernel Representations and the Youla Parameterization for Nonlinear Systems

    NARCIS (Netherlands)

    Paice, A.D.B.; Schaft, A.J. van der

    1994-01-01

    In this paper a general approach is taken to yield a characterization of the class of stable plant controller pairs, which is a generalization of the Youla parameterization for linear systems. This is based on the idea of representing the input-output pairs of the plant and controller as elements of

  17. Development of a parameterization scheme of mesoscale convective systems

    International Nuclear Information System (INIS)

    Cotton, W.R.

    1994-01-01

    The goal of this research is to develop a parameterization scheme of mesoscale convective systems (MCS) including diabatic heating, moisture and momentum transports, cloud formation, and precipitation. The approach is to: perform explicit cloud-resolving simulations of MCSs; perform statistical analyses of the simulated MCSs to assist in fabricating a parameterization, calibrating coefficients, etc.; and test the parameterization scheme against independent field data measurements and in numerical weather prediction (NWP) models emulating general circulation model (GCM) grid resolution. Thus far we have formulated, calibrated, implemented and tested a deep convective engine against explicit Florida sea-breeze convection and in coarse-grid regional simulations of mid-latitude and tropical MCSs. Several explicit simulations of MCSs have been completed, and several others are in progress. Analysis code is being written and run on the explicitly simulated data

  18. Estimating wetland connectivity to streams in the Prairie Pothole Region: An isotopic and remote sensing approach

    Science.gov (United States)

    Brooks, J. R.; Mushet, David M.; Vanderhoof, Melanie; Leibowitz, Scott G.; Neff, Brian; Christensen, J. R.; Rosenberry, Donald O.; Rugh, W. D.; Alexander, L.C.

    2018-01-01

    Understanding hydrologic connectivity between wetlands and perennial streams is critical to understanding the reliance of stream flow on inputs from wetlands. We used the isotopic evaporation signal in water and remote sensing to examine wetland‐stream hydrologic connectivity within the Pipestem Creek watershed, North Dakota, a watershed dominated by prairie‐pothole wetlands. Pipestem Creek exhibited an evaporated‐water signal that had approximately half the isotopic‐enrichment signal found in most evaporatively enriched prairie‐pothole wetlands. Groundwater adjacent to Pipestem Creek had isotopic values that indicated recharge from winter precipitation and had no significant evaporative enrichment, indicating that enriched surface water did not contribute significantly to groundwater discharging into Pipestem Creek. The estimated surface water area necessary to generate the evaporation signal within Pipestem Creek was highly dynamic, varied primarily with the amount of discharge, and was typically greater than the immediate Pipestem Creek surface water area, indicating that surficial flow from wetlands contributed to stream flow throughout the summer. We propose a dynamic range of spilling thresholds for prairie‐pothole wetlands across the watershed allowing for wetland inputs even during low‐flow periods. Combining Landsat estimates with the isotopic approach allowed determination of potential (Landsat) and actual (isotope) contributing areas in wetland‐dominated systems. This combined approach can give insights into the changes in location and magnitude of surface water and groundwater pathways over time. This approach can be used in other areas where evaporation from wetlands results in a sufficient evaporative isotopic signal.
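The core of the isotopic approach is a two-endmember mixing calculation; a minimal sketch follows, with hypothetical delta values (the actual endmember values of the study are not given in the abstract).

```python
def wetland_fraction(d_stream, d_gw, d_wetland):
    """Two-endmember isotope mixing: the fraction of stream water carrying
    the evaporatively enriched wetland signal, given delta values for the
    stream, unevaporated groundwater and wetland endmembers.
    """
    return (d_stream - d_gw) / (d_wetland - d_gw)

# Hypothetical d18O values (per mil), chosen so the stream carries about
# half of the wetland enrichment, as the abstract describes:
f = wetland_fraction(d_stream=-9.0, d_gw=-12.0, d_wetland=-6.0)  # -> 0.5
```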

  19. Heuristic Optimization Approach to Selecting a Transport Connection in City Public Transport

    Directory of Open Access Journals (Sweden)

    Kul’ka Jozef

    2017-02-01

    Full Text Available The article presents a heuristic optimization approach to selecting a suitable transport connection within a city public transport network. The methodology was applied to part of the public transport system in Košice, the second largest city in the Slovak Republic, whose public transport network forms a complex system comprising three different transport modes: bus, tram and trolley-bus. The solution focused on examining the individual transport services and their interconnection at relevant interchange points.
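Selecting a connection across such a multimodal network reduces to a cheapest-path search over stops and interchanges; a generic Dijkstra sketch is shown below as a stand-in for the paper's heuristic (the toy network and its travel times are illustrative, not the Košice system).

```python
import heapq

def cheapest_connection(graph, src, dst):
    """Dijkstra shortest path over a stop network; edge weights are travel
    minutes with transfer time folded into the edge costs.
    """
    dist, prev = {src: 0.0}, {}
    pq = [(0.0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == dst:
            break
        if d > dist.get(u, float("inf")):
            continue                     # stale queue entry
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v], prev[v] = nd, u
                heapq.heappush(pq, (nd, v))
    path = [dst]
    while path[-1] != src:
        path.append(prev[path[-1]])
    return dist[dst], path[::-1]

# Toy network: the bus leg is quicker to board but slower to the interchange.
graph = {
    "A": [("tram_stop", 4), ("bus_stop", 2)],
    "bus_stop": [("interchange", 6)],
    "tram_stop": [("interchange", 3)],
    "interchange": [("B", 5)],
}
cost, route = cheapest_connection(graph, "A", "B")
# cost -> 12, via the tram leg
```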

  20. APPROACHES AND METHODS FOR OVERCOMING OF POSTTRAUMATIC STRESS IN CONNECTION WITH DISASTER SITUATIONS

    Directory of Open Access Journals (Sweden)

    Desislava Todorova

    2018-03-01

    Full Text Available Disaster situations have become more and more frequent over the last decade; they have intensified the fragmentation of modern society and brought about a sense of helplessness. In those conditions, art-therapeutic groups provide a sense of connection with other people and interpersonal support. The aim of the current study is to examine the way in which the brain and body react to events causing an acute stress reaction. The applicability and helpfulness of art therapy in disaster situations is assessed, together with a comparative analysis of the main approaches and methods used in practice with people who have suffered traumatic events. The results of the study show that art therapists may disclose an emotional problem, related to the trauma sustained, that the client cannot cope with on his or her own. The choice of method is guided by the therapeutic needs of the person. For the individual, the different methods of art therapy create a medium for relief from overwhelming emotions or traumas; in social terms, they help increase the sense of social adaptation of people of all ages and of whole families. In conclusion, the use of different forms of support after disaster situations is of major significance for the recovery and maintenance of the physical, emotional and mental health of the population.

  1. An Integrated Approach for Non-Recursive Formulation of Connection-Coefficients of Orthogonal Functions

    Directory of Open Access Journals (Sweden)

    Monika GARG

    2012-08-01

    Full Text Available In this paper, an integrated approach is proposed for the non-recursive formulation of connection coefficients of different orthogonal functions in terms of a generic orthogonal function. The need for these coefficients arises when the product of two orthogonal basis functions is to be expressed in terms of single basis functions. Two significant advantages are achieved: first, the non-recursive formulations avoid memory and stack overflows in computer implementations; second, the integrated approach means that digital hardware, once designed, can be reused for different functions. Computational savings achieved with the proposed non-recursive formulation vis-à-vis the recursive formulations reported in the literature so far have been demonstrated using the MATLAB PROFILER.
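For the Legendre case, such connection coefficients can be obtained non-recursively from triple-product integrals evaluated by Gauss-Legendre quadrature; this sketch illustrates the linearization idea only and is not the paper's generic formulation.

```python
from numpy.polynomial.legendre import leggauss
from numpy.polynomial import Legendre

def connection_coeffs(i, j, kmax):
    """Coefficients c_k such that P_i * P_j = sum_k c_k * P_k (Legendre).

    Computed non-recursively from the triple-product integrals
    c_k = (2k+1)/2 * integral(P_i * P_j * P_k) via Gauss-Legendre quadrature.
    """
    x, w = leggauss(i + j + kmax + 2)   # exact for these polynomial degrees
    pi, pj = Legendre.basis(i)(x), Legendre.basis(j)(x)
    return [
        (w * pi * pj * Legendre.basis(k)(x)).sum() * (2 * k + 1) / 2
        for k in range(kmax + 1)
    ]

# P1 * P1 = x**2 = (1/3)*P0 + (2/3)*P2:
cc = connection_coeffs(1, 1, 2)
```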

  2. Comparison of different Kalman filter approaches in deriving time varying connectivity from EEG data.

    Science.gov (United States)

    Ghumare, Eshwar; Schrooten, Maarten; Vandenberghe, Rik; Dupont, Patrick

    2015-08-01

    Kalman filter approaches are widely applied to derive time-varying effective connectivity from electroencephalographic (EEG) data. For multi-trial data, a classical Kalman filter (CKF), designed for the estimation of single-trial data, can be implemented either by trial-averaging the data or by averaging single-trial estimates; a general linear Kalman filter (GLKF) provides an extension for multi-trial data. In this work, we studied the performance of the different Kalman filtering approaches for different values of signal-to-noise ratio (SNR), numbers of trials and numbers of EEG channels. We used a simulated model from which we calculated scalp recordings. From these recordings, we estimated cortical sources. Multivariate autoregressive model parameters and partial directed coherence were calculated for these estimated sources and compared with the ground truth. The results showed an overall superior performance of the GLKF except for low levels of SNR and low numbers of trials.
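The underlying idea of Kalman tracking of time-varying autoregressive coefficients can be sketched for a single channel; this is a minimal illustration of the CKF concept (the GLKF of the paper extends the observation equation to stack multiple trials), with synthetic data and illustrative noise variances.

```python
import numpy as np

def tvar_kalman(y, q=1e-4, r=1.0):
    """Track a time-varying AR(1) coefficient a_t in y_t = a_t*y_{t-1} + e_t
    with a random-walk state model a_t = a_{t-1} + w_t (var(w) = q).
    """
    a, p = 0.0, 1.0                       # state estimate and its variance
    est = np.empty(len(y) - 1)
    for t in range(1, len(y)):
        p += q                            # predict (random-walk state)
        h = y[t - 1]                      # scalar observation "matrix"
        k = p * h / (h * h * p + r)       # Kalman gain
        a += k * (y[t] - h * a)           # innovation update
        p *= 1.0 - k * h
        est[t - 1] = a
    return est

# Synthetic data with a constant true coefficient of 0.8:
rng = np.random.default_rng(0)
y = np.zeros(2000)
for t in range(1, 2000):
    y[t] = 0.8 * y[t - 1] + rng.normal()
est = tvar_kalman(y)
```

With a constant true coefficient, the estimate settles near 0.8; a genuinely time-varying coefficient would be tracked through the random-walk state noise q.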

  3. Advances in Domain Connectivity for Overset Grids Using the X-Rays Approach

    Science.gov (United States)

    Chan, William M.; Kim, Noah; Pandya, Shishir A.

    2012-01-01

    Advances in automation and robustness of the X-rays approach to domain connectivity for overset grids are presented. Given the surface definition for each component that makes up a complex configuration, the determination of hole points with appropriate hole boundaries is automatically and efficiently performed. Improvements made to the original X-rays approach for identifying the minimum hole include an automated closure scheme for hole-cutters with open boundaries, automatic determination of grid points to be considered for blanking by each hole-cutter, and an adaptive X-ray map to economically handle components in close proximity. Furthermore, an automated spatially varying offset of the hole boundary from the minimum hole is achieved using a dual wall-distance function and an orphan point removal iteration process. Results using the new scheme are presented for a number of static and relative motion test cases on a variety of aerospace applications.

  4. Improving social connection through a communities-of-practice-inspired cognitive work analysis approach.

    Science.gov (United States)

    Euerby, Adam; Burns, Catherine M

    2014-03-01

    Increasingly, people work in socially networked environments. With growing adoption of enterprise social network technologies, supporting effective social community is becoming an important factor in organizational success. Relatively few human factors methods have been applied to social connection in communities. Although team methods provide a contribution, they do not suit design for communities. Wenger's community of practice concept, combined with cognitive work analysis, provided one way of designing for community. We used a cognitive work analysis approach modified with principles for supporting communities of practice to generate a new website design. Over several months, the community using the site was studied to examine their degree of social connectedness and communication levels. Social network analysis and communications analysis, conducted at three different intervals, showed increases in connections between people and between people and organizations, as well as increased communication following the launch of the new design. In this work, we suggest that human factors approaches can be effective in social environments, when applied considering social community principles. This work has implications for the development of new human factors methods as well as the design of interfaces for sociotechnical systems that have community building requirements.

  5. Neutrosophic Parameterized Soft Relations and Their Applications

    Directory of Open Access Journals (Sweden)

    Irfan Deli

    2014-06-01

    Full Text Available The aim of this paper is to introduce the concept of relations on neutrosophic parameterized soft sets (NP-soft sets). We study some related properties and also put forward some propositions on neutrosophic parameterized soft relations, with proofs and examples. The notions of symmetric, transitive, reflexive, and equivalence neutrosophic parameterized soft set relations are established. Finally, a decision-making method on NP-soft sets is presented.

  6. Parameterization Of Solar Radiation Using Neural Network

    International Nuclear Information System (INIS)

    Jiya, J. D.; Alfa, B.

    2002-01-01

    This paper presents a neural network technique for the parameterization of global solar radiation. Data from twenty-one stations are used for training the neural network, and data from ten other stations are used to validate the neural model. The neural network uses latitude, longitude, altitude, sunshine duration and period number to parameterize solar radiation values. The testing data were withheld from training to demonstrate the performance of the neural network at unknown stations. The results indicate good agreement between the parameterized solar radiation values and the actual measured values.
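The five-input regression setup can be sketched with a small one-hidden-layer network trained by gradient descent on synthetic data; the architecture, data and target below are illustrative stand-ins, not the authors' network or measurements.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical inputs per sample: latitude, longitude, altitude, sunshine
# duration, period number (five features, as in the abstract), with a
# synthetic target standing in for measured global solar radiation.
X = rng.uniform(-1, 1, size=(200, 5))
y = np.tanh(X @ np.array([0.5, -0.3, 0.2, 0.8, 0.1]))[:, None]

# One-hidden-layer network trained by plain gradient descent.
W1 = rng.normal(0, 0.5, (5, 8)); b1 = np.zeros(8)
W2 = rng.normal(0, 0.5, (8, 1)); b2 = np.zeros(1)

def forward(X):
    h = np.tanh(X @ W1 + b1)
    return h, h @ W2 + b2

loss0 = np.mean((forward(X)[1] - y) ** 2)     # error before training
lr = 0.05
for _ in range(500):
    h, out = forward(X)
    g = 2 * (out - y) / len(X)                # dLoss/dOutput
    gh = (g @ W2.T) * (1 - h ** 2)            # backprop through tanh
    W2 -= lr * h.T @ g; b2 -= lr * g.sum(0)
    W1 -= lr * X.T @ gh; b1 -= lr * gh.sum(0)
loss1 = np.mean((forward(X)[1] - y) ** 2)     # error after training
```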

  7. Hierarchical organization of functional connectivity in the mouse brain: a complex network approach.

    Science.gov (United States)

    Bardella, Giampiero; Bifone, Angelo; Gabrielli, Andrea; Gozzi, Alessandro; Squartini, Tiziano

    2016-08-18

    This paper represents a contribution to the study of the brain functional connectivity from the perspective of complex networks theory. More specifically, we apply graph theoretical analyses to provide evidence of the modular structure of the mouse brain and to shed light on its hierarchical organization. We propose a novel percolation analysis and we apply our approach to the analysis of a resting-state functional MRI data set from 41 mice. This approach reveals a robust hierarchical structure of modules persistent across different subjects. Importantly, we test this approach against a statistical benchmark (or null model) which constrains only the distributions of empirical correlations. Our results unambiguously show that the hierarchical character of the mouse brain modular structure is not trivially encoded into this lower-order constraint. Finally, we investigate the modular structure of the mouse brain by computing the Minimal Spanning Forest, a technique that identifies subnetworks characterized by the strongest internal correlations. This approach represents a faster alternative to other community detection methods and provides a means to rank modules on the basis of the strength of their internal edges.
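The Minimal Spanning Forest idea of keeping the strongest internal correlations can be sketched on a toy correlation matrix; computing a maximum spanning tree over correlations as a minimum spanning tree over 1 - corr distances is a standard trick, and the matrix below is illustrative, not mouse fMRI data.

```python
import numpy as np
from scipy.sparse.csgraph import minimum_spanning_tree

# Toy correlation matrix for four "regions": two strongly coupled pairs
# (modules) weakly linked to each other.
corr = np.array([
    [1.0, 0.9, 0.3, 0.1],
    [0.9, 1.0, 0.1, 0.2],
    [0.3, 0.1, 1.0, 0.8],
    [0.1, 0.2, 0.8, 1.0],
])

# Maximum spanning tree over correlations = minimum spanning tree over
# the complementary distances 1 - corr.
dist = 1.0 - corr
np.fill_diagonal(dist, 0.0)
mst = minimum_spanning_tree(dist).toarray()
edges = sorted(tuple(sorted(e)) for e in np.argwhere(mst))
# edges -> [(0, 1), (0, 2), (2, 3)]: the two modules plus one weak bridge
```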

  8. Parameterization of solar flare dose

    International Nuclear Information System (INIS)

    Lamarche, A.H.; Poston, J.W.

    1996-01-01

    A critical aspect of missions to the moon or Mars will be the safety and health of the crew. Radiation in space is a hazard for astronauts, especially high-energy radiation following certain types of solar flares. A solar flare event can be very dangerous if astronauts are not adequately shielded because flares can deliver a very high dose in a short period of time. The goal of this research was to parameterize solar flare dose as a function of time to see if it was possible to predict solar flare occurrence, thus providing a warning time. This would allow astronauts to take corrective action and avoid receiving a dose greater than the recommended limit set by the National Council on Radiation Protection and Measurements (NCRP)

  9. Integrated Care and Connected Health Approaches Leveraging Personalised Health through Big Data Analytics.

    Science.gov (United States)

    Maglaveras, Nicos; Kilintzis, Vassilis; Koutkias, Vassilis; Chouvarda, Ioanna

    2016-01-01

    Integrated care and connected health are two fast-evolving concepts with the potential to leverage personalised health. On the one side, the restructuring of care models and the implementation of new systems and integrated care programs providing coaching and advanced intervention possibilities enable medical decision support and personalized healthcare services. On the other side, the connected health ecosystem builds the means to follow and support citizens via personal health systems in their everyday activities and thus gives rise to an unprecedented wealth of data. These approaches lead to a deluge of complex data, as well as to new types of interactions with and among users of the healthcare ecosystem. The main challenges concern the data layer, the information layer, and the output of information processing and analytics. In all of these layers, the primary concern is quality, both of data and of information, increasing the need for filtering mechanisms. Especially in the data layer, the big-biodata management and analytics ecosystem is evolving; telemonitoring is a step forward for data-quality leverage, with numerous challenges still to address, partly due to the large number of micro- and nano-sensors and technologies available today, as well as the heterogeneity in users' backgrounds and data sources. This leads to new R&D pathways in biomedical information processing and management, as well as to the design of new intelligent decision support systems (DSS) and interventions for patients. In this paper, we illustrate these issues through exemplar research targeting chronic patients, illustrating the current status and trends in personal health systems (PHS) within the integrated care and connected care world.

  10. The constellation of dietary factors in adolescent acne: a semantic connectivity map approach.

    Science.gov (United States)

    Grossi, E; Cazzaniga, S; Crotti, S; Naldi, L; Di Landro, A; Ingordo, V; Cusano, F; Atzori, L; Tripodi Cutrì, F; Musumeci, M L; Pezzarossa, E; Bettoli, V; Caproni, M; Bonci, A

    2016-01-01

    Different lifestyle and dietetic factors have been linked with the onset and severity of acne. To assess the complex interconnection between dietetic variables and acne, we performed a reanalysis of data from a case-control study using a semantic connectivity map approach. 563 subjects, aged 10-24 years, involved in a case-control study of acne between March 2009 and February 2010, were considered in this study. The analysis evaluated the link between moderate to severe acne and anthropometric variables, family history and dietetic factors. Analyses were conducted with an artificial adaptive system, the Auto Semantic Connectivity Map (AutoCM). The AutoCM map showed that moderate-severe acne was closely associated with a family history of acne in first-degree relatives, obesity (BMI ≥ 30), high consumption of milk (in particular skim milk), cheese/yogurt, sweets/cakes and chocolate, low consumption of fish, and limited intake of fruits/vegetables. Our analyses confirm the link between several dietetic items and acne. When providing care, dermatologists should also be aware of the complex interconnection between dietetic factors and acne. © 2014 European Academy of Dermatology and Venereology.

  11. Identification Of The Epileptogenic Zone From Stereo-EEG Signals: A Connectivity-Graph Theory Approach

    Directory of Open Access Journals (Sweden)

    Ferruccio ePanzica

    2013-11-01

    Full Text Available In the context of focal drug-resistant epilepsies, the surgical resection of the epileptogenic zone (EZ), the cortical region responsible for seizure onset, early organization and propagation, may be the only therapeutic option for reducing or suppressing seizures. The rather high failure rate in surgery for extra-temporal epilepsies highlights that the precise identification of the EZ, a mandatory objective for achieving seizure freedom, is still an unsolved problem that requires more sophisticated methods of investigation. Despite the wide range of non-invasive investigations, intracranial stereo-EEG (SEEG) recordings still represent, in many patients, the gold standard for EZ identification. In this context, EZ localization is still based on visual analysis of SEEG, which is inevitably affected by subjectivity and is strongly time-consuming. Over the last years, considerable efforts have been made to develop advanced signal-analysis techniques able to improve the identification of the EZ. Particular attention has been paid to methods aimed at quantifying and characterising the interactions and causal relationships between neuronal populations, since it is nowadays well established that epileptic phenomena are associated with abnormal changes in brain synchronisation mechanisms, and initial evidence has shown the suitability of this approach for EZ localisation. The aim of this review is to provide an overview of the different EEG signal-processing methods applied to study connectivity between distinct brain cortical regions, namely in focal epilepsies. In addition, with the aim of localizing the EZ, the approach based on graph theory will be described, since the study of the topological properties of networks has strongly improved the study of brain connectivity mechanisms.

  12. Tool-driven Design and Automated Parameterization for Real-time Generic Drivetrain Models

    Directory of Open Access Journals (Sweden)

    Schwarz Christina

    2015-01-01

    Full Text Available Real-time dynamic drivetrain modeling approaches have great potential for development cost reduction in the automotive industry. Even though real-time drivetrain models are available, these solutions are specific to single transmission topologies. In this paper, an environment for the parameterization of such a solution is proposed, based on a generic method applicable to all types of gear transmission topologies. This enables tool-guided modeling by non-experts in the fields of mechanical engineering and control theory, leading to reduced development and testing efforts. The approach is demonstrated for an exemplary automatic transmission using the environment for automated parameterization. Finally, the parameterization is validated against vehicle measurement data.

  13. Altered functional connectivity of the default mode network in Williams syndrome: a multimodal approach.

    Science.gov (United States)

    Sampaio, Adriana; Moreira, Pedro Silva; Osório, Ana; Magalhães, Ricardo; Vasconcelos, Cristiana; Férnandez, Montse; Carracedo, Angel; Alegria, Joana; Gonçalves, Óscar F; Soares, José Miguel

    2016-07-01

    Resting state brain networks are implicated in a variety of relevant brain functions. Importantly, abnormal patterns of functional connectivity (FC) have been reported in several neurodevelopmental disorders. In particular, the Default Mode Network (DMN) has been found to be associated with social cognition. We hypothesize that the DMN may be altered in Williams syndrome (WS), a neurodevelopmental genetic disorder characterized by a unique cognitive and behavioral phenotype. In this study, we assessed the architecture of the DMN using fMRI in WS patients and typically developing matched controls (sex and age) in terms of FC and volumetry of the DMN. Moreover, we complemented the analysis with a functional connectome approach. After excluding participants due to movement artifacts (n = 3), seven participants with WS and their respective matched controls were included in the analyses. A decreased FC between the DMN regions was observed in the WS group when compared with the typically developing group. Specifically, we found decreased FC in a posterior hub of the DMN including the precuneus, calcarine and the posterior cingulate of the left hemisphere. The functional connectome approach showed a focalized and globally increased FC connectome in the WS group. The reduced FC of the posterior hub of the DMN in the WS group is consistent with immaturity of the brain FC patterns and may be associated with the singularity of their visual spatial phenotype. © 2016 John Wiley & Sons Ltd.
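At its simplest, resting-state FC between regions is the pairwise Pearson correlation of their time series; a minimal synthetic sketch (the "regions" and signals below are invented, not fMRI data) shows how a shared signal produces a strong FC edge.

```python
import numpy as np

rng = np.random.default_rng(7)

# Synthetic resting-state series for three "regions": regions 0 and 1 share
# a common signal (a toy DMN pair); region 2 is independent noise.
common = rng.normal(size=200)
ts = np.vstack([
    common + 0.3 * rng.normal(size=200),
    common + 0.3 * rng.normal(size=200),
    rng.normal(size=200),
])

fc = np.corrcoef(ts)   # FC matrix: pairwise Pearson correlations
```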

  14. Complementary Network-Based Approaches for Exploring Genetic Structure and Functional Connectivity in Two Vulnerable, Endemic Ground Squirrels

    Directory of Open Access Journals (Sweden)

    Victoria H. Zero

    2017-06-01

    Full Text Available The persistence of small populations is influenced by genetic structure and functional connectivity. We used two network-based approaches to understand the persistence of the northern Idaho ground squirrel (Urocitellus brunneus) and the southern Idaho ground squirrel (U. endemicus), two congeners of conservation concern. These graph-theoretic approaches are conventionally applied to social or transportation networks, but here are used to study population persistence and connectivity. Population graph analyses revealed that local extinction rapidly reduced connectivity for the southern species, while connectivity for the northern species could be maintained following local extinction. Results from gravity models complemented those of population graph analyses, and indicated that potential vegetation productivity and topography drove connectivity in the northern species. For the southern species, development (roads) and small-scale topography reduced connectivity, while greater potential vegetation productivity increased connectivity. Taken together, the results of the two network-based methods (population graph analyses and gravity models) suggest the need for increased conservation action for the southern species, and that management efforts have been effective at maintaining habitat quality throughout the current range of the northern species. To prevent further declines, we encourage the continuation of management efforts for the northern species, whereas conservation of the southern species requires active management and additional measures to curtail habitat fragmentation. Our combination of population graph analyses and gravity models can inform conservation strategies of other species exhibiting patchy distributions.
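The gravity-model component has a simple functional core: flow between patches scales with the product of their "attraction" weights and decays with separation. A minimal sketch, with purely illustrative weights and exponents (in practice k and b are fitted to landscape data):

```python
def gravity(w_i, w_j, d, k=1.0, b=2.0):
    """Gravity-model connectivity between two habitat patches: flow scales
    with the product of patch attraction weights (e.g. potential vegetation
    productivity) and decays with distance as d**(-b).
    """
    return k * w_i * w_j / d ** b

near = gravity(4.0, 5.0, d=2.0)    # nearby productive patches -> 5.0
far = gravity(4.0, 5.0, d=10.0)    # same patches, five times farther -> 0.2
```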

  15. EEG sensorimotor rhythms' variation and functional connectivity measures during motor imagery: linear relations and classification approaches.

    Science.gov (United States)

    Stefano Filho, Carlos A; Attux, Romis; Castellano, Gabriela

    2017-01-01

    Hand motor imagery (MI) has been reported to alter synchronization patterns amongst neurons, yielding variations in the mu and beta bands' power spectral density (PSD) of the electroencephalography (EEG) signal. These alterations have been used in the field of brain-computer interfaces (BCI) in an attempt to assign distinct MI tasks to commands of such a system. Recent studies have highlighted that information may be missing if knowledge about brain functional connectivity is not considered. In this work, we modeled the brain as a graph in which each EEG electrode represents a node. Our goal was to understand whether there exists any linear correlation between variations in the synchronization patterns (that is, variations in the PSD of the mu and beta bands) induced by MI and alterations in the corresponding functional networks. Moreover, we (1) explored the feasibility of using functional connectivity parameters as features for a classifier in the context of an MI-BCI; (2) investigated three different types of feature selection (FS) techniques; and (3) compared our approach to a more traditional method using the signal PSD as classifier inputs. Ten healthy subjects participated in this study. We observed significant correlations (p < 0.05), with values ranging from 0.4 to 0.9, between PSD variations and functional network alterations for some electrodes, prominently in the beta band. The PSD method performed better for data classification, with mean accuracies of (90 ± 8)% and (87 ± 7)% for the mu and beta band, respectively, versus (83 ± 8)% and (83 ± 7)% for the same bands for the graph method. Moreover, the number of features for the graph method was considerably larger. However, results for both methods were relatively close, and even overlapped when the uncertainties of the accuracy rates were considered. Further investigation regarding a careful exploration of other graph metrics may provide better alternatives.
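The mu/beta PSD features used by the traditional method can be sketched with Welch band-power estimates on a synthetic channel; the sampling rate, band edges and signal below are illustrative assumptions, not the study's recordings.

```python
import numpy as np
from scipy.signal import welch

fs = 250.0                                   # assumed EEG sampling rate (Hz)
t = np.arange(0, 4, 1 / fs)
rng = np.random.default_rng(1)
# Synthetic channel: a 10 Hz mu rhythm buried in noise.
eeg = np.sin(2 * np.pi * 10 * t) + 0.5 * rng.normal(size=t.size)

def band_power(x, fs, lo, hi):
    """Mean PSD in [lo, hi] Hz via Welch's method; the kind of mu/beta
    feature an MI-BCI classifier would use."""
    f, pxx = welch(x, fs=fs, nperseg=256)
    mask = (f >= lo) & (f <= hi)
    return pxx[mask].mean()

mu = band_power(eeg, fs, 8, 12)      # contains the 10 Hz rhythm
beta = band_power(eeg, fs, 13, 30)   # noise floor only
```

MI-induced desynchronization would appear as a drop in such band-power features, while the graph method of the paper instead derives features from the functional network built over all electrodes.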

  16. A novel explicit approach to model bromide and pesticide transport in connected soil structures

    Directory of Open Access Journals (Sweden)

    J. Klaus

    2011-07-01

    The present study tests whether an explicit treatment of worm burrows and tile drains as connected structures is feasible for simulating water flow, bromide and pesticide transport in structured heterogeneous soils at the hillslope scale. The essence is to represent worm burrows as morphologically connected paths of low flow resistance in a hillslope model. A recent Monte Carlo study (Klaus and Zehe, 2010, Hydrological Processes, 24, p. 1595–1609) revealed that this approach allowed successful reproduction of tile drain event discharge recorded during an irrigation experiment at a tile-drained field site. However, several "hillslope architectures" that were all consistent with the available extensive database allowed a good reproduction of the tile drain flow response. Our second objective was thus to find out whether this "equifinality" in spatial model setups may be reduced when including bromide tracer data in the model falsification process. We thus simulated transport of bromide for the 13 spatial model setups that performed best with respect to reproducing tile drain event discharge, without any further calibration. All model setups allowed a very good prediction of the temporal dynamics of cumulated bromide leaching into the tile drain, while only four of them matched the accumulated water balance and accumulated bromide loss into the tile drain. The number of behavioural model architectures could thus be reduced to four. One of those setups was used for simulating transport of Isoproturon, using different parameter combinations to characterise adsorption according to the Footprint database. Simulations could, however, only reproduce the observed leaching behaviour when we allowed for retardation coefficients that were very close to one.

  17. A novel explicit approach to model bromide and pesticide transport in connected soil structures

    Science.gov (United States)

    Klaus, J.; Zehe, E.

    2011-07-01

    The present study tests whether an explicit treatment of worm burrows and tile drains as connected structures is feasible for simulating water flow, bromide and pesticide transport in structured heterogeneous soils at the hillslope scale. The essence is to represent worm burrows as morphologically connected paths of low flow resistance in a hillslope model. A recent Monte Carlo study (Klaus and Zehe, 2010, Hydrological Processes, 24, p. 1595-1609) revealed that this approach allowed successful reproduction of tile drain event discharge recorded during an irrigation experiment at a tile-drained field site. However, several "hillslope architectures" that were all consistent with the available extensive database allowed a good reproduction of the tile drain flow response. Our second objective was thus to find out whether this "equifinality" in spatial model setups may be reduced when including bromide tracer data in the model falsification process. We thus simulated transport of bromide for the 13 spatial model setups that performed best with respect to reproducing tile drain event discharge, without any further calibration. All model setups allowed a very good prediction of the temporal dynamics of cumulated bromide leaching into the tile drain, while only four of them matched the accumulated water balance and accumulated bromide loss into the tile drain. The number of behavioural model architectures could thus be reduced to four. One of those setups was used for simulating transport of Isoproturon, using different parameter combinations to characterise adsorption according to the Footprint database. Simulations could, however, only reproduce the observed leaching behaviour when we allowed for retardation coefficients that were very close to one.
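
    The closing observation concerns retardation coefficients near one, i.e. the pesticide moving almost as fast as the water. A minimal sketch of the standard linear-sorption retardation factor; the parameter values below are illustrative assumptions, not values from the study:

```python
def retardation_factor(bulk_density, water_content, kd):
    """Linear-sorption retardation factor R = 1 + (rho_b / theta) * Kd."""
    return 1.0 + (bulk_density / water_content) * kd

# Illustrative values (not from the study):
rho_b = 1.4    # soil bulk density, g/cm^3
theta = 0.35   # volumetric water content, -
kd = 0.002     # sorption distribution coefficient, cm^3/g (very weak sorption)

R = retardation_factor(rho_b, theta, kd)
v_water = 10.0            # pore-water velocity, cm/day (illustrative)
v_solute = v_water / R    # retarded solute velocity
print(round(R, 3))        # close to 1: solute moves nearly with the water
```

    An R close to one, as the study required to match observations, means sorption barely delays the solute relative to the water front.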

  18. MINIMALLY INVASIVE SINGLE FLAP APPROACH WITH CONNECTIVE TISSUE WALL FOR PERIODONTAL REGENERATION

    Directory of Open Access Journals (Sweden)

    Kamen Kotsilkov

    2017-09-01

    Full Text Available INTRODUCTION: The destructive periodontal diseases are among the most prevalent in the human population. In some cases, bony defects are formed during the disease progression, thus sustaining deep periodontal pockets. The reconstruction of these defects is usually done with the classical techniques of bone substitutes placement and guided tissue regeneration. The clinical and histological data from the recent years, however, demonstrate the relatively low regenerative potential of these techniques. The contemporary approaches for periodontal regeneration rely on minimally invasive surgical protocols, aimed at complete tissue preservation in order to achieve and maintain primary closure and at stimulating the natural regenerative potential of the periodontal tissues. AIM: This presentation demonstrates the application of a new, minimally invasive, single flap surgical technique for periodontal regeneration in a clinical case with periodontitis and a residual deep intrabony defect. MATERIALS AND METHODS: A 37 years old patient presented with chronic generalised periodontitis. The initial therapy led to good control of the periodontal infection with a single residual deep periodontal pocket medially at 11 due to a deep intrabony defect. A single flap approach with an enamel matrix derivate application and a connective tissue wall technique were performed. The proper primary closure was obtained. RESULT: One month after surgery an initial mineralisation process in the defect was detected. At the third month, a complete clinical healing was observed. The radiographic control showed finished bone mineralisation and periodontal space recreation. CONCLUSION: In the limitation of the presented case, the minimally invasive surgical approach led to complete clinical healing and new bone formation, which could be proof for periodontal regeneration.

  19. Overland flow connectivity on planar patchy hillslopes - modified percolation theory approaches and combinatorial model of urns

    Science.gov (United States)

    Nezlobin, David; Pariente, Sarah; Lavee, Hanoch; Sachs, Eyal

    2017-04-01

    Source-sink systems are very common in hydrology; in particular, some land cover types often generate runoff (e.g. embedded rocks, bare soil), while others obstruct it (e.g. vegetation, cracked soil). Surface runoff coefficients of patchy slopes/plots covered by runoff-generating and obstructing covers (e.g., bare soil and vegetation) depend critically on the percentage cover (i.e. source/sink abundance) and decrease strongly with observation scale. Classic mathematical percolation theory provides a powerful apparatus for describing runoff connectivity on patchy hillslopes, but it ignores the strong effect of overland flow directionality. To overcome this and other difficulties, modified percolation theory approaches can be considered, such as straight percolation (for planar slopes), quasi-straight percolation and models with limited obstruction. These approaches may explain both the observed critical dependence of runoff coefficients on percentage cover and their decrease with scale in systems with strong flow directionality (e.g. planar slopes). The contributing area increases sharply when the runoff-generating percentage cover approaches the straight percolation threshold. This explains the strong increase of surface runoff and erosion for relatively low values (normally less than 35%) of the obstructing cover (e.g., vegetation). Combinatorial models of urns with restricted occupancy can be applied for the analytic evaluation of meaningful straight-percolation quantities, such as the expected value of the NOGA (Non-Obstructed Generating Area) and the straight percolation probability. It is shown that the nature of the cover-related runoff scale decrease is combinatorial: the probability for the generated runoff to avoid obstruction in a unit area decreases with scale for non-trivial percentage cover values. The magnitude of the scale effect is found to be a skewed non-monotonous function of the percentage cover. It is shown that the cover-related scale
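
    A hedged Monte Carlo sketch of the "straight percolation" idea in its simplest possible form, a single downslope column of cells: a generating cell contributes runoff at the outlet only if no obstructing cell lies between it and the outlet. Under this idealization (ours, not necessarily the paper's exact model) the expected non-obstructed generating count has the closed form p(1 - p^n)/(1 - p):

```python
import random

def expected_noga_analytic(p, n):
    """Expected number of non-obstructed generating cells in a column of n cells,
    where each cell generates runoff with probability p and obstructs otherwise."""
    return p * (1 - p ** n) / (1 - p)

def expected_noga_mc(p, n, trials, rng):
    """Monte Carlo estimate of the same quantity."""
    total = 0
    for _ in range(trials):
        column = [rng.random() < p for _ in range(n)]  # True = generating, outlet first
        # walk upslope from the outlet; the run of generating cells is non-obstructed
        for cell in column:
            if not cell:
                break
            total += 1
    return total / trials

rng = random.Random(42)
p, n = 0.6, 20
analytic = expected_noga_analytic(p, n)   # ~1.5 for p = 0.6
est = expected_noga_mc(p, n, 200_000, rng)
print(abs(est - analytic) < 0.02)
```

    Even with 60% generating cover, fewer than two cells per column contribute at the outlet, illustrating the combinatorial scale decrease the abstract describes.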

  20. [A population-targeted approach to connect prevention, care and welfare: visualising the trend].

    Science.gov (United States)

    Lemmens, L C; Drewes, H W; Lette, M; Baan, C A

    2017-01-01

    To map initiatives in the Netherlands using a population-targeted approach to link prevention, care and welfare. Descriptive investigation, based on conversations and structured interviews. We searched for initiatives in which providers in the areas of prevention, care and welfare, together with health insurers and/or local authorities, attempted to achieve the 'triple aim': improving the health of the population, improving the quality of care, and managing costs. We found potential initiatives on the basis of interviews with key figures, project databases and congress programmes. We gathered additional information from websites and contact persons to determine whether each initiative met the inclusion criteria. An initiative had to link prevention, care and welfare, with a minimum of three players actively pursuing a population-targeted goal through multiple interventions for a non-disease-specific and district-transcending population. We described the goal, organisational structure, parties involved, activities and funding on the basis of interviews conducted in the period August-December 2015 with the managers of the included initiatives. We found 19 initiatives which met the criteria, in which there was experimentation with organisational forms, levels of participation, interventions and funding. It was noticeable that the interventions mostly concerned medical care. There was a lack of insight into the 'triple aim', mostly because data exchange between parties is generally difficult. There is an increasing number of initiatives that follow a population-targeted approach. Although the different parties strive to connect the three domains, they are still searching for the optimal collaboration, organisational form, data exchange and financing.

  1. Connecting Competences and Pedagogical Approaches for Sustainable Development in Higher Education: A Literature Review and Framework Proposal

    Directory of Open Access Journals (Sweden)

    Rodrigo Lozano

    2017-10-01

    Research into and practice of Higher Education for Sustainable Development (HESD have been increasing during the last two decades. These have focused on providing sustainability education to future generations of professionals. In this context, there has been considerable progress in the incorporation of SD in universities’ curricula. Most of these efforts have focussed on the design and delivery of sustainability-oriented competences. Some peer-reviewed articles have proposed different pedagogical approaches to better deliver SD in these courses; however, there has been limited research on the connection between how courses are delivered (pedagogical approaches and how they may affect sustainability competences. This paper analyses competences and pedagogical approaches, using hermeneutics to connect these in a framework based on twelve competences and twelve pedagogical approaches found in the literature. The framework connects the course aims to delivery in HESD by highlighting the connections between pedagogical approaches and competences in a matrix structure. The framework is aimed at helping educators in creating and updating their courses to provide a more complete, holistic, and systemic sustainability education to future leaders, decision makers, educators, and change agents. To better develop mind-sets and actions of future generations, we must provide students with a complete set of sustainability competences.

  2. Influence of ROI selection on Resting Functional Connectivity: An Individualized Approach for Resting fMRI Analysis

    Directory of Open Access Journals (Sweden)

    William Seunghyun Sohn

    2015-08-01

    The differences in how our brains are connected are often thought to reflect the differences in our individual personalities and cognitive abilities. Individual differences in brain connectivity have long been recognized in the neuroscience community; however, this has yet to manifest itself in the methodology of resting-state analysis. This is evident as previous studies use the same regions of interest (ROIs) for all subjects. In this paper we demonstrate that the use of ROIs which are standardized across individuals leads to inaccurate calculations of functional connectivity. We also show that this problem can be addressed by taking an individualized approach using subject-specific ROIs. Finally, we show that ROI selection can affect the way we interpret our data, by showing different changes in functional connectivity with ageing.
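
    Resting functional connectivity between two ROIs is commonly quantified as the Pearson correlation of their mean time series. A minimal sketch, on synthetic data, of why a mislocated "standard" ROI dilutes the estimate relative to a subject-specific one; all signals and mixing weights here are illustrative assumptions:

```python
import math, random

def pearson(x, y):
    """Pearson correlation coefficient of two equal-length series."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

rng = random.Random(0)
n = 200
# A shared "network" signal drives both regions; noise differs per region.
shared = [rng.gauss(0, 1) for _ in range(n)]
roi_a = [s + 0.3 * rng.gauss(0, 1) for s in shared]   # subject-specific ROI
roi_b = [s + 0.3 * rng.gauss(0, 1) for s in shared]
# A mislocated "standard" ROI averages in unrelated signal:
unrelated = [rng.gauss(0, 1) for _ in range(n)]
roi_a_std = [0.5 * a + 0.5 * u for a, u in zip(roi_a, unrelated)]

fc_individual = pearson(roi_a, roi_b)
fc_standard = pearson(roi_a_std, roi_b)
print(fc_individual > fc_standard)   # mislocated ROI dilutes the correlation
```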

  3. Parameterization and measurements of helical magnetic fields

    International Nuclear Information System (INIS)

    Fischer, W.; Okamura, M.

    1997-01-01

    Magnetic fields with helical symmetry can be parameterized using multipole coefficients (a_n, b_n). We present a parameterization that gives the familiar multipole coefficients (a_n, b_n) for straight magnets when the helical wavelength tends to infinity. To measure helical fields, all methods used for straight magnets can be employed. We show how to convert the results of those measurements to obtain the desired helical multipole coefficients (a_n, b_n).
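
    In the straight-magnet limit the abstract refers to, the transverse field is the familiar complex multipole series B_y + iB_x = Σ_n (b_n + i a_n)(x + iy)^n. A sketch of evaluating that limiting case; conventions (indexing, reference radius, units) vary between laboratories and are simplified here, and the helical generalization of the paper additionally depends on the longitudinal coordinate:

```python
def transverse_field(x, y, b, a):
    """Evaluate B_y + i*B_x = sum_n (b_n + i*a_n) * (x + i*y)**n
    (straight-magnet limit; units and reference radius omitted)."""
    z = complex(x, y)
    total = sum((bn + 1j * an) * z ** n for n, (bn, an) in enumerate(zip(b, a)))
    return total.real, total.imag   # (B_y, B_x)

# Pure normal quadrupole: only b_1 nonzero -> field grows linearly with x and y
b = [0.0, 2.0]   # b_0 (dipole), b_1 (quadrupole); illustrative strengths
a = [0.0, 0.0]   # no skew components
By, Bx = transverse_field(0.3, 0.1, b, a)
print(By, Bx)    # (2*0.3, 2*0.1) = (0.6, 0.2)
```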

  4. Menangkal Serangan SQL Injection Dengan Parameterized Query [Countering SQL Injection Attacks with Parameterized Queries]

    Directory of Open Access Journals (Sweden)

    Yulianingsih Yulianingsih

    2016-06-01

    As the growth of information services increases, so does the security vulnerability of information sources. This paper presents an experimental study of database attacks carried out via SQL injection. The attacks were performed through the authentication page, since this page is the first point of access and should therefore have adequate defences. Experiments were then conducted on the Parameterized Query method to obtain a solution to this problem. Keywords: information services, attacks, experiment, SQL Injection, Parameterized Query.
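
    A minimal sketch of the countermeasure the paper tests, using Python's stdlib sqlite3 (an assumption for illustration; the paper's stack is not specified here): string-concatenated SQL falls to the classic ' OR '1'='1 payload, while the placeholder version treats the payload as literal data.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, password TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cret')")

payload_user, payload_pass = "alice", "' OR '1'='1"

# Vulnerable: attacker-controlled text becomes part of the SQL statement.
vulnerable = conn.execute(
    "SELECT * FROM users WHERE name = '%s' AND password = '%s'"
    % (payload_user, payload_pass)
).fetchall()

# Parameterized: placeholders keep the payload as a plain string value.
safe = conn.execute(
    "SELECT * FROM users WHERE name = ? AND password = ?",
    (payload_user, payload_pass),
).fetchall()

print(len(vulnerable), len(safe))   # 1 0 -> injection succeeds only when concatenating
```

    The driver never interpolates the parameters into the SQL text, so the quoting tricks that defeat string concatenation have no effect.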

  5. Parameterizing Size Distribution in Ice Clouds

    Energy Technology Data Exchange (ETDEWEB)

    DeSlover, Daniel; Mitchell, David L.

    2009-09-25

    cloud optical properties formulated in terms of PSD parameters in combination with remote measurements of thermal radiances to characterize the small mode. This is possible since the absorption efficiency (Qabs) of small-mode crystals is larger at 12 µm wavelength than at 11 µm wavelength, due to the process of wave resonance (photon tunneling) being more active at 12 µm. This makes the 12/11 µm absorption optical depth ratio (or equivalently the 12/11 µm Qabs ratio) a means for detecting the relative concentration of small ice particles in cirrus. Using this principle, this project tested and developed PSD schemes that can help characterize cirrus clouds at each of the three ARM sites: SGP, NSA and TWP. This was the main effort of this project. These PSD schemes, and the ice sedimentation velocities predicted from them, have been used to test the new cirrus microphysics parameterization in the GCM known as the Community Climate System Model (CCSM) as part of an ongoing collaboration with NCAR. Regarding the second problem, we developed and did preliminary testing on a passive thermal method for retrieving the total water path (TWP) of Arctic mixed-phase clouds, where TWPs are often in the range of 20 to 130 g m-2 (difficult for microwave radiometers to measure accurately). We also developed a new radar method for retrieving the cloud ice water content (IWC), which can be vertically integrated to yield the ice water path (IWP). These techniques were combined to determine the IWP and liquid water path (LWP) in Arctic clouds, and hence the fractions of ice and liquid water. We have tested this approach using a case study from the ARM field campaign called M-PACE (Mixed-Phase Arctic Cloud Experiment). This research led to a new satellite remote sensing method that appears promising for detecting low levels of liquid water in high clouds, typically between -20 and -36 °C. We hope to develop this method in future research.
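
    PSD schemes of this kind commonly assume a gamma size distribution, n(D) = N0 D^μ exp(-λD), whose moments have closed form, so bulk quantities such as a ratio-of-moments effective diameter reduce to expressions in (N0, μ, λ). A sketch under that standard assumption (not necessarily this project's exact scheme), cross-checking the analytic moment against numerical integration; all parameter values are illustrative:

```python
import math

def gamma_psd_moment(k, n0, mu, lam):
    """k-th moment of n(D) = n0 * D**mu * exp(-lam*D):
    M_k = n0 * Gamma(mu + k + 1) / lam**(mu + k + 1)."""
    return n0 * math.gamma(mu + k + 1) / lam ** (mu + k + 1)

def moment_numeric(k, n0, mu, lam, d_max=0.1, steps=200_000):
    """Midpoint-rule integration of the same moment, for verification."""
    dd = d_max / steps
    return sum(n0 * ((i + 0.5) * dd) ** (mu + k) * math.exp(-lam * (i + 0.5) * dd) * dd
               for i in range(steps))

n0, mu, lam = 1.0e6, 2.0, 500.0   # illustrative gamma-PSD parameters (D in cm)
m2 = gamma_psd_moment(2, n0, mu, lam)
m3 = gamma_psd_moment(3, n0, mu, lam)
d_eff = m3 / m2                    # ratio-of-moments effective diameter = (mu+3)/lam
print(abs(d_eff - (mu + 3) / lam) < 1e-9)
```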

  6. A distribution-oriented approach to support landscape connectivity for ecologically distinct bird species.

    Science.gov (United States)

    Herrera, José M; Alagador, Diogo; Salgueiro, Pedro; Mira, António

    2018-01-01

    Managing landscape connectivity is a widely recognized overarching strategy for conserving biodiversity in human-impacted landscapes. However, planning the conservation and management of landscape connectivity for multiple and ecologically distinct species is still challenging. Here we provide a spatially explicit framework which identifies and prioritizes connectivity conservation and restoration actions for species with distinct habitat affinities. Specifically, our study system comprised three groups of common bird species, forest specialists, farmland specialists, and generalists, populating a highly heterogeneous agricultural countryside in the southwestern Iberian Peninsula. We first performed a comprehensive analysis of the environmental variables underlying the distributional patterns of each bird species to reveal generalities in their guild-specific responses to landscape structure. Then, we identified sites which could be considered pivotal in maintaining current levels of landscape connectivity for the three bird guilds simultaneously, as well as the number and location of sites that need to be restored to maximize connectivity levels. Interestingly, we found that a small number of sites defined the shortest connectivity paths for the three bird guilds simultaneously, and these were therefore considered key for conservation. Moreover, an even smaller number of sites were identified as critical for maximizing landscape connectivity for the regional bird assemblage as a whole. Our spatially explicit framework can provide valuable decision-making support to conservation practitioners aiming to identify key connectivity and restoration sites, a particularly urgent task in rapidly changing landscapes such as agroecosystems.

  7. Facebook and the engineering of connectivity: a multi-layered approach to social media platforms

    NARCIS (Netherlands)

    van Dijck, J.

    2013-01-01

    This article aims to explain how Web 2.0 platforms in general, and Facebook in particular, engineer online connections. Connectivity has become the material and metaphorical wiring of our culture, a culture in which technologies shape and are shaped not only by economic and legal frames, but also

  8. A Serviced-based Approach to Connect Seismological Infrastructures: Current Efforts at the IRIS DMC

    Science.gov (United States)

    Ahern, Tim; Trabant, Chad

    2014-05-01

    As part of the COOPEUS initiative to build infrastructure that connects European and US research infrastructures, IRIS has advocated for the development of federated services based upon internationally recognized standards using web services. By deploying International Federation of Digital Seismograph Networks (FDSN)-endorsed web services at multiple data centers in the US and Europe, we have shown that integration within the seismological domain can be realized. By deploying identical methods to invoke the web services at multiple centers, this approach can significantly ease the means by which a scientist can access seismic data (time series, metadata, and earthquake catalogs) from distributed federated centers. IRIS has developed an IRIS federator that helps a user identify where seismic data from global seismic networks can be accessed. The web-services-based federator can build the appropriate URLs and return them to client software running on the scientist's own computer. These URLs are then used to pull data directly from the distributed centers in a peer-based fashion. IRIS is also involved in deploying web services across horizontal domains. As part of the US National Science Foundation's (NSF) EarthCube effort, an IRIS-led EarthCube Building Blocks project is underway. When completed, this project will aid in the discovery, access, and usability of data across multiple geoscience domains. This presentation will summarize current IRIS efforts in building vertical integration infrastructure within seismology, working closely with 5 centers in Europe and 2 centers in the US, as well as our first steps toward horizontal integration of data from 14 different domains in the US, in Europe, and around the world.
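
    The point of the FDSN-endorsed services is that the same query shape works at every federated center. A sketch building an fdsnws-dataselect query URL with the standard parameters (net, sta, loc, cha, starttime, endtime); the base URL is IRIS's public endpoint, and no request is actually sent here:

```python
from urllib.parse import urlencode

def dataselect_url(base, net, sta, loc, cha, start, end):
    """Build an FDSN dataselect query URL (FDSN web service spec, version 1)."""
    params = {
        "net": net, "sta": sta, "loc": loc, "cha": cha,
        "starttime": start, "endtime": end,
    }
    return base + "/fdsnws/dataselect/1/query?" + urlencode(params)

url = dataselect_url(
    "https://service.iris.edu",
    net="IU", sta="ANMO", loc="00", cha="BHZ",
    start="2010-02-27T06:30:00", end="2010-02-27T10:30:00",
)
print(url)
```

    Because the path and parameter names are standardized, pointing the same client at a different federated center only means swapping the base URL.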

  9. Monitoring Effective Connectivity in the Preterm Brain: A Graph Approach to Study Maturation

    Directory of Open Access Journals (Sweden)

    M. Lavanga

    2017-01-01

    In recent years, functional connectivity in developmental science has received increasing attention. Although it has been reported that anatomical connectivity in the preterm brain develops dramatically during the last months of pregnancy, little is known about how functional and effective connectivity change with maturation. The present study investigated how effective connectivity in premature infants evolves. To assess it, we used EEG measurements and graph-theory methodologies. We recorded data from 25 preterm babies, who underwent long EEG monitoring at least twice during their stay in the NICU. The recordings took place from 27 weeks postmenstrual age (PMA) until 42 weeks PMA. Results showed that the EEG connectivity, assessed using graph-theory indices, moved from a small-world network to a random one, since the clustering coefficient increases and the path length decreases. This shift can be due to the development of the thalamocortical connections and long-range cortical connections. Based on the network indices, we developed different age-prediction models. The best result showed that it is possible to predict the age of the infant with a root mean-squared error (RMSE) equal to 2.11 weeks. These results are similar to the ones reported in the literature for age prediction in preterm babies.
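
    The two indices tracked above, clustering coefficient and characteristic path length, are the standard small-world diagnostics. A minimal pure-Python sketch computing both on a toy unweighted graph (the study's graphs are effective-connectivity networks derived from EEG, which this simplifies away):

```python
from collections import deque

def clustering_coefficient(adj):
    """Average local clustering coefficient of an undirected graph
    given as {node: set(neighbours)}; nodes with degree < 2 contribute 0."""
    total = 0.0
    for v, nbrs in adj.items():
        k = len(nbrs)
        if k < 2:
            continue
        links = sum(1 for u in nbrs for w in nbrs if u < w and w in adj[u])
        total += 2.0 * links / (k * (k - 1))
    return total / len(adj)

def average_path_length(adj):
    """Mean shortest-path length over all connected node pairs (BFS per node)."""
    total, pairs = 0, 0
    for src in adj:
        dist = {src: 0}
        queue = deque([src])
        while queue:
            v = queue.popleft()
            for u in adj[v]:
                if u not in dist:
                    dist[u] = dist[v] + 1
                    queue.append(u)
        total += sum(d for node, d in dist.items() if node != src)
        pairs += len(dist) - 1
    return total / pairs

# Toy 4-node example: a triangle with one pendant node
adj = {0: {1, 2}, 1: {0, 2}, 2: {0, 1, 3}, 3: {2}}
print(clustering_coefficient(adj), average_path_length(adj))
```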

  10. Differential approach to planning of training loads in person with connective tissue dysplasia symptoms

    Directory of Open Access Journals (Sweden)

    Олег Борисович Неханевич

    2015-05-01

    Introduction. When deciding on clearance for, and planning of, training and competitive loads, persons with signs of connective tissue dysplasia are of special interest. Aim. To improve the medical support of the training process of athletes with signs of connective tissue dysplasia. Materials and methods. 188 athletes were examined, including 59 with signs of connective tissue dysplasia, who formed the basic group. Signs of systemic involvement of connective tissue were determined using anthropometry and somatoscopy. All athletes underwent echocardiographic examination at rest, during bicycle ergometry and in the recovery period. Results. Underweight, acromacria, joint hypermobility and flat feet were frequently observed among athletes with signs of systemic involvement of connective tissue. During bicycle ergometry, athletes of the basic group showed impaired myocardial relaxation during diastole while performing loads of average power, which led to a drop in ejection fraction at submaximal levels of exertion. Conclusions. The presence of connective tissue dysplasia in athletes, with its differing prognoses, requires sports physicians to perform an in-depth analysis and differential diagnosis of its clinical forms in order to prevent complications under training and competitive loads. Early signs of cardiac strain during physical activity in athletes with signs of connective tissue dysplasia were impairments of myocardial relaxation, reflected in indicators of diastolic heart function; ejection fraction at rest remained at normal levels.

  11. Anticipation-related brain connectivity in bipolar and unipolar depression: a graph theory approach.

    Science.gov (United States)

    Manelis, Anna; Almeida, Jorge R C; Stiffler, Richelle; Lockovich, Jeanette C; Aslam, Haris A; Phillips, Mary L

    2016-09-01

    Bipolar disorder is often misdiagnosed as major depressive disorder, which leads to inadequate treatment. Depressed individuals, compared with healthy control subjects, show increased expectation of negative outcomes. Due to increased impulsivity and risk for mania, however, depressed individuals with bipolar disorder may differ from those with major depressive disorder in the neural mechanisms underlying anticipation processes. Graph theory methods for neuroimaging data analysis allow the identification of connectivity between multiple brain regions without prior model specification, and may help to identify neurobiological markers differentiating these disorders, thereby facilitating development of better therapeutic interventions. This study aimed to compare brain connectivity among regions involved in win/loss anticipation in depressed individuals with bipolar disorder (BDD) versus depressed individuals with major depressive disorder (MDD) versus healthy control subjects using graph theory methods. The study was conducted at the University of Pittsburgh Medical Center and included 31 BDD, 39 MDD, and 36 healthy control subjects. Participants were scanned while performing a number-guessing reward task that included periods of win and loss anticipation. We first identified the anticipatory network across all 106 participants by contrasting brain activation during all anticipation periods (win anticipation + loss anticipation) versus baseline, and win anticipation versus loss anticipation. Brain connectivity within the identified network was determined using the Independent Multiple-sample Greedy Equivalence Search (IMaGES) and Linear non-Gaussian Orientation, Fixed Structure (LOFS) algorithms. Density of connections (the number of connections in the network), path length, and the global connectivity direction ('top-down' versus 'bottom-up') were compared across groups (BDD/MDD/healthy control subjects) and conditions (win/loss anticipation). These analyses showed that

  12. Parameterizing the Spatial Markov Model From Breakthrough Curve Data Alone

    Science.gov (United States)

    Sherman, Thomas; Fakhari, Abbas; Miller, Savannah; Singha, Kamini; Bolster, Diogo

    2017-12-01

    The spatial Markov model (SMM) is an upscaled Lagrangian model that effectively captures anomalous transport across a diverse range of hydrologic systems. The distinct feature of the SMM relative to other random walk models is that successive steps are correlated. To date, with some notable exceptions, the model has primarily been applied to data from high-resolution numerical simulations and correlation effects have been measured from simulated particle trajectories. In real systems such knowledge is practically unattainable and the best one might hope for is breakthrough curves (BTCs) at successive downstream locations. We introduce a novel methodology to quantify velocity correlation from BTC data alone. By discretizing two measured BTCs into a set of arrival times and developing an inverse model, we estimate velocity correlation, thereby enabling parameterization of the SMM in studies where detailed Lagrangian velocity statistics are unavailable. The proposed methodology is applied to two synthetic numerical problems, where we measure all details and thus test the veracity of the approach by comparison of estimated parameters with known simulated values. Our results suggest that our estimated transition probabilities agree with simulated values and using the SMM with this estimated parameterization accurately predicts BTCs downstream. Our methodology naturally allows for estimates of uncertainty by calculating lower and upper bounds of velocity correlation, enabling prediction of a range of BTCs. The measured BTCs fall within the range of predicted BTCs. This novel method to parameterize the SMM from BTC data alone is quite parsimonious, thereby widening the SMM's practical applicability.
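
    The SMM's distinctive ingredient is a Markov transition matrix between velocity classes, which the paper estimates from BTC data. A simplified sketch of the parameterization idea: simulate a two-class chain with a known matrix, then re-estimate the transition probabilities from the realized sequence. (The paper's inverse model works from arrival times alone, not observed classes; this illustrates only the correlated-step structure.)

```python
import random

def simulate_chain(T, n_steps, rng, state=0):
    """Generate a sequence of velocity classes from 2x2 transition matrix T[i][j]."""
    states = [state]
    for _ in range(n_steps):
        state = 0 if rng.random() < T[state][0] else 1
        states.append(state)
    return states

def estimate_transitions(states):
    """Re-estimate the 2x2 transition matrix from an observed class sequence."""
    counts = [[0, 0], [0, 0]]
    for i, j in zip(states, states[1:]):
        counts[i][j] += 1
    return [[c / sum(row) for c in row] for row in counts]

# True transition matrix: classes are "sticky" (correlated successive steps),
# the defining feature of the SMM relative to uncorrelated random walks.
T_true = [[0.7, 0.3], [0.4, 0.6]]
rng = random.Random(7)
seq = simulate_chain(T_true, 100_000, rng)
T_est = estimate_transitions(seq)
print(T_est[0][0], T_est[1][1])   # close to 0.7 and 0.6
```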

  13. Parameterizing the Spatial Markov Model from Breakthrough Curve Data Alone

    Science.gov (United States)

    Sherman, T.; Bolster, D.; Fakhari, A.; Miller, S.; Singha, K.

    2017-12-01

    The spatial Markov model (SMM) uses a correlated random walk and has been shown to effectively capture anomalous transport in porous media systems; in the SMM, particles' future trajectories are correlated to their current velocity. It is common practice to use a priori Lagrangian velocity statistics obtained from high-resolution simulations to determine a distribution of transition probabilities (correlation) between velocity classes that govern predicted transport behavior; however, this approach is computationally cumbersome. Here, we introduce a methodology to quantify velocity correlation from breakthrough curve (BTC) data alone; discretizing two measured BTCs into a set of arrival times and reverse-engineering the rules of the SMM allows for estimation of velocity correlation, thereby enabling parameterization of the SMM in studies where Lagrangian velocity statistics are not available. The introduced methodology is applied to estimate velocity correlation from BTCs measured in high-resolution simulations, thus allowing for a comparison of estimated parameters with known simulated values. Results show (1) estimated transition probabilities agree with simulated values and (2) using the SMM with the estimated parameterization accurately predicts BTCs downstream. Additionally, we include uncertainty measurements by calculating lower and upper estimates of velocity correlation, which allow for prediction of a range of BTCs. The simulated BTCs fall in the range of predicted BTCs. This research proposes a novel method to parameterize the SMM from BTC data alone, thereby reducing the SMM's computational costs and widening its applicability.

  14. A Thermal Infrared Radiation Parameterization for Atmospheric Studies

    Science.gov (United States)

    Chou, Ming-Dah; Suarez, Max J.; Liang, Xin-Zhong; Yan, Michael M.-H.; Cote, Charles (Technical Monitor)

    2001-01-01

    This technical memorandum documents the longwave radiation parameterization developed at the Climate and Radiation Branch, NASA Goddard Space Flight Center, for a wide variety of weather and climate applications. Based on the 1996 version of the Air Force Geophysical Laboratory HITRAN data, the parameterization includes absorption by the major gases (water vapor, CO2, O3) and most of the minor trace gases (N2O, CH4, CFCs), as well as by clouds and aerosols. The thermal infrared spectrum is divided into nine bands. To achieve a high degree of accuracy and speed, different approaches to computing the transmission function are applied to different spectral bands and gases. The gaseous transmission function is computed using either the k-distribution method or the table look-up method. To include the effect of scattering due to clouds and aerosols, the optical thickness is scaled by the single-scattering albedo and asymmetry factor. The parameterization can accurately compute fluxes to within 1% of high-spectral-resolution line-by-line calculations. The cooling rate can be accurately computed in the region extending from the surface to the 0.01-hPa level.
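
    Scaling the optical thickness by the single-scattering albedo and asymmetry factor is a similarity-scaling idea; a standard example of such a transformation is the delta-Eddington adjustment of Joseph, Wiscombe and Weinman (1976), sketched below as an illustration. The memorandum's exact scaling may differ; the layer values here are illustrative:

```python
def delta_eddington(tau, omega, g):
    """Delta-Eddington similarity scaling: move the forward-scattering peak
    fraction f = g**2 out of the phase function, adjusting tau, omega, g."""
    f = g * g
    tau_s = (1.0 - omega * f) * tau
    omega_s = (1.0 - f) * omega / (1.0 - omega * f)
    g_s = (g - f) / (1.0 - f)
    return tau_s, omega_s, g_s

# Strongly forward-scattering cloud layer (illustrative values)
tau_s, omega_s, g_s = delta_eddington(tau=2.0, omega=0.9, g=0.85)
print(tau_s < 2.0 and g_s < 0.85)   # thinner effective layer, less peaked scattering
```

    With g = 0 (isotropic scattering) the transformation is the identity, i.e. nothing is moved into the forward peak.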

  15. Normalization of the parameterized Courant-Snyder matrix for symplectic factorization of a parameterized Taylor map

    International Nuclear Information System (INIS)

    Yan, Y.T.

    1991-01-01

    The transverse motion of charged particles in a circular accelerator can be well represented by a one-turn high-order Taylor map. For particles without energy deviation, the one-turn Taylor map is a 4-dimensional polynomial map of four variables. The four variables are the transverse canonical coordinates and their conjugate momenta. To include the energy deviation (off-momentum) effects, the map has to be parameterized with a smallness factor representing the off-momentum, and so the Taylor map becomes a 4-dimensional polynomial map of five variables. It is for this type of parameterized Taylor map that a method is presented for converting it into a parameterized Dragt-Finn factorization map. A parameterized nonlinear normal form and a parameterized kick factorization can thus be obtained with suitable modification of the existing technique.

  16. New grid-planning and certification approaches for the large-scale offshore-wind farm grid-connection systems

    Energy Technology Data Exchange (ETDEWEB)

    Heising, C.; Bartelt, R. [Avasition GmbH, Dortmund (Germany); Zadeh, M. Koochack; Lebioda, T.J.; Jung, J. [TenneT Offshore GmbH, Bayreuth (Germany)

    2012-07-01

    Stable operation of the offshore-wind farms (OWF) and stable grid connection under stationary and dynamic conditions are essential to achieve a stable public power supply. To reach this aim, adequate grid-planning and certification approaches are a major advantage. Within this paper, the fundamental characteristics of the offshore-wind farms and their grid-connection systems are given. The main goal of this research project is to study the stability of the offshore grid, especially in terms of subharmonic stability, for the likely future extension stage of the offshore grids, i.e., parallel connection of two or more HVDC links, and for certain operating scenarios, e.g., an overload scenario. The current requirements according to the grid code are not the focus of this research project. The goal is to study and define potential additional grid code requirements, simulations, tests and grid-planning methods for the future. (orig.)

  17. GIS-based approach for quantifying landscape connectivity of Javan Hawk-Eagle habitat

    Science.gov (United States)

    Nurfatimah, C.; Syartinilia; Mulyani, Y. A.

    2018-05-01

    Javan Hawk-Eagle (Nisaetus bartelsi; JHE) is a law-protected endemic raptor that currently faces a decrease in the number and size of its habitat patches, a trend that will lead to patch isolation and species extinction. This study assessed the degree of connectivity between remnant habitat patches in the central part of Java by utilizing the Conefor Sensinode software as an additional tool for ArcGIS. The connectivity index was determined by three fractions: intra, flux and connector. Using these connectivity indices, 4 patches were identified as core habitat, 9 patches as stepping-stone habitat and 6 patches as isolated habitat. These patches were then validated against a land cover map derived from Landsat 8 imagery of August 2014: 36% of the core habitat is covered by natural forest, while stepping-stone habitat has 55% natural forest cover and isolated habitat 59%. Isolated patches resulted from zero connectivity (PCcon = 0) and from patch sizes too small to support a viable JHE population. Yet the condition of the natural forest and the surrounding matrix landscape in the isolated patches actually supports the habitat need. Thus, it is very important to conduct the right conservation management system based on the condition of each patch.
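    The probability-of-connectivity (PC) index that Conefor computes can be sketched for a toy landscape (the patch areas, landscape area, and dispersal probabilities below are hypothetical, not the study's data):

```python
import numpy as np

# Hypothetical landscape: three patches, two connected, one isolated.
areas = np.array([100.0, 80.0, 5.0])   # patch areas (ha)
AL = 1000.0                            # total landscape area (ha)

# Maximum-product dispersal probabilities p*_ij between patches; with a
# single link, the direct probabilities are already the path maxima.
p_star = np.array([[1.0, 0.6, 0.0],
                   [0.6, 1.0, 0.0],
                   [0.0, 0.0, 1.0]])

# PC = sum_ij a_i * a_j * p*_ij / AL^2 (after Saura & Pascual-Hortal); a
# patch with zero off-diagonal contribution, like patch 2 here, is isolated.
pc = float(areas @ p_star @ areas) / AL**2
```

    Decomposing each patch's contribution to PC into the intra, flux and connector fractions is what lets the analysis label patches as core, stepping-stone or isolated.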

  18. Modeling the regional impact of ship emissions on NOx and ozone levels over the Eastern Atlantic and Western Europe using ship plume parameterization

    Directory of Open Access Journals (Sweden)

    P. Pisoft

    2010-07-01

    In general, regional and global chemistry transport models apply instantaneous mixing of emissions into the model's finest resolved scale. In the case of a concentrated source, this can result in erroneous calculation of the evolution of both primary and secondary chemical species. Several studies have discussed this issue in connection with emissions from ships and aircraft. In this study, we present an approach to deal with the non-linear effects during dispersion of NOx emissions from ships. It represents an adaptation of the original approach developed for aircraft NOx emissions, which uses an exhaust tracer to trace the amount of the emitted species in the plume and applies an effective reaction rate for the ozone production/destruction during the plume's dilution into the background air. In accordance with previous studies examining the impact of international shipping on the composition of the troposphere, we found that the contribution of ship-induced surface NOx to the total reaches 90% over the remote ocean and 10–30% near coastal regions. Due to ship emissions, surface ozone increases by up to 4–6 ppbv, a 10% contribution to the surface ozone budget. When applying the ship plume parameterization, we show that the large-scale NOx decreases and the ship NOx contribution is reduced by up to 20–25%. A similar decrease was found in the case of O3. The plume parameterization suppressed the ship-induced ozone production by 15–30% over large areas of the studied region. To evaluate the presented parameterization, nitrogen monoxide measurements over the English Channel were compared with modeled values; after activating the parameterization, the model accuracy increases.

  19. A generative modeling approach to connectivity-Electrical conduction in vascular networks

    DEFF Research Database (Denmark)

    Hald, Bjørn Olav

    2016-01-01

    The physiology of biological structures is inherently dynamic and emerges from the interaction and assembly of large collections of small entities. The extent of coupled entities complicates modeling and increases computational load. Here, microvascular networks are used to present a novel...... to synchronize vessel tone across the vast distances within a network. We hypothesize that electrical conduction capacity is delimited by the size of vascular structures and connectivity of the network. Generation and simulation of series of dynamical models of electrical spread within vascular networks...... of different size and composition showed that (1) Conduction is enhanced in models harboring long and thin endothelial cells that couple preferentially along the longitudinal axis. (2) Conduction across a branch point depends on endothelial connectivity between branches. (3) Low connectivity sub...

  20. Systems Pharmacology-Based Approach of Connecting Disease Genes in Genome-Wide Association Studies with Traditional Chinese Medicine.

    Science.gov (United States)

    Kim, Jihye; Yoo, Minjae; Shin, Jimin; Kim, Hyunmin; Kang, Jaewoo; Tan, Aik Choon

    2018-01-01

    Traditional Chinese medicine (TCM), which originated in ancient China, has been practiced over thousands of years for treating various symptoms and diseases. However, the molecular mechanisms of TCM in treating these diseases remain unknown. In this study, we employ a systems pharmacology-based approach for connecting GWAS diseases with TCM for potential drug repurposing and repositioning. We studied 102 TCM components and their target genes by analyzing microarray gene expression experiments. We constructed disease-gene networks from 2558 GWAS studies. We applied a systems pharmacology approach to prioritize disease-target genes. Using this bioinformatics approach, we analyzed 14,713 GWAS disease-TCM-target gene pairs and identified 115 disease-gene pairs with q value < 0.2. We validated several of these GWAS disease-TCM-target gene pairs with literature evidence, demonstrating that this computational approach can reveal novel indications for TCM. We also developed the TCM-Disease web application to facilitate traditional Chinese medicine drug repurposing efforts. Systems pharmacology is a promising approach for connecting GWAS diseases with TCM for potential drug repurposing and repositioning. The computational approaches described in this study could be easily extended to other disease-gene network analyses.

  1. Making Connections

    Science.gov (United States)

    Pien, Cheng Lu; Dongsheng, Zhao

    2011-01-01

    Effective teaching includes enabling learners to make connections within mathematics. It is easy to accord with this statement, but how often is it a reality in the mathematics classroom? This article describes an approach in "connecting equivalent" fractions and whole number operations. The authors illustrate how a teacher can combine a common…

  2. A New Theoretical Approach to Postsecondary Student Disability: Disability-Diversity (Dis)Connect Model

    Science.gov (United States)

    Aquino, Katherine C.

    2016-01-01

    Disability is often viewed as an obstacle to postsecondary inclusion, but not a characteristic of student diversity. Additionally, current theoretical frameworks isolate disability from other student diversity characteristics. In response, a new conceptual framework, the Disability-Diversity (Dis)Connect Model (DDDM), was created to address…

  3. Creating a Culture of Connection: A Postmodern Punk Rock Approach to Art Therapy

    Science.gov (United States)

    Drass, Jessica Masino

    2016-01-01

    Punk culture is based on an ideology that emphasizes questioning conformity and creating a space for individuality within community. It has inspired fans to create their own music and art as part of their quest for authenticity. Art therapy informed by punk culture can be a way to create a culture of connection while also building resiliency and…

  4. Integrated approach for power quality requirements at the point of connection

    NARCIS (Netherlands)

    Cobben, J.F.G.; Bhattacharyya, S.; Myrzik, J.M.A.; Kling, W.L.

    2007-01-01

    Given the nature of electricity, every party connected to the power system influences voltage quality, which means that every party also should meet requirements. In this field, a sound coordination among technical standards (system-related, installation-related and product-related) is of paramount

  5. A study on a tangible interaction approach to managing wireless connections in a smart home environment

    NARCIS (Netherlands)

    Peeters, M.M.R.; Vlist, van der B.J.J.; Niezen, G.; Hu, J.; Feijs, L.M.G.

    2012-01-01

    Technological advances in computational, networking and sensing abilities are leading towards a future in which our daily lives are immersed with interactive devices that are networked and interoperable. Design has an important role in facilitating users to make sense of the many connections between

  6. An integrative approach to the design methodology for 3-phase power conditioners in Photovoltaic Grid-Connected systems

    International Nuclear Information System (INIS)

    Rey-Boué, Alexis B.; García-Valverde, Rafael; Ruz-Vila, Francisco de A.; Torrelo-Ponce, José M.

    2012-01-01

    Highlights: ► A design methodology for Photovoltaic grid-connected systems is presented. ► Models of the Photovoltaic Generator and the 3-phase Inverter are described. ► The power factor and the power quality are regulated with vector control. ► Simulation and experimental results validate the design methodology. ► The proposed methodology can be extended to any Renewable or Industrial System. - Abstract: A novel methodology is presented in this paper, for the design of the Power and Control Subsystems of a 3-phase Photovoltaic Grid-Connected system in an easy and comprehensive way, as an integrative approach. At the DC side of the Power Subsystem, the Photovoltaic Generator modeling is revised and a simple model is proposed, whereas at the AC side, a vector analysis is done to deal with the instantaneous 3-phase variables of the grid-connected Voltage Source Inverter. A d–q control approach is established in the Control Subsystem, along with its specific tuned parameters, as a vector control alternative which will allow the decoupled control of the instantaneous active and reactive powers. A particular Case of Study is presented to illustrate the behavior of the design methodology regarding the fulfillment of the Photovoltaic plant specifications. Some simulations are run to study the performance of the Photovoltaic Generator together with the exerted d–q control to the grid-connected 3-phase inverter, and some experimental results, obtained from a built flexible platform, are also shown. The simulations and the experimental results validate the overall performance of the 3-phase Photovoltaic Grid-Connected system due to the attained unitary power factor operation together with good power quality. The final validation of the proposed design methodology is also achieved.

  7. Active Subspaces of Airfoil Shape Parameterizations

    Science.gov (United States)

    Grey, Zachary J.; Constantine, Paul G.

    2018-05-01

    Design and optimization benefit from understanding the dependence of a quantity of interest (e.g., a design objective or constraint function) on the design variables. A low-dimensional active subspace, when present, identifies important directions in the space of design variables; perturbing a design along the active subspace associated with a particular quantity of interest changes that quantity more, on average, than perturbing the design orthogonally to the active subspace. This low-dimensional structure provides insights that characterize the dependence of quantities of interest on design variables. Airfoil design in a transonic flow field with a parameterized geometry is a popular test problem for design methodologies. We examine two particular airfoil shape parameterizations, PARSEC and CST, and study the active subspaces present in two common design quantities of interest, transonic lift and drag coefficients, under each shape parameterization. We mathematically relate the two parameterizations with a common polynomial series. The active subspaces enable low-dimensional approximations of lift and drag that relate to physical airfoil properties. In particular, we obtain and interpret a two-dimensional approximation of both transonic lift and drag, and we show how these approximations inform a multi-objective design problem.
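    The active-subspace construction described above, an eigendecomposition of the average outer product of gradients of the quantity of interest, can be sketched on a toy function with a known one-dimensional active subspace (the function and dimensions are illustrative, not the airfoil parameterizations from the paper):

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy quantity of interest with a one-dimensional active subspace:
# f(x) = sin(w . x), so every gradient is parallel to the hidden direction w.
m = 5
w = np.ones(m) / np.sqrt(m)

def grad_f(x):
    return np.cos(w @ x) * w

# Monte Carlo estimate of C = E[grad f grad f^T] over the design space.
X = rng.uniform(-1.0, 1.0, size=(200, m))
C = sum(np.outer(grad_f(x), grad_f(x)) for x in X) / len(X)

# Eigenpairs of C in ascending order; the leading eigenvector spans the
# active subspace, and the gap to the next eigenvalue signals its dimension.
eigvals, eigvecs = np.linalg.eigh(C)
active_dir = eigvecs[:, -1]
```

    For this toy function the estimated `active_dir` recovers `w` (up to sign) because every sampled gradient points along `w`; in the airfoil setting the gradients come from lift or drag with respect to the PARSEC or CST parameters.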

  8. Parameterization of the dielectric function of semiconductor nanocrystals

    Energy Technology Data Exchange (ETDEWEB)

    Petrik, P., E-mail: petrik@mfa.kfki.hu

    2014-11-15

    Optical methods like spectroscopic ellipsometry are sensitive to the structural properties of semiconductor films such as crystallinity or grain size. The imaginary part of the dielectric function is proportional to the joint density of electronic states. Consequently, the analysis of the dielectric function around the critical point energies provides useful information about the electron band structure and all related parameters like the grain structure, band gap, temperature, composition, phase structure, and carrier mobility. In this work an attempt is made to present a selection of the approaches to parameterize and analyze the dielectric function of semiconductors, as well as some applications.

  9. Parameterization analysis and inversion for orthorhombic media

    KAUST Repository

    Masmoudi, Nabil

    2018-05-01

    Accounting for azimuthal anisotropy is necessary for the processing and inversion of wide-azimuth and wide-aperture seismic data because wave speeds naturally depend on the wave propagation direction. Orthorhombic anisotropy is considered the most effective anisotropic model that approximates the azimuthal anisotropy we observe in seismic data. In the framework of full waveform inversion (FWI), the large number of parameters describing orthorhombic media introduces considerable trade-offs and increases the non-linearity of the inversion problem. Choosing a suitable parameterization for the model, and identifying which parameters in that parameterization could be well resolved, are essential to a successful inversion. In this thesis, I derive the radiation patterns for different acoustic orthorhombic parameterizations. Analyzing the angular dependence of the scattering of the parameters of different parameterizations, starting with the conventionally used notation, I assess the potential trade-off between the parameters and the resolution in describing the data and inverting for the parameters. In order to build practical inversion strategies, I suggest new parameters (called deviation parameters) for a new parameterization style in orthorhombic media. The novel parameters, denoted εd, ηd and δd, are dimensionless and represent a measure of deviation between the vertical planes in orthorhombic anisotropy. The main feature of the deviation parameters consists of keeping the scattering of the vertical transversely isotropic (VTI) parameters stationary with azimuth. Using these scattering features, we can condition FWI to invert for the parameters which the data are sensitive to, at different stages, scales, and locations in the model. With this parameterization, the data are mainly sensitive to the scattering of 3 parameters (out of six that describe an acoustic orthorhombic medium): the horizontal velocity in the x1 direction, ε1 which provides scattering mainly near

  10. Analysis of sensitivity to different parameterization schemes for a subtropical cyclone

    Science.gov (United States)

    Quitián-Hernández, L.; Fernández-González, S.; González-Alemán, J. J.; Valero, F.; Martín, M. L.

    2018-05-01

    A sensitivity analysis to diverse WRF model physical parameterization schemes is carried out over the lifecycle of a subtropical cyclone (STC). STCs are low-pressure systems that share tropical and extratropical characteristics, with hybrid thermal structures. In October 2014, an STC made landfall in the Canary Islands, causing widespread damage from strong winds and precipitation. The system began to develop on October 18 and its effects lasted until October 21. Accurate simulation of this type of cyclone continues to be a major challenge because of its rapid intensification and unique characteristics. In the present study, several numerical simulations were performed with the WRF model to analyze the sensitivity of the development and intensification of the STC to its various parameterization schemes. The combination of parameterization schemes that best simulated this type of phenomenon was thereby determined. In particular, the parameterization combinations that included the Tiedtke cumulus schemes had the most positive effects on model results. Moreover, concerning STC track validation, optimal results were attained when the STC was fully formed and all convective processes had stabilized. Furthermore, to determine the parameterization schemes that optimally categorize the STC structure, a verification using Cyclone Phase Space was carried out. Consequently, the combination of parameterizations including the Tiedtke cumulus schemes was again the best at categorizing the cyclone's subtropical structure. For strength validation, related atmospheric variables such as wind speed and precipitable water were analyzed. Finally, the effects of using a deterministic or probabilistic approach in simulating intense convective phenomena were evaluated.

  11. Lateral force resisting mechanisms in slab-column connections: An analytical approach

    OpenAIRE

    Drakatos, Iaonnis; Beyer, Katrin; Muttoni, Aurelio

    2014-01-01

    In many countries, reinforced concrete (RC) flat slabs supported on columns are one of the most commonly used structural systems for office and industrial buildings. To increase the lateral stiffness and strength of the structure, RC walls are typically added and carry the largest portion of the horizontal loads generated during earthquakes. While the slab-column system is typically not relevant with regard to the lateral stiffness and strength of the structure, each slab-column connection has ...

  12. A spectroscopic approach toward depression diagnosis: local metabolism meets functional connectivity.

    Science.gov (United States)

    Demenescu, Liliana Ramona; Colic, Lejla; Li, Meng; Safron, Adam; Biswal, B; Metzger, Coraline Danielle; Li, Shijia; Walter, Martin

    2017-03-01

    Abnormal anterior insula (AI) response and functional connectivity (FC) is associated with depression. In addition to clinical features, such as severity, AI FC and its metabolism further predicted therapeutic response. Abnormal FC between anterior cingulate and AI covaried with reduced glutamate level within cingulate cortex. Recently, deficient glial glutamate conversion was found in AI in major depression disorder (MDD). We therefore postulate a local glutamatergic mechanism in insula cortex of depressive patients, which is correlated with symptoms severity and itself influences AI's network connectivity in MDD. Twenty-five MDD patients and 25 healthy controls (HC) matched on age and sex underwent resting state functional magnetic resonance imaging and magnetic resonance spectroscopy scans. To determine the role of local glutamate-glutamine complex (Glx) ratio on whole brain AI FC, we conducted regression analysis with Glx relative to creatine (Cr) ratio as factor of interest and age, sex, and voxel tissue composition as nuisance factors. We found that in MDD, but not in HC, AI Glx/Cr ratio correlated positively with AI FC to right supramarginal gyrus and negatively with AI FC toward left occipital cortex (p family wise error). AI Glx/Cr level was negatively correlated with HAMD score (p disintegration of insula toward low level and supramodal integration areas, in MDD. While causality cannot directly be inferred from such correlation, our finding helps to define a multilevel network of response-predicting regions based on local metabolism and connectivity strength.

  13. Linear Approach for Synchronous State Stability in Fully Connected PLL Networks

    Directory of Open Access Journals (Sweden)

    José R. C. Piqueira

    2008-01-01

    Synchronization is an essential feature for the use of digital systems in telecommunication networks, integrated circuits, and manufacturing automation. Formerly, master-slave (MS) architectures, with precise master clock generators sending signals to phase-locked loops (PLLs) working as slave oscillators, were considered the best solution. Nowadays, the development of wireless networks with dynamical connectivity and the increase of the size and the operation frequency of integrated circuits suggest that the distribution of clock signals could be more efficient if distributed solutions with fully connected oscillators are used. Here, fully connected networks with second-order PLLs as nodes are considered. Previous work studied how the synchronous-state frequency for this type of network depends on the node parameters and delays, and derived an expression for the long-term frequency (Piqueira, 2006). Here, by taking the first term of the Taylor series expansion of the dynamical system description, it is shown that for a generic network with N nodes, the synchronous state is locally asymptotically stable.
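    The flavor of the local stability argument, linearize about the synchronous state and inspect the Jacobian spectrum, can be reproduced with a simplified first-order phase model standing in for the second-order PLL nodes (an assumption made purely for illustration):

```python
import numpy as np

# Simplified first-order phase model of a fully connected network:
# dtheta_i/dt = omega + (K/N) * sum_j sin(theta_j - theta_i).
N, K = 6, 2.0

# Jacobian about the synchronous state theta_1 = ... = theta_N:
# off-diagonal entries K/N, diagonal entries -K*(N-1)/N.
J = (K / N) * np.ones((N, N))
np.fill_diagonal(J, -K * (N - 1) / N)

# Spectrum: one zero eigenvalue (a common phase shift, which is neutral)
# and N-1 eigenvalues at -K, so every transversal perturbation decays and
# the synchronous state is locally asymptotically stable.
eigvals = np.sort(np.linalg.eigvalsh(J))
```

    The second-order PLL case analyzed in the paper doubles the state per node (phase and frequency), but the structure of the argument is the same: the synchronous state is stable when all non-trivial Jacobian eigenvalues have negative real part.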

  14. Connective tissue graft vs. emdogain: A new approach to compare the outcomes.

    Science.gov (United States)

    Sayar, Ferena; Akhundi, Nasrin; Gholami, Sanaz

    2013-01-01

    The aim of this clinical trial was to evaluate clinically the use of enamel matrix protein derivative combined with a coronally positioned flap to treat gingival recession, compared with the subepithelial connective tissue graft, using a new method of measuring the denuded root surface area. Thirteen patients, each with two or more similar bilateral Miller class I or II gingival recessions (40 recessions), were randomly assigned to the test group (enamel matrix protein derivative + coronally positioned flap) or the control group (subepithelial connective tissue graft). Recession depth, recession width, probing depth, keratinized gingiva, and plaque index were recorded at baseline and at one, three, and six months after treatment. A stent was used to measure the denuded root surface area at each examination session. Results were analyzed using the Kolmogorov-Smirnov, Wilcoxon, Friedman, and paired-sample t tests. The average percentages of root coverage for the control and test groups were 63.3% and 55%, respectively. Both groups showed a significant increase in keratinized gingiva (P 0.05). The results of the Friedman test were significant for clinical indices (P < 0.05), except for probing depth in the control group (P = 0.166). Enamel matrix protein derivative showed the same results as the subepithelial connective tissue graft, with a relatively easy procedure to perform and low patient morbidity.

  15. A stochastic parameterization for deep convection using cellular automata

    Science.gov (United States)

    Bengtsson, L.; Steinheimer, M.; Bechtold, P.; Geleyn, J.

    2012-12-01

    Cumulus parameterizations used in most operational weather and climate models today are based on the mass-flux concept, which took form in the early 1970s. In such schemes it is assumed that a unique relationship exists between the ensemble average of the sub-grid convection and the instantaneous state of the atmosphere in a vertical grid-box column. However, such a relationship is unlikely to be described by a simple deterministic function (Palmer, 2011). Thus, because of the statistical nature of the parameterization challenge, the community has recognized that it is important to introduce stochastic elements into the parameterizations (for instance: Plant and Craig, 2008, Khouider et al. 2010, Frenkel et al. 2011, Bengtsson et al. 2011, but the list is far from exhaustive). There are undoubtedly many ways in which stochasticity can enter new developments. In this study we use a two-way interacting cellular automaton (CA), as its intrinsic nature possesses many qualities interesting for deep convection parameterization. In the one-dimensional entraining plume approach, there is no parameterization of horizontal transport of heat, moisture or momentum due to cumulus convection. In reality, mass transport due to gravity waves that propagate in the horizontal can trigger new convection, which is important for the organization of deep convection (Huang, 1988). The self-organizational characteristics of the CA allow for lateral communication between adjacent NWP model grid boxes, and for temporal memory. Thus the CA scheme used in this study contains three components interesting for the representation of cumulus convection which are not present in the traditional one-dimensional bulk entraining plume method: horizontal communication, memory and stochasticity. The scheme is implemented in the high-resolution regional NWP model ALARO, and simulations show enhanced organization of convective activity along squall-lines. Probabilistic evaluation demonstrates an enhanced spread in
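    A minimal sketch of such a two-way interacting CA, with stochastic seeding by large-scale forcing, lateral spreading to neighbouring cells, and a finite cell lifetime acting as memory, might look as follows (all probabilities and grid sizes are invented for illustration, not the ALARO scheme's values):

```python
import numpy as np

rng = np.random.default_rng(2)
shape = (40, 40)

def step(lifetime, forcing, p_seed=0.02, p_spread=0.3, max_life=4):
    """One CA update: stochastic seeding, lateral spread, finite memory."""
    active = lifetime > 0
    # count the 4 nearest active neighbours (periodic boundaries)
    nbrs = sum(np.roll(active, s, axis=a) for a in (0, 1) for s in (-1, 1))
    seed = rng.random(shape) < p_seed * forcing           # triggered by forcing
    spread = (rng.random(shape) < p_spread) & (nbrs > 0)  # lateral communication
    born = (seed | spread) & ~active
    lifetime = np.where(born, max_life, np.maximum(lifetime - 1, 0))  # memory
    return lifetime, float((lifetime > 0).mean())

life = np.zeros(shape, dtype=int)
fractions = []
for _ in range(30):
    life, frac = step(life, forcing=1.0)
    fractions.append(frac)   # active fraction fed back to the convection scheme
```

    In a two-way coupling, the active fraction per grid box would modulate the mass-flux closure, while the resolved-scale forcing sets the seeding probability, giving the scheme horizontal communication, memory and stochasticity in one object.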

  16. A Connection Entropy Approach to Water Resources Vulnerability Analysis in a Changing Environment

    Directory of Open Access Journals (Sweden)

    Zhengwei Pan

    2017-11-01

    This paper establishes a water resources vulnerability framework based on sensitivity, natural resilience and artificial adaptation, through analyses of the four states of the water system and its accompanying transformation processes. Furthermore, it proposes an analysis method for water resources vulnerability based on connection entropy, which extends the concept of contact entropy. An example is given for the water resources vulnerability of Anhui Province, China; the analysis illustrates that vulnerability levels fluctuated but showed apparent improvement trends from 2001 to 2015. Some suggestions are also provided for improving the level of water resources vulnerability in Anhui Province from the viewpoint of the vulnerability index.

  17. Connecting the Dots Downunder Towards An Integrated Institutional Approach To Digital Content Management

    CERN Document Server

    Harboe-Ree, Cathrine

    2004-01-01

    There is a growing interest among academic institutions in managing institutional digital content produced in and for research, teaching and learning. This article argues for an integrated approach to this activity and examines the role of libraries in facilitating this. It then considers some existing approaches to providing software to support this. Australian initiatives consistent with this integrated model are then discussed.

  18. A universal approach to electrically connecting nanowire arrays using nanoparticles—application to a novel gas sensor architecture

    Science.gov (United States)

    Parthangal, Prahalad M.; Cavicchi, Richard E.; Zachariah, Michael R.

    2006-08-01

    We report on a novel, in situ approach toward connecting and electrically contacting vertically aligned nanowire arrays using conductive nanoparticles. The utility of the approach is demonstrated by development of a gas sensing device employing this nano-architecture. Well-aligned, single-crystalline zinc oxide nanowires were grown through a direct thermal evaporation process at 550 °C on gold catalyst layers. Electrical contact to the top of the nanowire array was established by creating a contiguous nanoparticle film through electrostatic attachment of conductive gold nanoparticles exclusively onto the tips of nanowires. A gas sensing device was constructed using such an arrangement and the nanowire assembly was found to be sensitive to both reducing (methanol) and oxidizing (nitrous oxides) gases. This assembly approach is amenable to any nanowire array for which a top contact electrode is needed.

  19. A universal approach to electrically connecting nanowire arrays using nanoparticles-application to a novel gas sensor architecture

    International Nuclear Information System (INIS)

    Parthangal, Prahalad M; Cavicchi, Richard E; Zachariah, Michael R

    2006-01-01

    We report on a novel, in situ approach toward connecting and electrically contacting vertically aligned nanowire arrays using conductive nanoparticles. The utility of the approach is demonstrated by development of a gas sensing device employing this nano-architecture. Well-aligned, single-crystalline zinc oxide nanowires were grown through a direct thermal evaporation process at 550 °C on gold catalyst layers. Electrical contact to the top of the nanowire array was established by creating a contiguous nanoparticle film through electrostatic attachment of conductive gold nanoparticles exclusively onto the tips of nanowires. A gas sensing device was constructed using such an arrangement and the nanowire assembly was found to be sensitive to both reducing (methanol) and oxidizing (nitrous oxides) gases. This assembly approach is amenable to any nanowire array for which a top contact electrode is needed

  20. Connecting Inquiry and Values in Science Education - An Approach Based on John Dewey's Philosophy

    Science.gov (United States)

    Lee, Eun Ah; Brown, Matthew J.

    2018-01-01

    Conducting scientific inquiry is expected to help students make informed decisions; however, how exactly it can help is rarely explained in science education standards. According to classroom studies, inquiry that students conduct in science classes seems to have little effect on their decision-making. Predetermined values play a large role in students' decision-making, but students do not explore these values or evaluate whether they are appropriate to the particular issue they are deciding, and they often ignore relevant scientific information. We explore how to connect inquiry and values, and how this connection can contribute to informed decision-making based on John Dewey's philosophy. Dewey argues that scientific inquiry should include value judgments and that conducting inquiry can improve the ability to make good value judgments. Value judgment is essential to informed, rational decision-making, and Dewey's ideas can explain how conducting inquiry can contribute to make an informed decision through value judgment. According to Dewey, each value judgment during inquiry is a practical judgment guiding action, and students can improve their value judgments by evaluating their actions during scientific inquiry. Thus, we suggest that students need an opportunity to explore values through scientific inquiry and that practicing value judgment will help informed decision-makings.

  2. An adaptive state of charge estimation approach for lithium-ion series-connected battery system

    Science.gov (United States)

    Peng, Simin; Zhu, Xuelai; Xing, Yinjiao; Shi, Hongbing; Cai, Xu; Pecht, Michael

    2018-07-01

Due to incorrect or unknown noise statistics of a battery system and its cell-to-cell variations, state of charge (SOC) estimation of a lithium-ion series-connected battery system is often inaccurate or even divergent with model-based methods such as the extended Kalman filter (EKF) and unscented Kalman filter (UKF). To resolve this problem, an adaptive unscented Kalman filter (AUKF) based on a noise statistics estimator and a model parameter regulator is developed to accurately estimate the SOC of a series-connected battery system. An equivalent circuit model is first built based on the model parameter regulator, which captures the influence of cell-to-cell variation on the battery system. A noise statistics estimator is then used to adaptively update the estimated noise statistics for the AUKF when its prior noise statistics are inaccurate or not exactly Gaussian. The accuracy and effectiveness of the SOC estimation method are validated by comparing the developed AUKF against the UKF when the model and measurement noise statistics, respectively, are inaccurate. Compared with the UKF and EKF, the developed method shows the highest SOC estimation accuracy.
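As a rough illustration of the adaptive-noise idea behind such filters (a scalar sketch, not the paper's AUKF), the code below re-estimates the measurement-noise variance from a window of recent innovations inside an ordinary Kalman filter; the SOC value, noise levels, and window size are all hypothetical.

```python
import random

def adaptive_kf(measurements, q=1e-5, r_init=0.25, window=25):
    """Scalar Kalman filter that re-estimates the measurement-noise variance R
    from the sample variance of recent innovations (innovation variance ~ R + P)."""
    x, p, r = measurements[0], 1.0, r_init
    innovations, estimates = [], []
    for z in measurements:
        p += q                      # predict (state assumed slowly varying)
        k = p / (p + r)             # Kalman gain
        innov = z - x
        x += k * innov              # update state estimate
        p *= (1 - k)                # update error covariance
        innovations.append(innov)
        if len(innovations) > window:
            innovations.pop(0)
        if len(innovations) == window:
            m = sum(innovations) / window
            var = sum((i - m) ** 2 for i in innovations) / window
            r = max(var - p, 1e-4)  # adaptive step; floor keeps R positive
        estimates.append(x)
    return estimates

random.seed(0)
true_soc = 0.8
noisy = [true_soc + random.gauss(0, 0.05) for _ in range(200)]
est = adaptive_kf(noisy)
print(round(est[-1], 2))
```

The filtered estimate settles near the true SOC even though the initial guess for R (0.25) is far too large; the innovation-based update shrinks R toward the actual measurement variance.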

  3. Invariant box-parameterization of neutrino oscillations

    International Nuclear Information System (INIS)

    Weiler, Thomas J.; Wagner, DJ

    1998-01-01

    The model-independent 'box' parameterization of neutrino oscillations is examined. The invariant boxes are the classical amplitudes of the individual oscillating terms. Being observables, the boxes are independent of the choice of parameterization of the mixing matrix. Emphasis is placed on the relations among the box parameters due to mixing-matrix unitarity, and on the reduction of the number of boxes to the minimum basis set. Using the box algebra, we show that CP-violation may be inferred from measurements of neutrino flavor mixing even when the oscillatory factors have averaged. General analyses of neutrino oscillations among n≥3 flavors can readily determine the boxes, which can then be manipulated to yield magnitudes of mixing matrix elements
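The box formalism can be made concrete with a small numerical check. Assuming the usual definition of a box as the product U_ai·conj(U_bi)·conj(U_aj)·U_bj of mixing-matrix elements (the abstract does not spell out the notation), the sketch below builds oscillation probabilities from boxes for a two-flavor example and verifies unitarity:

```python
import math

def box(U, a, b, i, j):
    """Box invariant: U_ai * conj(U_bi) * conj(U_aj) * U_bj."""
    return U[a][i] * U[b][i].conjugate() * U[a][j].conjugate() * U[b][j]

def osc_prob(U, a, b, phases):
    """P(nu_a -> nu_b) built from boxes; phases[i][j] = Delta m^2_ij * L / (4E)."""
    p = 1.0 if a == b else 0.0
    n = len(U)
    for i in range(n):
        for j in range(i):
            bx = box(U, a, b, i, j)
            d = phases[i][j]
            p -= 4 * bx.real * math.sin(d) ** 2   # CP-even term
            p += 2 * bx.imag * math.sin(2 * d)    # CP-odd term
    return p

# Two-flavor example with a real mixing matrix (so no CP violation)
th = 0.6
U = [[complex(math.cos(th)), complex(math.sin(th))],
     [complex(-math.sin(th)), complex(math.cos(th))]]
phases = [[0.0, 0.0], [1.2, 0.0]]
p_surv = osc_prob(U, 0, 0, phases)   # survival probability
p_app = osc_prob(U, 0, 1, phases)    # appearance probability
print(round(p_surv + p_app, 6))
```

For two flavors the survival probability reduces to the familiar 1 − sin²(2θ)·sin²(Δ), and the two probabilities sum to one, as unitarity of the mixing matrix requires.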

  4. Parameterized Concurrent Multi-Party Session Types

    Directory of Open Access Journals (Sweden)

    Minas Charalambides

    2012-08-01

    Full Text Available Session types have been proposed as a means of statically verifying implementations of communication protocols. Although prior work has been successful in verifying some classes of protocols, it does not cope well with parameterized, multi-actor scenarios with inherent asynchrony. For example, the sliding window protocol is inexpressible in previously proposed session type systems. This paper describes System-A, a new typing language which overcomes many of the expressiveness limitations of prior work. System-A explicitly supports asynchrony and parallelism, as well as multiple forms of parameterization. We define System-A and show how it can be used for the static verification of a large class of asynchronous communication protocols.

  5. Automatic Parameterization Strategy for Cardiac Electrophysiology Simulations.

    Science.gov (United States)

    Costa, Caroline Mendonca; Hoetzl, Elena; Rocha, Bernardo Martins; Prassl, Anton J; Plank, Gernot

    2013-10-01

Driven by recent advances in medical imaging, image segmentation and numerical techniques, computer models of ventricular electrophysiology account for increasingly finer levels of anatomical and biophysical detail. However, considering the large number of model parameters involved, parameterization poses a major challenge. A minimum requirement in combined experimental and modeling studies is to achieve good agreement in activation and repolarization sequences between model and experiment or patient data. In this study, we propose basic techniques which aid in determining bidomain parameters to match activation sequences. An iterative parameterization algorithm is implemented which determines appropriate bulk conductivities that yield prescribed velocities. In addition, a method is proposed for splitting the computed bulk conductivities into individual bidomain conductivities by prescribing anisotropy ratios.
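The iterative idea — rescale a bulk conductivity until the simulated conduction velocity matches a prescribed target — can be sketched as follows, using the standard approximation that conduction velocity scales with the square root of conductivity. The function names and numerical values are hypothetical, not taken from the paper:

```python
import math

def tune_conductivity(measure_velocity, v_target, sigma0=0.2, tol=1e-3, max_iter=20):
    """Iteratively rescale a bulk conductivity until the simulated conduction
    velocity matches the target, exploiting velocity ~ sqrt(conductivity)."""
    sigma = sigma0
    for _ in range(max_iter):
        v = measure_velocity(sigma)
        if abs(v - v_target) / v_target < tol:
            break
        sigma *= (v_target / v) ** 2   # sqrt scaling => quadratic correction
    return sigma

# Stand-in "simulation" obeying v = c * sqrt(sigma); c is unknown to the tuner.
def simulated_velocity(sigma):
    return 0.6 * math.sqrt(sigma)

sigma = tune_conductivity(simulated_velocity, v_target=0.3)
print(round(sigma, 4))
```

Because the stand-in model obeys the square-root law exactly, the loop converges in a single correction; a real bidomain simulation would need a few iterations.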

  8. On parameterization of the inverse problem for estimating aquifer properties using tracer data

    International Nuclear Information System (INIS)

    Kowalsky, M. B.; Finsterle, Stefan A.; Williams, Kenneth H.; Murray, Christopher J.; Commer, Michael; Newcomer, Darrell R.; Englert, Andreas L.; Steefel, Carl I.; Hubbard, Susan

    2012-01-01

    We consider a field-scale tracer experiment conducted in 2007 in a shallow uranium-contaminated aquifer at Rifle, Colorado. In developing a reliable approach for inferring hydrological properties at the site through inverse modeling of the tracer data, decisions made on how to parameterize heterogeneity (i.e., how to represent a heterogeneous distribution using a limited number of parameters that are amenable to estimation) are of paramount importance. We present an approach for hydrological inversion of the tracer data and explore, using a 2D synthetic example at first, how parameterization affects the solution, and how additional characterization data could be incorporated to reduce uncertainty. Specifically, we examine sensitivity of the results to the configuration of pilot points used in a geostatistical parameterization, and to the sampling frequency and measurement error of the concentration data. A reliable solution of the inverse problem is found when the pilot point configuration is carefully implemented. In addition, we examine the use of a zonation parameterization, in which the geometry of the geological facies is known (e.g., from geophysical data or core data), to reduce the non-uniqueness of the solution and the number of unknown parameters to be estimated. When zonation information is only available for a limited region, special treatment in the remainder of the model is necessary, such as using a geostatistical parameterization. Finally, inversion of the actual field data is performed using 2D and 3D models, and results are compared with slug test data.
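As a toy illustration of a pilot-point parameterization (not the authors' geostatistical implementation), the sketch below spreads values defined at a few pilot points onto a grid by inverse-distance weighting, a simple stand-in for the kriging interpolation typically used; all coordinates and values are made up:

```python
def idw_field(pilot_points, grid, power=2.0):
    """Interpolate pilot-point values onto grid nodes by inverse-distance
    weighting -- a crude stand-in for kriging in a geostatistical
    parameterization of heterogeneity."""
    field = []
    for gx, gy in grid:
        num = den = 0.0
        for px, py, val in pilot_points:
            d2 = (gx - px) ** 2 + (gy - py) ** 2
            if d2 == 0.0:            # grid node coincides with a pilot point
                num, den = val, 1.0
                break
            w = 1.0 / d2 ** (power / 2)
            num += w * val
            den += w
        field.append(num / den)
    return field

# Two pilot points holding log-permeability values; interpolate along a line.
pilots = [(0.0, 0.0, -12.0), (1.0, 0.0, -10.0)]
field = idw_field(pilots, [(0.0, 0.0), (0.5, 0.0), (1.0, 0.0)])
print(field)
```

The inverse problem then estimates only the pilot-point values, and the interpolator fills in the full heterogeneous field — which is why the pilot-point configuration matters so much for the solution.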

  9. Introductory quantum mechanics a traditional approach emphasizing connections with classical physics

    CERN Document Server

    Berman, Paul R

    2018-01-01

    This book presents a basic introduction to quantum mechanics at the undergraduate level. Depending on the choice of topics, it can be used for a one-semester or two-semester course. An attempt has been made to anticipate the conceptual problems students encounter when they first study quantum mechanics. Wherever possible, examples are given to illustrate the underlying physics associated with the mathematical equations of quantum mechanics. To this end, connections are made with corresponding phenomena in classical mechanics and electromagnetism. The problems at the end of each chapter are intended to help students master the course material and to explore more advanced topics. Many calculations exploit the extraordinary capabilities of computer programs such as Mathematica, MatLab, and Maple. Students are urged to use these programs, just as they had been urged to use calculators in the past. The treatment of various topics is rather complete, in that most steps in derivations are included. Several of the ch...

  10. Connecting Palau's marine protected areas: a population genetic approach to conservation

    Science.gov (United States)

    Cros, Annick; Toonen, Robert J.; Donahue, Megan J.; Karl, Stephen A.

    2017-09-01

    Bleaching events are becoming more frequent and are projected to become annual in Micronesia by 2040. To prepare for this threat, the Government of Palau is reviewing its marine protected area network to increase the resilience of the reefs by integrating connectivity into the network design. To support their effort, we used high-throughput sequencing of microsatellites to create genotypes of colonies of the coral Acropora hyacinthus to characterize population genetic structure and dispersal patterns that led to the recovery of Palau's reefs from a 1998 bleaching event. We found no evidence of a founder effect or refugium where colonies may have survived to recolonize the reef. Instead, we found significant pairwise F' st values, indicating population structure and low connectivity among most of the 25 sites around Palau. We used kinship to measure genetic differences at the individual level among sites and found that differences were best explained by the degree of exposure to the ocean [ F 1,20 = 3.015, Pr(> F) = 0.01], but with little of the total variation explained. A permutation test of the pairwise kinship coefficients revealed that there was self-seeding within sites. Overall, the data point to the population of A. hyacinthus in Palau recovering from a handful of surviving colonies with population growth primarily from self-seeding and little exchange among sites. This finding has significant implications for the management strategies for the reefs of Palau, and we recommend increasing the number and distribution of management areas around Palau to capture the genetic architecture and increase the chances of protecting potential refuges in the future.

  11. Cortical Signatures of Dyslexia and Remediation: An Intrinsic Functional Connectivity Approach

    Science.gov (United States)

    Koyama, Maki S.; Di Martino, Adriana; Kelly, Clare; Jutagir, Devika R.; Sunshine, Jessica; Schwartz, Susan J.; Castellanos, Francisco X.; Milham, Michael P.

    2013-01-01

This observational, cross-sectional study investigates cortical signatures of developmental dyslexia, particularly from the perspective of behavioral remediation. We employed resting-state fMRI, and compared intrinsic functional connectivity (iFC) patterns of known reading regions (seeds) among three dyslexia groups characterized by (a) no remediation (current reading and spelling deficits), (b) partial remediation (only reading deficit remediated), and (c) full remediation (both reading and spelling deficits remediated), and a group of age- and IQ-matched typically developing children (TDC) (total N = 44, age range = 7–15 years). We observed significant group differences in iFC of two seeds located in the left posterior reading network – left intraparietal sulcus (L.IPS) and left fusiform gyrus (L.FFG). Specifically, iFC between L.IPS and left middle frontal gyrus was significantly weaker in all dyslexia groups, irrespective of remediation status/literacy competence, suggesting that persistent dysfunction in the fronto-parietal attention network characterizes dyslexia. Additionally, relative to both TDC and the no remediation group, the remediation groups exhibited stronger iFC between L.FFG and right middle occipital gyrus (R.MOG). The full remediation group also exhibited stronger negative iFC between the same L.FFG seed and right medial prefrontal cortex (R.MPFC), a core region of the default network. These results suggest that behavioral remediation may be associated with compensatory changes anchored in L.FFG, which reflect atypically stronger coupling between posterior visual regions (L.FFG-R.MOG) and greater functional segregation between task-positive and task-negative regions (L.FFG-R.MPFC). These findings were bolstered by significant relationships between the strength of the identified functional connections and literacy scores. We conclude that examining iFC can reveal cortical signatures of dyslexia with particular promise for monitoring

  12. Merging Approaches to Explore Connectivity in the Anemonefish, Amphiprion bicinctus, along the Saudi Arabian Coast of the Red Sea

    KAUST Repository

    Nanninga, Gerrit B.

    2013-09-01

The field of marine population connectivity is receiving growing attention from ecologists worldwide. The degree to which metapopulations are connected via larval dispersal has vital ramifications for demographic and evolutionary dynamics and largely determines the way we manage threatened coastal ecosystems. Here we addressed different questions relating to connectivity by integrating direct and indirect genetic approaches over different spatial and ecological scales in a coral reef fish in the Red Sea. We developed 35 novel microsatellite loci for our study organism the two-band anemonefish Amphiprion bicinctus (Rüppel 1830), which served as the basis of the following approaches. First, we collected nearly one thousand samples of A. bicinctus from 19 locations across 1500 km along the Saudi Arabian coast to infer population genetic structure. Genetic variability along the northern and central coast was weak, but showed a significant break at approximately 20°N. Implementing a model of isolation by environment with chlorophyll-a concentrations and geographic distance as predictors we were able to explain over 90% of the genetic variability in the data (R2 = 0.92). For the second approach we sampled 311 (c. 99%) putative parents and 172 juveniles at an isolated reef, Quita al Girsh (QG), to estimate self-recruitment using genetic parentage analysis. Additionally we collected 176 juveniles at surrounding locations to estimate larval dispersal from QG and ran a biophysical dispersal model of the system with real-time climatological forcing. In concordance with model predictions, we found a complete lack (c. 0.5%) of self-recruitment over two sampling periods within our study system, thus presenting the first empirical evidence for a largely open reef fish population. Lastly, to conceptualize different hypotheses regarding the underlying processes and mechanisms of self-recruitment versus long-distance dispersal in marine organisms with pelagic larval stages, I

  13. Modeling the Galaxy-Halo Connection: An open-source approach with Halotools

    Science.gov (United States)

    Hearin, Andrew

    2016-03-01

    Although the modern form of galaxy-halo modeling has been in place for over ten years, there exists no common code base for carrying out large-scale structure calculations. Considering, for example, the advances in CMB science made possible by Boltzmann-solvers such as CMBFast, CAMB and CLASS, there are clear precedents for how theorists working in a well-defined subfield can mutually benefit from such a code base. Motivated by these and other examples, I present Halotools: an open-source, object-oriented python package for building and testing models of the galaxy-halo connection. Halotools is community-driven, and already includes contributions from over a dozen scientists spread across numerous universities. Designed with high-speed performance in mind, the package generates mock observations of synthetic galaxy populations with sufficient speed to conduct expansive MCMC likelihood analyses over a diverse and highly customizable set of models. The package includes an automated test suite and extensive web-hosted documentation and tutorials (halotools.readthedocs.org). I conclude the talk by describing how Halotools can be used to analyze existing datasets to obtain robust and novel constraints on galaxy evolution models, and by outlining the Halotools program to prepare the field of cosmology for the arrival of Stage IV dark energy experiments.

  14. Connected Vehicle Pilot Deployment Program phase 1 : comprehensive deployment plan : New York City : volume 1 : technical application : part I : technical and management approach.

    Science.gov (United States)

    2016-08-01

    This document describes the Deployment Plan for the New York City Department of Transportation (NYC) Connected Vehicle Pilot Deployment (CVPD) Project. This plan describes the approach to complete Phase 2 Design/Build/Test, and Phase 3 Operate and Ma...

  15. An Implementation Of Icare Approach (Introduction, Connection, Application, Reflection, Extension) to Improve The Creative Thinking Skills

    Science.gov (United States)

    Carni; Maknun, J.; Siahaan, P.

    2017-02-01

This study aims to provide an overview of the increase in creative thinking skills of tenth-grade high school students as the impact of implementing the ICARE approach on dynamic electricity material. The study uses a pre-experimental method with a one-group pretest-posttest design. The participants are tenth-grade students at one randomly selected senior high school in West Java. Data were collected through a pretest and posttest in order to measure the increase in students' creative thinking skills. The results show that the implementation of the ICARE approach generally increased the students' creative thinking skills: the average N-Gain score was 0.52, categorized as medium. This is attributed to the application stage of the ICARE approach.
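The reported N-Gain of 0.52 is presumably Hake's normalized gain, the achieved improvement divided by the possible improvement. A minimal sketch (the pre/post scores below are hypothetical, chosen only to reproduce a gain of 0.52):

```python
def normalized_gain(pre, post, max_score=100.0):
    """Hake's normalized gain: achieved improvement / possible improvement."""
    return (post - pre) / (max_score - pre)

# Hypothetical class-average scores; 0.3 <= g < 0.7 is the 'medium' category.
g = normalized_gain(pre=40.0, post=71.2)
print(round(g, 2))
```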

  16. A Solar Radiation Parameterization for Atmospheric Studies. Volume 15

    Science.gov (United States)

    Chou, Ming-Dah; Suarez, Max J. (Editor)

    1999-01-01

The solar radiation parameterization (CLIRAD-SW) developed at the Goddard Climate and Radiation Branch for application to atmospheric models is described. It includes absorption by water vapor, O3, O2, CO2, clouds, and aerosols, and scattering by clouds, aerosols, and gases. Depending upon the nature of the absorption, different approaches are applied to different absorbers. In the ultraviolet and visible regions, the spectrum is divided into 8 bands, and a single O3 absorption coefficient and Rayleigh scattering coefficient are used for each band. In the infrared, the spectrum is divided into 3 bands, and the k-distribution method is applied for water vapor absorption. The flux reduction due to O2 is derived from a simple function, while the flux reduction due to CO2 is derived from precomputed tables. Cloud single-scattering properties are parameterized, separately for liquid drops and ice, as functions of water amount and effective particle size. A maximum-random approximation is adopted for the overlapping of clouds at different heights. Fluxes are computed using the delta-Eddington approximation.
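The k-distribution method mentioned for the infrared bands can be illustrated with a toy example: the band-mean transmittance is a weighted sum of exponentials over a few representative absorption coefficients. The quadrature points below are made up for illustration, not CLIRAD-SW's values:

```python
import math

def kdist_transmittance(u, k_points):
    """Band-mean transmittance via the k-distribution: T(u) = sum_i w_i * exp(-k_i * u),
    where (k_i, w_i) sample the distribution of absorption coefficients in the band
    and the weights w_i sum to 1."""
    return sum(w * math.exp(-k * u) for k, w in k_points)

# Hypothetical 3-point quadrature for one water-vapor band.
kpts = [(0.01, 0.5), (0.1, 0.3), (1.0, 0.2)]
t0 = kdist_transmittance(0.0, kpts)   # no absorber along the path -> T = 1
t1 = kdist_transmittance(5.0, kpts)   # finite absorber amount -> T < 1
print(round(t0, 6), round(t1, 6))
```

This replaces an expensive line-by-line spectral integral with a handful of exponentials per band, which is what makes the scheme fast enough for climate models.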

  17. Determining Connections between the Daily Lives of Zoo Elephants and Their Welfare: An Epidemiological Approach.

    Directory of Open Access Journals (Sweden)

    Cheryl L Meehan

    Full Text Available Concerns about animal welfare increasingly shape people's views about the acceptability of keeping animals for food production, biomedical research, and in zoos. The field of animal welfare science has developed over the past 50 years as a method of investigating these concerns via research that assesses how living in human-controlled environments influences the behavior, health and affective states of animals. Initially, animal welfare research focused on animals in agricultural settings, but the field has expanded to zoos because good animal welfare is essential to zoos' mission of promoting connections between animals and visitors and raising awareness of conservation issues. A particular challenge for zoos is ensuring good animal welfare for long-lived, highly social species like elephants. Our main goal in conducting an epidemiological study of African (Loxodonta africana and Asian (Elephas maximus elephant welfare in 68 accredited North American zoos was to understand the prevalence of welfare indicators in the population and determine the aspects of an elephant's zoo environment, social life and management that are most important to prevent and reduce a variety of welfare problems. In this overview, we provide a summary of the findings of the nine papers in the collection titled: Epidemiological Investigations of North American Zoo Elephant Welfare with a focus on the life history, social, housing, and management factors found to be associated with particular aspects of elephant welfare, including the performance of abnormal behavior, foot and joint problems, recumbence, walking rates, and reproductive health issues. Social and management factors were found to be important for multiple indicators of welfare, while exhibit space was found to be less influential than expected. 
This body of work results from the largest prospective zoo-based animal welfare study conducted to date and sets in motion the process of using science-based welfare

  18. Impact of model structure and parameterization on Penman-Monteith type evaporation models

    KAUST Repository

    Ershadi, A.; McCabe, Matthew; Evans, J.P.; Wood, E.F.

    2015-01-01

    Overall, the results illustrate the sensitivity of Penman-Monteith type models to model structure, parameterization choice and biome type. A particular challenge in flux estimation relates to developing robust and broadly applicable model formulations. With many choices available for use, providing guidance on the most appropriate scheme to employ is required to advance approaches for routine global scale flux estimates, undertake hydrometeorological assessments or develop hydrological forecasting tools, amongst many other applications. In such cases, a multi-model ensemble or biome-specific tiled evaporation product may be an appropriate solution, given the inherent variability in model and parameterization choice that is observed within single product estimates.

  19. Using human rights to improve maternal and neonatal health: history, connections and a proposed practical approach.

    Science.gov (United States)

    Gruskin, Sofia; Cottingham, Jane; Hilber, Adriane Martin; Kismodi, Eszter; Lincetto, Ornella; Roseman, Mindy Jane

    2008-08-01

    We describe the historical development of how maternal and neonatal mortality in the developing world came to be seen as a public-health concern, a human rights concern, and ultimately as both, leading to the development of approaches using human rights concepts and methods to advance maternal and neonatal health. We describe the different contributions of the international community, women's health advocates and human rights activists. We briefly present a recent effort, developed by WHO with the Harvard Program on International Health and Human Rights, that applies a human rights framework to reinforce current efforts to reduce maternal and neonatal mortality.

  20. Better models are more effectively connected models

    Science.gov (United States)

    Nunes, João Pedro; Bielders, Charles; Darboux, Frederic; Fiener, Peter; Finger, David; Turnbull-Lloyd, Laura; Wainwright, John

    2016-04-01

    can be represented in models: either by allowing it to emerge from model behaviour or by parameterizing it inside model structures; and on the appropriate scale at which processes should be represented explicitly or implicitly. It will also explore how modellers themselves approach connectivity through the results of a community survey. Finally, it will present the outline of an international modelling exercise aimed at assessing how different modelling concepts can capture connectivity in real catchments.

  1. Parameterized and resolved Southern Ocean eddy compensation

    Science.gov (United States)

    Poulsen, Mads B.; Jochum, Markus; Nuterman, Roman

    2018-04-01

    The ability to parameterize Southern Ocean eddy effects in a forced coarse resolution ocean general circulation model is assessed. The transient model response to a suite of different Southern Ocean wind stress forcing perturbations is presented and compared to identical experiments performed with the same model in 0.1° eddy-resolving resolution. With forcing of present-day wind stress magnitude and a thickness diffusivity formulated in terms of the local stratification, it is shown that the Southern Ocean residual meridional overturning circulation in the two models is different in structure and magnitude. It is found that the difference in the upper overturning cell is primarily explained by an overly strong subsurface flow in the parameterized eddy-induced circulation while the difference in the lower cell is mainly ascribed to the mean-flow overturning. With a zonally constant decrease of the zonal wind stress by 50% we show that the absolute decrease in the overturning circulation is insensitive to model resolution, and that the meridional isopycnal slope is relaxed in both models. The agreement between the models is not reproduced by a 50% wind stress increase, where the high resolution overturning decreases by 20%, but increases by 100% in the coarse resolution model. It is demonstrated that this difference is explained by changes in surface buoyancy forcing due to a reduced Antarctic sea ice cover, which strongly modulate the overturning response and ocean stratification. We conclude that the parameterized eddies are able to mimic the transient response to altered wind stress in the high resolution model, but partly misrepresent the unperturbed Southern Ocean meridional overturning circulation and associated heat transports.

  2. Resting state cortico-cerebellar functional connectivity networks: A comparison of anatomical and self-organizing map approaches

    Directory of Open Access Journals (Sweden)

    Jessica A Bernard

    2012-08-01

    Full Text Available The cerebellum plays a role in a wide variety of complex behaviors. In order to better understand the role of the cerebellum in human behavior, it is important to know how this structure interacts with cortical and other subcortical regions of the brain. To date, several studies have investigated the cerebellum using resting-state functional connectivity magnetic resonance imaging (fcMRI; Buckner et al., 2011; Krienen & Buckner, 2009; O’Reilly et al., 2009. However, none of this work has taken an anatomically-driven approach. Furthermore, though detailed maps of cerebral cortex and cerebellum networks have been proposed using different network solutions based on the cerebral cortex (Buckner et al., 2011, it remains unknown whether or not an anatomical lobular breakdown best encompasses the networks of the cerebellum. Here, we used fcMRI to create an anatomically-driven cerebellar connectivity atlas. Timecourses were extracted from the lobules of the right hemisphere and vermis. We found distinct networks for the individual lobules with a clear division into motor and non-motor regions. We also used a self-organizing map algorithm to parcellate the cerebellum. This allowed us to investigate redundancy and independence of the anatomically identified cerebellar networks. We found that while anatomical boundaries in the anterior cerebellum provide functional subdivisions of a larger motor grouping defined using our self-organizing map algorithm, in the posterior cerebellum, the lobules were made up of sub-regions associated with distinct functional networks. Together, our results indicate that the lobular boundaries of the human cerebellum are not indicative of functional boundaries, though anatomical divisions can be useful, as is the case of the anterior cerebellum. Additionally, driving the analyses from the cerebellum is key to determining the complete picture of functional connectivity within the structure.
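As a rough sketch of the parcellation idea (not the authors' algorithm or their fMRI data), the following trains a minimal one-dimensional self-organizing map on scalar "timecourse features" drawn from three well-separated clusters; after annealing the learning rate and neighbourhood width, each map unit should settle near one cluster:

```python
import math
import random

def train_som(data, n_units=3, epochs=100, lr0=0.5, sigma0=1.0, seed=1):
    """Minimal 1-D self-organizing map on scalar data: the best-matching unit
    and its grid neighbours move toward each sample; the learning rate and
    neighbourhood width decay over epochs."""
    rng = random.Random(seed)
    w = [rng.uniform(min(data), max(data)) for _ in range(n_units)]
    samples = list(data)
    for t in range(epochs):
        lr = lr0 * (1 - t / epochs)                   # decaying learning rate
        sigma = max(sigma0 * (1 - t / epochs), 0.1)   # shrinking neighbourhood
        rng.shuffle(samples)
        for x in samples:
            bmu = min(range(n_units), key=lambda i: abs(w[i] - x))  # best match
            for i in range(n_units):
                h = math.exp(-((i - bmu) ** 2) / (2 * sigma ** 2))
                w[i] += lr * h * (x - w[i])
    return sorted(w)

random.seed(0)
data = ([random.gauss(0, 0.2) for _ in range(30)]
        + [random.gauss(5, 0.2) for _ in range(30)]
        + [random.gauss(10, 0.2) for _ in range(30)])
units = train_som(data)
```

In the fMRI setting the samples would be voxel timecourses and the distance a correlation-based measure, but the competitive-learning mechanics are the same.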

  3. Collaborative Project. 3D Radiative Transfer Parameterization Over Mountains/Snow for High-Resolution Climate Models. Fast physics and Applications

    Energy Technology Data Exchange (ETDEWEB)

    Liou, Kuo-Nan [Univ. of California, Los Angeles, CA (United States)

    2016-02-09

Under the support of the aforementioned DOE Grant, we have made two fundamental contributions to atmospheric and climate sciences: (1) Develop an efficient 3-D radiative transfer parameterization for application to intense and intricate inhomogeneous mountain/snow regions. (2) Innovate a stochastic parameterization for light absorption by internally mixed black carbon and dust particles in snow grains for understanding and physical insight into snow albedo reduction in climate models. With reference to item (1), we divided solar fluxes reaching mountain surfaces into five components: direct and diffuse fluxes, direct- and diffuse-reflected fluxes, and coupled mountain-mountain flux. “Exact” 3D Monte Carlo photon tracing computations can then be performed for these solar flux components to compare with those calculated from the conventional plane-parallel (PP) radiative transfer program readily available in climate models. Subsequently, parameterizations of the deviations of 3D from PP results for the five flux components are carried out by means of multiple linear regression analysis associated with topographic information, including elevation, solar incident angle, sky view factor, and terrain configuration factor. We derived five regression equations with high statistical correlations for flux deviations and successfully incorporated this efficient parameterization into the WRF model, which was used as the testbed in connection with the Fu-Liou-Gu PP radiation scheme that has been included in the WRF physics package. Incorporating this 3D parameterization program, we conducted simulations of WRF and CCSM4 to understand and evaluate the mountain/snow effect on snow albedo reduction during seasonal transition and the interannual variability for snowmelt, cloud cover, and precipitation over the Western United States presented in the final report. With reference to item (2), we developed in our previous research a geometric-optics surface-wave approach (GOS) for the

  4. A structural equation modeling approach to understanding pathways that connect socioeconomic status and smoking.

    Science.gov (United States)

    Martinez, Sydney A; Beebe, Laura A; Thompson, David M; Wagener, Theodore L; Terrell, Deirdra R; Campbell, Janis E

    2018-01-01

    The inverse association between socioeconomic status and smoking is well established, yet the mechanisms that drive this relationship are unclear. We developed and tested four theoretical models of the pathways that link socioeconomic status to current smoking prevalence using a structural equation modeling (SEM) approach. Using data from the 2013 National Health Interview Survey, we selected four indicator variables (poverty ratio, personal earnings, educational attainment, and employment status) that we hypothesized underlie a latent variable, socioeconomic status. We measured the direct, indirect, and total effects of socioeconomic status on smoking along four pathways, through latent variables representing social cohesion, financial strain, sleep disturbance, and psychological distress. Results of the model indicated that the probability of being a smoker decreased by 26% of a standard deviation for every one standard deviation increase in socioeconomic status. The direct effects of socioeconomic status on smoking accounted for the majority of the total effects, but the overall model also included significant indirect effects. Of the four mediators, sleep disturbance and psychological distress had the largest total effects on current smoking. We explored the use of structural equation modeling in epidemiology to quantify the effects of socioeconomic status on smoking through four social and psychological factors and to identify potential targets for interventions. A better understanding of the complex relationship between socioeconomic status and smoking is critical as we continue to reduce the burden of tobacco and eliminate health disparities related to smoking.
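
    In SEM terms, the indirect effect through each mediator is the product of the standardized path coefficients, and the total effect is the direct path plus the sum of those products. A sketch with hypothetical coefficients (not the paper's estimates):

```python
# Sketch: product-of-coefficients decomposition of standardized effects
# in a mediation SEM. All path coefficients below are hypothetical.

# a[m]: SES -> mediator m;  b[m]: mediator m -> smoking
a = {"social_cohesion": 0.20, "financial_strain": -0.35,
     "sleep_disturbance": -0.25, "psychological_distress": -0.30}
b = {"social_cohesion": -0.10, "financial_strain": 0.15,
     "sleep_disturbance": 0.20, "psychological_distress": 0.25}
direct = -0.15  # direct SES -> smoking path

indirect = {m: a[m] * b[m] for m in a}       # product-of-coefficients rule
total = direct + sum(indirect.values())
print(round(total, 4))  # → -0.3475
```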

  5. Multi-omics approach to elucidate the gut microbiota activity: Metaproteomics and metagenomics connection.

    Science.gov (United States)

    Guirro, Maria; Costa, Andrea; Gual-Grau, Andreu; Mayneris-Perxachs, Jordi; Torrell, Helena; Herrero, Pol; Canela, Núria; Arola, Lluís

    2018-02-10

    Over the last few years, the application of high-throughput meta-omics methods has greatly improved knowledge of the gut ecosystem, linking its biodiversity to host health conditions and offering complementary support to classical microbiology. The gut microbiota plays a crucial role in relevant diseases such as obesity or cardiovascular disease (CVD), and its regulation is closely influenced by several factors, such as dietary composition. In fact, polyphenol-rich diets are the most palatable treatment to prevent hypertension associated with CVD, although polyphenol-microbiota interactions have not been completely elucidated. For this reason, the aim of this study was to evaluate the gut microbiota of obese rats supplemented with hesperidin after being fed a cafeteria or standard diet, using a multi-omics approach combining metagenomics and metaproteomics analyses. We report that the cafeteria diet induces obesity, resulting in changes in microbiota composition that are related to functional alterations at the proteome level. In addition, hesperidin supplementation alters microbiota diversity as well as proteins involved in important metabolic pathways. Overall, deeper strategies for integrating the omics sciences are necessary to understand the complex relationships between the host, the gut microbiota, and diet. © 2018 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  6. Pacific connections for health, ecosystems and society: new approaches to the land-water-health nexus.

    Science.gov (United States)

    Parkes, Margot W

    2016-03-01

    Renewed effort to understand the social-ecological context of health is drawing attention to the dynamics of land and water resources and their combined influence on the determinants of health. A new area of research, education and policy is emerging that focuses on the land-water-health nexus: this orientation is applicable from small wetlands through to large-scale watersheds or river basins, and draws attention to the benefits of combined land and water governance, as well as the interrelated implications for health, ecological and societal concerns. Informed by research precedents, imperatives and collaborations emerging in Canada and parts of Oceania, this review profiles three integrative, applied approaches that are bringing attention to the importance of the land-water-health nexus within the Pacific Basin: wetlands and watersheds as intersectoral settings to address land-water-health dynamics; tools to integrate health, ecological and societal dynamics at the land-water-health nexus; and indigenous leadership that is linking health and well-being with land and water governance. Emphasis is given to key characteristics of a new generation of inquiry and action at the land-water-health nexus, as well as capacity-building, practice and policy opportunities to address converging environmental, social and health objectives linked to the management and governance of land and water resources.

  7. A subgrid parameterization scheme for precipitation

    Directory of Open Access Journals (Sweden)

    S. Turner

    2012-04-01

    Full Text Available With increasing computing power, the horizontal resolution of numerical weather prediction (NWP) models is improving and today reaches 1 to 5 km. Nevertheless, cloud and precipitation formation are still subgrid-scale processes for most cloud types, such as cumulus and stratocumulus. Subgrid-scale parameterizations for water vapor condensation have been in use for many years and are based on a prescribed probability density function (PDF) of relative humidity spatial variability within the model grid box, thus providing a diagnosis of the cloud fraction. A similar scheme is developed and tested here. It is based on a prescribed PDF of cloud water variability and a threshold value of liquid water content for droplet collection to derive a rain fraction within the model grid. Precipitation of rainwater, however, raises additional concerns regarding the overlap of cloud and rain fractions. The scheme is developed following an analysis of data collected during field campaigns in stratocumulus (DYCOMS-II) and fair weather cumulus (RICO), and tested in a 1-D framework against large eddy simulations of these observed cases. The new parameterization is then implemented in a 3-D NWP model with a horizontal resolution of 2.5 km to simulate real cases of precipitating cloud systems over France.
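
    A minimal sketch of this diagnosis, assuming (for analytic convenience) an exponential PDF of in-cloud liquid water rather than the calibrated PDF of the paper:

```python
import math

def rain_fraction(mean_qc, q_crit, cloud_fraction=1.0):
    """Fraction of the grid box where liquid water content exceeds the
    droplet-collection threshold, assuming an exponential PDF of q_c
    inside cloud: P(q_c > q_crit) = exp(-q_crit / mean_qc).
    The exponential shape is an illustrative assumption."""
    if mean_qc <= 0.0:
        return 0.0
    return cloud_fraction * math.exp(-q_crit / mean_qc)

# Example: mean in-cloud LWC 0.3 g/kg, threshold 0.5 g/kg, 40% cloud cover
f_rain = rain_fraction(0.3, 0.5, cloud_fraction=0.4)
print(round(f_rain, 4))
```

    The rain fraction is necessarily no larger than the cloud fraction, which is where the cloud/rain overlap concern noted above enters.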

  8. Connectivity of the intracytoplasmic membrane of Rhodobacter sphaeroides: a functional approach.

    Science.gov (United States)

    Verméglio, André; Lavergne, Jérôme; Rappaport, Fabrice

    2016-01-01

    The photosynthetic apparatus in the bacterium Rhodobacter sphaeroides is mostly present in intracytoplasmic membrane invaginations. It has long been debated whether these invaginations remain in topological continuity with the cytoplasmic membrane, or form isolated chromatophore vesicles. This issue is revisited here by functional approaches. The ionophore gramicidin was used as a probe of the relative size of the electro-osmotic units in isolated chromatophores, spheroplasts, or intact cells. The decay of the membrane potential was monitored from the electrochromic shift of carotenoids. The half-time of the decay induced by a single channel in intact cells was about 6 ms, thus three orders of magnitude slower than in isolated chromatophores. In spheroplasts obtained by lysis of the cell wall, the single channel decay was still slower (~23 ms) and the sensitivity toward the gramicidin concentration was enhanced 1,000-fold with respect to isolated chromatophores. These results indicate that the area of the functional membrane in cells or spheroplasts is about three orders of magnitude larger than that of isolated chromatophores. Intracytoplasmic vesicles, if present, could contribute to at most 10% of the photosynthetic apparatus in intact cells of Rba. sphaeroides. Similar conclusions were obtained from the effect of a ∆pH-induced diffusion potential in intact cells. This caused a large electrochromic response of carotenoids, of similar amplitude as the light-induced change, indicating that most of the system is sensitive to a pH change of the external medium. A single internal membrane and periplasmic space may offer significant advantages concerning renewal of the photosynthetic apparatus and reallocation of the components shared with other bioenergetic pathways.

  9. Systems biological approach of molecular descriptors connectivity: optimal descriptors for oral bioavailability prediction.

    Science.gov (United States)

    Ahmed, Shiek S S J; Ramakrishnan, V

    2012-01-01

    Poor oral bioavailability is an important parameter accounting for the failure of drug candidates. Approximately 50% of developing drugs fail because of unfavorable oral bioavailability. In silico prediction of oral bioavailability (%F) based on physicochemical properties is highly needed. Although many computational models have been developed to predict oral bioavailability, their accuracy remains low, with a significant number of false positives. In this study, we present an oral bioavailability model based on a systems biological approach, using a machine learning algorithm coupled with an optimal discriminative set of physicochemical properties. The models were developed from 247 computationally derived physicochemical descriptors for 2279 molecules, of which 969, 605, and 705 molecules corresponded to the oral bioavailability, human intestinal absorption (HIA), and Caco-2 permeability data sets, respectively. Partial least squares discriminant analysis showed that 49 HIA descriptors and 50 Caco-2 descriptors were the major contributors to classification into groups. Of these, 47 descriptors were common to HIA and Caco-2, suggesting that they play a vital role in classifying oral bioavailability. To determine the best machine learning algorithm, 21 classifiers were compared using a bioavailability data set of 969 molecules with 47 descriptors. Each molecule in the data set was represented by a set of 47 physicochemical properties, labeled (+bioavailability/-bioavailability) to indicate good-bioavailability/poor-bioavailability molecules. The best-performing algorithm was the logistic algorithm. The correlation-based feature selection (CFS) algorithm was implemented, confirming that these 47 descriptors are the fundamental descriptors for oral bioavailability prediction.
The logistic algorithm with 47 selected descriptors correctly predicted the oral bioavailability, with a predictive accuracy
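
    The classification step can be sketched with a small logistic model trained by gradient descent; the two synthetic "descriptors" below stand in for the 47 physicochemical descriptors, and the data and labels are invented:

```python
import math
import random

def train_logistic(X, y, lr=0.5, epochs=2000):
    """Logistic regression by batch gradient descent; w[0] is the bias."""
    w = [0.0] * (len(X[0]) + 1)
    for _ in range(epochs):
        grad = [0.0] * len(w)
        for xi, yi in zip(X, y):
            z = w[0] + sum(wj * xj for wj, xj in zip(w[1:], xi))
            p = 1.0 / (1.0 + math.exp(-z))     # predicted P(+bioavailability)
            err = p - yi
            grad[0] += err
            for j, xj in enumerate(xi):
                grad[j + 1] += err * xj
        w = [wj - lr * g / len(X) for wj, g in zip(w, grad)]
    return w

def predict(w, xi):
    z = w[0] + sum(wj * xj for wj, xj in zip(w[1:], xi))
    return 1 if z >= 0.0 else 0

# Synthetic "descriptors": label is +bioavailability when their sum is large
random.seed(0)
X = [[random.uniform(0, 1), random.uniform(0, 1)] for _ in range(200)]
y = [1 if x1 + x2 > 1.0 else 0 for x1, x2 in X]
w = train_logistic(X, y)
acc = sum(predict(w, xi) == yi for xi, yi in zip(X, y)) / len(X)
print(round(acc, 2))
```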

  10. Computational Approach for Securing Radiology-Diagnostic Data in Connected Health Network using High-Performance GPU-Accelerated AES.

    Science.gov (United States)

    Adeshina, A M; Hashim, R

    2017-03-01

    Diagnostic radiology is a core and integral part of modern medicine, paving the way for primary care physicians in disease diagnosis, treatment and therapy management. Recent standard healthcare procedures have benefitted immensely from contemporary information technology, which has revolutionized the acquisition, storage and sharing of diagnostic data for efficient and timely diagnosis of diseases. The connected health network was introduced as an alternative to the ageing traditional concept of the healthcare system, improving hospital-physician connectivity and clinical collaboration. Undoubtedly, this modern medicinal approach has drastically improved healthcare, but at the expense of high computational cost and possible breaches of diagnostic privacy. Consequently, a number of cryptographic techniques have recently been applied to clinical applications, but the challenge of being unable to successfully encrypt both image and textual data persists. Furthermore, reducing the encryption-decryption processing time for medical datasets to a considerably lower computational cost, without jeopardizing the required security strength of the encryption algorithm, remains an outstanding issue. This study proposes a secured radiology-diagnostic data framework for a connected health network using a high-performance GPU-accelerated Advanced Encryption Standard (AES). The study was evaluated with radiology image datasets consisting of brain MR and CT datasets obtained from the Department of Surgery, University of North Carolina, USA, and the Swedish National Infrastructure for Computing. Sample patients' notes from the University of North Carolina School of Medicine at Chapel Hill were also used to evaluate the framework for its strength in encrypting-decrypting textual data in the form of medical reports. Significantly, the framework is not only able to accurately encrypt and decrypt medical image datasets, but it also

  11. A multi-scale qualitative approach to assess the impact of urbanization on natural habitats and their connectivity

    Energy Technology Data Exchange (ETDEWEB)

    Scolozzi, Rocco, E-mail: rocco.scolozzi@fmach.it [Sustainable Agro-ecosystems and Bioresources Department, IASMA Research and Innovation Centre, Fondazione Edmund Mach, Via E. Mach 1, 38010 San Michele all'Adige (Italy)]; Geneletti, Davide, E-mail: geneletti@ing.unitn.it [Department of Civil and Environmental Engineering, University of Trento, Trento (Italy)]

    2012-09-15

    Habitat loss and fragmentation are often concurrent to land conversion and urbanization. Simple application of GIS-based landscape pattern indicators may be not sufficient to support meaningful biodiversity impact assessment. A review of the literature reveals that habitat definition and habitat fragmentation are frequently inadequately considered in environmental assessment, notwithstanding the increasing number of tools and approaches reported in the landscape ecology literature. This paper presents an approach for assessing impacts on habitats on a local scale, where availability of species data is often limited, developed for an alpine valley in northern Italy. The perspective of the methodology is multiple scale and species-oriented, and provides both qualitative and quantitative definitions of impact significance. A qualitative decision model is used to assess ecological values in order to support land-use decisions at the local level. Building on recent studies in the same region, the methodology integrates various approaches, such as landscape graphs, object-oriented rule-based habitat assessment and expert knowledge. The results provide insights into future habitat loss and fragmentation caused by land-use changes, and aim at supporting decision-making in planning and suggesting possible ecological compensation. - Highlights: ► Many environmental assessments inadequately consider habitat loss and fragmentation. ► Species-perspective for defining habitat quality and connectivity is claimed. ► Species-based tools are difficult to be applied with limited availability of data. ► We propose a species-oriented and multiple scale-based qualitative approach. ► Advantages include being species-oriented and providing value-based information.

  13. A general framework for thermodynamically consistent parameterization and efficient sampling of enzymatic reactions.

    Directory of Open Access Journals (Sweden)

    Pedro Saa

    2015-04-01

    Full Text Available Kinetic models provide the means to understand and predict the dynamic behaviour of enzymes upon different perturbations. Despite their obvious advantages, classical parameterizations require large amounts of data to fit their parameters. In particular, enzymes displaying complex reaction and regulatory (allosteric) mechanisms require a great number of parameters and are therefore often represented by approximate formulae, thereby facilitating the fitting but ignoring many real kinetic behaviours. Here, we show that full exploration of the plausible kinetic space for any enzyme can be achieved using sampling strategies, provided a thermodynamically feasible parameterization is used. To this end, we developed a General Reaction Assembly and Sampling Platform (GRASP) capable of consistently parameterizing and sampling accurate kinetic models using minimal reference data. The parameterization integrates the generalized MWC model and the elementary reaction formalism. By formulating the appropriate thermodynamic constraints, our framework enables parameterization of any oligomeric enzyme kinetics without sacrificing complexity or using simplifying assumptions. This thermodynamically safe parameterization relies on the definition of a reference state upon which feasible parameter sets can be efficiently sampled. Uniform sampling of the kinetic space enabled dissecting enzyme catalysis and revealing the impact of thermodynamics on reaction kinetics. Our analysis distinguished three reaction elasticity regions for common biochemical reactions: a steep linear region (0 > ΔGr > -2 kJ/mol), a transition region (-2 > ΔGr > -20 kJ/mol) and a constant elasticity region (ΔGr < -20 kJ/mol). We also applied this framework to model more complex kinetic behaviours, such as the monomeric cooperativity of the mammalian glucokinase and the ultrasensitive response of the phosphoenolpyruvate carboxylase of Escherichia coli. In both cases, our approach described appropriately not only
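
    As a sketch of why elasticities behave this way, consider the thermodynamic factor of the flux alone, a textbook simplification of a reversible reaction (the full analysis in the paper also includes saturation and regulatory terms):

```python
import math

RT = 8.314e-3 * 298.15   # kJ/mol at 25 °C

def thermo_elasticity(dG_r):
    """Thermodynamic contribution to the substrate elasticity of the net
    flux of a simple reversible reaction: gamma / (1 - gamma), with
    gamma = exp(dG_r / RT). Near equilibrium (dG_r -> 0-) this term
    diverges (steep region); far from equilibrium it vanishes, leaving
    only the roughly constant kinetic (saturation) part."""
    gamma = math.exp(dG_r / RT)
    return gamma / (1.0 - gamma)

for dG in (-1.0, -10.0, -30.0):   # one value per region identified above
    print(dG, round(thermo_elasticity(dG), 4))
```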

  14. A general Euclidean connection for so(n,m) lie algebra and the algebraic approach to scattering

    International Nuclear Information System (INIS)

    Ionescu, R.A.

    1994-11-01

    We obtain a general Euclidean connection for so(n,m). This Euclidean connection allows an algebraic derivation of the S matrix and it reduces to the known one in suitable circumstances. (author). 8 refs

  15. Connected Traveler

    Energy Technology Data Exchange (ETDEWEB)

    2016-06-01

    The Connected Traveler framework seeks to boost the energy efficiency of personal travel and the overall transportation system by maximizing the accuracy of predicted traveler behavior in response to real-time feedback and incentives. It is anticipated that this approach will establish a feedback loop that 'learns' traveler preferences and customizes incentives to meet or exceed energy efficiency targets by empowering individual travelers with information needed to make energy-efficient choices and reducing the complexity required to validate transportation system energy savings. This handout provides an overview of NREL's Connected Traveler project, including graphics, milestones, and contact information.

  16. Climate impacts of parameterized Nordic Sea overflows

    Science.gov (United States)

    Danabasoglu, Gokhan; Large, William G.; Briegleb, Bruce P.

    2010-11-01

    A new overflow parameterization (OFP) of density-driven flows through ocean ridges via narrow, unresolved channels has been developed and implemented in the ocean component of the Community Climate System Model version 4. It represents exchanges from the Nordic Seas and the Antarctic shelves, associated entrainment, and subsequent injection of overflow product waters into the abyssal basins. We investigate the effects of the parameterized Denmark Strait (DS) and Faroe Bank Channel (FBC) overflows on the ocean circulation, showing their impacts on the Atlantic Meridional Overturning Circulation and the North Atlantic climate. The OFP is based on the Marginal Sea Boundary Condition scheme of Price and Yang (1998), but there are significant differences that are described in detail. Two uncoupled (ocean-only) and two fully coupled simulations are analyzed. Each pair consists of one case with the OFP and a control case without this parameterization. In both uncoupled and coupled experiments, the parameterized DS and FBC source volume transports are within the range of observed estimates. The entrainment volume transports remain lower than observational estimates, leading to lower than observed product volume transports. Due to low entrainment, the product and source water properties are too similar. The DS and FBC overflow temperature and salinity properties are in better agreement with observations in the uncoupled case than in the coupled simulation, likely reflecting surface flux differences. The most significant impact of the OFP is the improved North Atlantic Deep Water penetration depth, leading to a much better comparison with the observational data and significantly reducing the chronic, shallow penetration depth bias in level coordinate models. This improvement is due to the deeper penetration of the southward flowing Deep Western Boundary Current. In comparison with control experiments without the OFP, the abyssal ventilation rates increase in the North

  17. A parameterization of cloud droplet nucleation

    International Nuclear Information System (INIS)

    Ghan, S.J.; Chuang, C.; Penner, J.E.

    1993-01-01

    Droplet nucleation is a fundamental cloud process. The number of aerosols activated to form cloud droplets influences not only the number of aerosols scavenged by clouds but also the size of the cloud droplets. Cloud droplet size influences the cloud albedo and the conversion of cloud water to precipitation. Global aerosol models are presently being developed with the intention of coupling with global atmospheric circulation models to evaluate the influence of aerosols and aerosol-cloud interactions on climate. If these and other coupled models are to address issues of aerosol-cloud interactions, the droplet nucleation process must be adequately represented. Here we introduce a droplet nucleation parameterization that offers certain advantages over the popular Twomey (1959) parameterization
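
    For reference, the Twomey (1959) parameterization mentioned above takes the power-law form N = C s^k, with s the peak supersaturation; the constants below are illustrative (roughly maritime air), not values from the paper:

```python
# Sketch of the Twomey (1959) activation law that the new scheme improves
# on: activated droplet number N = C * s**k, with s the supersaturation
# (%) and (C, k) air-mass-dependent constants. Values are illustrative.

def twomey_activated(s_percent, C=100.0, k=0.7):
    """Activated cloud droplet number concentration (cm^-3)."""
    return C * s_percent ** k

print(round(twomey_activated(0.5), 1))   # weaker updraft
print(round(twomey_activated(1.0), 1))   # stronger updraft -> more droplets
```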

  18. Cumulus parameterizations in chemical transport models

    Science.gov (United States)

    Mahowald, Natalie M.; Rasch, Philip J.; Prinn, Ronald G.

    1995-12-01

    Global three-dimensional chemical transport models (CTMs) are valuable tools for studying processes controlling the distribution of trace constituents in the atmosphere. A major uncertainty in these models is the subgrid-scale parametrization of transport by cumulus convection. This study seeks to define the range of behavior of moist convective schemes and point toward more reliable formulations for inclusion in chemical transport models. The emphasis is on deriving convective transport from meteorological data sets (such as those from the forecast centers) which do not routinely include convective mass fluxes. Seven moist convective parameterizations are compared in a column model to examine the sensitivity of the vertical profile of trace gases to the parameterization used in a global chemical transport model. The moist convective schemes examined are the Emanuel scheme [Emanuel, 1991], the Feichter-Crutzen scheme [Feichter and Crutzen, 1990], the inverse thermodynamic scheme (described in this paper), two versions of a scheme suggested by Hack [Hack, 1994], and two versions of a scheme suggested by Tiedtke (one following the formulation used in the ECMWF (European Centre for Medium-Range Weather Forecasting) and ECHAM3 (European Centre and Hamburg Max-Planck-Institut) models [Tiedtke, 1989], and one formulated as in the TM2 (Transport Model-2) model (M. Heimann, personal communication, 1992). These convective schemes vary in the closure used to derive the mass fluxes, as well as the cloud model formulation, giving a broad range of results. In addition, two boundary layer schemes are compared: a state-of-the-art nonlocal boundary layer scheme [Holtslag and Boville, 1993] and a simple adiabatic mixing scheme described in this paper. Three tests are used to compare the moist convective schemes against observations. Although the tests conducted here cannot conclusively show that one parameterization is better than the others, the tests are a good measure of the

  19. Parameterization of MARVELS Spectra Using Deep Learning

    Science.gov (United States)

    Gilda, Sankalp; Ge, Jian; MARVELS

    2018-01-01

    Like many large-scale surveys, the Multi-Object APO Radial Velocity Exoplanet Large-area Survey (MARVELS) was designed to operate at a moderate spectral resolution (~12,000) for efficiency in observing large samples, which makes stellar parameterization difficult due to the high degree of blending of spectral features. Two extant solutions to this issue are to utilize spectral synthesis and to utilize spectral indices [Ghezzi et al. 2014]. While the former is a powerful and tested technique, it can often yield strongly coupled atmospheric parameters and often requires high spectral resolution (Valenti & Piskunov 1996). The latter, though a promising technique utilizing measurements of equivalent widths of spectral indices, has only been employed for FGK dwarfs and subgiants, not red-giant-branch stars, which constitute ~30% of MARVELS targets. In this work, we tackle this problem using a convolutional neural network (CNN). In particular, we train a one-dimensional CNN on appropriately processed PHOENIX synthetic spectra, using supervised training to automatically distinguish the features relevant for the determination of each of the three atmospheric parameters – T_eff, log(g), [Fe/H] – and use the knowledge thus gained by the network to parameterize 849 MARVELS giants. When tested on the synthetic spectra themselves, our estimates of the parameters were consistent to within 11 K, 0.02 dex, and 0.02 dex (in terms of mean absolute errors), respectively. For MARVELS dwarfs, the accuracies are 80 K, 0.16 dex and 0.10 dex, respectively.
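
    The feature-extraction step of a 1-D CNN reduces to repeated discrete convolutions slid along the spectrum; a minimal sketch with a toy spectrum and kernel (not the trained MARVELS network):

```python
def conv1d(signal, kernel):
    """Valid-mode 1-D convolution (implemented as cross-correlation),
    the basic operation a 1-D CNN applies to an input spectrum to pick
    out local features such as absorption lines."""
    n, k = len(signal), len(kernel)
    return [sum(signal[i + j] * kernel[j] for j in range(k))
            for i in range(n - k + 1)]

# A narrow absorption line (dip) in a flat continuum...
spectrum = [1.0, 1.0, 1.0, 0.4, 1.0, 1.0, 1.0]
# ...responds strongly to a matched "line detector" kernel
kernel = [-0.5, 1.0, -0.5]
response = conv1d(spectrum, kernel)
print([round(r, 2) for r in response])  # extremum at the line position
```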

  20. Connecting programmable logic controllers (PLC) to control and data acquisition a comparison of the JET and Wendelstein 7-X approach

    International Nuclear Information System (INIS)

    Hennig, Christine; Kneupner, Klaus; Kinna, David

    2012-01-01

    Highlights: ► We describe two ways of connecting PLCs to fusion control and data acquisition software. ► At W7-X standardization of the PLC type eases the maintenance of the software. ► At JET PLCs are interfaced with a daemon that hides the PLC specific part. ► There is potential to unify the approaches towards a common fusion PLC interface. - Abstract: The use of programmable logic controllers (PLC) for automation of electromechanical processes is an industrial control system technology. It is increasingly used within the fusion community. Traditionally PLC based systems are operated and maintained using proprietary SCADA systems (supervisory control and data acquisition). They are hardly ever integrated with the fusion control and data acquisition systems. An overview of the state of the art in fusion is given in the article. At JET an in-house “black box protocol” approach has been developed to communicate with any external system via a dedicated http based protocol. However, a PLC usually cannot be modified to implement this special protocol. Hence, a software layer has been developed that interfaces a PLC by implementing the PLC specific communication part on one side and the black box protocol part on the other side. The software is completely data driven, i.e. editing the data structure changes the logic accordingly. It can be tested using the web capability of the black box protocol. Multiple PLC types from different vendors are supported, thus multiple protocols to interface the PLC are in use. Depending on the PLC type and available tools it can be necessary to program the PLC accordingly. Wendelstein 7-X uses another approach. For every single PLC a dedicated communication from and to CoDaC is implemented. This communication is projected (programmed) in the PLC and configurable (data driven) on the CoDaC side. The protocol is UDP based and observed via timeout mechanisms. The use of PLCs for Wendelstein 7-X is standardized. Therefore a single
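
    The "data driven" idea described above can be sketched as a table-driven register mapping, where editing the table rather than the code changes the behaviour; the signal names and register layout are invented for illustration, not taken from the JET software:

```python
# Sketch of a data-driven PLC interface layer: the mapping from raw PLC
# registers to named engineering values lives in a data structure, so
# the translation logic never needs to change. All names are invented.

SIGNAL_MAP = [
    # (signal name, register index, scale factor)
    ("coolant_temp_C", 0, 0.1),
    ("valve_open", 1, 1.0),
    ("pressure_kPa", 2, 0.5),
]

def decode(registers):
    """Translate a raw PLC register block into named engineering values."""
    return {name: registers[idx] * scale for name, idx, scale in SIGNAL_MAP}

raw = [215, 1, 404]                 # as read from the PLC
print(decode(raw))
```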

  2. Modelling heterogeneous ice nucleation on mineral dust and soot with parameterizations based on laboratory experiments

    Science.gov (United States)

    Hoose, C.; Hande, L. B.; Mohler, O.; Niemand, M.; Paukert, M.; Reichardt, I.; Ullrich, R.

    2016-12-01

    Between 0 and -37°C, ice formation in clouds is triggered by aerosol particles acting as heterogeneous ice nuclei. At lower temperatures, heterogeneous ice nucleation on aerosols can occur at lower supersaturations than homogeneous freezing of solutes. In laboratory experiments, the ice-nucleating ability of different aerosol species (e.g. desert dusts, soot, biological particles) has been studied in detail and quantified via various theoretical or empirical parameterization approaches. For experiments in the AIDA cloud chamber, we have quantified the ice nucleation efficiency via a temperature- and supersaturation-dependent ice nucleation active site density. Here we present a new empirical parameterization scheme for immersion and deposition ice nucleation on desert dust and soot based on these experimental data. The application of this parameterization to the simulation of cirrus clouds, deep convective clouds and orographic clouds will be shown, including the extension of the scheme to the treatment of freezing of rain drops. The results are compared to those of other heterogeneous ice nucleation schemes. Furthermore, an aerosol-dependent parameterization of contact ice nucleation is presented.
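
    The ice nucleation active site (INAS) density approach mentioned above can be illustrated with a minimal sketch: an exponential fit n_s(T) gives the surface density of active sites, and the frozen fraction follows from the particle surface area. The coefficients below are made-up placeholders, not the AIDA-derived fit.

```python
import math

def inas_density(temp_k, a=-0.5, b=8.9):
    """Hypothetical INAS density n_s(T) in m^-2 (exponential-fit form).

    Coefficients a, b are illustrative, not a published parameterization.
    """
    return math.exp(a * (temp_k - 273.15) + b)

def frozen_fraction(temp_k, particle_area_m2):
    """Fraction of particles with at least one active site:
    f = 1 - exp(-n_s(T) * A), assuming Poisson-distributed sites."""
    return 1.0 - math.exp(-inas_density(temp_k) * particle_area_m2)

# Colder temperatures activate more sites, so the frozen fraction grows.
f_warm = frozen_fraction(253.15, 1e-11)  # -20 C
f_cold = frozen_fraction(243.15, 1e-11)  # -30 C
```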

  3. A test harness for accelerating physics parameterization advancements into operations

    Science.gov (United States)

    Firl, G. J.; Bernardet, L.; Harrold, M.; Henderson, J.; Wolff, J.; Zhang, M.

    2017-12-01

    The process of transitioning advances in the parameterization of sub-grid scale processes from initial idea to implementation is often much quicker than the transition from implementation to use in an operational setting. After all, considerable work must be undertaken by operational centers to fully test, evaluate, and implement new physics. The process is complicated by the scarcity of like-to-like comparisons, the availability of HPC resources, and the "tuning problem," whereby advances in physics schemes are difficult to properly evaluate without first undertaking the expensive and time-consuming process of tuning to other schemes within a suite. To address this process shortcoming, the Global Model TestBed (GMTB), supported by the NWS NGGPS project and undertaken by the Developmental Testbed Center, has developed a physics test harness. It implements the concept of hierarchical testing, where the same code can be tested in model configurations of varying complexity, from single column models (SCM) to fully coupled, cycled global simulations. Developers and users may choose at which level of complexity to engage. Several components of the physics test harness have been implemented, including a SCM and an end-to-end workflow that expands upon the one used at NOAA/EMC to run the GFS operationally, although the testbed components will necessarily morph to coincide with changes to the operational configuration (FV3-GFS). A standard, relatively user-friendly interface known as the Interoperable Physics Driver (IPD) is available for physics developers to connect their codes. This prerequisite exercise allows access to the testbed tools and removes a technical hurdle for potential inclusion into the Common Community Physics Package (CCPP). The testbed offers users the opportunity to conduct like-to-like comparisons between the operational physics suite and new development as well as among multiple developments. GMTB staff have demonstrated use of the testbed through a

  4. The Effectiveness of Problem-Based Learning Approach Based on Multiple Intelligences in Terms of Student’s Achievement, Mathematical Connection Ability, and Self-Esteem

    Science.gov (United States)

    Kartikasari, A.; Widjajanti, D. B.

    2017-02-01

    The aim of this study is to explore the effectiveness of a problem-based learning approach based on multiple intelligences in developing students' achievement, mathematical connection ability, and self-esteem. This is an experimental study whose sample comprised 30 Grade X MIA III students of MAN Yogyakarta III. The learning materials implemented consisted of trigonometry and geometry. For the purpose of this study, the researchers designed an achievement test of 44 multiple-choice questions, 24 on trigonometry and 20 on geometry. They also designed a mathematical connection test of 7 essay questions and a 30-item self-esteem questionnaire. The learning approach was said to be effective if the proportion of students who achieved the KKM on the achievement test, and the proportions who achieved at least a high-category score on both the mathematical connection test and the self-esteem questionnaire, were greater than or equal to 70%. Based on hypothesis testing at the 5% significance level, it can be concluded that the problem-based learning approach based on multiple intelligences was effective in terms of students' achievement, mathematical connection ability, and self-esteem.

  5. Parameterized combinatorial geometry modeling in Moritz

    International Nuclear Information System (INIS)

    Van Riper, K.A.

    2005-01-01

    We describe the use of named variables as surface and solid body coefficients in the Moritz geometry editing program. Variables can also be used as material numbers, cell densities, and transformation values. A variable is defined as a constant or an arithmetic combination of constants and other variables. A variable reference, such as in a surface coefficient, can be a single variable or an expression containing variables and constants. Moritz can read and write geometry models in MCNP and ITS ACCEPT format; support for other codes will be added. The geometry can be saved either with the variables in place, for modifying the models in Moritz, or with the variables evaluated, for use in the transport codes. A program window shows a list of variables and provides fields for editing them. Surface coefficients and other values that use a variable reference are shown in a distinctive style on object property dialogs; associated buttons show fields for editing the reference. We discuss our use of variables in defining geometry models for shielding studies in PET clinics. When a model is parameterized through the use of variables, changes such as room dimensions, shielding layer widths, and cell compositions can be made quickly by changing a few numbers, without requiring knowledge of the input syntax of the transport code or the tedious and error-prone work of recalculating many surface or solid body coefficients. (author)
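
    The variable mechanism described above, constants or arithmetic combinations of other variables that are evaluated before export, can be sketched with a tiny resolver. The expression syntax and the variable names below are hypothetical, not Moritz's actual input format.

```python
# Resolve named geometry variables that may reference other variables,
# then evaluate the expressions so coefficients can be exported numerically.

def resolve(variables):
    """variables: dict name -> expression string using +, -, *, / and names."""
    resolved = {}

    def value_of(name, seen=()):
        if name in resolved:
            return resolved[name]
        if name in seen:
            raise ValueError(f"cyclic definition: {name}")
        expr = variables[name]
        # Evaluate any other variable whose name appears in the expression.
        env = {n: value_of(n, seen + (name,)) for n in variables
               if n != name and n in expr}
        resolved[name] = eval(expr, {"__builtins__": {}}, env)
        return resolved[name]

    for name in variables:
        value_of(name)
    return resolved

# Room dimensions parameterize an outer-wall coefficient, as in the
# shielding example: change room_w once, and outer_w follows.
geom = {"room_w": "400.0", "wall_t": "50.0", "outer_w": "room_w + 2 * wall_t"}
print(resolve(geom)["outer_w"])  # 500.0
```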

  6. Phenomenology of convection-parameterization closure

    Directory of Open Access Journals (Sweden)

    J.-I. Yano

    2013-04-01

    Closure is the problem of defining the convective intensity in a given parameterization. In spite of many years of effort and progress, it is still considered an overall unresolved problem. The present article reviews this problem from phenomenological perspectives. The physical variables that may contribute to defining the convective intensity are listed, and their statistical significance, as identified by observational data analyses, is reviewed. A possibility is discussed for identifying a correct closure hypothesis by performing a linear stability analysis of tropical convectively coupled waves under various closure hypotheses. Individual theoretical issues are then considered from several perspectives. The review also emphasizes that the dominant physical factors controlling convection differ between the tropics and extratropics, as well as between oceanic and land areas. Observational and theoretical analyses, often focused on the tropics, do not necessarily lead to conclusions consistent with operational experience, which is focused on midlatitudes. While we emphasize the importance of the interplay between these observational, theoretical and operational perspectives, we also face the challenge of establishing a solid research framework that is universally applicable. An energy cycle framework is suggested as such a candidate.
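
    As a concrete instance of a closure hypothesis, here is a minimal sketch of the common CAPE-relaxation closure, in which the convective intensity (cloud-base mass flux) is set so that CAPE decays over an adjustment timescale. The constant `alpha` and the values are illustrative placeholders, not tuned model constants.

```python
def cape_relaxation_mass_flux(cape_j_per_kg, tau_s=3600.0, alpha=1.0e7):
    """Cloud-base mass flux from a CAPE-relaxation closure:

        M_b = CAPE / (alpha * tau)

    where alpha (J kg^-1 per unit mass flux) converts mass flux into a
    CAPE consumption rate and tau is the adjustment timescale.
    """
    return cape_j_per_kg / (alpha * tau_s)

# Under this closure, stronger instability implies stronger convection.
m_weak = cape_relaxation_mass_flux(500.0)     # weakly unstable sounding
m_strong = cape_relaxation_mass_flux(2500.0)  # strongly unstable sounding
```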

  7. Stellar Atmospheric Parameterization Based on Deep Learning

    Science.gov (United States)

    Pan, Ru-yang; Li, Xiang-ru

    2017-07-01

    Deep learning is a typical learning method widely studied in the fields of machine learning, pattern recognition, and artificial intelligence. This work investigates the problem of stellar atmospheric parameterization by constructing a deep neural network with five layers, whose node numbers are 3821, 500, 100, 50, and 1, respectively. The proposed scheme is verified on both the real spectra measured by the Sloan Digital Sky Survey (SDSS) and the theoretic spectra computed with the Kurucz's New Opacity Distribution Function (NEWODF) model, to make an automatic estimation of three physical parameters: the effective temperature (Teff), the surface gravitational acceleration (lg g), and the metallicity [Fe/H]. The results show that the stacked autoencoder deep neural network achieves better accuracy for the estimation. On the SDSS spectra, the mean absolute errors (MAEs) are 79.95 K for Teff, 0.0058 for lg(Teff/K), 0.1706 for lg(g/(cm·s^-2)), and 0.1294 dex for [Fe/H]; on the theoretic spectra, the MAEs are 15.34 K for Teff, 0.0011 for lg(Teff/K), 0.0214 for lg(g/(cm·s^-2)), and 0.0121 dex for [Fe/H].
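
    The 3821-500-100-50-1 feed-forward architecture described above can be sketched as a plain NumPy forward pass with random weights, one output parameter per network. This is an illustrative shape check, not the trained SDSS model; the tanh/linear activations are one plausible choice, since the abstract does not specify them.

```python
import numpy as np

# Input spectrum (3821 flux values) -> one stellar parameter (e.g. Teff).
LAYER_SIZES = [3821, 500, 100, 50, 1]

rng = np.random.default_rng(0)
weights = [rng.normal(0, 0.01, size=(m, n))
           for m, n in zip(LAYER_SIZES[:-1], LAYER_SIZES[1:])]
biases = [np.zeros(n) for n in LAYER_SIZES[1:]]

def forward(spectrum):
    """Forward pass: tanh hidden units, linear output layer."""
    h = spectrum
    for w, b in list(zip(weights, biases))[:-1]:
        h = np.tanh(h @ w + b)
    return h @ weights[-1] + biases[-1]

batch = rng.normal(size=(4, 3821))   # four normalized input spectra
params = np.array([forward(s) for s in batch])
print(params.shape)  # (4, 1)
```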

  8. On Euclidean connections for su(1,1), suq(1,1) and the algebraic approach to scattering

    International Nuclear Information System (INIS)

    Ionescu, R.A.

    1994-11-01

    We obtain a general Euclidean connection for the su(1,1) and suq(1,1) algebras. Our Euclidean connection allows an algebraic derivation of the S matrix. These algebraic S matrices reduce to the known ones in suitable circumstances. Also, we obtain a map between su(1,1) and suq(1,1) representations. (author). 8 refs

  9. Carbody structural lightweighting based on implicit parameterized model

    Science.gov (United States)

    Chen, Xin; Ma, Fangwu; Wang, Dengfeng; Xie, Chen

    2014-05-01

    Most recent research on carbody lightweighting has focused on substitute materials and new processing technologies rather than structures. However, new materials and processing techniques inevitably lead to higher costs, and material substitution and processing lightweighting must still be realized through the body's structural profiles and locations. In conventional lightweight optimization the workload is huge: model modifications involve heavy manual work and typically lead to a large number of iterative calculations. As a new technique in carbody lightweighting, implicit parameterization is used in this paper to optimize the carbody structure and improve the material utilization rate. Implicit parameterized structural modeling enables automatic modification and rapid multidisciplinary design optimization (MDO) of the carbody structure, which is impossible with a traditional, non-parameterized structural finite element method (FEM) model. The SFE parameterized model is built in accordance with the car structural FE model in the concept development stage and is validated against structural performance data. The validated SFE parameterized model can then rapidly and automatically generate FE models and evaluate different groups of design variables in the integrated MDO loop. The body-in-white (BIW) lightweighting result after the optimization rounds reveals that the implicit parameterized model makes automatic MDO feasible and can significantly improve the computational efficiency of carbody structural lightweighting. This paper proposes an integrated method of implicit parameterized modeling and MDO, which has obvious practical advantages and industrial significance for carbody structural lightweighting design.

  10. On quaternion based parameterization of orientation in computer vision and robotics

    Directory of Open Access Journals (Sweden)

    G. Terzakis

    2014-04-01

    The problem of orientation parameterization for applications in computer vision and robotics is examined in detail herein. The necessary intuition and formulas are provided for direct practical use in any existing algorithm that seeks to minimize a cost function in an iterative fashion. Two distinct parameterization schemes are analyzed: the first is the traditional axis-angle approach, while the second employs stereographic projection from the unit quaternion sphere to the 3D real projective space. Performance measurements are taken and the two approaches are compared. The results suggest several benefits in the use of stereographic projection, including rational expressions in the rotation matrix derivatives, improved accuracy, robustness to random starting points and accelerated convergence.
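
    A minimal sketch of the stereographic-projection parameterization follows: unit quaternions are projected from the pole w = -1 onto R^3 and back. Conventions (which pole, scaling) vary across papers, so this is one self-consistent choice, not necessarily the one used in the article.

```python
import numpy as np

def quat_to_stereo(q):
    """Project unit quaternion q = (w, x, y, z) from the pole w = -1
    onto R^3: p = (x, y, z) / (1 + w). Undefined at w = -1 itself."""
    w, v = q[0], q[1:]
    return v / (1.0 + w)

def stereo_to_quat(p):
    """Inverse projection: map p in R^3 back to the unit quaternion sphere."""
    s = p @ p
    w = (1.0 - s) / (1.0 + s)
    v = 2.0 * p / (1.0 + s)
    return np.concatenate(([w], v))

q = np.array([0.5, 0.5, 0.5, 0.5])   # a unit quaternion
p = quat_to_stereo(q)                 # 3 unconstrained parameters
q_back = stereo_to_quat(p)            # lands back on the unit sphere
print(np.allclose(q, q_back))  # True
```

The round trip shows why this is attractive for iterative optimization: the three components of p are unconstrained, yet every p maps to a valid unit quaternion.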

  11. Exploring the potential of machine learning to break deadlock in convection parameterization

    Science.gov (United States)

    Pritchard, M. S.; Gentine, P.

    2017-12-01

    We explore the potential of modern machine learning tools (via TensorFlow) to replace the parameterization of deep convection in climate models. Our strategy begins by generating a large (~1 TB) training dataset from time-step-level (30-min) output harvested from a one-year integration of a zonally symmetric, uniform-SST aquaplanet configuration of the SuperParameterized Community Atmosphere Model (SPCAM). We harvest the inputs and outputs connecting each of SPCAM's 8,192 embedded cloud-resolving model (CRM) arrays to its host climate model's arterial thermodynamic state variables, yielding 143M independent training instances. We demonstrate that this dataset is sufficiently large to induce preliminary convergence for neural network prediction of the desired outputs of SP, i.e. CRM-mean convective heating and moistening profiles. The sensitivity of the machine learning convergence to the nuances of the TensorFlow implementation is discussed, as well as results from pilot tests of the neural network operating inline within SPCAM as a replacement for the (super)parameterization of convection.

  12. An Evaluation of Lightning Flash Rate Parameterizations Based on Observations of Colorado Storms during DC3

    Science.gov (United States)

    Basarab, B.; Fuchs, B.; Rutledge, S. A.

    2013-12-01

    to observed flash rates. For the 6 June storm, a preliminary analysis of aircraft observations of storm inflow and outflow is presented in order to place flash rates (and other lightning statistics) in the context of storm chemistry. An approach to a possibly improved LNOx parameterization scheme using different lightning metrics such as flash area will be discussed.

  13. Resting-State Functional Connectivity in Generalized Anxiety Disorder and Social Anxiety Disorder: Evidence for a Dimensional Approach.

    Science.gov (United States)

    Rabany, Liron; Diefenbach, Gretchen J; Bragdon, Laura B; Pittman, Brian P; Zertuche, Luis; Tolin, David F; Goethe, John W; Assaf, Michal

    2017-06-01

    Generalized anxiety disorder (GAD) and social anxiety disorder (SAD) are currently considered distinct diagnostic categories. Accumulating data suggest that the study of anxiety disorders may benefit from dimensional conceptualizations. One such dimension of shared dysfunction is emotion regulation (ER). The current study evaluated dimensional (ER) and categorical (diagnosis) neural correlates of resting-state functional connectivity (rsFC) in participants with GAD and SAD and healthy controls (HC). Functional magnetic resonance imaging (fMRI) rsFC was estimated between all regions of the default mode network (DMN), the salience network (SN), and the bilateral amygdala (N = 37: HC-19; GAD-10; SAD-8). Thereafter, rsFC was predicted by both ER (using the Difficulties in Emotion Regulation Scale [DERS]) and diagnosis (DSM-5) within a single unified analysis of covariance (ANCOVA). For the ER dimension, there was a significant association between impaired ER abilities and anticorrelated rsFC of the amygdala and DMN (L.amygdala-ACC: p = 0.011, beta = -0.345), as well as the amygdala and SN (L.amygdala-posterior cingulate cortex [PCC]: p = 0.032, beta = -0.409). Diagnostic status was significantly associated with rsFC differences between the SAD and HC groups, both within the DMN (PCC-MPFC: p = 0.009) and between the DMN and SN (R.LP-ACC: p = 0.010). Although preliminary, our results exemplify the potential contribution of the dimensional approach to the study of GAD and SAD and support a combined categorical and dimensional model of the rsFC of anxiety disorders.

  14. An explicit parameterization for casting constraints in gradient driven topology optimization

    DEFF Research Database (Denmark)

    Gersborg, Allan Roulund; Andreasen, Casper Schousboe

    2011-01-01

    From a practical point of view it is often desirable to limit the complexity of a topology optimization design such that casting/milling type manufacturing techniques can be applied. In the context of gradient driven topology optimization, this work studies how castable designs can be obtained by use of a Heaviside design parameterization in a specified casting direction. This reduces the number of design variables considerably, and the approach is simple to implement.
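
    The casting parameterization described above can be sketched in a minimal form: one design variable per column sets a fill height along the casting direction, and a smoothed Heaviside converts it to element densities, so the material distribution has no enclosed voids that would block mold removal. The parameter names and values are illustrative.

```python
import numpy as np

def smoothed_heaviside(x, beta=10.0):
    """Differentiable step: ~0 for x < 0, ~1 for x > 0; beta sets sharpness."""
    return 0.5 * (1.0 + np.tanh(beta * x))

def column_densities(height_var, z_centers, beta=10.0):
    """Element densities in one column along the casting direction:
    material fills from z = 0 up to z = height_var."""
    return smoothed_heaviside(height_var - z_centers, beta)

z = np.linspace(0.05, 0.95, 10)   # element centers along casting direction
rho = column_densities(0.6, z)    # single design variable for this column
# Density decreases monotonically with height: castable, no internal voids.
print(np.all(np.diff(rho) <= 0))  # True
```

One height variable replaces ten element densities here, which is the design-variable reduction the abstract mentions.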

  15. An energetically consistent vertical mixing parameterization in CCSM4

    DEFF Research Database (Denmark)

    Nielsen, Søren Borg; Jochum, Markus; Eden, Carsten

    2018-01-01

    An energetically consistent, stratification-dependent vertical mixing parameterization is implemented in the Community Climate System Model 4 and forced with energy conversion from the barotropic tides to internal waves. The structures of the resulting dissipation and diffusivity fields are compared… The simulated ocean state, however, depends greatly on the details of the vertical mixing parameterizations, where the new energetically consistent parameterization results in low thermocline diffusivities and a sharper and shallower thermocline. It is also investigated if the ocean state is more sensitive to a change in forcing…

  16. Automatic Generation of Symbolic Model for Parameterized Synchronous Systems

    Institute of Scientific and Technical Information of China (English)

    Wei-Wen Xu

    2004-01-01

    With the purpose of making the verification of parameterized systems more general and easier, this paper proposes a new and intuitive language, PSL (Parameterized-system Specification Language), to specify a class of parameterized synchronous systems. From a PSL script, an automatic method is proposed to generate a constraint-based symbolic model. The model can concisely and symbolically represent collections of global states by counting the number of processes in a given state. Moreover, a theorem is proved showing that there is a simulation relation between the original system and its symbolic model. Since abstraction and symbolic techniques are exploited in the symbolic model, the state-explosion problem of traditional verification methods is efficiently avoided. Based on the proposed symbolic model, a reachability analysis procedure has been implemented in ANSI C++ on a UNIX platform. A complete tool for verifying parameterized synchronous systems is thus obtained and has been tested on several cases. The experimental results show that the method is satisfactory.
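
    The counting abstraction described above, representing a global state by the number of processes in each local state, can be sketched as follows. The local-state names and the transition are hypothetical examples, not PSL syntax.

```python
from collections import Counter

# Counter abstraction: a global state of N identical processes is the
# multiset of their local states, stored as counts. All permutations of
# the processes collapse into one abstract state, avoiding state explosion.

def abstract_state(local_states):
    return Counter(local_states)

def fire(counts, src, dst):
    """One (anonymous) process moves from local state src to dst."""
    if counts[src] == 0:
        raise ValueError("no process in state " + src)
    new = Counter(counts)
    new[src] -= 1
    new[dst] += 1
    return new

# Five processes: the abstraction does not track which process is which.
g = abstract_state(["idle", "idle", "idle", "wait", "crit"])
g2 = fire(g, "wait", "crit")
print(g2["crit"], g2["wait"])  # 2 0
```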

  17. New Parameterizations for Neutral and Ion-Induced Sulfuric Acid-Water Particle Formation in Nucleation and Kinetic Regimes

    Science.gov (United States)

    Määttänen, Anni; Merikanto, Joonas; Henschel, Henning; Duplissy, Jonathan; Makkonen, Risto; Ortega, Ismael K.; Vehkamäki, Hanna

    2018-01-01

    We have developed new parameterizations of electrically neutral homogeneous and ion-induced sulfuric acid-water particle formation for large ranges of environmental conditions, based on an improved model that has been validated against a particle formation rate data set produced by Cosmics Leaving OUtdoor Droplets (CLOUD) experiments at the European Organization for Nuclear Research (CERN). The model uses a thermodynamically consistent version of the Classical Nucleation Theory normalized using quantum chemical data. Unlike the earlier parameterizations for H2SO4-H2O nucleation, the model is applicable to extremely dry conditions where the one-component sulfuric acid limit is approached. Parameterizations are presented for the critical cluster sulfuric acid mole fraction, the critical cluster radius, the total number of molecules in the critical cluster, and the particle formation rate. If the critical cluster contains only one sulfuric acid molecule, a simple formula for kinetic particle formation can be used; this threshold has also been parameterized. The parameterization for electrically neutral particle formation is valid for the following ranges: temperatures 165-400 K, sulfuric acid concentrations 10^4-10^13 cm^-3, and relative humidities 0.001-100%. The ion-induced particle formation parameterization is valid for temperatures 195-400 K, sulfuric acid concentrations 10^4-10^16 cm^-3, and relative humidities 10^-5-100%. The new parameterizations are thus applicable for the full range of conditions in the Earth's atmosphere relevant for binary sulfuric acid-water particle formation, including both tropospheric and stratospheric conditions. They are also suitable for describing particle formation in the atmosphere of Venus.

  18. Elastic orthorhombic anisotropic parameter inversion: An analysis of parameterization

    KAUST Repository

    Oh, Juwon; Alkhalifah, Tariq Ali

    2016-01-01

    The resolution of a multiparameter full-waveform inversion (FWI) is highly influenced by the parameterization used in the inversion algorithm, as well as by the data quality and the sensitivity of the data to the elastic parameters, because the scattering patterns of the partial derivative wavefields (PDWs) vary with parameterization. For this reason, it is important to identify an optimal parameterization for elastic orthorhombic FWI by analyzing the radiation patterns of the PDWs for many reasonable model parameterizations. We have promoted a parameterization that allows for the separation of the anisotropic properties in the radiation patterns. The central parameter of this parameterization is the horizontal P-wave velocity, with an isotropic scattering potential, influencing the data at all scales and directions. This parameterization decouples the influence of the scattering potential given by the P-wave velocity perturbation from the polar changes described by two dimensionless parameter perturbations and from the azimuthal variation given by three additional dimensionless parameter perturbations. In addition, the scattering potentials of the P-wave velocity perturbation are also decoupled from the elastic influences given by one S-wave velocity and two additional dimensionless parameter perturbations. The vertical S-wave velocity is chosen because it offers the best resolution, obtained from S-wave reflections and converted waves, and has little influence on P-waves in conventional surface seismic acquisition. The influence of the density on observed data can be absorbed by one anisotropic parameter that has a similar radiation pattern. The additional seven dimensionless parameters describe the polar and azimuthal variations in the P- and S-waves that we may acquire, with some of the parameters having distinct influences on the recorded data at the earth's surface. These characteristics of the new parameterization offer the potential for a multistage inversion from high symmetry

  19. Elastic orthorhombic anisotropic parameter inversion: An analysis of parameterization

    KAUST Repository

    Oh, Juwon

    2016-09-15

    The resolution of a multiparameter full-waveform inversion (FWI) is highly influenced by the parameterization used in the inversion algorithm, as well as by the data quality and the sensitivity of the data to the elastic parameters, because the scattering patterns of the partial derivative wavefields (PDWs) vary with parameterization. For this reason, it is important to identify an optimal parameterization for elastic orthorhombic FWI by analyzing the radiation patterns of the PDWs for many reasonable model parameterizations. We have promoted a parameterization that allows for the separation of the anisotropic properties in the radiation patterns. The central parameter of this parameterization is the horizontal P-wave velocity, with an isotropic scattering potential, influencing the data at all scales and directions. This parameterization decouples the influence of the scattering potential given by the P-wave velocity perturbation from the polar changes described by two dimensionless parameter perturbations and from the azimuthal variation given by three additional dimensionless parameter perturbations. In addition, the scattering potentials of the P-wave velocity perturbation are also decoupled from the elastic influences given by one S-wave velocity and two additional dimensionless parameter perturbations. The vertical S-wave velocity is chosen because it offers the best resolution, obtained from S-wave reflections and converted waves, and has little influence on P-waves in conventional surface seismic acquisition. The influence of the density on observed data can be absorbed by one anisotropic parameter that has a similar radiation pattern. The additional seven dimensionless parameters describe the polar and azimuthal variations in the P- and S-waves that we may acquire, with some of the parameters having distinct influences on the recorded data at the earth's surface. These characteristics of the new parameterization offer the potential for a multistage inversion from high symmetry

  20. Impact of model structure and parameterization on Penman-Monteith type evaporation models

    KAUST Repository

    Ershadi, A.

    2015-04-12

    sites, where the simpler aerodynamic resistance approach of Mu et al. (2011) showed improved performance. Overall, the results illustrate the sensitivity of Penman-Monteith type models to model structure, parameterization choice and biome type. A particular challenge in flux estimation relates to developing robust and broadly applicable model formulations. With many choices available for use, providing guidance on the most appropriate scheme to employ is required to advance approaches for routine global scale flux estimates, undertake hydrometeorological assessments or develop hydrological forecasting tools, amongst many other applications. In such cases, a multi-model ensemble or biome-specific tiled evaporation product may be an appropriate solution, given the inherent variability in model and parameterization choice that is observed within single product estimates.
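
    For reference, the Penman-Monteith combination equation shared by these models can be evaluated directly; the parameterization of the aerodynamic and surface resistances (r_a, r_s) is exactly what differs between the schemes compared in the study. The values below are illustrative midday numbers, not data from the paper.

```python
def penman_monteith(rn, g, delta, rho_a, cp, vpd, ra, rs, gamma):
    """Latent heat flux LE (W m^-2) from the Penman-Monteith equation:

        LE = (delta*(Rn - G) + rho_a*cp*VPD/ra) / (delta + gamma*(1 + rs/ra))
    """
    return (delta * (rn - g) + rho_a * cp * vpd / ra) / (
        delta + gamma * (1.0 + rs / ra))

# Illustrative midday values over a vegetated surface.
le = penman_monteith(
    rn=400.0,      # net radiation, W m^-2
    g=40.0,        # ground heat flux, W m^-2
    delta=145.0,   # slope of saturation vapour pressure curve, Pa K^-1
    rho_a=1.2,     # air density, kg m^-3
    cp=1004.0,     # specific heat of air, J kg^-1 K^-1
    vpd=1500.0,    # vapour pressure deficit, Pa
    ra=50.0,       # aerodynamic resistance, s m^-1
    rs=70.0,       # surface resistance, s m^-1
    gamma=66.0,    # psychrometric constant, Pa K^-1
)
```

With these inputs LE comes out near 290 W m^-2; halving `ra` or doubling `rs` changes it substantially, which is why resistance parameterization dominates model skill.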

  1. Spectral cumulus parameterization based on cloud-resolving model

    Science.gov (United States)

    Baba, Yuya

    2018-02-01

    We have developed a spectral cumulus parameterization using a cloud-resolving model. It includes a new parameterization of the entrainment rate, derived from analysis of the cloud properties obtained from the cloud-resolving model simulation and valid for both shallow and deep convection. The new scheme was examined in a single-column model experiment and compared with the existing parameterization of Gregory (2001, Q J R Meteorol Soc 127:53-72) (GR scheme). The results showed that the GR scheme simulated shallower and more diluted convection than the new scheme. To further validate the physical performance of the parameterizations, Atmospheric Model Intercomparison Project (AMIP) experiments were performed, and the results were compared with reanalysis data. The new scheme performed better than the GR scheme in terms of the mean state and variability of the atmospheric circulation: it reduced the positive precipitation bias in the western Pacific region and the positive bias of outgoing shortwave radiation over the ocean. The new scheme also simulated more realistic convectively coupled equatorial waves and Madden-Julian oscillation. These improvements derive from the modified entrainment-rate parameterization, which suppresses an excessive increase of entrainment and thus an excessive increase of low-level clouds.

  2. Test Driven Development of a Parameterized Ice Sheet Component

    Science.gov (United States)

    Clune, T.

    2011-12-01

    Test driven development (TDD) is a software development methodology that offers many advantages over traditional approaches, including reduced development and maintenance costs, improved reliability, and superior design quality. Although TDD is widely accepted in many software communities, its suitability for scientific software is largely undemonstrated and warrants a degree of skepticism. Indeed, numerical algorithms pose several challenges to unit testing in general, and TDD in particular. Among these challenges are the need for simple, non-redundant closed-form expressions to compare against the results obtained from the implementation, as well as realistic error estimates. The necessity of serial and parallel performance raises additional concerns for many scientific applications. In previous work I demonstrated that TDD performed well for the development of a relatively simple numerical model that simulates the growth of snowflakes, but the results were anecdotal and of limited relevance to the far more complex software components typical of climate models. This investigation has now been extended by successfully applying TDD to the implementation of a substantial portion of a new parameterized ice sheet component within a full climate model. After a brief introduction to TDD, I will present techniques that address some of the obstacles encountered with numerical algorithms. I will conclude with some quantitative and qualitative comparisons against climate components developed in a more traditional manner.
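
    The kind of unit test discussed above, checking a numerical routine against a closed-form expression with a tolerance derived from a realistic error estimate, can be sketched as follows (a generic example, not the ice sheet component's tests):

```python
import math

def trapezoid(f, a, b, n):
    """Composite trapezoid rule; the error is O(h^2) for smooth f."""
    h = (b - a) / n
    total = 0.5 * (f(a) + f(b)) + sum(f(a + i * h) for i in range(1, n))
    return h * total

# TDD-style check: the integral of sin on [0, pi] has the closed form 2.0.
# The tolerance comes from the method's known second-order error bound,
# |error| <= (b - a) * h^2 / 12 * max|f''| < h^2 here, not from guesswork.
n = 100
h = math.pi / n
approx = trapezoid(math.sin, 0.0, math.pi, n)
assert abs(approx - 2.0) < h * h
```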

  3. The nuisance of nuisance regression: spectral misspecification in a common approach to resting-state fMRI preprocessing reintroduces noise and obscures functional connectivity.

    Science.gov (United States)

    Hallquist, Michael N; Hwang, Kai; Luna, Beatriz

    2013-11-15

    Recent resting-state functional connectivity fMRI (RS-fcMRI) research has demonstrated that head motion during fMRI acquisition systematically influences connectivity estimates despite bandpass filtering and nuisance regression, which are intended to reduce such nuisance variability. We provide evidence that the effects of head motion and other nuisance signals are poorly controlled when the fMRI time series are bandpass-filtered but the regressors are unfiltered, resulting in the inadvertent reintroduction of nuisance-related variation into frequencies previously suppressed by the bandpass filter, as well as suboptimal correction for noise signals in the frequencies of interest. This is important because many RS-fcMRI studies, including some focusing on motion-related artifacts, have applied this approach. In two cohorts of individuals (n=117 and 22) who completed resting-state fMRI scans, we found that the bandpass-regress approach consistently overestimated functional connectivity across the brain, typically on the order of r=.10-.35, relative to a simultaneous bandpass filtering and nuisance regression approach. Inflated correlations under the bandpass-regress approach were associated with head motion and cardiac artifacts. Furthermore, distance-related differences in the association of head motion and connectivity estimates were much weaker for the simultaneous filtering approach. We recommend that future RS-fcMRI studies ensure that the frequencies of nuisance regressors and fMRI data match prior to nuisance regression, and we advocate a simultaneous bandpass filtering and nuisance regression strategy that better controls nuisance-related variability. Copyright © 2013 Elsevier Inc. All rights reserved.
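
    The simultaneous strategy recommended above, filtering the nuisance regressors with the same bandpass as the fMRI data before regression, can be sketched as follows. The band (0.009-0.08 Hz), the TR, and the use of an ideal FFT filter in place of the Butterworth filters typically used are illustrative assumptions.

```python
import numpy as np

def bandpass(x, low=0.009, high=0.08, fs=0.5):
    """Ideal (FFT) bandpass along the time axis; fs = 1/TR in Hz."""
    freqs = np.fft.rfftfreq(x.shape[0], d=1.0 / fs)
    keep = (freqs >= low) & (freqs <= high)
    spec = np.fft.rfft(x, axis=0)
    spec[~keep] = 0.0
    return np.fft.irfft(spec, n=x.shape[0], axis=0)

def clean_timeseries(data, nuisance):
    """Simultaneous approach: bandpass BOTH the data and the nuisance
    regressors, then regress the filtered nuisance out of the filtered data.
    Filtering only the data (the 'bandpass-regress' approach) lets the
    regression step reintroduce out-of-band nuisance variance."""
    fdata = bandpass(data)
    fnuis = bandpass(nuisance)
    design = np.column_stack([fnuis, np.ones(len(fdata))])
    beta, *_ = np.linalg.lstsq(design, fdata, rcond=None)
    return fdata - design @ beta

rng = np.random.default_rng(0)
data = rng.normal(size=(200, 10))    # 200 volumes, 10 regions
motion = rng.normal(size=(200, 6))   # six motion parameters
residuals = clean_timeseries(data, motion)
```

Connectivity estimates are then computed from `residuals`, whose frequency content matches that of the regressors used to clean them.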

  4. Parameterization of a Hydrological Model for a Large, Ungauged Urban Catchment

    Directory of Open Access Journals (Sweden)

    Gerald Krebs

    2016-10-01

Full Text Available Urbanization leads to the replacement of natural areas by impervious surfaces and affects the catchment hydrological cycle, with adverse environmental impacts. Low impact development (LID) tools that mimic the hydrological processes of natural areas have been developed and applied to mitigate these impacts. Hydrological simulations are one possibility to evaluate LID performance, but the associated small-scale processes require a highly spatially distributed and explicit modeling approach. However, detailed data for model development are often not available for large urban areas, hampering the model parameterization. In this paper we propose a methodology to parameterize a hydrological model for a large, ungauged urban area while maintaining both a detailed surface discretization for direct parameter manipulation for LID simulation and a firm reliance on available data for model conceptualization. Catchment delineation was based on a high-resolution digital elevation model (DEM), and model parameterization relied on a novel model regionalization approach. The impact of automated delineation and model regionalization on simulation results was evaluated for three monitored study catchments (5.87–12.59 ha). The simulated runoff peak was most sensitive to accurate catchment discretization and calibration, while both the runoff volume and the fit of the hydrograph were less affected.

  5. Modeling dispersal and spatial connectivity of macro-invertebrates in Danish waters: An agent-based approach

    DEFF Research Database (Denmark)

    Pastor Rollan, Ane; Mariani, Patrizio; Erichsen, Anders Chr.

    2018-01-01

...and connectivity in marine populations as they can have multiple uses in the conservation and management of marine ecosystems. Here we investigate whether the open Kattegat, at the entrance to the Baltic Sea, is the main source of recruitment to the benthos in associated estuaries and coastal sites through export...... behavior) and simulate dispersal processes within the muddy bottom habitats to derive recruitment rates and potential donor populations leading to population connectivity patterns on each site, one bay and two Danish fjords. We then use our recruitment results in the bay to compare them with field data...... in the study area. Our results show the importance of an integrated modeling tool combining ocean circulation and biological traits to obtain a detailed description of dispersal and connectivity of the macro-invertebrate community in the area, which can provide a more accurate baseline to manage marine...

  6. Modelling runoff and erosion for a semi-arid catchment using a multi-scale approach based on hydrological connectivity

    NARCIS (Netherlands)

    Lesschen, J.P.; Schoorl, J.M.; Cammeraat, L.H.

    2009-01-01

    Runoff and erosion processes are often non-linear and scale dependent, which complicate runoff and erosion modelling at the catchment scale. One of the reasons for scale dependency is the influence of sinks, i.e. areas of infiltration and sedimentation, which lower hydrological connectivity and

  7. The package PAKPDF 1.1 of parameterizations of parton distribution functions in the proton

    International Nuclear Information System (INIS)

    Charchula, K.

    1992-01-01

A FORTRAN package containing parameterizations of parton distribution functions (PDFs) in the proton is described. It allows easy access to the PDFs provided by several recent parameterizations and to some parameters characterizing a particular parameterization. Some comments on the use of the various parameterizations are also included. (orig.)

  8. Initial Validation for the Estimation of Resting-State fMRI Effective Connectivity by a Generalization of the Correlation Approach

    Directory of Open Access Journals (Sweden)

    Nan Xu

    2017-05-01

Full Text Available Resting-state functional MRI (rs-fMRI) is widely used to noninvasively study human brain networks. Network functional connectivity is often estimated by calculating the timeseries correlation between blood-oxygen-level dependent (BOLD) signals from different regions of interest (ROIs). However, standard correlation cannot characterize the direction of information flow between regions. In this paper, we introduce and test a new concept, prediction correlation, to estimate effective connectivity in functional brain networks from rs-fMRI. In this approach, the correlation between two BOLD signals is replaced by a correlation between one BOLD signal and a prediction of this signal via a causal system driven by another BOLD signal. Three validations are described: (1) Prediction correlation performed well on simulated data where the ground truth was known, and outperformed four other methods. (2) On simulated data designed to display the "common driver" problem, prediction correlation did not introduce false connections between non-interacting driven ROIs. (3) On experimental data, prediction correlation recovered the previously identified network organization of the human brain. Prediction correlation scales well to work with hundreds of ROIs, enabling it to assess whole-brain interregional connectivity at the single-subject level. These results provide an initial validation that prediction correlation can capture the direction of information flow and estimate the duration of extended temporal delays in information flow between ROIs based on the BOLD signal. This approach not only maintains the high sensitivity to network connectivity provided by correlation analysis, but also performs well in the estimation of causal information flow in the brain.
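A minimal sketch of the idea behind prediction correlation, assuming a simple causal FIR predictor fit by ordinary least squares (the authors' actual causal-system estimator may differ): the standard correlation corr(x, y) is replaced by corr(y, ŷ), where ŷ is predicted from past samples of x only.

```python
import numpy as np

rng = np.random.default_rng(1)

def prediction_correlation(x, y, order=5):
    """Correlate y with a causal FIR prediction of y driven by past samples
    of x; coefficients fit by OLS. Illustrative sketch, not the paper's
    exact estimator."""
    n = len(x)
    # Design matrix of lagged x values: column k holds x delayed by k+1 samples
    X = np.column_stack(
        [np.r_[np.zeros(k + 1), x[: n - k - 1]] for k in range(order)]
    )
    h, *_ = np.linalg.lstsq(X, y, rcond=None)
    y_hat = X @ h                      # causal prediction of y from x alone
    return np.corrcoef(y, y_hat)[0, 1]

# x drives y with a 2-sample delay; z is independent noise
x = rng.standard_normal(500)
y = np.r_[np.zeros(2), x[:-2]] + 0.1 * rng.standard_normal(500)
z = rng.standard_normal(500)

print(prediction_correlation(x, y) > 0.9)   # x predicts y: high correlation
print(prediction_correlation(x, z) < 0.5)   # x does not predict z
```

Because the predictor only sees past samples of the driving signal, a high value indicates directed (causal) influence rather than mere co-fluctuation.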

  9. Seizure-Onset Mapping Based on Time-Variant Multivariate Functional Connectivity Analysis of High-Dimensional Intracranial EEG: A Kalman Filter Approach.

    Science.gov (United States)

    Lie, Octavian V; van Mierlo, Pieter

    2017-01-01

The visual interpretation of intracranial EEG (iEEG) is the standard method used in complex epilepsy surgery cases to map the regions of seizure onset targeted for resection. Still, visual iEEG analysis is labor-intensive and biased due to interpreter dependency. Multivariate parametric functional connectivity measures using adaptive autoregressive (AR) modeling of the iEEG signals based on the Kalman filter algorithm have been used successfully to localize electrographic seizure onsets. Due to their high computational cost, these methods have been applied to a limited number of iEEG time-series. Here we tested two Kalman filter implementations, a well-known multivariate adaptive AR model (Arnold et al. 1998) and a simplified, computationally efficient derivation of it, for their potential application to connectivity analysis of high-dimensional (up to 192 channels) iEEG data. When used on simulated seizures together with a multivariate connectivity estimator, the partial directed coherence, the two AR models were compared for their ability to reconstitute the designed seizure signal connections from noisy data. Next, focal seizures from iEEG recordings (73-113 channels) in three patients rendered seizure-free after surgery were mapped with the outdegree, a graph-theory index of outward directed connectivity. Simulation results indicated high levels of mapping accuracy for the two models in the presence of low-to-moderate noise cross-correlation. Accordingly, both AR models correctly mapped the real seizure onset to the resection volume. This study supports the possibility of conducting fully data-driven multivariate connectivity estimations on high-dimensional iEEG datasets using the Kalman filter approach.
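The adaptive-AR idea can be illustrated with a scalar Kalman filter tracking a single time-varying AR(1) coefficient. This is a deliberately reduced sketch: the models in the paper are multivariate, and the noise variances q and r below are hypothetical tuning values.

```python
import numpy as np

rng = np.random.default_rng(2)

def kalman_ar1(y, q=1e-5, r=1.0):
    """Track a time-varying AR(1) coefficient a_t in y_t = a_t*y_{t-1} + e_t.
    The state a_t is modeled as a random walk with variance q; the
    innovation e_t has variance r. Scalar sketch of the adaptive-AR idea."""
    a, p = 0.0, 1.0                  # state estimate and its variance
    est = np.zeros(len(y))
    for t in range(1, len(y)):
        p += q                       # predict: random-walk state transition
        h = y[t - 1]                 # observation "matrix" is the lagged sample
        k = p * h / (h * h * p + r)  # Kalman gain
        a += k * (y[t] - h * a)      # correct with the innovation
        p *= 1.0 - k * h
        est[t] = a
    return est

# AR(1) signal whose coefficient jumps from 0.3 to 0.9 mid-recording,
# loosely mimicking a connectivity change at seizure onset
n = 4000
a_true = np.where(np.arange(n) < n // 2, 0.3, 0.9)
y = np.zeros(n)
for t in range(1, n):
    y[t] = a_true[t] * y[t - 1] + rng.standard_normal()

est = kalman_ar1(y)
print("pre-jump ~0.3, post-jump ~0.9:", est[n // 2 - 1].round(2), est[-1].round(2))
```

The same filter structure, with vector states and matrix gains, underlies the multivariate adaptive AR models compared in the study.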

  10. A whole-brain computational modeling approach to explain the alterations in resting-state functional connectivity during progression of Alzheimer's disease

    Directory of Open Access Journals (Sweden)

    Murat Demirtaş

    2017-01-01

Full Text Available Alzheimer's disease (AD) is the most common dementia, with dramatic consequences. Research in structural and functional neuroimaging has shown altered brain connectivity in AD. In this study, we investigated the whole-brain resting-state functional connectivity (FC) of subjects with preclinical Alzheimer's disease (PAD), mild cognitive impairment due to AD (MCI), and mild dementia due to AD, the impact of APOE4 carriership, as well as relations to variations in core AD CSF biomarkers. Whole-brain synchronization decreased monotonically during the course of disease progression. Furthermore, in AD patients we found widespread significant decreases in FC strengths, particularly in brain regions with high global connectivity. We employed a whole-brain computational modeling approach to study the mechanisms underlying these alterations. To characterize the causal interactions between brain regions, we estimated the effective connectivity (EC) in the model. We found that the significant EC differences in AD were primarily located in the left temporal lobe. We then systematically manipulated the underlying dynamics of the model to investigate simulated changes in FC relative to the healthy control subjects. Furthermore, we found distinct patterns involving the CSF biomarkers amyloid-beta (Aβ1−42), total tau (t-tau) and phosphorylated tau (p-tau). CSF Aβ1−42 was associated with the contrast between healthy control subjects and the clinical groups, whereas the tau CSF biomarkers were associated with variability in whole-brain synchronization and in sensory integration regions. The tau associations were robust across clinical groups, unlike those found for CSF Aβ1−42. APOE4 carriership showed no significant correlations with the connectivity measures.

  11. Impact of Physics Parameterization Ordering in a Global Atmosphere Model

    Science.gov (United States)

    Donahue, Aaron S.; Caldwell, Peter M.

    2018-02-01

Because weather and climate models must capture a wide variety of spatial and temporal scales, they rely heavily on parameterizations of subgrid-scale processes. The goal of this study is to demonstrate that the assumptions used to couple these parameterizations have an important effect on the climate of version 0 of the Energy Exascale Earth System Model (E3SM) General Circulation Model (GCM), a close relative of version 1 of the Community Earth System Model (CESM1). Like most GCMs, parameterizations in E3SM are sequentially split in the sense that parameterizations are called one after another, with each subsequent process feeling the effect of the preceding processes. This coupling strategy is noncommutative in the sense that the order in which processes are called affects the solution. By examining a suite of 24 simulations with the deep convection, shallow convection, macrophysics/microphysics, and radiation parameterizations reordered, process order is shown to have a substantial impact on the predicted climate. In particular, reordering of processes induces differences in net climate feedback that are as large as the intermodel spread in phase 5 of the Coupled Model Intercomparison Project. One reason why process ordering has such a large impact is that the effect of each process is influenced by the processes preceding it. Where output is written is therefore an important control on apparent model behavior. Application of k-means clustering demonstrates that the positioning of macro/microphysics and shallow convection plays a critical role in the model solution.
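Sequential splitting and its noncommutativity are easy to demonstrate with two toy "parameterizations" (both processes and their constants below are hypothetical, not E3SM's): applying them in opposite orders over one time step gives different end states.

```python
# Two toy processes acting on a moisture-like state q over one step dt
def condensation(q, dt):
    return q * (1.0 - 0.5 * dt)   # removes a fixed fraction of q

def convection(q, dt):
    return q + 2.0 * dt           # adds a fixed source to q

dt, q0 = 0.5, 10.0
# Sequential splitting: each process sees the output of the previous one
q_ab = convection(condensation(q0, dt), dt)   # condensation first
q_ba = condensation(convection(q0, dt), dt)   # convection first
print(q_ab, q_ba)  # 8.5 8.25 -- same processes, different answers
```

The gap between the two orderings shrinks as dt decreases, which is why coupling assumptions matter most at the long time steps GCMs use.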

  12. Parameterized data-driven fuzzy model based optimal control of a semi-batch reactor.

    Science.gov (United States)

    Kamesh, Reddi; Rani, K Yamuna

    2016-09-01

    A parameterized data-driven fuzzy (PDDF) model structure is proposed for semi-batch processes, and its application for optimal control is illustrated. The orthonormally parameterized input trajectories, initial states and process parameters are the inputs to the model, which predicts the output trajectories in terms of Fourier coefficients. Fuzzy rules are formulated based on the signs of a linear data-driven model, while the defuzzification step incorporates a linear regression model to shift the domain from input to output domain. The fuzzy model is employed to formulate an optimal control problem for single rate as well as multi-rate systems. Simulation study on a multivariable semi-batch reactor system reveals that the proposed PDDF modeling approach is capable of capturing the nonlinear and time-varying behavior inherent in the semi-batch system fairly accurately, and the results of operating trajectory optimization using the proposed model are found to be comparable to the results obtained using the exact first principles model, and are also found to be comparable to or better than parameterized data-driven artificial neural network model based optimization results. Copyright © 2016 ISA. Published by Elsevier Ltd. All rights reserved.

  13. Amplification of intrinsic emittance due to rough metal cathodes: Formulation of a parameterization model

    Energy Technology Data Exchange (ETDEWEB)

    Charles, T.K. [School of Physics and Astronomy, Monash University, Clayton, Victoria, 3800 (Australia); Australian Synchrotron, 800 Blackburn Road, Clayton, Victoria, 3168 (Australia); Paganin, D.M. [School of Physics and Astronomy, Monash University, Clayton, Victoria, 3800 (Australia); Dowd, R.T. [Australian Synchrotron, 800 Blackburn Road, Clayton, Victoria, 3168 (Australia)

    2016-08-21

Intrinsic emittance is often the limiting factor for brightness in fourth-generation light sources, and as such, a good understanding of the factors affecting intrinsic emittance is essential in order to be able to decrease it. Here we present a parameterization model describing the proportional increase in emittance induced by cathode surface roughness. One major benefit of the parameterization approach presented here is that it takes the complexity of a Monte Carlo model and reduces the results to a straightforward empirical model. The resulting models describe the proportional increase in transverse momentum introduced by surface roughness, and are applicable to various metal types, photon wavelengths, applied electric fields, and cathode surface terrains. The analysis includes the increase in emittance due to changes in the electric field induced by roughness, as well as the increase in transverse momentum resulting from the spatially varying surface normal. We also compare the results of the Parameterization Model to an Analytical Model which employs various approximations to produce a more compact expression at the cost of a reduction in accuracy.

  14. A practical approach to harmonic compensation in power systems-series connection of passive and active filters

    OpenAIRE

    Fujita, Hideaki; Akagi, Hirofumi

    1991-01-01

The authors present a combined system of a passive filter and a small-rated active filter, connected in series with each other. The passive filter removes load-produced harmonics just as a conventional filter does. The active filter plays the role of improving the filtering characteristics of the passive filter. This results in a great reduction of the required rating of the active filter and in eliminating all the limitations faced when using only the passive filter, leading to a practica...

  15. Inequalities and Duality in Gene Coexpression Networks of HIV-1 Infection Revealed by the Combination of the Double-Connectivity Approach and the Gini's Method

    Directory of Open Access Journals (Sweden)

    Chuang Ma

    2011-01-01

Full Text Available Symbiosis (Sym) and pathogenesis (Pat) form a duality problem of microbial infection, including HIV/AIDS. Statistical analysis of inequalities and duality in gene coexpression networks (GCNs) of HIV-1 infection may provide novel insights into AIDS. In this study, we focused on the analysis of GCNs of uninfected subjects and of HIV-1-infected patients at three different stages of viral infection, based on data deposited in the GEO database of NCBI. The inequalities and duality in these GCNs were analyzed by combining the double-connectivity (DC) approach with the Gini's method. DC analysis reveals that there are significant differences between positive and negative connectivity in HIV-1 stage-specific GCNs. The inequality measures of negative connectivity and edge weight change more significantly than those of positive connectivity and edge weight in GCNs from the HIV-1-uninfected to the AIDS stages. With the permutation test method, we identified a set of genes with significant changes in the inequality and duality measures of edge weight. Functional analysis shows that these genes are highly enriched for the immune system, which plays an essential role in the Sym-Pat duality (SPD) of microbial infections. Understanding the SPD problems of HIV-1 infection may provide novel intervention strategies for AIDS.
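The Gini's method used here rests on the standard Gini coefficient as an inequality measure; a minimal implementation over a vector of (for example) edge weights might look like this:

```python
import numpy as np

def gini(w):
    """Gini coefficient of non-negative weights: 0 for perfect equality,
    approaching 1 for maximal inequality. Standard sorted-index form."""
    w = np.sort(np.asarray(w, dtype=float))
    n = w.size
    i = np.arange(1, n + 1)  # 1-based ranks of the sorted weights
    # G = 2 * sum(i * w_i) / (n * sum(w)) - (n + 1) / n
    return 2.0 * np.sum(i * w) / (n * w.sum()) - (n + 1.0) / n

print(gini([1, 1, 1, 1]))            # 0.0  (equal edge weights)
print(round(gini([0, 0, 0, 4]), 2))  # 0.75 (one edge holds all the weight)
```

Applied per gene to positive and negative edge weights separately, such a measure can quantify the connectivity inequalities the study compares across infection stages.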

  16. Multimodel Uncertainty Changes in Simulated River Flows Induced by Human Impact Parameterizations

    Science.gov (United States)

Liu, Xingcai; Tang, Qiuhong; Cui, Huijuan; Mu, Mengfei; Gerten, Dieter; Gosling, Simon; Masaki, Yoshimitsu; Satoh, Yusuke; Wada, Yoshihide

    2017-01-01

Human impacts increasingly affect the global hydrological cycle and indeed dominate hydrological changes in some regions. Hydrologists have sought to identify human-impact-induced hydrological variations by parameterizing anthropogenic water uses in global hydrological models (GHMs). The consequent increase in model complexity is likely to introduce additional uncertainty among GHMs. Here, using four GHMs, between-model uncertainties are quantified in terms of the signal-to-noise ratio (SNR) for average river flow during 1971-2000, simulated in two experiments: with representation of human impacts (VARSOC) and without (NOSOC). This is the first quantitative investigation of between-model uncertainty resulting from the inclusion of human impact parameterizations. Results show that the between-model uncertainties in terms of SNR in the VARSOC annual flow are larger (about 2 for the globe, with varying magnitude across basins) than those in the NOSOC experiment, particularly in most areas of Asia and in areas north of the Mediterranean Sea. The SNR differences are mostly negative (-20 to 5, indicating higher uncertainty) for basin-averaged annual flow. The VARSOC high flow shows slightly lower uncertainties than the NOSOC simulations, with SNR differences mostly ranging from -20 to 20. The uncertainty differences between the two experiments are significantly related to the fraction of irrigated area in each basin. The large additional uncertainties introduced into the VARSOC simulations by the inclusion of human impact parameterizations underscore the urgent need for GHM development based on a better understanding of human impacts. Differences in the parameterizations of irrigation, reservoir regulation and water withdrawals are discussed as potential directions of improvement for future GHM development. We also discuss the advantages of statistical approaches to reduce the between-model uncertainties, and the importance of calibration of GHMs for not only
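Under the common definition of SNR as ensemble mean over between-model spread (the paper's exact formula may differ), the effect described above can be sketched with hypothetical ensemble values:

```python
import numpy as np

def snr(flows):
    """Signal-to-noise ratio across a model ensemble: ensemble-mean flow
    divided by the between-model standard deviation."""
    flows = np.asarray(flows, dtype=float)   # one value per GHM
    return flows.mean() / flows.std(ddof=1)

# Hypothetical river flows from four GHMs for one basin (arbitrary units)
nosoc  = [100.0, 105.0, 95.0, 102.0]   # no human impacts: models agree
varsoc = [80.0, 110.0, 60.0, 100.0]    # with human-impact schemes: wider spread
print(snr(nosoc) > snr(varsoc))        # True: lower SNR = higher uncertainty
```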

  17. Polynomial parameterized representation of macroscopic cross section for PWR reactor

    International Nuclear Information System (INIS)

    Fiel, Joao Claudio B.

    2015-01-01

The purpose of this work is to describe, by means of Tchebychev polynomials, a parameterized representation of the homogenized macroscopic cross section for a PWR fuel element as a function of soluble boron concentration, moderator temperature, fuel temperature, moderator density and {sup 235}U enrichment. The analyzed cross sections are: fission, scattering, total, transport, absorption and capture. This parameterization enables a quick and easy determination of the problem-dependent cross sections to be used in few-group calculations. The methodology presented here makes it possible to provide cross-section values for PWR core calculations without the need to generate them by computer-code calculations using the standard steps. The results obtained with the parameterized cross-section functions, when compared with the cross sections generated by SCALE code calculations, or when the resulting k{sub inf} is compared with that generated by MCNPX code calculations, show a difference of less than 0.7 percent. (author)
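A one-dimensional slice of such a parameterization can be sketched with NumPy's Chebyshev utilities; the cross-section "data" below are synthetic stand-ins for lattice-code (e.g., SCALE) output, with the other state variables held fixed.

```python
import numpy as np
from numpy.polynomial import chebyshev as C

# Synthetic absorption cross section vs. soluble boron (other states fixed)
boron = np.linspace(0.0, 2000.0, 21)                   # ppm
sigma_a = 0.030 + 1.5e-5 * boron - 2.0e-9 * boron**2   # cm^-1, illustrative

# Chebyshev fits are best conditioned on [-1, 1], so rescale the domain
x = 2.0 * boron / 2000.0 - 1.0
coef = C.chebfit(x, sigma_a, deg=4)

# Evaluate the parameterized cross section at an arbitrary boron level
b = 1250.0
approx = C.chebval(2.0 * b / 2000.0 - 1.0, coef)
exact = 0.030 + 1.5e-5 * b - 2.0e-9 * b**2
print(abs(approx - exact) / exact < 0.007)  # within the paper's 0.7% mark
```

The full parameterization is multivariate (boron, temperatures, density, enrichment); the same fit-and-evaluate pattern applies per variable or via tensor products of Chebyshev bases.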

  18. Parameterized Analysis of Paging and List Update Algorithms

    DEFF Research Database (Denmark)

    Dorrigiv, Reza; Ehmsen, Martin R.; López-Ortiz, Alejandro

    2015-01-01

...set model and express the performance of well-known algorithms in terms of this parameter. This explicitly introduces parameterized-style analysis to online algorithms. The idea is that rather than normalizing the performance of an online algorithm by an (optimal) offline algorithm, we explicitly...... express the behavior of the algorithm in terms of two more natural parameters: the size of the cache and Denning's working set measure. This technique creates a performance hierarchy of paging algorithms which better reflects their experimentally observed relative strengths. It also reflects the intuition...... that a larger cache leads to a better performance. We also apply the parameterized analysis framework to list update and show that certain randomized algorithms which are superior to MTF in the classical model are not so in the parameterized case, which matches experimental results....
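Denning's working set measure referenced in this record is simply the number of distinct pages touched in the last tau requests; a minimal sketch:

```python
def working_set_size(requests, t, tau):
    """Number of distinct pages among requests[t-tau+1 .. t]
    (window clipped at the start of the trace)."""
    window = requests[max(0, t - tau + 1): t + 1]
    return len(set(window))

reqs = [1, 2, 1, 3, 2, 4, 4, 1]
# At t=7 with tau=4, the window is [2, 4, 4, 1] -> pages {1, 2, 4}
print(working_set_size(reqs, 7, 4))  # 3
```

Parameterized analysis then bounds an algorithm's cost as a function of this quantity and the cache size, rather than against an offline optimum.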

  19. A multiresolution spatial parameterization for the estimation of fossil-fuel carbon dioxide emissions via atmospheric inversions

    Directory of Open Access Journals (Sweden)

    J. Ray

    2014-09-01

Full Text Available The characterization of fossil-fuel CO2 (ffCO2) emissions is paramount to carbon cycle studies, but the use of atmospheric inverse modeling approaches for this purpose has been limited by the highly heterogeneous and non-Gaussian spatiotemporal variability of emissions. Here we explore the feasibility of capturing this variability using a low-dimensional parameterization that can be implemented within the context of atmospheric CO2 inverse problems aimed at constraining regional-scale emissions. We construct a multiresolution (i.e., wavelet-based) spatial parameterization for ffCO2 emissions using the Vulcan inventory, and examine whether such a parameterization can capture a realistic representation of the expected spatial variability of actual emissions. We then explore whether sub-selecting wavelets using two easily available proxies of human activity (images of lights at night and maps of built-up areas) yields a low-dimensional alternative. We finally implement this low-dimensional parameterization within an idealized inversion, where a sparse reconstruction algorithm, an extension of stagewise orthogonal matching pursuit (StOMP), is used to identify the wavelet coefficients. We find (i) that the spatial variability of fossil-fuel emissions can indeed be represented using a low-dimensional wavelet-based parameterization, (ii) that images of lights at night can be used as a proxy for sub-selecting wavelets for such an analysis, and (iii) that implementing this parameterization within the described inversion framework makes it possible to quantify fossil-fuel emissions at regional scales if fossil-fuel-only CO2 observations are available.
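The core idea, representing a heterogeneous emission field with a handful of wavelet coefficients, can be sketched with a one-level Haar transform. The data below are hypothetical; the paper uses the Vulcan inventory and a StOMP solver rather than the simple magnitude thresholding shown here.

```python
import numpy as np

rng = np.random.default_rng(3)

def haar2(a):
    """One level of the orthonormal 2-D Haar transform (rows, then columns)."""
    lo = (a[0::2, :] + a[1::2, :]) / np.sqrt(2.0)
    hi = (a[0::2, :] - a[1::2, :]) / np.sqrt(2.0)
    a = np.vstack([lo, hi])
    lo = (a[:, 0::2] + a[:, 1::2]) / np.sqrt(2.0)
    hi = (a[:, 0::2] - a[:, 1::2]) / np.sqrt(2.0)
    return np.hstack([lo, hi])

# Hypothetical "emission map": a few strong point sources (cities) on a
# weak diffuse background -- heterogeneous and sparse, like ffCO2
emis = 0.01 * rng.random((32, 32))
emis[5, 7] = emis[20, 12] = emis[28, 30] = 10.0

w = haar2(emis)
k = 20                                      # keep 20 of 1024 coefficients
thresh = np.sort(np.abs(w).ravel())[-k]
energy_kept = np.sum(w[np.abs(w) >= thresh] ** 2) / np.sum(w ** 2)
print(energy_kept > 0.95)  # a low-dimensional basis captures the field
```

Because the transform is orthonormal, the retained-coefficient energy fraction directly measures how well a k-dimensional parameterization represents the field; proxy maps such as nightlights serve to choose which coefficients to retain before any observations are used.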

  20. Analyses of the stratospheric dynamics simulated by a GCM with a stochastic nonorographic gravity wave parameterization

    Science.gov (United States)

    Serva, Federico; Cagnazzo, Chiara; Riccio, Angelo

    2016-04-01

version of the model: the default, and a new stochastic version in which the value of the perturbation field at the launching level is not constant and uniform, but is extracted at each time step and grid point from a given PDF. With this approach we aim to add further variability to the effects of the deterministic NOGW parameterization. The impact on the simulated climate will be assessed with a focus on the Quasi-Biennial Oscillation of the equatorial stratosphere (known to be driven in part by gravity waves) and on the variability of the mid-to-high-latitude atmosphere. The different characteristics of the circulation will be compared with recent reanalysis products in order to determine the advantages of the stochastic approach over the traditional deterministic scheme.

  1. Droplet Nucleation: Physically-Based Parameterizations and Comparative Evaluation

    Directory of Open Access Journals (Sweden)

    Steve Ghan

    2011-10-01

Full Text Available One of the greatest sources of uncertainty in simulations of climate and climate change is the influence of aerosols on the optical properties of clouds. The root of this influence is the droplet nucleation process, which involves the spontaneous growth of aerosol into cloud droplets at cloud edges, during the early stages of cloud formation, and in some cases within the interior of mature clouds. Numerical models of droplet nucleation represent much of the complexity of the process, but at a computational cost that limits their application to simulations of hours or days. Physically based parameterizations of droplet nucleation are designed to quickly estimate the number nucleated as a function of the primary controlling parameters: the aerosol number size distribution, hygroscopicity, and cooling rate. Here we compare and contrast the key assumptions used in developing each of the most popular parameterizations and evaluate their performance under a variety of conditions. We find that the more complex parameterizations perform well under a wider variety of nucleation conditions, but all parameterizations perform well under the most common conditions. We then discuss the various applications of the parameterizations in cloud-resolving, regional and global models to study aerosol effects on clouds at a wide range of spatial and temporal scales. We compare estimates of anthropogenic aerosol indirect effects using two different parameterizations applied to the same global climate model, and find that the estimates of indirect effects differ by only 10%. We conclude with a summary of the outstanding challenges remaining for further development and application.
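As a concrete example of the simplest physically based activation parameterization, Twomey's classic power law relates activated droplet number to peak supersaturation; the constants below are illustrative values, not taken from the paper.

```python
def twomey_activated(S, C=100.0, k=0.7):
    """Droplet number activated (per cm^3) at peak supersaturation S (percent),
    via Twomey's power law N = C * S**k. C and k summarize the aerosol
    spectrum; the values here are hypothetical for a clean air mass."""
    return C * S ** k

# Stronger cooling rates raise peak supersaturation and activate more droplets
print(round(twomey_activated(0.2), 1))  # weak updraft, fewer droplets
print(round(twomey_activated(1.0), 1))  # 100.0 activated at S = 1%
```

Modern parameterizations replace the fixed (C, k) pair with explicit dependence on the size distribution, hygroscopicity and cooling rate, which is exactly the comparison the review above undertakes.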

  2. A new parameterization for waveform inversion in acoustic orthorhombic media

    KAUST Repository

    Masmoudi, Nabil

    2016-05-26

Orthorhombic anisotropic model inversion is especially challenging because of the multiparameter nature of the inversion problem. The high number of parameters required to describe the medium introduces considerable trade-offs and additional nonlinearity into a full-waveform inversion (FWI) application. Choosing a suitable set of parameters to describe the model and designing an effective inversion strategy can help in mitigating this problem. Using the Born approximation, which is the central ingredient of the FWI update process, we have derived radiation patterns for the different acoustic orthorhombic parameterizations. By analyzing the angular dependence of scattering (radiation patterns) of the parameters of different parameterizations, starting with the often-used Thomsen-Tsvankin parameterization, we have assessed the potential trade-off between the parameters and the resolution in describing the data and inverting for the parameters. The analysis led us to introduce the new parameters ϵd, δd, and ηd, which have azimuthally dependent radiation patterns but keep the scattering potential of the transversely isotropic parameters stationary with azimuth (azimuth independent). The novel parameters ϵd, δd, and ηd are dimensionless and represent a measure of deviation between the vertical planes in orthorhombic anisotropy. Therefore, these deviation parameters offer a new parameterization style for an acoustic orthorhombic medium described by six parameters: three vertical transversely isotropic (VTI) parameters, two deviation parameters, and one parameter describing the anisotropy in the horizontal symmetry plane. The main feature of any parameterization based on the deviation parameters is the azimuthal independence of the modeled data with respect to the VTI parameters, which allowed us to propose practical inversion strategies based on our experience with the VTI parameters. This feature of the new parameterization style holds for even the long-wavelength components of

  3. Bioavailability of radiocaesium in soil: parameterization using soil characteristics

    Energy Technology Data Exchange (ETDEWEB)

    Syssoeva, A.A.; Konopleva, I.V. [Russian Institute of Agricultural Radiology and Agroecology, Obninsk (Russian Federation)

    2004-07-01

It has been shown that radiocaesium availability to plants is strongly influenced by soil properties. For the best evaluation of TFs it is necessary to use mechanistic models that predict radionuclide uptake by plants based on consideration of sorption-desorption and fixation-remobilization of the radionuclide in the soil, as well as of root uptake processes controlled by the plant. The aim of the research was to characterise typical Russian soils on the basis of radiocaesium availability. A parameter of radiocaesium availability in soils (A) has been developed which consists of the radiocaesium exchangeability; CF, the concentration factor, which is the ratio of the radiocaesium in the plant to that in the soil solution; and K{sub Dex}, the exchangeable solid-liquid distribution coefficient of radiocaesium. The approach was tested for a wide range of Russian soils using radiocaesium uptake data from a barley pot trial and the parameters of radiocaesium bioavailability. Soils were collected from the arable horizons in different soil-climatic zones of Russia and artificially contaminated with {sup 137}Cs. The classification of soils in terms of radiocaesium availability corresponds quite well to the observed linear relationship between the {sup 137}Cs TF for barley and A. K{sub Dex} is related to the soil radiocaesium interception potential (RIP), which was found to be positively and strongly related to clay and physical clay (<0,01 mm) content. The {sup 137}Cs exchangeability was found to be closely related to the soil vermiculite content, which was estimated by the method of Cs{sup +} fixation. It is shown that radiocaesium availability to plants in the soils under study can be parameterized through mineralogical soil characteristics: % clay and the soil vermiculite content. (author)

  4. Parameterization of radiocaesium soil-plant transfer using soil characteristics

    International Nuclear Information System (INIS)

    Konoplev, A. V.; Drissner, J.; Klemt, E.; Konopleva, I. V.; Zibold, G.

    1996-01-01

A model of radionuclide soil-plant transfer is proposed to parameterize the transfer factor by soil and soil-solution characteristics. The model is tested with experimental data on the aggregated transfer factor T{sub ag} and soil parameters for 8 forest sites in Baden-Wuerttemberg. It is shown that the integral soil-plant transfer factor can be parameterized through radiocaesium exchangeability, the capacity of selective sorption sites and the ion composition of the soil solution or the water extract. A modified technique for the measurement of frayed edge sites (FES) in soils with interlayer collapse is proposed. (author)

  5. Use of a hydrogeochemical approach in determining hydraulic connection between porous heat reservoirs in Kaifeng area, Henan, China

    International Nuclear Information System (INIS)

    Lin Xueyu; Taboure, Aboubacar; Wang Xinyi; Liao Zisheng

    2007-01-01

In this paper a case study of hydraulic connectivity in a 300-1600 m deep, low-temperature, sedimentary geothermal system in the Kaifeng area, Henan province, China is presented. Based on lithologic data from 52 geothermal wells and chemical data on geothermal water (GW) from six depth-specific and representative wells, the system was chemically grouped into two main hot reservoirs (300-1300 m and 1300-1600 m deep), which were, in turn, divided into six sub-reservoirs (SRs). Data on stable isotope ({sup 2}H and {sup 18}O) ratios and radioactive isotope ({sup 14}C) activity, in conjunction with computation of mineral-fluid chemical equilibria, were used to establish the recharge source (a mountainous region in the southwestern part of Zhengzhou, 60 km away); to evaluate groundwater ages, which varied with well depth from 15630 ± 310 a to 24970 ± 330 a; and to assess the chemical equilibrium state within the system. The results of the different analyses did not suggest an obvious hydraulic connection between the two main hot reservoirs. The location of the recharge zone and the geohydrologic characteristics of the study area demonstrate that the GW utilized from the system is mainly derived from confined waters of meteoric origin
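Ages of the kind quoted above follow from the conventional radiocarbon age equation; a quick check with illustrative activity ratios (the actual measured ratios are not given in the record):

```python
import math

def c14_age(activity_ratio):
    """Conventional radiocarbon age t = -8033 * ln(A/A0), where 8033 a is
    the Libby mean life and A/A0 the measured fraction of modern carbon."""
    return -8033.0 * math.log(activity_ratio)

# Roughly 14% and 4.5% modern carbon bracket the 15630 +/- 310 a to
# 24970 +/- 330 a range reported for the wells (illustrative inputs)
print(round(c14_age(0.143)))   # about 15,600 a
print(round(c14_age(0.0445)))  # about 25,000 a
```

The strong age increase with depth, together with the equilibrium calculations, is what argues against mixing (and hence hydraulic connection) between the two reservoirs.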

  6. Analytical Approach to Circulating Current Mitigation in Hexagram Converter-Based Grid-Connected Photovoltaic Systems Using Multiwinding Coupled Inductors

    Directory of Open Access Journals (Sweden)

    Abdullrahman A. Al-Shamma’a

    2018-01-01

    Full Text Available The hexagram multilevel converter (HMC) is composed of six conventional two-level voltage source converters (VSCs), where each VSC module is connected to a string of PV arrays. The VSC modules are connected through inductors, which are essential to minimize the circulating current. Selecting inductors with suitable inductance is not a simple process: the inductance must be large enough to minimize the circulating current, yet small enough to avoid an excessive voltage drop. This paper analyzes the use of a multiwinding (e.g., two-, three-, and six-winding) coupled inductor to interconnect the six VSC modules instead of six single inductors, in order to minimize the circulating current inside the HMC. A theoretical relationship is then derived between the total impedance presented to the circulating current, the number of coupled inductor windings, and the magnetizing inductance. Owing to the coupled inductors, the impedance on the circulating current path is a multiple of six times the magnetizing inductance, whereas the terminal voltage is only slightly affected by the leakage inductance. The HMC is controlled to operate under variable solar radiation, providing active power to the grid. Additional functions, such as DSTATCOM operation during daytime, are also demonstrated. The controller performance is found to be satisfactory for both active and reactive power supply.
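    The scaling described in the abstract — a circulating-current impedance that grows with the number of coupled windings, while the terminal voltage sees mainly the leakage inductance — can be sketched numerically. This is an illustrative fully-coupled approximation with made-up component values, not the authors' derivation:

```python
import numpy as np

# Illustrative sketch (component values and the fully-coupled approximation
# are hypothetical): a circulating current flows through every winding of the
# coupled inductor in the same magnetic sense, so magnetizing fluxes add and
# the impedance it sees scales with the number of coupled windings times Lm;
# the line current ideally sees only the small leakage inductance.
f = 50.0       # grid frequency (Hz)
Lm = 5e-3      # magnetizing inductance (H), illustrative
Ll = 50e-6     # leakage inductance (H), illustrative

def circulating_impedance(n_windings):
    """Reactance seen by the circulating current for n coupled windings."""
    return 2 * np.pi * f * (n_windings * Lm + Ll)

def terminal_impedance():
    """Reactance seen by the terminal (line) current: leakage only."""
    return 2 * np.pi * f * Ll

for n in (2, 3, 6):
    print(n, round(circulating_impedance(n), 2), round(terminal_impedance(), 4))
```

    With six fully coupled windings, the circulating-current path sees roughly (6·Lm + Ll)/Ll times the terminal reactance — the qualitative effect the paper exploits to suppress the circulating current without penalizing the terminal voltage.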

  7. A new approach for power quality improvement of DFIG based wind farms connected to weak utility grid

    Directory of Open Access Journals (Sweden)

    Hossein Mahvash

    2017-09-01

    Full Text Available Most power quality problems for grid-connected doubly fed induction generators (DFIGs) with wind turbines include flicker, variations of the voltage RMS profile, and harmonics injected by switching in the DFIG converters. The flicker phenomenon is the most important problem in wind power systems. This paper describes an effective method for mitigating flicker emission and improving power quality for a fairly weak grid connected to a wind farm with DFIGs. The method was applied in the rotor-side converter (RSC) of the DFIG to control the output reactive power. The q-axis reference current was derived directly from the mathematical relation between the rotor q-axis current and the DFIG output reactive power, without using a PI controller. To extract the reference reactive power, a stator voltage control loop with a droop coefficient was proposed to regulate the grid voltage level in each operational condition. The DFIG output active power was controlled separately in the d-axis using stator voltage orientation control (SVOC). Different simulations were carried out on the test system, and the flicker short-term severity index (Pst) was calculated for each case study using the discrete flickermeter model according to the IEC 61400 standard. The obtained results validated flicker mitigation and power quality enhancement for the grid.

  8. An analysis of MM5 sensitivity to different parameterizations for high-resolution climate simulations

    Science.gov (United States)

    Argüeso, D.; Hidalgo-Muñoz, J. M.; Gámiz-Fortis, S. R.; Esteban-Parra, M. J.; Castro-Díez, Y.

    2009-04-01

    An evaluation of MM5 mesoscale model sensitivity to different parameterization schemes is presented in terms of temperature and precipitation for high-resolution integrations over Andalusia (South of Spain). ERA-40 reanalysis data are used as initial and boundary conditions. Two domains were used: a coarse one of 55 by 60 grid points with 30 km spacing, and a nested domain of 48 by 72 grid points with 10 km spacing. The coarse domain fully covers the Iberian Peninsula, and Andalusia fits loosely within the finer one. In addition to the parameterization tests, two dynamical downscaling techniques have been applied in order to examine the influence of initial conditions on RCM long-term studies. Regional climate studies usually employ continuous integration for the period under survey, initializing atmospheric fields only at the starting point and feeding in boundary conditions regularly. An alternative approach is based on frequent re-initialization of atmospheric fields, so that the simulation is divided into several independent integrations. Altogether, 20 simulations have been performed using varying physics options, of which 4 applied the re-initialization technique. Surface temperature and accumulated precipitation (at daily and monthly scales) were analyzed for a 5-year period covering 1990 to 1994. Results have been compared with daily observational data series from 110 stations for temperature and 95 for precipitation. Both daily and monthly average temperatures are generally well represented by the model. Conversely, daily precipitation results present larger deviations from the observational data. However, noticeable accuracy is gained when comparing with monthly precipitation observations. There are some particularly problematic subregions where precipitation is scarcely captured, such as the Southeast of the Iberian Peninsula, mainly due to its extremely convective nature. Regarding the performance of the parameterization schemes, every set provides very

  9. The applicability of the viscous α-parameterization of gravitational instability in circumstellar disks

    Science.gov (United States)

    Vorobyov, E. I.

    2010-01-01

    We study numerically the applicability of the effective-viscosity approach for simulating the effect of gravitational instability (GI) in disks of young stellar objects with different disk-to-star mass ratios ξ. We adopt two α-parameterizations for the effective viscosity based on Lin and Pringle [Lin, D.N.C., Pringle, J.E., 1990. ApJ 358, 515] and Kratter et al. [Kratter, K.M., Matzner, Ch.D., Krumholz, M.R., 2008. ApJ 681, 375] and compare the resultant disk structure, disk and stellar masses, and mass accretion rates with those obtained directly from numerical simulations of self-gravitating disks around low-mass (M∗ ∼ 1.0 M⊙) protostars. We find that the effective viscosity can, in principle, simulate the effect of GI in stellar systems with ξ ≲ 0.2-0.3, thus corroborating a similar conclusion by Lodato and Rice [Lodato, G., Rice, W.K.M., 2004. MNRAS 351, 630] that was based on a different α-parameterization. In particular, the Kratter et al. α-parameterization has proven superior to that of Lin and Pringle, because the success of the latter depends crucially on the proper choice of the α-parameter. However, the α-parameterization generally fails in stellar systems with ξ ≳ 0.3, particularly in the Class 0 and Class I phases of stellar evolution, yielding too small stellar masses and too large disk-to-star mass ratios. In addition, the time-averaged mass accretion rates onto the star are underestimated in the early disk evolution and greatly overestimated in the late evolution. The failure of the α-parameterization in the case of large ξ is caused by the growing strength of low-order spiral modes in massive disks. Only in the late Class II phase, when the magnitude of the spiral modes diminishes and mode-to-mode interaction ensues, may the effective viscosity be used to simulate the effect of GI in stellar systems with ξ ≳ 0.3. A simple modification of the effective viscosity that takes into account disk fragmentation can somewhat improve

  10. Impact of cloud microphysics and cumulus parameterization on ...

    Indian Academy of Sciences (India)

    2007-10-09

    Oct 9, 2007 ... Bangladesh. Weather Research and Forecast (WRF–ARW version) modelling system with six dif- .... tem intensified rapidly into a land depression over southern part of .... tent and temperature and is represented as a sum.

  11. Parameterized representation of macroscopic cross section for PWR reactor

    International Nuclear Information System (INIS)

    Fiel, João Cláudio Batista; Carvalho da Silva, Fernando; Senra Martinez, Aquilino; Leal, Luiz C.

    2015-01-01

    Highlights: • This work describes a parameterized representation of the homogenized macroscopic cross section for a PWR reactor. • Parameterization enables a quick determination of problem-dependent cross-sections to be used in few-group calculations. • This work allows generating group cross-section data to perform PWR core calculations without computer code calculations. - Abstract: The purpose of this work is to describe, by means of Chebyshev polynomials, a parameterized representation of the homogenized macroscopic cross section for a PWR fuel element as a function of soluble boron concentration, moderator temperature, fuel temperature, moderator density and ²³⁵U enrichment. The cross-section data analyzed are fission, scattering, total, transport, absorption and capture. The parameterization enables a quick and easy determination of problem-dependent cross-sections to be used in few-group calculations. The methodology presented in this paper will allow generation of group cross-section data from stored polynomials to perform PWR core calculations without the need to generate them based on computer code calculations using standard steps. The results obtained by the proposed methodology, when compared with results from SCALE code calculations, show very good agreement.
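    As a minimal one-dimensional illustration of the approach — fitting a Chebyshev expansion to tabulated cross-section data and evaluating the stored polynomial instead of rerunning a lattice code — consider the sketch below. The cross-section values are synthetic, and a single state parameter (soluble boron concentration) stands in for the paper's five:

```python
import numpy as np

# Hypothetical reference data: absorption cross section (1/cm) tabulated
# against soluble boron concentration (ppm); in practice these values would
# come from lattice-code calculations over the full multi-parameter state space.
boron_ppm = np.linspace(0.0, 2000.0, 41)
sigma_abs = 0.025 + 1.2e-5 * boron_ppm - 1.5e-9 * boron_ppm**2

# Fit a low-order Chebyshev expansion; Chebyshev.fit maps the parameter range
# onto [-1, 1] internally, so the stored coefficients are well scaled.
cheb = np.polynomial.Chebyshev.fit(boron_ppm, sigma_abs, deg=4)

# Core calculations then evaluate the stored polynomial instead of rerunning
# the lattice code for each new boron concentration.
print(cheb(1000.0))   # cross section at 1000 ppm
```

    In the full method the coefficients themselves become functions of the remaining state parameters (moderator and fuel temperature, moderator density, enrichment), but the store-and-evaluate workflow is the same.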

  12. Parameterization of planetary wave breaking in the middle atmosphere

    Science.gov (United States)

    Garcia, Rolando R.

    1991-01-01

    A parameterization of planetary wave breaking in the middle atmosphere has been developed and tested in a numerical model which includes governing equations for a single wave and the zonal-mean state. The parameterization is based on the assumption that wave breaking represents a steady-state equilibrium between the flux of wave activity and its dissipation by nonlinear processes, and that the latter can be represented as linear damping of the primary wave. With this and the additional assumption that the effect of breaking is to prevent further amplitude growth, the required dissipation rate is readily obtained from the steady-state equation for wave activity; diffusivity coefficients then follow from the dissipation rate. The assumptions made in the derivation are equivalent to those commonly used in parameterizations for gravity wave breaking, but the formulation in terms of wave activity helps highlight the central role of the wave group velocity in determining the dissipation rate. Comparison of model results with nonlinear calculations of wave breaking and with diagnostic determinations of stratospheric diffusion coefficients reveals remarkably good agreement, and suggests that the parameterization could be useful for simulating inexpensively, but realistically, the effects of planetary wave transport.

  13. Connecting Grammaticalisation

    DEFF Research Database (Denmark)

    Nørgård-Sørensen, Jens; Heltoft, Lars; Schøsler, Lene

    morphological, topological and constructional paradigms often connect to form complex paradigms. The book introduces the concept of connecting grammaticalisation to describe the formation, restructuring and dismantling of such complex paradigms. Drawing primarily on data from Germanic, Romance and Slavic...

  14. Connecting the Dots: State Health Department Approaches to Addressing Shared Risk and Protective Factors Across Multiple Forms of Violence

    Science.gov (United States)

    Wilkins, Natalie; Myers, Lindsey; Kuehl, Tomei; Bauman, Alice; Hertz, Marci

    2018-01-01

    Violence takes many forms, including intimate partner violence, sexual violence, child abuse and neglect, bullying, suicidal behavior, and elder abuse and neglect. These forms of violence are interconnected and often share the same root causes. They can also co-occur together in families and communities and can happen at the same time or at different stages of life. Often, due to a variety of factors, separate, “siloed” approaches are used to address each form of violence. However, understanding and implementing approaches that prevent and address the overlapping root causes of violence (risk factors) and promote factors that increase the resilience of people and communities (protective factors) can help practitioners more effectively and efficiently use limited resources to prevent multiple forms of violence and save lives. This article presents approaches used by 2 state health departments, the Maryland Department of Health and Mental Hygiene and the Colorado Department of Public Health and Environment, to integrate a shared risk and protective factor approach into their violence prevention work and identifies key lessons learned that may serve to inform crosscutting violence prevention efforts in other states. PMID:29189502

  16. Research of connection between mass audience and new media. Approaches to new model of mass communication measurement

    OpenAIRE

    Sibiriakova Olena Oleksandrivna

    2015-01-01

    In this research the author examines changes in approaches to the observation of mass communication. As a result of systematizing the key theoretical models of communication, the author concludes that ideas about the process of measuring mass communication have evolved from linear models to multisided, multiple ones.

  17. Monte Carlo-based subgrid parameterization of vertical velocity and stratiform cloud microphysics in ECHAM5.5-HAM2

    Directory of Open Access Journals (Sweden)

    J. Tonttila

    2013-08-01

    Full Text Available A new method for parameterizing the subgrid variations of vertical velocity and cloud droplet number concentration (CDNC is presented for general circulation models (GCMs. These parameterizations build on top of existing parameterizations that create stochastic subgrid cloud columns inside the GCM grid cells, which can be employed by the Monte Carlo independent column approximation approach for radiative transfer. The new model version adds a description for vertical velocity in individual subgrid columns, which can be used to compute cloud activation and the subgrid distribution of the number of cloud droplets explicitly. Autoconversion is also treated explicitly in the subcolumn space. This provides a consistent way of simulating the cloud radiative effects with two-moment cloud microphysical properties defined at subgrid scale. The primary impact of the new parameterizations is to decrease the CDNC over polluted continents, while over the oceans the impact is smaller. Moreover, the lower CDNC induces a stronger autoconversion of cloud water to rain. The strongest reduction in CDNC and cloud water content over the continental areas promotes weaker shortwave cloud radiative effects (SW CREs even after retuning the model. However, compared to the reference simulation, a slightly stronger SW CRE is seen e.g. over mid-latitude oceans, where CDNC remains similar to the reference simulation, and the in-cloud liquid water content is slightly increased after retuning the model.
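    A toy sketch of the subcolumn idea described above: each stochastic subgrid column receives its own vertical-velocity draw, from which droplet activation is diagnosed explicitly, rather than from a single grid-mean value. The Gaussian velocity distribution and the activation law N = Nmax·w/(w + w0) are hypothetical stand-ins, not the ECHAM5.5-HAM2 formulation:

```python
import numpy as np

rng = np.random.default_rng(0)

def subcolumn_cdnc(n_sub=100, w_mean=0.3, w_sigma=0.2, Nmax=500.0, w0=0.5):
    """Draw a vertical velocity (m/s) for each subgrid column and diagnose
    cloud droplet number concentration (cm^-3) with a toy activation law."""
    w = np.maximum(rng.normal(w_mean, w_sigma, n_sub), 0.0)  # updrafts only
    return Nmax * w / (w + w0)

cdnc = subcolumn_cdnc()
# Grid-mean CDNC computed from the explicit subcolumn distribution, rather
# than by applying the activation law once to the mean velocity:
print(cdnc.mean())
```

    Because activation is nonlinear in w, the mean over subcolumns generally differs from the value obtained at the mean velocity — the kind of subgrid effect the parameterization is designed to capture.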

  18. Parameterizing unresolved obstacles with source terms in wave modeling: A real-world application

    Science.gov (United States)

    Mentaschi, Lorenzo; Kakoulaki, Georgia; Vousdoukas, Michalis; Voukouvalas, Evangelos; Feyen, Luc; Besio, Giovanni

    2018-06-01

    Parameterizing the dissipative effects of small, unresolved coastal features is fundamental to improving the skill of wave models. The established technique to deal with this problem consists of reducing the amount of energy advected within the propagation scheme, and is currently available only for regular grids. To find a more general approach, Mentaschi et al. (2015b) formulated a technique based on source terms and validated it on synthetic case studies. This technique separates the parameterization of the unresolved features from the energy advection, and can therefore be applied to any numerical scheme and to any type of mesh. Here we developed an open-source library for the estimation of the transparency coefficients needed by this approach, from bathymetric data and for any type of mesh. The spectral wave model WAVEWATCH III was used to show that in a real-world domain, such as the Caribbean Sea, the proposed approach has skill comparable to, and sometimes better than, the established propagation-based technique.
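    The transparency coefficients mentioned above can be pictured as the unobstructed-area fraction of a coarse model cell, estimated from high-resolution bathymetry. The sketch below is a hypothetical illustration of that estimation step, not the authors' open-source library:

```python
import numpy as np

# Hedged sketch: estimate a per-cell "transparency" coefficient as the
# fraction of a coarse cell that is open water rather than unresolved
# obstacles. A source-term scheme could then dissipate the (1 - alpha)
# share of the advected energy. The min_depth threshold is illustrative.
def transparency(depth_subgrid, min_depth=0.5):
    """depth_subgrid: 2-D array of high-resolution depths (m) inside one
    coarse cell; positive = water. Returns the unobstructed area fraction."""
    open_water = depth_subgrid > min_depth
    return open_water.mean()

# Toy cell: a small island (negative depth = dry land) occupies part of it.
cell = np.full((10, 10), 20.0)
cell[3:6, 3:6] = -1.0           # 9 of 100 sub-cells are dry land
alpha = transparency(cell)      # → 0.91
```

    A key practical point from the abstract is that, because the coefficient feeds a source term rather than the advection operator, the same estimate works on regular and unstructured meshes alike.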

  19. A Parameterization of Dry Thermals and Shallow Cumuli for Mesoscale Numerical Weather Prediction

    Science.gov (United States)

    Pergaud, Julien; Masson, Valéry; Malardel, Sylvie; Couvreux, Fleur

    2009-07-01

    For numerical weather prediction models and models resolving deep convection, shallow convective ascents are subgrid processes that are not parameterized by classical local turbulence schemes. The mass-flux formulation of convective mixing is now largely accepted as an efficient approach for parameterizing the contribution of larger plumes in convective dry and cloudy boundary layers. We propose a new formulation of the EDMF scheme (Eddy Diffusivity/Mass Flux) based on a single updraft that improves the representation of dry thermals and shallow convective clouds and conserves a correct representation of stratocumulus in mesoscale models. The definition of entrainment and detrainment in the dry part of the updraft is original, being specified as proportional to the ratio of buoyancy to vertical velocity. In the cloudy part of the updraft, the classical buoyancy-sorting approach is chosen. The main closure of the scheme is based on the mass flux near the surface, which is proportional to the sub-cloud-layer convective velocity scale w*. The link with the prognostic grid-scale cloud content and cloud cover, and the projection onto the non-conservative variables, is handled by the cloud scheme. The validation of this new formulation using large-eddy simulations focused on showing the robustness of the scheme in representing three different boundary-layer regimes. For dry convective cases, this parameterization enables a correct representation of the countergradient zone, where the mass-flux part represents the top entrainment (IHOP case). It can also handle the diurnal cycle of boundary-layer cumulus clouds (EUROCS/ARM) and conserves a realistic evolution of stratocumulus (EUROCS/FIRE).
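    The dry-updraft entrainment/detrainment rule summarized above can be sketched as follows. Taking buoyancy over squared vertical velocity gives the 1/m dimension required of the rates; that dimensional form, and the constants C_eps and C_del, are hypothetical placeholders rather than the scheme's calibrated values:

```python
def lateral_exchange(buoyancy, w_up, C_eps=0.55, C_del=0.001):
    """Return (entrainment, detrainment) rates in 1/m for one updraft level.

    buoyancy : updraft buoyancy acceleration (m/s^2)
    w_up     : updraft vertical velocity (m/s)
    Constants are illustrative, not the scheme's calibrated values.
    """
    ratio = buoyancy / max(w_up ** 2, 1e-6)   # guard against vanishing w
    entrainment = C_eps * max(ratio, 0.0)     # buoyant: environment drawn in
    detrainment = C_del * max(-ratio, 0.0)    # negatively buoyant: mass expelled
    return entrainment, detrainment

# Buoyant level: entrainment active, no detrainment.
e, d = lateral_exchange(0.01, 1.0)
print(round(e, 4), d)   # prints: 0.0055 0.0
```

    The switch between the two branches captures the qualitative behaviour described in the abstract: the updraft entrains while buoyant and detrains near its negatively buoyant top.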

  20. Physiological and transcriptional approaches reveal connection between nitrogen and manganese cycles in Shewanella algae C6G3

    Science.gov (United States)

    Aigle, Axel; Bonin, Patricia; Iobbi-Nivol, Chantal; Méjean, Vincent; Michotey, Valérie

    2017-03-01

    To explain anaerobic nitrite/nitrate production at the expense of ammonium, mediated by manganese oxide (Mn(IV)) in sediment, nitrate and manganese respiration were investigated in a strain (Shewanella algae C6G3) presenting these features. In contrast to S. oneidensis MR-1, a biotic transitory nitrite accumulation at the expense of ammonium was observed in S. algae during anaerobic growth with Mn(IV) under conditions of limiting electron acceptor, concomitantly with a higher electron donor stoichiometry than expected. This low and reproducible transitory accumulation is the result of both production and consumption, since the strain is able to dissimilatorily reduce nitrate to ammonium. Nitrite production under the Mn(IV) condition is supported by comparative expression of the nitrate/nitrite reductase genes (napA, nrfA, nrfA-2) and by the rates of the nitrate/nitrite reductase activities under Mn(IV), nitrate or fumarate conditions. Compared with S. oneidensis MR-1, S. algae contains additional genes that encode nitrate and nitrite reductases (napA-α and nrfA-2) and an outer membrane cytochrome (OMC) gene (mtrH). Different patterns of expression of the OMC genes (omcA, mtrF, mtrH and mtrC) were observed depending on the electron acceptor and growth phase. Only the gene mtrF-2 (SO1659 homolog) was specifically expressed under the Mn(IV) condition. Nitrate and Mn(IV) respiration thus appear connected at the physiological and transcriptional levels.

  1. Nursing staff connect libraries with improving patient care but not with achieving organisational objectives: a grounded theory approach.

    Science.gov (United States)

    Chamberlain, David; Brook, Richard

    2014-03-01

    Health organisations are often driven by specific targets defined by mission statements, aims and objectives to improve patient care. Health libraries need to demonstrate that they contribute to organisational objectives, but it is not clear how nurses view that contribution. To investigate ward nursing staff motivations, their awareness of ward and organisational objectives, and their attitudes towards the contribution of health library services to improving patient care. Qualitative research using focus group data was combined with content analysis of literature evidence and library statistics (quantitative data). Data were analysed using thematic coding, divided into five group themes: understanding of Trust, ward and personal objectives; use of the library; use of other information sources; quality; and issues. Four basic social-psychological processes were then developed. Behaviour indicates low awareness of organisational objectives despite patient-centric motivation. High awareness of library services is shown, with some connection made by ward staff between improved knowledge and improved patient care. There was a two-tiered understanding of ward objectives and library services, based on level of seniority. However, an evidence-based culture needs to be intrinsic in the organisation before all staff benefit. Libraries can actively engage in this at ward and board level and improve patient care by supporting organisational objectives. © 2014 The author. Health Information and Libraries Journal © 2014 Health Libraries Group.

  2. Optimal Layout Design using the Element Connectivity Parameterization Method: Application to Three Dimensional Geometrical Nonlinear Structures

    DEFF Research Database (Denmark)

    Yoon, Gil Ho; Joung, Young Soo; Kim, Yoon Young

    2005-01-01

    The topology design optimization of “three-dimensional geometrically-nonlinear” continuum structures is still a difficult problem, not only because of its problem size but also because of the occurrence of unstable continuum finite elements during the design optimization. To overcome this difficulty, the ele......) stiffness matrix of continuum finite elements. Therefore, any finite element code, including commercial codes, can be readily used for the ECP implementation. The key ideas and characteristics of these methods will be presented in this paper.

  3. Topology Optimization of Shape Memory Alloy Actuators using Element Connectivity Parameterization

    DEFF Research Database (Denmark)

    Langelaar, Matthijs; Yoon, Gil Ho; Kim, Yoon Young

    2005-01-01

    This paper presents the first application of topology optimization to the design of shape memory alloy actuators. Shape memory alloys (SMAs) exhibit strongly nonlinear, temperature-dependent material behavior. The complexity of the constitutive behavior makes the topology design of SMA structure......) stiffness matrix of continuum finite elements. Therefore, any finite element code, including commercial codes, can be readily used for the ECP implementation. The key ideas and characteristics of these methods will be presented in this paper.

  4. Approach of the Two-way Influence Between Lean and Green Manufacturing and its Connection to Related Organisational Areas

    Directory of Open Access Journals (Sweden)

    Rodrigo Salvador

    2017-07-01

    Full Text Available Initiatives toward Lean and Green Manufacturing arise mainly as an organisational response to current economic and environmental market pressures. This paper therefore presents a brief discussion, based on a literature review, of the potential two-way influence between Lean and Green Manufacturing and its role in the main organisational areas most closely related to these approaches, which were observed to be the most extensively discussed in the literature. Naturally, lean practices seem more likely to translate into green outcomes, though the reverse can also occur. There is some blurring regarding the factual integration of the two themes, as some authors suggest; notwithstanding, they certainly present a certain synergy. Therefore, further research is needed to unveil the real ties, overlaps and gaps between these approaches.

  5. Connecting the East and the West, the Local and the Universal: The Methodological Elements of a Transcultural Approach to Bioethics.

    Science.gov (United States)

    Nie, Jing-Bao; Fitzgerald, Ruth P

    From the outset, cross-cultural and transglobal bioethics has constituted a potent arena for a dynamic public discourse and academic debate alike. But prominent bioethical debates on such issues as the notion of common morality and a distinctive "Asian" bioethics in contrast to a "Western" one reveal some deeply rooted and still popular but seriously problematic methodological habits in approaching cultural differences, most notably, radically dichotomizing the East and the West, the local and the universal. In this paper, a "transcultural" approach to bioethics and cultural studies is proposed. It takes seriously the challenges offered by social sciences, anthropology in particular, towards the development of new methodologies for comparative and global bioethics. The key methodological elements of "transculturalism" include acknowledging the great internal plurality within every culture; highlighting the complexity of cultural differences; upholding the primacy of morality; incorporating a reflexive theory of social power; and promoting changes or progress towards shared and sometimes new moral values.

  6. THEORETICAL AND METHODOLOGICAL APPROACHES TO THE STUDY OF THE IMPACT OF INFORMATION TECHNOLOGY ON SOCIAL CONNECTIONS AMONG YOUTH

    Directory of Open Access Journals (Sweden)

    Sofia Alexandrovna Zverkova

    2015-11-01

    Full Text Available The urgency of this topic stems from the virtualization of communication in modern society, especially among young people, which affects social relations and social support services. The need is stressed for a more in-depth study of the network virtualization of society's social relations, given the ambiguous consequences of this phenomenon among the youth. Purpose. To analyze classic and contemporary theoretical and methodological approaches to the study of social ties and social support in terms of technological progress. Results. The article presents a sociological analysis of theoretical and methodological approaches to the study of problems of interaction and social support among youth through strong and weak social ties, in cyberspace and in the real world. Practical implications. The analysis opens the opportunity for a wide range of examinations of social relations in various fields of sociology, such as the sociology of youth and the sociology of communications.

  7. The Zebrafish GenomeWiki: a crowdsourcing approach to connect the long tail for zebrafish gene annotation

    OpenAIRE

    Singh, Meghna; Bhartiya, Deeksha; Maini, Jayant; Sharma, Meenakshi; Singh, Angom Ramcharan; Kadarkaraisamy, Subburaj; Rana, Rajiv; Sabharwal, Ankit; Nanda, Srishti; Ramachandran, Aravindhakshan; Mittal, Ashish; Kapoor, Shruti; Sehgal, Paras; Asad, Zainab; Kaushik, Kriti

    2014-01-01

    A large repertoire of gene-centric data has been generated in the field of zebrafish biology. Although the bulk of these data are available in the public domain, most of them are not readily accessible or available in nonstandard formats. One major challenge is to unify and integrate these widely scattered data sources. We tested the hypothesis that active community participation could be a viable option to address this challenge. We present here our approach to create standards for assimilat...

  8. Preferential flow dynamics in agricultural soils in Navarre (Spain): an experimental approach to gain insight into water connectivity

    Science.gov (United States)

    Iturria, Iban; Zubieta, Elena; Giménez, Rafael; Ángel Campo-Bescós, Miguel

    2017-04-01

    To address studies on soil erosion and water quality, it is essential to understand and quantify water movement through the soil. The estimation of this movement is usually based on soil texture and structure, since it is assumed that water moves through the soil matrix. However, soils prone to the formation of cracks or macropores can develop rapid flow paths capable of drastically changing the movement of water and, therefore, its connectivity across the soil. This has important consequences both for runoff (and thus for erosion) and for groundwater quality. Local preliminary studies have shown that in many agricultural soils in Navarre (Spain), the infiltration rate is mainly determined by this type of preferential flow. The formation of these cracks basically responds to expansion/contraction processes of clays due to changes in soil moisture content caused by rainfall. The aim of this work was to quantify, in agricultural soil, the presence of cracks/macropores responsible for preferential flow and their temporal variation under different soil moisture contents. The work was carried out in experimental plots (150 m2) of the UPNA under different types of conventional tillage: (i) mouldboard plough; (ii) chisel; and (iii) mouldboard + Molon rake. Each plot was divided into two halves or subplots. One half was subjected to the action of 4 simulated rainfall events (with 5 days between events), whereas no rain was applied to the other half. Six subplots were thus defined. After each of the 4 rainfall events, once the 5 days had passed, the following experiments were conducted in each of the 6 subplots. In microplots (0.5 m2) a colourant (aqueous solution of bromide) was applied (Lu and Wu, 2003). Specifically, 8 mm of this solution was applied as intense rain with a sprinkler, avoiding any waterlogging. Then, vertical cuts of 50-60 cm were made where the cracks/macropores were evidenced by the colourant. Photographs of the profiles were

  9. A transcriptional approach to unravel the connection between phospholipases A₂ and D and ABA signal in citrus under water stress.

    Science.gov (United States)

    Romero, Paco; Lafuente, M Teresa; Alférez, Fernando

    2014-07-01

    The effect of water stress on the interplay between phospholipases (PL) A2 and D and ABA signalling was investigated in fruit and leaves from the sweet orange Navelate and its fruit-specific ABA-deficient mutant Pinalate by simultaneously studying the expression of 5 PLD- and 3 PLA2-encoding genes. In general, expression levels of PLD-encoding genes were higher at harvest in the flavedo (coloured outer part of the peel) from Pinalate. Moreover, a higher and transient increase in expression of CsPLDα, CsPLDβ, CsPLDδ and CsPLDζ was observed in the mutant as compared to Navelate fruit under water stress, which may reflect a mechanism of acclimation to water stress influenced by ABA deficiency. An early induction in CsPLDγ gene expression, when the increase in peel damage during fruit storage was most evident, suggested a role for this gene in membrane degradation processes during water stress. Exogenous ABA applied to mutant fruit modified the expression of all PLD genes and reduced the expression of CsPLDα and CsPLDβ by 1 week to levels similar to those of Navelate, suggesting a repressor role of ABA on these genes. In general, CssPLA2α and β transcript levels were lower in flavedo from Pinalate than from Navelate fruit during the first 3 weeks of storage, suggesting that expression of these genes also depends at least partially on ABA levels. Patterns of expression of PLD and PLA2-encoding genes were very similar in Navelate and Pinalate leaves, which have similar ABA levels, when comparing both RH conditions. Comparison of these results with others from previous works in the same experimental systems helped to decipher the effect of stress severity on the differential response of some of these genes under dehydration conditions and pointed out the interplay between the PLA2 and PLD families and their connection with ABA signalling in citrus. Copyright © 2014 Elsevier Masson SAS. All rights reserved.

  10. Parameterizing Coefficients of a POD-Based Dynamical System

    Science.gov (United States)

    Kalb, Virginia L.

    2010-01-01

    A method of parameterizing the coefficients of a dynamical system based on a proper orthogonal decomposition (POD) representing the flow dynamics of a viscous fluid has been introduced. (A brief description of POD is presented in the immediately preceding article.) The present parameterization method is intended to enable construction of the dynamical system to accurately represent the temporal evolution of the flow dynamics over a range of Reynolds numbers. The need for this or a similar method arises as follows: A procedure that includes direct numerical simulation followed by POD, followed by Galerkin projection to a dynamical system has been proven to enable representation of flow dynamics by a low-dimensional model at the Reynolds number of the simulation. However, a more difficult task is to obtain models that are valid over a range of Reynolds numbers. Extrapolation of low-dimensional models by use of straightforward Reynolds-number-based parameter continuation has proven to be inadequate for successful prediction of flows. A key part of the problem of constructing a dynamical system to accurately represent the temporal evolution of the flow dynamics over a range of Reynolds numbers is the problem of understanding and providing for the variation of the coefficients of the dynamical system with the Reynolds number. Prior methods do not enable capture of temporal dynamics over ranges of Reynolds numbers in low-dimensional models, and are not even satisfactory when large numbers of modes are used. The basic idea of the present method is to solve the problem through a suitable parameterization of the coefficients of the dynamical system. The parameterization computations involve utilization of the transfer of kinetic energy between modes as a function of Reynolds number. The thus-parameterized dynamical system accurately predicts the flow dynamics and is applicable to a range of flow problems in the dynamical regime around the Hopf bifurcation. Parameter

  11. A simple approach to enhance the performance of complex-coefficient filter-based PLL in grid-connected applications

    DEFF Research Database (Denmark)

    Ramezani, Malek; Golestan, Saeed; Li, Shuhui

    2018-01-01

    In recent years, a large number of three-phase phase-locked loops (PLLs) have been developed. One of the most popular ones is the complex-coefficient filter-based PLL (CCF-PLL). The CCFs benefit from a sequence-selective filtering ability and, hence, enable the CCF-PLL to selectively reject/extract...... disturbances before the PLL control loop while maintaining an acceptable dynamic behavior. The aim of this paper is to present a simple yet effective approach to enhance the standard CCF-PLL performance without requiring any additional computational load....

  12. Improved Satellite-based Crop Yield Mapping by Spatially Explicit Parameterization of Crop Phenology

    Science.gov (United States)

    Jin, Z.; Azzari, G.; Lobell, D. B.

    2016-12-01

    Field-scale mapping of crop yields with satellite data often relies on the use of crop simulation models. However, these approaches can be hampered by inaccuracies in the simulation of crop phenology. Here we present and test an approach that uses dense time series of Landsat 7 and 8 acquisitions to calibrate various parameters related to crop phenology simulation, such as leaf number and leaf appearance rates. These parameters are then mapped across the Midwestern United States for maize and soybean, and for two different simulation models. We then implement our recently developed Scalable satellite-based Crop Yield Mapper (SCYM) with simulations reflecting the improved phenology parameterizations, and compare to prior estimates based on default phenology routines. Our preliminary results show that the proposed method can effectively alleviate the underestimation of early-season LAI by the default Agricultural Production Systems sIMulator (APSIM), and that spatially explicit parameterization of the phenology model substantially improves the SCYM performance in capturing the spatiotemporal variation in maize and soybean yield. The scheme presented in our study thus preserves the scalability of SCYM, while significantly reducing its uncertainty.

  13. The Grell-Freitas Convective Parameterization: Recent Developments and Applications Within the NASA GEOS Global Model

    Science.gov (United States)

    Freitas, S.; Grell, G. A.; Molod, A.

    2017-12-01

    We implemented and began to evaluate an alternative convection parameterization for the NASA Goddard Earth Observing System (GEOS) global model. The parameterization (Grell and Freitas, 2014) is based on the mass flux approach with several closures, for equilibrium and non-equilibrium convection, and includes scale- and aerosol-awareness functionalities. Scale dependence for deep convection is implemented either through the method described by Arakawa et al (2011), or through lateral spreading of the subsidence terms. Aerosol effects are included through the dependence of autoconversion and evaporation on the CCN number concentration. Recently, the scheme has been extended to a tri-modal spectral size approach to simulate the transition from shallow, through congestus, to deep convection regimes. In addition, the inclusion of a new closure for non-equilibrium convection resulted in a substantial gain in realism in model simulation of the diurnal cycle of convection over land. Also, a beta pdf is now employed to represent the normalized mass flux profile. This opens up an additional avenue for applying stochasticism in the scheme.
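The abstract's beta-pdf representation of the normalized mass-flux profile can be sketched as follows. The shape parameters and the normalized vertical coordinate below are illustrative assumptions, not the values used in GEOS; the sketch only shows how a beta pdf over a normalized cloud-depth coordinate yields a smooth, unimodal mass-flux profile rescaled to a unit maximum.

```python
import numpy as np
from math import gamma

def beta_pdf(x, p, q):
    """Beta probability density on (0, 1)."""
    return gamma(p + q) / (gamma(p) * gamma(q)) * x**(p - 1) * (1 - x)**(q - 1)

def normalized_mass_flux(sigma, p=2.5, q=1.8):
    """Normalized mass-flux profile eta(sigma): a beta pdf over a
    normalized cloud-depth coordinate, rescaled so its maximum is 1.
    p and q are hypothetical shape parameters for illustration."""
    eta = beta_pdf(sigma, p, q)
    return eta / eta.max()

sigma = np.linspace(0.01, 0.99, 99)   # normalized height within the cloud layer
eta = normalized_mass_flux(sigma)
```

Varying p and q shifts the level of maximum mass flux, which is one way such a profile can be perturbed stochastically.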

  14. Stochastic parameterizing manifolds and non-Markovian reduced equations stochastic manifolds for nonlinear SPDEs II

    CERN Document Server

    Chekroun, Mickaël D; Wang, Shouhong

    2015-01-01

    In this second volume, a general approach is developed to provide approximate parameterizations of the "small" scales by the "large" ones for a broad class of stochastic partial differential equations (SPDEs). This is accomplished via the concept of parameterizing manifolds (PMs), which are stochastic manifolds that improve, for a given realization of the noise and in mean-square error, the partial knowledge of the full SPDE solution when compared to its projection onto some resolved modes. Backward-forward systems are designed to give access to such PMs in practice. The key idea consists of representing the modes with high wave numbers as a pullback limit depending on the time history of the modes with low wave numbers. Non-Markovian stochastic reduced systems are then derived based on such a PM approach. The reduced systems take the form of stochastic differential equations involving random coefficients that convey memory effects. The theory is illustrated on a stochastic Burgers-type equation.

  15. submitter Data-driven RBE parameterization for helium ion beams

    CERN Document Server

    Mairani, A; Dokic, I; Valle, S M; Tessonnier, T; Galm, R; Ciocca, M; Parodi, K; Ferrari, A; Jäkel, O; Haberer, T; Pedroni, P; Böhlen, T T

    2016-01-01

    Helium ion beams are expected to be available again in the near future for clinical use. A suitable formalism to obtain relative biological effectiveness (RBE) values for treatment planning (TP) studies is needed. In this work we developed a data-driven RBE parameterization based on published in vitro experimental values. The RBE parameterization has been developed within the framework of the linear-quadratic (LQ) model as a function of the helium linear energy transfer (LET), dose and the tissue specific parameter ${{(\\alpha /\\beta )}_{\\text{ph}}}$ of the LQ model for the reference radiation. Analytic expressions are provided, derived from the collected database, describing the $\\text{RB}{{\\text{E}}_{\\alpha}}={{\\alpha}_{\\text{He}}}/{{\\alpha}_{\\text{ph}}}$ and ${{\\text{R}}_{\\beta}}={{\\beta}_{\\text{He}}}/{{\\beta}_{\\text{ph}}}$ ratios as a function of LET. Calculated RBE values at 2 Gy photon dose and at 10% survival ($\\text{RB}{{\\text{E}}_{10}}$ ) are compared with the experimental ones. Pearson's correlati...
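The LQ-model algebra behind an RBE at a fixed survival level can be sketched as follows. The ratios RBE_α and R_β are exactly the quantities the paper parameterizes as functions of LET; the numeric values below are illustrative placeholders, not the paper's fit, and the inversion of S = exp(-(αD + βD²)) for the dose is standard.

```python
import math

def dose_for_survival(alpha, beta, survival):
    """Invert the LQ model S = exp(-(alpha*D + beta*D^2)) for the dose D."""
    e = -math.log(survival)
    return (-alpha + math.sqrt(alpha**2 + 4.0 * beta * e)) / (2.0 * beta)

def rbe_at_survival(alpha_ph, beta_ph, rbe_alpha, r_beta, survival=0.10):
    """RBE at a given survival level: ratio of photon to helium dose.
    rbe_alpha = alpha_He/alpha_ph and r_beta = beta_He/beta_ph are the
    LET-dependent ratios the paper derives from in vitro data."""
    alpha_he = rbe_alpha * alpha_ph
    beta_he = r_beta * beta_ph
    d_ph = dose_for_survival(alpha_ph, beta_ph, survival)
    d_he = dose_for_survival(alpha_he, beta_he, survival)
    return d_ph / d_he

# Illustrative numbers only: (alpha/beta)_ph = 2 Gy tissue, RBE_alpha = 1.6
rbe10 = rbe_at_survival(alpha_ph=0.2, beta_ph=0.1, rbe_alpha=1.6, r_beta=1.0)
```

With these placeholder inputs the 10%-survival RBE comes out a little above 1.1, as expected when only the alpha term is enhanced.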

  16. Parameterized neural networks for high-energy physics

    Energy Technology Data Exchange (ETDEWEB)

    Baldi, Pierre; Sadowski, Peter [University of California, Department of Computer Science, Irvine, CA (United States); Cranmer, Kyle [NYU, Department of Physics, New York, NY (United States); Faucett, Taylor; Whiteson, Daniel [University of California, Department of Physics and Astronomy, Irvine, CA (United States)

    2016-05-15

    We investigate a new structure for machine learning classifiers built with neural networks and applied to problems in high-energy physics by expanding the inputs to include not only measured features but also physics parameters. The physics parameters represent a smoothly varying learning task, and the resulting parameterized classifier can smoothly interpolate between them and replace sets of classifiers trained at individual values. This simplifies the training process and gives improved performance at intermediate values, even for complex problems requiring deep learning. Applications include tools parameterized in terms of theoretical model parameters, such as the mass of a particle, which allow for a single network to provide improved discrimination across a range of masses. This concept is simple to implement and allows for optimized interpolatable results. (orig.)
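The core idea of the abstract, appending the physics parameter to the measured features so one classifier covers all parameter values and interpolates to unseen ones, can be caricatured with a seeded toy model. The paper uses deep neural networks; the linear logistic classifier and the synthetic data below are stand-ins for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "events": one measured feature x; signal is centered on the physics
# parameter theta (a stand-in for, e.g., a particle mass), background is not.
theta = rng.choice([1.0, 2.0, 3.0], size=4000)
x_sig = rng.normal(theta[:2000], 0.5)                 # signal events
x_bkg = rng.normal(0.0, 1.0, size=2000)               # background events
X = np.column_stack([np.r_[x_sig, x_bkg], theta])     # inputs = (x, theta)
y = np.r_[np.ones(2000), np.zeros(2000)]

# Tiny logistic "network" trained by gradient descent.
Xb = np.column_stack([np.ones(len(X)), X])
w = np.zeros(3)
for _ in range(3000):
    p = 1.0 / (1.0 + np.exp(-Xb @ w))
    w -= 0.1 * Xb.T @ (p - y) / len(y)

def predict(x, th):
    return 1.0 / (1.0 + np.exp(-(w[0] + w[1] * x + w[2] * th)))

# Evaluate at an intermediate theta = 1.5 never seen during training:
# the parameterized classifier still separates signal from background.
acc = 0.5 * ((predict(rng.normal(1.5, 0.5, 1000), 1.5) > 0.5).mean()
             + (predict(rng.normal(0.0, 1.0, 1000), 1.5) < 0.5).mean())
```

The point is structural: because theta is an input, there is one model rather than one per mass hypothesis, and intermediate values need no retraining.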

  17. IR OPTICS MEASUREMENT WITH LINEAR COUPLING'S ACTION-ANGLE PARAMETERIZATION

    International Nuclear Information System (INIS)

    LUO, Y.; BAI, M.; PILAT, R.; SATOGATA, T.; TRBOJEVIC, D.

    2005-01-01

    A parameterization of linear coupling in action-angle coordinates is convenient for analytical calculations and interpretation of turn-by-turn (TBT) beam position monitor (BPM) data. We demonstrate how to use this parameterization to extract the twiss and coupling parameters in interaction regions (IRs), using BPMs on each side of the long IR drift region. Example TBT BPM data were acquired at the Relativistic Heavy Ion Collider (RHIC), using an AC dipole to excite a single eigenmode. Besides the full treatment, a fast estimate of beta*, the beta function at the interaction point (IP), is provided, along with the phase advance between these BPMs. We also calculate and measure the waist of the beta function and the local optics.
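The fast beta* estimate mentioned above rests on standard drift-space optics, sketched here under the simplifying assumption that the beta-function waist sits at the IP and the two BPMs sit symmetrically at a distance L on either side of it:

```latex
\beta(s) = \beta_w + \frac{(s - s_w)^2}{\beta_w}, \qquad
\Delta\psi = \int_{-L}^{L} \frac{ds}{\beta(s)} = 2\arctan\frac{L}{\beta_w}
\quad\Rightarrow\quad
\beta^* \simeq \frac{L}{\tan(\Delta\psi/2)} .
```

A measured phase advance between the IR BPMs thus yields beta* directly; locating the waist s_w when it is displaced from the IP requires the fuller treatment described in the record.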

  18. Firefly Algorithm for Polynomial Bézier Surface Parameterization

    Directory of Open Access Journals (Sweden)

    Akemi Gálvez

    2013-01-01

    reality, medical imaging, computer graphics, computer animation, and many others. Very often, the preferred approximating surface is polynomial, usually described in parametric form. This leads to the problem of determining suitable parametric values for the data points, the so-called surface parameterization. In real-world settings, data points are generally irregularly sampled and subjected to measurement noise, leading to a very difficult nonlinear continuous optimization problem, unsolvable with standard optimization techniques. This paper solves the parameterization problem for polynomial Bézier surfaces by applying the firefly algorithm, a powerful nature-inspired metaheuristic algorithm introduced recently to address difficult optimization problems. The method has been successfully applied to some illustrative examples of open and closed surfaces, including shapes with singularities. Our results show that the method performs very well, being able to yield the best approximating surface with a high degree of accuracy.
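A bare-bones firefly algorithm, the metaheuristic named above, can be sketched as follows. The objective here is a convex stand-in (a sphere function); in the paper the objective is instead the squared fitting error of a Bézier surface over the candidate parameter values. The population size, attractiveness constants, and annealing schedule are illustrative choices.

```python
import numpy as np

def firefly_minimize(f, dim=2, n=25, iters=200, beta0=1.0, gamma=1.0,
                     alpha=0.2, bounds=(-5.0, 5.0), seed=0):
    """Minimal firefly algorithm: every firefly moves toward each brighter
    (lower-cost) one with attractiveness beta0*exp(-gamma*r^2), plus a
    random step that is annealed over the iterations."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    x = rng.uniform(lo, hi, size=(n, dim))
    fit = np.array([f(p) for p in x])
    for t in range(iters):
        step = alpha * (0.97 ** t)              # shrink the random walk
        for i in range(n):
            for j in range(n):
                if fit[j] < fit[i]:             # firefly j is brighter
                    r2 = np.sum((x[i] - x[j]) ** 2)
                    attract = beta0 * np.exp(-gamma * r2)
                    x[i] += attract * (x[j] - x[i]) \
                        + step * rng.uniform(-0.5, 0.5, dim)
                    x[i] = np.clip(x[i], lo, hi)
                    fit[i] = f(x[i])
    best = int(np.argmin(fit))
    return x[best], fit[best]

xbest, fbest = firefly_minimize(lambda p: np.sum(p ** 2))
```

For the surface-parameterization problem, each firefly would encode one candidate assignment of parameter values to the data points rather than a 2-D position.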

  19. Parameterized neural networks for high-energy physics

    International Nuclear Information System (INIS)

    Baldi, Pierre; Sadowski, Peter; Cranmer, Kyle; Faucett, Taylor; Whiteson, Daniel

    2016-01-01

    We investigate a new structure for machine learning classifiers built with neural networks and applied to problems in high-energy physics by expanding the inputs to include not only measured features but also physics parameters. The physics parameters represent a smoothly varying learning task, and the resulting parameterized classifier can smoothly interpolate between them and replace sets of classifiers trained at individual values. This simplifies the training process and gives improved performance at intermediate values, even for complex problems requiring deep learning. Applications include tools parameterized in terms of theoretical model parameters, such as the mass of a particle, which allow for a single network to provide improved discrimination across a range of masses. This concept is simple to implement and allows for optimized interpolatable results. (orig.)

  20. Elastic FWI for VTI media: A synthetic parameterization study

    KAUST Repository

    Kamath, Nishant

    2016-09-06

    A major challenge for multiparameter full-waveform inversion (FWI) is the inherent trade-offs (or cross-talk) between model parameters. Here, we perform FWI of multicomponent data generated for a synthetic VTI (transversely isotropic with a vertical symmetry axis) model based on a geologic section of the Valhall field. A horizontal displacement source, which excites intensive shear waves in the conventional offset range, helps provide more accurate updates to the SV-wave vertical velocity. We test three model parameterizations, which exhibit different radiation patterns and, therefore, create different parameter trade-offs. The results show that the choice of parameterization for FWI depends on the availability of long-offset data, the quality of the initial model for the anisotropy coefficients, and the parameter that needs to be resolved with the highest accuracy.

  1. Parameterization of phase change of water in a mesoscale model

    Energy Technology Data Exchange (ETDEWEB)

    Levkov, L; Eppel, D; Grassl, H

    1987-01-01

    A parameterization scheme for the phase change of water is suggested for use in the 3-D numerical nonhydrostatic model GESIMA. The microphysical formulation follows the so-called bulk technique. With this procedure the net production rates in the balance equations for water and potential temperature are given both for the liquid and the ice phase. Convectively stable as well as convectively unstable mesoscale systems are considered. With 2 figs.

  2. Robust parameterization of elastic and absorptive electron atomic scattering factors

    International Nuclear Information System (INIS)

    Peng, L.M.; Ren, G.; Dudarev, S.L.; Whelan, M.J.

    1996-01-01

    A robust algorithm and computer program have been developed for the parameterization of elastic and absorptive electron atomic scattering factors. The algorithm is based on a combined modified simulated-annealing and least-squares method, and the computer program works well for fitting both elastic and absorptive atomic scattering factors with five Gaussians. As an application of this program, the elastic electron atomic scattering factors have been parameterized for all neutral atoms and for s up to 6 Å⁻¹. Error analysis shows that the present results are considerably more accurate than the previous analytical fits in terms of the mean square value of the deviation between the numerical and fitted scattering factors. Parameterization for absorptive atomic scattering factors has been made for 17 important materials with the zinc blende structure over the temperature range 1 to 1000 K, where appropriate, and for temperature ranges for which accurate Debye-Waller factors are available. For other materials, the parameterization of the absorptive electron atomic scattering factors can be made using the program by supplying the atomic number of the element, the Debye-Waller factor and the acceleration voltage. For ions or when more accurate numerical results for neutral atoms are available, the program can read in the numerical values of the elastic scattering factors and return the parameters for both the elastic and absorptive scattering factors. The computer routines developed have been tested both on computer workstations and desktop PC computers, and will be made freely available via electronic mail or on floppy disk upon request. (orig.)
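The five-Gaussian form f(s) ≈ Σᵢ aᵢ exp(-bᵢ s²) can be illustrated with a simplified fit. The paper optimizes both the amplitudes aᵢ and the widths bᵢ via simulated annealing plus least squares; the sketch below fixes an assumed set of widths and solves a plain linear least-squares problem for the amplitudes, against a made-up smooth target standing in for a numerical scattering factor.

```python
import numpy as np

def fit_five_gaussians(s, f, b=(0.05, 0.2, 0.8, 3.2, 12.8)):
    """Fit f(s) ~ sum_i a_i * exp(-b_i * s^2) for the amplitudes a_i,
    with the widths b_i held fixed (hypothetical values; the paper
    optimizes them as well)."""
    G = np.exp(-np.outer(s ** 2, np.array(b)))   # design matrix (len(s), 5)
    a, *_ = np.linalg.lstsq(G, f, rcond=None)
    return a, G @ a

# Smooth stand-in target, sampled over s in [0, 6] 1/Angstrom as in the paper.
s = np.linspace(0.0, 6.0, 121)
f_target = 1.0 / (1.0 + s ** 2)
a, f_fit = fit_five_gaussians(s, f_target)
rms = np.sqrt(np.mean((f_fit - f_target) ** 2))
```

Even with fixed widths, five Gaussians reproduce a smooth monotone decay closely; letting the widths vary, as the paper does, tightens the fit further.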

  3. Understanding and Improving Ocean Mixing Parameterizations for modeling Climate Change

    Science.gov (United States)

    Howard, A. M.; Fells, J.; Clarke, J.; Cheng, Y.; Canuto, V.; Dubovikov, M. S.

    2017-12-01

    Climate is vital. Earth is only habitable due to the atmosphere and oceans' distribution of energy. Our greenhouse gas emissions shift the overall balance between absorbed and emitted radiation, causing global warming. How much of these emissions is stored in the ocean vs. entering the atmosphere to cause warming, and how the extra heat is distributed, depends on atmosphere and ocean dynamics, which we must understand to know the risks of both progressive climate change and climate variability, which affect us all in many ways including extreme weather, floods, droughts, sea-level rise and ecosystem disruption. Citizens must be informed to make decisions such as "business as usual" vs. mitigating emissions to avert catastrophe. Simulations of climate change provide needed knowledge but in turn need reliable parameterizations of key physical processes, including ocean mixing, which greatly impacts the transport and storage of heat and dissolved CO2. The turbulence group at NASA-GISS seeks to use physical theory to improve parameterizations of ocean mixing, including small-scale convective, shear-driven, double-diffusive, internal-wave and tidally driven vertical mixing, as well as mixing by submesoscale eddies, and lateral mixing along isopycnals by mesoscale eddies. Medgar Evers undergraduates aid NASA research while learning climate science and developing computer and math skills. We write our own programs in MATLAB and FORTRAN to visualize and process output of ocean simulations, including producing statistics to help judge the impacts of different parameterizations on fidelity in reproducing realistic temperatures and salinities, diffusivities and turbulent power. The results can help upgrade the parameterizations. Students are introduced to complex system modeling and gain a deeper appreciation of climate science and programming skills, while furthering climate science. We are incorporating climate projects into the Medgar Evers College curriculum. The PI is both a member of the turbulence group at

  4. A revised design approach of the attachment system for the ITER EU-HCPB-TBM based on a central cylindrical connection element

    International Nuclear Information System (INIS)

    Zeile, Christian; Neuberger, Heiko

    2012-01-01

    Highlights: ► Design of an attachment system based on a cylinder to connect TBM and shield. ► Attachment system has to cope with high EM loads and different thermal expansions. ► Stiff design and central position fulfill these requirements. ► Transient thermal-structural analyses confirm compliance of design with design codes. - Abstract: The EU-Helium Cooled Pebble Bed Test Blanket Module (HCPB-TBM), which is located inside an equatorial port plug, is attached to the shield by an attachment system. The design of the attachment system has to fulfill two conflicting requirements. On the one hand, it has to transfer the high electromagnetic forces acting on the TBM to the shield and on the other hand, it has to compensate the different thermal expansions between the shield and the back plate of the TBM. The recent design approach of the attachment system consists of a hollow cylinder located at the center of the back plate. This design combines two advantages: a simple geometry and correspondingly low fabrication effort and the central location where the differential strain between back plate and shield is minimal. Static and transient thermal-structural analyses of the most demanding load cases, a fast vertical displacement event type II and the operation state tritium outgassing, have been performed to evaluate the design and confirm the compliance with the relevant design codes. A welded connection of the attachment system to the TBM back plate and a bolted connection in combination with a splined shaft is proposed for the shield side because of the dissimilar materials.

  5. Multisite Evaluation of APEX for Water Quality: I. Best Professional Judgment Parameterization.

    Science.gov (United States)

    Baffaut, Claire; Nelson, Nathan O; Lory, John A; Senaviratne, G M M M Anomaa; Bhandari, Ammar B; Udawatta, Ranjith P; Sweeney, Daniel W; Helmers, Matt J; Van Liew, Mike W; Mallarino, Antonio P; Wortmann, Charles S

    2017-11-01

    The Agricultural Policy Environmental eXtender (APEX) model is capable of estimating edge-of-field water, nutrient, and sediment transport and is used to assess the environmental impacts of management practices. The current practice is to fully calibrate the model for each site simulation, a task that requires resources and data not always available. The objective of this study was to compare model performance for flow, sediment, and phosphorus transport under two parameterization schemes: a best professional judgment (BPJ) parameterization based on readily available data and a fully calibrated parameterization based on site-specific soil, weather, event flow, and water quality data. The analysis was conducted using 12 datasets at four locations representing poorly drained soils and row-crop production under different tillage systems. Model performance was based on the Nash-Sutcliffe efficiency (NSE), the coefficient of determination (R²) and the regression slope between simulated and measured annualized loads across all site years. Although the BPJ model performance for flow was acceptable (NSE = 0.7) at the annual time step, calibration improved it (NSE = 0.9). Acceptable simulation of sediment and total phosphorus transport (NSE = 0.5 and 0.9, respectively) was obtained only after full calibration at each site. Given the unacceptable performance of the BPJ approach, uncalibrated use of APEX for planning or management purposes may be misleading. Model calibration with water quality data prior to using APEX for simulating sediment and total phosphorus loss is essential. Copyright © by the American Society of Agronomy, Crop Science Society of America, and Soil Science Society of America, Inc.
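The Nash-Sutcliffe efficiency used above is a standard goodness-of-fit statistic, easy to state directly; the small sample series below is invented purely to exercise the function.

```python
import numpy as np

def nash_sutcliffe(observed, simulated):
    """Nash-Sutcliffe efficiency: 1 minus the ratio of residual variance
    to the variance of the observations. NSE = 1 is a perfect fit;
    NSE = 0 means the model is no better than predicting the observed mean."""
    observed = np.asarray(observed, dtype=float)
    simulated = np.asarray(simulated, dtype=float)
    return 1.0 - np.sum((observed - simulated) ** 2) / np.sum(
        (observed - observed.mean()) ** 2)

# Invented annualized loads, for illustration only.
obs = np.array([3.1, 4.0, 2.2, 5.6, 4.4])
```

NSE can be arbitrarily negative, which is why thresholds such as the 0.5 used for sediment in this study are treated as minimum acceptability criteria.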

  6. Parameterized Shower Simulation in Lelaps: a Comparison with Geant4

    International Nuclear Information System (INIS)

    Langeveld, Willy G.J.

    2003-01-01

    The detector simulation toolkit Lelaps[1] simulates electromagnetic and hadronic showers in calorimetric detector elements of high-energy particle detectors using a parameterization based on the algorithms originally developed by Grindhammer and Peters[2] and Bock et al.[3]. The primary motivations of the present paper are to verify the implementation of the parameterization, to explore regions of energy where the parameterization is valid and to serve as a basis for further improvement of the algorithm. To this end, we compared the Lelaps simulation to a detailed simulation provided by Geant4[4]. A number of different calorimeters, both electromagnetic and hadronic, were implemented in both programs. Longitudinal and radial shower profiles and their fluctuations were obtained from Geant4 over a wide energy range and compared with those obtained from Lelaps. Generally the longitudinal shower profiles are found to be in good agreement in a large part of the energy range, with poorer results at energies below about 300 MeV. Radial profiles agree well in homogeneous detectors, but are somewhat deficient in segmented ones. These deficiencies are discussed
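The average longitudinal profile underlying the Grindhammer-Peters style of parameterization is a gamma distribution in the shower depth; the shape and scaling parameters below are illustrative stand-ins, not fitted values from either Lelaps or Geant4.

```python
import math

def longitudinal_profile(t, energy, a, b):
    """Average longitudinal shower profile as a gamma distribution in the
    depth t (radiation lengths):
        dE/dt = E * (b*t)^(a-1) * b * exp(-b*t) / Gamma(a).
    The shower maximum sits at t_max = (a - 1) / b. The parameters a and b
    here are hypothetical; in the parameterization they depend on the
    particle energy and the material."""
    return energy * (b * t) ** (a - 1.0) * b * math.exp(-b * t) / math.gamma(a)

# Because the profile is a normalized pdf scaled by E, integrating it over
# depth should recover the full deposited energy (midpoint rule).
E, a, b = 10.0, 4.0, 0.5
dt = 0.01
total = sum(longitudinal_profile(i * dt + dt / 2, E, a, b)
            for i in range(6000)) * dt
```

Fluctuations about this average profile, which the comparison in the record also examines, are introduced on top of this deterministic shape.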

  7. Air quality modeling: evaluation of chemical and meteorological parameterizations

    International Nuclear Information System (INIS)

    Kim, Youngseob

    2011-01-01

    The influence of chemical mechanisms and meteorological parameterizations on pollutant concentrations calculated with an air quality model is studied. The influence of the differences between two gas-phase chemical mechanisms on the formation of ozone and aerosols in Europe is low on average. For ozone, the large local differences are mainly due to the uncertainty associated with the kinetics of nitrogen monoxide (NO) oxidation reactions on the one hand and the representation of different pathways for the oxidation of aromatic compounds on the other hand. The aerosol concentrations are mainly influenced by the selection of all major precursors of secondary aerosols and the explicit treatment of chemical regimes corresponding to the nitrogen oxides (NOx) levels. The influence of the meteorological parameterizations on the concentrations of aerosols and their vertical distribution is evaluated over the Paris region in France by comparison to lidar data. The influence of the parameterization of the dynamics in the atmospheric boundary layer is important; however, it is the use of an urban canopy model that improves significantly the modeling of the pollutant vertical distribution (author)

  8. Statistical dynamical subgrid-scale parameterizations for geophysical flows

    International Nuclear Information System (INIS)

    O'Kane, T J; Frederiksen, J S

    2008-01-01

    Simulations of both atmospheric and oceanic circulations at given finite resolutions are strongly dependent on the form and strengths of the dynamical subgrid-scale parameterizations (SSPs) and in particular are sensitive to subgrid-scale transient eddies interacting with the retained scale topography and the mean flow. In this paper, we present numerical results for SSPs of the eddy-topographic force, stochastic backscatter, eddy viscosity and eddy-mean field interaction using an inhomogeneous statistical turbulence model based on a quasi-diagonal direct interaction approximation (QDIA). Although the theoretical description on which our model is based is for general barotropic flows, we specifically focus on global atmospheric flows where large-scale Rossby waves are present. We compare and contrast the closure-based results with an important earlier heuristic SSP of the eddy-topographic force, based on maximum entropy or statistical canonical equilibrium arguments, developed specifically for general ocean circulation models (Holloway 1992 J. Phys. Oceanogr. 22 1033-46). Our results demonstrate that where strong zonal flows and Rossby waves are present, such as in the atmosphere, maximum entropy arguments are insufficient to accurately parameterize the subgrid contributions due to eddy-eddy, eddy-topographic and eddy-mean field interactions. We contrast our atmospheric results with findings for the oceans. Our study identifies subgrid-scale interactions that are currently not parameterized in numerical atmospheric climate models, which may lead to systematic defects in the simulated circulations.

  9. An efficient algorithmic approach for mass spectrometry-based disulfide connectivity determination using multi-ion analysis

    Directory of Open Access Journals (Sweden)

    Yen Ten-Yang

    2011-02-01

    Background: Determining the disulfide (S-S) bond pattern in a protein is often crucial for understanding its structure and function. In recent research, mass spectrometry (MS)-based analysis has been applied to this problem following protein digestion under both partial-reduction and non-reduction conditions. However, this paradigm still awaits solutions to certain algorithmic problems, fundamental amongst which is the efficient matching of an exponentially growing set of putative S-S bonded structural alternatives to the large amounts of experimental spectrometric data. Current methods circumvent this challenge primarily through simplifications, such as by assuming only the occurrence of certain ion types (b-ions and y-ions) that predominate in the more popular dissociation methods, such as collision-induced dissociation (CID). Unfortunately, this can adversely impact the quality of results. Method: We present an algorithmic approach to this problem that can, with high computational efficiency, analyze multiple ion types (a, b, b°, b*, c, x, y, y°, y*, and z) and deal with complex bonding topologies, such as inter/intra bonding involving more than two peptides. The proposed approach combines an approximation-algorithm-based search formulation with data-driven parameter estimation. This formulation considers only those regions of the search space where the correct solution resides with a high likelihood. Putative disulfide bonds thus obtained are finally combined in a globally consistent pattern to yield the overall disulfide bonding topology of the molecule. Additionally, each bond is associated with a confidence score, which aids in interpretation and assimilation of the results. Results: The method was tested on nine different eukaryotic Glycosyltransferases possessing disulfide bonding topologies of varying complexity. Its performance was found to be characterized by high efficiency (in terms of time and the fraction of search space
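The elementary matching step at the heart of such methods can be sketched in a toy form: enumerate candidate peptide pairings, compute the mass of each putative disulfide-linked species (the sum of the peptide masses minus two hydrogens lost on bridge formation), and keep pairs matching an observed mass within a tolerance. The peptide masses below are hypothetical, and this sketch omits the paper's approximation-algorithm pruning and multi-ion fragment scoring entirely.

```python
from itertools import combinations

H_MASS = 1.00794  # average mass of hydrogen; an S-S bridge loses 2 H

def match_disulfide_pairs(peptide_masses, observed_masses, tol=0.05):
    """Toy matching step: enumerate peptide pairs, compute the mass of each
    putative S-S linked species, and keep pairs matching an observed mass
    within tol Da. The real method prunes this exponential search and
    scores ten fragment-ion types per candidate."""
    matches = []
    for (i, mi), (j, mj) in combinations(peptide_masses.items(), 2):
        linked = mi + mj - 2 * H_MASS
        for obs in observed_masses:
            if abs(linked - obs) <= tol:
                matches.append((i, j, linked))
    return matches

# Hypothetical tryptic peptide masses (Da) and one observed precursor mass.
peptides = {"P1": 1045.53, "P2": 887.42, "P3": 1502.71}
observed = [1930.93]   # close to mass(P1) + mass(P2) - 2H
hits = match_disulfide_pairs(peptides, observed)
```

The combinatorial blow-up is visible even here: with n peptides there are O(n²) pairwise candidates before intra-peptide and multi-peptide topologies are considered, which is what motivates the approximation-algorithm formulation.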

  10. Object detection approach using generative sparse, hierarchical networks with top-down and lateral connections for combining texture/color detection and shape/contour detection

    Science.gov (United States)

    Paiton, Dylan M.; Kenyon, Garrett T.; Brumby, Steven P.; Schultz, Peter F.; George, John S.

    2015-07-28

    An approach to detecting objects in an image dataset may combine texture/color detection, shape/contour detection, and/or motion detection using sparse, generative, hierarchical models with lateral and top-down connections. A first independent representation of objects in an image dataset may be produced using a color/texture detection algorithm. A second independent representation of objects in the image dataset may be produced using a shape/contour detection algorithm. A third independent representation of objects in the image dataset may be produced using a motion detection algorithm. The first, second, and third independent representations may then be combined into a single coherent output using a combinatorial algorithm.

  11. Object detection approach using generative sparse, hierarchical networks with top-down and lateral connections for combining texture/color detection and shape/contour detection

    Energy Technology Data Exchange (ETDEWEB)

    Paiton, Dylan M.; Kenyon, Garrett T.; Brumby, Steven P.; Schultz, Peter F.; George, John S.

    2016-10-25

    An approach to detecting objects in an image dataset may combine texture/color detection, shape/contour detection, and/or motion detection using sparse, generative, hierarchical models with lateral and top-down connections. A first independent representation of objects in an image dataset may be produced using a color/texture detection algorithm. A second independent representation of objects in the image dataset may be produced using a shape/contour detection algorithm. A third independent representation of objects in the image dataset may be produced using a motion detection algorithm. The first, second, and third independent representations may then be combined into a single coherent output using a combinatorial algorithm.

  12. The Connection Between Forms of Guidance for Inquiry-Based Learning and the Communicative Approaches Applied—a Case Study in the Context of Pre-service Teachers

    Science.gov (United States)

    Lehtinen, Antti; Lehesvuori, Sami; Viiri, Jouni

    2017-09-01

    Recent research has argued that inquiry-based science learning should be guided by providing the learners with support. The research on guidance for inquiry-based learning has concentrated on how providing guidance affects learning through inquiry. How guidance for inquiry-based learning could promote learning about inquiry (e.g. epistemic practices) is in need of exploration. A dialogic approach to classroom communication and pedagogical link-making offers possibilities for learners to acquire these practices. The focus of this paper is to analyse the role of different forms of guidance for inquiry-based learning in shaping the communicative approach applied in classrooms. The data for the study come from an inquiry-based physics lesson implemented by a group of five pre-service primary science teachers with a class of sixth graders. The lesson was video recorded and the discussions were transcribed. The data were analysed by applying two existing frameworks—one for the forms of guidance provided and another for the communicative approaches applied. The findings illustrate that providing non-specific forms of guidance, such as prompts, caused the communicative approach to be dialogic. On the other hand, providing the learners with specific forms of guidance, such as explanations, shifted the communication to be more authoritative. These results imply that different forms of guidance provided by pre-service teachers can affect the communicative approach applied in inquiry-based science lessons, which affects the possibilities learners are given to connect their existing ideas to the scientific view. Future research should focus on validating these results by also analysing in-service teachers' lessons.

  13. A parameterization method and application in breast tomosynthesis dosimetry

    Energy Technology Data Exchange (ETDEWEB)

    Li, Xinhua; Zhang, Da; Liu, Bob [Division of Diagnostic Imaging Physics and Webster Center for Advanced Research and Education in Radiation, Department of Radiology, Massachusetts General Hospital, Boston, Massachusetts 02114 (United States)

    2013-09-15

    Purpose: To present a parameterization method based on singular value decomposition (SVD), and to provide analytical parameterizations of the mean glandular dose (MGD) conversion factors from eight references for evaluating breast tomosynthesis dose in the Mammography Quality Standards Act (MQSA) protocol and in the UK, European, and IAEA dosimetry protocols. Methods: The MGD conversion factor is usually listed in lookup tables for factors such as beam quality, breast thickness, breast glandularity, and projection angle. The authors analyzed multiple sets of MGD conversion factors from the Hologic Selenia Dimensions quality control manual and seven previous papers. Each data set was parameterized using a one- to three-dimensional polynomial function of 2–16 terms. Variable substitution was used to improve accuracy. A least-squares fit was conducted using the SVD. Results: The differences between the originally tabulated MGD conversion factors and the results computed using the parameterization algorithms were (a) 0.08%–0.18% on average and 1.31% maximum for the Selenia Dimensions quality control manual, (b) 0.09%–0.66% on average and 2.97% maximum for the published data by Dance et al. [Phys. Med. Biol. 35, 1211–1219 (1990); ibid. 45, 3225–3240 (2000); ibid. 54, 4361–4372 (2009); ibid. 56, 453–471 (2011)], (c) 0.74%–0.99% on average and 3.94% maximum for the published data by Sechopoulos et al. [Med. Phys. 34, 221–232 (2007); J. Appl. Clin. Med. Phys. 9, 161–171 (2008)], and (d) 0.66%–1.33% on average and 2.72% maximum for the published data by Feng and Sechopoulos [Radiology 263, 35–42 (2012)], excluding one sample in (d) that does not follow the trends in the published data table. Conclusions: A flexible parameterization method is presented in this paper, and was applied to breast tomosynthesis dosimetry. The resultant data offer easy and accurate computation of MGD conversion factors for evaluating mean glandular breast dose in the MQSA
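
    The SVD-based least-squares fit described above can be sketched in a few lines. This is a generic illustration, not the paper's actual polynomial or MGD data: the thickness grid and conversion-factor values below are synthetic placeholders.

    ```python
    import numpy as np

    # Synthetic stand-in for a tabulated MGD conversion factor c(t) versus
    # breast thickness t (cm); the real tables also span beam quality,
    # glandularity, and projection angle.
    t = np.linspace(2.0, 8.0, 13)
    c = 0.35 + 0.04 * t - 0.002 * t**2

    # Design matrix for a 3-term polynomial c(t) ~ a0 + a1*t + a2*t^2
    A = np.vander(t, 3, increasing=True)

    # Least-squares coefficients via the SVD of A (the same decomposition
    # underlying np.linalg.lstsq)
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    coeffs = Vt.T @ ((U.T @ c) / s)

    # Maximum relative error of the parameterization against the "table" (%)
    max_err = np.max(np.abs(A @ coeffs - c) / c) * 100.0
    ```

    Because the synthetic table is itself quadratic, the fit here is exact; for real tables the paper reports sub-percent average errors.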

  14. Shallow cumuli ensemble statistics for development of a stochastic parameterization

    Science.gov (United States)

    Sakradzija, Mirjana; Seifert, Axel; Heus, Thijs

    2014-05-01

    According to the conventional deterministic approach to the parameterization of moist convection in numerical atmospheric models, a given large-scale forcing produces a unique response from the unresolved convective processes. This representation leaves out the small-scale variability of convection: as is known from empirical studies of deep and shallow convective cloud ensembles, there is a whole distribution of sub-grid states corresponding to a given large-scale forcing. Moreover, this distribution gets broader with increasing model resolution. This behavior is also consistent with our theoretical understanding of a coarse-grained nonlinear system. We propose an approach to represent the variability of the unresolved shallow-convective states, including the dependence of the spread and shape of the sub-grid state distribution on the model horizontal resolution. Starting from the Gibbs canonical ensemble theory, Craig and Cohen (2006) developed a theory for the fluctuations in a deep convective ensemble. The micro-states of a deep convective cloud ensemble are characterized by the cloud-base mass flux, which, according to the theory, is exponentially distributed (Boltzmann distribution). Following their work, we study the shallow cumulus ensemble statistics and the distribution of the cloud-base mass flux. We employ a Large-Eddy Simulation (LES) model and a cloud tracking algorithm, followed by conditional sampling of clouds at the cloud-base level, to retrieve information about the individual cloud life cycles and the cloud ensemble as a whole. In the case of a shallow cumulus cloud ensemble, the distribution of micro-states is a generalized exponential distribution. Based on these empirical and theoretical findings, a stochastic model has been developed to simulate the shallow convective cloud ensemble and to test the convective ensemble theory. The stochastic model simulates a compound random process, with the number of convective elements drawn from a
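
    The compound random process mentioned at the end can be illustrated with a toy sampler, assuming (as in the Craig and Cohen theory) an exponentially distributed per-cloud cloud-base mass flux and a Poisson-distributed cloud count; all numerical values here are illustrative, not taken from the study.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    mean_m = 1.0e7   # mean per-cloud cloud-base mass flux (kg/s), illustrative
    mean_N = 50      # mean number of clouds per grid box, illustrative

    def total_mass_flux(rng, mean_N, mean_m):
        # Total sub-grid mass flux: a Poisson number of clouds, each drawing
        # its mass flux from an exponential (Boltzmann) distribution.
        n = rng.poisson(mean_N)
        return rng.exponential(mean_m, size=n).sum()

    samples = np.array([total_mass_flux(rng, mean_N, mean_m)
                        for _ in range(20000)])

    # For a compound Poisson-exponential process the relative spread is
    # sqrt(2/<N>), so variability grows as the grid box (and hence <N>) shrinks.
    rel_spread = samples.std() / samples.mean()
    ```

    Halving the grid-box area roughly halves the expected cloud count, which increases the relative spread by sqrt(2) — the resolution dependence the abstract describes.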

  15. Integrating Fisheries Dependent and Independent Approaches to assess Fisheries, Abundance, Diversity, Distribution and Genetic Connectivity of Red Sea Elasmobranch Populations

    KAUST Repository

    Spaet, Julia L.

    2014-05-01

    The Red Sea has long been recognized as a global hotspot of marine biodiversity. Ongoing overfishing, however, is threatening this unique ecosystem, recently leading to the identification of the Red Sea as one of three major hotspots of extinction risk for sharks and rays worldwide. Elasmobranch catches in Saudi Arabian Red Sea waters are unregulated, often misidentified and unrecorded, resulting in a lack of species-specific landings information, which would be vital for the formulation of effective management strategies. Here we employed an integrated approach of fisheries-dependent and -independent survey methods combined with molecular tools to provide biological, ecological and fisheries data to aid in the assessment of the status of elasmobranch populations in the Red Sea. Over the course of two years, we conducted market surveys at the biggest Saudi Arabian fish market in Jeddah. Market landings were dominated by mostly immature individuals, implying both recruitment and growth overfishing. Additionally, we employed baited remote underwater video (BRUVS) and longline surveys along almost the entire length of the Red Sea coast of Saudi Arabia as well as at selected reef systems in Sudan. The comparison of catch per unit effort (CPUE) data for Saudi Arabian Red Sea BRUVS and longline surveys to published data originating from non-Red Sea ocean systems revealed CPUE values several orders of magnitude lower for both survey methods in the Red Sea compared to other locations around the world. Finally, we inferred the regional population structure of four commercially important shark species between the Red Sea and the Western Indian Ocean. We genotyped nearly 2000 individuals at the mitochondrial control region as well as a total of 20 microsatellite loci. Genetic homogeneity could not be rejected for any of the four species across the spatial comparison. Based on high levels of region-wide exploitation, we suggest that, for management purposes, the population

  16. About Connections

    Directory of Open Access Journals (Sweden)

    Kathleen S Rockland

    2015-05-01

    Despite the attention attracted by connectomics, one can lose sight of very real questions concerning what connections are. In the neuroimaging community, structural connectivity is ground truth and an underlying constraint on functional or effective connectivity. It is referenced to underlying anatomy; but, as increasingly remarked, there is a large gap between the wealth of human brain mapping and the relatively scant data on actual anatomical connectivity. Moreover, connections have typically been discussed as pairwise, point x projecting to point y (or to points y and z), or, more recently, in graph-theoretical terms, as nodes or regions and the interconnecting edges. This is a convenient shorthand, but it tends not to capture the richness and nuance of basic anatomical properties as identified in the classic tradition of tracer studies. The present short review accordingly revisits connectional weights, heterogeneity, reciprocity, topography, and hierarchical organization, drawing on concrete examples. The emphasis is on presynaptic long-distance connections, motivated by the intention to probe current assumptions and promote discussion of further progress and synthesis.

  17. Parameterization of a fuzzy classifier for the diagnosis of an industrial process

    International Nuclear Information System (INIS)

    Toscano, R.; Lyonnet, P.

    2002-01-01

    The aim of this paper is to present a classifier based on a fuzzy inference system. For this classifier, we propose a parameterization method which is not necessarily based on iterative training. This approach can be seen as a pre-parameterization, which allows the determination of the rule base and the parameters of the membership functions. We also present a continuous and differentiable version of the previous classifier and suggest an iterative learning algorithm based on a gradient method. An example using the IRIS data set, a benchmark for classification problems, is presented to show the performance of this classifier. Finally, this classifier is applied to the diagnosis of a DC motor, showing the utility of this method. However, in many cases the total knowledge necessary for the synthesis of the fuzzy diagnosis system (FDS) is not, in general, directly available. It must be extracted from an often-considerable mass of information. For this reason, a general methodology for the design of a FDS is presented and illustrated on a non-linear plant

  18. Gravitational wave tests of general relativity with the parameterized post-Einsteinian framework

    International Nuclear Information System (INIS)

    Cornish, Neil; Sampson, Laura; Yunes, Nicolas; Pretorius, Frans

    2011-01-01

    Gravitational wave astronomy has tremendous potential for studying extreme astrophysical phenomena and exploring fundamental physics. The waves produced by binary black hole mergers will provide a pristine environment in which to study strong-field dynamical gravity. Extracting detailed information about these systems requires accurate theoretical models of the gravitational wave signals. If gravity is not described by general relativity, analyses that are based on waveforms derived from Einstein's field equations could result in parameter biases and a loss of detection efficiency. A new class of "parameterized post-Einsteinian" waveforms has been proposed to cover this eventuality. Here, we apply the parameterized post-Einsteinian approach to simulated data from a network of advanced ground-based interferometers and from a future space-based interferometer. Bayesian inference and model selection are used to investigate parameter biases, and to determine the level at which departures from general relativity can be detected. We find that in some cases the parameter biases from assuming the wrong theory can be severe. We also find that gravitational wave observations will beat the existing bounds on deviations from general relativity derived from the orbital decay of binary pulsars by a large margin across a wide swath of parameter space.

  19. Structural test of the parameterized-backbone method for protein design.

    Science.gov (United States)

    Plecs, Joseph J; Harbury, Pehr B; Kim, Peter S; Alber, Tom

    2004-09-03

    Designing new protein folds requires a method for simultaneously optimizing the conformation of the backbone and the side-chains. One approach to this problem is the use of a parameterized backbone, which allows the systematic exploration of families of structures. We report the crystal structure of RH3, a right-handed, three-helix coiled coil that was designed using a parameterized backbone and detailed modeling of core packing. This crystal structure was determined using another rationally designed feature, a metal-binding site that permitted experimental phasing of the X-ray data. RH3 adopted the intended fold, which has not been observed previously in biological proteins. Unanticipated structural asymmetry in the trimer was a principal source of variation within the RH3 structure. The sequence of RH3 differs from that of a previously characterized right-handed tetramer, RH4, at only one position in each 11 amino acid sequence repeat. This close similarity indicates that the design method is sensitive to the core packing interactions that specify the protein structure. Comparison of the structures of RH3 and RH4 indicates that both steric overlap and cavity formation provide strong driving forces for oligomer specificity.

  20. Actual and Idealized Crystal Field Parameterizations for the Uranium Ions in UF4

    Science.gov (United States)

    Gajek, Z.; Mulak, J.; Krupa, J. C.

    1993-12-01

    The crystal field parameters for the actual coordination symmetries of the uranium ions in UF4, C2 and C1, and for their idealizations to D2, C2v, D4, D4d, and the Archimedean antiprism point symmetries are given. They have been calculated by means of both the perturbative ab initio model and the angular overlap model and are referenced to the recent results fitted by Carnall's group. The equivalency of some different sets of parameters has been verified with the standardization procedure. The adequacy of several idealized approaches has been tested by comparison of the corresponding splitting patterns of the 3H4 ground state. Our results support the parameterization given by Carnall. Furthermore, the parameterization of the crystal field potential and the splitting diagram for the symmetryless uranium ion U(C1) are given. Having at our disposal the crystal field splittings for the two kinds of uranium ions in UF4, U(C2) and U(C1), we calculate the model plots of the paramagnetic susceptibility χ(T) and the magnetic entropy associated with the Schottky anomaly ΔS(T) for UF4.

  1. Model-driven harmonic parameterization of the cortical surface: HIP-HOP.

    Science.gov (United States)

    Auzias, G; Lefèvre, J; Le Troter, A; Fischer, C; Perrot, M; Régis, J; Coulon, O

    2013-05-01

    In the context of inter-subject brain surface matching, we present a parameterization of the cortical surface constrained by a model of cortical organization. The parameterization is defined via a harmonic mapping of each hemisphere surface to a rectangular planar domain that integrates a representation of the model. As opposed to previous landmark-based registration methods, we do not match folds between individuals but instead optimize the fit between cortical sulci and specific iso-coordinate axes in the model. This strategy overcomes some limitations of sulcus-based registration techniques, such as topological variability in sulcal landmarks across subjects. Experiments on 62 subjects with manually traced sulci are presented and compared with the result of the Freesurfer software. The evaluation involves a measure of dispersion of sulci with both angular and area distortions. We show that the model-based strategy can lead to a natural, efficient and very fast (less than 5 min per hemisphere) method for defining inter-subject correspondences. We discuss how this approach also reduces the problems inherent to anatomically defined landmarks and opens the way to the investigation of cortical organization through the notion of orientation and alignment of structures across the cortex.

  2. Device-Free Localization via an Extreme Learning Machine with Parameterized Geometrical Feature Extraction

    Directory of Open Access Journals (Sweden)

    Jie Zhang

    2017-04-01

    Device-free localization (DFL) is becoming one of the new technologies in the wireless localization field, due to its advantage that the target to be localized does not need to be attached to any electronic device. In a radio-frequency (RF) DFL system, radio transmitters (RTs) and radio receivers (RXs) are used to sense the target collaboratively, and the location of the target can be estimated by fusing the changes of the received signal strength (RSS) measurements associated with the wireless links. In this paper, we propose an extreme learning machine (ELM) approach for DFL, to improve the efficiency and the accuracy of the localization algorithm. Different from conventional machine learning approaches to wireless localization, in which the differential RSS measurements are trivially used as the only input features, we introduce a parameterized geometrical representation for an affected link, which consists of its geometrical intercepts and differential RSS measurement. Parameterized geometrical feature extraction (PGFE) is performed for the affected links and the features are used as the inputs of the ELM. The proposed PGFE-ELM for DFL is trained in the offline phase and performs real-time localization in the online phase, where the estimated location of the target is obtained through the created ELM. PGFE-ELM has the advantages that the affected links used by the ELM in the online phase can differ from those used for training in the offline phase, and that it is more robust in dealing with uncertain combinations of detectable wireless links. Experimental results show that the proposed PGFE-ELM can improve the localization accuracy and learning speed significantly compared with a number of existing machine learning and DFL approaches, including the weighted K-nearest neighbor (WKNN), support vector machine (SVM), and back-propagation neural network (BPNN) methods, as well as the well-known radio tomographic imaging (RTI) DFL approach.
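
    For readers unfamiliar with ELMs, the core training step is a single least-squares solve after a fixed random hidden layer, which is why the method is fast. The sketch below uses synthetic features and target locations, not the PGFE features of the paper; all sizes and data are assumptions for illustration.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    n_features, n_hidden = 8, 64   # illustrative sizes, not the paper's

    # Synthetic training set: input features (standing in for differential RSS
    # plus geometrical intercepts) and 2-D target locations.
    X = rng.uniform(-1.0, 1.0, size=(500, n_features))
    Y = 2.0 * X[:, :2] + 0.1

    # Random hidden layer, fixed after initialization (the defining ELM trait)
    W = rng.normal(size=(n_features, n_hidden))
    b = rng.normal(size=n_hidden)
    H = np.tanh(X @ W + b)

    # Output weights by Moore-Penrose pseudo-inverse: the only "training" step
    beta = np.linalg.pinv(H) @ Y

    def predict(X_new):
        return np.tanh(X_new @ W + b) @ beta

    rmse = float(np.sqrt(np.mean((predict(X) - Y) ** 2)))
    ```

    Because only `beta` is learned, retraining on new links is a single linear solve rather than an iterative optimization, which is the speed advantage the abstract cites.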

  3. Parameterized equation for the estimation of the unsaturated hydraulic conductivity function in Ferralsols south of Havana

    International Nuclear Information System (INIS)

    González Robaina, Felicita; López Seijas, Teresa

    2008-01-01

    The modeling of the processes involved in the movement of water and solutes in soil generally requires the general equation of water flow for the unsaturated condition, or the Darcy-Buckingham approach. In this approach, the hydraulic conductivity-soil moisture function K(θ) is a fundamental soil property to be determined for each field condition. Several methods reported in the literature for determining the hydraulic conductivity are based on simplifications, such as assuming a unit gradient or a fixed form of the K(θ) relationship. In recent years, much work has been reported on the search for simple, rapid and inexpensive methods to measure this relationship in the field. One of these methods is the parameterized equation proposed by Reichardt, which uses the parameters of the equations describing the internal drainage process and exploits the exponential nature of the K(θ) relationship. The objective of this work is to estimate K(θ) with the parameterized-equation method. To do so, results of internal drainage tests on a Ferralsol area south of Havana are used. The results show that the parameterized equation provides an estimation of K(θ) similar to that of the methods that assume unit-gradient conditions
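
    A minimal sketch of the exponential conductivity-moisture relation exploited by the parameterized-equation method; the parameter values below are illustrative assumptions, not values fitted to the Ferralsol data.

    ```python
    import numpy as np

    # Exponential hydraulic conductivity-moisture relation
    #   K(theta) = K0 * exp(gamma * (theta - theta0))
    K0 = 10.0       # conductivity at reference moisture theta0 (mm/day), assumed
    theta0 = 0.40   # reference volumetric soil moisture, assumed
    gamma = 60.0    # slope of ln(K) versus theta, assumed

    def K(theta):
        return K0 * np.exp(gamma * (theta - theta0))

    # Because ln K is linear in theta, two internal-drainage measurements
    # suffice to recover gamma: gamma = ln(K1/K2) / (theta1 - theta2)
    theta1, theta2 = 0.38, 0.35
    gamma_est = np.log(K(theta1) / K(theta2)) / (theta1 - theta2)
    ```

    In practice K0, theta0 and gamma would come from regressing ln K against theta over the drainage period rather than from two points.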

  4. Usage of Parameterized Fatigue Spectra and Physics-Based Systems Engineering Models for Wind Turbine Component Sizing: Preprint

    Energy Technology Data Exchange (ETDEWEB)

    Parsons, Taylor; Guo, Yi; Veers, Paul; Dykes, Katherine; Damiani, Rick

    2016-01-26

    Software models that use design-level input variables and physics-based engineering analysis for estimating the mass and geometrical properties of components in large-scale machinery can be very useful for analyzing design trade-offs in complex systems. This study uses DriveSE, an OpenMDAO-based drivetrain model that uses stress and deflection criteria to size drivetrain components within a geared, upwind wind turbine. Because a full lifetime fatigue load spectrum can only be defined using computationally-expensive simulations in programs such as FAST, a parameterized fatigue loads spectrum that depends on wind conditions, rotor diameter, and turbine design life has been implemented. The parameterized fatigue spectrum is only used in this paper to demonstrate the proposed fatigue analysis approach. This paper details a three-part investigation of the parameterized approach and a comparison of the DriveSE model with and without fatigue analysis on the main shaft system. It compares loads from three turbines of varying size and determines if and when fatigue governs drivetrain sizing compared to extreme load-driven design. It also investigates the model's sensitivity to shaft material parameters. The intent of this paper is to demonstrate how fatigue considerations in addition to extreme loads can be brought into a system engineering optimization.

  5. Comparison of Four Mixed Layer Mesoscale Parameterizations and the Equation for an Arbitrary Tracer

    Science.gov (United States)

    Canuto, V. M.; Dubovikov, M. S.

    2011-01-01

    In this paper we discuss two issues: the inter-comparison of four mixed layer mesoscale parameterizations, and the search for the eddy-induced velocity for an arbitrary tracer. It must be stressed that our analysis is limited to mixed layer mesoscales, since we do not treat sub-mesoscales and small-scale turbulent mixing. As for the first item, since three of the four parameterizations are expressed in terms of a stream function and a residual flux of the RMT (residual mean theory) formalism, while the fourth is expressed in terms of vertical and horizontal fluxes, we needed a formalism to connect the two formulations. The standard RMT representation developed for the deep ocean cannot be extended to the mixed layer, since its stream function does not vanish at the ocean's surface. We develop a new RMT representation that satisfies the surface boundary condition. As for the general form of the eddy-induced velocity for an arbitrary tracer, thus far it has been assumed that there is only the one that originates from the curl of the stream function. This is because it was assumed that the tracer residual flux is purely diffusive. On the other hand, we show that in the case of an arbitrary tracer, the residual flux also has a skew component that gives rise to an additional bolus velocity. Therefore, instead of only one bolus velocity, there are now two: one coming from the curl of the stream function and the other from the skew part of the residual flux. In the buoyancy case, only one bolus velocity contributes to the mean buoyancy equation, since the residual flux is indeed only diffusive.

  6. Internet Connectivity

    Indian Academy of Sciences (India)

    Internet Connectivity. BSNL, SIFY, HCL in Guwahati; only BSNL elsewhere in NE (local player in Shillong). Service poor; all vendors lease BW from BSNL.

  7. Mathematics Connection

    African Journals Online (AJOL)

    MATHEMATICS CONNECTION aims at providing a forum to promote the development of Mathematics Education in Ghana. Articles that seek to enhance the teaching and/or learning of mathematics at all levels of the educational system are welcome.

  8. HR Connect

    Data.gov (United States)

    US Agency for International Development — HR Connect is the USAID HR personnel system which allows HR professionals to process HR actions related to employee's personal and position information. This system...

  9. Model parameterization as method for data analysis in dendroecology

    Science.gov (United States)

    Tychkov, Ivan; Shishov, Vladimir; Popkova, Margarita

    2017-04-01

    There is no argument about the usefulness of process-based models in ecological studies; the only limitations are how well the model's algorithm is developed and how it is applied in research. Simulation of tree-ring growth based on climate provides valuable information on the tree-ring growth response to different environmental conditions, and also sheds light on species-specific features of the tree-ring growth process. Visual parameterization of the Vaganov-Shashkin model allows estimation of the non-linear response of tree-ring growth based on daily climate data: daily temperature, estimated day light and soil moisture. Previous use of the VS-Oscilloscope (a software tool for visual parameterization) has shown a good ability to recreate unique patterns of tree-ring growth for coniferous species in Siberian Russia, the USA, China, Mediterranean Spain and Tunisia. However, the use of such models is mostly one-sided, aimed at better understanding different tree growth processes, in contrast to statistical methods of analysis (e.g. generalized linear models, mixed models, structural equations), which can be used for reconstruction and forecasting. Usually the models are used either for checking new hypotheses or for quantitative assessment of physiological tree growth data to reveal growth process mechanisms, while statistical methods are used for data-mining assessment and as a study tool in themselves. The high sensitivity of the model's VS-parameters reflects the ability of the model to simulate tree-ring growth and to evaluate the limiting climate factors for growth. Precise parameterization with the VS-Oscilloscope provides valuable information about the growth processes of trees and the conditions under which these processes occur (e.g. day of growth season onset, length of season, minimal/maximum temperature for tree-ring growth, formation of wide or narrow rings, etc.). The work was supported by the Russian Science Foundation (RSF # 14-14-00219)

  10. Systematic Parameterization of Lignin for the CHARMM Force Field

    Energy Technology Data Exchange (ETDEWEB)

    Vermaas, Joshua; Petridis, Loukas; Beckham, Gregg; Crowley, Michael

    2017-07-06

    Plant cell walls have three primary components: cellulose, hemicellulose, and lignin, the latter of which is a recalcitrant, aromatic heteropolymer that provides structure to plants, water and nutrient transport through plant tissues, and a highly effective defense against pathogens. Overcoming the recalcitrance of lignin is key to effective biomass deconstruction, which would in turn enable the use of biomass as a feedstock for industrial processes. Our understanding of lignin structure in the plant cell wall is hampered by the limitations of the available lignin force fields, which currently account for only a single linkage between lignins and lack explicit parameterization for emerging lignin structures, both from natural variants and from engineered lignins. Since polymerization of lignin occurs via radical intermediates, multiple C-O and C-C linkages have been isolated, and the current force field represents only a small subset of the diverse lignin structures found in plants. In order to take into account the wide range of lignin polymerization chemistries, monomers and dimer combinations of C-, H-, G-, and S-lignins, as well as hydroxycinnamic acid linkages, were subjected to extensive quantum mechanical calculations to establish target data from which to build a complete molecular mechanics force field tuned specifically for diverse lignins. This was carried out in a GPU-accelerated global optimization process, whereby all molecules were parameterized simultaneously using the same internal parameter set. By parameterizing lignin specifically, we are able to represent the interactions and conformations of lignin monomers and dimers more accurately than a general force field. This new force field will enable computational researchers to study the effects of different linkages on the structure of lignin, as well as to construct more accurate plant cell wall models based on observed statistical distributions of lignin that differ between

  11. Multisite Evaluation of APEX for Water Quality: II. Regional Parameterization.

    Science.gov (United States)

    Nelson, Nathan O; Baffaut, Claire; Lory, John A; Anomaa Senaviratne, G M M M; Bhandari, Ammar B; Udawatta, Ranjith P; Sweeney, Daniel W; Helmers, Matt J; Van Liew, Mike W; Mallarino, Antonio P; Wortmann, Charles S

    2017-11-01

    Phosphorus (P) Index assessment requires independent estimates of long-term average annual P loss from fields, representing multiple climatic scenarios, management practices, and landscape positions. Because currently available measured data are insufficient to evaluate P Index performance, calibrated and validated process-based models have been proposed as tools to generate the required data. The objectives of this research were to develop a regional parameterization for the Agricultural Policy Environmental eXtender (APEX) model to estimate edge-of-field runoff, sediment, and P losses in restricted-layer soils of Missouri and Kansas and to assess the performance of this parameterization using monitoring data from multiple sites in this region. Five site-specific calibrated models (SSCM) from within the region were used to develop a regionally calibrated model (RCM), which was further calibrated and validated with measured data. Performance of the RCM was similar to that of the SSCMs for runoff simulation and had Nash-Sutcliffe efficiency (NSE) > 0.72 and absolute percent bias (|PBIAS|) 90%) and was particularly ineffective at simulating sediment loss from locations with small sediment loads. The RCM had acceptable performance for simulation of total P loss (NSE > 0.74, |PBIAS| < 30%) but underperformed the SSCMs. Total P-loss estimates should be used with caution due to poor simulation of sediment loss. Although we did not attain our goal of a robust regional parameterization of APEX for estimating sediment and total P losses, runoff estimates with the RCM were acceptable for P Index evaluation. Copyright © by the American Society of Agronomy, Crop Science Society of America, and Soil Science Society of America, Inc.
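
    The two goodness-of-fit statistics quoted above (NSE and PBIAS) are standard and easy to compute; the series below are made-up numbers for illustration only, and PBIAS sign conventions vary between authors.

    ```python
    import numpy as np

    def nse(obs, sim):
        # Nash-Sutcliffe efficiency: 1 is a perfect fit; <= 0 means the model
        # predicts no better than the mean of the observations.
        obs, sim = np.asarray(obs, float), np.asarray(sim, float)
        return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

    def pbias(obs, sim):
        # Percent bias; one common convention (some authors flip the sign).
        obs, sim = np.asarray(obs, float), np.asarray(sim, float)
        return 100.0 * np.sum(obs - sim) / np.sum(obs)

    # Made-up runoff series (mm) for illustration
    obs = [12.0, 30.0, 8.0, 55.0, 20.0]
    sim = [10.0, 33.0, 9.0, 50.0, 22.0]
    ```

    By these definitions, the RCM's reported runoff performance (NSE > 0.72 with small |PBIAS|) means most observed variance is captured with little systematic over- or under-prediction.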

  12. Parameterization models for solar radiation and solar technology applications

    International Nuclear Information System (INIS)

    Khalil, Samy A.

    2008-01-01

Solar radiation is very important for the evaluation and wide use of solar renewable energy systems. The development of calibration procedures for broadband solar radiation photometric instrumentation and the improvement of broadband solar radiation measurement accuracy are described. An improved diffuse sky reference and photometric calibration and characterization software for outdoor pyranometer calibrations are outlined. Parameterizations for direct beam, total hemispherical and diffuse sky radiation and solar radiation technology are briefly reviewed. The uncertainties of various broadband solar radiation measurements for solar energy and atmospheric effects are discussed. The variation of solar radiation responsivities with meteorological, statistical and climatological parameters and possible atmospheric conditions is also examined

  13. Non-perturbative Aspects of QCD and Parameterized Quark Propagator

    Institute of Scientific and Technical Information of China (English)

    HAN Ding-An; ZHOU Li-Juan; ZENG Ya-Guang; GU Yun-Ting; CAO Hui; MA Wei-Xing; MENG Cheng-Ju; PAN Ji-Huan

    2008-01-01

Based on the Global Color Symmetry Model, the non-perturbative QCD vacuum is investigated with a parameterized, fully dressed quark propagator. Our theoretical predictions for various quantities characterizing the QCD vacuum are in agreement with those predicted by many other phenomenological QCD-inspired models. The successful predictions clearly indicate the broad validity of the parameterized quark propagator used here. A detailed discussion of the arbitrariness in determining the integration cut-off parameter in calculating QCD vacuum condensates is given, and a method that avoids the dependence of the calculated results on the cut-off parameter is also recommended to readers.

  14. Parameterization models for solar radiation and solar technology applications

    Energy Technology Data Exchange (ETDEWEB)

    Khalil, Samy A. [National Research Institute of Astronomy and Geophysics, Solar and Space Department, Marsed Street, Helwan, 11421 Cairo (Egypt)

    2008-08-15

Solar radiation is very important for the evaluation and wide use of solar renewable energy systems. The development of calibration procedures for broadband solar radiation photometric instrumentation and the improvement of broadband solar radiation measurement accuracy are described. An improved diffuse sky reference and photometric calibration and characterization software for outdoor pyranometer calibrations are outlined. Parameterizations for direct beam, total hemispherical and diffuse sky radiation and solar radiation technology are briefly reviewed. The uncertainties of various broadband solar radiation measurements for solar energy and atmospheric effects are discussed. The variation of solar radiation responsivities with meteorological, statistical and climatological parameters and possible atmospheric conditions is also examined. (author)

  15. IR Optics Measurement with Linear Coupling's Action-Angle Parameterization

    CERN Document Server

    Luo, Yun; Pilat, Fulvia Caterina; Satogata, Todd; Trbojevic, Dejan

    2005-01-01

The interaction region (IR) optics are measured with the two DX/BPMs close to the IPs at the Relativistic Heavy Ion Collider (RHIC). The beta functions at the IP are measured from the phase advances of the two eigenmodes between the two BPMs, and the beta waists are determined from the beta functions at the two BPMs. The coupling parameters at the IPs are obtained through the linear coupling's action-angle parameterization. All experimental data were taken during driven oscillations with the AC dipole. The measurement methods are discussed, along with the measurement results during the beta*

  16. Parameterization of interatomic potential by genetic algorithms: A case study

    Energy Technology Data Exchange (ETDEWEB)

    Ghosh, Partha S., E-mail: psghosh@barc.gov.in; Arya, A.; Dey, G. K. [Materials Science Division, Bhabha Atomic Research Centre, Mumbai-400085 (India); Ranawat, Y. S. [Department of Ceramic Engineering, Indian Institute of Technology (BHU), Varanasi-221005 (India)

    2015-06-24

    A framework for Genetic Algorithm based methodology is developed to systematically obtain and optimize parameters for interatomic force field functions for MD simulations by fitting to a reference data base. This methodology is applied to the fitting of ThO{sub 2} (CaF{sub 2} prototype) – a representative of ceramic based potential fuel for nuclear applications. The resulting GA optimized parameterization of ThO{sub 2} is able to capture basic structural, mechanical, thermo-physical properties and also describes defect structures within the permissible range.

  17. The causal structure of spacetime is a parameterized Randers geometry

    Energy Technology Data Exchange (ETDEWEB)

    Skakala, Jozef; Visser, Matt, E-mail: jozef.skakala@msor.vuw.ac.nz, E-mail: matt.visser@msor.vuw.ac.nz [School of Mathematics, Statistics and Operations Research, Victoria University of Wellington, PO Box 600, Wellington (New Zealand)

    2011-03-21

    There is a well-established isomorphism between stationary four-dimensional spacetimes and three-dimensional purely spatial Randers geometries-these Randers geometries being a particular case of the more general class of three-dimensional Finsler geometries. We point out that in stably causal spacetimes, by using the (time-dependent) ADM decomposition, this result can be extended to general non-stationary spacetimes-the causal structure (conformal structure) of the full spacetime is completely encoded in a parameterized (t-dependent) class of Randers spaces, which can then be used to define a Fermat principle, and also to reconstruct the null cones and causal structure.
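The stationary-case correspondence underlying this result can be written out explicitly. The following is the standard derivation for a stationary spacetime (notation assumed here: V the norm of the timelike Killing vector, b_i the twist one-form, h_ij the spatial metric); the paper's contribution is the extension to the non-stationary, t-dependent case:

```latex
% Stationary spacetime in adapted coordinates:
\[
  ds^{2} = -V^{2}\bigl(dt - b_{i}\,dx^{i}\bigr)^{2} + h_{ij}\,dx^{i}\,dx^{j}.
\]
% A future-directed null curve (ds^2 = 0) satisfies
\[
  dt = b_{i}\,dx^{i} + \sqrt{\frac{h_{ij}}{V^{2}}\,dx^{i}\,dx^{j}}
     \;\equiv\; F(x,\,dx),
\]
% so coordinate travel time is the length functional of the Randers norm
\[
  F(x,v) = \sqrt{a_{ij}\,v^{i}v^{j}} + b_{i}\,v^{i},
  \qquad a_{ij} = \frac{h_{ij}}{V^{2}},
\]
% and Fermat's principle for light rays becomes the geodesic problem of
% this Randers geometry. The ADM decomposition promotes this to a
% t-dependent family of Randers spaces in the stably causal case.
```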

  18. The causal structure of spacetime is a parameterized Randers geometry

    International Nuclear Information System (INIS)

    Skakala, Jozef; Visser, Matt

    2011-01-01

    There is a well-established isomorphism between stationary four-dimensional spacetimes and three-dimensional purely spatial Randers geometries-these Randers geometries being a particular case of the more general class of three-dimensional Finsler geometries. We point out that in stably causal spacetimes, by using the (time-dependent) ADM decomposition, this result can be extended to general non-stationary spacetimes-the causal structure (conformal structure) of the full spacetime is completely encoded in a parameterized (t-dependent) class of Randers spaces, which can then be used to define a Fermat principle, and also to reconstruct the null cones and causal structure.

  19. Defining landscape resistance values in least-cost connectivity models for the invasive grey squirrel: a comparison of approaches using expert-opinion and habitat suitability modelling.

    Directory of Open Access Journals (Sweden)

    Claire D Stevenson-Holt

Least-cost models are widely used to study the functional connectivity of habitat within a varied landscape matrix. A critical step in the process is identifying resistance values for each land cover based upon the facilitating or impeding impact on species movement. Ideally resistance values would be parameterised with empirical data, but due to a shortage of such information, expert-opinion is often used. However, the use of expert-opinion is seen as subjective, human-centric and unreliable. This study derived resistance values from grey squirrel habitat suitability models (HSM) in order to compare the utility and validity of this approach with more traditional, expert-led methods. Models were built and tested with MaxEnt, using squirrel presence records and a categorical land cover map for Cumbria, UK. Predictions on the likelihood of squirrel occurrence within each land cover type were inverted, providing resistance values which were used to parameterise a least-cost model. The resulting habitat networks were measured and compared to those derived from a least-cost model built with previously collated information from experts. The expert-derived and HSM-inferred least-cost networks differ in precision. The HSM-informed networks were smaller and more fragmented because of the higher resistance values attributed to most habitats. These results are discussed in relation to the applicability of both approaches for conservation and management objectives, providing guidance to researchers and practitioners attempting to apply and interpret a least-cost approach to mapping ecological networks.
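The core of the HSM-to-resistance approach, inverting suitability predictions and routing a least-cost model over the result, can be sketched as follows. The grid values, the inversion formula, and the 4-connected Dijkstra routing are illustrative assumptions, not the study's MaxEnt outputs or GIS toolchain:

```python
import heapq

# Toy habitat-suitability grid in [0, 1] (hypothetical values, not MaxEnt output).
suitability = [
    [0.9, 0.8, 0.1],
    [0.7, 0.2, 0.1],
    [0.6, 0.5, 0.4],
]

# Invert suitability into resistance: highly suitable cells are cheap to cross.
# A small floor keeps resistance strictly positive.
resistance = [[max(1.0 - s, 0.01) for s in row] for row in suitability]

def least_cost(res, start, goal):
    """Dijkstra over 4-connected grid cells; cost = resistance of entered cell."""
    rows, cols = len(res), len(res[0])
    dist = {start: 0.0}
    heap = [(0.0, start)]
    while heap:
        d, (r, c) = heapq.heappop(heap)
        if (r, c) == goal:
            return d
        if d > dist.get((r, c), float("inf")):
            continue
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols:
                nd = d + res[nr][nc]
                if nd < dist.get((nr, nc), float("inf")):
                    dist[(nr, nc)] = nd
                    heapq.heappush(heap, (nd, (nr, nc)))
    return float("inf")

cost = least_cost(resistance, (0, 0), (2, 2))
```

Because higher resistance values inflate every path cost, a surface like the HSM-derived one above naturally yields smaller, more fragmented networks than a cheaper expert-derived surface over the same landscape.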

  20. Defining landscape resistance values in least-cost connectivity models for the invasive grey squirrel: a comparison of approaches using expert-opinion and habitat suitability modelling.

    Science.gov (United States)

    Stevenson-Holt, Claire D; Watts, Kevin; Bellamy, Chloe C; Nevin, Owen T; Ramsey, Andrew D

    2014-01-01

    Least-cost models are widely used to study the functional connectivity of habitat within a varied landscape matrix. A critical step in the process is identifying resistance values for each land cover based upon the facilitating or impeding impact on species movement. Ideally resistance values would be parameterised with empirical data, but due to a shortage of such information, expert-opinion is often used. However, the use of expert-opinion is seen as subjective, human-centric and unreliable. This study derived resistance values from grey squirrel habitat suitability models (HSM) in order to compare the utility and validity of this approach with more traditional, expert-led methods. Models were built and tested with MaxEnt, using squirrel presence records and a categorical land cover map for Cumbria, UK. Predictions on the likelihood of squirrel occurrence within each land cover type were inverted, providing resistance values which were used to parameterise a least-cost model. The resulting habitat networks were measured and compared to those derived from a least-cost model built with previously collated information from experts. The expert-derived and HSM-inferred least-cost networks differ in precision. The HSM-informed networks were smaller and more fragmented because of the higher resistance values attributed to most habitats. These results are discussed in relation to the applicability of both approaches for conservation and management objectives, providing guidance to researchers and practitioners attempting to apply and interpret a least-cost approach to mapping ecological networks.

  1. Finding significantly connected voxels based on histograms of connection strengths

    DEFF Research Database (Denmark)

    Kasenburg, Niklas; Pedersen, Morten Vester; Darkner, Sune

    2016-01-01

We explore a new approach for structural connectivity based segmentations of subcortical brain regions. Connectivity based segmentations are usually based on fibre connections from a seed region to predefined target regions. We present a method for finding significantly connected voxels based on the distribution of connection strengths. Paths from seed voxels to all voxels in a target region are obtained from a shortest-path tractography. For each seed voxel we approximate the distribution with a histogram of path scores. We hypothesise that the majority of estimated connections are false-positives and that their connection strength is distributed differently from true-positive connections. Therefore, an empirical null-distribution is defined for each target region as the average normalized histogram over all voxels in the seed region. Single histograms are then tested against the corresponding null-distribution.
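The null-histogram test described above can be sketched in a few lines. The score ranges, bin count, and the L1 distance used to flag voxels are illustrative choices, not the paper's exact statistics:

```python
import random

random.seed(0)

def histogram(scores, bins=10):
    """Normalized histogram of path scores in [0, 1]."""
    counts = [0] * bins
    for s in scores:
        counts[min(int(s * bins), bins - 1)] += 1
    return [c / len(scores) for c in counts]

# Hypothetical path scores: most seed voxels see low, false-positive-like
# scores; one voxel sees clearly higher, true-positive-like scores.
voxel_scores = {v: [random.uniform(0.0, 0.4) for _ in range(200)] for v in range(9)}
voxel_scores[9] = [random.uniform(0.5, 0.9) for _ in range(200)]

hists = {v: histogram(s) for v, s in voxel_scores.items()}

# Empirical null: the average normalized histogram over all seed voxels.
null = [sum(h[b] for h in hists.values()) / len(hists) for b in range(10)]

def l1_distance(h1, h2):
    return sum(abs(a - b) for a, b in zip(h1, h2))

# Voxels whose histogram is far from the null are flagged as
# significantly connected (here: simply the most distant voxel).
distances = {v: l1_distance(h, null) for v, h in hists.items()}
flagged = max(distances, key=distances.get)
```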

  2. Establishing Connectivity

    DEFF Research Database (Denmark)

    Kjær, Poul F.

Global law settings are characterised by a structural pre-eminence of connectivity norms, a type of norm which differs from coherency or possibility norms. The centrality of connectivity norms emerges from the function of global law, which is to increase the probability of transfers of condensed social components, such as economic capital and products, religious doctrines and scientific knowledge, from one legally structured context to another within world society. This was the case from colonialism and colonial law to contemporary global supply chains and human rights. Both colonial law and human rights can be understood as serving a constitutionalising function aimed at stabilising and facilitating connectivity. This allows for an understanding of colonialism and contemporary global governance as functional, but not as normative, equivalents.

  3. The parameterization of microchannel-plate-based detection systems

    Science.gov (United States)

    Gershman, Daniel J.; Gliese, Ulrik; Dorelli, John C.; Avanov, Levon A.; Barrie, Alexander C.; Chornay, Dennis J.; MacDonald, Elizabeth A.; Holland, Matthew P.; Giles, Barbara L.; Pollock, Craig J.

    2016-10-01

    The most common instrument for low-energy plasmas consists of a top-hat electrostatic analyzer (ESA) geometry coupled with a microchannel-plate-based (MCP-based) detection system. While the electrostatic optics for such sensors are readily simulated and parameterized during the laboratory calibration process, the detection system is often less well characterized. Here we develop a comprehensive mathematical description of particle detection systems. As a function of instrument azimuthal angle, we parameterize (1) particle scattering within the ESA and at the surface of the MCP, (2) the probability distribution of MCP gain for an incident particle, (3) electron charge cloud spreading between the MCP and anode board, and (4) capacitive coupling between adjacent discrete anodes. Using the Dual Electron Spectrometers on the Fast Plasma Investigation on NASA's Magnetospheric Multiscale mission as an example, we demonstrate a method for extracting these fundamental detection system parameters from laboratory calibration. We further show that parameters that will evolve in flight, namely, MCP gain, can be determined through application of this model to specifically tailored in-flight calibration activities. This methodology provides a robust characterization of sensor suite performance throughout mission lifetime. The model developed in this work is not only applicable to existing sensors but also can be used as an analytical design tool for future particle instrumentation.
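Two of the listed effects, charge-cloud spreading and capacitive coupling between adjacent discrete anodes, act to first order like successive smoothing kernels applied to the per-anode signal. A toy illustration follows; the kernel widths are hypothetical, not MMS/DES calibration values:

```python
# A single event's charge, deposited over one anode, is smeared first by the
# charge-cloud footprint and then by capacitive pickup on neighbouring anodes.

def convolve(signal, kernel):
    """Same-length 1-D smoothing with a centred, symmetric 3-tap kernel."""
    n = len(signal)
    out = [0.0] * n
    for i in range(n):
        for j, k in enumerate(kernel):
            idx = i + j - 1   # taps at offsets -1, 0, +1
            if 0 <= idx < n:
                out[i] += k * signal[idx]
    return out

# One event deposits all of its charge over anode 4 of 8.
signal = [0.0] * 8
signal[4] = 1.0

cloud_kernel = [0.2, 0.6, 0.2]       # charge-cloud footprint (hypothetical)
coupling_kernel = [0.05, 0.9, 0.05]  # capacitive coupling (hypothetical)

observed = convolve(convolve(signal, cloud_kernel), coupling_kernel)
```

Fitting such kernel parameters per azimuthal anode is, in essence, what the calibration-extraction procedure described above does, with the MCP gain distribution adding a per-event amplitude on top.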

  4. Further study on parameterization of reactor NAA: Pt. 2

    International Nuclear Information System (INIS)

    Tian Weizhi; Zhang Shuxin

    1989-01-01

In the last paper, the Ik0 method was proposed for fission interference corrections. Another important kind of interference in reactor NAA is due to threshold reactions induced by reactor fast neutrons. In view of the increasing importance of this kind of interference, and the difficulties encountered in using the relative comparison method, a parameterized method has been introduced. Typical channels in the heavy water reflector and the No.2 horizontal channel of the Heavy Water Research Reactor at the Institute of Atomic Energy have been shown, using multi-threshold detectors, to have fast neutron energy distributions (E > 4 MeV) close to the primary fission neutron spectrum. On this basis, a Ti foil is used as an 'instant fast neutron flux monitor' in parameterized corrections for threshold reaction interferences in long irradiations. A constant value of φ f /φ s = 0.70 ± 0.02% has been obtained for the No.2 rabbit channel. This value can be directly used for threshold reaction interference correction in short irradiations
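The interference correction enabled by a measured φf/φs ratio can be illustrated with a back-of-the-envelope calculation. Every number below is hypothetical, chosen only to show the structure of the correction, not values from the paper:

```python
# Illustrative correction for a threshold-reaction interference in reactor NAA.
# An interfering fast-neutron reaction produces the same indicator nuclide as
# the thermal-neutron reaction on the analyte. All numbers are hypothetical.

phi_ratio = 0.0070               # fast-to-thermal flux ratio, phi_f / phi_s
sigma_thermal = 13.3             # thermal activation cross section, analyte (barn)
sigma_fast = 0.25                # spectrum-averaged threshold cross section (barn)
n_interferent_per_analyte = 2.0  # atom ratio of interfering element to analyte

# Apparent extra activity per unit of true analyte activity:
interference_fraction = (
    phi_ratio * (sigma_fast / sigma_thermal) * n_interferent_per_analyte
)

# Scale factor to recover the true result from the raw (interfered) result.
corrected = 1.0 / (1.0 + interference_fraction)
```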

  5. Improving microphysics in a convective parameterization: possibilities and limitations

    Science.gov (United States)

    Labbouz, Laurent; Heikenfeld, Max; Stier, Philip; Morrison, Hugh; Milbrandt, Jason; Protat, Alain; Kipling, Zak

    2017-04-01

    The convective cloud field model (CCFM) is a convective parameterization implemented in the climate model ECHAM6.1-HAM2.2. It represents a population of clouds within each ECHAM-HAM model column, simulating up to 10 different convective cloud types with individual radius, vertical velocities and microphysical properties. Comparisons between CCFM and radar data at Darwin, Australia, show that in order to reproduce both the convective cloud top height distribution and the vertical velocity profile, the effect of aerodynamic drag on the rising parcel has to be considered, along with a reduced entrainment parameter. A new double-moment microphysics (the Predicted Particle Properties scheme, P3) has been implemented in the latest version of CCFM and is compared to the standard single-moment microphysics and the radar retrievals at Darwin. The microphysical process rates (autoconversion, accretion, deposition, freezing, …) and their response to changes in CDNC are investigated and compared to high resolution CRM WRF simulations over the Amazon region. The results shed light on the possibilities and limitations of microphysics improvements in the framework of CCFM and in convective parameterizations in general.

  6. Parameterization of ionization rate by auroral electron precipitation in Jupiter

    Directory of Open Access Journals (Sweden)

    Y. Hiraki

    2008-02-01

We simulate auroral electron precipitation into the Jovian atmosphere in which electron multi-directional scattering and energy degradation processes are treated exactly with a Monte Carlo technique. We make a parameterization of the calculated ionization rate of the neutral gas by electron impact in a similar way as used for the Earth's aurora. Our method allows the altitude distribution of the ionization rate to be obtained as a function of an arbitrary initial energy spectrum in the range of 1–200 keV. It also includes incident angle dependence and an arbitrary density distribution of molecular hydrogen. We show that there is little dependence of the estimated ionospheric conductance on atomic species such as H and He. We compare our results with those of recent studies with different electron transport schemes by adapting our parameterization to their atmospheric conditions. We discuss the intrinsic problem of their simplified assumption. The ionospheric conductance, which is important for Jupiter's magnetosphere-ionosphere coupling system, is estimated to vary by a factor depending on the electron energy spectrum based on recent observation and modeling. We discuss this difference through the relation with field-aligned current and electron spectrum.
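A rough consistency check for any such parameterization is the textbook "energy per ion pair" estimate of column-integrated ionization. A sketch with an entirely hypothetical incident spectrum follows; W ≈ 30 eV per ion pair in H2 is an approximate round number, and the spectrum and bin widths are arbitrary:

```python
# Column-integrated ionization from an incident electron spectrum, using the
# mean-energy-per-ion-pair approximation. Not the paper's Monte Carlo scheme.

W_EV = 30.0  # approximate mean energy expended per ion pair in H2 (eV)

# Hypothetical differential number flux: (energy keV, electrons cm^-2 s^-1 keV^-1)
spectrum = [(1.0, 1.0e7), (10.0, 5.0e6), (50.0, 1.0e6), (100.0, 2.0e5)]
bin_width_kev = [4.5, 24.5, 45.0, 50.0]  # crude bin widths for the grid above

total_ionization = 0.0  # ion pairs cm^-2 s^-1, column-integrated
for (e_kev, flux), de in zip(spectrum, bin_width_kev):
    energy_flux_ev = flux * de * e_kev * 1.0e3  # eV cm^-2 s^-1 in this bin
    total_ionization += energy_flux_ev / W_EV
```

The parameterization described in the abstract goes further by distributing this total in altitude as a function of the spectrum, incidence angle, and H2 density profile.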

  8. Rapid parameterization of small molecules using the Force Field Toolkit.

    Science.gov (United States)

    Mayne, Christopher G; Saam, Jan; Schulten, Klaus; Tajkhorshid, Emad; Gumbart, James C

    2013-12-15

    The inability to rapidly generate accurate and robust parameters for novel chemical matter continues to severely limit the application of molecular dynamics simulations to many biological systems of interest, especially in fields such as drug discovery. Although the release of generalized versions of common classical force fields, for example, General Amber Force Field and CHARMM General Force Field, have posited guidelines for parameterization of small molecules, many technical challenges remain that have hampered their wide-scale extension. The Force Field Toolkit (ffTK), described herein, minimizes common barriers to ligand parameterization through algorithm and method development, automation of tedious and error-prone tasks, and graphical user interface design. Distributed as a VMD plugin, ffTK facilitates the traversal of a clear and organized workflow resulting in a complete set of CHARMM-compatible parameters. A variety of tools are provided to generate quantum mechanical target data, setup multidimensional optimization routines, and analyze parameter performance. Parameters developed for a small test set of molecules using ffTK were comparable to existing CGenFF parameters in their ability to reproduce experimentally measured values for pure-solvent properties (<15% error from experiment) and free energy of solvation (±0.5 kcal/mol from experiment). Copyright © 2013 Wiley Periodicals, Inc.

  9. Why Did the Bear Cross the Road? Comparing the Performance of Multiple Resistance Surfaces and Connectivity Modeling Methods

    Directory of Open Access Journals (Sweden)

    Samuel A. Cushman

    2014-12-01

There have been few assessments of the performance of alternative resistance surfaces, and little is known about how connectivity modeling approaches differ in their ability to predict organism movements. In this paper, we evaluate the performance of four connectivity modeling approaches applied to two resistance surfaces in predicting the locations of highway crossings by American black bears in the northern Rocky Mountains, USA. We found that a resistance surface derived directly from movement data greatly outperformed a resistance surface produced from analysis of genetic differentiation, despite their heuristic similarities. Our analysis also suggested differences in the performance of different connectivity modeling approaches. Factorial least cost paths appeared to slightly outperform other methods on the movement-derived resistance surface, but had very poor performance on the resistance surface obtained from multi-model landscape genetic analysis. Cumulative resistant kernels appeared to offer the best combination of high predictive performance and sensitivity to differences in resistance surface parameterization. Our analysis highlights that even when two resistance surfaces include the same variables and have a high spatial correlation of resistance values, they may perform very differently in predicting animal movement and population connectivity.

  10. Improvement in the Modeled Representation of North American Monsoon Precipitation Using a Modified Kain–Fritsch Convective Parameterization Scheme

    KAUST Repository

    Luong, Thang

    2018-01-22

A commonly noted problem in the simulation of warm-season convection in the North American monsoon region has been the inability of atmospheric models at the meso-β scales (tens to hundreds of kilometers) to simulate organized convection, principally mesoscale convective systems. With the use of convective parameterization, high precipitation biases in model simulations are typically observed over the peaks of mountain ranges. To address this issue, the Kain–Fritsch (KF) cumulus parameterization scheme has been modified with new diagnostic equations to compute the updraft velocity, the convective available potential energy closure assumption, and the convective trigger function. The scheme has been adapted for use in the Weather Research and Forecasting (WRF) model. A numerical weather prediction-type simulation is conducted for the North American Monsoon Experiment Intensive Observing Period 2, and a regional climate simulation is performed by dynamical downscaling. In both of these applications, there are notable improvements in the WRF model-simulated precipitation due to the better representation of organized, propagating convection. The use of the modified KF scheme for atmospheric model simulations may provide a more computationally economical alternative for improving the representation of organized convection, as compared to convective-permitting simulations at the kilometer scale or a super-parameterization approach.
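The CAPE closure mentioned above rests on the standard definition of convective available potential energy. A minimal numerical sketch with an idealised parcel/environment profile follows; the profile values are hypothetical and this is the textbook definition, not the modified KF diagnostics:

```python
# CAPE as the vertical integral of parcel buoyancy over positively
# buoyant layers, on a coarse hypothetical sounding.

G = 9.81  # gravitational acceleration, m s^-2

# (height m, parcel virtual temperature K, environment virtual temperature K)
profile = [
    (1000.0, 302.0, 300.0),
    (3000.0, 295.0, 292.0),
    (6000.0, 280.0, 279.0),
    (9000.0, 262.0, 263.0),   # parcel colder here: negatively buoyant
]

cape = 0.0  # J kg^-1
for (z0, tp0, te0), (z1, tp1, te1) in zip(profile, profile[1:]):
    buoy0 = G * (tp0 - te0) / te0
    buoy1 = G * (tp1 - te1) / te1
    layer = 0.5 * (buoy0 + buoy1) * (z1 - z0)   # trapezoidal layer integral
    cape += max(layer, 0.0)  # only positively buoyant layers contribute
```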

  11. An Internet of Things Approach for Extracting Featured Data Using AIS Database: An Application Based on the Viewpoint of Connected Ships

    Directory of Open Access Journals (Sweden)

    Wei He

    2017-09-01

Automatic Identification System (AIS) data, as a major source of navigational data, are widely used in connected-ship applications for the purpose of implementing maritime situation awareness and evaluating maritime transportation. Efficiently extracting featured data from an AIS database is a persistent challenge and time-consuming work for maritime administrators and researchers. In this paper, a novel approach is proposed to extract massive featured data from the AIS database. An Evidential Reasoning rule based methodology is proposed to simulate the manual procedure of extracting routes from the AIS database. First, the frequency distributions of ship dynamic attributes, such as the mean and variance of Speed over Ground and Course over Ground, are obtained according to verified AIS data samples. Subsequently, the correlations between the attributes and the belief degrees of the categories are established based on likelihood modeling. In this way, the attributes are characterized as several pieces of evidence, and the evidence can be combined with the Evidential Reasoning rule. In addition, the weight coefficients are trained in a nonlinear optimization model to extract the AIS data more accurately. A real-life case study was conducted at an intersection waterway of the Yangtze River in Wuhan, China. The results show that the proposed methodology is able to extract data very precisely.
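The evidence-combination step can be sketched with Dempster's rule, to which the Evidential Reasoning rule reduces under equal weights and full reliability. The category names and belief degrees below are hypothetical, not derived from real AIS data:

```python
# Dempster-style combination of two pieces of evidence over singleton
# route categories. Belief degrees are illustrative only.

def combine(m1, m2):
    """Combine two belief-mass assignments over the same singleton frame."""
    keys = set(m1) | set(m2)
    raw = {k: m1.get(k, 0.0) * m2.get(k, 0.0) for k in keys}
    total = sum(raw.values())          # non-conflicting mass
    return {k: v / total for k, v in raw.items()}

# Evidence from two dynamic attributes (e.g. SOG- and COG-based likelihoods):
sog_evidence = {"cargo_route": 0.6, "fishing": 0.3, "other": 0.1}
cog_evidence = {"cargo_route": 0.5, "fishing": 0.4, "other": 0.1}

combined = combine(sog_evidence, cog_evidence)
```

Agreeing evidence reinforces itself: here both attributes lean toward "cargo_route", so its combined belief exceeds either input belief.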

  12. A Coordinated Effort to Improve Parameterization of High-Latitude Cloud and Radiation Processes

    International Nuclear Information System (INIS)

    J. O. Pinto; A.H. Lynch

    2004-01-01

    The goal of this project is the development and evaluation of improved parameterization of arctic cloud and radiation processes and implementation of the parameterizations into a climate model. Our research focuses specifically on the following issues: (1) continued development and evaluation of cloud microphysical parameterizations, focusing on issues of particular relevance for mixed phase clouds; and (2) evaluation of the mesoscale simulation of arctic cloud system life cycles

  13. Making connections

    NARCIS (Netherlands)

    Marion Duimel

    2007-01-01

    Original title: Verbinding maken; senioren en internet. More and more older people are finding their way to the Internet. Many people aged over 50 who have only recently gone online say that a new world has opened up for them. By connecting to the Internet they have the feeling that they

  14. CMS Connect

    Science.gov (United States)

    Balcas, J.; Bockelman, B.; Gardner, R., Jr.; Hurtado Anampa, K.; Jayatilaka, B.; Aftab Khan, F.; Lannon, K.; Larson, K.; Letts, J.; Marra Da Silva, J.; Mascheroni, M.; Mason, D.; Perez-Calero Yzquierdo, A.; Tiradani, A.

    2017-10-01

The CMS experiment collects and analyzes large amounts of data coming from high energy particle collisions produced by the Large Hadron Collider (LHC) at CERN. This involves a huge amount of real and simulated data processing that needs to be handled in batch-oriented platforms. The CMS Global Pool of computing resources provides over 100K dedicated CPU cores, and another 50K to 100K CPU cores from opportunistic resources, for these kinds of tasks. Even though production and event-processing analysis workflows are already managed by existing tools, there is still a lack of support for submitting final-stage condor-like analysis jobs, familiar to Tier-3 or local computing facility users, into these distributed resources in a way that is integrated with other CMS services and user-friendly. CMS Connect is a set of computing tools and services designed to augment existing services in the CMS physics community, focusing on this kind of condor analysis job. It is based on the CI-Connect platform developed by the Open Science Grid and uses the CMS GlideInWMS infrastructure to transparently plug CMS global grid resources into a virtual pool accessed via a single submission machine. This paper describes the specific developments and deployment of CMS Connect beyond the CI-Connect platform in order to integrate the service with CMS-specific needs, including site-specific submission, accounting of jobs, and automated reporting to standard CMS monitoring resources in an effortless way for its users.

  15. ANALYSIS OF PARAMETERIZATION VALUE REDUCTION OF SOFT SETS AND ITS ALGORITHM

    Directory of Open Access Journals (Sweden)

    Mohammed Adam Taheir Mohammed

    2016-02-01

In this paper, the parameterization value reduction of soft sets and its algorithm for decision making are studied and described. It is based on the parameterization reduction of soft sets. The purpose of this study is to investigate the inherited disadvantages of the parameterization reduction of soft sets and its algorithm. The algorithms presented in this study attempt to reduce the least significant parameter values of the soft set. Through the analysis, two techniques have been described. Through this study, it is found that the parameterization reduction of soft sets and its algorithm yield differing and inconsistent suboptimal results.
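The underlying idea of parameterization reduction, dropping parameters whose removal leaves the set of optimal objects unchanged, can be sketched on a toy soft set. The data are hypothetical, and this illustrates the classic parameterization reduction rather than the value-reduction variant analysed in the paper:

```python
from itertools import combinations

# A toy soft set: objects described by boolean parameters (hypothetical data).
objects = {
    "h1": {"cheap": 1, "wooden": 1, "modern": 0, "green": 1},
    "h2": {"cheap": 1, "wooden": 0, "modern": 1, "green": 1},
    "h3": {"cheap": 0, "wooden": 1, "modern": 0, "green": 0},
}
all_params = ["cheap", "wooden", "modern", "green"]

def best(objs, params):
    """Objects attaining the maximum choice value (row sum over `params`)."""
    cv = {o: sum(row[p] for p in params) for o, row in objs.items()}
    top = max(cv.values())
    return {o for o, v in cv.items() if v == top}

# Reduction: a smallest parameter subset preserving the optimal-object set.
optimal = best(objects, all_params)
reduction = list(all_params)
for k in range(1, len(all_params)):
    keep = next((c for c in combinations(all_params, k)
                 if best(objects, c) == optimal), None)
    if keep is not None:
        reduction = list(keep)
        break
```

In this toy case a single parameter already reproduces the optimal set {h1, h2}; the paper's point is that such reductions can behave inconsistently when suboptimal rankings also matter.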

  16. The scale effect on soil erosion. A plot approach to understand connectivity on slopes under cultivation at variable plot sizes and under Mediterranean climatic conditions

    Science.gov (United States)

    Cerdà, Artemi; Bagarello, Vicenzo; Ferro, Vito; Iovino, Massimo; Borja, Manuel Estaban Lucas; Francisco Martínez Murillo, Juan; González Camarena, Rafael

    2017-04-01

This research shares information collected during the last 15 years at the Sparacia and El Teularet soil erosion experimental stations under the same management, and investigates how the concept of connectivity can give us a better understanding of the soil erosion process and the effect of scale. All the data will be treated to show the runoff and sediment eroded at different plot sizes, in order to understand how the sediment is transported at the event scale. The rainfall characteristics will also be analysed to understand the soil erosion processes at different scales. Acknowledgements: The research leading to these results has received funding from the European Union Seventh Framework Programme (FP7/2007-2013) under grant agreement n 603498 (RECARE project) and the CGL2013-47862-C2-1-R and CGL2016-75178-C2-2-R national research projects.

  17. A new conceptual framework for water and sediment connectivity

    Science.gov (United States)

    Keesstra, Saskia; Cerdà, Artemi; Parsons, Tony; Nunes, Joao Pedro; Saco, Patricia

    2016-04-01

    hydrological and sediment connectivity as described in previous research by Bracken et al (2013, 2015). By looking at the individual parts of the system, it becomes more manageable and less conceptual, which is important because we have to indicate where research on connectivity should focus. With this approach, processes and feedbacks in the catchment system can be pulled apart and studied separately, making the system understandable and measurable, which will enable parameterization of models with actual measured data. The approach we took in describing water and sediment transfer is to first assess how they work in a system in dynamic equilibrium. After describing this, an assessment is made of how such dynamic equilibria can be taken out of balance by an external push. Baartman, J.E.M., Masselink, R.H., Keesstra, S.D., Temme, A.J.A.M., 2013. Linking landscape morphological complexity and sediment connectivity. Earth Surface Processes and Landforms 38: 1457-1471. Bracken, L.J., Wainwright, J., Ali, G.A., Tetzlaff, D., Smith, M.W., Reaney, S.M., and Roy, A.G. 2013. Concepts of hydrological connectivity: research approaches, pathways and future agendas. Earth Science Reviews, 119, 17-34. Bracken, L.J., Turnbull, L., Wainwright, J. and Bogaart, P. Submitted. Sediment Connectivity: A Framework for Understanding Sediment Transfer at Multiple Scales. Earth Surface Processes and Landforms. Cerdà, A., Brazier, R., Nearing, M., and de Vente, J. 2012. Scales and erosion. Catena, 102, 1-2. doi:10.1016/j.catena.2011.09.006 Parsons, A.J., Bracken, L., Poeppl, R., Wainwright, J., Keesstra, S.D., 2015. Editorial: Introduction to special issue on connectivity in water and sediment dynamics. In press in Earth Surface Processes and Landforms. DOI: 10.1002/esp.3714

  18. Parameterization of ion channeling half-angles and minimum yields

    Energy Technology Data Exchange (ETDEWEB)

    Doyle, Barney L.

    2016-03-15

    An MS Excel program has been written that calculates ion channeling half-angles and minimum yields in cubic bcc, fcc and diamond lattice crystals. All of the tables and graphs in the three Ion Beam Analysis Handbooks that previously had to be looked up and read manually were programmed into Excel as handy lookup tables or, in the case of the graphs, parameterized using rather simple exponential functions with different power functions of the arguments. The program then offers an extremely convenient way to calculate axial and planar half-angles and minimum yields, as well as the effects of amorphous overlayers on half-angles and minimum yields. The program can calculate these half-angles and minimum yields for 〈u v w〉 axes and [h k l] planes up to (5 5 5). The program is open source and available at (http://www.sandia.gov/pcnsc/departments/iba/ibatable.html).
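    The quantity being parameterized here is, at its core, the Lindhard characteristic angle. As background (this is the standard practical-units form quoted in ion beam analysis texts, not the program's own fitted parameterization), a minimal sketch:

```python
import math

def lindhard_half_angle_deg(z1, z2, e_mev, d_angstrom):
    """Characteristic Lindhard angle psi_1 (degrees) for axial channeling.

    Practical-units form psi_1 = 0.307 * sqrt(Z1*Z2 / (E*d)),
    with the beam energy E in MeV and the atomic row spacing d in
    Angstrom (equivalent to psi_1 = sqrt(2*Z1*Z2*e^2 / (E*d))).
    """
    return 0.307 * math.sqrt(z1 * z2 / (e_mev * d_angstrom))

# Example: 2 MeV He (Z1 = 2) channeled along Si <110> (Z2 = 14, d ~ 3.84 A)
psi = lindhard_half_angle_deg(2, 14, 2.0, 3.84)
```

    The measured half-angle psi_1/2 scales with this characteristic angle; the handbook graphs fold in thermal-vibration and screening corrections on top of it.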

  19. A simple parameterization of aerosol emissions in RAMS

    Science.gov (United States)

    Letcher, Theodore

    Throughout the past decade, a high degree of attention has been focused on determining the microphysical impact of anthropogenically enhanced concentrations of Cloud Condensation Nuclei (CCN) on orographic snowfall in the mountains of the western United States. This area has garnered a lot of attention due to the implications this effect may have on local water resource distribution within the region. Recent advances in computing power and the development of highly advanced microphysical schemes within numerical models have provided an estimation of the sensitivity that orographic snowfall has to changes in atmospheric CCN concentrations. However, what is still lacking is a coupling between these advanced microphysical schemes and a real-world representation of CCN sources. Previously, an attempt to represent the heterogeneous evolution of aerosol was made by coupling three-dimensional aerosol output from the WRF Chemistry model to the Colorado State University (CSU) Regional Atmospheric Modeling System (RAMS) (Ward et al. 2011). The biggest problem associated with this scheme was its computational expense, which was so high that it was prohibitive for simulations with fine enough resolution to accurately represent microphysical processes. To improve upon this method, a new parameterization for aerosol emission was developed in such a way that it was fully contained within RAMS. Several assumptions went into generating a computationally efficient aerosol emissions parameterization in RAMS. The most notable assumption was the decision to neglect the chemical processes involved in the formation of Secondary Aerosol (SA), and instead treat SA as primary aerosol via short-term WRF-CHEM simulations. 
    While SA makes up a substantial portion of the total aerosol burden (much of which is made up of organic material), the representation of this process is highly complex and highly expensive within a numerical

  20. A stratiform cloud parameterization for General Circulation Models

    International Nuclear Information System (INIS)

    Ghan, S.J.; Leung, L.R.; Chuang, C.C.; Penner, J.E.; McCaa, J.

    1994-01-01

    The crude treatment of clouds in General Circulation Models (GCMs) is widely recognized as a major limitation in the application of these models to predictions of global climate change. The purpose of this project is to develop a parameterization for stratiform clouds in GCMs that expresses these clouds in terms of bulk microphysical properties and their subgrid variability. In this parameterization, precipitating cloud species are distinguished from non-precipitating species, and the liquid phase is distinguished from the ice phase. The size of the non-precipitating cloud particles (which influences both the cloud radiative properties and the conversion of non-precipitating cloud species to precipitating species) is determined by predicting both the mass and number concentrations of each species

  1. A stratiform cloud parameterization for general circulation models

    International Nuclear Information System (INIS)

    Ghan, S.J.; Leung, L.R.; Chuang, C.C.; Penner, J.E.; McCaa, J.

    1994-01-01

    The crude treatment of clouds in general circulation models (GCMs) is widely recognized as a major limitation in applying these models to predictions of global climate change. The purpose of this project is to develop in GCMs a stratiform cloud parameterization that expresses clouds in terms of bulk microphysical properties and their subgrid variability. Various cloud variables and their interactions are summarized. Precipitating cloud species are distinguished from non-precipitating species, and the liquid phase is distinguished from the ice phase. The size of the non-precipitating cloud particles (which influences both the cloud radiative properties and the conversion of non-precipitating cloud species to precipitating species) is determined by predicting both the mass and number concentrations of each species
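    As an illustration of the kind of conversion term such a two-moment bulk scheme needs (this is the widely used Khairoutdinov and Kogan 2000 autoconversion rate, given here for illustration, not the formulation of the project described above), the cloud-to-rain conversion depends on both the mass and the number concentration:

```python
def autoconversion_kk2000(qc, nd):
    """Autoconversion rate (kg/kg/s) of cloud water to rain, after
    Khairoutdinov & Kogan (2000): dqr/dt = 1350 * qc**2.47 * Nd**-1.79,
    with cloud water mixing ratio qc in kg/kg and droplet number
    concentration Nd in cm^-3.
    """
    return 1350.0 * qc**2.47 * nd**-1.79
```

    The negative exponent on Nd captures the microphysical point of the abstract: for the same cloud water mass, more (hence smaller) droplets convert to precipitation more slowly, which is why both mass and number must be predicted.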

  2. Sensitivity of tropical cyclone simulations to microphysics parameterizations in WRF

    International Nuclear Information System (INIS)

    Reshmi Mohan, P.; Srinivas, C.V.; Bhaskaran, R.; Venkatraman, B.; Yesubabu, V.

    2018-01-01

    Tropical cyclones (TC) cause storm surge along coastal areas where these storms cross the coast. As major nuclear facilities are usually located in coastal regions, surge predictions are highly important for DAE. The critical TC parameters needed in estimating storm surge are intensity (winds, central pressure and radius of maximum winds) and storm tracks. The predictions with numerical models are generally made by representing the clouds and precipitation processes using convective and microphysics parameterizations. At high spatial resolutions (1-3 km), the NWP model can act as a cloud-resolving model in which microphysics explicitly resolves convective precipitation without using convection schemes. Recent simulation studies using WRF on severe weather phenomena such as thunderstorms and hurricanes indicated large sensitivity of predicted rainfall and hurricane tracks to microphysics, due to variations in temperature and pressure gradients which generate winds that determine the storm track. In the present study, a sensitivity analysis of tropical cyclone tracks and intensity to different microphysics schemes has been conducted

  3. Connecting the failure of K-theory inside and above vegetation canopies and ejection-sweep cycles by a large eddy simulation

    International Nuclear Information System (INIS)

    Banerjee, Tirtha; De Roo, Frederik; Mauder, Matthias

    2017-01-01

    Parameterizations of biosphere-atmosphere interaction processes in climate models and other hydrological applications require characterization of turbulent transport of momentum and scalars between vegetation canopies and the atmosphere, which is often modeled using a turbulent analogy to molecular diffusion processes. However, simple flux-gradient approaches (K-theory) fail for canopy turbulence. One cause is turbulent transport by large coherent eddies at the canopy scale, which can be linked to sweep-ejection events, and bear signatures of non-local organized eddy motions. K-theory, which parameterizes the turbulent flux or stress as proportional to the local concentration or velocity gradient, fails to account for these non-local organized motions. The connection between sweep-ejection cycles and the local turbulent flux can be traced back to the Reynolds-averaged turbulence triple moment c'w'w'. In this work, we use large-eddy simulation to investigate the diagnostic connection between the failure of K-theory and sweep-ejection motions. The analyzed schemes are quadrant analysis (QA) and the complete and incomplete cumulant expansion methods (CEM and ICEM). The latter approaches introduce a turbulence timescale into the modeling. Furthermore, we find that the momentum flux needs a different formulation for the turbulence timescale than the sensible heat flux. In conclusion, accounting for buoyancy in stratified conditions is also deemed important, in addition to accounting for non-local events, to predict the correct momentum or scalar fluxes.
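    A minimal sketch of the contrast the abstract draws. The local K-theory closure is flux = -K dC/dz; the non-local correction is written here in an assumed ICEM-style form (a term proportional to the vertical divergence of the triple moment, scaled by a turbulence timescale tau), for illustration only:

```python
def ktheory_flux(k, dcdz):
    """Local flux-gradient (K-theory) closure: w'c' = -K * dC/dz."""
    return -k * dcdz

def nonlocal_flux(k, dcdz, tau, d_triple_dz):
    """Illustrative ICEM-style closure: the local K-theory flux plus a
    non-local term proportional to the vertical divergence of the triple
    moment c'w'w', scaled by a turbulence timescale tau. The exact form
    is assumed here for illustration, not taken from the paper."""
    return -k * dcdz - tau * d_triple_dz
```

    The second form can produce a non-zero (counter-gradient) flux even where the local gradient vanishes, which is precisely the situation inside canopies where K-theory fails.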

  4. Examining Chaotic Convection with Super-Parameterization Ensembles

    Science.gov (United States)

    Jones, Todd R.

    This study investigates a variety of features present in a new configuration of the Community Atmosphere Model (CAM) variant, SP-CAM 2.0. The new configuration (multiple-parameterization-CAM, MP-CAM) changes the manner in which the super-parameterization (SP) concept represents physical tendency feedbacks to the large scale by using the mean of 10 independent two-dimensional cloud-permitting model (CPM) curtains in each global model column instead of the conventional single CPM curtain. The climates of the SP and MP configurations are examined to investigate any significant differences caused by the application of convective physical tendencies that are more deterministic in nature, paying particular attention to extreme precipitation events and large-scale weather systems, such as the Madden-Julian Oscillation (MJO). A number of small but significant changes in the mean state climate are uncovered, and it is found that the new formulation degrades MJO performance. Despite these deficiencies, the ensemble of possible realizations of convective states in the MP configuration allows for analysis of uncertainty in the small-scale solution, enabling examination of the weather regimes and physical mechanisms associated with strong, chaotic convection. Methods of quantifying precipitation predictability are explored, and use of the most reliable of these leads to the conclusion that poor precipitation predictability is most directly related to the proximity of the global climate model column state to atmospheric critical points. Secondarily, the predictability is tied to the availability of potential convective energy, the presence of mesoscale convective organization on the CPM grid, and the directive power of the large-scale.

  5. Systematic Parameterization, Storage, and Representation of Volumetric DICOM Data.

    Science.gov (United States)

    Fischer, Felix; Selver, M Alper; Gezer, Sinem; Dicle, Oğuz; Hillen, Walter

    Tomographic medical imaging systems produce hundreds to thousands of slices, enabling three-dimensional (3D) analysis. Radiologists process these images through various tools and techniques in order to generate 3D renderings for various applications, such as surgical planning, medical education, and volumetric measurements. To save and store these visualizations, current systems use snapshot or video export, which prevents further optimization and requires the storage of significant additional data. The Grayscale Softcopy Presentation State extension of the Digital Imaging and Communications in Medicine (DICOM) standard resolves this issue for two-dimensional (2D) data by introducing an extensive set of parameters, namely 2D Presentation States (2DPR), that describe how an image should be displayed. 2DPR allows storing these parameters instead of parameter-applied images, which would cause unnecessary duplication of the image data. Since there is currently no corresponding extension for 3D data, in this study, a DICOM-compliant object called 3D Presentation States (3DPR) is proposed for the parameterization and storage of 3D medical volumes. To accomplish this, the 3D medical visualization process is divided into four tasks, namely pre-processing, segmentation, post-processing, and rendering. The important parameters of each task are determined. Special focus is given to the compression of segmented data, parameterization of the rendering process, and DICOM-compliant implementation of the 3DPR object. The use of 3DPR was tested in a radiology department on three clinical cases, which required multiple segmentations and visualizations during the workflow of radiologists. The results show that 3DPR can effectively simplify the workload of physicians by directly regenerating 3D renderings without repeating intermediate tasks, increase efficiency by preserving all user interactions, and provide efficient storage as well as transfer of visualized data.
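    A sketch of the parameter-storage idea: group the parameters of the four visualization tasks into one serializable object, and store that instead of rendered images. All field names here are illustrative, not the DICOM attribute names defined by the 3DPR object:

```python
import json
from dataclasses import dataclass, field, asdict

@dataclass
class PresentationState3D:
    # Illustrative container: one parameter dict per visualization task.
    preprocessing: dict = field(default_factory=dict)   # e.g. window/level, resampling
    segmentation: dict = field(default_factory=dict)    # e.g. compressed label masks
    postprocessing: dict = field(default_factory=dict)  # e.g. smoothing, cropping
    rendering: dict = field(default_factory=dict)       # e.g. transfer function, camera

state = PresentationState3D(
    preprocessing={"window_center": 40, "window_width": 400},
    rendering={"camera_azimuth_deg": 30, "opacity_transfer": [[0, 0.0], [255, 0.9]]},
)
blob = json.dumps(asdict(state))               # store parameters, not pixels
restored = PresentationState3D(**json.loads(blob))
```

    Storing a few kilobytes of parameters lets the viewer regenerate the full 3D rendering on demand, which is the storage-and-reproducibility argument the abstract makes for 3DPR.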

  6. Gendered Connections

    DEFF Research Database (Denmark)

    Jensen, Steffen Bo

    2009-01-01

    This article explores the gendered nature of urban politics in Cape Town by focusing on a group of female, township politicians. Employing the Deleuzian concept of `wild connectivity', it argues that these politically entrepreneurial women were able to negotiate a highly volatile urban landscape...... by drawing on and operationalizing violent, male networks — from struggle activists' networks, to vigilante groups and gangs, to the police. The fact that they were women helped them to tap into and exploit these networks. At the same time, they were restricted by their sex, as their ability to navigate...... space also drew on quite traditional notions of female respectability. Furthermore, the article argues, the form of wild connectivity to an extent was a function of the political transition, which destabilized formal structures of gendered authority. It remains a question whether this form...

  7. Assessing the performance of wave breaking parameterizations in shallow waters in spectral wave models

    Science.gov (United States)

    Lin, Shangfei; Sheng, Jinyu

    2017-12-01

    Depth-induced wave breaking is the primary dissipation mechanism for ocean surface waves in shallow waters. Different parameterizations have been developed to represent the depth-induced wave breaking process in ocean surface wave models. The performance of six commonly used parameterizations in simulating significant wave heights (SWHs) is assessed in this study. The main differences between these six parameterizations are their representations of the breaker index and the fraction of breaking waves. Laboratory and field observations consisting of 882 cases from 14 sources of published observational data are used in the assessment. We demonstrate that the six parameterizations have reasonable performance in parameterizing depth-induced wave breaking in shallow waters, but each with its own limitations and drawbacks. The widely used parameterization suggested by Battjes and Janssen (1978, BJ78) has a drawback of underpredicting the SWHs in locally generated wave conditions and overpredicting them in remotely generated wave conditions over flat bottoms. The drawback of BJ78 was addressed by a parameterization suggested by Salmon et al. (2015, SA15). But SA15 had relatively larger errors in SWHs over sloping bottoms than BJ78. We follow SA15 and propose a new parameterization with a dependence of the breaker index on the normalized water depth in deep waters similar to SA15. In shallow waters, the breaker index of the new parameterization has a nonlinear dependence on the local bottom slope rather than the linear dependence used in SA15. Overall, this new parameterization has the best performance, with an average scatter index of ∼8.2% in comparison with the three best-performing existing parameterizations with average scatter indices between 9.2% and 13.6%.
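    For reference, BJ78 closes its dissipation term with the fraction of breaking waves Q_b obtained from the implicit relation (1 - Q_b)/(-ln Q_b) = (Hrms/Hmax)^2. A minimal sketch solving it by bisection (variable names are ours):

```python
import math

def breaking_fraction(hrms, hmax, tol=1e-10):
    """Fraction of breaking waves Q_b from the implicit Battjes-Janssen
    (1978) relation (1 - Q_b) / (-ln Q_b) = (Hrms / Hmax)^2,
    solved by bisection on (0, 1)."""
    b2 = (hrms / hmax) ** 2
    if b2 >= 1.0:                 # saturated surf zone: all waves breaking
        return 1.0
    f = lambda q: (1.0 - q) / (-math.log(q)) - b2
    lo, hi = 1e-15, 1.0 - 1e-15   # f(lo) < 0 < f(hi); f is monotone
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if f(lo) * f(mid) <= 0.0:
            hi = mid
        else:
            lo = mid
        if hi - lo < tol:
            break
    return 0.5 * (lo + hi)
```

    The breaker index of each parameterization enters through Hmax (typically Hmax = gamma * h for local depth h), so the six schemes assessed above differ mainly in how gamma, and hence Q_b, is computed.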

  8. A clinically parameterized mathematical model of Shigella immunity to inform vaccine design.

    Directory of Open Access Journals (Sweden)

    Courtney L Davis

    Full Text Available We refine and clinically parameterize a mathematical model of the humoral immune response against Shigella, a diarrheal bacteria that infects 80-165 million people and kills an estimated 600,000 people worldwide each year. Using Latin hypercube sampling and Monte Carlo simulations for parameter estimation, we fit our model to human immune data from two Shigella EcSf2a-2 vaccine trials and a rechallenge study in which antibody and B-cell responses against Shigella's lipopolysaccharide (LPS) and O-membrane proteins (OMP) were recorded. The clinically grounded model is used to mathematically investigate which key immune mechanisms and bacterial targets confer immunity against Shigella and to predict which humoral immune components should be elicited to create a protective vaccine against Shigella. The model offers insight into why the EcSf2a-2 vaccine had low efficacy and demonstrates that at a group level a humoral immune response induced by EcSf2a-2 vaccine or wild-type challenge against Shigella's LPS or OMP does not appear sufficient for protection. That is, the model predicts an uncontrolled infection of gut epithelial cells that is present across all best-fit model parameterizations when fit to EcSf2a-2 vaccine or wild-type challenge data. Using sensitivity analysis, we explore which model parameter values must be altered to prevent the destructive epithelial invasion by Shigella bacteria and identify four key parameter groups as potential vaccine targets or immune correlates: (1) the rate that Shigella migrates into the lamina propria or epithelium, (2) the rate that memory B cells (BM) differentiate into antibody-secreting cells (ASC), (3) the rate at which antibodies are produced by activated ASC, and (4) the Shigella-specific BM carrying capacity. This paper underscores the need for a multifaceted approach in ongoing efforts to design an effective Shigella vaccine.

  9. A clinically parameterized mathematical model of Shigella immunity to inform vaccine design.

    Science.gov (United States)

    Davis, Courtney L; Wahid, Rezwanul; Toapanta, Franklin R; Simon, Jakub K; Sztein, Marcelo B

    2018-01-01

    We refine and clinically parameterize a mathematical model of the humoral immune response against Shigella, a diarrheal bacteria that infects 80-165 million people and kills an estimated 600,000 people worldwide each year. Using Latin hypercube sampling and Monte Carlo simulations for parameter estimation, we fit our model to human immune data from two Shigella EcSf2a-2 vaccine trials and a rechallenge study in which antibody and B-cell responses against Shigella's lipopolysaccharide (LPS) and O-membrane proteins (OMP) were recorded. The clinically grounded model is used to mathematically investigate which key immune mechanisms and bacterial targets confer immunity against Shigella and to predict which humoral immune components should be elicited to create a protective vaccine against Shigella. The model offers insight into why the EcSf2a-2 vaccine had low efficacy and demonstrates that at a group level a humoral immune response induced by EcSf2a-2 vaccine or wild-type challenge against Shigella's LPS or OMP does not appear sufficient for protection. That is, the model predicts an uncontrolled infection of gut epithelial cells that is present across all best-fit model parameterizations when fit to EcSf2a-2 vaccine or wild-type challenge data. Using sensitivity analysis, we explore which model parameter values must be altered to prevent the destructive epithelial invasion by Shigella bacteria and identify four key parameter groups as potential vaccine targets or immune correlates: 1) the rate that Shigella migrates into the lamina propria or epithelium, 2) the rate that memory B cells (BM) differentiate into antibody-secreting cells (ASC), 3) the rate at which antibodies are produced by activated ASC, and 4) the Shigella-specific BM carrying capacity. This paper underscores the need for a multifaceted approach in ongoing efforts to design an effective Shigella vaccine.
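    The Latin hypercube step used for parameter estimation can be sketched with the standard library alone; a minimal, hypothetical implementation (not the authors' code):

```python
import random

def latin_hypercube(n_samples, bounds, seed=0):
    """Latin hypercube sample: for each parameter, divide its range into
    n_samples equal strata, draw one point per stratum, then shuffle the
    strata independently per dimension so rows pair strata at random.

    bounds is a list of (lo, hi) tuples, one per parameter."""
    rng = random.Random(seed)
    samples = [[0.0] * len(bounds) for _ in range(n_samples)]
    for j, (lo, hi) in enumerate(bounds):
        strata = list(range(n_samples))
        rng.shuffle(strata)
        for i in range(n_samples):
            u = (strata[i] + rng.random()) / n_samples  # point within its stratum
            samples[i][j] = lo + u * (hi - lo)
    return samples
```

    Compared with plain Monte Carlo, stratifying each parameter guarantees the whole range of every parameter is covered even with modest sample counts, which matters when each sample requires fitting an immune model.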

  10. The CCPP-ARM Parameterization Testbed (CAPT): Where Climate Simulation Meets Weather Prediction

    Energy Technology Data Exchange (ETDEWEB)

    Phillips, T J; Potter, G L; Williamson, D L; Cederwall, R T; Boyle, J S; Fiorino, M; Hnilo, J J; Olson, J G; Xie, S; Yio, J J

    2003-11-21

    To significantly improve the simulation of climate by general circulation models (GCMs), systematic errors in representations of relevant processes must first be identified, and then reduced. This endeavor demands, in particular, that the GCM parameterizations of unresolved processes should be tested over a wide range of time scales, not just in climate simulations. Thus, a numerical weather prediction (NWP) methodology for evaluating model parameterizations and gaining insights into their behavior may prove useful, provided that suitable adaptations are made for implementation in climate GCMs. This method entails the generation of short-range weather forecasts by a realistically initialized climate GCM, and the application of six-hourly NWP analyses and observations of parameterized variables to evaluate these forecasts. The behavior of the parameterizations in such a weather-forecasting framework can provide insights on how these schemes might be improved, and modified parameterizations can then be similarly tested. In order to further this method for evaluating and analyzing parameterizations in climate GCMs, the USDOE is funding a joint venture of its Climate Change Prediction Program (CCPP) and Atmospheric Radiation Measurement (ARM) Program: the CCPP-ARM Parameterization Testbed (CAPT). This article elaborates the scientific rationale for CAPT, discusses technical aspects of its methodology, and presents examples of its implementation in a representative climate GCM. Numerical weather prediction methods show promise for improving parameterizations in climate GCMs.
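    The scoring step of such a testbed can be sketched as computing the error of a parameterized field against analyses at each 6-hourly lead time, averaged over a set of short-range forecasts; a minimal illustration (the data layout is assumed):

```python
import math

def rmse_by_lead(forecasts, analyses):
    """RMSE of a parameterized field at each lead step (e.g. 6-hourly),
    averaged over a set of short-range forecasts (CAPT-style scoring).
    forecasts[k][t] and analyses[k][t] index forecast k, lead step t."""
    n_lead = len(forecasts[0])
    out = []
    for t in range(n_lead):
        errs = [(f[t] - a[t]) ** 2 for f, a in zip(forecasts, analyses)]
        out.append(math.sqrt(sum(errs) / len(errs)))
    return out
```

    Because the forecasts are initialized from realistic analyses, error growth with lead time can be attributed to fast parameterized processes before slow climate drift contaminates the signal, which is the rationale the article elaborates.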

  11. A shallow convection parameterization for the non-hydrostatic MM5 mesoscale model

    Energy Technology Data Exchange (ETDEWEB)

    Seaman, N.L.; Kain, J.S.; Deng, A. [Pennsylvania State Univ., University Park, PA (United States)

    1996-04-01

    A shallow convection parameterization suitable for the Pennsylvania State University (PSU)/National Center for Atmospheric Research nonhydrostatic mesoscale model (MM5) is being developed at PSU. The parameterization is based on parcel perturbation theory developed in conjunction with a 1-D Mellor-Yamada 1.5-order planetary boundary layer scheme and the Kain-Fritsch deep convection model.

  12. Distance parameterization for efficient seismic history matching with the ensemble Kalman Filter

    NARCIS (Netherlands)

    Leeuwenburgh, O.; Arts, R.

    2012-01-01

    The Ensemble Kalman Filter (EnKF), in combination with travel-time parameterization, provides a robust and flexible method for quantitative multi-model history matching to time-lapse seismic data. A disadvantage of the parameterization in terms of travel-times is that it requires simulation of

  13. The Nuisance of Nuisance Regression: Spectral Misspecification in a Common Approach to Resting-State fMRI Preprocessing Reintroduces Noise and Obscures Functional Connectivity

    OpenAIRE

    Hallquist, Michael N.; Hwang, Kai; Luna, Beatriz

    2013-01-01

    Recent resting-state functional connectivity fMRI (RS-fcMRI) research has demonstrated that head motion during fMRI acquisition systematically influences connectivity estimates despite bandpass filtering and nuisance regression, which are intended to reduce such nuisance variability. We provide evidence that the effects of head motion and other nuisance signals are poorly controlled when the fMRI time series are bandpass-filtered but the regressors are unfiltered, resulting in the inadvertent...

  14. Comparing parameterized versus measured microphysical properties of tropical convective cloud bases during the ACRIDICON–CHUVA campaign

    Directory of Open Access Journals (Sweden)

    R. C. Braga

    2017-06-01

    Full Text Available The objective of this study is to validate parameterizations that were recently developed for satellite retrievals of cloud condensation nuclei supersaturation spectra, NCCN(S), at cloud base alongside more traditional parameterizations connecting NCCN(S) with cloud base updrafts and drop concentrations. This was based on the HALO aircraft measurements during the ACRIDICON–CHUVA campaign over the Amazon region, which took place in September 2014. The properties of convective clouds were measured with a cloud combination probe (CCP), a cloud and aerosol spectrometer (CAS-DPOL), and a CCN counter onboard the HALO aircraft. An intercomparison of the cloud drop size distributions (DSDs) and the cloud water content (CWC) derived from the different instruments generally shows good agreement within the instrumental uncertainties. To this end, the directly measured cloud drop concentrations (Nd) near cloud base were compared with inferred values based on the measured cloud base updraft velocity (Wb) and NCCN(S) spectra. The measurements of Nd at cloud base were also compared with drop concentrations (Na) derived on the basis of an adiabatic assumption and obtained from the vertical evolution of cloud drop effective radius (re) above cloud base. The measurements of NCCN(S) and Wb reproduced the observed Nd within the measurement uncertainties when the old (1959) Twomey parameterization was used. The agreement between the measured and calculated Nd was only within a factor of 2 with attempts to use cloud base S, as obtained from the measured Wb, Nd, and NCCN(S). This underscores the yet unresolved challenge of aircraft measurements of S in clouds. Importantly, the vertical evolution of re with height reproduced the observation-based nearly adiabatic cloud base drop concentrations, Na. The combination of these results provides aircraft observational support for the various components of the satellite-retrieved methodology that was recently developed to

  15. On the Dependence of Cloud Feedbacks on Physical Parameterizations in WRF Aquaplanet Simulations

    Science.gov (United States)

    Cesana, Grégory; Suselj, Kay; Brient, Florent

    2017-10-01

    We investigate the effects of physical parameterizations on cloud feedback uncertainty in response to climate change. For this purpose, we construct an ensemble of eight aquaplanet simulations using the Weather Research and Forecasting (WRF) model. In each WRF-derived simulation, we replace only one parameterization at a time while all other parameters remain identical. By doing so, we aim to (i) reproduce cloud feedback uncertainty from state-of-the-art climate models and (ii) understand how parameterizations impact cloud feedbacks. Our results demonstrate that this ensemble of WRF simulations, which differ only in physical parameterizations, replicates the range of cloud feedback uncertainty found in state-of-the-art climate models. We show that microphysics and convective parameterizations govern the magnitude and sign of cloud feedbacks, mostly due to tropical low-level clouds in subsidence regimes. Finally, this study highlights the advantages of using WRF to analyze cloud feedback mechanisms owing to its plug-and-play parameterization capability.
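    In WRF, this plug-and-play swap amounts to changing a single entry in the &physics namelist between ensemble members. An illustrative fragment (the option numbers shown are standard WRF choices, e.g. Thompson microphysics and Kain-Fritsch convection; they are not necessarily those used in this study):

```
&physics
 mp_physics     = 8,   ! microphysics scheme (e.g. 8 = Thompson)
 cu_physics     = 1,   ! cumulus scheme (e.g. 1 = Kain-Fritsch)
 bl_pbl_physics = 1,   ! PBL scheme (e.g. 1 = YSU)
 ra_lw_physics  = 4,   ! longwave radiation (e.g. 4 = RRTMG)
 ra_sw_physics  = 4,   ! shortwave radiation (e.g. 4 = RRTMG)
/
```

    Varying exactly one of these integers per member, while holding the rest fixed, is what isolates the contribution of each parameterization to the spread in cloud feedbacks.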

  16. Cosmic Connections

    CERN Document Server

    Ellis, Jonathan Richard

    2003-01-01

    A National Research Council study on connecting quarks with the cosmos has recently posed a number of the more important open questions at the interface between particle physics and cosmology. These questions include the nature of dark matter and dark energy, how the Universe began, modifications to gravity, the effects of neutrinos on the Universe, how cosmic accelerators work, and whether there are new states of matter at high density and pressure. These questions are discussed in the context of the talks presented at this Summer Institute.

  17. Field Investigation of the Turbulent Flux Parameterization and Scalar Turbulence Structure over a Melting Valley Glacier

    Science.gov (United States)

    Guo, X.; Yang, K.; Yang, W.; Li, S.; Long, Z.

    2011-12-01

    We present a field investigation over a melting valley glacier on the Tibetan Plateau. One particular aspect is that three melt phases are distinguished during the glacier's ablation season, which enables us to compare results over snow, bare-ice, and hummocky surfaces [with aerodynamic roughness lengths (z0M) varying on the order of 10^-4 to 10^-2 m]. We address two issues of common concern in the study of glacio-meteorology and micrometeorology. First, we study turbulent energy flux estimation through a critical evaluation of three parameterizations of the scalar roughness lengths (z0T for temperature and z0q for humidity), viz. key factors for the accurate estimation of sensible heat and latent heat fluxes using the bulk aerodynamic method. The first approach (Andreas 1987, Boundary-Layer Meteorol 38:159-184) is based on surface-renewal models and has been very widely applied in glaciated areas; the second (Yang et al. 2002, Q J Roy Meteorol Soc 128:2073-2087) has never received application over an ice/snow surface, despite its validity in arid regions; the third approach (Smeets and van den Broeke 2008, Boundary-Layer Meteorol 128:339-355) is proposed for use specifically over rough ice defined as z0M > 10^-3 m or so. This empirical z0M threshold value is deemed of general relevance to glaciated areas (e.g. ice sheet/cap and valley/outlet glaciers), above which the first approach gives underestimated z0T and z0q. The first and the third approaches tend to underestimate and overestimate turbulent heat/moisture exchange, respectively (relative errors often > 30%). Overall, the second approach produces fairly low errors in energy flux estimates; it thus emerges as a practically useful choice to parameterize z0T and z0q over an ice/snow surface. Our evaluation of z0T and z0q parameterizations hopefully serves as a useful source of reference for physically based modeling of land-ice surface energy budget and mass balance. Second, we explore how scalar turbulence
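    The first approach's surface-renewal form can be sketched as follows; the coefficients are the commonly quoted Andreas (1987) rough-regime values for temperature and should be checked against the paper before serious use:

```python
import math

NU_AIR = 1.4e-5  # kinematic viscosity of air, m^2/s (approximate)

def z0t_andreas_rough(u_star, z0m):
    """Scalar (temperature) roughness length from the Andreas (1987)
    surface-renewal model, rough regime (roughness Reynolds number
    R* = u* z0M / nu roughly between 2.5 and 1000):

        ln(z0T / z0M) = 0.317 - 0.565 ln R* - 0.183 (ln R*)^2

    Coefficients quoted from the literature for illustration."""
    r_star = u_star * z0m / NU_AIR
    ln_ratio = 0.317 - 0.565 * math.log(r_star) - 0.183 * math.log(r_star) ** 2
    return z0m * math.exp(ln_ratio)
```

    Over rough ice the predicted z0T falls orders of magnitude below z0M, which is why the choice of scalar-roughness parameterization dominates the bulk-aerodynamic heat flux estimates the abstract evaluates.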

  18. Parameterized Algorithms for Survivable Network Design with Uniform Demands

    DEFF Research Database (Denmark)

    Bang-Jensen, Jørgen; Klinkby Knudsen, Kristine Vitting; Saurabh, Saket

    2018-01-01

    problem in combinatorial optimization that captures numerous well-studied problems in graph theory and graph algorithms. Consequently, there is a long line of research into exact-polynomial time algorithms as well as approximation algorithms for various restrictions of this problem. An important...... that SNDP is W[1]-hard for both arc and vertex connectivity versions on digraphs. The core of our algorithms is composed of new combinatorial results on connectivity in digraphs and undirected graphs....

  19. Sex- and habitat-specific movement of an omnivorous semi-terrestrial crab controls habitat connectivity and subsidies: a multi-parameter approach.

    Science.gov (United States)

    Hübner, Lena; Pennings, Steven C; Zimmer, Martin

    2015-08-01

    Distinct habitats are often linked through fluxes of matter and migration of organisms. In particular, intertidal ecotones are influenced by both the marine and the terrestrial realms, but whether small-scale migration for feeding, sheltering or reproduction is detectable may depend on the parameter studied. Within the ecotone of an upper saltmarsh in the United States, we investigated the sex-specific movement of the semi-terrestrial crab Armases cinereum by determining multiple measures of across-ecotone migration. To this end, we determined food preference, digestive abilities (enzyme activities), bacterial hindgut communities (genetic fingerprints), and the trophic position of Armases and potential food sources (stable isotopes) of males versus females from different sub-habitats, namely high saltmarsh and coastal forest. Daily observations showed that Armases moved frequently between high-intertidal (saltmarsh) and terrestrial (forest) habitats. Males were encountered more often in the forest habitat, whilst gravid females tended to be more abundant in the marsh habitat but moved more frequently. Food preference was driven by both sex and habitat. The needlerush Juncus was preferred over three other high-marsh detrital food sources, and the periwinkle Littoraria was the preferred prey of male (but not female) crabs from the forest habitat; both male and female crabs from the marsh habitat preferred the fiddler crab Uca over three other prey items. In the field, the major food sources were clearly vegetal, but males had a higher trophic position than females. In contrast to the food-preference results, isotope data excluded Uca and Littoraria as major food sources, except for males from the forest, and suggested that Armases consumes a mix of C4 and C3 plants along with animal prey. Digestive enzyme activities differed significantly between sexes and habitats and were higher in females and in marsh crabs. The bacterial hindgut community

  20. Comparing habitat suitability and connectivity modeling methods for conserving pronghorn migrations.

    Directory of Open Access Journals (Sweden)

    Erin E Poor

    Full Text Available Terrestrial long-distance migrations are declining globally: in North America, nearly 75% have been lost. Yet there has been limited research comparing habitat suitability and connectivity models to identify migration corridors across increasingly fragmented landscapes. Here we use pronghorn (Antilocapra americana) migrations in prairie habitat to compare two types of models that identify habitat suitability: maximum entropy (Maxent) and expert-based (Analytic Hierarchy Process). We used distance to wells, distance to water, NDVI, land cover, distance to roads, terrain shape and fence presence to parameterize the models. We then used the output of these models as cost surfaces to compare two common connectivity models, least-cost modeling (LCM) and circuit theory. Using pronghorn movement data from spring and fall migrations, we identified potential migration corridors by combining each habitat suitability model with each connectivity model. The best performing model combination was Maxent with LCM corridors across both seasons. Maxent out-performed expert-based habitat suitability models for both spring and fall migrations. However, expert-based corridors can perform relatively well and are a cost-effective alternative if species location data are unavailable. Corridors created using LCM out-performed circuit theory, as measured by the number of pronghorn GPS locations present within the corridors. We suggest the use of a tiered approach using different corridor widths for prioritizing conservation and mitigation actions, such as fence removal or conservation easements.

  1. Comparing habitat suitability and connectivity modeling methods for conserving pronghorn migrations.

    Science.gov (United States)

    Poor, Erin E; Loucks, Colby; Jakes, Andrew; Urban, Dean L

    2012-01-01

    Terrestrial long-distance migrations are declining globally: in North America, nearly 75% have been lost. Yet there has been limited research comparing habitat suitability and connectivity models to identify migration corridors across increasingly fragmented landscapes. Here we use pronghorn (Antilocapra americana) migrations in prairie habitat to compare two types of models that identify habitat suitability: maximum entropy (Maxent) and expert-based (Analytic Hierarchy Process). We used distance to wells, distance to water, NDVI, land cover, distance to roads, terrain shape and fence presence to parameterize the models. We then used the output of these models as cost surfaces to compare two common connectivity models, least-cost modeling (LCM) and circuit theory. Using pronghorn movement data from spring and fall migrations, we identified potential migration corridors by combining each habitat suitability model with each connectivity model. The best performing model combination was Maxent with LCM corridors across both seasons. Maxent out-performed expert-based habitat suitability models for both spring and fall migrations. However, expert-based corridors can perform relatively well and are a cost-effective alternative if species location data are unavailable. Corridors created using LCM out-performed circuit theory, as measured by the number of pronghorn GPS locations present within the corridors. We suggest the use of a tiered approach using different corridor widths for prioritizing conservation and mitigation actions, such as fence removal or conservation easements.
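
The least-cost modeling (LCM) step above can be illustrated with a minimal sketch: the habitat suitability output is treated as a cost surface, and a least-cost route is extracted with Dijkstra's algorithm on a 4-connected grid. The grid values and function name are hypothetical; real corridor analyses run on raster GIS layers.

```python
import heapq

def least_cost_path(cost, start, goal):
    """4-connected Dijkstra over a cost grid derived from a habitat
    suitability surface (e.g. inverted Maxent output). Returns the
    least-cost route and its accumulated cost (including both the
    start and goal cells)."""
    rows, cols = len(cost), len(cost[0])
    dist = {start: cost[start[0]][start[1]]}
    prev = {}
    pq = [(dist[start], start)]
    while pq:
        d, (r, c) = heapq.heappop(pq)
        if (r, c) == goal:
            break
        if d > dist.get((r, c), float("inf")):
            continue  # stale queue entry
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols:
                nd = d + cost[nr][nc]
                if nd < dist.get((nr, nc), float("inf")):
                    dist[(nr, nc)] = nd
                    prev[(nr, nc)] = (r, c)
                    heapq.heappush(pq, (nd, (nr, nc)))
    # walk back from the goal to recover the corridor centreline
    path, node = [goal], goal
    while node != start:
        node = prev[node]
        path.append(node)
    return path[::-1], dist[goal]

# Toy cost surface: low-cost cells (1) with an expensive band (9)
grid = [[1, 1, 1],
        [9, 9, 1],
        [1, 1, 1]]
corridor, total_cost = least_cost_path(grid, (0, 0), (2, 2))
```

Widening such a route into a corridor of varying width is essentially the tiered approach the authors suggest for prioritizing mitigation.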

  2. Connecting Architecture and Implementation

    Science.gov (United States)

    Buchgeher, Georg; Weinreich, Rainer

    Software architectures are still typically defined and described independently of implementation. To avoid architectural erosion and drift, the architectural representation needs to be continuously updated and synchronized with the system implementation. Existing approaches to architecture representation, such as informal architecture documentation, UML diagrams, and Architecture Description Languages (ADLs), provide only limited support for connecting architecture descriptions and implementations. Architecture management tools like Lattix, SonarJ, and Sotoarc, as well as UML tools, tackle this problem by extracting architecture information directly from code. This approach works for low-level architectural abstractions like classes and interfaces in object-oriented systems but fails to support architectural abstractions not found in programming languages. In this paper we present an approach for linking and continuously synchronizing a formalized architecture representation with an implementation. The approach is a synthesis of the functionality provided by code-centric architecture management and UML tools and higher-level architecture analysis approaches like ADLs.

  3. Places Connected:

    DEFF Research Database (Denmark)

    Hansen, Annette Skovsted

    This paper argues that development assistance contributed to the globalization of the 20th century by financing truly global networks of people. By focusing on the networks financed by development assistance bound by the national histories of Denmark and Japan, I illustrate how the people who...... experiences of place, however, when it is often the same people who experience many different places? Along with many other so-called donors in the 1950s, Denmark and Japan chose to invest in the education of their own and other nationals involved in development and thereby financed personal connections between...... individuals throughout the world. Development assistance, where there are only two or three links between a Bangladeshi farmer, a street child in Sao Paolo and the President of the United States, the Queen of Denmark, or a suburban housewife in Japan, who has never left the Osaka area, but mothered a United...

  4. Experimental continuously reinforced concrete pavement parameterization using nondestructive methods

    Directory of Open Access Journals (Sweden)

    L. S. Salles

    Full Text Available ABSTRACT Four continuously reinforced concrete pavement (CRCP) sections were built at the University of São Paulo campus in order to analyze pavement performance in a tropical environment. The sections' short length, coupled with particular design aspects, caused the experimental CRCP cracking pattern to differ from that of traditional CRCP. Three years after construction, a series of nondestructive tests - Falling Weight Deflectometer (FWD) loadings - were performed to verify and parameterize the pavement structural condition based on two main properties: the elasticity modulus of concrete (E) and the modulus of subgrade reaction (k). These properties were estimated by matching measured deflection basins with EverFE-simulated basins, with the load applied at the slab center between two consecutive cracks. The backcalculation results show that the lack of anchorage at the section ends decreases the E and k values and that the longitudinal reinforcement percentage provides additional stiffness to the pavement. Additionally, FWD loadings tangential to the cracks allowed determination of the load transfer efficiency (LTE) across cracks. The LTE resulted in values above 90 % for all cracks.

  5. Parameterization-based tracking for the P2 experiment

    Science.gov (United States)

    Sorokin, Iurii

    2017-08-01

    The P2 experiment in Mainz aims to determine the weak mixing angle θW at low momentum transfer by measuring the parity-violating asymmetry of elastic electron-proton scattering. In order to achieve the intended precision of Δ(sin²θW)/sin²θW = 0.13% within the planned 10 000 hours of running, the experiment has to operate at a rate of 10¹¹ detected electrons per second. Although it is not required to measure the kinematic parameters of each individual electron, every attempt is made to achieve the highest possible throughput in the track reconstruction chain. In the present work a parameterization-based track reconstruction method is described. It is a variation of track following, where the results of the computation-heavy steps, namely the propagation of a track to the subsequent detector plane and the fitting, are pre-calculated and expressed in terms of parametric analytic functions. This makes the algorithm extremely fast and well-suited for an implementation on an FPGA. The method also implicitly takes into account the actual phase-space distribution of the tracks already at the stage of candidate construction. Compared to a simple algorithm that does not use such information, this allows reducing the combinatorial background by many orders of magnitude, down to O(1) background candidates per signal track. The method is developed specifically for the P2 experiment in Mainz, and the presented implementation is tightly coupled to the experimental conditions.

  6. Parameterization-based tracking for the P2 experiment

    Energy Technology Data Exchange (ETDEWEB)

    Sorokin, Iurii [Institut fuer Kernphysik and PRISMA Cluster of Excellence, Mainz (Germany); Collaboration: P2-Collaboration

    2016-07-01

    The P2 experiment at the new MESA accelerator in Mainz aims to determine the weak mixing angle by measuring the parity-violating asymmetry in elastic electron-proton scattering at low momentum transfer. To achieve an unprecedented precision, on the order of 10¹¹ scattered electrons per second have to be acquired. Whereas the tracking system is not required to operate at such high rates, every attempt is made to achieve as high a rate capability as possible. The P2 tracking system will consist of four planes of high-voltage monolithic active pixel sensors (HV-MAPS). With the present preliminary design one expects about 150 signal electron tracks and 20000 background hits (from bremsstrahlung photons) per plane in every 50 ns readout frame at the full rate. In order to cope with this extreme combinatorial background in on-line mode, parameterization-based tracking is considered as a possible solution. The idea is to transform the hit positions into a set of weakly correlated quantities, and to find simple (e.g. polynomial) functions of these quantities that give the required characteristics of the track (e.g. momentum). The parameters of the functions are determined from a sample of high-quality tracks, taken either from a simulation or reconstructed in a conventional way from a sample of low-rate data.
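
The idea of pre-computing simple polynomial functions from a sample of high-quality tracks can be sketched with a toy one-dimensional example. The bending model, constants, noise level, and polynomial degree below are illustrative stand-ins, not the P2 geometry.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in: a track's measured bending between detector planes
# scales like 1/p plus measurement noise. k and the noise level are
# arbitrary illustrative values.
def simulate(n, k=0.3, noise=1.0e-4):
    p = rng.uniform(20.0, 60.0, n)            # true momentum (arb. units)
    bend = k / p + rng.normal(0.0, noise, n)  # measured deflection
    return bend, p

# "Offline" step: fit a simple polynomial p ~ f(bend) on a sample of
# high-quality tracks, standing in for the pre-computed propagation
# and fit tables described above.
bend_train, p_train = simulate(10_000)
coeffs = np.polyfit(bend_train, p_train, deg=4)

# "Online" step: momentum estimation is now a single polynomial
# evaluation, cheap enough for an FPGA-style pipeline.
bend_test, p_test = simulate(1_000)
p_hat = np.polyval(coeffs, bend_test)
rms = float(np.sqrt(np.mean((p_hat - p_test) ** 2)))
```

The expensive fit happens once on clean training tracks; each online candidate then costs only a handful of multiply-adds.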

  7. Factors influencing the parameterization of anvil clouds within GCMs

    International Nuclear Information System (INIS)

    Leone, J.M. Jr.; Chin, Hung-Neng.

    1993-03-01

    The overall goal of this project is to improve the representation of clouds and their effects within global climate models (GCMs). The authors have concentrated on a small portion of the overall goal, the evolution of convectively generated cirrus clouds and their effects on the large-scale environment. Because of the large range of time and length scales involved, they have been using a multi-scale attack. For the early-time generation and development of the cirrus anvil they are using a cloud-scale model with a horizontal resolution of 1-2 kilometers, while for transport by the larger-scale flow they are using a mesoscale model with a horizontal resolution of 20-60 kilometers. The eventual goal is to use the information obtained from these simulations, together with available observations, to derive improved cloud parameterizations for use in GCMs. This paper presents results from their cloud-scale studies and describes a new tool, a cirrus generator, that they have developed to aid in their mesoscale studies.

  8. Influence of Ice Nuclei Parameterization Schemes on the Hail Process

    Directory of Open Access Journals (Sweden)

    Xiaoli Liu

    2018-01-01

    Full Text Available Ice nuclei are very important factors, as they significantly affect the development and evolution of convective clouds such as hail clouds. In this study, numerical simulations of hail processes in the Zhejiang Province were conducted using a mesoscale numerical model (WRF v3.4). The effects of six ice nuclei parameterization schemes on the macroscopic and microscopic structures of hail clouds were compared. The effect of the ice nuclei concentration on ground hailfall is stronger than that on ground rainfall, with significant differences in the spatiotemporal pattern, intensity, and distribution of hailfall. Changes in the ice nuclei concentration caused different changes in hydrometeors and directly affected the ice crystals and, hence, the spatiotemporal distribution of other hydrometeors and the thermodynamic structure of clouds. An increased ice nuclei concentration raises the initial ice crystal concentration and mixing ratio. In the developing and early maturation stages of a hail cloud, a larger number of ice crystals competed for water vapor as the ice nuclei concentration increased. This effect prevents ice crystals from maturing into snow particles and inhibits the formation and growth of hail embryos. During later maturation stages, updraft in the cloud intensified and more supercooled water was transported above the 0°C level, benefiting the production and growth of hail particles. An increased ice nuclei concentration therefore favors the formation of hail.

  9. Frozen soil parameterization in a distributed biosphere hydrological model

    Directory of Open Access Journals (Sweden)

    L. Wang

    2010-03-01

    Full Text Available In this study, a frozen soil parameterization has been modified and incorporated into a distributed biosphere hydrological model (WEB-DHM). The WEB-DHM with the frozen scheme was then rigorously evaluated in a small cold area, the Binngou watershed, against in-situ observations from WATER (Watershed Allied Telemetry Experimental Research). First, using the original WEB-DHM without the frozen scheme, the land surface parameters and two van Genuchten parameters were optimized using the observed surface radiation fluxes and the soil moistures at the upper layers (5, 10 and 20 cm depths) at the DY station in July. Second, using the WEB-DHM with the frozen scheme, two frozen soil parameters were calibrated using the observed soil temperature at 5 cm depth at the DY station from 21 November 2007 to 20 April 2008, while the other soil hydraulic parameters were optimized by calibration against the discharges at the basin outlet in July and August, covering the largest annual flood peak in 2008. With these calibrated parameters, the WEB-DHM with the frozen scheme was then used for a yearlong validation from 21 November 2007 to 20 November 2008. Results showed that the WEB-DHM with the frozen scheme gave much better performance than the WEB-DHM without it, in simulating both the soil moisture profile in this cold-region catchment and the discharges at the basin outlet over the yearlong simulation.

  10. Boundary layer parameterizations and long-range transport

    International Nuclear Information System (INIS)

    Irwin, J.S.

    1992-01-01

    A joint work group between the American Meteorological Society (AMS) and the EPA is pursuing the construction of an air quality model that incorporates boundary layer parameterizations of dispersion and transport. This model could replace the currently accepted model, the Industrial Source Complex (ISC) model. The ISC model is a Gaussian-plume multiple point-source model that provides for consideration of fugitive emissions, aerodynamic wake effects, gravitational settling and dry deposition. A work group of several Federal and State agencies is pursuing the construction of an air quality modeling system for use in assessing and tracking visibility impairment resulting from long-range transport of pollutants. The modeling system is designed to use the hourly vertical profiles of wind, temperature and moisture resulting from a mesoscale meteorological processor that employs four-dimensional data assimilation (FDDA). FDDA involves adding forcing functions to the governing model equations to gradually ''nudge'' the model state toward the observations (12-hourly upper-air observations of wind, temperature and moisture, and 3-hourly surface observations of wind and moisture). In this way it is possible to generate data sets whose accuracy, in terms of transport, precipitation, and dynamic consistency, is superior to both direct interpolation of synoptic-scale analyses of observations and purely predictive-mode model results. (AB) (19 refs.)
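
The nudging term at the heart of FDDA can be illustrated with a toy scalar model: a forcing proportional to (observation - model state) is added to the model equation, gradually pulling the forecast toward the analyses. The dynamics, nudging coefficient, and values below are hypothetical.

```python
# Toy scalar "model" with a Newtonian-relaxation (nudging) term.
# A real FDDA system applies this to the full governing equations
# of a mesoscale model; here a single decaying variable stands in.

def integrate(x0, obs, g, dt=0.1, steps=100):
    """Euler integration of dx/dt = -0.1*x + g*(obs - x)."""
    x = x0
    for _ in range(steps):
        tendency = -0.1 * x           # the model's own dynamics
        nudging = g * (obs - x)       # FDDA forcing function
        x += dt * (tendency + nudging)
    return x

free_run = integrate(x0=10.0, obs=2.0, g=0.0)  # no assimilation
nudged = integrate(x0=10.0, obs=2.0, g=0.5)    # relaxed toward the obs
```

With g = 0 the model drifts according to its own dynamics alone; with g > 0 the state ends up much closer to the observed value, which is the sense in which nudged data sets track observations better than a purely predictive run.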

  11. Parameterized Finite Element Modeling and Buckling Analysis of Six Typical Composite Grid Cylindrical Shells

    Science.gov (United States)

    Lai, Changliang; Wang, Junbiao; Liu, Chuang

    2014-10-01

    Six typical composite grid cylindrical shells are constructed by superimposing three basic types of ribs. The buckling behavior and structural efficiency of these shells are then analyzed under axial compression, pure bending, torsion and transverse bending using finite element (FE) models. The FE models are created by a parametric FE modeling approach that defines FE models with the original natural twisted geometry and orients the cross-sections of beam elements exactly. The approach is parameterized and coded in the Patran Command Language (PCL). The FE modeling demonstrations indicate that the program enables efficient generation of FE models and facilitates parametric studies and design of grid shells. Using the program, the effects of helical angles on the buckling behavior of the six typical grid cylindrical shells are determined. The results indicate that the triangle grid and rotated triangle grid cylindrical shells are more efficient than the others under axial compression and pure bending, whereas under torsion and transverse bending the hexagon grid cylindrical shell is most efficient. Additionally, buckling mode shapes are compared, providing an understanding of composite grid cylindrical shells that is useful in the preliminary design of such structures.

  12. The feasibility of parameterizing four-state equilibria using relaxation dispersion measurements

    International Nuclear Information System (INIS)

    Li Pilong; Martins, Ilídio R. S.; Rosen, Michael K.

    2011-01-01

    Coupled equilibria play important roles in controlling information flow in biochemical systems, including allosteric molecules and multidomain proteins. In the simplest case, two equilibria are coupled to produce four interconverting states. In this study, we assessed the feasibility of determining the degree of coupling between two equilibria in a four-state system via relaxation dispersion measurements. A major bottleneck in this effort is the lack of efficient approaches to data analysis. To this end, we designed a strategy to efficiently evaluate the smoothness of the target function surface (TFS). Using this approach, we found that the TFS is very rough when fitting benchmark CPMG data to all adjustable variables of the four-state equilibria. After constraining a portion of the adjustable variables, which can often be achieved through independent biochemical manipulation of the system, the smoothness of TFS improves dramatically, although it is still insufficient to pinpoint the solution. The four-state equilibria can be finally solved with further incorporation of independent chemical shift information that is readily available. We also used Monte Carlo simulations to evaluate how well each adjustable parameter can be determined in a large kinetic and thermodynamic parameter space and how much improvement can be achieved in defining the parameters through additional measurements. The results show that in favorable conditions the combination of relaxation dispersion and biochemical manipulation allow the four-state equilibrium to be parameterized, and thus coupling strength between two processes to be determined.
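
The coupling of two equilibria into four states can be made concrete with a small sketch. The parameterization below (equilibrium constants K1, K2 and a coupling factor alpha) is a generic thermodynamic illustration, not the authors' fitting model.

```python
def four_state_populations(K1, K2, alpha):
    """Equilibrium populations of the four states produced by two
    coupled two-state equilibria. Relative to the reference state 00:
    state 10 has weight K1, state 01 has weight K2, and the doubly
    switched state 11 has weight alpha*K1*K2, where alpha quantifies
    the coupling (alpha = 1 means the equilibria are independent)."""
    weights = [1.0, K1, K2, alpha * K1 * K2]   # states 00, 10, 01, 11
    Z = sum(weights)                            # normalization
    return [w / Z for w in weights]

uncoupled = four_state_populations(1.0, 1.0, 1.0)   # all four states equal
coupled = four_state_populations(1.0, 1.0, 10.0)    # 11 state stabilized
```

Determining alpha (and the associated rates) from relaxation dispersion data is exactly the fitting problem whose target-function-surface roughness the study analyzes.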

  13. The parameterized post-Newtonian limit of bimetric theories of gravity

    International Nuclear Information System (INIS)

    Clifton, Timothy; Banados, Maximo; Skordis, Constantinos

    2010-01-01

    We consider the post-Newtonian limit of a general class of bimetric theories of gravity, in which both metrics are dynamical. The established parameterized post-Newtonian approach is followed as closely as possible, although new potentials are found that do not exist within the standard framework. It is found that these theories can evade solar system tests of post-Newtonian gravity remarkably well. We show that perturbations about Minkowski space in these theories contain both massless and massive degrees of freedom, and that in general there are two different types of massive mode, each with a different mass parameter. If both of these masses are sufficiently large then the predictions of the most general class of theories we consider are indistinguishable from those of general relativity, up to post-Newtonian order in a weak-field, low-velocity expansion. In the limit that the massive modes become massless, we find that these general theories do not exhibit a van Dam-Veltman-Zakharov-like discontinuity in their γ parameter, although there are discontinuities in other post-Newtonian parameters as the massless limit is approached. This smooth behaviour in γ is due to the discontinuities from each of the two different massive modes cancelling each other out. Such cancellations cannot occur in special cases with only one massive mode, such as the Isham-Salam-Strathdee theory.

  14. Biological engineering applications of feedforward neural networks designed and parameterized by genetic algorithms.

    Science.gov (United States)

    Ferentinos, Konstantinos P

    2005-09-01

    Two neural network (NN) applications in the field of biological engineering are developed, designed and parameterized by an evolutionary method based on the evolutionary process of genetic algorithms. The developed systems are a fault detection NN model and a predictive modeling NN system. An indirect or 'weak specification' representation was used for the encoding of NN topologies and training parameters into genes of the genetic algorithm (GA). Some a priori knowledge of the demands in network topology for specific application cases is required by this approach, so that the infinite search space of the problem is limited to some reasonable degree. Both one-hidden-layer and two-hidden-layer network architectures were explored by the GA. Except for the network architecture, each gene of the GA also encoded the type of activation functions in both hidden and output nodes of the NN and the type of minimization algorithm that was used by the backpropagation algorithm for the training of the NN. Both models achieved satisfactory performance, while the GA system proved to be a powerful tool that can successfully replace the problematic trial-and-error approach that is usually used for these tasks.
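
The indirect "weak specification" encoding can be sketched as follows. The gene fields mirror those described above (hidden-layer topology, activation function, training algorithm), but the value ranges, the GA loop, and especially the stand-in fitness function are illustrative assumptions.

```python
import random

random.seed(1)

# A priori bounded search space, as the paper requires: the gene
# encodes one- or two-hidden-layer topologies plus activation and
# trainer choices. All concrete values here are illustrative.
HIDDEN_SIZES = [2, 4, 8, 16, 32]
ACTIVATIONS = ["logistic", "tanh"]
TRAINERS = ["gradient-descent", "conjugate-gradient", "levenberg-marquardt"]

def random_gene():
    n_layers = random.choice([1, 2])          # one or two hidden layers
    return {
        "hidden": [random.choice(HIDDEN_SIZES) for _ in range(n_layers)],
        "activation": random.choice(ACTIVATIONS),
        "trainer": random.choice(TRAINERS),
    }

def mutate(gene):
    child = {"hidden": gene["hidden"][:],
             "activation": gene["activation"],
             "trainer": gene["trainer"]}
    field = random.choice(["hidden", "activation", "trainer"])
    if field == "hidden":
        i = random.randrange(len(child["hidden"]))
        child["hidden"][i] = random.choice(HIDDEN_SIZES)
    elif field == "activation":
        child["activation"] = random.choice(ACTIVATIONS)
    else:
        child["trainer"] = random.choice(TRAINERS)
    return child

def fitness(gene):
    # Stand-in only: the paper would decode the gene, train the network
    # with backpropagation, and score validation error. Here smaller
    # networks simply score higher, to keep the sketch self-contained.
    return -sum(gene["hidden"])

population = [random_gene() for _ in range(20)]
for _ in range(30):                            # generations
    population.sort(key=fitness, reverse=True)
    survivors = population[:10]                # truncation selection
    population = survivors + [mutate(random.choice(survivors))
                              for _ in range(10)]

best = max(population, key=fitness)
```

The point of the encoding is that every gene decodes to a valid, bounded network specification, so the GA replaces the trial-and-error search over topologies and training settings.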

  15. Global model comparison of heterogeneous ice nucleation parameterizations in mixed phase clouds

    Science.gov (United States)

    Yun, Yuxing; Penner, Joyce E.

    2012-04-01

    A new aerosol-dependent mixed phase cloud parameterization for deposition/condensation/immersion (DCI) ice nucleation and one for contact freezing are compared to the original formulations in a coupled general circulation model and aerosol transport model. The present-day cloud liquid and ice water fields and cloud radiative forcing are analyzed and compared to observations. The new DCI freezing parameterization changes the spatial distribution of the cloud water field. Significant changes are found in the cloud ice water fraction and in the middle cloud fractions. The new DCI freezing parameterization predicts less ice water path (IWP) than the original formulation, especially in the Southern Hemisphere. The smaller IWP leads to a less efficient Bergeron-Findeisen process resulting in a larger liquid water path, shortwave cloud forcing, and longwave cloud forcing. It is found that contact freezing parameterizations have a greater impact on the cloud water field and radiative forcing than the two DCI freezing parameterizations that we compared. The net solar flux at top of atmosphere and net longwave flux at the top of the atmosphere change by up to 8.73 and 3.52 W m⁻², respectively, due to the use of different DCI and contact freezing parameterizations in mixed phase clouds. The total climate forcing from anthropogenic black carbon/organic matter in mixed phase clouds is estimated to be 0.16-0.93 W m⁻² using the aerosol-dependent parameterizations. A sensitivity test with contact ice nuclei concentration in the original parameterization fit to that recommended by Young (1974) gives results that are closer to the new contact freezing parameterization.

  16. Elastic full-waveform inversion and parameterization analysis applied to walk-away vertical seismic profile data for unconventional (heavy oil) reservoir characterization

    Science.gov (United States)

    Pan, Wenyong; Innanen, Kristopher A.; Geng, Yu

    2018-03-01

    Seismic full-waveform inversion (FWI) methods hold strong potential to recover multiple subsurface elastic properties for hydrocarbon reservoir characterization. Simultaneously updating multiple physical parameters introduces the problem of interparameter tradeoff, arising from the covariance between different physical parameters, which increases the nonlinearity and uncertainty of multiparameter FWI. The coupling effects of different physical parameters are significantly influenced by model parameterization and acquisition arrangement. An appropriate choice of model parameterization is critical to successful field data applications of multiparameter FWI. The objective of this paper is to examine the performance of various model parameterizations in isotropic-elastic FWI with a walk-away vertical seismic profile (W-VSP) dataset for unconventional heavy oil reservoir characterization. Six model parameterizations are considered: velocity-density (α, β and ρ΄), modulus-density (κ, μ and ρ), Lamé-density (λ, μ΄ and ρ‴), impedance-density (IP, IS and ρ″), velocity-impedance-I (α΄, β΄ and IP′), and velocity-impedance-II (α″, β″ and IS′). We begin analyzing the interparameter tradeoff by making use of scattering radiation patterns, which is a common strategy for qualitative parameter resolution analysis. In this paper, we discuss the advantages and limitations of the scattering radiation patterns and recommend that interparameter tradeoffs be evaluated using interparameter contamination kernels, which provide quantitative, second-order measurements of the interparameter contaminations and can be constructed efficiently with an adjoint-state approach. Synthetic W-VSP isotropic-elastic FWI experiments in the time domain verify our conclusions about interparameter tradeoffs for various model parameterizations. Density profiles are most strongly influenced by the interparameter contaminations; depending on model parameterization, the inverted density
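
The relationships between the model parameterizations compared above follow from standard isotropic elasticity. A small sketch converting velocity-density values into the other parameter sets (input values are illustrative, SI units assumed):

```python
def reparameterize(alpha, beta, rho):
    """Convert the velocity-density parameterization (P velocity alpha,
    S velocity beta, density rho) into the other isotropic-elastic
    parameter sets named above. A sketch for intuition about the
    parameterizations, not inversion code."""
    mu = rho * beta ** 2                        # shear modulus
    lam = rho * (alpha ** 2 - 2.0 * beta ** 2)  # Lame's first parameter
    kappa = lam + 2.0 * mu / 3.0                # bulk modulus
    return {
        "modulus-density": (kappa, mu, rho),
        "lame-density": (lam, mu, rho),
        "impedance-density": (rho * alpha, rho * beta, rho),  # IP, IS
    }

# Illustrative heavy-oil-sand values (assumed, not from the paper)
params = reparameterize(alpha=2500.0, beta=1200.0, rho=2100.0)
```

Because each set is an exact nonlinear transform of the others, the parameter sets differ not in the physics they describe but in how gradients and interparameter contaminations distribute across them during inversion.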

  17. USING PARAMETERIZATION OF OBJECTS IN AUTODESK INVENTOR IN DESIGNING STRUCTURAL CONNECTORS

    Directory of Open Access Journals (Sweden)

    Gabriel Borowski

    2015-05-01

    Full Text Available The article presents the parameterization of objects used for designing elements such as structural connectors and for modifying their characteristics. The design process was carried out using Autodesk Inventor 2015. We show the latest software tools that were used for the parameterization and modeling of selected types of structural connectors. We also show examples of the use of parameterization facilities in the process of constructing details and making changes to geometry while holding the shape of the element. The presented method of using Inventor has enabled fast and efficient creation of new objects based on previously created sketches.

  18. Political remittances, connectivity, and the trans-local politics of place: An alternative approach to the dominant narratives on ‘displacement’ in Colombia

    DEFF Research Database (Denmark)

    Velez Torres, Irene; Agergaard, Jytte

    2014-01-01

    communities, whose building process binds socio-political spaces beyond de-territorialisation. In this respect, we argue that social and normative categories such as ‘displacement’ and ‘ethnic territories’ need to be reinterpreted to include the interactive political ideas and actions that connect societies...... forced into motion....

  19. Dimensionless parameterization of lidar for laser remote sensing of the atmosphere and its application to systems with SiPM and PMT detectors.

    Science.gov (United States)

    Agishev, Ravil; Comerón, Adolfo; Rodriguez, Alejandro; Sicard, Michaël

    2014-05-20

    In this paper, we show a renewed approach to the generalized methodology for atmospheric lidar assessment, which uses dimensionless parameterization as a core component. It is based on a series of our previous works in which the problem of universal parameterization across many lidar technologies was described and analyzed from different points of view. The modernized dimensionless parameterization concept, applied to relatively new silicon photomultiplier detectors (SiPMs) and traditional photomultiplier (PMT) detectors for remote-sensing instruments, allows the lidar receiver performance to be predicted in the presence of sky background. The renewed approach can be widely used to evaluate a broad range of lidar system capabilities for a variety of lidar remote-sensing applications, as well as to serve as a basis for the selection of appropriate lidar system parameters for a specific application. Such a modernized methodology provides a generalized, uniform, and objective approach for the evaluation of a broad range of lidar types and systems (aerosol, Raman, DIAL) operating on different targets (backscatter or topographic) and under intense sky background conditions. It can be used within the lidar community to compare different lidar instruments.

  20. A parameterization of nuclear track profiles in CR-39 detector

    Science.gov (United States)

    Azooz, A. A.; Al-Nia'emi, S. H.; Al-Jubbori, M. A.

    2012-11-01

    In this work, the empirical parameterization describing the alpha particles’ track depth in CR-39 detectors is extended to describe longitudinal track profiles against etching time for protons and alpha particles. MATLAB-based software is developed for this purpose. The software calculates and plots the depth, diameter, range, residual range, saturation time, and etch rate versus etching time. The software predictions are compared with other experimental data and with results of calculations using the original software, TRACK_TEST, developed for alpha track calculations. The software related to this work is freely downloadable and performs calculations for protons in addition to alpha particles.
    Program summary
    Program title: CR39
    Catalog identifier: AENA_v1_0
    Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AENA_v1_0.html
    Program obtainable from: CPC Program Library, Queen’s University, Belfast, N. Ireland
    Licensing provisions: Copyright (c) 2011, Aasim Azooz. Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met:
    • Redistributions of source code must retain the above copyright, this list of conditions and the following disclaimer.
    • Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution.
    This software is provided by the copyright holders and contributors “as is” and any express or implied warranties, including, but not limited to, the implied warranties of merchantability and fitness for a particular purpose are disclaimed. In no event shall the copyright owner or contributors be liable for any direct, indirect, incidental, special, exemplary, or consequential damages (including, but not limited to, procurement of substitute goods or services; loss of use, data, or profits; or business interruption) however caused and

  1. Parameterization of erodibility in the Rangeland Hydrology and Erosion Model

    Science.gov (United States)

    The magnitude of erosion from a hillslope is governed by the availability of sediment and connectivity of runoff and erosion processes. For undisturbed rangelands, sediment is primarily detached and transported by rainsplash and sheetflow (splash-sheet) processes in isolated bare patches, but sedime...

  2. Prioritizing connection requests in GMPLS-controlled optical networks

    DEFF Research Database (Denmark)

    Ruepp, Sarah Renée; Koster, A.; Andriolli, N.

    2009-01-01

    We prioritize bidirectional connection requests by combining dynamic connection provisioning with off-line optimization. Results show that the proposed approach decreases wavelength-converter usage, thereby allowing operators to reduce blocking probability under bulk connection assignment or network...

  3. A Comparative Study of Nucleation Parameterizations: 2. Three-Dimensional Model Application and Evaluation

    Science.gov (United States)

    Following the examination and evaluation of 12 nucleation parameterizations presented in part 1, 11 of them representing binary, ternary, kinetic, and cluster‐activated nucleation theories are evaluated in the U.S. Environmental Protection Agency Community Multiscale Air Quality ...

  4. Nitrous Oxide Emissions from Biofuel Crops and Parameterization in the EPIC Biogeochemical Model

    Science.gov (United States)

    This presentation describes year 1 field measurements of N2O fluxes and crop yields which are used to parameterize the EPIC biogeochemical model for the corresponding field site. Initial model simulations are also presented.

  5. Improving Convection and Cloud Parameterization Using ARM Observations and NCAR Community Atmosphere Model CAM5

    Energy Technology Data Exchange (ETDEWEB)

    Zhang, Guang J. [Univ. of California, San Diego, CA (United States)

    2016-11-07

    The fundamental scientific objectives of our research are to use ARM observations and the NCAR CAM5 to understand the large-scale control on convection, and to develop improved convection and cloud parameterizations for use in GCMs.

  6. Radiative flux and forcing parameterization error in aerosol-free clear skies.

    Science.gov (United States)

    Pincus, Robert; Mlawer, Eli J; Oreopoulos, Lazaros; Ackerman, Andrew S; Baek, Sunghye; Brath, Manfred; Buehler, Stefan A; Cady-Pereira, Karen E; Cole, Jason N S; Dufresne, Jean-Louis; Kelley, Maxwell; Li, Jiangnan; Manners, James; Paynter, David J; Roehrig, Romain; Sekiguchi, Miho; Schwarzkopf, Daniel M

    2015-07-16

    Key points: radiation parameterizations in GCMs are more accurate than their predecessors; errors in estimates of 4×CO2 forcing are large, especially for solar radiation; errors depend on atmospheric state, so the global mean error is unknown.

  7. Parameterizing the competition between homogeneous and heterogeneous freezing in cirrus cloud formation – monodisperse ice nuclei

    Directory of Open Access Journals (Sweden)

    D. Barahona

    2009-01-01

    Full Text Available We present a parameterization of cirrus cloud formation that computes the ice crystal number and size distribution under the presence of homogeneous and heterogeneous freezing. The parameterization is very simple to apply and is derived from the analytical solution of the cloud parcel equations, assuming that the ice nuclei population is monodisperse and chemically homogeneous. In addition to the ice distribution, an analytical expression is provided for the limiting ice nuclei number concentration that suppresses ice formation from homogeneous freezing. The parameterization is evaluated against a detailed numerical parcel model, and reproduces numerical simulations over a wide range of conditions with an average error of 6±33%. The parameterization also compares favorably against other formulations that require some form of numerical integration.

  8. Valuable Connections

    DEFF Research Database (Denmark)

    Kjærsgaard, Mette Gislev; Smith, Rachel Charlotte

    2014-01-01

    and blurred boundaries between physical, digital and hybrid contexts, as well as design, production and use, we might need to rethink the role of ethnography within design and business development. Perhaps the aim is less about ”getting closer” to user needs and real-life contexts, through familiarization......, mediation, advocacy and facilitation, as in conventional approaches to ethnography in user centred design, and more about creating a critical theoretically informed distance from which to perceive and reflect upon complex interconnections between people, technology, business and design, as well as our roles...

  9. Single-Column Modeling, GCM Parameterizations and Atmospheric Radiation Measurement Data

    International Nuclear Information System (INIS)

    Somerville, R.C.J.; Iacobellis, S.F.

    2005-01-01

    Our overall goal is identical to that of the Atmospheric Radiation Measurement (ARM) Program: the development of new and improved parameterizations of cloud-radiation effects and related processes, using ARM data at all three ARM sites, and the implementation and testing of these parameterizations in global and regional models. To test recently developed prognostic parameterizations based on detailed cloud microphysics, we have first compared single-column model (SCM) output with ARM observations at the Southern Great Plains (SGP), North Slope of Alaska (NSA), and Tropical Western Pacific (TWP) sites. We focus on the predicted cloud amounts and on a suite of radiative quantities strongly dependent on clouds, such as downwelling surface shortwave radiation. Our results demonstrate the superiority of parameterizations based on comprehensive treatments of cloud microphysics and cloud-radiative interactions. At the SGP and NSA sites, the SCM results simulate the ARM measurements well and are demonstrably more realistic than typical parameterizations found in conventional operational forecasting models. At the TWP site, the model performance depends strongly on details of the scheme, and the results of our diagnostic tests suggest ways to develop improved parameterizations better suited to simulating cloud-radiation interactions in the tropics generally. These advances have made it possible to take the next step and build on this progress, by incorporating our parameterization schemes in state-of-the-art 3D atmospheric models, and diagnosing and evaluating the results using independent data. Because the improved cloud-radiation results have been obtained largely via implementing detailed and physically comprehensive cloud microphysics, we anticipate that improved predictions of hydrologic cycle components, and hence of precipitation, may also be achievable.
    We are currently testing the performance of our ARM-based parameterizations in state-of-the-art global and regional

  10. Towards improved parameterization of a macroscale hydrologic model in a discontinuous permafrost boreal forest ecosystem

    Directory of Open Access Journals (Sweden)

    A. Endalamaw

    2017-09-01

    Full Text Available Modeling hydrological processes in the Alaskan sub-arctic is challenging because of the extreme spatial heterogeneity in soil properties and vegetation communities. Nevertheless, modeling and predicting hydrological processes is critical in this region due to its vulnerability to the effects of climate change. Coarse-spatial-resolution datasets used in land surface modeling pose a new challenge in simulating the spatially distributed and basin-integrated processes since these datasets do not adequately represent the small-scale hydrological, thermal, and ecological heterogeneity. The goal of this study is to improve the prediction capacity of mesoscale to large-scale hydrological models by introducing a small-scale parameterization scheme, which better represents the spatial heterogeneity of soil properties and vegetation cover in the Alaskan sub-arctic. The small-scale parameterization schemes are derived from observations and a sub-grid parameterization method in the two contrasting sub-basins of the Caribou Poker Creek Research Watershed (CPCRW) in Interior Alaska: one nearly permafrost-free (LowP) sub-basin and one permafrost-dominated (HighP) sub-basin. The sub-grid parameterization method used in the small-scale parameterization scheme is derived from the watershed topography. We found that observed soil thermal and hydraulic properties – including the distribution of permafrost and vegetation cover heterogeneity – are better represented in the sub-grid parameterization method than the coarse-resolution datasets. Parameters derived from the coarse-resolution datasets and from the sub-grid parameterization method are implemented into the variable infiltration capacity (VIC) mesoscale hydrological model to simulate runoff, evapotranspiration (ET), and soil moisture in the two sub-basins of the CPCRW. Simulated hydrographs based on the small-scale parameterization capture most of the peak and low flows, with similar accuracy in both sub

  11. Parameterization of pion production and reaction cross sections at LAMPF energies

    International Nuclear Information System (INIS)

    Burman, R.L.; Smith, E.S.

    1989-05-01

    A parameterization of pion production and reaction cross sections is developed for eventual use in modeling neutrino production by protons in a beam stop. Emphasis is placed upon smooth parameterizations for proton energies up to 800 MeV, for all pion energies and angles, and for a wide range of materials. The resulting representations of the data are well-behaved and can be used for extrapolation to regions where there are no measurements. 22 refs., 16 figs., 2 tabs

  12. A scheme for parameterizing ice cloud water content in general circulation models

    Science.gov (United States)

    Heymsfield, Andrew J.; Donner, Leo J.

    1989-01-01

    A method for specifying ice water content in GCMs is developed, based on theory and in-cloud measurements. A theoretical development of the conceptual precipitation model is given, and the aircraft flights used to characterize the ice mass distribution in deep ice clouds are discussed. Ice water content values derived from the theoretical parameterization are compared with the measured values. The results demonstrate that a simple parameterization for atmospheric ice content can account for ice contents observed in several synoptic contexts.

  13. A Stochastic Lagrangian Basis for a Probabilistic Parameterization of Moisture Condensation in Eulerian Models

    OpenAIRE

    Tsang, Yue-Kin; Vallis, Geoffrey K.

    2018-01-01

    In this paper we describe the construction of an efficient probabilistic parameterization that could be used in a coarse-resolution numerical model in which the variation of moisture is not properly resolved. An Eulerian model using a coarse-grained field on a grid cannot properly resolve regions of saturation---in which condensation occurs---that are smaller than the grid boxes. Thus, in the absence of a parameterization scheme, either the grid box must become saturated or condensation will ...

  14. A parameterization for the absorption of solar radiation by water vapor in the earth's atmosphere

    Science.gov (United States)

    Wang, W.-C.

    1976-01-01

    A parameterization for the absorption of solar radiation as a function of the amount of water vapor in the earth's atmosphere is obtained. Absorption computations are based on the Goody band model and the near-infrared absorption band data of Ludwig et al. A two-parameter Curtis-Godson approximation is used to treat the inhomogeneous atmosphere. Heating rates based on a frequently used one-parameter pressure-scaling approximation are also discussed and compared with the present parameterization.
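
The one-parameter pressure-scaling and two-parameter Curtis-Godson approximations contrasted in this record can be illustrated numerically. The sketch below is a hedged, generic illustration, not Wang's scheme: the moisture profile and the scaling exponent are hypothetical, and only the standard textbook definitions are used (scaled absorber path w* = ∫(p/p_ref)^n dw, and the Curtis-Godson absorber-weighted effective pressure).

```python
import numpy as np

# Illustrative sketch (not the paper's parameterization): compare the
# one-parameter pressure-scaling approximation with the Curtis-Godson
# effective pressure for a toy atmosphere. Profile values are hypothetical.

p = np.linspace(1000e2, 100e2, 200)        # pressure levels [Pa], surface to 100 hPa
q = 0.01 * (p / p[0])**3                   # hypothetical water-vapor mixing ratio [kg/kg]
g = 9.81
dw = q * np.abs(np.gradient(p)) / g        # absorber amount per layer [kg/m^2]

w_total = dw.sum()                         # unscaled absorber path

# One-parameter pressure scaling: w* = sum (p/p_ref)^n dw, with n = 1 assumed
p_ref = 1000e2
w_scaled = ((p / p_ref)**1.0 * dw).sum()

# Curtis-Godson: keep w unchanged, but use an absorber-weighted effective pressure
p_eff = (p * dw).sum() / w_total

print(f"total path w     = {w_total:.3f} kg/m^2")
print(f"scaled path w*   = {w_scaled:.3f} kg/m^2")
print(f"effective p (CG) = {p_eff / 100:.1f} hPa")
```

Because most of the hypothetical vapor sits at high pressure, the scaled path stays close to the unscaled one and the effective pressure lands near the surface value, which is why the cheaper one-parameter scaling often performs acceptably for water vapor.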

  15. The Development of a Parameterized Scatter Removal Algorithm for Nuclear Materials Identification System Imaging

    Energy Technology Data Exchange (ETDEWEB)

    Grogan, Brandon Robert [Univ. of Tennessee, Knoxville, TN (United States)

    2010-03-01

    This dissertation presents a novel method for removing scattering effects from Nuclear Materials Identification System (NMIS) imaging. The NMIS uses fast neutron radiography to generate images of the internal structure of objects non-intrusively. If the correct attenuation through the object is measured, the positions and macroscopic cross-sections of features inside the object can be determined. The cross sections can then be used to identify the materials and a 3D map of the interior of the object can be reconstructed. Unfortunately, the measured attenuation values are always too low because scattered neutrons contribute to the unattenuated neutron signal. Previous efforts to remove the scatter from NMIS imaging have focused on minimizing the fraction of scattered neutrons which are misidentified as directly transmitted by electronically collimating and time tagging the source neutrons. The parameterized scatter removal algorithm (PSRA) approaches the problem from an entirely new direction by using Monte Carlo simulations to estimate the point scatter functions (PScFs) produced by neutrons scattering in the object. PScFs have been used to remove scattering successfully in other applications, but only with simple 2D detector models. This work represents the first time PScFs have ever been applied to an imaging detector geometry as complicated as the NMIS. By fitting the PScFs using a Gaussian function, they can be parameterized and the proper scatter for a given problem can be removed without the need for rerunning the simulations each time. In order to model the PScFs, an entirely new method for simulating NMIS measurements was developed for this work. The development of the new models and the codes required to simulate them are presented in detail. The PSRA was used on several simulated and experimental measurements and chi-squared goodness of fit tests were used to compare the corrected values to the ideal values that would be expected with no scattering. 
Using

  16. THE DEVELOPMENT OF A PARAMETERIZED SCATTER REMOVAL ALGORITHM FOR NUCLEAR MATERIALS IDENTIFICATION SYSTEM IMAGING

    Energy Technology Data Exchange (ETDEWEB)

    Grogan, Brandon R [ORNL

    2010-05-01

    This report presents a novel method for removing scattering effects from Nuclear Materials Identification System (NMIS) imaging. The NMIS uses fast neutron radiography to generate images of the internal structure of objects nonintrusively. If the correct attenuation through the object is measured, the positions and macroscopic cross sections of features inside the object can be determined. The cross sections can then be used to identify the materials, and a 3D map of the interior of the object can be reconstructed. Unfortunately, the measured attenuation values are always too low because scattered neutrons contribute to the unattenuated neutron signal. Previous efforts to remove the scatter from NMIS imaging have focused on minimizing the fraction of scattered neutrons that are misidentified as directly transmitted by electronically collimating and time tagging the source neutrons. The parameterized scatter removal algorithm (PSRA) approaches the problem from an entirely new direction by using Monte Carlo simulations to estimate the point scatter functions (PScFs) produced by neutrons scattering in the object. PScFs have been used to remove scattering successfully in other applications, but only with simple 2D detector models. This work represents the first time PScFs have ever been applied to an imaging detector geometry as complicated as the NMIS. By fitting the PScFs using a Gaussian function, they can be parameterized, and the proper scatter for a given problem can be removed without the need for rerunning the simulations each time. In order to model the PScFs, an entirely new method for simulating NMIS measurements was developed for this work. The development of the new models and the codes required to simulate them are presented in detail. The PSRA was used on several simulated and experimental measurements, and chi-squared goodness of fit tests were used to compare the corrected values to the ideal values that would be expected with no scattering. 
Using the
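
The Gaussian-fitting step at the heart of the PSRA described in the two records above can be sketched in a few lines. This is a hedged illustration with a synthetic point scatter function rather than NMIS Monte Carlo output: for a noiseless Gaussian profile, fitting a degree-2 polynomial to its logarithm recovers the amplitude, center, and width in closed form.

```python
import numpy as np

# Hedged sketch of the PSRA's parameterization idea: fit a Gaussian to a
# point scatter function (PScF) so the scatter correction can be evaluated
# analytically instead of rerunning simulations. The PScF here is synthetic.

x = np.linspace(-10.0, 10.0, 201)          # detector position [cm], hypothetical
amp, mu, sigma = 0.8, 1.5, 2.3             # "true" PScF parameters (synthetic)
pscf = amp * np.exp(-(x - mu)**2 / (2 * sigma**2))

# ln(PScF) is a parabola in x, so a degree-2 polynomial fit recovers the
# Gaussian parameters exactly for noiseless data.
c2, c1, c0 = np.polyfit(x, np.log(pscf), 2)
sigma_fit = np.sqrt(-1.0 / (2 * c2))
mu_fit = c1 * sigma_fit**2
amp_fit = np.exp(c0 + mu_fit**2 / (2 * sigma_fit**2))

print(f"fit: amp={amp_fit:.3f}, mu={mu_fit:.3f}, sigma={sigma_fit:.3f}")
```

With noisy Monte Carlo tallies a nonlinear least-squares fit would be used instead, but the parameterization payoff is the same: three numbers per PScF replace a full simulation.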

  17. A simple parameterization for the rising velocity of bubbles in a liquid pool

    Energy Technology Data Exchange (ETDEWEB)

    Park, Sung Hoon [Dept. of Environmental Engineering, Sunchon National University, Suncheon (Korea, Republic of); Park, Chang Hwan; Lee, Jin Yong; Lee, Byung Chul [FNC Technology, Co., Ltd., Yongin (Korea, Republic of)

    2017-06-15

    The determination of the shape and rising velocity of gas bubbles in a liquid pool is of great importance in analyzing the radioactive aerosol emissions from nuclear power plant accidents in terms of the fission product release rate and the pool scrubbing efficiency of radioactive aerosols. This article suggests a simple parameterization for the gas bubble rising velocity as a function of the volume-equivalent bubble diameter; this parameterization does not require prior knowledge of bubble shape. This is more convenient than previously suggested parameterizations because it is given as a single explicit formula. It is also shown that a bubble shape diagram, which is very similar to Grace's diagram, can be easily generated using the parameterization suggested in this article. Furthermore, the boundaries among the three bubble shape regimes in the E_o–R_e plane and the condition for the bypass of the spheroidal regime can be delineated directly from the parameterization formula. Therefore, the parameterization suggested in this article appears to be useful not only in easily determining the bubble rising velocity (e.g., in postulated severe accident analysis codes) but also in understanding the trend of bubble shape change due to bubble growth.
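
The abstract does not reproduce the article's explicit formula, so the sketch below is only a hedged illustration of the dimensionless groups it refers to: the Eötvös and Reynolds numbers that index Grace-type shape diagrams, with the classical Stokes law for a small rigid sphere standing in as a rough rise-velocity estimate in the spherical regime. The fluid properties and bubble size are ordinary water/air values, not values from the article.

```python
# Illustrative sketch (assumed placeholder physics, not the article's
# parameterization): locate a small air bubble in water on the Eo-Re plane
# used in Grace-type bubble shape diagrams.

g, sigma = 9.81, 0.072                      # gravity [m/s^2], surface tension [N/m]
rho_l, rho_g, mu_l = 998.0, 1.2, 1.0e-3     # water/air density [kg/m^3], viscosity [Pa s]

def eotvos(d):
    """Eotvos number: buoyancy relative to surface-tension forces."""
    return g * (rho_l - rho_g) * d**2 / sigma

def stokes_rise_velocity(d):
    """Stokes terminal velocity of a small rigid sphere (rough estimate)."""
    return g * d**2 * (rho_l - rho_g) / (18 * mu_l)

def reynolds(d):
    """Bubble Reynolds number based on the Stokes rise velocity."""
    v = stokes_rise_velocity(d)
    return rho_l * v * d / mu_l

d = 100e-6                                  # 100-micron bubble diameter [m]
print(f"Eo = {eotvos(d):.4f}, Re = {reynolds(d):.3f}")
```

For this small bubble both Eo and Re are well below unity, placing it in the spherical regime; a single explicit velocity formula of the kind the article proposes would instead cover all three regimes without switching correlations.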

  18. A review of the theoretical basis for bulk mass flux convective parameterization

    Directory of Open Access Journals (Sweden)

    R. S. Plant

    2010-04-01

    Full Text Available Most parameterizations for precipitating convection in use today are bulk schemes, in which an ensemble of cumulus elements with different properties is modelled as a single, representative entraining-detraining plume. We review the underpinning mathematical model for such parameterizations, in particular by comparing it with spectral models in which elements are not combined into the representative plume. The chief merit of a bulk model is that the representative plume can be described by an equation set with the same structure as that which describes each element in a spectral model. The equivalence relies on an ansatz for detrained condensate introduced by Yanai et al. (1973) and on a simplified microphysics. There are also conceptual differences in the closure of bulk and spectral parameterizations. In particular, we show that the convective quasi-equilibrium closure of Arakawa and Schubert (1974) for spectral parameterizations cannot be carried over to a bulk parameterization in a straightforward way. Quasi-equilibrium of the cloud work function assumes a timescale separation between a slow forcing process and a rapid convective response. But, for the natural bulk analogue to the cloud work function, the relevant forcing is characterised by a different timescale, and so its quasi-equilibrium entails a different physical constraint. Closures of bulk parameterizations that use a parcel value of CAPE do not suffer from this timescale issue. However, the Yanai et al. (1973) ansatz must be invoked as a necessary ingredient of those closures.
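
The parcel CAPE mentioned in the closure discussion above has a standard definition: the vertical integral of positive parcel buoyancy, g (T_parcel − T_env)/T_env, over height. The sketch below computes it for synthetic temperature profiles (the lapse rates are hypothetical, and virtual-temperature and moisture effects are ignored for brevity).

```python
import numpy as np

# Hedged sketch of a parcel-CAPE calculation on synthetic profiles.
# Real closures would lift a parcel moist-adiabatically from a sounding;
# here the parcel curve is simply prescribed to stay slightly warmer.

z = np.linspace(0.0, 12000.0, 121)          # height [m]
T_env = 300.0 - 6.5e-3 * z                  # environment: 6.5 K/km lapse rate
T_parcel = 302.0 - 6.0e-3 * z               # idealized parcel temperature

g = 9.81
buoy = g * (T_parcel - T_env) / T_env       # buoyancy acceleration [m/s^2]
buoy_pos = np.maximum(buoy, 0.0)            # CAPE counts positive buoyancy only

# Trapezoidal integration of positive buoyancy over height
cape = float(np.sum(0.5 * (buoy_pos[1:] + buoy_pos[:-1]) * np.diff(z)))
print(f"CAPE = {cape:.0f} J/kg")
```

A CAPE-based bulk closure would then relax this quantity toward zero over a prescribed adjustment timescale, which is the step that sidesteps the cloud-work-function timescale issue discussed in the record.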

  19. A simple parameterization for the rising velocity of bubbles in a liquid pool

    International Nuclear Information System (INIS)

    Park, Sung Hoon; Park, Chang Hwan; Lee, Jin Yong; Lee, Byung Chul

    2017-01-01

    The determination of the shape and rising velocity of gas bubbles in a liquid pool is of great importance in analyzing the radioactive aerosol emissions from nuclear power plant accidents in terms of the fission product release rate and the pool scrubbing efficiency of radioactive aerosols. This article suggests a simple parameterization for the gas bubble rising velocity as a function of the volume-equivalent bubble diameter; this parameterization does not require prior knowledge of bubble shape. This is more convenient than previously suggested parameterizations because it is given as a single explicit formula. It is also shown that a bubble shape diagram, which is very similar to Grace's diagram, can be easily generated using the parameterization suggested in this article. Furthermore, the boundaries among the three bubble shape regimes in the E_o–R_e plane and the condition for the bypass of the spheroidal regime can be delineated directly from the parameterization formula. Therefore, the parameterization suggested in this article appears to be useful not only in easily determining the bubble rising velocity (e.g., in postulated severe accident analysis codes) but also in understanding the trend of bubble shape change due to bubble growth

  20. Parameterized Disturbance Observer Based Controller to Reduce Cyclic Loads of Wind Turbine

    Directory of Open Access Journals (Sweden)

    Raja M. Imran

    2018-05-01

    Full Text Available This paper is concerned with bump-less transfer of parameterized disturbance observer based controller with individual pitch control strategy to reduce cyclic loads of wind turbine in full load operation. Cyclic loads are generated due to wind shear and tower shadow effects. Multivariable disturbance observer based linear controllers are designed with objective to reduce output power fluctuation, tower oscillation and drive-train torsion using optimal control theory. Linear parameterized controllers are designed by using a smooth scheduling mechanism between the controllers. The proposed parameterized controller with individual pitch was tested on the nonlinear Fatigue, Aerodynamics, Structures, and Turbulence (FAST) code model of the National Renewable Energy Laboratory (NREL) 5 MW wind turbine. The closed-loop system performance was assessed by comparing the simulation results of the proposed controller with a fixed gain and parameterized controller with collective pitch for full load operation of wind turbine. Simulations are performed with step wind to see the behavior of the system with wind shear and tower shadow effects. Then, turbulent wind is applied to see the smooth transition of the controllers. It can be concluded from the results that the proposed parameterized control shows smooth transition from one controller to another controller. Moreover, 3p and 6p harmonics are well mitigated as compared to fixed gain DOBC and parameterized DOBC with collective pitch.

  1. Balancing accuracy, efficiency, and flexibility in a radiative transfer parameterization for dynamical models

    Science.gov (United States)

    Pincus, R.; Mlawer, E. J.

    2017-12-01

    Radiation is a key process in numerical models of the atmosphere. The problem is well-understood and the parameterization of radiation has seen relatively few conceptual advances in the past 15 years. It is nonetheless often the single most expensive component of all physical parameterizations despite being computed less frequently than other terms. This combination of cost and maturity suggests value in a single radiation parameterization that could be shared across models; devoting effort to a single parameterization might allow for fine tuning for efficiency. The challenge lies in the coupling of this parameterization to many disparate representations of clouds and aerosols. This talk will describe RRTMGP, a new radiation parameterization that seeks to balance efficiency and flexibility. This balance is struck by isolating computational tasks in "kernels" that expose as much fine-grained parallelism as possible. These have simple interfaces and are interoperable across programming languages so that they might be replaced by alternative implementations in domain-specific languages. Coupling to the host model makes use of object-oriented features of Fortran 2003, minimizing branching within the kernels and the amount of data that must be transferred. We will show accuracy and efficiency results for a globally-representative set of atmospheric profiles using a relatively high-resolution spectral discretization.

  2. Development and testing of an aerosol-stratus cloud parameterization scheme for middle and high latitudes

    Energy Technology Data Exchange (ETDEWEB)

    Olsson, P.Q.; Meyers, M.P.; Kreidenweis, S.; Cotton, W.R. [Colorado State Univ., Fort Collins, CO (United States)

    1996-04-01

    The aim of this new project is to develop an aerosol/cloud microphysics parameterization of mixed-phase stratus and boundary layer clouds. Our approach is to create, test, and implement a bulk-microphysics/aerosol model using data from Atmospheric Radiation Measurement (ARM) Cloud and Radiation Testbed (CART) sites and large-eddy simulation (LES) explicit bin-resolving aerosol/microphysics models. The primary objectives of this work are twofold. First, we need the prediction of number concentrations of activated aerosol which are transferred to the droplet spectrum, so that the aerosol population directly affects the cloud formation and microphysics. Second, we plan to couple the aerosol model to the gas and aqueous-chemistry module that will drive the aerosol formation and growth. We begin by exploring the feasibility of performing cloud-resolving simulations of Arctic stratus clouds over the North Slope CART site. These simulations using Colorado State University's regional atmospheric modeling system (RAMS) will be useful in designing the structure of the cloud-resolving model and in interpreting data acquired at the North Slope site.

  3. Using Remote Sensing Data to Parameterize Ice Jam Modeling for a Northern Inland Delta

    Directory of Open Access Journals (Sweden)

    Fan Zhang

    2017-04-01

    Full Text Available The Slave River is a northern river in Canada, with ice being an important component of its flow regime for at least half of the year. During the spring breakup period, ice jams and ice-jam flooding can occur in the Slave River Delta, which is of benefit for the replenishment of moisture and sediment required to maintain the ecological integrity of the delta. To better understand the ice jam processes that lead to flooding, as well as the replenishment of the delta, the one-dimensional hydraulic river ice model RIVICE was implemented to simulate and explore ice jam formation in the Slave River Delta. Incoming ice volume, a crucial input parameter for RIVICE, was determined by the novel approach of using MODIS space-borne remote sensing imagery. Space-borne and air-borne remote sensing data were used to parameterize the upstream ice volume available for ice jamming. Gauged data was used to complement modeling calibration and validation. HEC-RAS, another one-dimensional hydrodynamic model, was used to determine the ice volumes required for equilibrium jams and the upper limit of ice volume that a jam can sustain, the latter serving as a threshold for the volumes estimated by the dynamic ice jam simulations using RIVICE. Parameter sensitivity analysis shows that morphological and hydraulic properties have great impacts on the ice jam length and water depth in the Slave River Delta.

  4. Direct spondylolisthesis identification and measurement in MR/CT using detectors trained by articulated parameterized spine model

    Science.gov (United States)

    Cai, Yunliang; Leung, Stephanie; Warrington, James; Pandey, Sachin; Shmuilovich, Olga; Li, Shuo

    2017-02-01

    The identification of spondylolysis and spondylolisthesis is important in spinal diagnosis, rehabilitation, and surgery planning. Accurate and automatic detection of the spinal portion with a spondylolisthesis problem will significantly reduce the manual work of physicians and provide a more robust evaluation of the spine condition. Most existing automatic identification methods adopted the indirect approach which used vertebrae locations to measure the spondylolisthesis. However, these methods relied heavily on automatic vertebra detection, which often suffered from poor spatial accuracy and the lack of validated pathological training samples. In this study, we present a novel spondylolisthesis detection method which can directly locate the irregular spine portion and output the corresponding grading. The detection is done by a set of learning-based detectors which are discriminatively trained by synthesized spondylolisthesis image samples. To provide sufficient pathological training samples, we used a parameterized spine model to synthesize different types of spondylolysis images from real MR/CT scans. The parameterized model can automatically locate the vertebrae in spine images and estimate their pose orientations, and can inversely alter the vertebrae locations and poses by changing the corresponding parameters. Various training samples can then be generated from only a few spine MR/CT images. The preliminary results suggest great potential for the fast and efficient spondylolisthesis identification and measurement in both MR and CT spine images.

  5. Parameterization of a ruminant model of phosphorus digestion and metabolism.

    Science.gov (United States)

    Feng, X; Knowlton, K F; Hanigan, M D

    2015-10-01

    The objective of the current work was to parameterize the digestive elements of the model of Hill et al. (2008) using data collected from animals that were ruminally, duodenally, and ileally cannulated, thereby providing a better understanding of the digestion and metabolism of P fractions in growing and lactating cattle. The model of Hill et al. (2008) was fitted and evaluated for adequacy using the data from 6 animal studies. We hypothesized that sufficient data would be available to estimate P digestion and metabolism parameters and that these parameters would be sufficient to derive P bioavailabilities of a range of feed ingredients. Inputs to the model were dry matter intake; total feed P concentration (fPtFd); phytate (Pp), organic (Po), and inorganic (Pi) P as fractions of total P (fPpPt, fPoPt, fPiPt); microbial growth; amount of Pi and Pp infused into the omasum or ileum; milk yield; and BW. The available data were sufficient to derive all model parameters of interest. The final model predicted that given 75 g/d of total P input, the total-tract digestibility of P was 40.8%, Pp digestibility in the rumen was 92.4%, and in the total-tract was 94.7%. Blood P recycling to the rumen was a major source of Pi flow into the small intestine, and the primary route of excretion. A large proportion of Pi flowing to the small intestine was absorbed; however, additional Pi was absorbed from the large intestine (3.15%). Absorption of Pi from the small intestine was regulated, and given the large flux of salivary P recycling, the effective fractional small intestine absorption of available P derived from the diet was 41.6% at requirements. Milk synthesis used 16% of total absorbed P, and less than 1% was excreted in urine. The resulting model could be used to derive P bioavailabilities of commonly used feedstuffs in cattle production. Copyright © 2015 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.

  6. THOR: A New Higher-Order Closure Assumed PDF Subgrid-Scale Parameterization; Evaluation and Application to Low Cloud Feedbacks

    Science.gov (United States)

    Firl, G. J.; Randall, D. A.

    2013-12-01

    The so-called "assumed probability density function (PDF)" approach to subgrid-scale (SGS) parameterization has been shown to be a promising method for more accurately representing boundary layer cloudiness under a wide range of conditions. A new parameterization has been developed, named the Two-and-a-Half ORder closure (THOR), that combines this approach with a higher-order turbulence closure. THOR predicts the time evolution of the turbulence kinetic energy components, the variance of ice-liquid water potential temperature (θil) and total non-precipitating water mixing ratio (qt) and the covariance between the two, and the vertical fluxes of horizontal momentum, θil, and qt. Ten corresponding third-order moments in addition to the skewnesses of θil and qt are calculated using diagnostic functions assuming negligible time tendencies. The statistical moments are used to define a trivariate double Gaussian PDF among vertical velocity, θil, and qt. The first three statistical moments of each variable are used to estimate the two Gaussian plume means, variances, and weights. Unlike previous similar models, plume variances are not assumed to be equal or zero. Instead, they are parameterized using the idea that the less dominant Gaussian plume (typically representing the updraft-containing portion of a grid cell) has greater variance than the dominant plume (typically representing the "environmental" or slowly subsiding portion of a grid cell). Correlations among the three variables are calculated using the appropriate covariance moments, and both plume correlations are assumed to be equal. The diagnosed PDF in each grid cell is used to calculate SGS condensation, SGS fluxes of cloud water species, SGS buoyancy terms, and to inform other physical parameterizations about SGS variability. SGS condensation is extended from previous similar models to include condensation over both liquid and ice substrates, dependent on the grid cell temperature. Implementations have been
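    As a concrete illustration of how a diagnosed double-Gaussian PDF is used for SGS condensation, the sketch below computes cloud fraction as the probability that total water exceeds saturation under a two-plume Gaussian mixture. The plume weights, means, and variances are invented illustrative values, not THOR's actual closure:

```python
import math

def mixture_cloud_fraction(plumes, q_sat):
    """Cloud fraction = P(q_t > q_sat) under a double-Gaussian PDF.

    `plumes` is a list of (weight, mean, std) tuples; the values used
    below are illustrative, not taken from THOR itself.
    """
    frac = 0.0
    for w, mu, sigma in plumes:
        # P(X > q_sat) for one Gaussian plume, via the complementary error function
        frac += w * 0.5 * math.erfc((q_sat - mu) / (sigma * math.sqrt(2.0)))
    return frac

# Two plumes: a narrow "environmental" plume and a broader updraft plume
plumes = [(0.8, 8.0e-3, 0.3e-3),   # dominant plume: small variance
          (0.2, 9.5e-3, 0.8e-3)]   # less dominant plume: larger variance
cf = mixture_cloud_fraction(plumes, q_sat=9.0e-3)
```

    Giving the less dominant plume the larger variance mirrors the assumption described in the abstract.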

  7. Integrated landscape-based approach of remote sensing, GIS, and physical modelling to study the hydrological connectivity of wetlands to the downstream water: progress and challenge

    Science.gov (United States)

    Yeo, I. Y.

    2015-12-01

    We report recent progress on our effort to improve the mapping of wetland dynamics and the modelling of wetland functioning and hydrological connection to the downstream waters. Our study focused on the Coastal Plain of the Chesapeake Bay Watershed (CBW), the Delmarva Peninsula, where most of the wetlands in the CBW are densely distributed. The wetland ecosystem plays crucial roles in improving water quality and ecological integrity for the downstream waters and the Chesapeake Bay, and headwater wetlands in the region, such as Delmarva Bay, are now subject to legal protection under the Clean Water Rule. We developed new wetland maps using time series Landsat images and a highly accurate LiDAR map over the last 30 years. These maps show the changes in surface water fraction at a 30-m grid cell resolution at an annual time scale. Using GIS, we analysed these maps to characterize the changing dynamics of wetland inundation due to physical environmental factors (e.g., weather variability, tide) and assessed the hydrological connection of wetlands to the downstream water at the watershed scale. Focusing on two adjacent watersheds in the upper region of the Choptank River Basin, we studied how wetland inundation dynamics and the hydrologic linkage of wetlands to downstream water vary with the local hydrogeological setting, and attempted to identify the key landscape factors affecting wetland ecosystems and functioning. We then discuss the potential of using remote sensing products to improve the physical modelling of wetlands from our experience with SWAT (Soil and Water Assessment Tool).

  8. Impact of cloud parameterization on the numerical simulation of a super cyclone

    Energy Technology Data Exchange (ETDEWEB)

    Deshpande, M.S.; Pattnaik, S.; Salvekar, P.S. [Indian Institute of Tropical Meteorology, Pune (India)

    2012-07-01

    This study examines the role of parameterization of convection and explicit moisture processes on the simulated track, intensity and inner core structure of the Orissa super cyclone (1999) in the Bay of Bengal (north Indian Ocean). Sensitivity experiments are carried out to examine the impact of cumulus parameterization schemes (CPS) using the MM5 model (Version 3.7) in a two-way nested domain (D1 and D2) configuration at horizontal resolutions of 45 and 15 km. Three different cumulus parameterization schemes, namely Grell (Gr), Betts-Miller (BM) and updated Kain-Fritsch (KF2), are tested. It is noted that track and intensity are both very sensitive to the CPS and, comparatively, KF2 predicts them reasonably well. In particular, the rapid intensification phase of the super cyclone is best simulated by KF2 compared to the other CPS. To examine the effect of the cumulus parameterization scheme at high resolution (5 km), a three-domain configuration (45-15-5 km resolution) is utilized. Based on initial results, the KF2 scheme is used for both of the outer domains (D1 and D2). Two experiments are conducted: one in which KF2 is used as the CPS and another in which no CPS is used in the third domain. The intensity is well predicted when no CPS is used in the innermost domain. Sensitivity experiments are also carried out to examine the impact of microphysics parameterization schemes (MPS). Four cloud microphysics parameterization schemes, namely mixed phase (MP), Goddard microphysics with Graupel (GG), Reisner Graupel (RG) and Schultz (Sc), are tested in these experiments. It is noted that the simulated tropical cyclone tracks and intensity variations are considerably sensitive to the choice of cloud microphysical parameterization scheme. The MPS of MP and Sc capture the rapid intensification phase very well. The final intensity is well predicted by MP but overestimated by Sc; GG and RG underestimate the intensity. (orig.)

  9. Dynamic parameterization and ladder operators for the Kratzer molecular potential

    International Nuclear Information System (INIS)

    Devi, O Babynanda; Singh, C Amuba

    2014-01-01

    Introducing independent parameters k and δ to represent the strength of the attractive and repulsive components, respectively, we write the Kratzer molecular potential as V(k,δ) = (ℏ²/2m)(−k/r + δ(δ−1)/r²). This parameterisation is not only natural, but also convenient for the construction of ladder operators for the system. Adopting the straightforward method of deriving recurrence relations among confluent hypergeometric functions, we construct seven pairs of ladder operators for the Kratzer potential system. Detailed analysis of the laddering actions of these operators is given to show that they connect eigenstates of equal energy but belonging to a hierarchy of Kratzer potential systems corresponding to different values of the parameters k and δ. Significantly, it is pointed out that it may not be possible to construct, in the position representation, a ladder operator which would connect different eigenstates belonging to the same potential V(k,δ). The transition to the hydrogen atom case is discussed. A number (14 altogether) of functional relations among the confluent hypergeometric functions have been derived and are reported separately in an appendix. (paper)
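    For orientation, the bound-state spectrum implied by this parameterisation follows from the standard mapping of the radial Kratzer equation onto the hydrogenic one. This is a sketch of a textbook result stated in the paper's k and δ notation, not a derivation taken from the paper itself:

```latex
% With V(k,\delta) = (\hbar^{2}/2m)\,(-k/r + \delta(\delta-1)/r^{2}),
% the radial equation is hydrogenic with effective angular momentum
% l_{\mathrm{eff}} = \delta - 1, giving bound states
\begin{equation*}
  E_{n_r}(k,\delta) \;=\; -\,\frac{\hbar^{2} k^{2}}{8\, m\, (n_r + \delta)^{2}},
  \qquad n_r = 0, 1, 2, \dots
\end{equation*}
```

    Since the energy depends on k and δ only through k²/(n_r + δ)², an operator that shifts n_r at fixed energy must compensate by changing k or δ, which is consistent with the laddering between a hierarchy of potentials described above.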

  10. Minimum cost connection networks

    DEFF Research Database (Denmark)

    Hougaard, Jens Leth; Tvede, Mich

    In the present paper we consider the allocation of cost in connection networks. Agents have connection demands in the form of pairs of locations they want to be connected. Connections between locations are costly to build. The problem is to allocate costs of networks satisfying all connection demands...

  11. IoT and Connected Insurance Reshaping The Health Insurance Industry. A Customer-centric “From Cure To Care” Approach

    Directory of Open Access Journals (Sweden)

    A. Silvello

    2017-12-01

    Full Text Available An increasing global population, the rise in the number of chronic disease patients and the threat of global epidemics have paved the way for technology as a potential answer to many of these problems. Health insurance can contribute to the resolution of some of these issues, but insurers need to transition from simple "Payers" to "Players" in order to achieve that. They need to become points of reference on which the customer and the health care system can count. This is possible and is closely tied to connected insurance, in particular to wearables and devices that are able to gather vital data from patients and share them with caregivers.

  12. Evaluating cloud processes in large-scale models: Of idealized case studies, parameterization testbeds and single-column modelling on climate time-scales

    Science.gov (United States)

    Neggers, Roel

    2016-04-01

    Boundary-layer schemes have always formed an integral part of General Circulation Models (GCMs) used for numerical weather and climate prediction. The spatial and temporal scales associated with boundary-layer processes and clouds are typically much smaller than those at which GCMs are discretized, which makes their representation through parameterization a necessity. The need for generally applicable boundary-layer parameterizations has motivated many scientific studies, which in effect has created its own active research field in the atmospheric sciences. Of particular interest has been the evaluation of boundary-layer schemes at "process-level". This means that parameterized physics are studied in isolation from the larger-scale circulation, using prescribed forcings and excluding any upscale interaction. Although feedbacks are thus prevented, the benefit is an enhanced model transparency, which might aid an investigator in identifying model errors and understanding model behavior. The popularity and success of the process-level approach are demonstrated by the many past and ongoing model inter-comparison studies that have been organized by initiatives such as GCSS/GASS. A common thread in the results of these studies is that although most schemes somehow manage to capture first-order aspects of boundary layer cloud fields, there certainly remains room for improvement in many areas. Only too often are boundary layer parameterizations still found to be at the heart of problems in large-scale models, negatively affecting the forecast skill of NWP models or causing uncertainty in numerical predictions of future climate. How to break this parameterization "deadlock" remains an open problem. This presentation attempts to give an overview of the various existing methods for the process-level evaluation of boundary-layer physics in large-scale models. 
This includes i) idealized case studies, ii) longer-term evaluation at permanent meteorological sites (the testbed approach

  13. Evaluation of five dry particle deposition parameterizations for incorporation into atmospheric transport models

    Science.gov (United States)

    Khan, Tanvir R.; Perlinger, Judith A.

    2017-10-01

    Despite considerable effort to develop mechanistic dry particle deposition parameterizations for atmospheric transport models, current knowledge has been inadequate to propose quantitative measures of the relative performance of available parameterizations. In this study, we evaluated the performance of five dry particle deposition parameterizations developed by Zhang et al. (2001) (Z01), Petroff and Zhang (2010) (PZ10), Kouznetsov and Sofiev (2012) (KS12), Zhang and He (2014) (ZH14), and Zhang and Shao (2014) (ZS14), respectively. The evaluation was performed in three dimensions: model ability to reproduce observed deposition velocities, Vd (accuracy); the influence of imprecision in input parameter values on the modeled Vd (uncertainty); and identification of the most influential parameter(s) (sensitivity). The accuracy of the modeled Vd was evaluated using observations obtained from five land use categories (LUCs): grass, coniferous and deciduous forests, natural water, and ice/snow. To ascertain the uncertainty in modeled Vd, and quantify the influence of imprecision in key model input parameters, a Monte Carlo uncertainty analysis was performed. The Sobol' sensitivity analysis was conducted with the objective to determine the parameter ranking from the most to the least influential. Comparing the normalized mean bias factors (indicators of accuracy), we find that the ZH14 parameterization is the most accurate for all LUCs except for coniferous forest, for which it is second most accurate. From Monte Carlo simulations, the estimated mean normalized uncertainties in the modeled Vd obtained for seven particle sizes (ranging from 0.005 to 2.5 µm) for the five LUCs are 17, 12, 13, 16, and 27 % for the Z01, PZ10, KS12, ZH14, and ZS14 parameterizations, respectively. From the Sobol' sensitivity results, we suggest that the parameter rankings vary by particle size and LUC for a given parameterization. Overall, for dp = 0.001 to 1.0 µm, friction velocity was one of
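    The normalized mean bias factor used above as the accuracy indicator can be computed as follows. This sketch follows the commonly used symmetric definition (e.g. Yu et al., 2006); the deposition-velocity values are invented for illustration, not taken from the study:

```python
def nmbf(modeled, observed):
    """Normalized mean bias factor, a symmetric bias indicator."""
    m = sum(modeled) / len(modeled)
    o = sum(observed) / len(observed)
    # Symmetric about zero: positive -> overestimate, negative -> underestimate
    return m / o - 1.0 if m >= o else 1.0 - o / m

vd_model = [0.12, 0.30, 0.55]   # modeled deposition velocities (cm/s), illustrative
vd_obs   = [0.10, 0.25, 0.50]   # "observed" values, illustrative
b = nmbf(vd_model, vd_obs)
```

    Unlike a simple mean bias, this factor treats over- and underestimation symmetrically, which is why it is convenient for ranking parameterizations against each other.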

  14. Evaluation of five dry particle deposition parameterizations for incorporation into atmospheric transport models

    Directory of Open Access Journals (Sweden)

    T. R. Khan

    2017-10-01

    Full Text Available Despite considerable effort to develop mechanistic dry particle deposition parameterizations for atmospheric transport models, current knowledge has been inadequate to propose quantitative measures of the relative performance of available parameterizations. In this study, we evaluated the performance of five dry particle deposition parameterizations developed by Zhang et al. (2001) (Z01), Petroff and Zhang (2010) (PZ10), Kouznetsov and Sofiev (2012) (KS12), Zhang and He (2014) (ZH14), and Zhang and Shao (2014) (ZS14), respectively. The evaluation was performed in three dimensions: model ability to reproduce observed deposition velocities, Vd (accuracy); the influence of imprecision in input parameter values on the modeled Vd (uncertainty); and identification of the most influential parameter(s) (sensitivity). The accuracy of the modeled Vd was evaluated using observations obtained from five land use categories (LUCs): grass, coniferous and deciduous forests, natural water, and ice/snow. To ascertain the uncertainty in modeled Vd, and quantify the influence of imprecision in key model input parameters, a Monte Carlo uncertainty analysis was performed. The Sobol' sensitivity analysis was conducted with the objective to determine the parameter ranking from the most to the least influential. Comparing the normalized mean bias factors (indicators of accuracy), we find that the ZH14 parameterization is the most accurate for all LUCs except for coniferous forest, for which it is second most accurate. From Monte Carlo simulations, the estimated mean normalized uncertainties in the modeled Vd obtained for seven particle sizes (ranging from 0.005 to 2.5 µm) for the five LUCs are 17, 12, 13, 16, and 27 % for the Z01, PZ10, KS12, ZH14, and ZS14 parameterizations, respectively. From the Sobol' sensitivity results, we suggest that the parameter rankings vary by particle size and LUC for a given parameterization. Overall, for dp = 0.001 to 1.0

  15. The importance of parameterization when simulating the hydrologic response of vegetative land-cover change

    Science.gov (United States)

    White, Jeremy; Stengel, Victoria; Rendon, Samuel; Banta, John

    2017-08-01

    Computer models of hydrologic systems are frequently used to investigate the hydrologic response of land-cover change. If the modeling results are used to inform resource-management decisions, then providing robust estimates of uncertainty in the simulated response is an important consideration. Here we examine the importance of parameterization, a necessarily subjective process, on uncertainty estimates of the simulated hydrologic response of land-cover change. Specifically, we applied the Soil and Water Assessment Tool (SWAT) model to a 1.4 km2 watershed in southern Texas to investigate the simulated hydrologic response of brush management (the mechanical removal of woody plants), a discrete land-cover change. The watershed was instrumented before and after brush-management activities were undertaken, and estimates of precipitation, streamflow, and evapotranspiration (ET) are available; these data were used to condition and verify the model. The role of parameterization in brush-management simulation was evaluated by constructing two models, one with 12 adjustable parameters (reduced parameterization) and one with 1305 adjustable parameters (full parameterization). Both models were subjected to global sensitivity analysis as well as Monte Carlo and generalized likelihood uncertainty estimation (GLUE) conditioning to identify important model inputs and to estimate uncertainty in several quantities of interest related to brush management. Many realizations from both parameterizations were identified as behavioral in that they reproduce daily mean streamflow acceptably well according to the Nash-Sutcliffe model efficiency coefficient, percent bias, and coefficient of determination. 
However, the total volumetric ET difference resulting from simulated brush management remains highly uncertain after conditioning to daily mean streamflow, indicating that streamflow data alone are not sufficient to inform the model inputs that influence the simulated outcomes of brush management
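    The behavioral screening described above can be sketched as follows. The Nash-Sutcliffe efficiency formula is standard; the realizations and the 0.5 cutoff are hypothetical illustrations, not the study's actual ensemble or threshold:

```python
def nash_sutcliffe(obs, sim):
    """Nash-Sutcliffe efficiency: 1 - SSE / variance of the observations."""
    mean_obs = sum(obs) / len(obs)
    sse = sum((o - s) ** 2 for o, s in zip(obs, sim))
    sst = sum((o - mean_obs) ** 2 for o in obs)
    return 1.0 - sse / sst

# GLUE-style behavioral screen: keep realizations above an NSE threshold.
obs = [1.0, 2.0, 3.0, 2.5, 1.5]                    # observed daily mean streamflow
realizations = {
    "run_a": [1.1, 1.9, 2.8, 2.6, 1.4],            # tracks the observations
    "run_b": [2.0, 2.0, 2.0, 2.0, 2.0],            # no better than the obs mean
}
behavioral = {name: s for name, s in realizations.items()
              if nash_sutcliffe(obs, s) > 0.5}
```

    An NSE of 1 is a perfect fit and 0 means the simulation is no better than the observed mean, so `run_b` above is rejected as non-behavioral.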

  16. Climate Simulations from Super-parameterized and Conventional General Circulation Models with a Third-order Turbulence Closure

    Science.gov (United States)

    Xu, Kuan-Man; Cheng, Anning

    2014-05-01

    A high-resolution cloud-resolving model (CRM) embedded in a general circulation model (GCM) is an attractive alternative for climate modeling because it replaces all traditional cloud parameterizations and explicitly simulates cloud physical processes in each grid column of the GCM. Such an approach is called the "Multiscale Modeling Framework" (MMF). MMF still needs to parameterize the subgrid-scale (SGS) processes associated with clouds and large turbulent eddies, because circulations associated with the planetary boundary layer (PBL) and in-cloud turbulence are unresolved by CRMs with horizontal grid sizes on the order of a few kilometers. A third-order turbulence closure (IPHOC) has been implemented in the CRM component of the super-parameterized Community Atmosphere Model (SPCAM). IPHOC is used to predict (or diagnose) fractional cloudiness and the variability of temperature and water vapor at scales that are not resolved on the CRM's grid. This model has produced promising results, especially for low-level cloud climatology, seasonal variations and diurnal variations (Cheng and Xu 2011, 2013a, b; Xu and Cheng 2013a, b). Because of the enormous computational cost of SPCAM-IPHOC, which is about 400 times that of the conventional CAM, we decided to bypass the CRM and implement IPHOC directly in CAM version 5 (CAM5). IPHOC replaces the PBL/stratocumulus, shallow convection, and cloud macrophysics parameterizations in CAM5. Since there are large discrepancies in the spatial and temporal scales between the CRM and CAM5, the IPHOC used in CAM5 has to be modified from that used in SPCAM. In particular, we diagnose all second- and third-order moments except for the fluxes. These prognostic and diagnostic moments are used to select a double-Gaussian probability density function to describe the SGS variability. We also incorporate a diagnostic PBL height parameterization to represent the strong inversion above the PBL. 
The goal of this study is to compare the simulation of the climatology from these three

  17. DISSECTING HABITAT CONNECTIVITY

    Science.gov (United States)

    Connectivity is increasingly recognized as an important element of a successful reserve design. Connectivity matters in reserve design to the extent that it promotes or hinders the viability of target populations. While conceptually straightforward, connectivity i...

  18. Mixed Connective Tissue Disease

    Science.gov (United States)

    Mixed connective tissue disease has signs and symptoms of a combination of disorders — primarily lupus, scleroderma and polymyositis. For this reason, mixed connective tissue disease ...

  19. Undifferentiated Connective Tissue Disease

    Science.gov (United States)

    By Barbara Goldstein, MD (February 01, 2016). Undifferentiated connective tissue disease (UCTD) is a systemic autoimmune disease. This ...

  20. ‘’Mapping Self- and Co-regulation Approaches in the EU Context’’ : Explorative Study for the European Commission, DG Connect

    NARCIS (Netherlands)

    Senden, L.A.J.; Kica, E.; Klinger, Kilian; Hiemstra, M.I.

    2015-01-01

    This report presents a first mapping or inventory of the different approaches to self- and co- regulation (SCR) that can be found within the EU context, in a number of Member States (MS) and international organizations. The report consists of four sections. Whereas the first section provides an

  1. Intermodal Passenger Connectivity Database -

    Data.gov (United States)

    Department of Transportation — The Intermodal Passenger Connectivity Database (IPCD) is a nationwide data table of passenger transportation terminals, with data on the availability of connections...

  2. Fast Determination of Distribution-Connected PV Impacts Using a Variable Time-Step Quasi-Static Time-Series Approach: Preprint

    Energy Technology Data Exchange (ETDEWEB)

    Mather, Barry

    2017-08-24

    The increasing deployment of distribution-connected photovoltaic (DPV) systems requires utilities to complete complex interconnection studies. Relatively simple interconnection study methods worked well for low penetrations of photovoltaic systems, but more complicated quasi-static time-series (QSTS) analysis is required to make better interconnection decisions as DPV penetration levels increase, and tools and methods must be developed to support this. This paper presents a variable-time-step solver for QSTS analysis that significantly shortens the computational time and effort needed to complete a detailed analysis of the operation of a distribution circuit with many DPV systems. Specifically, it demonstrates that the proposed variable-time-step solver can reduce the required computational time by as much as 84% without introducing any important errors to metrics such as the highest and lowest voltage occurring on the feeder, the number of voltage regulator tap operations, and the total losses realized in the distribution circuit during a 1-yr period. Further improvement in computational speed is possible with the introduction of only modest errors in these metrics, such as a 91% reduction with less than 5% error when predicting voltage regulator operations.
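    The core idea of a variable-time-step QSTS sweep, coarse steps through quiescent periods and fine steps through rapid transients, can be sketched as follows. This simplified step-selection logic and the synthetic profile are assumptions for illustration, not NREL's actual solver:

```python
def variable_step_indices(series, coarse=60, tol=0.05):
    """Pick time-step indices for a quasi-static sweep: jump `coarse`
    samples ahead while the driving input changes by less than `tol`,
    otherwise advance one sample at a time.
    """
    idx = [0]
    i = 0
    while i < len(series) - 1:
        j = min(i + coarse, len(series) - 1)
        window = series[i:j + 1]
        if max(window) - min(window) <= tol:
            i = j          # quiescent period: take one coarse step
        else:
            i += 1         # rapid change (e.g. a cloud transient): fine steps
        idx.append(i)
    return idx

# Flat night-time load followed by a fluctuating solar ramp (per-unit values)
profile = [0.2] * 180 + [0.2 + 0.1 * (k % 7) for k in range(120)]
steps = variable_step_indices(profile)
```

    The solver then evaluates the power flow only at the selected indices, which is where the computational savings come from; accuracy is preserved because fine steps are retained exactly where the inputs change quickly.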

  3. Optimal data replication: A new approach to optimizing parallel EM algorithms on a mesh-connected multiprocessor for 3D PET image reconstruction

    International Nuclear Information System (INIS)

    Chen, C.M.; Lee, S.Y.

    1995-01-01

    The EM algorithm produces a maximum-likelihood image estimate for 3D PET image reconstruction. However, due to its long computation time, the EM algorithm has not been widely used in practice. While several parallel implementations of the EM algorithm have been developed to make it feasible, they do not guarantee optimal parallelization efficiency. In this paper, the authors propose a new parallel EM algorithm which maximizes performance by optimizing data replication on a mesh-connected message-passing multiprocessor. To optimize data replication, the authors have formally derived the optimal allocation of shared data, group sizes, integration and broadcasting of replicated data, as well as the scheduling of shared data accesses. The proposed parallel EM algorithm has been implemented on an iPSC/860 with 16 PEs. The experimental and theoretical results, which are consistent with each other, show that the proposed parallel EM algorithm improves performance substantially over those using unoptimized data replication
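    For reference, the underlying (serial) EM update for emission tomography, often called MLEM, is sketched below on a toy dense system; the paper's actual contribution, the optimal data-replication scheme for parallel execution, is not reproduced here:

```python
import numpy as np

def mlem(A, y, n_iter=20):
    """MLEM update for emission tomography:
    x <- x / (A^T 1) * A^T (y / (A x)).
    A toy dense system matrix stands in for a real PET projector.
    """
    x = np.ones(A.shape[1])
    sens = A.sum(axis=0)                 # sensitivity image A^T 1
    for _ in range(n_iter):
        proj = A @ x                     # forward projection
        ratio = y / np.maximum(proj, 1e-12)
        x *= (A.T @ ratio) / np.maximum(sens, 1e-12)
    return x

# Toy 2-pixel phantom observed through 3 "lines of response"
A = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
x_true = np.array([2.0, 5.0])
y = A @ x_true                           # noise-free data
x_est = mlem(A, y, n_iter=200)
```

    On noise-free data from a well-conditioned system, the iterates converge to the true activities; parallel variants like the paper's distribute the forward and back projections across processors, which is why data replication dominates the cost.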

  4. A review of recent research on improvement of physical parameterizations in the GLA GCM

    Science.gov (United States)

    Sud, Y. C.; Walker, G. K.

    1990-01-01

    A systematic assessment of the effects of a series of improvements in the physical parameterizations of the Goddard Laboratory for Atmospheres (GLA) general circulation model (GCM) is summarized. The implementation of the Simple Biosphere Model (SiB) in the GCM is followed by a comparison of SiB-GCM simulations with the earlier slab soil hydrology GCM (SSH-GCM) simulations. In the Sahelian context, the biogeophysical component of desertification was analyzed for SiB-GCM simulations. Cumulus parameterization is found to be the primary determinant of the organization of the simulated tropical rainfall of the GLA GCM using the Arakawa-Schubert cumulus parameterization. A comparison of model simulations with station data revealed excessive shortwave radiation accompanied by excessive drying and heating of the land. The perpetual July simulations with and without interactive soil moisture show that 30- to 40-day oscillations may be a natural mode of the simulated earth-atmosphere system.

  5. A MODIFIED CUMULUS PARAMETERIZATION SCHEME AND ITS APPLICATION IN THE SIMULATIONS OF THE HEAVY PRECIPITATION CASES

    Institute of Scientific and Technical Information of China (English)

    PING Fan; TANG Xi-ba; YIN Lei

    2016-01-01

    According to the characteristics of organized cumulus convective precipitation in China, a cumulus parameterization scheme suitable for describing organized convective precipitation in East Asia is presented and modified. The Kain-Fritsch scheme is chosen as the scheme to be modified, based on analyses and comparisons of precipitation in East Asia simulated by several commonly used mesoscale parameterization schemes. A key dynamic parameter to dynamically control the cumulus parameterization is then proposed to improve the Kain-Fritsch scheme. Numerical simulations of a typhoon case and a Mei-yu front rainfall case are carried out with the improved scheme, and the results show that the improved version performs better than the original in simulating the track and intensity of the typhoon, as well as the distribution of Mei-yu front precipitation.

  6. Parameterizing the competition between homogeneous and heterogeneous freezing in ice cloud formation – polydisperse ice nuclei

    Directory of Open Access Journals (Sweden)

    D. Barahona

    2009-08-01

    Full Text Available This study presents a comprehensive ice cloud formation parameterization that computes the ice crystal number, size distribution, and maximum supersaturation from precursor aerosol and ice nuclei. The parameterization provides an analytical solution of the cloud parcel model equations and accounts for the competition effects between homogeneous and heterogeneous freezing, and between heterogeneous freezing in different modes. The diversity of heterogeneous nuclei is described through a nucleation spectrum function which is allowed to follow any form (i.e., derived from classical nucleation theory or from observations). The parameterization reproduces the predictions of a detailed numerical parcel model over a wide range of conditions and several expressions for the nucleation spectrum. The average error in ice crystal number concentration was −2.0±8.5% for conditions of pure heterogeneous freezing, and 4.7±21% when both homogeneous and heterogeneous freezing were active. The formulation presented is fast and does not require numerical integration.

  7. Developing a stochastic parameterization to incorporate plant trait variability into ecohydrologic modeling

    Science.gov (United States)

    Liu, S.; Ng, G. H. C.

    2017-12-01

    The global plant trait database has revealed that plant traits can vary more within a plant functional type (PFT) than among different PFTs, indicating that the current paradigm in ecohydrological models of specifying fixed parameters based solely on PFT could potentially bias simulations. Although some recent modeling studies have attempted to incorporate this observed plant trait variability, many failed to consider uncertainties due to sparse global observations, or they omitted spatial and/or temporal variability in the traits. Here we present a parameterization for prognostic vegetation simulations that is stochastic in time and space in order to represent plant trait plasticity - the process by which trait differences arise. We have developed the new PFT parameterization within the Community Land Model 4.5 (CLM 4.5) and tested the method for a desert shrubland watershed in the Mojave Desert, where fixed parameterizations cannot represent acclimation to desert conditions. Spatiotemporally correlated plant trait parameters were first generated based on TRY statistics and were then used to implement ensemble runs for the study area. The new PFT parameterization was then further conditioned on field measurements of soil moisture and remotely sensed observations of leaf area index to constrain uncertainties in the sparse global database. Our preliminary results show that incorporating data-conditioned, variable PFT parameterizations strongly affects simulated soil moisture and water fluxes, compared with default simulations. The results also provide new insights about correlations among plant trait parameters and between traits and environmental conditions in the desert shrubland watershed. Our proposed stochastic PFT parameterization method for ecohydrological models has great potential in advancing our understanding of how terrestrial ecosystems are predicted to adapt to variable environmental conditions.
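    Generating correlated trait parameters instead of fixed PFT constants can be sketched with a Cholesky factorization of an assumed trait covariance. The trait names, statistics, and correlation below are illustrative stand-ins, not actual TRY values, and the spatial dimension is reduced to independent grid cells for brevity:

```python
import numpy as np

rng = np.random.default_rng(42)

# Illustrative trait statistics (means, stds, correlation), loosely in the
# spirit of TRY-style summaries -- not actual TRY numbers.
means = np.array([40.0, 15.0])    # e.g. Vcmax (umol m-2 s-1), SLA (m2 kg-1)
stds  = np.array([8.0, 3.0])
corr  = np.array([[1.0, 0.6],
                  [0.6, 1.0]])    # assumed positive trait correlation

cov = np.outer(stds, stds) * corr
L = np.linalg.cholesky(cov)       # L @ L.T reproduces the covariance

# One correlated trait sample per grid cell, instead of one fixed PFT value
n_cells = 1000
samples = means + rng.standard_normal((n_cells, 2)) @ L.T
```

    In a full implementation the samples would also be correlated across neighboring grid cells and time steps; the Cholesky step shown here is only the within-cell, between-trait part of that construction.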

  8. Computational issues in complex water-energy optimization problems: Time scales, parameterizations, objectives and algorithms

    Science.gov (United States)

    Efstratiadis, Andreas; Tsoukalas, Ioannis; Kossieris, Panayiotis; Karavokiros, George; Christofides, Antonis; Siskos, Alexandros; Mamassis, Nikos; Koutsoyiannis, Demetris

    2015-04-01

    Modelling of large-scale hybrid renewable energy systems (HRES) is a challenging task, for which several open computational issues exist. HRES comprise typical components of hydrosystems (reservoirs, boreholes, conveyance networks, hydropower stations, pumps, water demand nodes, etc.), which are dynamically linked with renewables (e.g., wind turbines, solar parks) and energy demand nodes. In such systems, apart from the well-known shortcomings of water resources modelling (nonlinear dynamics, unknown future inflows, large number of variables and constraints, conflicting criteria, etc.), additional complexities and uncertainties arise due to the introduction of energy components and associated fluxes. A major difficulty is the need for coupling two different temporal scales, given that in hydrosystem modeling, monthly simulation steps are typically adopted, yet for a faithful representation of the energy balance (i.e. energy production vs. demand) a much finer resolution (e.g. hourly) is required. Another drawback is the increase of control variables, constraints and objectives, due to the simultaneous modelling of the two parallel fluxes (i.e. water and energy) and their interactions. Finally, since the driving hydrometeorological processes of the integrated system are inherently uncertain, it is often essential to use synthetically generated input time series of large length, in order to assess the system performance in terms of reliability and risk, with satisfactory accuracy. 
To address these issues, we propose an effective and efficient modeling framework, the key objectives of which are: (a) the substantial reduction of control variables, through parsimonious yet consistent parameterizations; (b) the substantial decrease of the computational burden of simulation, by linearizing the combined water and energy allocation problem of each individual time step and solving each local sub-problem through very fast linear network programming algorithms; and (c) the substantial
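
    The per-time-step linearization in (b) can be illustrated with a toy single-step allocation: one reservoir release split between a turbine and a demand node, solved as a small LP. The system layout, capacities, and cost coefficients are hypothetical, and a real implementation would use a dedicated min-cost network flow solver for speed.

```python
from scipy.optimize import linprog

# Toy linearized allocation for one time step (hypothetical system):
# variables are x1 (release to hydropower), x2 (release to a demand node),
# and the demand deficit. We minimize -(energy value) + penalty * deficit.
available = 100.0        # water available this step [hm3]
demand = 60.0            # demand-node requirement [hm3]

c = [-2.0, 0.0, 10.0]                 # value 2/hm3 turbined; deficit penalty 10
A_ub = [[1.0, 1.0, 0.0]]              # x1 + x2 <= available water
b_ub = [available]
A_eq = [[0.0, 1.0, 1.0]]              # x2 + deficit == demand
b_eq = [demand]
bounds = [(0, 80.0), (0, None), (0, None)]   # turbine capacity 80 hm3/step

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
x1, x2, deficit = res.x               # optimal: meet demand, turbine the rest
```

    Because each step is linear, thousands of such sub-problems (one per simulation step of a long synthetic series) can be solved quickly inside an outer nonlinear optimization of the parsimonious control parameters.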

  9. Hydrologic Process Parameterization of Electrical Resistivity Imaging of Solute Plumes Using POD McMC

    Science.gov (United States)

    Awatey, M. T.; Irving, J.; Oware, E. K.

    2016-12-01

    Markov chain Monte Carlo (McMC) inversion frameworks are becoming increasingly popular in geophysics due to their ability to recover multiple equally plausible geologic features that honor limited, noisy measurements. Standard McMC methods, however, become computationally intractable with increasing dimensionality of the problem, for example when working with spatially distributed geophysical parameter fields. We present a McMC approach based on a sparse proper orthogonal decomposition (POD) model parameterization that implicitly incorporates the physics of the underlying process. First, we generate training images (TIs) via Monte Carlo simulations of the target process constrained to a conceptual model. We then apply POD to construct basis vectors from the TIs. A small number of basis vectors can represent most of the variability in the TIs, leading to dimensionality reduction. A projection of the starting model into the reduced basis space generates the starting POD coefficients. At each iteration, only coefficients within a specified sampling window are resimulated assuming a Gaussian prior. The sampling window grows at a specified rate as the number of iterations increases, starting from the coefficients corresponding to the highest-ranked basis vectors and moving to those of the least informative ones. We found this gradual increase in the sampling window to be more stable than resampling all the coefficients from the first iteration. We demonstrate the performance of the algorithm with both synthetic and lab-scale electrical resistivity imaging of saline tracer experiments, employing the same set of basis vectors for all inversions. We consider two scenarios of unimodal and bimodal plumes. The unimodal plume is consistent with the hypothesis underlying the generation of the TIs, whereas bimodality in plume morphology was not theorized. We show that uncertainty quantification using McMC can proceed in the reduced dimensionality space while accounting for the
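
    The POD construction and projection steps can be sketched with a snapshot SVD. The training images here are random stand-ins; in the study they come from Monte Carlo plume simulations.

```python
import numpy as np

# Minimal POD sketch: build ranked basis vectors from training images (TIs)
# and project a starting model onto a truncated basis.
rng = np.random.default_rng(1)
n_ti, n_param = 200, 400            # 200 TIs, 400-cell parameter field
tis = rng.normal(size=(n_ti, n_param))

mean_ti = tis.mean(axis=0)
# SVD of the mean-removed snapshot matrix: rows of vt are POD basis vectors,
# ranked by singular value (i.e., by explained variability).
u, s, vt = np.linalg.svd(tis - mean_ti, full_matrices=False)

k = 20                              # keep the k highest-ranked basis vectors
basis = vt[:k]                      # shape (k, n_param)

# Starting POD coefficients: projection of a starting model into the reduced
# space; the chain then perturbs these coefficients instead of all 400 cells.
start_model = rng.normal(size=n_param)
coeffs = basis @ (start_model - mean_ti)
recon = mean_ti + basis.T @ coeffs  # back-projection to the full space
```

    The McMC chain then operates on the 20 coefficients, widening its sampling window from the leading (highest-variance) coefficients toward the least informative ones as iterations progress.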

  10. Effects of turbulence on the geometric collision rate of sedimenting droplets. Part 2. Theory and parameterization

    International Nuclear Information System (INIS)

    Ayala, Orlando; Rosa, Bogdan; Wang Lianping

    2008-01-01

    The effect of air turbulence on the geometric collision kernel of cloud droplets can be predicted if the effects of air turbulence on two kinematic pair statistics can be modeled. The first is the average radial relative velocity and the second is the radial distribution function (RDF). A survey of the literature shows that no theory is available for predicting the radial relative velocity of finite-inertia sedimenting droplets in a turbulent flow. In this paper, a theory for the radial relative velocity is developed, using a statistical approach assuming that gravitational sedimentation dominates the relative motion of droplets before collision. In the weak-inertia limit, the theory reveals a new term making a positive contribution to the radial relative velocity, resulting from a coupling between sedimentation and air turbulence on the motion of finite-inertia droplets. The theory is compared to the direct numerical simulation (DNS) results in Part 1, showing a reasonable agreement with the DNS data for bidisperse cloud droplets. For droplets larger than 30 μm in radius, a nonlinear drag (NLD) can also be included in the theory in terms of an effective inertial response time and an effective terminal velocity. In addition, an empirical model is developed to quantify the RDF. This, together with the theory for radial relative velocity, provides a parameterization for the turbulent geometric collision kernel. Using this integrated model, we find that turbulence could triple the geometric collision kernel, relative to the stagnant-air case, for a droplet pair of 10 and 20 μm sedimenting through a cumulus cloud at R_λ = 2×10⁴ and ε = 600 cm² s⁻³. For the self-collisions of 20 μm droplets, the collision kernel depends sensitively on the flow dissipation rate

  11. IMPROVED PARAMETERIZATION OF WATER CLOUD MODEL FOR HYBRID-POLARIZED BACKSCATTER SIMULATION USING INTERACTION FACTOR

    Directory of Open Access Journals (Sweden)

    S. Chauhan

    2017-07-01

    Full Text Available The prime aim of this study was to assess the potential of the semi-empirical water cloud model (WCM) in simulating hybrid-polarized SAR backscatter signatures (RH and RV) retrieved from RISAT-1 data, and to integrate the results into a graphical user interface (GUI) to facilitate easy comprehension and interpretation. A predominantly agricultural wheat-growing area was selected in the Mathura and Bharatpur districts, located in the Indian states of Uttar Pradesh and Rajasthan respectively, to carry out the study. Datasets were acquired on three dates covering the crucial growth stages of the wheat crop, and fieldwork was organized in synchrony to measure crop/soil parameters. The RH and RV backscattering coefficient images were extracted from the SAR data for all three dates. The effect of four combinations of vegetation descriptors (V1 and V2), viz. LAI-LAI, LAI-plant water content (PWC), leaf water area index (LWAI)-LWAI, and LAI-interaction factor (IF), on the total RH and RV backscatter was analyzed. The results revealed that the WCM calibrated with LAI and IF as the two vegetation descriptors simulated the total RH and RV backscatter values with the highest R² (0.90 and 0.85) and the lowest RMSE among the tested models (1.18 and 1.25 dB, respectively). The theoretical considerations and interpretations are discussed and examined in the paper. The novelty of this work emanates from the fact that it is a first step towards the modeling of hybrid-polarized backscatter data using an accurately parameterized semi-empirical approach.
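
    The WCM's general structure (in its common Attema-Ulaby form) can be sketched as below: canopy scattering plus soil scattering attenuated by the two-way canopy transmissivity. The A and B coefficients, incidence angle, and inputs are hypothetical, not the calibrated values from this study.

```python
import numpy as np

# Sketch of the semi-empirical water cloud model (Attema-Ulaby form).
# A, B and all inputs below are illustrative assumptions.
def wcm_backscatter(v1, v2, sigma_soil, theta_deg, A=0.12, B=0.25):
    """Total linear-scale backscatter: vegetation term + attenuated soil."""
    cos_t = np.cos(np.radians(theta_deg))
    tau2 = np.exp(-2.0 * B * v2 / cos_t)        # two-way canopy attenuation
    sigma_veg = A * v1 * cos_t * (1.0 - tau2)   # direct canopy contribution
    return sigma_veg + tau2 * sigma_soil

# Example: LAI used as both vegetation descriptors (the LAI-LAI combination).
sigma = wcm_backscatter(v1=3.0, v2=3.0, sigma_soil=0.05, theta_deg=35.0)
sigma_db = 10.0 * np.log10(sigma)               # convert to dB for comparison
```

    Calibration then amounts to fitting A and B (per polarization channel) against the measured RH and RV backscatter for each descriptor combination.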

  12. 3D surface parameterization using manifold learning for medial shape representation

    Science.gov (United States)

    Ward, Aaron D.; Hamarneh, Ghassan

    2007-03-01

    The choice of 3D shape representation for anatomical structures determines the effectiveness with which segmentation, visualization, deformation, and shape statistics are performed. Medial axis-based shape representations have attracted considerable attention due to their inherent ability to encode information about the natural geometry of parts of the anatomy. In this paper, we propose a novel approach, based on nonlinear manifold learning, to the parameterization of medial sheets and object surfaces based on the results of skeletonization. For each single-sheet figure in an anatomical structure, we skeletonize the figure, and classify its surface points according to whether they lie on the upper or lower surface, based on their relationship to the skeleton points. We then perform nonlinear dimensionality reduction on the skeleton, upper, and lower surface points, to find the intrinsic 2D coordinate system of each. We then center a planar mesh over each of the low-dimensional representations of the points, and map the meshes back to 3D using the mappings obtained by manifold learning. Correspondence between mesh vertices, established in their intrinsic 2D coordinate spaces, is used in order to compute the thickness vectors emanating from the medial sheet. We show results of our algorithm on real brain and musculoskeletal structures extracted from MRI, as well as an artificial multi-sheet example. The main advantages to this method are its relative simplicity and noniterative nature, and its ability to correctly compute nonintersecting thickness vectors for a medial sheet regardless of both the amount of coincident bending and thickness in the object, and of the incidence of local concavities and convexities in the object's surface.
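
    The intrinsic-coordinate step can be sketched with a minimal Isomap-style reduction: a k-nearest-neighbour graph, geodesic distances, and classical MDS. This is a generic stand-in for the manifold-learning step, not the paper's specific algorithm, and the bent-sheet point cloud is synthetic.

```python
import numpy as np
from scipy.sparse.csgraph import shortest_path
from scipy.spatial.distance import cdist

# Synthetic stand-in for a medial sheet: a gently folded 2D sheet in 3D,
# sampled on a 15x15 grid so the neighbour graph is connected.
g = np.linspace(0.0, 1.0, 15)
u, v = np.meshgrid(g, g)
u, v = u.ravel(), v.ravel()
pts = np.column_stack([u, v, 0.3 * np.sin(3 * u)])
n = len(pts)

# k-nearest-neighbour graph of Euclidean distances (inf marks non-edges).
k = 8
d = cdist(pts, pts)
graph = np.full((n, n), np.inf)
idx = np.argsort(d, axis=1)[:, 1:k + 1]
rows = np.repeat(np.arange(n), k)
graph[rows, idx.ravel()] = d[rows, idx.ravel()]
graph = np.minimum(graph, graph.T)              # symmetrize the graph

# Geodesic distances along the sheet, then classical MDS for 2D coordinates.
geo = shortest_path(graph, method="D")
h = np.eye(n) - np.ones((n, n)) / n
b = -0.5 * h @ (geo ** 2) @ h
w, vec = np.linalg.eigh(b)
coords2d = vec[:, -2:] * np.sqrt(np.maximum(w[-2:], 0.0))   # (n, 2)
```

    A planar mesh centered over `coords2d` can then be mapped back to 3D through the learned embedding, giving corresponding mesh vertices on the upper and lower surfaces.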

  13. Zlib: A numerical library for optimal design of truncated power series algebra and map parameterization routines

    International Nuclear Information System (INIS)

    Yan, Y.T.

    1996-11-01

    A brief review of the Zlib development is given. Emphasis is placed on the Zlib nerve system, which uses One-Step Index Pointers (OSIPs) for efficient computation and flexible use of the Truncated Power Series Algebra (TPSA), and on the treatment of parameterized maps with an object-oriented language (e.g. C++). A parameterized map can be a Vector Power Series (Vps) or a Lie generator represented by an exponent of a Truncated Power Series (Tps), each coefficient of which is itself a truncated power series object

  14. The Histogram-Area Connection

    Science.gov (United States)

    Gratzer, William; Carpenter, James E.

    2008-01-01

    This article demonstrates an alternative approach to the construction of histograms--one based on the notion of using area to represent relative density in intervals of unequal length. The resulting histograms illustrate the connection between the area of the rectangles associated with particular outcomes and the relative frequency (probability)…
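
    The area idea can be made concrete with a small numerical check: with unequal-width intervals, each bar's height is the relative-frequency density (relative frequency divided by width), so each bar's area equals the interval's relative frequency and the areas sum to 1. The data and interval edges below are illustrative.

```python
import numpy as np

# Area-based histogram with unequal interval widths (illustrative data).
data = np.array([2, 3, 3, 5, 7, 8, 11, 14, 15, 19, 23, 29])
edges = np.array([0, 5, 10, 20, 30])          # widths 5, 5, 10, 10

counts, _ = np.histogram(data, bins=edges)
widths = np.diff(edges)
density = counts / (counts.sum() * widths)    # bar heights (density)

areas = density * widths                      # each bar's area = rel. freq.
```

    Plotting `density` against the interval edges (e.g. `plt.hist(data, bins=edges, density=True)`) produces exactly this area-faithful histogram.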

  15. Application of Force and Energy Approaches to the Problem of a One-Dimensional, Fully Connected, Nonlinear-Spring Lattice Structure

    Science.gov (United States)

    2015-09-01

    … section of the body. In general, this force balancing requires vectorial addition; however, because the problem under consideration is a 1-D lattice … than 1, the formulations would be still more intricate, as vectorial calculations would be required for component resolution. In the force approach …

  16. Dispersal and habitat connectivity in complex heterogeneous landscapes: an analysis with a GIS based random walk model

    NARCIS (Netherlands)

    Schippers, P.; Verboom, J.; Knaapen, J.P.; Apeldoorn, van R.

    1996-01-01

    A grid-based random walk model has been developed to simulate animal dispersal, taking landscape heterogeneity and linear barriers such as roads and rivers into account. The model can be used to estimate connectivity and has been parameterized for the badger in the central part of the Netherlands.
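
    The model's core mechanic can be sketched as below: a walker on a grid whose attempts to cross a linear barrier (e.g. a road) succeed only with a given probability. The barrier position and crossing probability are illustrative assumptions, not the parameterized values for the badger.

```python
import numpy as np

# Minimal grid random walk with a linear barrier at column x = barrier_x.
# All values are illustrative; the actual model is parameterized per species.
def random_walk(steps, barrier_x=10, p_cross=0.2, seed=0):
    rng = np.random.default_rng(seed)
    x, y = 0, 0
    moves = [(1, 0), (-1, 0), (0, 1), (0, -1)]
    for _ in range(steps):
        dx, dy = moves[rng.integers(4)]
        nx = x + dx
        # An attempted step across the barrier succeeds with prob p_cross.
        crosses = (x < barrier_x <= nx) or (nx < barrier_x <= x)
        if crosses and rng.random() > p_cross:
            continue                     # crossing blocked; walker stays put
        x, y = nx, y + dy
    return x, y

x, y = random_walk(5000)
```

    Connectivity between habitat patches can then be estimated by releasing many walkers and counting the fraction that reach a target patch within a dispersal-time budget.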

  17. A three-step approach to minimise the impact of a mining site on vicuña (Vicugna vicugna) and to restore landscape connectivity.

    Science.gov (United States)

    Mata, Cristina; Malo, Juan E; Galaz, José Luis; Cadorzo, César; Lagunas, Héctor

    2016-07-01

    Resource extraction projects generate a diversity of negative effects on the environment that are difficult to predict and mitigate. Consequently, adaptive management approaches have been advocated to develop effective responses to impacts that were not predicted. Mammal populations living in or around mine sites are frequently of management concern; yet, there is a dearth of published information on how to minimise the negative effects of different phases of mining operations on them. Here, we present the case study of a copper mine in the Chilean Altiplano, which caused roadkills of the protected vicuña (Vicugna vicugna). This issue led to a three-step solution being implemented: (1) the initial identification of the problem and implementation of an emergency response, (2) the scientific analysis for decision making and (3) the planning and informed implementation of responses for different future scenarios and timescales. The measures taken under each of these steps provide examples of environmental management approaches that make use of scientific information to develop integrated management responses. In brief, our case study showed how (1) the timescale and the necessity/urgency of the case were addressed, (2) the various stakeholders involved were taken into account and (3) changes were included into the physical, human and organisational elements of the company to achieve the stated objectives.

  18. Parameterizing radiative transfer to convert MAX-DOAS dSCDs into near-surface box-averaged mixing ratios

    Directory of Open Access Journals (Sweden)

    R. Sinreich

    2013-06-01

    Full Text Available We present a novel parameterization method to convert multi-axis differential optical absorption spectroscopy (MAX-DOAS) differential slant column densities (dSCDs) into near-surface box-averaged volume mixing ratios. The approach is applicable inside the planetary boundary layer under conditions with significant aerosol load, and builds on the increased sensitivity of MAX-DOAS near the instrument altitude. It parameterizes radiative transfer model calculations and significantly reduces the computational effort, while retrieving ~ 1 degree of freedom. The biggest benefit of this method is that the retrieval of an aerosol profile, which is usually necessary for deriving a trace gas concentration from MAX-DOAS dSCDs, is not needed. The method is applied to NO2 MAX-DOAS dSCDs recorded during the Mexico City Metropolitan Area 2006 (MCMA-2006) measurement campaign. The retrieved volume mixing ratios at two elevation angles (1° and 3°) are compared to volume mixing ratios measured by two long-path (LP) DOAS instruments located at the same site. The measurements are found to agree well during times when vertical mixing is expected to be strong. However, inhomogeneities in the air mass above Mexico City can be detected by exploiting the different horizontal and vertical dimensions probed by the MAX-DOAS and LP-DOAS instruments. In particular, a vertical gradient in NO2 close to the ground can be observed in the afternoon, and is attributed to reduced mixing coupled with near-surface emission inside street canyons. The existence of a vertical gradient in the lower 250 m during parts of the day shows the general challenge of sampling the boundary layer in a representative way, and emphasizes the need for vertically resolved measurements.
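
    The final conversion step can be illustrated in simplified form: once the parameterization supplies a differential effective light path for a low elevation angle, the dSCD divides into a concentration, which the ambient air density turns into a mixing ratio. The path length, dSCD, and ambient conditions below are hypothetical, not the MCMA-2006 retrieval values.

```python
# Simplified illustration of converting a dSCD to a box-averaged mixing
# ratio. All numbers are hypothetical placeholders.
BOLTZMANN = 1.380649e-23        # J/K

dscd = 4.0e16                   # NO2 dSCD [molecules/cm^2]
dl_km = 8.0                     # assumed differential effective path [km]

conc = dscd / (dl_km * 1.0e5)   # molecules/cm^3  (1 km = 1e5 cm)
p, t = 78000.0, 288.0           # ambient pressure [Pa], temperature [K]
n_air = p / (BOLTZMANN * t) * 1.0e-6    # air number density [molecules/cm^3]
vmr_ppb = conc / n_air * 1.0e9          # box-averaged mixing ratio [ppb]
```

    The parameterization's role is to supply that effective path length from a small set of radiative transfer model runs instead of a full aerosol-profile retrieval.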

  19. A new approach to study of seabird-fishery overlap: Connecting chick feeding with parental foraging and overlap with fishing vessels

    Directory of Open Access Journals (Sweden)

    Junichi Sugishita

    2015-07-01

    Full Text Available Incidental fisheries bycatch is recognised as a major threat to albatross populations worldwide. However, fishery discards and offal produced in large quantities might benefit some scavenging seabirds. Here, we demonstrate an integrated approach to better understand the ecological ramifications of fine-scale overlap between seabirds and fisheries. As a case study, we examined whether foraging in association with a fishing vessel is advantageous for chick provisioning, in terms of the quantity of food delivered to chicks, in the northern royal albatross (Diomedea sanfordi) at Taiaroa Head, New Zealand. Fine-scale overlap between albatrosses and vessels was quantified by integrating GPS tracking and Vessel Monitoring System (VMS) data. Meal size delivered to chicks was measured using custom-designed nest balances, and monitoring of the attendance of adults fitted with radio transmitters, used in conjunction with time-lapse photography at the nest, allowed us to allocate each feeding event to a specific parent. The combination of these techniques enabled comparison of meal sizes delivered to chicks with parental foraging trip durations with or without fishing-vessel association. A total of 45 foraging trips and associated chick feeding events were monitored during the chick-rearing period in 2012. Differences in meal size and foraging trip duration relative to foraging overlap with fisheries were examined using a linear mixed-effect model, adjusted for chick age. Our results, based on three birds, suggest that foraging in association with vessels does not confer an advantage for chick feeding in this population, which showed low rates of overlap while foraging. The integrated research design presented can be applied to other seabird species that are susceptible to bycatch, and offers a valuable approach to evaluating habitat quality by linking habitat use and foraging success in terms of the total amount of food delivered to offspring.

  20. Parameterization of ion-induced nucleation rates based on ambient observations

    Directory of Open Access Journals (Sweden)

    T. Nieminen

    2011-04-01

    Full Text Available Atmospheric ions participate in the formation of new atmospheric aerosol particles, yet their exact role in this process has remained unclear. Here we derive a new simple parameterization for ion-induced nucleation or, more precisely, for the formation rate of charged 2-nm particles. The parameterization is semi-empirical in the sense that it is based on comprehensive results of one-year-long atmospheric cluster and particle measurements in the size range ~1–42 nm within the EUCAARI (European Integrated project on Aerosol Cloud Climate and Air Quality interactions) project. Data from 12 field sites across Europe, measured with different types of air ion and cluster mobility spectrometers, were used in our analysis, with more in-depth analysis made using data from four stations with concomitant sulphuric acid measurements. The parameterization is given in two slightly different forms: a more accurate one that requires information on sulfuric acid and nucleating organic vapour concentrations, and a simpler one in which this information is replaced with the global radiation intensity. These new parameterizations are applicable to all large-scale atmospheric models containing size-resolved aerosol microphysics and a scheme to calculate concentrations of sulphuric acid, condensing organic vapours, and cluster ions.
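
    The general shape of such a semi-empirical formula can be sketched as a power law in the precursor vapour concentrations. The prefactor and exponents below are hypothetical placeholders, not the fitted EUCAARI coefficients, which are given in the paper.

```python
# Generic sketch of a semi-empirical formation-rate formula for charged
# 2-nm particles. Coefficients are hypothetical placeholders.
def ion_induced_rate(h2so4, organics, a=1.0e-15, p=1.0, q=1.0):
    """Formation rate J2 [cm^-3 s^-1] from vapour concentrations [cm^-3]."""
    return a * (h2so4 ** p) * (organics ** q)

# Typical boundary-layer vapour concentrations (order of magnitude only).
j2 = ion_induced_rate(h2so4=5.0e6, organics=2.0e7)
```

    In the simpler variant described in the abstract, the vapour concentrations would be replaced by a term in the measured global radiation intensity.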

  1. A parameterization scheme for the x-ray linear attenuation coefficient and energy absorption coefficient.

    Science.gov (United States)

    Midgley, S M

    2004-01-21

    A novel parameterization of x-ray interaction cross-sections is developed and employed to describe the x-ray linear attenuation coefficient and mass energy absorption coefficient for both elements and mixtures. The new parameterization scheme addresses the Z-dependence of elemental cross-sections (per electron) using a simple function of atomic number, Z. This obviates the need for a complicated mathematical formalism. Energy-dependent coefficients describe the Z-direction curvature of the cross-sections. The composition-dependent quantities are the electron density and statistical moments describing the elemental distribution. We show that it is possible to describe elemental cross-sections for the entire periodic table and at energies above the K-edge (from 6 keV to 125 MeV), with an accuracy of better than 2%, using a parameterization containing not more than five coefficients. For the biologically important elements, fewer coefficients are required. At higher energies, the parameterization uses fewer coefficients still, with only two needed at megavoltage energies.

  2. Evaluating parameterizations of aerodynamic resistance to heat transfer using field measurements

    Directory of Open Access Journals (Sweden)

    Shaomin Liu

    2007-01-01

    Full Text Available Parameterizations of aerodynamic resistance to heat and water transfer have a significant impact on the accuracy of models of land-atmosphere interactions and of surface fluxes estimated using spectro-radiometric data collected from aircraft and satellites. We have used measurements from an eddy correlation system to derive the aerodynamic resistance to heat transfer over a bare soil surface as well as over a maize canopy. Diurnal variations of aerodynamic resistance have been analyzed. The results showed that the diurnal variation of aerodynamic resistance during daytime (07:00 h–18:00 h) was significant for both the bare soil surface and the maize canopy, although the range of variation was limited. Based on the measurements made by the eddy correlation system, a comprehensive evaluation of eight popularly used parameterization schemes of aerodynamic resistance was carried out. The roughness length for heat transfer is a crucial parameter in the estimation of aerodynamic resistance to heat transfer and can neither be taken as a constant nor be neglected. Compared with the measurements, the parameterizations by Choudhury et al. (1986), Viney (1991), and Yang et al. (2001), and the modified forms of Verma et al. (1976) and Mahrt and Ek (1984) obtained by inclusion of the roughness length for heat transfer, gave good agreement, while the parameterizations by Hatfield et al. (1983) and Xie (1988) showed larger errors even though the roughness length for heat transfer was taken into account.
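
    The role of the heat-transfer roughness length can be sketched with the standard neutral-stability bulk formula, in which the aerodynamic resistance involves both the momentum roughness z0m and the heat roughness z0h (often written via the kB⁻¹ = ln(z0m/z0h) parameter). The canopy values below are illustrative, not the paper's measurements.

```python
import numpy as np

# Neutral-stability aerodynamic resistance to heat transfer (bulk formula).
# Illustrative maize-canopy values; stability corrections are omitted.
def r_ah_neutral(u, z=10.0, d=1.3, z0m=0.2, kb_inv=2.3, k=0.4):
    """Resistance [s/m]; z0h = z0m / exp(kB^-1)."""
    z0h = z0m / np.exp(kb_inv)
    return np.log((z - d) / z0m) * np.log((z - d) / z0h) / (k ** 2 * u)

r_with = r_ah_neutral(u=3.0)                  # z0h < z0m (kB^-1 = 2.3)
r_without = r_ah_neutral(u=3.0, kb_inv=0.0)   # heat roughness ignored
```

    Setting kB⁻¹ = 0 (i.e., z0h = z0m) noticeably underestimates the resistance, which is one way to see why z0h can neither be fixed nor neglected.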

  3. Impact of different parameterization schemes on simulation of mesoscale convective system over south-east India

    Science.gov (United States)

    Madhulatha, A.; Rajeevan, M.

    2018-02-01

    The main objective of the present paper is to examine the role of various parameterization schemes in simulating the evolution of a mesoscale convective system (MCS) that occurred over south-east India. Using the Weather Research and Forecasting (WRF) model, numerical experiments are conducted with various planetary boundary layer, microphysics, and cumulus parameterization schemes. The performance of the different schemes is evaluated by examining the boundary layer, reflectivity, and precipitation features of the MCS against ground-based and satellite observations. Among the various physical parameterization schemes, the Mellor-Yamada-Janjic (MYJ) boundary layer scheme is able to produce a deep boundary layer by simulating the warm temperatures necessary for storm initiation; the Thompson (THM) microphysics scheme is able to simulate the reflectivity through a reasonable distribution of the different hydrometeors during the various stages of the system; and the Betts-Miller-Janjic (BMJ) cumulus scheme is able to capture the precipitation through a proper representation of the convective instability associated with the MCS. The present analysis suggests that MYJ, a local turbulent-kinetic-energy boundary layer scheme that accounts for strong vertical mixing; THM, a six-class hybrid-moment microphysics scheme that considers number concentration along with the mixing ratio of rain hydrometeors; and BMJ, a closure cumulus scheme that adjusts thermodynamic profiles based on climatological profiles, might have contributed to the better performance of the respective model simulations. A numerical simulation carried out using the above combination of schemes captures the storm initiation, propagation, surface variations, thermodynamic structure, and precipitation features reasonably well. This study clearly demonstrates that the simulation of MCS characteristics is highly sensitive to the choice of parameterization schemes.
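
    In WRF, a combination like the one identified above is selected through the physics section of `namelist.input`. A sketch of the relevant entries (standard WRF option indices for these schemes; other required physics settings are omitted here):

```fortran
&physics
 mp_physics        = 8,   ! Thompson microphysics
 cu_physics        = 2,   ! Betts-Miller-Janjic cumulus
 bl_pbl_physics    = 2,   ! Mellor-Yamada-Janjic PBL
 sf_sfclay_physics = 2,   ! Eta similarity surface layer (pairs with MYJ)
/
```

    The sensitivity experiments then amount to rerunning the case while varying these indices one family at a time.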

  4. Ocean's response to Hurricane Frances and its implications for drag coefficient parameterization at high wind speeds

    KAUST Repository

    Zedler, S. E.

    2009-04-25

    The drag coefficient parameterization of wind stress is investigated for tropical storm conditions using model sensitivity studies. The Massachusetts Institute of Technology (MIT) Ocean General Circulation Model was run in a regional setting with realistic stratification and forcing fields representing Hurricane Frances, which in early September 2004 passed east of the Caribbean Leeward Island chain. The model was forced with a NOAA-HWIND wind speed product after converting it to wind stress using four different drag coefficient parameterizations. Respective model results were tested against in situ measurements of temperature profiles and velocity, available from an array of 22 surface drifters and 12 subsurface floats. Changing the drag coefficient parameterization from one that saturated at a value of 2.3 × 10⁻³ to a constant drag coefficient of 1.2 × 10⁻³ reduced the standard deviation difference between the simulated minus the measured sea surface temperature change from 0.8°C to 0.3°C. Additionally, the standard deviation in the difference between simulated minus measured high-pass-filtered 15-m current speed reduced from 15 cm/s to 5 cm/s. The maximum difference in sea surface temperature response when two different turbulent mixing parameterizations were implemented was 0.3°C, i.e., only 11% of the maximum change of sea surface temperature caused by the storm. Copyright 2009 by the American Geophysical Union.
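
    The two drag-coefficient choices can be contrasted with the bulk stress formula τ = ρ·C_d·U². The saturating curve's wind-speed dependence below is a schematic assumption; only the saturation value (2.3 × 10⁻³) and the constant value (1.2 × 10⁻³) come from the abstract.

```python
import numpy as np

# Bulk wind stress under two drag-coefficient parameterizations.
RHO_AIR = 1.2    # air density [kg/m^3]

def wind_stress(u10, cd):
    """Surface wind stress [N/m^2] from 10-m wind speed [m/s]."""
    return RHO_AIR * cd * u10 ** 2

def cd_saturating(u10):
    # Schematic linear growth capping at 2.3e-3 for hurricane-force winds.
    return np.minimum(1.0e-3 + 5.0e-5 * u10, 2.3e-3)

u10 = 50.0                                        # hurricane-force wind
tau_sat = wind_stress(u10, cd_saturating(u10))    # saturated-Cd stress
tau_const = wind_stress(u10, 1.2e-3)              # constant-Cd stress
```

    At 50 m/s the saturated parameterization delivers nearly twice the stress of the constant one, which is why the choice so strongly affects the simulated cold wake.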

  5. Multi-sensor remote sensing parameterization of heat fluxes over heterogeneous land surfaces

    NARCIS (Netherlands)

    Faivre, R.D.

    2014-01-01

    The parameterization of heat transfer by remote sensing, and based on SEBS scheme for turbulent heat fluxes retrieval, already proved to be very convenient for estimating evapotranspiration (ET) over homogeneous land surfaces. However, the use of such a method over heterogeneous landscapes (e.g.

  6. Inclusion of Solar Elevation Angle in Land Surface Albedo Parameterization Over Bare Soil Surface.

    Science.gov (United States)

    Zheng, Zhiyuan; Wei, Zhigang; Wen, Zhiping; Dong, Wenjie; Li, Zhenchao; Wen, Xiaohang; Zhu, Xian; Ji, Dong; Chen, Chen; Yan, Dongdong

    2017-12-01

    Land surface albedo is a significant parameter for maintaining the surface energy balance. Accurate parameterization of bare-soil surface albedo is also important for developing land surface process models that capture the diurnal variation characteristics and mechanism of solar spectral radiation albedo on bare soil surfaces, and for understanding the relationships between climate factors and spectral radiation albedo. Using a data set of field observations, we conducted experiments to analyze the variation characteristics of land surface solar spectral radiation and the corresponding albedo over a typical Gobi bare-soil underlying surface, and to investigate the relationships between the land surface solar spectral radiation albedo, solar elevation angle, and soil moisture. Based on simultaneous measurements of both solar elevation angle and soil moisture, we propose a new two-factor parameterization scheme for spectral radiation albedo over bare-soil underlying surfaces. The results of numerical simulation experiments show that the new parameterization scheme depicts the diurnal variation characteristics of bare-soil surface albedo more accurately than previous schemes. Solar elevation angle is one of the most important factors for parameterizing bare-soil surface albedo and must be considered in the parameterization scheme, especially in arid and semiarid areas with low soil moisture content. This study reveals the characteristics and mechanism of the diurnal variation of bare-soil surface solar spectral radiation albedo and is helpful for developing land surface process, weather, and climate models.
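
    A two-factor scheme of this kind can be sketched schematically: albedo decreasing with solar elevation angle (lower at high sun) and with soil moisture. The functional form and all coefficients below are illustrative assumptions, not the paper's fitted scheme.

```python
import numpy as np

# Schematic two-factor bare-soil albedo: elevation-angle and soil-moisture
# dependence. Functional form and coefficients are illustrative only.
def soil_albedo(elev_deg, soil_moisture, a=0.18, b=1.5, c=0.20, d=0.25):
    """Albedo from solar elevation [deg] and volumetric soil moisture [-]."""
    h = np.radians(elev_deg)
    return (a + c * np.exp(-b * np.sin(h))) * (1.0 - d * soil_moisture)

morning = soil_albedo(elev_deg=15.0, soil_moisture=0.05)   # low sun
noon = soil_albedo(elev_deg=70.0, soil_moisture=0.05)      # high sun
```

    The qualitative behaviour (higher albedo at low sun, lower over wetter soil) is what the diurnal field observations constrain; the calibration determines the actual coefficients.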

  7. Assessing Impact, DIF, and DFF in Accommodated Item Scores: A Comparison of Multilevel Measurement Model Parameterizations

    Science.gov (United States)

    Beretvas, S. Natasha; Cawthon, Stephanie W.; Lockhart, L. Leland; Kaye, Alyssa D.

    2012-01-01

    This pedagogical article is intended to explain the similarities and differences between the parameterizations of two multilevel measurement model (MMM) frameworks. The conventional two-level MMM that includes item indicators and models item scores (Level 1) clustered within examinees (Level 2) and the two-level cross-classified MMM (in which item…

  8. Integrated cumulus ensemble and turbulence (ICET): An integrated parameterization system for general circulation models (GCMs)

    Energy Technology Data Exchange (ETDEWEB)

    Evans, J.L.; Frank, W.M.; Young, G.S. [Pennsylvania State Univ., University Park, PA (United States)

    1996-04-01

    Successful simulations of the global circulation and climate require accurate representation of the properties of shallow and deep convective clouds, stable-layer clouds, and the interactions between various cloud types, the boundary layer, and the radiative fluxes. Each of these phenomena plays an important role in the global energy balance, and each must be parameterized in a global climate model. These processes are highly interactive. One major problem limiting the accuracy of parameterizations of clouds and other processes in general circulation models (GCMs) is that most of the parameterization packages are not linked with a common physical basis. Further, these schemes have not, in general, been rigorously verified against observations adequate to the task of resolving subgrid-scale effects. To address these problems, we are designing a new Integrated Cumulus Ensemble and Turbulence (ICET) parameterization scheme, installing it in a climate model (CCM2), and evaluating the performance of the new scheme using data from Atmospheric Radiation Measurement (ARM) Program Cloud and Radiation Testbed (CART) sites.

  9. Incorporating field wind data to improve crop evapotranspiration parameterization in heterogeneous regions

    Science.gov (United States)

    Accurate parameterization of reference evapotranspiration (ET0) is necessary for optimizing irrigation scheduling and avoiding costs associated with over-irrigation (water expense, loss of water productivity, energy costs, pollution) or with under-irrigation (crop stress and suboptimal yields or qua...

  10. Efficient Parameterization for Grey-box Model Identification of Complex Physical Systems

    DEFF Research Database (Denmark)

    Blanke, Mogens; Knudsen, Morten Haack

    2006-01-01

    Grey-box model identification preserves known physical structures in a model, but with limits to the possible excitation, all parameters are rarely identifiable, and different parameterizations give significantly different model quality. Convenient methods to show which parameterizations are the be...... that need be constrained to achieve satisfactory convergence. Identification of nonlinear models for a ship illustrates the concept....

  11. Aerosol-Cloud-Precipitation Interactions in WRF Model:Sensitivity to Autoconversion Parameterization

    Institute of Scientific and Technical Information of China (English)

    解小宁; 刘晓东

    2015-01-01

    Cloud-to-rain autoconversion is an important player in aerosol loading, cloud morphology, and precipitation variations because it can modulate cloud microphysical characteristics depending on the participation of aerosols, and affects the spatio-temporal distribution and total amount of precipitation. By applying the Kessler, the Khairoutdinov-Kogan (KK), and the Dispersion autoconversion parameterization schemes in a set of sensitivity experiments, the indirect effects of aerosols on clouds and precipitation are investigated for a deep convective cloud system in Beijing under various aerosol concentration backgrounds from 50 to 10000 cm⁻³. Numerical experiments show that aerosol-induced precipitation change is strongly dependent on the autoconversion parameterization scheme. For the Kessler scheme, the average cumulative precipitation is enhanced slightly with increasing aerosols, whereas surface precipitation is reduced significantly with increasing aerosols for the KK scheme. Moreover, precipitation varies non-monotonically for the Dispersion scheme, increasing with aerosols at lower concentrations and decreasing at higher concentrations. These different trends of aerosol-induced precipitation change are mainly ascribed to differences in rain water content under these three autoconversion parameterization schemes. Therefore, this study suggests that accurate parameterization of cloud microphysical processes, particularly the cloud-to-rain autoconversion process, is needed for improving the scientific understanding of aerosol-cloud-precipitation interactions.
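
    The aerosol sensitivity of the KK scheme can be seen directly in its rate expression, which decreases with droplet number concentration. The coefficients below follow the commonly cited Khairoutdinov-Kogan (2000) fit; verify against the scheme actually implemented in WRF before reuse.

```python
# Khairoutdinov-Kogan (2000)-type autoconversion rate: rain production falls
# as droplet number Nc rises, which is why precipitation decreases with
# aerosol loading under this scheme.
def kk_autoconversion(qc, nc):
    """dqr/dt [kg/kg/s]; qc cloud water [kg/kg], nc droplet number [cm^-3]."""
    return 1350.0 * qc ** 2.47 * nc ** -1.79

clean = kk_autoconversion(qc=1.0e-3, nc=50.0)       # clean background
polluted = kk_autoconversion(qc=1.0e-3, nc=1000.0)  # heavy aerosol loading
```

    At fixed cloud water, a twenty-fold increase in droplet number suppresses the autoconversion rate by more than two orders of magnitude, whereas the Kessler scheme has no Nc dependence at all.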

  12. Parameterizing Subgrid-Scale Orographic Drag in the High-Resolution Rapid Refresh (HRRR) Atmospheric Model

    Science.gov (United States)

    Toy, M. D.; Olson, J.; Kenyon, J.; Smirnova, T. G.; Brown, J. M.

    2017-12-01

    The accuracy of wind forecasts in numerical weather prediction (NWP) models is improved when the drag forces imparted on atmospheric flow by subgrid-scale orography are included. Without such parameterizations, only the terrain resolved by the model grid, along with the small-scale obstacles parameterized by the roughness lengths, can have an effect on the flow. This neglects the impacts of subgrid-scale terrain variations, which typically leads to wind speeds that are too strong. Using statistical information about the subgrid-scale orography, such as the mean and variance of the topographic height within a grid cell, the drag forces due to flow blocking, gravity wave drag, and turbulent form drag are estimated and distributed vertically throughout the grid cell column. We recently implemented the small-scale gravity wave drag parameterization of Steeneveld et al. (2008) and Tsiringakis et al. (2017) for stable planetary boundary layers, and the turbulent form drag parameterization of Beljaars et al. (2004) in the High-Resolution Rapid Refresh (HRRR) NWP model developed at the National Oceanic and Atmospheric Administration (NOAA). As a result, a high surface wind speed bias in the model has been reduced, and a small improvement to the maintenance of stable layers has also been found. We present the results of experiments with the subgrid-scale orographic drag parameterization for the regional HRRR model, as well as for a global model in development at NOAA, showing the direct and indirect impacts.

  13. Framework to parameterize and validate APEX to support deployment of the nutrient tracking tool

    Science.gov (United States)

    Guidelines have been developed to parameterize and validate the Agricultural Policy Environmental eXtender (APEX) to support the Nutrient Tracking Tool (NTT). This follow-up paper presents 1) a case study to illustrate how the developed guidelines are applied in a headwater watershed located in cent...

  14. Impact of APEX parameterization and soil data on runoff, sediment, and nutrients transport assessment

    Science.gov (United States)

    Hydrological models have become essential tools for environmental assessments. This study’s objective was to evaluate a best professional judgment (BPJ) parameterization of the Agricultural Policy and Environmental eXtender (APEX) model with soil-survey data against the calibrated model with either ...

  15. A unified spectral parameterization for wave breaking: From the deep ocean to the surf zone

    Science.gov (United States)

    Filipot, J.-F.; Ardhuin, F.

    2012-11-01

    A new wave-breaking dissipation parameterization designed for phase-averaged spectral wave models is presented. It combines wave breaking basic physical quantities, namely, the breaking probability and the dissipation rate per unit area. The energy lost by waves is first explicitly calculated in physical space before being distributed over the relevant spectral components. The transition from deep to shallow water is made possible by using a dissipation rate per unit area of breaking waves that varies with the wave height, wavelength and water depth. This parameterization is implemented in the WAVEWATCH III modeling framework, which is applied to a wide range of conditions and scales, from the global ocean to the beach scale. Wave height, peak and mean periods, and spectral data are validated using in situ and remote sensing data. Model errors are comparable to those of other specialized deep or shallow water parameterizations. This work shows that it is possible to have a seamless parameterization from the deep ocean to the surf zone.
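A parameterization of this kind multiplies a breaking probability by a dissipation rate per breaking wave. A classic building block, shown here as an illustration rather than the authors' exact formulation, is the Battjes-Janssen (1978) bore model with a depth-limited maximum wave height:

```python
import math

def breaking_fraction(h_rms, h_max, tol=1e-12):
    """Fraction Qb of breaking waves from (1 - Qb)/(-ln Qb) = (Hrms/Hmax)^2,
    solved by fixed-point iteration of Qb = exp((Qb - 1)/b2)."""
    b2 = min((h_rms / h_max) ** 2, 1.0)
    qb = 0.5
    for _ in range(500):
        qb_next = math.exp((qb - 1.0) / b2)
        if abs(qb_next - qb) < tol:
            return qb_next
        qb = qb_next
    return qb

def bj_dissipation(h_rms, depth, f_mean, gamma=0.73, alpha=1.0,
                   rho=1025.0, g=9.81):
    """Mean dissipation per unit area [W/m^2] with a depth-limited
    maximum height H_max = gamma * depth (gamma, alpha are tuning constants)."""
    h_max = gamma * depth
    qb = breaking_fraction(h_rms, h_max)
    return alpha / 4.0 * rho * g * f_mean * qb * h_max ** 2

# Same wave field, shoaling from 10 m to 2 m depth:
for depth in (10.0, 5.0, 2.0):
    print(f"depth={depth:4.1f} m  D={bj_dissipation(1.0, depth, 0.1):.1f} W/m^2")
```

As the water shoals, the depth-limited height shrinks, the breaking fraction Qb rises, and the dissipation grows, which is the qualitative deep-to-surf-zone transition the unified parameterization is built to capture.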

  16. A new albedo parameterization for use in climate models over the Antarctic ice sheet

    NARCIS (Netherlands)

    Kuipers Munneke, P.|info:eu-repo/dai/nl/304831891; van den Broeke, M.R.|info:eu-repo/dai/nl/073765643; Lenaerts, J.T.M.|info:eu-repo/dai/nl/314850163; Flanner, M.G.; Gardner, A.S.; van de Berg, W.J.|info:eu-repo/dai/nl/304831611

    2011-01-01

    A parameterization for broadband snow surface albedo, based on snow grain size evolution, cloud optical thickness, and solar zenith angle, is implemented into a regional climate model for Antarctica and validated against field observations of albedo for the period 1995–2004. Over the Antarctic

  17. Regional modelling of tracer transport by tropical convection – Part 1: Sensitivity to convection parameterization

    Directory of Open Access Journals (Sweden)

    J. Arteta

    2009-09-01

    Full Text Available The general objective of this series of papers is to evaluate long-duration limited-area simulations with idealised tracers as a tool to assess tracer transport in chemistry-transport models (CTMs). In this first paper, we analyse the results of six simulations using different convection closures and parameterizations. The simulations use the Grell and Dévényi (2002) mass-flux framework for the convection parameterization with different closures (Grell = GR, Arakawa-Schubert = AS, Kain-Fritsch = KF, low omega = LO, moisture convergence = MC) and an ensemble parameterization (EN) based on the other five closures. The simulations are run for one month during the SCOUT-O3 field campaign led from Darwin (Australia). They have a 60 km horizontal resolution and a fine vertical resolution in the upper troposphere/lower stratosphere. Meteorological results are compared with satellite products, radiosoundings and SCOUT-O3 aircraft campaign data. They show that the model is generally in good agreement with the measurements, with less variability in the model. Except for the precipitation field, the differences between the six simulations are small on average with respect to the differences with the meteorological observations. The comparison with TRMM rain rates shows that the six parameterizations or closures have similar behaviour concerning convection triggering times and locations. However, the six simulations provide two different behaviours for rainfall values, with the EN, AS and KF parameterizations (Group 1) modelling better rain fields than LO, MC and GR (Group 2). The vertical distribution of tropospheric tracers is very different for the two groups, showing significantly more transport into the TTL for Group 1, related to the larger average values of the upward velocities. Nevertheless, the low values of the Group 1 fluxes at and above the cold-point level indicate that the model does not simulate significant overshooting. For stratospheric tracers

  18. Minimum cost connection networks

    DEFF Research Database (Denmark)

    Hougaard, Jens Leth; Tvede, Mich

    2015-01-01

    In the present paper we consider the allocation of costs in connection networks. Agents have connection demands in form of pairs of locations they want to have connected. Connections between locations are costly to build. The problem is to allocate costs of networks satisfying all connection...... demands. We use a few axioms to characterize allocation rules that truthfully implement cost minimizing networks satisfying all connection demands in a game where: (1) a central planner announces an allocation rule and a cost estimation rule; (2) every agent reports her own connection demand as well...... as all connection costs; (3) the central planner selects a cost minimizing network satisfying reported connection demands based on the estimated costs; and, (4) the planner allocates the true costs of the selected network. It turns out that an allocation rule satisfies the axioms if and only if relative...

  19. Positive approach: Implications for the relation between number theory and geometry, including connection to Santilli mathematics, from Fibonacci reconstitution of natural numbers and of prime numbers

    Energy Technology Data Exchange (ETDEWEB)

    Johansen, Stein E., E-mail: stein.johansen@svt.ntnu.no [Institute for Basic Research, Division of Physics, Palm Harbor, Florida, USA and Norwegian University of Science and Technology, Department of Social Anthropology, Trondheim (Norway)

    2014-12-10

    The paper recapitulates some key elements in previously published results concerning exact and complete reconstitution of the field of natural numbers, both as ordinal and as cardinal numbers, from systematic unfoldment of the Fibonacci algorithm. By this natural numbers emerge as Fibonacci 'atoms' and 'molecules' consistent with the notion of Zeckendorf sums. Here, the sub-set of prime numbers appears not as the primary numbers, but as an epistructure from a deeper Fibonacci constitution, and is thus targeted from a 'positive approach'. In the Fibonacci reconstitution of number theory natural numbers show a double geometrical aspect: partly as extension in space and partly as position in a successive structuring of space. More specifically, the natural numbers are shown to be distributed by a concise 5:3 code structured from the Fibonacci algorithm via Pascal's triangle. The paper discusses possible implications for the more general relation between number theory and geometry, as well as more specifically in relation to hadronic mathematics, initiated by R.M. Santilli, and also briefly to some other recent science linking number theory more directly to geometry and natural systems.

  20. Positive approach: Implications for the relation between number theory and geometry, including connection to Santilli mathematics, from Fibonacci reconstitution of natural numbers and of prime numbers

    International Nuclear Information System (INIS)

    Johansen, Stein E.

    2014-01-01

    The paper recapitulates some key elements in previously published results concerning exact and complete reconstitution of the field of natural numbers, both as ordinal and as cardinal numbers, from systematic unfoldment of the Fibonacci algorithm. By this natural numbers emerge as Fibonacci 'atoms' and 'molecules' consistent with the notion of Zeckendorf sums. Here, the sub-set of prime numbers appears not as the primary numbers, but as an epistructure from a deeper Fibonacci constitution, and is thus targeted from a 'positive approach'. In the Fibonacci reconstitution of number theory natural numbers show a double geometrical aspect: partly as extension in space and partly as position in a successive structuring of space. More specifically, the natural numbers are shown to be distributed by a concise 5:3 code structured from the Fibonacci algorithm via Pascal's triangle. The paper discusses possible implications for the more general relation between number theory and geometry, as well as more specifically in relation to hadronic mathematics, initiated by R.M. Santilli, and also briefly to some other recent science linking number theory more directly to geometry and natural systems

  1. Parameterization of a complex landscape for a sediment routing model of the Le Sueur River, southern Minnesota

    Science.gov (United States)

    Belmont, P.; Viparelli, E.; Parker, G.; Lauer, W.; Jennings, C.; Gran, K.; Wilcock, P.; Melesse, A.

    2008-12-01

    Modeling sediment fluxes and pathways in complex landscapes is limited by our inability to accurately measure and integrate heterogeneous, spatially distributed sources into a single coherent, predictive geomorphic transport law. In this study, we partition the complex landscape of the Le Sueur River watershed into five distributed primary source types, bluffs (including strath terrace caps), ravines, streambanks, tributaries, and flat, agriculture-dominated uplands. The sediment contribution of each source is quantified independently and parameterized for use in a sand and mud routing model. Rigorous modeling of the evolution of this landscape and sediment flux from each source type requires consideration of substrate characteristics, heterogeneity, and spatial connectivity. The subsurface architecture of the Le Sueur drainage basin is defined by a layer cake sequence of fine-grained tills, interbedded with fluvioglacial sands. Nearly instantaneous baselevel fall of 65 m occurred at 11.5 ka, as a result of the catastrophic draining of glacial Lake Agassiz through the Minnesota River, to which the Le Sueur is a tributary. The major knickpoint that was generated from that event has propagated 40 km into the Le Sueur network, initiating an incised river valley with tall, retreating bluffs and actively incising ravines. Loading estimates constrained by river gaging records that bound the knick zone indicate that bluffs connected to the river are retreating at an average rate of less than 2 cm per year and ravines are incising at an average rate of less than 0.8 mm per year, consistent with the Holocene average incision rate on the main stem of the river of less than 0.6 mm per year. Ongoing work with cosmogenic nuclide sediment tracers, ground-based LiDAR, historic aerial photos, and field mapping will be combined to represent the diversity of erosional environments and processes in a single coherent routing model.

  2. Ozonolysis of α-pinene: parameterization of secondary organic aerosol mass fraction

    Directory of Open Access Journals (Sweden)

    R. K. Pathak

    2007-07-01

    Full Text Available Existing parameterizations tend to underpredict the α-pinene aerosol mass fraction (AMF) or yield by a factor of 2–5 at low organic aerosol concentrations (<5 µg m−3). A wide range of smog chamber results obtained at various conditions (low/high NOx, presence/absence of UV radiation, dry/humid conditions, and temperatures ranging from 15–40°C), collected by various research teams during the last decade, are used to derive new parameterizations of the SOA formation from α-pinene ozonolysis. Parameterizations are developed by fitting experimental data to a basis set of saturation concentrations (from 10^−2 to 10^4 µg m−3) using an absorptive equilibrium partitioning model. Separate parameterizations for α-pinene SOA mass fractions are developed for: (1) low NOx, dark, and dry conditions; (2) low NOx, UV, and dry conditions; (3) low NOx, dark, and high RH conditions; (4) high NOx, dark, and dry conditions; (5) high NOx, UV, and dry conditions. According to the proposed parameterizations, the α-pinene SOA mass fractions in an atmosphere with 5 µg m−3 of organic aerosol range from 0.032 to 0.1 for reacted α-pinene concentrations in the 1 ppt to 5 ppb range.
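The absorptive equilibrium partitioning model behind such fits expresses the aerosol mass fraction as a sum over basis-set products; a minimal sketch (the alpha values below are hypothetical placeholders, not the fitted coefficients of this study):

```python
import numpy as np

def soa_mass_fraction(c_oa, alphas, c_stars):
    """Aerosol mass fraction (yield) from absorptive equilibrium partitioning:
    Y = sum_i alpha_i / (1 + C*_i / C_OA), where C_OA is the absorbing
    organic aerosol mass [ug/m3] and C*_i the saturation concentrations."""
    alphas, c_stars = np.asarray(alphas), np.asarray(c_stars)
    return float(np.sum(alphas / (1.0 + c_stars / c_oa)))

# Hypothetical mass yields on a 10^-2..10^4 ug/m3 saturation-concentration basis set
c_stars = np.logspace(-2, 4, 7)
alphas = np.array([0.001, 0.005, 0.02, 0.05, 0.10, 0.20, 0.40])

for c_oa in (1.0, 5.0, 50.0):
    print(f"C_OA={c_oa:5.1f} ug/m3  AMF={soa_mass_fraction(c_oa, alphas, c_stars):.3f}")
```

The yield increases with the absorbing organic mass because more of the semivolatile products partition into the particle phase, which is why the underprediction at low C_OA noted above matters.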

  3. Parameterization of Rocket Dust Storms on Mars in the LMD Martian GCM: Modeling Details and Validation

    Science.gov (United States)

    Wang, Chao; Forget, François; Bertrand, Tanguy; Spiga, Aymeric; Millour, Ehouarn; Navarro, Thomas

    2018-04-01

    The origin of the detached dust layers observed by the Mars Climate Sounder aboard the Mars Reconnaissance Orbiter is still debated. Spiga et al. (2013, https://doi.org/10.1002/jgre.20046) revealed that deep mesoscale convective "rocket dust storms" are likely to play an important role in forming these dust layers. To investigate how the detached dust layers are generated by this mesoscale phenomenon and subsequently evolve at larger scales, a parameterization of rocket dust storms to represent the mesoscale dust convection is designed and included into the Laboratoire de Météorologie Dynamique (LMD) Martian Global Climate Model (GCM). The new parameterization allows dust particles in the GCM to be transported to higher altitudes than in traditional GCMs. Combined with the horizontal transport by large-scale winds, the dust particles spread out and form detached dust layers. During the Martian dusty seasons, the LMD GCM with the new parameterization is able to form detached dust layers. The formation, evolution, and decay of the simulated dust layers are largely in agreement with the Mars Climate Sounder observations. This suggests that mesoscale rocket dust storms are among the key factors to explain the observed detached dust layers on Mars. However, the detached dust layers remain absent in the GCM during the clear seasons, even with the new parameterization. This implies that other relevant atmospheric processes, operating when no dust storms are occurring, are needed to explain the Martian detached dust layers. More observations of local dust storms could improve the ad hoc aspects of this parameterization, such as the trigger and timing of dust injection.

  4. Does age matter? Controls on the spatial organization of age and life expectancy in hillslopes, and implications for transport parameterization using rSAS

    Science.gov (United States)

    Kim, M.; Harman, C. J.; Troch, P. A. A.

    2017-12-01

    Hillslopes have been extensively explored as a natural fundamental unit for spatially-integrated hydrologic models. Much of this attention has focused on their use in predicting the quantity of discharge, but hillslope-based models can potentially be used to predict the composition of discharge (in terms of age and chemistry) if they can be parameterized in terms of measurable physical properties. Here we present advances in the use of rank StorAge Selection (rSAS) functions to parameterize transport through hillslopes. These functions provide a mapping between the distribution of water ages in storage and in outfluxes in terms of a probability distribution over storage. It has previously been shown that rSAS functions are related to the relative partitioning and arrangement of flow pathways (and variabilities in that arrangement), while separating out the effect of changes in the overall rate of fluxes in and out. This suggests that rSAS functions should have a connection to the internal organization of flow paths in a hillslope. Using a combination of numerical modeling and theoretical analysis we examined: first, the controls of physical properties on the internal spatial organization of age (time since entry), life expectancy (time to exit), and the emergent transit time distribution and rSAS functions; second, the possible parameterization of the rSAS function using the physical properties. The numerical modeling results showed the clear dependence of the rSAS function forms on the physical properties and relations between the internal organization and the rSAS functions. For different rates of the exponential decline of saturated hydraulic conductivity with depth, the spatial organization of life expectancy varied dramatically and determined the rSAS function forms, while the organization of age showed fewer qualitative differences. Analytical solutions predicting this spatial organization and the resulting rSAS function were derived for simplified systems. These ...
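The rank-selection idea can be illustrated with a toy parcel model in which the rSAS function is a power law over age-ranked storage (a generic sketch, not the hillslope model used in the study):

```python
import random

def rsas_step(ages, n_in, n_out, k, rng):
    """One timestep of a parcel-based rank-SAS model: every parcel ages by one,
    inflow enters at age zero, and each outflow draw picks rank u**k over
    age-ranked storage (k=1: uniform rSAS; k>1: preference for young water)."""
    ages = sorted(a + 1 for a in ages)          # ascending age = ascending rank
    ages = [0] * n_in + ages
    out = []
    for _ in range(n_out):
        u = rng.random() ** k
        out.append(ages.pop(min(int(u * len(ages)), len(ages) - 1)))
    return ages, out

def mean_outflow_age(k, steps=3000, seed=1):
    """Mean age of outflow at steady state (storage 200 parcels, flux 5/step)."""
    rng = random.Random(seed)
    ages, sampled = [0] * 200, []
    for t in range(steps):
        ages, out = rsas_step(ages, 5, 5, k, rng)
        if t > steps // 2:                      # discard spin-up
            sampled.extend(out)
    return sum(sampled) / len(sampled)

print("uniform rSAS :", round(mean_outflow_age(1.0), 1))
print("young-biased :", round(mean_outflow_age(3.0), 1))
```

With k = 1 the outflow samples the ranked storage uniformly; k > 1 preferentially removes young water, shortening transit times even though the inflow and outflow rates are identical, which is exactly the flux-versus-selection separation the rSAS framework formalizes.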

  5. Splitting turbulence algorithm for mixing parameterization embedded in the ocean climate model. Examples of data assimilation and Prandtl number variations.

    Science.gov (United States)

    Moshonkin, Sergey; Gusev, Anatoly; Zalesny, Vladimir; Diansky, Nikolay

    2017-04-01

    The Prandtl number offers the possibility of an essential improvement of the TKE attenuation with depth and of more realistic water entrainment from the pycnocline into the mixed layer. The stable solution of the eddy-permitting circulation model is revealed to be highly sensitive to changes in the mixing parameterizations used above. This sensitivity is connected with significant changes of the density fields in the upper baroclinic ocean layer over the whole area considered. For instance, assimilation of the annual-mean climatic buoyancy frequency in the equations for TKE and TDF leads to more realistic circulation in the North Atlantic. Variations of the Prandtl number made it possible to simulate intense circulation in the Beaufort Gyre owing to the steric effect during the whole period under consideration. The research was supported by the Russian Foundation for Basic Research (grants №16-05-00534 and 15-05-00557).

  6. The connection-set algebra--a novel formalism for the representation of connectivity structure in neuronal network models.

    Science.gov (United States)

    Djurfeldt, Mikael

    2012-07-01

    The connection-set algebra (CSA) is a novel and general formalism for the description of connectivity in neuronal network models, from small-scale to large-scale structure. The algebra provides operators to form more complex sets of connections from simpler ones and also provides parameterization of such sets. CSA is expressive enough to describe a wide range of connection patterns, including multiple types of random and/or geometrically dependent connectivity, and can serve as a concise notation for network structure in scientific writing. CSA implementations allow for scalable and efficient representation of connectivity in parallel neuronal network simulators and could even allow for avoiding explicit representation of connections in computer memory. The expressiveness of CSA makes prototyping of network structure easy. A C++ version of the algebra has been implemented and used in a large-scale neuronal network simulation (Djurfeldt et al., IBM J Res Dev 52(1/2):31-42, 2008b) and an implementation in Python has been publicly released.
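The core idea, composing complex connectivity from simple sets via operators and parameterized masks, can be sketched in a few lines (a toy re-implementation for illustration, not the API of the released C++ or Python libraries):

```python
import random

class CSet:
    """A connection set represented as a predicate over (source, target) pairs."""
    def __init__(self, pred):
        self.pred = pred
    def __add__(self, other):   # union of connection sets
        return CSet(lambda i, j: self.pred(i, j) or other.pred(i, j))
    def __mul__(self, other):   # intersection
        return CSet(lambda i, j: self.pred(i, j) and other.pred(i, j))
    def __sub__(self, other):   # difference
        return CSet(lambda i, j: self.pred(i, j) and not other.pred(i, j))
    def pairs(self, sources, targets):
        """Enumerate realized connections for finite index ranges."""
        return [(i, j) for i in sources for j in targets if self.pred(i, j)]

full = CSet(lambda i, j: True)
one_to_one = CSet(lambda i, j: i == j)

def random_mask(p, seed=0):
    """Bernoulli(p) connectivity, memoized so each pair is decided exactly once."""
    rng, cache = random.Random(seed), {}
    def pred(i, j):
        if (i, j) not in cache:
            cache[(i, j)] = rng.random() < p
        return cache[(i, j)]
    return CSet(pred)

# Random connectivity with self-connections excluded:
net = random_mask(0.3) - one_to_one
print(len(net.pairs(range(20), range(20))))
```

Because connections are described by predicates rather than materialized lists, large networks need not be stored explicitly, which mirrors the memory-saving property claimed for CSA implementations.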

  7. Parameterization models for pesticide exposure via crop consumption.

    Science.gov (United States)

    Fantke, Peter; Wieland, Peter; Juraske, Ronnie; Shaddick, Gavin; Itoiz, Eva Sevigné; Friedrich, Rainer; Jolliet, Olivier

    2012-12-04

    An approach for estimating human exposure to pesticides via consumption of six important food crops is presented that can be used to extend multimedia models applied in health risk and life cycle impact assessment. We first assessed the variation of model output (pesticide residues per kg applied) as a function of model input variables (substance, crop, and environmental properties) including their possible correlations using matrix algebra. We identified five key parameters responsible for between 80% and 93% of the variation in pesticide residues, namely time between substance application and crop harvest, degradation half-lives in crops and on crop surfaces, overall residence times in soil, and substance molecular weight. Partition coefficients also play an important role for fruit trees and tomato (Kow), potato (Koc), and lettuce (Kaw, Kow). Focusing on these parameters, we develop crop-specific models by parametrizing a complex fate and exposure assessment framework. The parametric models thereby reflect the framework's physical and chemical mechanisms and predict pesticide residues in harvest using linear combinations of crop, crop surface, and soil compartments. Parametric model results correspond well with results from the complex framework for 1540 substance-crop combinations, with total deviations between a factor of 4 (potato) and a factor of 66 (lettuce). Predicted residues also correspond well with experimental data previously used to evaluate the complex framework. Pesticide mass in harvest can finally be combined with reduction factors accounting for food processing to estimate human exposure from crop consumption. All parametric models can be easily implemented into existing assessment frameworks.

  8. Macromolecular refinement by model morphing using non-atomic parameterizations.

    Science.gov (United States)

    Cowtan, Kevin; Agirre, Jon

    2018-02-01

    Refinement is a critical step in the determination of a model which explains the crystallographic observations and thus best accounts for the missing phase components. The scattering density is usually described in terms of atomic parameters; however, in macromolecular crystallography the resolution of the data is generally insufficient to determine the values of these parameters for individual atoms. Stereochemical and geometric restraints are used to provide additional information, but produce interrelationships between parameters which slow convergence, resulting in longer refinement times. An alternative approach is proposed in which parameters are not attached to atoms, but to regions of the electron-density map. These parameters can move the density or change the local temperature factor to better explain the structure factors. Varying the size of the region which determines the parameters at a particular position in the map allows the method to be applied at different resolutions without the use of restraints. Potential applications include initial refinement of molecular-replacement models with domain motions, and potentially the use of electron density from other sources such as electron cryo-microscopy (cryo-EM) as the refinement model.

  9. Connected vehicle standards.

    Science.gov (United States)

    2016-01-01

    Connected vehicles have the potential to transform the way Americans travel by allowing cars, buses, trucks, trains, traffic signals, smart phones, and other devices to communicate through a safe, interoperable wireless network. A connected vehic...

  10. Constraining Unsaturated Hydraulic Parameters Using the Latin Hypercube Sampling Method and Coupled Hydrogeophysical Approach

    Science.gov (United States)

    Farzamian, Mohammad; Monteiro Santos, Fernando A.; Khalil, Mohamed A.

    2017-12-01

    The coupled hydrogeophysical approach has proved to be a valuable tool for improving the use of geoelectrical data for hydrological model parameterization. In the coupled approach, hydrological parameters are directly inferred from geoelectrical measurements in a forward manner to eliminate the uncertainty connected to the independent inversion of electrical resistivity data. Several numerical studies have been conducted to demonstrate the advantages of a coupled approach; however, only a few attempts have been made to apply the coupled approach to actual field data. In this study, we developed a 1D coupled hydrogeophysical code to estimate the van Genuchten-Mualem model parameters, Ks, n, θr and α, from time-lapse vertical electrical sounding data collected during a constant inflow infiltration experiment. van Genuchten-Mualem parameters were sampled using the Latin hypercube sampling method to provide full coverage of the range of each parameter from their distributions. By applying the coupled approach, vertical electrical sounding data were coupled to hydrological models inferred from van Genuchten-Mualem parameter samples to investigate the feasibility of constraining the hydrological model. The key approaches taken in the study are to (1) integrate electrical resistivity and hydrological data while avoiding data inversion, (2) estimate the total water mass recovery of electrical resistivity data and consider it in the evaluation of the van Genuchten-Mualem parameters and (3) correct the influence of subsurface temperature fluctuations during the infiltration experiment on the electrical resistivity data. The results of the study revealed that the coupled hydrogeophysical approach can improve the value of geophysical measurements in hydrological model parameterization. However, the approach cannot overcome the technical limitations of the geoelectrical method associated with resolution and water mass recovery.
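The two ingredients named here, the van Genuchten retention model and Latin hypercube sampling of its parameters, can be sketched as follows (the parameter ranges are hypothetical, not those used in the study):

```python
import random

def van_genuchten_theta(h, theta_r, theta_s, alpha, n):
    """van Genuchten retention curve theta(h); h = suction head [m],
    alpha [1/m], n > 1, with the Mualem constraint m = 1 - 1/n."""
    m = 1.0 - 1.0 / n
    return theta_r + (theta_s - theta_r) / (1.0 + (alpha * abs(h)) ** n) ** m

def latin_hypercube(bounds, n_samples, seed=0):
    """One value per equal-width stratum per parameter, strata shuffled
    independently so the samples jointly cover each parameter's full range."""
    rng = random.Random(seed)
    columns = []
    for lo, hi in bounds:
        col = [lo + (hi - lo) * (k + rng.random()) / n_samples
               for k in range(n_samples)]
        rng.shuffle(col)
        columns.append(col)
    return list(zip(*columns))   # each row is one parameter set

# Hypothetical prior ranges for (theta_r, theta_s, alpha [1/m], n)
bounds = [(0.02, 0.10), (0.35, 0.50), (0.5, 5.0), (1.2, 3.0)]
for theta_r, theta_s, alpha, n in latin_hypercube(bounds, 5):
    print(round(van_genuchten_theta(1.0, theta_r, theta_s, alpha, n), 3))
```

Each sampled parameter set would then drive a forward hydrological simulation whose predicted resistivity is compared against the vertical electrical sounding data, avoiding an independent resistivity inversion.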

  11. Emotional Connections in Higher Education Marketing

    Science.gov (United States)

    Durkin, Mark; McKenna, Seamas; Cummins, Darryl

    2012-01-01

    Purpose: Through examination of a case study this paper aims to describe a brand re-positioning exercise and explore how an emotionally driven approach to branding can help create meaningful connections with potential undergraduate students and can positively influence choice. Design/methodology/approach: The paper's approach is a case study…

  12. Connecting to Everyday Practices

    DEFF Research Database (Denmark)

    Iversen, Ole Sejer; Smith, Rachel Charlotte

    2012-01-01

    construction and reproduction of cultural heritage creating novel connections between self and others and between past, present and future. We present experiences from a current research project, the Digital Natives exhibition, in which social media was designed as an integral part of the exhibition to connect...... focusing on the connections between audiences practices and the museum exhibition....

  13. The Listening Train: A Collaborative, Connective Aesthetics ...

    African Journals Online (AJOL)

    The Listening Train: A Collaborative, Connective Aesthetics Approach to Transgressive Social Learning. ... Southern African Journal of Environmental Education.

  14. Advances in understanding, models and parameterizations of biosphere-atmosphere ammonia exchange

    Science.gov (United States)

    Flechard, C. R.; Massad, R.-S.; Loubet, B.; Personne, E.; Simpson, D.; Bash, J. O.; Cooter, E. J.; Nemitz, E.; Sutton, M. A.

    2013-07-01

    . Their level of complexity depends on their purpose, the spatial scale at which they are applied, the current level of parameterization, and the availability of the input data they require. State-of-the-art solutions for determining the emission/sink Γ potentials through the soil/canopy system include coupled, interactive chemical transport models (CTM) and soil/ecosystem modelling at the regional scale. However, it remains a matter for debate to what extent realistic options for future regional and global models should be based on process-based mechanistic versus empirical and regression-type models. Further discussion is needed on the extent and timescale by which new approaches can be used, such as integration with ecosystem models and satellite observations.

  15. Parameterization experiments performed via synthetic mass movements prototypes generated by 3D slope stability simulator

    Science.gov (United States)

    Colangelo, Antonio C.

    2010-05-01

    The central purpose of this work is to perform a reverse procedure with respect to the conventional mass-movement parameterization approach. The idea is to generate a number of synthetic mass movements by means of the "slope stability simulator" (Colangelo, 2007) and to compare their morphological and physical properties with the "real" conditions of effective mass movements. This device is an integrated part of the "relief unity emulator" (rue), which permits generating synthetic mass movements in a synthetic slope environment. The "rue" was built upon fundamental geomorphological concepts. These devices operate with an integrated set of mechanical, geomorphic and hydrological models. The "slope stability simulator" (sss) device permits performing a detailed slope stability analysis in a theoretical three-dimensional space, by evaluating the spatial behavior of critical depths, gradients and saturation levels on the "potential rupture surfaces" inferred along a set of slope profiles that compose a synthetic slope unity, a meta-stable 4-dimensional object generated by means of "rue" that represents an evolution sequence of a generator profile. The infinite slope model was adapted here for the stability analysis. The slope profiles were sliced by means of a finite element solution, as in the Bishop method. For the synthetic slope systems generated, we assume that the potential rupture surface occurs at the soil-regolith or soil-rock boundary in the slope material. Sixteen variables were included in the "rue-sss" device, which operates in an integrated manner. For each cell, the factor of safety was calculated considering the value of the shear strength (cohesion and friction) of the material, the soil-regolith boundary depth, the soil moisture level content, the potential rupture surface gradient, the slope surface gradient, the top of the subsurface flow gradient, the apparent soil bulk density and the vegetation surcharge. The slope soil was considered as cohesive material. The 16 variables incorporated in the models were analyzed for ...
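The per-cell factor-of-safety calculation described above is conventionally written in the infinite-slope limit-equilibrium form; a minimal sketch with illustrative parameter values (not those of the rue-sss device):

```python
import math

def infinite_slope_fs(c, phi_deg, gamma, z, beta_deg, m, gamma_w=9.81):
    """Infinite-slope factor of safety for a cohesive soil over a planar
    rupture surface at depth z [m]; m is the saturated fraction (0..1).
    c [kPa], phi [deg], unit weights gamma, gamma_w [kN/m^3], beta [deg]."""
    beta, phi = math.radians(beta_deg), math.radians(phi_deg)
    resisting = c + (gamma - m * gamma_w) * z * math.cos(beta) ** 2 * math.tan(phi)
    driving = gamma * z * math.sin(beta) * math.cos(beta)
    return resisting / driving

# Stability of a 2 m soil column on a 35-degree slope as it wets up:
for m in (0.0, 0.5, 1.0):
    fs = infinite_slope_fs(5.0, 30.0, 18.0, 2.0, 35.0, m)
    print(f"saturation m={m:.1f}  FS={fs:.2f}")
```

Raising the saturation level lowers the effective normal stress, so the factor of safety falls below unity as the column wets up, which is the kind of threshold behavior the simulator explores across its 16 variables.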

  16. Advances in understanding, models and parameterizations of biosphere-atmosphere ammonia exchange

    Directory of Open Access Journals (Sweden)

    C. R. Flechard

    2013-07-01

    -chemical species schemes. Their level of complexity depends on their purpose, the spatial scale at which they are applied, the current level of parameterization, and the availability of the input data they require. State-of-the-art solutions for determining the emission/sink Γ potentials through the soil/canopy system include coupled, interactive chemical transport models (CTMs) and soil/ecosystem modelling at the regional scale. However, it remains a matter for debate to what extent realistic options for future regional and global models should be based on process-based mechanistic versus empirical and regression-type models. Further discussion is needed on the extent and timescale over which new approaches can be used, such as integration with ecosystem models and satellite observations.

  17. Electrochemical-mechanical coupled modeling and parameterization of swelling and ionic transport in lithium-ion batteries

    Science.gov (United States)

    Sauerteig, Daniel; Hanselmann, Nina; Arzberger, Arno; Reinshagen, Holger; Ivanov, Svetlozar; Bund, Andreas

    2018-02-01

    The intercalation and aging induced volume changes of lithium-ion battery electrodes lead to significant mechanical pressure or volume changes on cell and module level. As the correlation between the electrochemical and mechanical performance of lithium-ion batteries at nano and macro scale requires a comprehensive and multidisciplinary approach, physical modeling accounting for chemical and mechanical phenomena during operation is very useful for battery design. Since the introduced fully-coupled physical model requires proper parameterization, this work also focuses on identifying an appropriate mathematical representation of compressibility as well as of the ionic transport in the porous electrodes and the separator. The ionic transport is characterized by electrochemical impedance spectroscopy (EIS) using symmetric pouch cells comprising LiNi1/3Mn1/3Co1/3O2 (NMC) cathode, graphite anode and polyethylene separator. The EIS measurements are carried out at various mechanical loads. The observed decrease of the ionic conductivity reveals a significant transport limitation at high pressures. The experimentally obtained data are applied as input to the electrochemical-mechanical model of a prismatic 10 Ah cell. Our computational approach accounts for intercalation-induced electrode expansion, stress generation caused by mechanical boundaries, compression of the electrodes and the separator, outer expansion of the cell and, finally, the influence of the ionic transport within the electrolyte.

  18. Development of a cloud microphysical model and parameterizations to describe the effect of CCN on warm cloud

    Directory of Open Access Journals (Sweden)

    N. Kuba

    2006-01-01

    First, a hybrid cloud microphysical model was developed that incorporates both Lagrangian and Eulerian frameworks to study quantitatively the effect of cloud condensation nuclei (CCN) on the precipitation of warm clouds. A parcel model and a grid model comprise the cloud model. The condensation growth of CCN in each parcel is estimated in a Lagrangian framework. Changes in cloud droplet size distribution arising from condensation and coalescence are calculated on grid points using a two-moment bin method in a semi-Lagrangian framework. Sedimentation and advection are estimated in the Eulerian framework between grid points. Results from the cloud model show that an increase in the number of CCN affects both the amount and the area of precipitation. Additionally, results from the hybrid microphysical model and Kessler's parameterization were compared. Second, new parameterizations were developed that estimate the number and size distribution of cloud droplets given the updraft velocity and the number of CCN. The parameterizations were derived from the results of numerous numerical experiments that used the cloud microphysical parcel model. The only CCN input these parameterizations require is a few values of the CCN spectrum (as given by a CCN counter, for example). This is more convenient than conventional parameterizations, which need quantities characterizing the full CCN spectrum, such as C and k in the equation N = CS^k, or its breadth, total number and median radius. The new parameterizations' predictions of the initial cloud droplet size distribution for the bin method were verified using the aforesaid hybrid microphysical model. The newly developed parameterizations save computing time, and can effectively approximate components of cloud microphysics in a non-hydrostatic cloud model.
The parameterizations are useful not only in the bin method in the regional cloud-resolving model but also both for a two-moment bulk microphysical model and
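The conventional CCN spectrum mentioned above, N = CS^k (Twomey), can be evaluated directly; a tiny sketch with illustrative maritime-aerosol values (C and k below are assumptions for demonstration, not values from the study):

```python
def activated_ccn(C, k, s_pct):
    """Number of CCN activated at supersaturation s_pct (in percent),
    using the Twomey power-law spectrum N = C * s^k."""
    return C * s_pct ** k

# Illustrative maritime values: C = 100 cm^-3, k = 0.7, at 0.5% supersaturation
n = activated_ccn(100.0, 0.7, 0.5)
```

The parameterization described in the abstract avoids exactly this requirement of knowing C and k in advance, needing only a few measured points of the CCN spectrum instead.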

  19. Network connectivity value.

    Science.gov (United States)

    Dragicevic, Arnaud; Boulanger, Vincent; Bruciamacchie, Max; Chauchard, Sandrine; Dupouey, Jean-Luc; Stenger, Anne

    2017-04-21

    In order to unveil the value of network connectivity, we formalize the construction of ecological networks in forest environments as an optimal control dynamic graph-theoretic problem. The network is based on a set of bioreserves and patches linked by ecological corridors. The node dynamics, built upon the consensus protocol, form a time evolutive Mahalanobis distance weighted by the opportunity costs of timber production. We consider a case of complete graph, where the ecological network is fully connected, and a case of incomplete graph, where the ecological network is partially connected. The results show that the network equilibrium depends on the size of the reception zone, while the network connectivity depends on the environmental compatibility between the ecological areas. Through shadow prices, we find that securing connectivity in partially connected networks is more expensive than in fully connected networks, but should be undertaken when the opportunity costs are significant.
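The consensus protocol underlying the node dynamics can be illustrated with its standard linear form dx/dt = -Lx, where L is the graph Laplacian; under this dynamic, a fully connected (complete) network reaches agreement faster than a partially connected one. The small graphs, state vector and step size below are illustrative only, not the paper's forest network:

```python
import numpy as np

def laplacian(adj):
    """Graph Laplacian L = D - A from an adjacency matrix."""
    return np.diag(adj.sum(axis=1)) - adj

def consensus_step(x, L, dt=0.01):
    """One forward-Euler step of the consensus dynamics x' = -L x."""
    return x - dt * (L @ x)

complete = np.ones((4, 4)) - np.eye(4)   # fully connected network
path = np.zeros((4, 4))                  # partially connected (path) network
for i in range(3):
    path[i, i + 1] = path[i + 1, i] = 1

x0 = np.array([1.0, 0.0, 0.0, -1.0])     # initial node states
x_c, x_p = x0.copy(), x0.copy()
for _ in range(200):
    x_c = consensus_step(x_c, laplacian(complete))
    x_p = consensus_step(x_p, laplacian(path))

# Remaining disagreement after the same number of steps
spread_c = x_c.max() - x_c.min()
spread_p = x_p.max() - x_p.min()
```

The faster convergence of the complete graph mirrors the paper's finding that securing connectivity is cheaper in fully connected networks.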

  20. Statistical analysis of the equilibrium configurations of the W7-X stellarator using function parameterization

    International Nuclear Information System (INIS)

    Mc Carthy, P.J.; Sengupta, A.; Geiger, J.; Werner, A.

    2005-01-01

    The W7-X stellarator, under construction at IPP-Greifswald, is being designed to demonstrate the steady state capability of fusion devices. Due to the pulse length involved, real time monitoring and control of the discharges is a crucial issue in steady state operations. For W7-X, we have planned a sequence of in-depth analyses of the magnetic configurations which, ultimately, will lead to a proper understanding of plasma equilibrium, stability and transport. It should also provide insight into the parameterization of the various plasma-related quantities, which is important from the point of view of real time study. The first step in our sequence of analyses involved a study of the vacuum configuration of W7-X, including the detectable magnetic islands. We now proceed to the scenario at finite beta, considering full magnetohydrodynamic (MHD) equilibria based on vmec2000 calculations. A database of order 10000 equilibria was calculated on the same parameter space for the coil current ratios. The parameters which were varied randomly and independently consist of the external coil current ratios (6), the parameters of the profiles (as functions of normalised toroidal flux) of plasma pressure and toroidal current (4+4) and the plasma size (a_eff), which is required to vary the plasma volume. A statistical analysis, using Function Parametrization (FP), was performed on a sample of well-converged equilibria. The plasma parameters were varied to allow a good FP for the expected values in W7-X, i.e. volume-averaged beta of up to 5% and toroidal net-current of up to ±50 kA for a mean field strength of about 2 T throughout the database. The profiles were chosen as a sequence of polynomials with the property that the addition of a higher order polynomial would not change the lower order volume-averaged moments of the resulting profile. The aim of this was to try to avoid cross correlations in the independent input parameters for the database generation. However, some restrictions

  1. Evaluation and Improvement of Cloud and Convective Parameterizations from Analyses of ARM Observations and Models

    Energy Technology Data Exchange (ETDEWEB)

    Del Genio, Anthony D. [NASA Goddard Inst. for Space Studies (GISS), New York, NY (United States)

    2016-03-11

    Over this period the PI and his team performed a broad range of data analysis, model evaluation, and model improvement studies using ARM data. These included cloud regimes in the TWP and their evolution over the MJO; M-PACE IOP SCM-CRM intercomparisons; simulations of convective updraft strength and depth during TWP-ICE; evaluation of convective entrainment parameterizations using TWP-ICE simulations; evaluation of GISS GCM cloud behavior vs. long-term SGP cloud statistics; classification of aerosol semi-direct effects on cloud cover; depolarization lidar constraints on cloud phase; preferred states of the winter Arctic atmosphere, surface, and sub-surface; sensitivity of convection to tropospheric humidity; constraints on the parameterization of mesoscale organization from TWP-ICE WRF simulations; updraft and downdraft properties in TWP-ICE simulated convection; and insights from long-term ARM records at Manus and Nauru.

  2. Impact of climate seasonality on catchment yield: A parameterization for commonly-used water balance formulas

    Science.gov (United States)

    de Lavenne, Alban; Andréassian, Vazken

    2018-03-01

    This paper examines the hydrological impact of the seasonality of precipitation and maximum evaporation: seasonality is, after aridity, a second-order determinant of catchment water yield. Based on a data set of 171 French catchments (where aridity ranged between 0.2 and 1.2), we present a parameterization of three commonly-used water balance formulas (namely, Turc-Mezentsev, Tixeront-Fu and Oldekop formulas) to account for seasonality effects. We quantify the improvement of seasonality-based parameterization in terms of the reconstitution of both catchment streamflow and water yield. The significant improvement obtained (reduction of RMSE between 9 and 14% depending on the formula) demonstrates the importance of climate seasonality in the determination of long-term catchment water balance.
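The Turc-Mezentsev formula named above gives long-term actual evaporation E from precipitation P and maximum (potential) evaporation E0, with catchment yield Q = P - E. A sketch of the base, non-seasonal form with illustrative inputs; the paper's seasonality-based parameterization modifies this formula and is not reproduced here:

```python
def turc_mezentsev_yield(P, E0, n=2.0):
    """Long-term catchment yield Q = P - E from the Turc-Mezentsev formula
    E = P / (1 + (P/E0)^n)^(1/n).  P and E0 in mm/yr; n is the shape
    parameter (n = 2 gives the classical Turc-Pike form)."""
    E = P / (1.0 + (P / E0) ** n) ** (1.0 / n)
    return P - E

# Humid catchment: P = 1000 mm/yr, E0 = 700 mm/yr (aridity E0/P = 0.7)
Q = turc_mezentsev_yield(1000.0, 700.0)
```

Note the two limits built into the formula: for P much smaller than E0 nearly all precipitation evaporates (water-limited), while for P much larger than E0 evaporation saturates at E0 (energy-limited).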

  3. Mesoscale model parameterizations for radiation and turbulent fluxes at the lower boundary

    International Nuclear Information System (INIS)

    Somieski, F.

    1988-11-01

    A radiation parameterization scheme for use in mesoscale models with orography and clouds has been developed. Broadband parameterizations are presented for the solar and the terrestrial spectral ranges. They account for clear, turbid or cloudy atmospheres. The scheme is one-dimensional in the atmosphere, but the effects of mountains (inclination, shading, elevated horizon) are taken into account at the surface. In the terrestrial band, grey and black clouds are considered. Furthermore, the calculation of turbulent fluxes of sensible and latent heat and momentum at an inclined lower model boundary is described. Surface-layer similarity and the surface energy budget are used to evaluate the ground surface temperature. The total scheme is part of the mesoscale model MESOSCOP. (orig.) With 3 figs., 25 refs

  4. Application of a planetary wave breaking parameterization to stratospheric circulation statistics

    Science.gov (United States)

    Randel, William J.; Garcia, Rolando R.

    1994-01-01

    The planetary wave parameterization scheme developed recently by Garcia is applied to stratospheric circulation statistics derived from 12 years of National Meteorological Center operational stratospheric analyses. From the data a planetary wave breaking criterion (based on the ratio of the eddy to zonal mean meridional potential vorticity (PV) gradients), a wave damping rate, and a meridional diffusion coefficient are calculated. The equatorward flank of the polar night jet during winter is identified as a wave breaking region from the observed PV gradients; the region moves poleward with season, covering all high latitudes in spring. Derived damping rates maximize in the subtropical upper stratosphere (the 'surf zone'), with damping time scales of 3-4 days. Maximum diffusion coefficients follow the spatial patterns of the wave breaking criterion, with magnitudes comparable to prior published estimates. Overall, the observed results agree well with the parameterized calculations of Garcia.

  5. Classification of parameter-dependent quantum integrable models, their parameterization, exact solution and other properties

    International Nuclear Information System (INIS)

    Owusu, Haile K; Yuzbashyan, Emil A

    2011-01-01

    We study general quantum integrable Hamiltonians linear in a coupling constant and represented by finite N x N real symmetric matrices. The restriction on the coupling dependence leads to a natural notion of nontrivial integrals of motion and classification of integrable families into types according to the number of such integrals. A type M family in our definition is formed by N-M nontrivial mutually commuting operators linear in the coupling. Working from this definition alone, we parameterize type M operators, i.e. resolve the commutation relations, and obtain an exact solution for their eigenvalues and eigenvectors. We show that our parameterization covers all type 1, 2 and 3 integrable models and discuss the extent to which it is complete for other types. We also present robust numerical observation on the number of energy-level crossings in type M integrable systems and analyze the taxonomy of types in the 1D Hubbard model. (paper)

  6. Parameterized entropy analysis of EEG following hypoxic-ischemic brain injury

    International Nuclear Information System (INIS)

    Tong Shanbao; Bezerianos, Anastasios; Malhotra, Amit; Zhu Yisheng; Thakor, Nitish

    2003-01-01

    In the present study Tsallis and Renyi entropy methods were used to study the electric activity of brain following hypoxic-ischemic (HI) injury. We investigated the performances of these parameterized information measures in describing the electroencephalogram (EEG) signal of controlled experimental animal HI injury. The results show that (a): compared with Shannon and Renyi entropy, the parameterized Tsallis entropy acts like a spatial filter and the information rate can either tune to long range rhythms or to short abrupt changes, such as bursts or spikes during the beginning of recovery, by the entropic index q; (b): Renyi entropy is a compact and predictive indicator for monitoring the physiological changes during the recovery of brain injury. There is a reduction in the Renyi entropy after brain injury followed by a gradual recovery upon resuscitation
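The parameterized entropies used in the study follow the standard Tsallis and Rényi definitions, both of which reduce to the Shannon entropy as the entropic index approaches 1. A minimal sketch for a discrete distribution (the EEG-specific windowing and signal processing are not shown):

```python
import numpy as np

def tsallis_entropy(p, q):
    """Tsallis entropy S_q = (1 - sum p_i^q) / (q - 1); -> Shannon as q -> 1."""
    p = np.asarray(p, dtype=float)
    p = p[p > 0]
    if abs(q - 1.0) < 1e-12:
        return -np.sum(p * np.log(p))
    return (1.0 - np.sum(p ** q)) / (q - 1.0)

def renyi_entropy(p, alpha):
    """Renyi entropy H_a = log(sum p_i^a) / (1 - a); -> Shannon as a -> 1."""
    p = np.asarray(p, dtype=float)
    p = p[p > 0]
    if abs(alpha - 1.0) < 1e-12:
        return -np.sum(p * np.log(p))
    return np.log(np.sum(p ** alpha)) / (1.0 - alpha)

# A flat distribution maximizes both entropies for a fixed number of bins;
# a peaked distribution (e.g. burst-dominated EEG power) scores lower.
flat = [0.25] * 4
peaked = [0.85, 0.05, 0.05, 0.05]
```

Varying q (or alpha) reweights rare versus common events, which is the "tuning" property the abstract attributes to the entropic index.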

  7. Modeling the energy balance in Marseille: Sensitivity to roughness length parameterizations and thermal admittance

    Science.gov (United States)

    Demuzere, M.; De Ridder, K.; van Lipzig, N. P. M.

    2008-08-01

    During the ESCOMPTE campaign (Experience sur Site pour COntraindre les Modeles de Pollution atmospherique et de Transport d'Emissions), a 4-day intensive observation period was selected to evaluate the Advanced Regional Prediction System (ARPS), a nonhydrostatic meteorological mesoscale model that was optimized with a parameterization for thermal roughness length to better represent urban surfaces. The evaluation shows that the ARPS model is able to correctly reproduce temperature, wind speed, and direction for one urban and two rural measurement stations. Furthermore, simulated heat fluxes show good agreement with the observations, although simulated sensible heat fluxes were initially too low for the urban stations. In order to improve the latter, different roughness length parameterization schemes were tested, combined with various thermal admittance values. This sensitivity study showed that the Zilitinkevich scheme combined with an intermediate value of thermal admittance performs best.

  8. Solvation of monovalent anions in formamide and methanol: Parameterization of the IEF-PCM model

    International Nuclear Information System (INIS)

    Boees, Elvis S.; Bernardi, Edson; Stassen, Hubert; Goncalves, Paulo F.B.

    2008-01-01

    The thermodynamics of solvation for a series of monovalent anions in formamide and methanol has been studied using the polarizable continuum model (PCM). The parameterization of this continuum model was guided by molecular dynamics simulations. The parameterized PCM model predicts the Gibbs free energies of solvation for 13 anions in formamide and 16 anions in methanol in very good agreement with experimental data. Two sets of atomic radii were tested in the definition of the solute cavities in the PCM and their performances are evaluated and discussed. Mean absolute deviations of the calculated free energies of solvation from the experimental values are in the range of 1.3-2.1 kcal/mol

  9. Parameterization of light absorption by components of seawater in optically complex coastal waters of the Crimea Peninsula (Black Sea).

    Science.gov (United States)

    Dmitriev, Egor V; Khomenko, Georges; Chami, Malik; Sokolov, Anton A; Churilova, Tatyana Y; Korotaev, Gennady K

    2009-03-01

    The absorption of sunlight by oceanic constituents significantly contributes to the spectral distribution of the water-leaving radiance. Here it is shown that current parameterizations of absorption coefficients do not apply to the optically complex waters of the Crimea Peninsula. Based on in situ measurements, parameterizations of phytoplankton, nonalgal, and total particulate absorption coefficients are proposed. Their performance is evaluated using a log-log regression combined with a low-pass filter and the nonlinear least-square method. Statistical significance of the estimated parameters is verified using the bootstrap method. The parameterizations are relevant for chlorophyll a concentrations ranging from 0.45 up to 2 mg/m³.
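Absorption parameterizations of this kind are commonly fit as power laws, a_ph = A * Chl^B, via log-log regression. A sketch on synthetic data; the coefficients, wavelength and concentrations below are illustrative assumptions, not the paper's Black Sea values:

```python
import numpy as np

# Hypothetical chlorophyll-a concentrations [mg/m^3] and phytoplankton
# absorption a_ph(440) [1/m], generated here from an assumed power law
chl = np.array([0.45, 0.6, 0.8, 1.0, 1.3, 1.7, 2.0])
a_ph = 0.05 * chl ** 0.7  # synthetic "measurements"

# Log-log least squares: log a_ph = log A + B * log Chl
B, logA = np.polyfit(np.log(chl), np.log(a_ph), 1)
A = np.exp(logA)
```

With real measurements the residuals around the fitted line would be used (e.g. via the bootstrap, as in the abstract) to assess the significance of A and B.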

  10. Graphical Derivatives and Stability Analysis for Parameterized Equilibria with Conic Constraints

    Czech Academy of Sciences Publication Activity Database

    Mordukhovich, B. S.; Outrata, Jiří; Ramírez, H. C.

    2015-01-01

    Roč. 23, č. 4 (2015), s. 687-704 ISSN 1877-0533 R&D Projects: GA ČR(CZ) GAP201/12/0671 Institutional support: RVO:67985556 Keywords : Variational analysis and optimization * Parameterized equilibria * Conic constraints * Sensitivity and stability analysis * Solution maps * Graphical derivatives * Normal and tangent cones Subject RIV: BA - General Mathematics Impact factor: 0.973, year: 2015 http://library.utia.cas.cz/separaty/2015/MTR/outrata-0449259.pdf

  11. Parameterizing correlations between hydrometeor species in mixed-phase Arctic clouds

    Science.gov (United States)

    Larson, Vincent E.; Nielsen, Brandon J.; Fan, Jiwen; Ovchinnikov, Mikhail

    2011-01-01

    Mixed-phase Arctic clouds, like other clouds, contain small-scale variability in hydrometeor fields, such as cloud water or snow mixing ratio. This variability may be worth parameterizing in coarse-resolution numerical models. In particular, for modeling multispecies processes such as accretion and aggregation, it would be useful to parameterize subgrid correlations among hydrometeor species. However, one difficulty is that there exist many hydrometeor species and many microphysical processes, leading to complexity and computational expense. Existing lower and upper bounds on linear correlation coefficients are too loose to serve directly as a method to predict subgrid correlations. Therefore, this paper proposes an alternative method that begins with the spherical parameterization framework of Pinheiro and Bates (1996), which expresses the correlation matrix in terms of its Cholesky factorization. The values of the elements of the Cholesky matrix are populated here using a "cSigma" parameterization that we introduce based on the aforementioned bounds on correlations. The method has three advantages: (1) the computational expense is tolerable; (2) the correlations are, by construction, guaranteed to be consistent with each other; and (3) the methodology is fairly general and hence may be applicable to other problems. The method is tested noninteractively using simulations of three Arctic mixed-phase cloud cases from two field experiments: the Indirect and Semi-Direct Aerosol Campaign and the Mixed-Phase Arctic Cloud Experiment. Benchmark simulations are performed using a large-eddy simulation (LES) model that includes a bin microphysical scheme. The correlations estimated by the new method satisfactorily approximate the correlations produced by the LES.
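The spherical parameterization of Pinheiro and Bates guarantees a valid correlation matrix by construction: each row of the Cholesky factor L is written in spherical coordinates, so every row has unit norm and C = L L^T is symmetric positive semidefinite with unit diagonal. A minimal sketch; the paper's "cSigma" rule for populating the angles is not reproduced, and the angles below are illustrative:

```python
import numpy as np

def corr_from_angles(theta):
    """Correlation matrix from the spherical (Pinheiro-Bates) parameterization.

    theta: dict {(i, j): angle in (0, pi)} for 0 <= j < i < n.
    Returns C = L L^T, which is symmetric PSD with unit diagonal
    for any choice of angles."""
    n = max(i for i, _ in theta) + 1
    L = np.zeros((n, n))
    L[0, 0] = 1.0
    for i in range(1, n):
        prod = 1.0
        for j in range(i):
            L[i, j] = prod * np.cos(theta[(i, j)])
            prod *= np.sin(theta[(i, j)])
        L[i, i] = prod  # remaining product of sines keeps the row norm at 1
    return L @ L.T

# Three hydrometeor species, e.g. cloud water, rain, snow (angles illustrative)
angles = {(1, 0): 0.4, (2, 0): 0.9, (2, 1): 1.2}
C = corr_from_angles(angles)
```

This is why the method's correlations are "by construction, guaranteed to be consistent with each other": any angle assignment yields a matrix that is a legitimate correlation matrix.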

  12. Framework of cloud parameterization including ice for 3-D mesoscale models

    Energy Technology Data Exchange (ETDEWEB)

    Levkov, L; Jacob, D; Eppel, D; Grassl, H

    1989-01-01

    A parameterization scheme for the simulation of ice in clouds is incorporated into the hydrostatic version of the GKSS three-dimensional mesoscale model. Numerical simulations of precipitation are performed over the North Sea, the Hawaiian trade wind area and the region of the intertropical convergence zone. Not only major features of the convective structures in all three areas but also cloud-aerosol interactions have been successfully simulated. (orig.) With 19 figs., 2 tabs.

  13. Reliability and parameterization of Romberg Test in people who have suffered a stroke

    OpenAIRE

    Perez Cruzado, David; Gonzalez Sanchez, Manuel; Cuesta-Vargas, Antonio

    2014-01-01

    AIM: To analyze the reliability, and describe the parameterization with inertial sensors, of the Romberg test in people who have had a stroke. METHODS: The Romberg test was performed for 20 seconds in four different settings, depending on the supporting leg and the position of the eyes (opened eyes / dominant leg; closed eyes / dominant leg; opened eyes / non-dominant leg; closed eyes / non-dominant leg), in people who had suffered a stroke more than a year earlier. Two inertial sensors (sampli...

  14. Parameterization of cloud droplet formation for global and regional models: including adsorption activation from insoluble CCN

    Directory of Open Access Journals (Sweden)

    P. Kumar

    2009-04-01

    Dust and black carbon aerosol have long been known to exert potentially important and diverse impacts on cloud droplet formation. Most studies to date focus on the soluble fraction of these particles, and overlook interactions of the insoluble fraction with water vapor (even if known to be hydrophilic). To address this gap, we developed a new parameterization that considers cloud droplet formation within an ascending air parcel containing insoluble (but wettable) particles externally mixed with aerosol containing an appreciable soluble fraction. Activation of particles with a soluble fraction is described through well-established Köhler theory, while the activation of hydrophilic insoluble particles is treated by "adsorption-activation" theory. In the latter, water vapor is adsorbed onto insoluble particles, the activity of which is described by a multilayer Frenkel-Halsey-Hill (FHH) adsorption isotherm modified to account for particle curvature. We further develop FHH activation theory to (i) find combinations of the adsorption parameters A_FHH and B_FHH that yield atmospherically relevant behavior, and (ii) express activation properties (critical supersaturation) that follow a simple power law with respect to dry particle diameter.

    The new parameterization is tested by comparing the parameterized cloud droplet number concentration against predictions of a detailed numerical cloud model, considering a wide range of particle populations, cloud updraft conditions, water vapor condensation coefficients and FHH adsorption isotherm characteristics. The agreement between parameterization and parcel model is excellent, with an average error of 10% and R² ≈ 0.98. A preliminary sensitivity study suggests that the sublinear response of droplet number to Köhler particle concentration is not as strong for FHH particles.

  15. The relationship between a deformation-based eddy parameterization and the LANS-α turbulence model

    Science.gov (United States)

    Bachman, Scott D.; Anstey, James A.; Zanna, Laure

    2018-06-01

    A recent class of ocean eddy parameterizations proposed by Porta Mana and Zanna (2014) and Anstey and Zanna (2017) modeled the large-scale flow as a non-Newtonian fluid whose subgridscale eddy stress is a nonlinear function of the deformation. This idea, while largely new to ocean modeling, has a history in turbulence modeling dating at least back to Rivlin (1957). The new class of parameterizations results in equations that resemble the Lagrangian-averaged Navier-Stokes-α model (LANS-α, e.g., Holm et al., 1998a). In this note we employ basic tensor mathematics to highlight the similarities between these turbulence models using component-free notation. We extend the Anstey and Zanna (2017) parameterization, which was originally presented in 2D, to 3D, and derive variants of this closure that arise when the full non-Newtonian stress tensor is used. Despite the mathematical similarities between the non-Newtonian and LANS-α models which might provide insight into numerical implementation, the input and dissipation of kinetic energy between these two turbulent models differ.

  16. Current state of aerosol nucleation parameterizations for air-quality and climate modeling

    Science.gov (United States)

    Semeniuk, Kirill; Dastoor, Ashu

    2018-04-01

    Aerosol nucleation parameterization models commonly used in 3-D air quality and climate models have serious limitations. This includes variants based on classical nucleation theory, empirical models and other formulations. Recent work based on detailed and extensive laboratory measurements and improved quantum chemistry computation has substantially advanced the state of nucleation parameterizations. For inorganic nucleation, involving binary (BHN) and ternary (THN) homogeneous nucleation including ion effects, these new models should be considered worthwhile replacements for the old ones. However, the contribution of organic species to nucleation remains poorly quantified. New particle formation involves a distinct post-nucleation growth regime which is characterized by a strong Kelvin curvature effect and is thus dependent on the availability of very low volatility organic species or sulfuric acid. There have been advances in the understanding of the multiphase chemistry of biogenic and anthropogenic organic compounds which help to overcome the initial aerosol growth barrier. Implementation of processes influencing new particle formation is challenging in 3-D models and there is a lack of comprehensive parameterizations. This review considers the existing models and recent innovations.

  17. A Heuristic Parameterization for the Integrated Vertical Overlap of Cumulus and Stratus

    Science.gov (United States)

    Park, Sungsu

    2017-10-01

    The author developed a heuristic parameterization to handle the contrasting vertical overlap structures of cumulus and stratus in an integrated way. The parameterization assumes that cumulus is maximum-randomly overlapped with adjacent cumulus; stratus is maximum-randomly overlapped with adjacent stratus; and radiation and precipitation areas at each model interface are grouped into four categories, that is, convective, stratiform, mixed, and clear areas. For simplicity, thermodynamic scalars within individual portions of cloud, radiation, and precipitation areas are assumed to be internally homogeneous. The parameterization was implemented into the Seoul National University Atmosphere Model version 0 (SAM0) in an offline mode and tested over the globe. The offline control simulation reasonably reproduces the online surface precipitation flux and longwave cloud radiative forcing (LWCF). Although the cumulus fraction is much smaller than the stratus fraction, cumulus dominantly contributes to precipitation production in the tropics. For radiation, however, stratus is dominant. Compared with the maximum overlap, the random overlap of stratus produces stronger LWCF and, surprisingly, more precipitation flux due to less evaporation of convective precipitation. Compared with the maximum overlap, the random overlap of cumulus simulates stronger LWCF and weaker precipitation flux. Compared with the control simulation with separate cumulus and stratus, the simulation with a single-merged cloud substantially enhances the LWCF in the tropical deep convection and midlatitude storm track regions. The process-splitting treatment of convective and stratiform precipitation with an independent precipitation approximation (IPA) simulates weaker surface precipitation flux than the control simulation in the tropical region.
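The maximum-random overlap rule referred to above can be written as the standard recursion (Geleyn and Hollingsworth): adjacent cloudy layers overlap maximally, while layers separated by clear sky combine randomly. A sketch of that textbook rule, not necessarily SAM0's exact implementation:

```python
def total_cloud_cover_max_random(fracs):
    """Total cloud cover from layer cloud fractions (top to bottom)
    under maximum-random overlap: adjacent cloudy layers overlap
    maximally; layers separated by clear air overlap randomly."""
    clear = 1.0 - fracs[0]
    for prev, cur in zip(fracs, fracs[1:]):
        if prev >= 1.0:
            return 1.0  # an overcast layer makes total cover 1
        clear *= (1.0 - max(prev, cur)) / (1.0 - prev)
    return 1.0 - clear

# Two adjacent layers of fraction 0.3 overlap maximally -> total cover 0.3,
# whereas purely random overlap would give 1 - 0.7 * 0.7 = 0.51
cc = total_cloud_cover_max_random([0.3, 0.3])
```

Inserting a clear layer between the two cloudy ones switches the pair to random overlap, which is exactly the contrast between the maximum and random overlap experiments discussed in the abstract.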

  18. GHI calculation sensitivity on microphysics, land- and cumulus parameterization in WRF over the Reunion Island

    Science.gov (United States)

    De Meij, A.; Vinuesa, J.-F.; Maupas, V.

    2018-05-01

    The sensitivity of calculated global horizontal irradiation (GHI) values in the Weather Research and Forecasting (WRF) model to different microphysics and dynamics schemes is studied. Thirteen sensitivity simulations were performed in which the microphysics, cumulus parameterization schemes and land surface models were changed. First, the model's performance was evaluated by comparing calculated GHI values for the Base Case with observations for Reunion Island for 2014. In general, the model shows the largest bias during the austral summer, indicating that it is less accurate in timing the formation and dissipation of clouds in summer, when more water vapor is present in the atmosphere than during the austral winter. Second, the sensitivity of calculated GHI values to changes in the microphysics, cumulus parameterization and land surface models is evaluated. The simulations showed that changing the microphysics from the Thompson (or Single-Moment 6-class) scheme to the Morrison double-moment scheme improves the relative bias from 45% to 10%. The underlying reason is that the Morrison double-moment scheme predicts both the mass and number concentrations of five hydrometeor classes, which improves the calculation of the densities, sizes and lifetimes of cloud droplets, whereas the single-moment schemes predict only the mass of fewer hydrometeor classes. Changing the cumulus parameterization schemes and land surface models does not have a large impact on the GHI calculations.

  19. Tsunami damping by mangrove forest: a laboratory study using parameterized trees

    Directory of Open Access Journals (Sweden)

    A. Strusińska-Correia

    2013-02-01

    Tsunami attenuation by coastal vegetation was examined under laboratory conditions for mature mangroves Rhizophora sp. The novel tree parameterization concept developed here, accounting for both bio-mechanical and structural tree properties, allowed the complex tree structure to be substituted by a simplified tree model of identical hydraulic resistance. The most representative parameterized mangrove model was selected among the tested models with different frontal area and root density, based on hydraulic test results. The selected parameterized tree models were arranged in forest models of different width and tested systematically under varying incident tsunami conditions (solitary waves and tsunami bores). The damping performance of the forest models under these two flow regimes was compared in terms of wave height and force envelopes, the wave transmission coefficient, and drag and inertia coefficients. Unlike previous studies, the results indicate a significant contribution of the foreshore topography to solitary wave energy reduction through wave breaking, compared to that attributable to the forest itself. A similar rate of tsunami transmission (ca. 20%) was achieved for both flow conditions (solitary waves and tsunami bores) with the widest forest investigated (75 m in prototype scale). The drag coefficient CD attributed to the solitary waves tends to be constant (CD = 1.5) over the investigated range of the Reynolds number.
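Drag and inertia coefficients of this kind typically enter a Morison-type force model for the parameterized trees. A sketch using the solitary-wave value CD = 1.5 reported above; every other number (density, frontal area, displaced volume, CM) is an illustrative assumption, not a value from the study:

```python
def morison_force(u, dudt, rho=1000.0, Cd=1.5, Cm=2.0, A=0.05, V=0.01):
    """Morison-type in-line force on one tree model [N]: drag + inertia.

    u    : flow velocity [m/s]
    dudt : flow acceleration [m/s^2]
    A    : frontal area [m^2], V: displaced volume [m^3] (assumed values)
    """
    drag = 0.5 * rho * Cd * A * u * abs(u)  # u*|u| keeps the sign of the flow
    inertia = rho * Cm * V * dudt
    return drag + inertia

# Example: 2 m/s bore front decelerating mildly
F = morison_force(u=2.0, dudt=0.5)
```

Because drag scales with u|u| while inertia scales with du/dt, the drag term dominates for quasi-steady bores, which is consistent with characterizing the forest mainly through CD.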

  20. Parameterization of Mixed Layer and Deep-Ocean Mesoscales Including Nonlinearity

    Science.gov (United States)

    Canuto, V. M.; Cheng, Y.; Dubovikov, M. S.; Howard, A. M.; Leboissetier, A.

    2018-01-01

    In 2011, Chelton et al. carried out a comprehensive census of mesoscales using altimetry data and reached the following conclusions: "essentially all of the observed mesoscale features are nonlinear" and "mesoscales do not move with the mean velocity but with their own drift velocity," which is "the most germane of all the nonlinear metrics." Accounting for these results in a mesoscale parameterization presents conceptual and practical challenges since linear analysis is no longer usable and one needs a model of nonlinearity. A mesoscale parameterization is presented that has the following features: 1) it is based on the solutions of the nonlinear mesoscale dynamical equations, 2) it describes arbitrary tracers, 3) it includes adiabatic (A) and diabatic (D) regimes, 4) the eddy-induced velocity is the sum of a Gent and McWilliams (GM) term plus a new term representing the difference between drift and mean velocities, 5) the new term lowers the transfer of mean potential energy to mesoscales, 6) the isopycnal slopes are not as flat as in the GM case, 7) deep-ocean stratification is enhanced compared to previous parameterizations where being more weakly stratified allowed a large heat uptake that is not observed, 8) the strength of the Deacon cell is reduced. The numerical results are from a stand-alone ocean code with Coordinated Ocean-Ice Reference Experiment I (CORE-I) normal-year forcing.
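    The GM part of feature 4 can be sketched as an eddy-induced velocity derived from an eddy streamfunction psi = kappa * S, where S is the isopycnal slope; the drift-velocity correction of the new scheme is not reproduced here, and the numbers are illustrative:

```python
import numpy as np

def gm_eddy_induced_velocity(kappa, slope, z):
    """Eddy-induced horizontal velocity of the Gent-McWilliams closure,
    v* = -d(psi)/dz with eddy streamfunction psi = kappa * S.
    Only the GM term is sketched; the new drift-velocity term is omitted."""
    psi = kappa * slope              # eddy streamfunction (m^2 s^-1)
    return -np.gradient(psi, z)      # v* (m s^-1)

z = np.linspace(-1000.0, 0.0, 5)                  # depth levels (m)
kappa = 1000.0                                    # thickness diffusivity (m^2 s^-1), typical O(1000)
slope = np.array([0.0, 1e-4, 2e-4, 3e-4, 4e-4])   # idealized isopycnal slopes
v_star = gm_eddy_induced_velocity(kappa, slope, z)
print(v_star)
```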

  1. Intercomparison of Martian Lower Atmosphere Simulated Using Different Planetary Boundary Layer Parameterization Schemes

    Science.gov (United States)

    Natarajan, Murali; Fairlie, T. Duncan; Dwyer Cianciolo, Alicia; Smith, Michael D.

    2015-01-01

    We use the mesoscale modeling capability of Mars Weather Research and Forecasting (MarsWRF) model to study the sensitivity of the simulated Martian lower atmosphere to differences in the parameterization of the planetary boundary layer (PBL). Characterization of the Martian atmosphere and realistic representation of processes such as mixing of tracers like dust depend on how well the model reproduces the evolution of the PBL structure. MarsWRF is based on the NCAR WRF model and it retains some of the PBL schemes available in the earth version. Published studies have examined the performance of different PBL schemes in NCAR WRF with the help of observations. Currently such assessments are not feasible for Martian atmospheric models due to lack of observations. It is of interest though to study the sensitivity of the model to PBL parameterization. Typically, for standard Martian atmospheric simulations, we have used the Medium Range Forecast (MRF) PBL scheme, which considers a correction term to the vertical gradients to incorporate nonlocal effects. For this study, we have also used two other parameterizations, a non-local closure scheme called Yonsei University (YSU) PBL scheme and a turbulent kinetic energy closure scheme called Mellor- Yamada-Janjic (MYJ) PBL scheme. We will present intercomparisons of the near surface temperature profiles, boundary layer heights, and wind obtained from the different simulations. We plan to use available temperature observations from Mini TES instrument onboard the rovers Spirit and Opportunity in evaluating the model results.

  2. Assessment of the turbulence parameterization schemes for the Martian mesoscale simulations

    Science.gov (United States)

    Temel, Orkun; Karatekin, Ozgur; Van Beeck, Jeroen

    2016-07-01

    Turbulent transport within the Martian atmospheric boundary layer (ABL) is one of the most important physical processes in the Martian atmosphere, owing to the very thin structure of the Martian atmosphere and super-adiabatic conditions during the diurnal cycle [1]. Realistic modeling of turbulent fluxes within the Martian ABL has a crucial effect on many physical phenomena including dust devils [2], methane dispersion [3] and nocturnal jets [4]. Moreover, the surface heat and mass fluxes, which are related to mass transport within the Martian sub-surface, are computed by the turbulence parameterization schemes. Therefore, in addition to possible applications within the Martian boundary layer, the parameterization of turbulence has an important effect on biological research on Mars, including the investigation of the water cycle and sub-surface modeling. In terms of the turbulence modeling approaches employed for the Martian ABL, "planetary boundary layer (PBL) schemes" have been applied not only to global circulation modeling but also to mesoscale simulations [5]. The PBL schemes used for Mars are variants of schemes originally developed for the Earth, and are either based on the empirical determination of turbulent fluxes [6] or on solving a one-dimensional turbulent kinetic energy equation [7]. Even though Large Eddy Simulation techniques have also been applied with regional models for Mars, these advanced models still use the features of the traditional PBL schemes for sub-grid modeling [8]. Therefore, assessment of these PBL schemes is vital for a better understanding of the atmospheric processes of Mars. In this framework, the present study is devoted to the validation of different turbulence modeling approaches for the Martian ABL against the Viking Lander [9] and MSL [10] datasets. The GCM/mesoscale code used is PlanetWRF, the extended version of the WRF model for planetary applications.
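    The TKE-based closures mentioned above (e.g. Mellor-Yamada-type schemes) solve a prognostic equation for turbulent kinetic energy. A one-point sketch of its source terms, with vertical transport omitted, B1 = 16.6 as in Mellor and Yamada (1982), and purely illustrative near-surface values:

```python
def tke_tendency(e, dudz, dvdz, n2, l, km, kh, b1=16.6):
    """Source terms of a 1.5-order (Mellor-Yamada-type) TKE equation:
      de/dt = Km*((du/dz)^2 + (dv/dz)^2) - Kh*N^2 - e^(3/2)/(B1*l)
    where N^2 is the squared buoyancy frequency and l a master length scale.
    Vertical turbulent transport of e is omitted in this sketch."""
    shear_production = km * (dudz**2 + dvdz**2)
    buoyancy_production = -kh * n2
    dissipation = e**1.5 / (b1 * l)
    return shear_production + buoyancy_production - dissipation

# Illustrative weakly stable near-surface values
tend = tke_tendency(e=0.5, dudz=0.02, dvdz=0.0, n2=1e-4, l=30.0, km=5.0, kh=5.0)
print(tend)  # positive: shear production outweighs buoyancy damping and dissipation
```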

  3. Lumped Mass Modeling for Local-Mode-Suppressed Element Connectivity

    DEFF Research Database (Denmark)

    Joung, Young Soo; Yoon, Gil Ho; Kim, Yoon Young

    2005-01-01

    connectivity parameterization (ECP) is employed. On the way to the ultimate crashworthy structure optimization, we are now developing a local mode-free topology optimization formulation that can be implemented in the ECP method. In fact, the local mode-freeing strategy developed here can also be used directly...... experiencing large structural changes, still appears to be poor. In ECP, the nodes of the domain-discretizing elements are connected by zero-length one-dimensional elastic links having varying stiffness. For computational efficiency, every elastic link is now assumed to have two lumped masses at its ends....... Choosing appropriate penalization functions for lumped mass and link stiffness is important for local mode-free results. However, unless the objective and constraint functions are carefully selected, it is difficult to obtain clear black-and-white results. It is shown that the present formulation is also...

  4. Handbook of networking & connectivity

    CERN Document Server

    McClain, Gary R

    1994-01-01

    Handbook of Networking & Connectivity focuses on connectivity standards in use, including hardware and software options. The book serves as a guide for solving specific problems that arise in designing and maintaining organizational networks. The selection first tackles open systems interconnection, guide to digital communications, and implementing TCP/IP in an SNA environment. Discussions focus on elimination of the SNA backbone, routing SNA over internets, connectionless versus connection-oriented networks, internet concepts, application program interfaces, basic principles of layering, proto

  5. 78 FR 55684 - ConnectED Workshop

    Science.gov (United States)

    2013-09-11

    ... tools move everything from homework assignments to testing into the cloud. The workshop will explore possible strategies to connect virtually all of our students to next-generation broadband in a timely, cost-effective way. It will also share promising practices, from NTIA's Broadband Technology Opportunities...

  6. Joyce and Ulysses: integrated and user-friendly tools for the parameterization of intramolecular force fields from quantum mechanical data.

    Science.gov (United States)

    Barone, Vincenzo; Cacelli, Ivo; De Mitri, Nicola; Licari, Daniele; Monti, Susanna; Prampolini, Giacomo

    2013-03-21

    The Joyce program is augmented with several new features, including the user-friendly Ulysses GUI, the possibility of complete excited-state parameterization, and a more flexible treatment of the force field electrostatic terms. A first validation is achieved by successfully comparing results obtained with Joyce2.0 against literature results for the same set of benchmark molecules. The parameterization protocol is also applied to two larger molecules, namely nicotine and a coumarin-based dye. In the former case, the parameterized force field is employed in molecular dynamics simulations of solvated nicotine, and the solute conformational distribution at room temperature is discussed. Force fields parameterized with Joyce2.0 for both the dye's ground and first excited electronic states are validated through the calculation of absorption and emission vertical energies with molecular mechanics optimized structures. Finally, the newly implemented procedure to handle polarizable force fields is discussed and applied to the pyrimidine molecule as a test case.
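    For a single torsional coordinate, an intramolecular force-field fit of this kind reduces to least squares of the model energy against QM scan energies. A one-coordinate sketch with synthetic data (the real Joyce protocol fits many internal coordinates and electrostatic terms simultaneously; the potential form and numbers here are illustrative):

```python
import numpy as np

# Fit a one-term torsional potential E(phi) = 0.5*k*(1 + cos(n*phi - gamma))
# to a synthetic "QM" torsional scan with known k=2, n=3, gamma=0.
phi = np.deg2rad(np.arange(0, 360, 30))
e_qm = 2.0 * 0.5 * (1.0 + np.cos(3 * phi))  # synthetic target energies

# For fixed n and gamma the model is linear in k: E = k * basis
basis = 0.5 * (1.0 + np.cos(3 * phi))
k_fit, *_ = np.linalg.lstsq(basis[:, None], e_qm, rcond=None)
print(round(k_fit[0], 6))  # recovers the force constant k = 2.0
```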

  7. The urban land use in the COSMO-CLM model: a comparison of three parameterizations for Berlin

    Directory of Open Access Journals (Sweden)

    Kristina Trusilova

    2016-05-01

    Full Text Available The regional non-hydrostatic climate model COSMO-CLM is increasingly being used on fine spatial scales of 1–5 km. Such applications require a detailed differentiation between the parameterization for natural and urban land uses. Since 2010, three parameterizations for urban land use have been incorporated into COSMO-CLM. These parameterizations vary in their complexity, required city parameters and their computational cost. We perform model simulations with the COSMO-CLM coupled to these three parameterizations for urban land in the same model domain of Berlin on a 1-km grid and compare results with available temperature observations. While all models capture the urban heat island, they differ in spatial detail, magnitude and the diurnal variation.

  8. Polynomial Chaos–Based Bayesian Inference of K-Profile Parameterization in a General Circulation Model of the Tropical Pacific

    KAUST Repository

    Sraj, Ihab; Zedler, Sarah E.; Knio, Omar; Jackson, Charles S.; Hoteit, Ibrahim

    2016-01-01

    The authors present a polynomial chaos (PC)-based Bayesian inference method for quantifying the uncertainties of the K-profile parameterization (KPP) within the MIT general circulation model (MITgcm) of the tropical Pacific. The inference

  9. Uncertainties of parameterized surface downward clear-sky shortwave and all-sky longwave radiation.

    Science.gov (United States)

    Gubler, S.; Gruber, S.; Purves, R. S.

    2012-06-01

    As many environmental models rely on simulating the energy balance at the Earth's surface based on parameterized radiative fluxes, knowledge of the inherent model uncertainties is important. In this study we evaluate one parameterization of clear-sky direct, diffuse and global shortwave downward radiation (SDR) and diverse parameterizations of clear-sky and all-sky longwave downward radiation (LDR). In a first step, SDR is estimated based on measured input variables and estimated atmospheric parameters for hourly time steps during the years 1996 to 2008. Model behaviour is validated using the high-quality measurements of six Alpine Surface Radiation Budget (ASRB) stations in Switzerland covering different elevations, and measurements of the Swiss Alpine Climate Radiation Monitoring network (SACRaM) in Payerne. In a next step, twelve clear-sky LDR parameterizations are calibrated using the ASRB measurements. One of the best performing parameterizations is selected to estimate all-sky LDR, where cloud transmissivity is estimated using measured and modeled global SDR during daytime. In a last step, the performance of several interpolation methods is evaluated to determine the cloud transmissivity at night. We show that clear-sky direct, diffuse and global SDR is adequately represented by the model when using measurements of the atmospheric parameters precipitable water and aerosol content at Payerne. If the atmospheric parameters are estimated and used as a fixed value, the relative mean bias deviance (MBD) and the relative root mean squared deviance (RMSD) of the clear-sky global SDR scatter between -2 and 5%, and 7 and 13%, within the six locations. The small errors in clear-sky global SDR can be attributed to compensating effects of modeled direct and diffuse SDR, since an overestimation of aerosol content in the atmosphere results in underestimating the direct, but overestimating the diffuse, SDR. Calibration of LDR parameterizations to local conditions
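    One widely used clear-sky LDR parameterization of the kind calibrated in such studies is Brutsaert's (1975) formula. A sketch with its published default coefficients; the study recalibrates coefficients of this type against ASRB measurements, and the inputs below are illustrative:

```python
SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant (W m^-2 K^-4)

def ldr_clear_sky_brutsaert(t_air_k, e_hpa, a=1.24, b=1.0 / 7.0):
    """Clear-sky downward longwave radiation after Brutsaert (1975):
      LDR = a * (e/T)^b * sigma * T^4
    with screen-level air temperature T (K) and vapor pressure e (hPa).
    a and b are the published defaults, not locally calibrated values."""
    emissivity = a * (e_hpa / t_air_k) ** b
    return emissivity * SIGMA * t_air_k**4

ldr = ldr_clear_sky_brutsaert(283.15, 10.0)  # a mild, moderately humid case
print(round(ldr, 1))  # W m^-2
```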

  10. Uncertainties of parameterized surface downward clear-sky shortwave and all-sky longwave radiation.

    Directory of Open Access Journals (Sweden)

    S. Gubler

    2012-06-01

    Full Text Available As many environmental models rely on simulating the energy balance at the Earth's surface based on parameterized radiative fluxes, knowledge of the inherent model uncertainties is important. In this study we evaluate one parameterization of clear-sky direct, diffuse and global shortwave downward radiation (SDR) and diverse parameterizations of clear-sky and all-sky longwave downward radiation (LDR). In a first step, SDR is estimated based on measured input variables and estimated atmospheric parameters for hourly time steps during the years 1996 to 2008. Model behaviour is validated using the high-quality measurements of six Alpine Surface Radiation Budget (ASRB) stations in Switzerland covering different elevations, and measurements of the Swiss Alpine Climate Radiation Monitoring network (SACRaM) in Payerne. In a next step, twelve clear-sky LDR parameterizations are calibrated using the ASRB measurements. One of the best performing parameterizations is selected to estimate all-sky LDR, where cloud transmissivity is estimated using measured and modeled global SDR during daytime. In a last step, the performance of several interpolation methods is evaluated to determine the cloud transmissivity at night.

    We show that clear-sky direct, diffuse and global SDR is adequately represented by the model when using measurements of the atmospheric parameters precipitable water and aerosol content at Payerne. If the atmospheric parameters are estimated and used as a fixed value, the relative mean bias deviance (MBD) and the relative root mean squared deviance (RMSD) of the clear-sky global SDR scatter between −2 and 5%, and 7 and 13%, within the six locations. The small errors in clear-sky global SDR can be attributed to compensating effects of modeled direct and diffuse SDR, since an overestimation of aerosol content in the atmosphere results in underestimating the direct, but overestimating the diffuse, SDR. Calibration of LDR parameterizations

  11. The Connected Traveler

    Energy Technology Data Exchange (ETDEWEB)

    Young, Stanley

    2017-04-24

    The Connected Traveler project is a multi-disciplinary undertaking that seeks to validate the potential for transformative transportation-system energy savings by incentivizing energy-efficient travel behavior.

  12. Connections: All Issues

    Science.gov (United States)

    Connections Newsletter, December 2016: Science-themed gifts available at

  13. Parameterization of Cherenkov Light Lateral Distribution Function as a Function of the Zenith Angle around the Knee Region

    OpenAIRE

    Abdulsttar, Marwah M.; Al-Rubaiee, A. A.; Ali, Abdul Halim Kh.

    2016-01-01

    Cherenkov light lateral distribution function (CLLDF) simulation was performed using the CORSIKA code for configurations of the Tunka EAS array at different zenith angles. The parameterization of the CLLDF was carried out as a function of the distance from the shower core in extensive air showers (EAS) and of the zenith angle, on the basis of CORSIKA simulations of primary protons around the knee region with energy 3×10^15 eV at different zenith angles. The parameterized CLLDF is verified in comparison...

  14. Ecological connectivity networks in rapidly expanding cities.

    Science.gov (United States)

    Nor, Amal Najihah M; Corstanje, Ron; Harris, Jim A; Grafius, Darren R; Siriwardena, Gavin M

    2017-06-01

    Urban expansion increases fragmentation of the landscape. In effect, fragmentation decreases connectivity, causes green space loss and impacts upon the ecology and function of green space. Restoration of the functionality of green space often requires restoring the ecological connectivity of this green space within the city matrix. However, identifying ecological corridors that integrate different structural and functional connectivity of green space remains vague. Assessing connectivity for developing an ecological network by using efficient models is essential to improve these networks under rapid urban expansion. This paper presents a novel methodological approach to assess and model connectivity for the Eurasian tree sparrow (Passer montanus) and Yellow-vented bulbul (Pycnonotus goiavier) in three cities (Kuala Lumpur, Malaysia; Jakarta, Indonesia and Metro Manila, Philippines). The approach identifies potential priority corridors for ecological connectivity networks. The study combined circuit models, connectivity analysis and least-cost models to identify potential corridors by integrating structure and function of green space patches to provide reliable ecological connectivity network models in the cities. Relevant parameters such as landscape resistance and green space structure (vegetation density, patch size and patch distance) were derived from an expert and literature-based approach based on the preference of bird behaviour. The integrated models allowed the assessment of connectivity for both species using different measures of green space structure revealing the potential corridors and least-cost pathways for both bird species at the patch sites. The implementation of improvements to the identified corridors could increase the connectivity of green space. This study provides examples of how combining models can contribute to the improvement of ecological networks in rapidly expanding cities and demonstrates the usefulness of such models for
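    The least-cost component of the combined modelling can be sketched as Dijkstra's algorithm over a landscape-resistance grid. The grid below is a toy example, not derived from the study's expert-based resistance values:

```python
import heapq

def least_cost_path(resistance, start, goal):
    """Accumulated least cost across a landscape-resistance grid (Dijkstra),
    the kernel of least-cost corridor modelling. resistance is a 2D list of
    per-cell traversal costs; movement is 4-connected; the start cell's own
    resistance is counted."""
    rows, cols = len(resistance), len(resistance[0])
    dist = {start: resistance[start[0]][start[1]]}
    queue = [(dist[start], start)]
    while queue:
        d, (r, c) = heapq.heappop(queue)
        if (r, c) == goal:
            return d
        if d > dist.get((r, c), float("inf")):
            continue  # stale queue entry
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols:
                nd = d + resistance[nr][nc]
                if nd < dist.get((nr, nc), float("inf")):
                    dist[(nr, nc)] = nd
                    heapq.heappush(queue, (nd, (nr, nc)))
    return float("inf")

# Toy landscape: low resistance along the top row (a "green corridor"),
# high resistance elsewhere (built-up matrix)
grid = [
    [1, 1, 1, 1],
    [9, 9, 9, 1],
    [9, 9, 9, 1],
]
cost = least_cost_path(grid, (0, 0), (2, 3))
print(cost)  # -> 6, following the corridor rather than crossing the matrix
```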

  15. Ecological connectivity networks in rapidly expanding cities

    Directory of Open Access Journals (Sweden)

    Amal Najihah M. Nor

    2017-06-01

    Full Text Available Urban expansion increases fragmentation of the landscape. In effect, fragmentation decreases connectivity, causes green space loss and impacts upon the ecology and function of green space. Restoration of the functionality of green space often requires restoring the ecological connectivity of this green space within the city matrix. However, identifying ecological corridors that integrate different structural and functional connectivity of green space remains vague. Assessing connectivity for developing an ecological network by using efficient models is essential to improve these networks under rapid urban expansion. This paper presents a novel methodological approach to assess and model connectivity for the Eurasian tree sparrow (Passer montanus) and Yellow-vented bulbul (Pycnonotus goiavier) in three cities (Kuala Lumpur, Malaysia; Jakarta, Indonesia and Metro Manila, Philippines). The approach identifies potential priority corridors for ecological connectivity networks. The study combined circuit models, connectivity analysis and least-cost models to identify potential corridors by integrating structure and function of green space patches to provide reliable ecological connectivity network models in the cities. Relevant parameters such as landscape resistance and green space structure (vegetation density, patch size and patch distance) were derived from an expert and literature-based approach based on the preference of bird behaviour. The integrated models allowed the assessment of connectivity for both species using different measures of green space structure revealing the potential corridors and least-cost pathways for both bird species at the patch sites. The implementation of improvements to the identified corridors could increase the connectivity of green space. This study provides examples of how combining models can contribute to the improvement of ecological networks in rapidly expanding cities and demonstrates the usefulness of such

  16. Influences of in-cloud aerosol scavenging parameterizations on aerosol concentrations and wet deposition in ECHAM5-HAM

    Directory of Open Access Journals (Sweden)

    B. Croft

    2010-02-01

    Full Text Available A diagnostic cloud nucleation scavenging scheme, which determines stratiform cloud scavenging ratios for both aerosol mass and number distributions based on cloud droplet and ice crystal number concentrations, is introduced into the ECHAM5-HAM global climate model. This scheme is coupled with a size-dependent in-cloud impaction scavenging parameterization for both cloud droplet-aerosol and ice crystal-aerosol collisions. The aerosol mass scavenged in stratiform clouds is found to be primarily (>90%) scavenged by cloud nucleation processes for all aerosol species, except for dust (50%). The aerosol number scavenged is primarily (>90%) attributed to impaction. 99% of this impaction scavenging occurs in clouds with temperatures less than 273 K. Sensitivity studies are presented, which compare aerosol concentrations, burdens, and deposition for a variety of in-cloud scavenging approaches: prescribed fractions, a more computationally expensive prognostic aerosol cloud processing treatment, and the new diagnostic scheme, also with modified assumptions about in-cloud impaction and nucleation scavenging. Our results show that while uncertainties in the representation of in-cloud scavenging processes can lead to differences in the range of 20–30% for the predicted annual, global mean aerosol mass burdens, and near to 50% for the accumulation mode aerosol number burden, the differences in predicted aerosol mass concentrations can be up to one order of magnitude, particularly for regions of the middle troposphere with temperatures below 273 K where mixed and ice phase clouds exist. Different parameterizations for impaction scavenging changed the predicted global, annual mean number removal attributed to ice clouds by seven-fold, and the global, annual dust mass removal attributed to impaction by two orders of magnitude. Closer agreement with observations of black carbon profiles from aircraft (increases near to one order of magnitude for mixed phase clouds

  17. A mass-flux cumulus parameterization scheme for large-scale models: description and test with observations

    Energy Technology Data Exchange (ETDEWEB)

    Wu, Tongwen [China Meteorological Administration (CMA), National Climate Center (Beijing Climate Center), Beijing (China)

    2012-02-15

    A simple mass-flux cumulus parameterization scheme suitable for large-scale atmospheric models is presented. The scheme is based on a bulk-cloud approach and has the following properties: (1) Deep convection is launched at the level of maximum moist static energy above the top of the boundary layer. It is triggered if there is positive convective available potential energy (CAPE) and the relative humidity of the air at the lifting level of the convection cloud is greater than 75%; (2) Convective updrafts for mass, dry static energy, moisture, cloud liquid water and momentum are parameterized by a one-dimensional entrainment/detrainment bulk-cloud model. The lateral entrainment of the environmental air into the unstable ascending parcel before it rises to the lifting condensation level is considered. The entrainment/detrainment amount for the updraft cloud parcel is separately determined according to the increase/decrease of updraft parcel mass with altitude, and the mass change for the adiabatic ascent cloud parcel with altitude is derived from a total energy conservation equation of the whole adiabatic system, which involves the updraft cloud parcel and the environment; (3) The convective downdraft is assumed to be saturated and to originate from the level of minimum environmental saturated equivalent potential temperature within the updraft cloud; (4) The mass flux at the base of the convective cloud is determined by a closure scheme suggested by Zhang (J Geophys Res 107(D14)), in which the increase/decrease of CAPE due to changes of the thermodynamic states in the free troposphere resulting from convection approximately balances the decrease/increase resulting from large-scale processes. Evaluation of the proposed convection scheme is performed by using a single column model (SCM) forced by the Atmospheric Radiation Measurement Program's (ARM) summer 1995 and 1997 Intensive Observing Period (IOP) observations, and field observations from the Global Atmospheric Research
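    Properties (1) and (4) rest on CAPE. A minimal parcel-buoyancy sketch of the CAPE integral and the trigger test; the sounding values are idealized, and the trapezoidal positive-area integration is a simplification of what a full scheme would do:

```python
GRAVITY = 9.81  # m s^-2

def cape(z, tv_parcel, tv_env):
    """CAPE = g * integral of (Tv_parcel - Tv_env) / Tv_env dz over buoyant
    layers (trapezoidal rule, positive layer contributions only).
    z in m, virtual temperatures in K."""
    total = 0.0
    for k in range(len(z) - 1):
        b_lo = (tv_parcel[k] - tv_env[k]) / tv_env[k]
        b_hi = (tv_parcel[k + 1] - tv_env[k + 1]) / tv_env[k + 1]
        layer = GRAVITY * 0.5 * (b_lo + b_hi) * (z[k + 1] - z[k])
        if layer > 0.0:
            total += layer
    return total

def deep_convection_triggered(cape_value, rh_lifting_level):
    """Trigger test of the scheme: positive CAPE and lifting-level RH > 75%."""
    return cape_value > 0.0 and rh_lifting_level > 0.75

# Idealized sounding: parcel 1 K warmer than the environment over 5 km
z = [0.0, 2500.0, 5000.0]
tv_env = [290.0, 275.0, 260.0]
tv_parcel = [291.0, 276.0, 261.0]
c = cape(z, tv_parcel, tv_env)
print(round(c, 1), deep_convection_triggered(c, rh_lifting_level=0.80))
```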

  18. A photon source model based on particle transport in a parameterized accelerator structure for Monte Carlo dose calculations.

    Science.gov (United States)

    Ishizawa, Yoshiki; Dobashi, Suguru; Kadoya, Noriyuki; Ito, Kengo; Chiba, Takahito; Takayama, Yoshiki; Sato, Kiyokazu; Takeda, Ken

    2018-05-17

    An accurate source model of a medical linear accelerator is essential for Monte Carlo (MC) dose calculations. This study aims to propose an analytical photon source model based on particle transport in parameterized accelerator structures, focusing on a more realistic determination of linac photon spectra compared to existing approaches. We designed the primary and secondary photon sources based on the photons attenuated and scattered by a parameterized flattening filter. The primary photons were derived by attenuating bremsstrahlung photons based on the path length in the filter. Conversely, the secondary photons were derived from the decrement of the primary photons in the attenuation process. This design allows these sources to share the free parameters of the filter shape and to be related to each other through the photon interaction in the filter. We introduced two other parameters of the primary photon source to describe the particle fluence in penumbral regions. All the parameters are optimized based on calculated dose curves in water using the pencil-beam-based algorithm. To verify the modeling accuracy, we compared the proposed model with the phase space data (PSD) of the Varian TrueBeam 6 and 15 MV accelerators in terms of the beam characteristics and the dose distributions. The EGS5 Monte Carlo code was used to calculate the dose distributions associated with the optimized model and reference PSD in a homogeneous water phantom and a heterogeneous lung phantom. We calculated the percentage of points passing 1D and 2D gamma analysis with 1%/1 mm criteria for the dose curves and lateral dose distributions, respectively. The optimized model accurately reproduced the spectral curves of the reference PSD both on- and off-axis. The depth dose and lateral dose profiles of the optimized model also showed good agreement with those of the reference PSD. The passing rates of the 1D gamma analysis with 1%/1 mm criteria between the model and PSD were 100% for 4
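    The 1D gamma analysis used for verification can be sketched as follows. Global dose normalization is assumed; the identical input curves below are a degenerate illustration of a 100% passing rate, not the actual TrueBeam data:

```python
import math

def gamma_1d(ref_pos, ref_dose, eval_pos, eval_dose, dd=0.01, dta=1.0):
    """1D gamma index per reference point (global dose criterion):
      gamma(r) = min over evaluated points of
        sqrt((dx/dta)^2 + (delta_dose / (dd * max_ref_dose))^2)
    dd: dose criterion as a fraction (1% -> 0.01); dta: distance-to-agreement
    in the same units as the positions (mm here)."""
    d_max = max(ref_dose)
    gammas = []
    for xr, dr in zip(ref_pos, ref_dose):
        g = min(
            math.hypot((xe - xr) / dta, (de - dr) / (dd * d_max))
            for xe, de in zip(eval_pos, eval_dose)
        )
        gammas.append(g)
    return gammas

def passing_rate(gammas):
    """Percentage of points with gamma <= 1."""
    return 100.0 * sum(g <= 1.0 for g in gammas) / len(gammas)

pos = [0.0, 1.0, 2.0, 3.0]        # mm, hypothetical depth-dose samples
dose = [100.0, 98.0, 95.0, 90.0]
rate = passing_rate(gamma_1d(pos, dose, pos, dose))
print(rate)  # -> 100.0 for identical curves
```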

  19. New parameterization of external and induced fields in geomagnetic field modeling, and a candidate model for IGRF 2005

    DEFF Research Database (Denmark)

    Olsen, Nils; Sabaka, T.J.; Lowes, F.

    2005-01-01

    When deriving spherical harmonic models of the Earth's magnetic field, low-degree external field contributions are traditionally considered by assuming that their expansion coefficient q_1^0 varies linearly with the Dst index, while induced contributions are considered assuming a constant ratio... Q_1 of induced to external coefficients. A value of Q_1 = 0.27 was found from Magsat data and has been used by several authors when deriving recent field models from Orsted and CHAMP data. We describe a new approach that considers external and induced fields based on a separation of Dst = Est + Ist into external (Est) and induced (Ist) parts using a 1D model of mantle conductivity. The temporal behavior of q_1^0 and of the corresponding induced coefficient are parameterized by Est and Ist, respectively. In addition, we account for baseline instabilities of Dst by estimating a value of q_1...
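    The contrast with the traditional treatment can be sketched numerically: the old model ties the induced coefficient to the external one through the constant ratio Q1 = 0.27, whereas the new approach parameterizes the two coefficients separately by Est and Ist. The regression slopes below are placeholders, not values from the paper:

```python
Q1 = 0.27  # induced-to-external ratio from Magsat (traditional approach)

def induced_coefficient_traditional(q10_external, q1=Q1):
    """Traditional treatment: the induced coefficient is a constant ratio
    of the external coefficient q_1^0."""
    return q1 * q10_external

def coefficients_new(e_st, i_st, slope_ext=1.0, slope_ind=1.0):
    """Sketch of the new approach: external and induced coefficients are
    parameterized separately by Est and Ist, obtained by splitting
    Dst = Est + Ist with a 1D mantle-conductivity model. The slopes here
    are placeholder values, for illustration only."""
    return slope_ext * e_st, slope_ind * i_st

q_ind = induced_coefficient_traditional(-50.0)  # for q_1^0(external) = -50 nT
print(q_ind)  # -> -13.5 nT under the constant-ratio model
```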

  20. Parameterizing road construction in route-based road weather models: can ground-penetrating radar provide any answers?

    International Nuclear Information System (INIS)

    Hammond, D S; Chapman, L; Thornes, J E

    2011-01-01

    A ground-penetrating radar (GPR) survey of a 32 km mixed urban and rural study route is undertaken to assess the usefulness of GPR as a tool for parameterizing road construction in a route-based road weather forecast model. It is shown that GPR can easily identify even the smallest of bridges along the route, which previous thermal mapping surveys have identified as thermal singularities with implications for winter road maintenance. Using individual GPR traces measured at each forecast point along the route, an inflexion point detection algorithm attempts to identify the depth of the uppermost subsurface layers at each forecast point for use in a road weather model instead of existing ordinal road-type classifications. This approach has the potential to allow high resolution modelling of road construction and bridge decks on a scale previously not possible within a road weather model, but initial results reveal that significant future research will be required to unlock the full potential that this technology can bring to the road weather industry. (technical design note)
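    The inflexion-point detection can be sketched as a search for sign changes in the discrete second difference of a trace. This is a simplified stand-in for the algorithm in the note, and the trace below is hypothetical:

```python
def inflexion_depths(depth, amplitude):
    """Candidate layer interfaces in a GPR trace, located as inflexion
    points: samples where the discrete second difference of the amplitude
    changes sign. depth and amplitude are parallel lists."""
    d2 = [amplitude[i - 1] - 2 * amplitude[i] + amplitude[i + 1]
          for i in range(1, len(amplitude) - 1)]
    picks = []
    for i in range(len(d2) - 1):
        if d2[i] == 0 or (d2[i] > 0) != (d2[i + 1] > 0):
            picks.append(depth[i + 1])  # d2[i] corresponds to sample i+1
    return picks

depth = [0.0, 0.1, 0.2, 0.3, 0.4, 0.5]   # m, hypothetical trace
amp = [0.0, 1.0, 4.0, 9.0, 12.0, 13.0]   # curvature flips between 0.2 and 0.3 m
picks = inflexion_depths(depth, amp)
print(picks)  # -> [0.2]
```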