WorldWideScience

Sample records for connectivity parameterization approach

  1. Theoretical aspects of the internal element connectivity parameterization approach for topology optimization

    DEFF Research Database (Denmark)

    Yoon, Gil Ho; Kim, Y.Y.; Langelaar, M.

    2008-01-01

    The internal element connectivity parameterization (I-ECP) method is an alternative approach for overcoming numerical instabilities associated with low-stiffness element states in non-linear problems. In I-ECP, elements are connected by zero-length links while their link stiffness values are varied....... Therefore, it is important to interpolate link stiffness properly to obtain stably converging results. The main objective of this work is twofold: (1) the investigation of the relationship between the link stiffness and the stiffness of a domain-discretizing patch by using a discrete model and a homogenized...

  2. CLOUD PARAMETERIZATIONS, CLOUD PHYSICS, AND THEIR CONNECTIONS: AN OVERVIEW

    International Nuclear Information System (INIS)

    LIU, Y.; DAUM, P.H.; CHAI, S.K.; LIU, F.

    2002-01-01

    This paper consists of three parts. The first part is concerned with the parameterization of cloud microphysics in climate models. We demonstrate the crucial importance of spectral dispersion of the cloud droplet size distribution in determining radiative properties of clouds (e.g., effective radius), and underline the necessity of specifying spectral dispersion in the parameterization of cloud microphysics. It is argued that the inclusion of spectral dispersion makes the issue of cloud parameterization essentially equivalent to that of the droplet size distribution function, bringing cloud parameterization to the forefront of cloud physics. The second part is concerned with theoretical investigations into the spectral shape of droplet size distributions in cloud physics. After briefly reviewing the mainstream theories (including entrainment and mixing theories, and stochastic theories), we discuss their deficiencies and the need for a paradigm shift from reductionist approaches to systems approaches. A systems theory that has recently been formulated by utilizing ideas from statistical physics and information theory is discussed, along with the major results derived from it. It is shown that the systems formalism not only easily explains many puzzles that have been frustrating the mainstream theories, but also reveals such new phenomena as scale-dependence of cloud droplet size distributions. The third part is concerned with the potential applications of the systems theory to the specification of spectral dispersion in terms of predictable variables and scale-dependence under different fluctuating environments
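    The role of spectral dispersion described above can be made concrete with a small numerical sketch (an illustration, not from the paper itself): the effective radius is the ratio of the third to the second moment of the droplet size distribution, so for a gamma distribution with a fixed mean radius, larger relative dispersion yields a larger effective radius. The gamma form and the 10 um mean radius are illustrative assumptions.

```python
import numpy as np

def effective_radius(r, n):
    """Effective radius: ratio of the third to the second moment of the DSD.
    The uniform grid spacing cancels in the ratio, so plain sums suffice."""
    return np.sum(n * r**3) / np.sum(n * r**2)

def gamma_dsd(r, rm, eps):
    """Unnormalized gamma droplet size distribution with mean radius rm and
    relative dispersion eps (shape parameter k = 1/eps**2); the normalization
    constant cancels in the moment ratio."""
    k = 1.0 / eps**2
    return r**(k - 1.0) * np.exp(-k * r / rm)

r = np.linspace(1e-3, 80.0, 8000)   # droplet radius grid, micrometers
for eps in (0.2, 0.5, 0.8):
    re = effective_radius(r, gamma_dsd(r, rm=10.0, eps=eps))
    print(f"relative dispersion {eps:.1f} -> effective radius {re:.2f} um")
```

    For a gamma distribution this moment ratio is exactly rm*(1 + 2*eps**2), so the printed values approach 10.8, 15.0, and 22.8 um: same liquid water amount per droplet count, substantially different radiative properties.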

  3. Parameterization of Fuel-Optimal Synchronous Approach Trajectories to Tumbling Targets

    Directory of Open Access Journals (Sweden)

    David Charles Sternberg

    2018-04-01

    Docking with potentially tumbling Targets is a common element of many mission architectures, including on-orbit servicing and active debris removal. This paper studies synchronized docking trajectories as a way to ensure the Chaser satellite remains on the docking axis of the tumbling Target, thereby reducing collision risks and enabling persistent onboard sensing of the docking location. Chaser satellites have limited computational power available to them and the time allowed for the determination of a fuel optimal trajectory may be limited. Consequently, parameterized trajectories that approximate the fuel optimal trajectory while following synchronous approaches may be used to provide a computationally efficient means of determining near optimal trajectories to a tumbling Target. This paper presents a method of balancing the computation cost with the added fuel expenditure required for parameterization, including the selection of a parameterization scheme, the number of parameters in the parameterization, and a means of incorporating the dynamics of a tumbling satellite into the parameterization process. Comparisons of the parameterized trajectories are made with the fuel optimal trajectory, which is computed through the numerical propagation of Euler’s equations. Additionally, various tumble types are considered to demonstrate the efficacy of the presented computation scheme. With this parameterized trajectory determination method, Chaser satellites may perform terminal approach and docking maneuvers with both fuel and computational efficiency.

  4. Application of the dual Youla parameterization

    DEFF Research Database (Denmark)

    Niemann, Hans Henrik

    1999-01-01

    Different applications of the parameterization of all systems stabilized by a given controller, i.e. the dual Youla parameterization, are considered in this paper. It will be shown how the parameterization can be applied in connection with controller design, adaptive controllers, model validation...
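    As background for this and the related records below: in its simplest form, for a plant G that is already stable, the Youla parameterization writes every stabilizing feedback controller in terms of a single free stable transfer function Q (the papers above use the general coprime-factor form; this is the stable-plant special case):

```latex
K(Q) = Q\,(I - G Q)^{-1}, \qquad Q \ \text{stable}.
```

    The closed-loop transfer function from reference to output then becomes simply T = GQ, i.e. affine in the free parameter, which is what makes design, tuning, and adaptation in this parameterization tractable.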

  5. Tuning controllers using the dual Youla parameterization

    DEFF Research Database (Denmark)

    Niemann, Hans Henrik; Stoustrup, Jakob

    2000-01-01

    This paper describes the application of the Youla parameterization of all stabilizing controllers and the dual Youla parameterization of all systems stabilized by a given controller in connection with tuning of controllers. In the uncertain case, it is shown that the use of the Youla parameterization...

  6. Capturing the Interplay of Dynamics and Networks through Parameterizations of Laplacian Operators

    Science.gov (United States)

    2016-08-24

    We describe an umbrella framework that unifies some of the well-known measures, connecting the ideas of centrality, communities, and dynamical processes... change of basis. Parameterized centrality also leads to the definition of parameterized volume for subsets of vertices. Parameterized conductance... behind this definition is to establish a direct connection between centrality and community measures, as we will later demonstrate with the notion of...

  7. A novel approach for introducing cloud spatial structure into cloud radiative transfer parameterizations

    Science.gov (United States)

    Huang, Dong; Liu, Yangang

    2014-12-01

    Subgrid-scale variability is one of the main reasons why parameterizations are needed in large-scale models. Although some parameterizations started to address the issue of subgrid variability by introducing a subgrid probability distribution function for relevant quantities, the spatial structure has been typically ignored and thus the subgrid-scale interactions cannot be accounted for physically. Here we present a new statistical-physics-like approach whereby the spatial autocorrelation function can be used to physically capture the net effects of subgrid cloud interaction with radiation. The new approach is able to faithfully reproduce the Monte Carlo 3D simulation results with several orders less computational cost, allowing for more realistic representation of cloud radiation interactions in large-scale models.
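    The central object of this approach, the spatial autocorrelation function, is straightforward to estimate from a cloud field. The sketch below (an illustration, not the authors' code) computes it for a synthetic smooth 1-D field; the decay of the autocorrelation with lag is exactly the spatial-structure information that a PDF-only subgrid scheme discards.

```python
import numpy as np

def autocorrelation(field):
    """Normalized spatial autocorrelation of a 1-D anomaly field."""
    a = field - field.mean()
    c = np.correlate(a, a, mode="full")[a.size - 1:]   # keep non-negative lags
    return c / c[0]

# synthetic cloud-water field: white noise smoothed over 32 grid cells
rng = np.random.default_rng(0)
field = np.convolve(rng.standard_normal(4096), np.ones(32) / 32, mode="same")
rho = autocorrelation(field)
print("lag 0:", rho[0])
print("lag 16:", round(rho[16], 2), " lag 64:", round(rho[64], 2))
```

    For a boxcar-smoothed field the correlation falls off roughly linearly within the smoothing window (about 0.5 at half the window) and to near zero beyond it, so the correlation length read off this curve recovers the structure scale that was imposed.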

  8. A novel approach for introducing cloud spatial structure into cloud radiative transfer parameterizations

    International Nuclear Information System (INIS)

    Huang, Dong; Liu, Yangang

    2014-01-01

    Subgrid-scale variability is one of the main reasons why parameterizations are needed in large-scale models. Although some parameterizations started to address the issue of subgrid variability by introducing a subgrid probability distribution function for relevant quantities, the spatial structure has been typically ignored and thus the subgrid-scale interactions cannot be accounted for physically. Here we present a new statistical-physics-like approach whereby the spatial autocorrelation function can be used to physically capture the net effects of subgrid cloud interaction with radiation. The new approach is able to faithfully reproduce the Monte Carlo 3D simulation results with several orders less computational cost, allowing for more realistic representation of cloud radiation interactions in large-scale models. (letter)

  9. Approaches to highly parameterized inversion-A guide to using PEST for groundwater-model calibration

    Science.gov (United States)

    Doherty, John E.; Hunt, Randall J.

    2010-01-01

    Highly parameterized groundwater models can create calibration difficulties. Regularized inversion (the combined use of large numbers of parameters with mathematical approaches for stable parameter estimation) is becoming a common approach to address these difficulties and to enhance the transfer of information contained in field measurements to the parameters used to model the system. Though commonly used in other industries, regularized inversion remains imperfectly understood in the groundwater field, and there is concern that this unfamiliarity can lead to underuse, and misuse, of the methodology. This document is constructed to facilitate the appropriate use of regularized inversion for calibrating highly parameterized groundwater models. The presentation is directed at an intermediate- to advanced-level modeler, and it focuses on the PEST software suite, a frequently used tool for highly parameterized model calibration that is widely supported by commercial graphical user interfaces. A brief overview of the regularized inversion approach is provided, and the techniques for mathematical regularization offered by PEST are outlined, including Tikhonov, subspace, and hybrid schemes. Guidelines for applying regularized inversion techniques are presented as a logical progression of steps for building suitable PEST input. The discussion starts with the use of pilot points as a parameterization device and the processing/grouping of observations to form multicomponent objective functions. A description of potential parameter solution methodologies and of resources available through the PEST software and its supporting utility programs follows. Directing the parameter-estimation process through PEST control variables is then discussed, including guidance for monitoring and optimizing the performance of PEST. Comprehensive listings of PEST control variables, and of the roles performed by PEST utility support programs, are presented in the appendixes.
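    Among the regularization schemes mentioned, Tikhonov regularization has a compact linear-algebra core: augment the data misfit with a penalty on parameter roughness. The toy problem below is illustrative only (PEST itself wraps arbitrary model executables); the forward operator and parameter field are made up.

```python
import numpy as np

def tikhonov_solve(G, d, L, lam):
    """Minimize ||G m - d||^2 + lam^2 ||L m||^2 via the regularized normal equations."""
    A = G.T @ G + lam**2 * (L.T @ L)
    return np.linalg.solve(A, G.T @ d)

rng = np.random.default_rng(1)
n = 20
m_true = np.sin(np.linspace(0.0, np.pi, n))        # a smooth parameter field
G = rng.standard_normal((12, n))                   # 12 observations, 20 parameters
d = G @ m_true + 0.01 * rng.standard_normal(12)    # noisy data
L = np.diff(np.eye(n), axis=0)                     # first-difference (smoothness) operator

m_hat = tikhonov_solve(G, d, L, lam=0.5)
m_minnorm = np.linalg.lstsq(G, d, rcond=None)[0]   # unregularized minimum-norm fit
print("roughness, regularized :", np.linalg.norm(L @ m_hat))
print("roughness, min-norm fit:", np.linalg.norm(L @ m_minnorm))
```

    Because the under-determined least-squares fit matches the data exactly, the Tikhonov solution is guaranteed to be no rougher than it: stability is bought by trading a small amount of data misfit for smoothness.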

  10. Aerodynamic Shape Optimization Design of Wing-Body Configuration Using a Hybrid FFD-RBF Parameterization Approach

    Science.gov (United States)

    Liu, Yuefeng; Duan, Zhuoyi; Chen, Song

    2017-10-01

    Aerodynamic shape optimization aiming at improving the efficiency of an aircraft has always been a challenging task, especially when the configuration is complex. In this paper, a hybrid FFD-RBF surface parameterization approach is proposed for designing a civil transport wing-body configuration. The approach is simple and efficient: the FFD technique is used to parameterize the wing shape, while the RBF interpolation approach is used to handle updating of the wing-body junction. Furthermore, combined with the Cuckoo Search algorithm and a Kriging surrogate model with an expected-improvement adaptive sampling criterion, an aerodynamic shape optimization design system is established. Finally, aerodynamic shape optimization of the DLR F4 wing-body configuration is carried out as a study case, and the results show that the proposed approach is effective.

  11. Resolving kinematic redundancy with constraints using the FSP (Full Space Parameterization) approach

    International Nuclear Information System (INIS)

    Pin, F.G.; Tulloch, F.A.

    1996-01-01

    A solution method is presented for the motion planning and control of kinematically redundant serial-link manipulators in the presence of motion constraints such as joint limits or obstacles. Given a trajectory for the end-effector, the approach utilizes the recently proposed Full Space Parameterization (FSP) method to generate a parameterized expression for the entire space of solutions of the unconstrained system. At each time step, a constrained optimization technique is then used to analytically find the specific joint motion solution that satisfies the desired task objective and all the constraints active during the time step. The method is applicable to systems operating in a priori known environments or in unknown environments with sensor-based obstacle detection. The derivation of the analytical solution is first presented for a general type of kinematic constraint and is then applied to the problem of motion planning for redundant manipulators with joint limits and obstacle avoidance. Sample results using planar and 3-D manipulators with various degrees of redundancy are presented to illustrate the efficiency and wide applicability of constrained motion planning using the FSP approach
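    For context, the textbook baseline that FSP generalizes can be sketched in a few lines: the pseudoinverse solution plus a homogeneous null-space term parameterizes the joint velocities that realize a given task velocity. This is the conventional resolution, not the FSP parameterization itself, and the 3-link Jacobian below is a made-up example.

```python
import numpy as np

def redundancy_resolution(J, xdot, z):
    """Joint rates achieving task velocity xdot, with self-motion injected via z.
    Standard pseudoinverse form: qdot = J+ xdot + (I - J+ J) z."""
    J_pinv = np.linalg.pinv(J)
    N = np.eye(J.shape[1]) - J_pinv @ J   # projector onto the nullspace of J
    return J_pinv @ xdot + N @ z

# planar 3-link arm Jacobian at some configuration (2 task dofs, 3 joints)
J = np.array([[-1.2, -0.7, -0.2],
              [ 1.5,  0.9,  0.4]])
xdot = np.array([0.1, -0.2])
for z in (np.zeros(3), np.array([0.5, -0.3, 0.2])):
    qdot = redundancy_resolution(J, xdot, z)
    print(qdot, "task error:", np.linalg.norm(J @ qdot - xdot))
```

    Every choice of z yields the same end-effector motion; constrained motion planning amounts to searching this solution family for the member that also satisfies joint limits or obstacle constraints, which is the space FSP parameterizes in full.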

  12. Approaches in highly parameterized inversion: bgaPEST, a Bayesian geostatistical approach implementation with PEST: documentation and instructions

    Science.gov (United States)

    Fienen, Michael N.; D'Oria, Marco; Doherty, John E.; Hunt, Randall J.

    2013-01-01

    The application bgaPEST is a highly parameterized inversion software package implementing the Bayesian Geostatistical Approach in a framework compatible with the parameter estimation suite PEST. Highly parameterized inversion refers to cases in which parameters are distributed in space or time and are correlated with one another. The Bayesian aspect of bgaPEST is related to Bayesian probability theory in which prior information about parameters is formally revised on the basis of the calibration dataset used for the inversion. Conceptually, this approach formalizes the conditionality of estimated parameters on the specific data and model available. The geostatistical component of the method refers to the way in which prior information about the parameters is used. A geostatistical autocorrelation function is used to enforce structure on the parameters to avoid overfitting and unrealistic results. The Bayesian Geostatistical Approach is designed to provide the smoothest solution that is consistent with the data. Optionally, users can specify a level of fit or estimate a balance between fit and model complexity informed by the data. Groundwater and surface-water applications are used as examples in this text, but the possible uses of bgaPEST extend to any distributed parameter applications.
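    The linear-Gaussian core of a geostatistical update can be sketched compactly (a toy illustration, not bgaPEST's implementation): a prior covariance built from an exponential autocorrelation function ties the parameters together in space, so conditioning on a few point observations updates the entire field smoothly.

```python
import numpy as np

def bga_update(H, d, mu, Q, R):
    """Posterior mean of a linear-Gaussian geostatistical inversion:
    prior m ~ N(mu, Q), data d = H m + noise, noise ~ N(0, R)."""
    K = Q @ H.T @ np.linalg.inv(H @ Q @ H.T + R)   # gain matrix
    return mu + K @ (d - H @ mu)

n = 30
xs = np.linspace(0.0, 1.0, n)
# exponential autocorrelation (correlation length 0.2) enforces spatial structure
Q = np.exp(-np.abs(xs[:, None] - xs[None, :]) / 0.2)
H = np.zeros((3, n)); H[0, 4] = H[1, 15] = H[2, 25] = 1.0   # observe three points
d = np.array([1.0, -0.5, 0.8])
m_post = bga_update(H, d, mu=np.zeros(n), Q=Q, R=0.01 * np.eye(3))
print(m_post[4], m_post[15], m_post[25])
```

    The posterior honors the three observations nearly exactly (small noise variance) while interpolating between them with the smoothness the autocorrelation prescribes, rather than fitting each parameter independently.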

  13. Fast engineering optimization: A novel highly effective control parameterization approach for industrial dynamic processes.

    Science.gov (United States)

    Liu, Ping; Li, Guodong; Liu, Xinggao

    2015-09-01

    Control vector parameterization (CVP) is an important approach to engineering optimization for industrial dynamic processes. However, its major defect, the low optimization efficiency caused by repeatedly solving the relevant differential equations in the generated nonlinear programming (NLP) problem, limits its wide application. A novel, highly effective control parameterization approach, fast-CVP, is proposed to improve optimization efficiency for industrial dynamic processes: costate gradient formulae are employed, and a fast approximate scheme is presented for solving the differential equations in dynamic process simulation. Three well-known engineering optimization benchmark problems for industrial dynamic processes are demonstrated as illustrations. The results show that the proposed fast approach saves at least 90% of the computation time in contrast with the traditional CVP method, which reveals the effectiveness of the proposed fast engineering optimization approach for industrial dynamic processes. Copyright © 2015 ISA. Published by Elsevier Ltd. All rights reserved.
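    The idea behind CVP can be shown on a deliberately tiny example (an illustration of plain CVP, not the fast-CVP scheme): approximate the control by piecewise-constant segments, which turns the dynamic optimization into a finite-dimensional NLP for a standard solver. Here the dynamics x' = u are simple enough to integrate exactly.

```python
import numpy as np
from scipy.optimize import minimize

# Toy CVP: steer x' = u from x(0) = 0 to x(T) = 1 while minimizing the control
# energy  integral of u^2 dt,  with u piecewise constant on two equal segments.
T = 2.0

def simulate(u):
    """Exact terminal state of x' = u for piecewise-constant u on [0, T]."""
    return np.sum(u) * (T / len(u))

def energy(u):
    """Integral of u^2 over [0, T] for piecewise-constant u."""
    return np.sum(np.asarray(u) ** 2) * (T / len(u))

res = minimize(energy, x0=[0.0, 1.0],
               constraints={"type": "eq", "fun": lambda u: simulate(u) - 1.0},
               method="SLSQP")
print(res.x)   # the optimum spreads the effort evenly: u = [0.5, 0.5]
```

    In a realistic process model, `simulate` would be a numerical ODE integration, and evaluating it repeatedly inside the NLP is precisely the cost that the fast approximate scheme in this record targets.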

  14. Gain scheduling using the Youla parameterization

    DEFF Research Database (Denmark)

    Niemann, Hans Henrik; Stoustrup, Jakob

    1999-01-01

    Gain scheduling controllers are considered in this paper, specifically for the case where the scheduling parameter vector cannot be measured directly but must be estimated. An estimate of the scheduling vector has been derived by using the Youla parameterization. The use...... in connection with H∞ gain scheduling controllers....

  15. Infrared radiation parameterizations in numerical climate models

    Science.gov (United States)

    Chou, Ming-Dah; Kratz, David P.; Ridgway, William

    1991-01-01

    This study presents various approaches to parameterizing the broadband transmission functions for utilization in numerical climate models. One-parameter scaling is applied to approximate a nonhomogeneous path with an equivalent homogeneous path, and the diffuse transmittances are either interpolated from precomputed tables or fit by analytical functions. Two-parameter scaling is applied to parameterizing the carbon dioxide and ozone transmission functions in both the lower and middle atmosphere. Parameterizations are given for the nitrous oxide and methane diffuse transmission functions.
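    One way to see what such parameterizations buy: a diffuse transmittance requires an angular integration of the beam transmittance over the hemisphere, and the classic diffusivity-factor shortcut replaces that integral with a single exponential. The sketch below is the textbook approximation, not necessarily the specific fits of this study.

```python
import numpy as np

def diffuse_transmittance(ku, n=20000):
    """Diffuse transmittance T_d = 2 * int_0^1 exp(-ku/mu) * mu dmu,
    evaluated with the trapezoid rule over the zenith-angle cosine mu."""
    mu = np.linspace(1e-4, 1.0, n)
    w = np.exp(-ku / mu) * mu
    return 2.0 * np.sum((w[:-1] + w[1:]) / 2.0 * np.diff(mu))

for ku in (0.1, 0.5, 1.0):
    print(f"ku={ku:.1f}  angular integral={diffuse_transmittance(ku):.4f}  "
          f"diffusivity 1.66 approx={np.exp(-1.66 * ku):.4f}")
```

    The single-exponential form with diffusivity factor 1.66 tracks the full angular integral to within a few percent at moderate absorber amounts, which is why one-parameter scalings of this kind are attractive in climate models.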

  16. Reliable control using the primary and dual Youla parameterizations

    DEFF Research Database (Denmark)

    Niemann, Hans Henrik; Stoustrup, J.

    2002-01-01

    Different aspects of modeling faults in dynamic systems are considered in connection with reliable control (RC). The fault models include models with additive faults, multiplicative faults and structural changes in the models due to faults in the systems. These descriptions are considered...... in connection with reliable control and feedback control with fault rejection. The main emphasis is on fault modeling. A number of fault diagnosis problems, reliable control problems, and feedback control with fault rejection problems are formulated/considered, again, mainly from a fault modeling point of view....... Reliability is introduced by means of the (primary) Youla parameterization of all stabilizing controllers, where an additional loop is closed around a diagnostic signal. In order to quantify the level of reliability, the dual Youla parameterization is introduced which can be used to analyze how large faults...

  17. The Explicit-Cloud Parameterized-Pollutant hybrid approach for aerosol-cloud interactions in multiscale modeling framework models: tracer transport results

    International Nuclear Information System (INIS)

    Gustafson, William I., Jr.; Berg, Larry K.; Easter, Richard C.; Ghan, Steven J.

    2008-01-01

    All estimates of aerosol indirect effects on the global energy balance have either completely neglected the influence of aerosol on convective clouds or treated the influence in a highly parameterized manner. Embedding cloud-resolving models (CRMs) within each grid cell of a global model provides a multiscale modeling framework for treating both the influence of aerosols on convective as well as stratiform clouds and the influence of clouds on the aerosol, but treating the interactions explicitly by simulating all aerosol processes in the CRM is computationally prohibitive. An alternate approach is to use horizontal statistics (e.g., cloud mass flux, cloud fraction, and precipitation) from the CRM simulation to drive a single-column parameterization of cloud effects on the aerosol and then use the aerosol profile to simulate aerosol effects on clouds within the CRM. Here, we present results from the first component of the Explicit-Cloud Parameterized-Pollutant parameterization to be developed, which handles vertical transport of tracers by clouds. A CRM with explicit tracer transport serves as a benchmark. We show that this parameterization, driven by the CRM's cloud mass fluxes, reproduces the CRM tracer transport significantly better than a single-column model that uses a conventional convective cloud parameterization

  18. The Explicit-Cloud Parameterized-Pollutant hybrid approach for aerosol-cloud interactions in multiscale modeling framework models: tracer transport results

    Energy Technology Data Exchange (ETDEWEB)

    Gustafson, William I., Jr.; Berg, Larry K.; Easter, Richard C.; Ghan, Steven J. [Atmospheric Science and Global Change Division, Pacific Northwest National Laboratory, PO Box 999, MSIN K9-30, Richland, WA (United States)], E-mail: William.Gustafson@pnl.gov

    2008-04-15

    All estimates of aerosol indirect effects on the global energy balance have either completely neglected the influence of aerosol on convective clouds or treated the influence in a highly parameterized manner. Embedding cloud-resolving models (CRMs) within each grid cell of a global model provides a multiscale modeling framework for treating both the influence of aerosols on convective as well as stratiform clouds and the influence of clouds on the aerosol, but treating the interactions explicitly by simulating all aerosol processes in the CRM is computationally prohibitive. An alternate approach is to use horizontal statistics (e.g., cloud mass flux, cloud fraction, and precipitation) from the CRM simulation to drive a single-column parameterization of cloud effects on the aerosol and then use the aerosol profile to simulate aerosol effects on clouds within the CRM. Here, we present results from the first component of the Explicit-Cloud Parameterized-Pollutant parameterization to be developed, which handles vertical transport of tracers by clouds. A CRM with explicit tracer transport serves as a benchmark. We show that this parameterization, driven by the CRM's cloud mass fluxes, reproduces the CRM tracer transport significantly better than a single-column model that uses a conventional convective cloud parameterization.

  19. Chebyshev-Taylor Parameterization of Stable/Unstable Manifolds for Periodic Orbits: Implementation and Applications

    Science.gov (United States)

    Mireles James, J. D.; Murray, Maxime

    2017-12-01

    This paper develops a Chebyshev-Taylor spectral method for studying stable/unstable manifolds attached to periodic solutions of differential equations. The work exploits the parameterization method — a general functional analytic framework for studying invariant manifolds. Useful features of the parameterization method include the fact that it can follow folds in the embedding, recovers the dynamics on the manifold through a simple conjugacy, and admits a natural notion of a posteriori error analysis. Our approach begins by deriving a recursive system of linear differential equations describing the Taylor coefficients of the invariant manifold. We represent periodic solutions of these equations as solutions of coupled systems of boundary value problems. We discuss the implementation and performance of the method for the Lorenz system, and for the planar circular restricted three- and four-body problems. We also illustrate the use of the method as a tool for computing cycle-to-cycle connecting orbits.

  20. Robust H∞ Control for Singular Time-Delay Systems via Parameterized Lyapunov Functional Approach

    Directory of Open Access Journals (Sweden)

    Li-li Liu

    2014-01-01

    A new version of the delay-dependent bounded real lemma for singular systems with state delay is established by a parameterized Lyapunov-Krasovskii functional approach. In order to avoid generating nonconvex problem formulations in control design, a strategy that introduces slack matrices and decouples the system matrices from the Lyapunov-Krasovskii parameter matrices is used. Examples are provided to demonstrate that the results in this paper are less conservative than the existing corresponding ones in the literature.
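    The machinery above rests on Lyapunov-type certificates expressed as matrix equations and inequalities. As a minimal, delay-free illustration (not the parameterized Lyapunov-Krasovskii functional of the paper): for a stable A, solving A'P + PA = -Q yields a positive definite P that certifies stability.

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])   # stable: eigenvalues -1 and -2
Q = np.eye(2)
# scipy solves A X + X A^H = Q, so pass A.T and -Q to obtain A^T P + P A = -Q
P = solve_continuous_lyapunov(A.T, -Q)
print("P =", P)
print("eigenvalues of P:", np.linalg.eigvalsh(P))   # all positive for stable A
```

    Delay-dependent results like the bounded real lemma in this record extend this picture: the certificate becomes a functional with several parameter matrices, and the matrix equation becomes a linear matrix inequality that convex solvers can check.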

  1. Parameterization of mixing by secondary circulation in estuaries

    Science.gov (United States)

    Basdurak, N. B.; Huguenard, K. D.; Valle-Levinson, A.; Li, M.; Chant, R. J.

    2017-07-01

    Eddy viscosity parameterizations that depend on a gradient Richardson number Ri have been most pertinent to the open ocean. Parameterizations applicable to stratified coastal regions typically require implementation of a numerical model. Two novel parameterizations of the vertical eddy viscosity, based on Ri, are proposed here for coastal waters. One turbulence closure considers temporal changes in stratification and bottom stress and is coined the "regular fit." The alternative approach, named the "lateral fit," incorporates variability of lateral flows that are prevalent in estuaries. The two turbulence parameterization schemes are tested using data from a Self-Contained Autonomous Microstructure Profiler (SCAMP) and an Acoustic Doppler Current Profiler (ADCP) collected in the James River Estuary. The "regular fit" compares favorably to SCAMP-derived vertical eddy viscosity values but only at relatively small values of gradient Ri. On the other hand, the "lateral fit" succeeds at describing the lateral variability of eddy viscosity over a wide range of Ri. The modifications proposed to Ri-dependent eddy viscosity parameterizations allow applicability to stratified coastal regions, particularly in wide estuaries, without requiring implementation of a numerical model.
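    The paper's two fits are data-driven, but the family they belong to (eddy viscosity damped as the gradient Richardson number grows) goes back to the classic Munk-Anderson closure, sketched here for illustration. The coefficient and exponent below are the traditional ones, not those of the "regular" or "lateral" fits.

```python
import numpy as np

def munk_anderson_viscosity(nu0, Ri):
    """Classic Munk-Anderson (1948) stability damping of eddy viscosity:
    nu = nu0 * (1 + 10 Ri)^(-1/2). Illustrates Ri-dependence only; it is
    not one of the two parameterizations proposed in this record."""
    Ri = np.maximum(Ri, 0.0)   # damp only under stable stratification
    return nu0 * (1.0 + 10.0 * Ri) ** -0.5

for Ri in (0.0, 0.25, 1.0):
    print(f"Ri={Ri:.2f}  nu/nu0={munk_anderson_viscosity(1.0, Ri):.3f}")
```

    Mixing is strongest in unstratified flow and is progressively suppressed as stratification stabilizes the water column; the "regular" and "lateral" fits refine how this suppression depends on bottom stress and lateral flow.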

  2. Rural postman parameterized by the number of components of required edges

    DEFF Research Database (Denmark)

    Gutin, Gregory; Wahlström, Magnus; Yeo, Anders

    2017-01-01

    In the Directed Rural Postman Problem (DRPP), given a strongly connected directed multigraph D=(V,A) with nonnegative integral weights on the arcs, a subset R of required arcs and a nonnegative integer ℓ, decide whether D has a closed directed walk containing every arc of R and of weight at most ℓ. Let k be the number of weakly connected components in the subgraph of D induced by R. Sorge et al. [30] asked whether the DRPP is fixed-parameter tractable (FPT) when parameterized by k, i.e., whether there is an algorithm of running time O⁎(f(k)) where f is a function of k only and the O⁎ notation suppresses polynomial factors. Using an algebraic approach, we prove that DRPP has a randomized algorithm of running time O⁎(2^k) when ℓ is bounded by a polynomial in the number of vertices in D. The same result holds for the undirected version of DRPP.
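    The parameter k in this result is cheap to compute: union the endpoints of the required arcs (ignoring orientation, which is what "weakly connected" means) and count the resulting classes. A minimal union-find sketch:

```python
def count_required_components(n, required_edges):
    """Number k of connected components in the subgraph induced by the
    endpoints of the required arcs, ignoring arc orientation (the
    parameter in the FPT result above)."""
    parent = list(range(n))

    def find(v):
        while parent[v] != v:
            parent[v] = parent[parent[v]]   # path halving
            v = parent[v]
        return v

    touched = set()
    for u, v in required_edges:
        touched.update((u, v))
        ru, rv = find(u), find(v)
        if ru != rv:
            parent[ru] = rv
    return len({find(v) for v in touched})

# required arcs form two clusters: {0, 1, 2} and {5, 6}
R = [(0, 1), (1, 2), (5, 6)]
print(count_required_components(8, R))   # -> 2
```

    An instance whose required arcs fall into few clusters is thus easy in the O⁎(2^k) sense, regardless of how large the whole multigraph is.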

  3. Trajectory parameterization-A new approach to the study of the cosmic ray penumbra

    International Nuclear Information System (INIS)

    Cooke, D.J.

    1982-01-01

    A new approach to the examination of the structure of the cosmic ray penumbra has been developed, which, while utilizing the speed, efficiency and "real" geomagnetic field modeling capabilities of the digital computer, yields an analytical insight equivalent to that of the earlier and elegant approaches of Stormer, and of Lemaitre and Vallarta. The method involves assigning parameters to trajectory features in order to allow the structure and properties of the penumbra to be systematically related to trajectory configuration by means of an automated set of computer procedures. Initial use of this "trajectory parameterization" technique to explore the form and stability of the penumbra has yielded new and important knowledge of the penumbra and its phenomenology. Among the new findings is the discovery of isolated forbidden penumbral "islands."

  4. Tool-driven Design and Automated Parameterization for Real-time Generic Drivetrain Models

    Directory of Open Access Journals (Sweden)

    Schwarz Christina

    2015-01-01

    Real-time dynamic drivetrain modeling approaches have great potential for development cost reduction in the automotive industry. Even though real-time drivetrain models are available, these solutions are specific to single transmission topologies. In this paper an environment for parameterization is proposed, based on a generic method applicable to all types of gear transmission topologies. This enables tool-guided modeling by non-experts in the fields of mechanical engineering and control theory, leading to reduced development and testing efforts. The approach is demonstrated for an exemplary automatic transmission using the environment for automated parameterization. Finally, the parameterization is validated against vehicle measurement data.

  5. Constructing IGA-suitable planar parameterization from complex CAD boundary by domain partition and global/local optimization

    Science.gov (United States)

    Xu, Gang; Li, Ming; Mourrain, Bernard; Rabczuk, Timon; Xu, Jinlan; Bordas, Stéphane P. A.

    2018-01-01

    In this paper, we propose a general framework for constructing IGA-suitable planar B-spline parameterizations from given complex CAD boundaries consisting of a set of B-spline curves. Instead of forming the computational domain by a simple boundary, planar domains with high genus and more complex boundary curves are considered. Firstly, some pre-processing operations including Bézier extraction and subdivision are performed on each boundary curve in order to generate a high-quality planar parameterization; then a robust planar domain partition framework is proposed to construct high-quality patch-meshing results with few singularities from the discrete boundary formed by connecting the end points of the resulting boundary segments. After the topology information generation of quadrilateral decomposition, the optimal placement of interior Bézier curves corresponding to the interior edges of the quadrangulation is constructed by a global optimization method to achieve a patch-partition with high quality. Finally, after the imposition of C1/G1-continuity constraints on the interface of neighboring Bézier patches with respect to each quad in the quadrangulation, the high-quality Bézier patch parameterization is obtained by a C1-constrained local optimization method to achieve uniform and orthogonal iso-parametric structures while keeping the continuity conditions between patches. The efficiency and robustness of the proposed method are demonstrated by several examples which are compared to results obtained by the skeleton-based parameterization approach.

  6. A Symbolic Computation Approach to Parameterizing Controller for Polynomial Hamiltonian Systems

    Directory of Open Access Journals (Sweden)

    Zhong Cao

    2014-01-01

    This paper considers a controller parameterization method for H∞ control of polynomial Hamiltonian systems (PHSs), which involves internal stability and external disturbance attenuation. The aims of this paper are to design a controller with parameters that ensure the systems are H∞ stable, and to propose an algorithm for solving the parameters of the controller with symbolic computation. The proposed parameterization method avoids solving Hamilton-Jacobi-Isaacs equations, and thus the obtained controllers with parameters are relatively simple in form and easy to operate. Simulation with a numerical example shows that the controller is effective, as it can optimize H∞ control by adjusting parameters. All these results are expected to be of use in the study of H∞ control for nonlinear systems with perturbations.

  7. Parameterized post-Newtonian cosmology

    International Nuclear Information System (INIS)

    Sanghai, Viraj A A; Clifton, Timothy

    2017-01-01

    Einstein’s theory of gravity has been extensively tested on solar system scales, and for isolated astrophysical systems, using the perturbative framework known as the parameterized post-Newtonian (PPN) formalism. This framework is designed for use in the weak-field and slow-motion limit of gravity, and can be used to constrain a large class of metric theories of gravity with data collected from the aforementioned systems. Given the potential of future surveys to probe cosmological scales to high precision, it is a topic of much contemporary interest to construct a similar framework to link Einstein’s theory of gravity and its alternatives to observations on cosmological scales. Our approach to this problem is to adapt and extend the existing PPN formalism for use in cosmology. We derive a set of equations that use the same parameters to consistently model both weak fields and cosmology. This allows us to parameterize a large class of modified theories of gravity and dark energy models on cosmological scales, using just four functions of time. These four functions can be directly linked to the background expansion of the universe, first-order cosmological perturbations, and the weak-field limit of the theory. They also reduce to the standard PPN parameters on solar system scales. We illustrate how dark energy models and scalar-tensor and vector-tensor theories of gravity fit into this framework, which we refer to as ‘parameterized post-Newtonian cosmology’ (PPNC). (paper)

  8. Parameterized post-Newtonian cosmology

    Science.gov (United States)

    Sanghai, Viraj A. A.; Clifton, Timothy

    2017-03-01

    Einstein’s theory of gravity has been extensively tested on solar system scales, and for isolated astrophysical systems, using the perturbative framework known as the parameterized post-Newtonian (PPN) formalism. This framework is designed for use in the weak-field and slow-motion limit of gravity, and can be used to constrain a large class of metric theories of gravity with data collected from the aforementioned systems. Given the potential of future surveys to probe cosmological scales to high precision, it is a topic of much contemporary interest to construct a similar framework to link Einstein’s theory of gravity and its alternatives to observations on cosmological scales. Our approach to this problem is to adapt and extend the existing PPN formalism for use in cosmology. We derive a set of equations that use the same parameters to consistently model both weak fields and cosmology. This allows us to parameterize a large class of modified theories of gravity and dark energy models on cosmological scales, using just four functions of time. These four functions can be directly linked to the background expansion of the universe, first-order cosmological perturbations, and the weak-field limit of the theory. They also reduce to the standard PPN parameters on solar system scales. We illustrate how dark energy models and scalar-tensor and vector-tensor theories of gravity fit into this framework, which we refer to as ‘parameterized post-Newtonian cosmology’ (PPNC).
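
The "four functions of time" idea can be illustrated with the widely used (mu, eta) parameterization of linear perturbations, a simpler relative of the PPNC construction shown here only for flavor (these are not the PPNC equations themselves, and sign conventions vary across the literature):

```latex
% Perturbed FLRW metric in the conformal Newtonian gauge:
ds^2 = a^2(\tau)\left[-(1 + 2\Phi)\,d\tau^2
       + (1 - 2\Psi)\,\delta_{ij}\,dx^i\,dx^j\right]

% Modified Poisson equation and gravitational slip, parameterized by two
% free functions of time (and possibly scale):
k^2\Phi = -4\pi G\,\mu(a,k)\,a^2\bar{\rho}\,\Delta , \qquad
\Phi/\Psi = \eta(a,k)

% General relativity is recovered when \mu = \eta = 1; in a PPN-style
% framework such functions must also reduce to the standard solar-system
% parameters on small scales.
```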

  9. Parameterized examination in econometrics

    Science.gov (United States)

    Malinova, Anna; Kyurkchiev, Vesselin; Spasov, Georgi

    2018-01-01

    The paper presents a parameterization of basic types of exam questions in Econometrics. This algorithm is used to automate and facilitate the process of examination, assessment and self-preparation of a large number of students. The proposed parameterization of testing questions reduces the time required to author tests and course assignments. It enables tutors to generate a large number of different but equivalent dynamic questions (with dynamic answers) on a certain topic, which are automatically assessed. The presented methods are implemented in DisPeL (Distributed Platform for e-Learning) and provide questions in the areas of filtering and smoothing of time-series data, forecasting, building and analysis of single-equation econometric models. Questions also cover elasticity, average and marginal characteristics, product and cost functions, measurement of monopoly power, supply, demand and equilibrium price, consumer and product surplus, etc. Several approaches are used to enable the required numerical computations in DisPeL - integration of third-party mathematical libraries, developing our own procedures from scratch, and wrapping our legacy math codes in order to modernize and reuse them.

  10. Development of a parameterization scheme of mesoscale convective systems

    International Nuclear Information System (INIS)

    Cotton, W.R.

    1994-01-01

    The goal of this research is to develop a parameterization scheme for mesoscale convective systems (MCS), including diabatic heating, moisture and momentum transports, cloud formation, and precipitation. The approach is to: perform explicit cloud-resolving simulations of MCSs; perform statistical analyses of the simulated MCSs to assist in fabricating a parameterization, calibrating coefficients, etc.; and test the parameterization scheme against independent field data measurements and in numerical weather prediction (NWP) models emulating general circulation model (GCM) grid resolution. Thus far we have formulated, calibrated, implemented and tested a deep convective engine against explicit Florida sea breeze convection and in coarse-grid regional simulations of mid-latitude and tropical MCSs. Several explicit simulations of MCSs have been completed, and several others are in progress. Analysis code is being written and run on the explicitly simulated data.

  11. Exploring the potential of machine learning to break deadlock in convection parameterization

    Science.gov (United States)

    Pritchard, M. S.; Gentine, P.

    2017-12-01

    We explore the potential of modern machine learning tools (via TensorFlow) to replace the parameterization of deep convection in climate models. Our strategy begins by generating a large (~1 TB) training dataset from time-step level (30-min) output harvested from a one-year integration of a zonally symmetric, uniform-SST aquaplanet configuration of the SuperParameterized Community Atmosphere Model (SPCAM). We harvest the inputs and outputs connecting each of SPCAM's 8,192 embedded cloud-resolving model (CRM) arrays to its host climate model's arterial thermodynamic state variables to afford 143M independent training instances. We demonstrate that this dataset is sufficiently large to induce preliminary convergence for neural network prediction of the desired outputs of SP, i.e. CRM-mean convective heating and moistening profiles. The sensitivity of the machine learning convergence to the nuances of the TensorFlow implementation is discussed, as well as results from pilot tests of the neural network operating inline within SPCAM as a replacement for the (super)parameterization of convection.
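
The emulation step can be sketched with a toy regression, assuming nothing about the authors' actual TensorFlow architecture: a small fully connected network (plain NumPy here, with invented shapes and synthetic data standing in for the SPCAM input/output pairs) is trained to map a column "state" vector to a heating-profile vector.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for the SPCAM training pairs: "state" vectors map to
# "heating profile" vectors through an invented smooth target function.
n_samples, n_in, n_hidden, n_out = 2000, 16, 32, 8
X = rng.normal(size=(n_samples, n_in))
W_true = rng.normal(size=(n_in, n_out)) / np.sqrt(n_in)
Y = np.tanh(X @ W_true)

# One-hidden-layer MLP trained by full-batch gradient descent.
W1 = rng.normal(size=(n_in, n_hidden)) * 0.1
b1 = np.zeros(n_hidden)
W2 = rng.normal(size=(n_hidden, n_out)) * 0.1
b2 = np.zeros(n_out)

def forward(X):
    H = np.tanh(X @ W1 + b1)
    return H, H @ W2 + b2

_, P0 = forward(X)
loss0 = np.mean((P0 - Y) ** 2)          # loss before training

lr = 0.05
for _ in range(500):
    H, P = forward(X)
    G = 2.0 * (P - Y) / n_samples       # dLoss/dP
    gW2, gb2 = H.T @ G, G.sum(axis=0)
    GH = (G @ W2.T) * (1.0 - H ** 2)    # back-propagate through tanh
    gW1, gb1 = X.T @ GH, GH.sum(axis=0)
    W1 -= lr * gW1; b1 -= lr * gb1
    W2 -= lr * gW2; b2 -= lr * gb2

_, P = forward(X)
loss = np.mean((P - Y) ** 2)            # loss after training
print(loss0, loss)
```

In the real setting the inputs are the host model's state profiles and the targets are the CRM-mean heating and moistening tendencies; TensorFlow would replace the hand-written gradients.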

  12. Structural and parametric uncertainty quantification in cloud microphysics parameterization schemes

    Science.gov (United States)

    van Lier-Walqui, M.; Morrison, H.; Kumjian, M. R.; Prat, O. P.; Martinkus, C.

    2017-12-01

    Atmospheric model parameterization schemes employ approximations to represent the effects of unresolved processes. These approximations are a source of error in forecasts, caused in part by considerable uncertainty about the optimal value of parameters within each scheme -- parametric uncertainty. Furthermore, there is uncertainty regarding the best choice of the overarching structure of the parameterization scheme -- structural uncertainty. Parameter estimation can constrain the first, but may struggle with the second because structural choices are typically discrete. We address this problem in the context of cloud microphysics parameterization schemes by creating a flexible framework wherein structural and parametric uncertainties can be simultaneously constrained. Our scheme makes no assumptions about drop size distribution shape or the functional form of parameterized process rate terms. Instead, these uncertainties are constrained by observations using a Markov chain Monte Carlo sampler within a Bayesian inference framework. Our scheme, the Bayesian Observationally-constrained Statistical-physical Scheme (BOSS), has the flexibility to predict various sets of prognostic drop size distribution moments as well as varying complexity of process rate formulations. We compare idealized probabilistic forecasts from versions of BOSS with varying levels of structural complexity. This work has applications in ensemble forecasts with model physics uncertainty, data assimilation, and cloud microphysics process studies.
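
The Bayesian machinery behind BOSS can be illustrated with a minimal random-walk Metropolis sampler (not the BOSS implementation; the power-law process rate, its true parameters, and the noise level below are all invented for the sketch):

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic "observations" of a power-law process rate R(q) = a * q**b,
# standing in for a microphysical process rate; a and b are the unknowns.
a_true, b_true, sigma = 2.0, 1.5, 0.05
q = np.linspace(0.1, 1.0, 40)
obs = a_true * q ** b_true + rng.normal(0.0, sigma, size=q.size)

def log_post(theta):
    a, b = theta
    if a <= 0.0:                          # flat prior with a positivity bound on a
        return -np.inf
    resid = obs - a * q ** b
    return -0.5 * np.sum((resid / sigma) ** 2)

# Random-walk Metropolis sampler.
theta = np.array([1.0, 1.0])
lp = log_post(theta)
chain = []
for _ in range(30000):
    prop = theta + rng.normal(0.0, 0.05, size=2)
    lp_prop = log_post(prop)
    if np.log(rng.uniform()) < lp_prop - lp:
        theta, lp = prop, lp_prop
    chain.append(theta.copy())

samples = np.array(chain[10000:])         # discard burn-in
a_hat, b_hat = samples.mean(axis=0)
print(a_hat, b_hat)
```

The posterior mean recovers the generating parameters; in BOSS the same machinery constrains both process-rate parameters and the number of predicted distribution moments.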

  13. Normalization of the parameterized Courant-Snyder matrix for symplectic factorization of a parameterized Taylor map

    International Nuclear Information System (INIS)

    Yan, Y.T.

    1991-01-01

    The transverse motion of charged particles in a circular accelerator can be well represented by a one-turn high-order Taylor map. For particles without energy deviation, the one-turn Taylor map is a 4-dimensional polynomial map of four variables: the transverse canonical coordinates and their conjugate momenta. To include energy-deviation (off-momentum) effects, the map has to be parameterized with a smallness factor representing the off-momentum, and so the Taylor map becomes a 4-dimensional polynomial map of five variables. It is for this type of parameterized Taylor map that a method is presented for converting it into a parameterized Dragt-Finn factorization map. A parameterized nonlinear normal form and a parameterized kick factorization can thus be obtained with suitable modification of the existing technique.

  14. Modeling the regional impact of ship emissions on NOx and ozone levels over the Eastern Atlantic and Western Europe using ship plume parameterization

    Directory of Open Access Journals (Sweden)

    P. Pisoft

    2010-07-01

    In general, regional and global chemistry transport models apply instantaneous mixing of emissions into the model's finest resolved scale. In the case of a concentrated source, this can result in erroneous calculation of the evolution of both primary and secondary chemical species. Several studies have discussed this issue in connection with emissions from ships and aircraft. In this study, we present an approach to deal with the non-linear effects during dispersion of NOx emissions from ships. It represents an adaptation of the original approach developed for aircraft NOx emissions, which uses an exhaust tracer to trace the amount of the emitted species in the plume and applies an effective reaction rate for the ozone production/destruction during the plume's dilution into the background air. In accordance with previous studies examining the impact of international shipping on the composition of the troposphere, we found that the contribution of ship-induced surface NOx to the total reaches 90% over the remote ocean and 10–30% near coastal regions. Due to ship emissions, surface ozone increases by up to 4–6 ppbv, a 10% contribution to the surface ozone budget. When applying the ship plume parameterization, we show that large-scale NOx decreases and the ship NOx contribution is reduced by up to 20–25%. A similar decrease was found in the case of O3. The plume parameterization suppressed ship-induced ozone production by 15–30% over large areas of the studied region. To evaluate the presented parameterization, nitrogen monoxide measurements over the English Channel were compared with modeled values, and it was found that activating the parameterization increases the model accuracy.
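
A back-of-the-envelope sketch of why plume treatment reduces ozone production: the ozone production efficiency (OPE) per NOx molecule is lower at concentrated plume levels than at diluted background levels, so weighting the still-undiluted fraction of the emissions with an effective plume OPE yields less ozone than instantaneous mixing. All numbers below are invented for illustration:

```python
import numpy as np

# Invented illustrative numbers; OPE = ozone production efficiency,
# molecules of O3 produced per molecule of NOx consumed.
E_nox = 1.0                # emitted ship NOx (arbitrary units)
ope_background = 6.0       # OPE at diluted background concentrations (assumed)
ope_plume = 2.0            # effective OPE at plume concentrations (assumed)
tau_dilution = 2.0         # e-folding time of plume dilution, hours (assumed)
t_chem = 1.0               # time over which plume-stage chemistry acts, hours

frac_in_plume = np.exp(-t_chem / tau_dilution)   # NOx still at plume concentrations

o3_instant = E_nox * ope_background              # instantaneous-mixing estimate
o3_plume = E_nox * (frac_in_plume * ope_plume
                    + (1.0 - frac_in_plume) * ope_background)

reduction = 1.0 - o3_plume / o3_instant
print(round(reduction, 3))
```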

  15. On parameterization of the inverse problem for estimating aquifer properties using tracer data

    International Nuclear Information System (INIS)

    Kowalsky, M. B.; Finsterle, Stefan A.; Williams, Kenneth H.; Murray, Christopher J.; Commer, Michael; Newcomer, Darrell R.; Englert, Andreas L.; Steefel, Carl I.; Hubbard, Susan

    2012-01-01

    We consider a field-scale tracer experiment conducted in 2007 in a shallow uranium-contaminated aquifer at Rifle, Colorado. In developing a reliable approach for inferring hydrological properties at the site through inverse modeling of the tracer data, decisions made on how to parameterize heterogeneity (i.e., how to represent a heterogeneous distribution using a limited number of parameters that are amenable to estimation) are of paramount importance. We present an approach for hydrological inversion of the tracer data and explore, using a 2D synthetic example at first, how parameterization affects the solution, and how additional characterization data could be incorporated to reduce uncertainty. Specifically, we examine sensitivity of the results to the configuration of pilot points used in a geostatistical parameterization, and to the sampling frequency and measurement error of the concentration data. A reliable solution of the inverse problem is found when the pilot point configuration is carefully implemented. In addition, we examine the use of a zonation parameterization, in which the geometry of the geological facies is known (e.g., from geophysical data or core data), to reduce the non-uniqueness of the solution and the number of unknown parameters to be estimated. When zonation information is only available for a limited region, special treatment in the remainder of the model is necessary, such as using a geostatistical parameterization. Finally, inversion of the actual field data is performed using 2D and 3D models, and results are compared with slug test data.
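
The pilot-point idea, representing a heterogeneous field by values at a few estimable locations that are interpolated everywhere else, can be sketched as follows (kriging would normally be used for the interpolation; plain inverse-distance weighting keeps the example self-contained, and the grid and pilot locations are invented):

```python
import numpy as np

rng = np.random.default_rng(2)
nx = ny = 20                                       # grid size (invented)

pilots = np.array([[3.0, 4.0], [15.0, 5.0], [8.0, 14.0], [17.0, 16.0]])
values = rng.normal(-12.0, 1.0, size=len(pilots))  # log10 permeability at pilots

def idw(px, py, power=2.0):
    """Inverse-distance-weighted interpolation from the pilot points."""
    d2 = (pilots[:, 0] - px) ** 2 + (pilots[:, 1] - py) ** 2
    if np.any(d2 == 0.0):
        return values[np.argmin(d2)]               # exactly honor pilot values
    w = d2 ** (-power / 2.0)
    return float(np.sum(w * values) / np.sum(w))

field = np.array([[idw(x, y) for y in range(ny)] for x in range(nx)])
print(field.shape, field[3, 4], values[0])
```

Only four parameters describe the 400-cell field here, which is exactly what makes them amenable to estimation; the configuration of the pilot points plays the same role as in the study above.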

  16. On quaternion based parameterization of orientation in computer vision and robotics

    Directory of Open Access Journals (Sweden)

    G. Terzakis

    2014-04-01

    The problem of orientation parameterization for applications in computer vision and robotics is examined in detail herein. The necessary intuition and formulas are provided for direct practical use in any existing algorithm that seeks to minimize a cost function in an iterative fashion. Two distinct schemes of parameterization are analyzed: the first scheme concerns the traditional axis-angle approach, while the second employs stereographic projection from the unit quaternion sphere to the 3D real projective space. Performance measurements are taken and a comparison is made between the two approaches. Results suggest that there are several benefits in the use of stereographic projection, including rational expressions in the rotation matrix derivatives, improved accuracy, robustness to random starting points and accelerated convergence.
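
The second scheme can be sketched directly: stereographic projection maps a point p in R^3 to a unit quaternion, which is then converted to a rotation matrix by the standard formula. The mapping below projects from the quaternion (-1, 0, 0, 0), one common convention; the key property is that all expressions are rational in p.

```python
import numpy as np

def stereographic_to_quaternion(p):
    """Map p in R^3 to a unit quaternion (w, x, y, z); p = 0 gives the identity."""
    p = np.asarray(p, dtype=float)
    r2 = float(p @ p)
    return np.concatenate(([(1.0 - r2) / (1.0 + r2)], 2.0 * p / (1.0 + r2)))

def quaternion_to_matrix(q):
    """Standard rotation matrix of a unit quaternion (w, x, y, z)."""
    w, x, y, z = q
    return np.array([
        [1 - 2*(y*y + z*z), 2*(x*y - w*z),     2*(x*z + w*y)],
        [2*(x*y + w*z),     1 - 2*(x*x + z*z), 2*(y*z - w*x)],
        [2*(x*z - w*y),     2*(y*z + w*x),     1 - 2*(x*x + y*y)],
    ])

q = stereographic_to_quaternion([0.1, -0.2, 0.3])
R = quaternion_to_matrix(q)
print(np.linalg.norm(q), np.linalg.det(R))
```

The quaternion has unit norm by construction (its norm is algebraically 1 for any p), so the resulting matrix is a proper rotation without renormalization.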

  17. A Thermal Infrared Radiation Parameterization for Atmospheric Studies

    Science.gov (United States)

    Chou, Ming-Dah; Suarez, Max J.; Liang, Xin-Zhong; Yan, Michael M.-H.; Cote, Charles (Technical Monitor)

    2001-01-01

    This technical memorandum documents the longwave radiation parameterization developed at the Climate and Radiation Branch, NASA Goddard Space Flight Center, for a wide variety of weather and climate applications. Based on the 1996 version of the Air Force Geophysical Laboratory HITRAN database, the parameterization includes absorption by the major gases (water vapor, CO2, O3) and most of the minor trace gases (N2O, CH4, CFCs), as well as clouds and aerosols. The thermal infrared spectrum is divided into nine bands. To achieve a high degree of accuracy and speed, different approaches to computing the transmission function are applied to different spectral bands and gases. The gaseous transmission function is computed using either the k-distribution method or the table look-up method. To include the effect of scattering due to clouds and aerosols, the optical thickness is scaled by the single-scattering albedo and asymmetry factor. The parameterization can compute fluxes to within 1% of high spectral-resolution line-by-line calculations. The cooling rate can be accurately computed in the region extending from the surface to the 0.01-hPa level.
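
The scaling of cloud/aerosol optical thickness by the single-scattering albedo and asymmetry factor is in the spirit of the standard delta-Eddington similarity transformation, sketched below (the exact scaling used in this scheme may differ; this is the textbook form with forward-peak fraction f = g^2):

```python
def delta_eddington(tau, omega, g):
    """Delta-Eddington scaling with forward-peak fraction f = g**2 (textbook form)."""
    f = g * g
    tau_s = (1.0 - omega * f) * tau
    omega_s = (1.0 - f) * omega / (1.0 - omega * f)
    g_s = (g - f) / (1.0 - f)
    return tau_s, omega_s, g_s

# Example: a moderately thick, strongly forward-scattering cloud layer
# (values invented for illustration).
tau_s, omega_s, g_s = delta_eddington(tau=2.0, omega=0.9, g=0.85)
print(round(tau_s, 4), round(omega_s, 4), round(g_s, 4))
```

The forward-scattering peak is removed and the optical properties are rescaled so a low-order solver sees an equivalent, less anisotropic medium.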

  18. A novel non-uniform control vector parameterization approach with time grid refinement for flight level tracking optimal control problems.

    Science.gov (United States)

    Liu, Ping; Li, Guodong; Liu, Xinggao; Xiao, Long; Wang, Yalin; Yang, Chunhua; Gui, Weihua

    2018-02-01

    A high-quality control method is essential for the implementation of an aircraft autopilot system. An optimal control problem model considering the safe aerodynamic envelope is therefore established to improve the control quality of aircraft flight level tracking. A novel non-uniform control vector parameterization (CVP) method with time grid refinement is then proposed for solving the optimal control problem. By introducing Hilbert-Huang transform (HHT) analysis, an efficient time grid refinement approach is presented and an adaptive time grid is automatically obtained. With this refinement, the proposed method needs fewer optimization parameters to achieve better control quality than the uniform refinement CVP method, while the computational cost is lower. Two well-known flight level altitude tracking problems and one minimum time cost problem are tested as illustrations, with the uniform refinement control vector parameterization method adopted as the comparative base. Numerical results show that the proposed method achieves better performance in terms of optimization accuracy and computational cost; meanwhile, the control quality is efficiently improved. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.
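
The core CVP idea, approximating the control as piecewise constant over a time grid and optimizing the segment values, can be sketched on a toy flight-level tracking problem. The system, reference, grids, and the use of least squares (possible only because the toy dynamics are linear; real CVP uses a nonlinear programming solver) are all invented for illustration, but the comparison shows why refining the grid near rapid reference changes pays off:

```python
import numpy as np

# Toy system dx/dt = u, x(0) = 0, tracking a step change in "flight level".
T, n_steps = 10.0, 200
dt = T / n_steps
t = np.linspace(dt, T, n_steps)
x_ref = np.where(t < 5.0, 0.0, 1.0)

def control_to_state_matrix(edges):
    """Linear map from per-segment control values to the Euler-integrated state."""
    seg = np.searchsorted(edges, t, side="right") - 1
    seg = np.clip(seg, 0, len(edges) - 2)
    I = np.zeros((n_steps, len(edges) - 1))
    I[np.arange(n_steps), seg] = dt
    return np.cumsum(I, axis=0)          # x(t_k) = sum_{j<=k} u[seg(j)] * dt

grids = {
    "uniform":    np.linspace(0.0, T, 6),                    # 5 equal segments
    "nonuniform": np.array([0.0, 4.5, 5.0, 5.5, 6.5, T]),    # refined near the step
}

errs = {}
for name, edges in grids.items():
    A = control_to_state_matrix(edges)
    u, *_ = np.linalg.lstsq(A, x_ref, rcond=None)            # optimal segment values
    errs[name] = float(np.sqrt(np.mean((A @ u - x_ref) ** 2)))
print(errs)
```

With the same number of parameters, the grid refined near the reference step tracks it with a smaller residual, which is the motivation for the adaptive refinement above.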

  19. Analysis of sensitivity to different parameterization schemes for a subtropical cyclone

    Science.gov (United States)

    Quitián-Hernández, L.; Fernández-González, S.; González-Alemán, J. J.; Valero, F.; Martín, M. L.

    2018-05-01

    A sensitivity analysis of diverse WRF model physical parameterization schemes is carried out for the lifecycle of a subtropical cyclone (STC). STCs are low-pressure systems that share tropical and extratropical characteristics, with hybrid thermal structures. In October 2014, an STC made landfall in the Canary Islands, causing widespread damage there from strong winds and precipitation. The system began to develop on October 18 and its effects lasted until October 21. Accurate simulation of this type of cyclone continues to be a major challenge because of its rapid intensification and unique characteristics. In the present study, several numerical simulations were performed with the WRF model to analyze the sensitivity of the development and intensification of the STC to its various parameterization schemes. The combination of parameterization schemes that best simulated this type of phenomenon was thereby determined. In particular, the parameterization combinations that included the Tiedtke cumulus scheme had the most positive effects on model results. Moreover, concerning STC track validation, optimal results were attained once the STC was fully formed and all convective processes had stabilized. Furthermore, to identify the parameterization schemes that optimally categorize STC structure, a verification using Cyclone Phase Space was performed. The combinations including the Tiedtke cumulus scheme were again the best at categorizing the cyclone's subtropical structure. For strength validation, related atmospheric variables such as wind speed and precipitable water were analyzed. Finally, the effects of using a deterministic or probabilistic approach in simulating intense convective phenomena were evaluated.

  20. Parameterization Of Solar Radiation Using Neural Network

    International Nuclear Information System (INIS)

    Jiya, J. D.; Alfa, B.

    2002-01-01

    This paper presents a neural network technique for the parameterization of global solar radiation. The available data from twenty-one stations are used for training the neural network, and the data from ten other stations are used to validate the neural model. The neural network uses latitude, longitude, altitude, sunshine duration and period number to parameterize solar radiation values. The testing data were not used in the training, in order to demonstrate the performance of the neural network at unknown stations. The results indicate a good agreement between the parameterized solar radiation values and the actual measured values.

  1. Parameterization of a Hydrological Model for a Large, Ungauged Urban Catchment

    Directory of Open Access Journals (Sweden)

    Gerald Krebs

    2016-10-01

    Urbanization leads to the replacement of natural areas by impervious surfaces and affects the catchment hydrological cycle, with adverse environmental impacts. Low impact development (LID) tools that mimic the hydrological processes of natural areas have been developed and applied to mitigate these impacts. Hydrological simulations are one way to evaluate LID performance, but the associated small-scale processes require a highly spatially distributed and explicit modeling approach. However, detailed data for model development are often not available for large urban areas, hampering model parameterization. In this paper we propose a methodology to parameterize a hydrological model for a large, ungauged urban area that maintains a detailed surface discretization for direct parameter manipulation in LID simulation while relying firmly on available data for model conceptualization. Catchment delineation was based on a high-resolution digital elevation model (DEM), and model parameterization relied on a novel model regionalization approach. The impact of automated delineation and model regionalization on simulation results was evaluated for three monitored study catchments (5.87–12.59 ha). The simulated runoff peak was most sensitive to accurate catchment discretization and calibration, while both the runoff volume and the fit of the hydrograph were less affected.

  2. Better models are more effectively connected models

    Science.gov (United States)

    Nunes, João Pedro; Bielders, Charles; Darboux, Frederic; Fiener, Peter; Finger, David; Turnbull-Lloyd, Laura; Wainwright, John

    2016-04-01

    can be represented in models: either by allowing it to emerge from model behaviour or by parameterizing it inside model structures; and on the appropriate scale at which processes should be represented explicitly or implicitly. It will also explore how modellers themselves approach connectivity through the results of a community survey. Finally, it will present the outline of an international modelling exercise aimed at assessing how different modelling concepts can capture connectivity in real catchments.

  3. Stable Kernel Representations and the Youla Parameterization for Nonlinear Systems

    NARCIS (Netherlands)

    Paice, A.D.B.; Schaft, A.J. van der

    1994-01-01

    In this paper a general approach is taken to yield a characterization of the class of stable plant controller pairs, which is a generalization of the Youla parameterization for linear systems. This is based on the idea of representing the input-output pairs of the plant and controller as elements of

  4. Neutrosophic Parameterized Soft Relations and Their Applications

    Directory of Open Access Journals (Sweden)

    Irfan Deli

    2014-06-01

    The aim of this paper is to introduce the concept of relations on neutrosophic parameterized soft sets (NP-soft sets). We study some related properties and also put forward some propositions on neutrosophic parameterized soft relations, with proofs and examples. The notions of symmetric, transitive, reflexive, and equivalence neutrosophic parameterized soft set relations are established in our work. Finally, a decision making method on NP-soft sets is presented.

  5. Modulation wave approach to the structural parameterization and Rietveld refinement of low carnegieite

    International Nuclear Information System (INIS)

    Withers, R.L.; Thompson, J.G.

    1993-01-01

    The crystal structure of low carnegieite, NaAlSiO₄ [Mr = 142.05, orthorhombic, Pb2₁a, a = 10.261(1), b = 14.030(2), c = 5.1566(6) Å, Dx = 2.542 g cm⁻³, Z = 4, Cu Kα₁, λ = 1.5406 Å, μ = 77.52 cm⁻¹, F(000) = 559.85], is determined via Rietveld refinement from powder data, Rp = 0.057, Rwp = 0.076, RBragg = 0.050. Given that there are far too many parameters to be determined via unconstrained Rietveld refinement, a group theoretical or modulation wave approach is used in order to parameterize the structural deviation of low carnegieite from its underlying C9 aristotype. Appropriate crystal chemical constraints are applied in order to provide two distinct plausible starting models for the structure of the aluminosilicate framework. The correct starting model for the aluminosilicate framework as well as the ordering and positions of the non-framework Na atoms are then determined via Rietveld refinement. At all stages, chemical plausibility is checked via the use of the bond-length-bond-valence formalism. The JCPDS file number for low carnegieite is 44-1496. (orig.)

  6. Impact of model structure and parameterization on Penman-Monteith type evaporation models

    KAUST Repository

    Ershadi, A.; McCabe, Matthew; Evans, J.P.; Wood, E.F.

    2015-01-01

    Overall, the results illustrate the sensitivity of Penman-Monteith type models to model structure, parameterization choice and biome type. A particular challenge in flux estimation relates to developing robust and broadly applicable model formulations. With many choices available for use, providing guidance on the most appropriate scheme to employ is required to advance approaches for routine global scale flux estimates, undertake hydrometeorological assessments or develop hydrological forecasting tools, amongst many other applications. In such cases, a multi-model ensemble or biome-specific tiled evaporation product may be an appropriate solution, given the inherent variability in model and parameterization choice that is observed within single product estimates.

  7. Parameterizing unresolved obstacles with source terms in wave modeling: A real-world application

    Science.gov (United States)

    Mentaschi, Lorenzo; Kakoulaki, Georgia; Vousdoukas, Michalis; Voukouvalas, Evangelos; Feyen, Luc; Besio, Giovanni

    2018-06-01

    Parameterizing the dissipative effects of small, unresolved coastal features is fundamental to improving the skill of wave models. The established technique for dealing with this problem consists of reducing the amount of energy advected within the propagation scheme, and is currently available only for regular grids. To find a more general approach, Mentaschi et al. (2015b) formulated a technique based on source terms and validated it on synthetic case studies. This technique separates the parameterization of the unresolved features from the energy advection, and can therefore be applied to any numerical scheme and any type of mesh. Here we developed an open-source library for estimating the transparency coefficients needed by this approach from bathymetric data, for any type of mesh. The spectral wave model WAVEWATCH III was used to show that in a real-world domain, such as the Caribbean Sea, the proposed approach has skill comparable to, and sometimes better than, the established propagation-based technique.
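
One simplistic way to estimate a transparency coefficient from high-resolution obstacle data is the unobstructed fraction of a cell's cross-section transverse to the propagation direction; the actual library's estimation is more sophisticated, so the mask and rule below are purely illustrative:

```python
import numpy as np

def transparency_x(mask):
    """Open fraction of subgrid rows for propagation along the x axis.

    mask: boolean array (ny, nx), True where the subgrid cell is blocked.
    A row is treated as fully blocking if it contains any obstacle.
    """
    blocked_rows = mask.any(axis=1)
    return 1.0 - float(blocked_rows.mean())

cell = np.zeros((10, 10), dtype=bool)
cell[2:5, 4] = True          # a small island spanning 3 of the 10 rows
print(transparency_x(cell))
```

A coefficient of 1 means the coarse cell is fully transparent to wave energy; values below 1 feed the source-term dissipation for the unresolved obstacles.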

  8. Collaborative Project. 3D Radiative Transfer Parameterization Over Mountains/Snow for High-Resolution Climate Models. Fast physics and Applications

    Energy Technology Data Exchange (ETDEWEB)

    Liou, Kuo-Nan [Univ. of California, Los Angeles, CA (United States)

    2016-02-09

    Under the support of the aforementioned DOE grant, we have made two fundamental contributions to atmospheric and climate sciences: (1) development of an efficient 3-D radiative transfer parameterization for application to intense and intricate inhomogeneous mountain/snow regions, and (2) an innovative stochastic parameterization for light absorption by internally mixed black carbon and dust particles in snow grains, providing understanding of and physical insight into snow albedo reduction in climate models. With reference to item (1), we divided the solar fluxes reaching mountain surfaces into five components: direct and diffuse fluxes, direct- and diffuse-reflected fluxes, and the coupled mountain-mountain flux. “Exact” 3D Monte Carlo photon tracing computations can then be performed for these solar flux components and compared with those calculated from the conventional plane-parallel (PP) radiative transfer programs readily available in climate models. Subsequently, parameterizations of the deviations of 3D from PP results for the five flux components are carried out by means of multiple linear regression analysis associated with topographic information, including elevation, solar incident angle, sky view factor, and terrain configuration factor. We derived five regression equations with high statistical correlations for the flux deviations and successfully incorporated this efficient parameterization into the WRF model, which was used as the testbed in connection with the Fu-Liou-Gu PP radiation scheme included in the WRF physics package. Incorporating this 3D parameterization program, we conducted simulations with WRF and CCSM4 to understand and evaluate the mountain/snow effect on snow albedo reduction during seasonal transitions, and the interannual variability of snowmelt, cloud cover, and precipitation over the Western United States, as presented in the final report. With reference to item (2), we developed in our previous research a geometric-optics surface-wave approach (GOS) for the
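
The regression step described for item (1) can be sketched with synthetic data: flux deviations are fitted by multiple linear regression on topographic predictors such as sky view factor, solar incidence, and terrain configuration factor (the "true" coefficients and noise below are invented):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 500

# Invented topographic predictors for n surface points.
sky_view   = rng.uniform(0.5, 1.0, n)     # sky view factor
cos_solar  = rng.uniform(0.2, 1.0, n)     # cosine of the solar incidence angle
terrain_cf = rng.uniform(0.0, 0.3, n)     # terrain configuration factor

# Invented "true" 3D-minus-PP flux deviation (W m^-2), plus noise.
dev = (40.0 * (sky_view - 1.0) + 25.0 * (cos_solar - 0.6)
       + 80.0 * terrain_cf + rng.normal(0.0, 1.0, n))

# Multiple linear regression with an intercept.
X = np.column_stack([np.ones(n), sky_view, cos_solar, terrain_cf])
coef, *_ = np.linalg.lstsq(X, dev, rcond=None)
pred = X @ coef
r = float(np.corrcoef(pred, dev)[0, 1])
print(coef.round(2), round(r, 3))
```

Once fitted, such regression equations are cheap enough to evaluate inside a climate model at every grid point and time step, which is the point of the parameterization.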

  9. Carbody structural lightweighting based on implicit parameterized model

    Science.gov (United States)

    Chen, Xin; Ma, Fangwu; Wang, Dengfeng; Xie, Chen

    2014-05-01

    Most recent research on carbody lightweighting has focused on substitute materials and new processing technologies rather than structures. However, new materials and processing techniques inevitably lead to higher costs, and material substitution and processing lightweighting still have to be realized through the body's structural profiles and locations. In the huge workload of conventional lightweight optimization, model modification involves heavy manual work and typically leads to a large number of iteration calculations. As a new technique in carbody lightweighting, implicit parameterization is used in this paper to optimize the carbody structure and improve the material utilization rate. Implicit parameterized structural modeling enables automatic modification and rapid multidisciplinary design optimization (MDO) of the carbody structure, which is impossible with a traditional, non-parameterized structural finite element (FE) model. The SFE parameterized structural model is built in accordance with the car structural FE model at the concept development stage, and it is validated against structural performance data. The validated SFE structural parameterized model can then be used to rapidly and automatically generate FE models and evaluate different design variable groups in the integrated MDO loop. The lightweighting result for the body-in-white (BIW) after the optimization rounds reveals that the implicit parameterized model makes automatic MDO feasible and can significantly improve the computational efficiency of carbody structural lightweighting. This paper proposes an integrated method of implicit parameterized modeling and MDO, which has obvious practical advantages and industrial significance for carbody structural lightweighting design.

  10. Impact of model structure and parameterization on Penman-Monteith type evaporation models

    KAUST Repository

    Ershadi, A.

    2015-04-12

    sites, where the simpler aerodynamic resistance approach of Mu et al. (2011) showed improved performance. Overall, the results illustrate the sensitivity of Penman-Monteith type models to model structure, parameterization choice and biome type. A particular challenge in flux estimation relates to developing robust and broadly applicable model formulations. With many choices available for use, providing guidance on the most appropriate scheme to employ is required to advance approaches for routine global scale flux estimates, undertake hydrometeorological assessments or develop hydrological forecasting tools, amongst many other applications. In such cases, a multi-model ensemble or biome-specific tiled evaporation product may be an appropriate solution, given the inherent variability in model and parameterization choice that is observed within single product estimates.

  11. Active Subspaces of Airfoil Shape Parameterizations

    Science.gov (United States)

    Grey, Zachary J.; Constantine, Paul G.

    2018-05-01

    Design and optimization benefit from understanding the dependence of a quantity of interest (e.g., a design objective or constraint function) on the design variables. A low-dimensional active subspace, when present, identifies important directions in the space of design variables; perturbing a design along the active subspace associated with a particular quantity of interest changes that quantity more, on average, than perturbing the design orthogonally to the active subspace. This low-dimensional structure provides insights that characterize the dependence of quantities of interest on design variables. Airfoil design in a transonic flow field with a parameterized geometry is a popular test problem for design methodologies. We examine two particular airfoil shape parameterizations, PARSEC and CST, and study the active subspaces present in two common design quantities of interest, transonic lift and drag coefficients, under each shape parameterization. We mathematically relate the two parameterizations with a common polynomial series. The active subspaces enable low-dimensional approximations of lift and drag that relate to physical airfoil properties. In particular, we obtain and interpret a two-dimensional approximation of both transonic lift and drag, and we show how these approximations inform a multi-objective design problem.
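
The active-subspace computation itself is compact: estimate C = E[grad f grad f^T] from sampled gradients and eigendecompose it. The toy quantity of interest below is a one-dimensional ridge function standing in for lift or drag, so a single dominant eigenvector should recover the ridge direction:

```python
import numpy as np

rng = np.random.default_rng(4)
m = 5                                    # number of "shape" parameters

a = np.array([1.0, 2.0, 0.5, -1.0, 0.5])
a /= np.linalg.norm(a)                   # hidden ridge direction

def grad_f(x):
    # f(x) = sin(a.x), so grad f = cos(a.x) * a: f varies along a only.
    return np.cos(a @ x) * a

# Monte Carlo estimate of C = E[grad f grad f^T] over the design space.
X = rng.uniform(-1.0, 1.0, size=(2000, m))
G = np.array([grad_f(x) for x in X])
C = G.T @ G / len(X)

eigvals, eigvecs = np.linalg.eigh(C)     # ascending eigenvalues
eigvals, eigvecs = eigvals[::-1], eigvecs[:, ::-1]

align = float(abs(eigvecs[:, 0] @ a))    # leading eigenvector vs ridge direction
print(eigvals.round(4), round(align, 4))
```

A sharp drop after the leading eigenvalue is the signature of an active subspace; in the airfoil study the gradients would come from adjoint or finite-difference sensitivities of lift and drag with respect to the PARSEC or CST parameters.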

  12. Spectral cumulus parameterization based on cloud-resolving model

    Science.gov (United States)

    Baba, Yuya

    2018-02-01

    We have developed a spectral cumulus parameterization using a cloud-resolving model. This includes a new parameterization of the entrainment rate, which was derived from analysis of the cloud properties obtained from the cloud-resolving model simulation and is valid for both shallow and deep convection. The new scheme was examined in a single-column model experiment and compared with the existing parameterization of Gregory (2001, Q J R Meteorol Soc 127:53-72) (GR scheme). The results showed that the GR scheme simulated more shallow and diluted convection than the new scheme. To further validate the physical performance of the parameterizations, Atmospheric Model Intercomparison Project (AMIP) experiments were performed, and the results were compared with reanalysis data. The new scheme performed better than the GR scheme in terms of the mean state and variability of the atmospheric circulation: it reduced the positive bias of precipitation in the western Pacific region and the positive bias of outgoing shortwave radiation over the ocean. The new scheme also better simulated features of convectively coupled equatorial waves and the Madden-Julian oscillation. These improvements derive from the modified parameterization of the entrainment rate, which suppresses an excessive increase of entrainment, and thus an excessive increase of low-level clouds.

  13. Why Did the Bear Cross the Road? Comparing the Performance of Multiple Resistance Surfaces and Connectivity Modeling Methods

    Directory of Open Access Journals (Sweden)

    Samuel A. Cushman

    2014-12-01

    Full Text Available There have been few assessments of the performance of alternative resistance surfaces, and little is known about how connectivity modeling approaches differ in their ability to predict organism movements. In this paper, we evaluate the performance of four connectivity modeling approaches applied to two resistance surfaces in predicting the locations of highway crossings by American black bears in the northern Rocky Mountains, USA. We found that a resistance surface derived directly from movement data greatly outperformed a resistance surface produced from analysis of genetic differentiation, despite their heuristic similarities. Our analysis also suggested differences in the performance of different connectivity modeling approaches. Factorial least cost paths appeared to slightly outperform other methods on the movement-derived resistance surface, but had very poor performance on the resistance surface obtained from multi-model landscape genetic analysis. Cumulative resistant kernels appeared to offer the best combination of high predictive performance and sensitivity to differences in resistance surface parameterization. Our analysis highlights that even when two resistance surfaces include the same variables and have a high spatial correlation of resistance values, they may perform very differently in predicting animal movement and population connectivity.

  14. Consensus clustering approach to group brain connectivity matrices

    Directory of Open Access Journals (Sweden)

    Javier Rasero

    2017-10-01

    Full Text Available A novel approach rooted in the notion of consensus clustering, a strategy developed for community detection in complex networks, is proposed to cope with the heterogeneity that characterizes connectivity matrices in health and disease. The method can be summarized as follows: (a) define, for each node, a distance matrix for the set of subjects by comparing the connectivity pattern of that node in all pairs of subjects; (b) cluster the distance matrix for each node; (c) build the consensus network from the corresponding partitions; and (d) extract groups of subjects by finding the communities of the consensus network thus obtained. Different from the previous implementations of consensus clustering, we thus propose to use the consensus strategy to combine the information arising from the connectivity patterns of each node. The proposed approach may be seen either as an exploratory technique or as an unsupervised pretraining step to help the subsequent construction of a supervised classifier. Applications on a toy model and two real datasets show the effectiveness of the proposed methodology, which represents heterogeneity of a set of subjects in terms of a weighted network, the consensus matrix.
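Step (c) of the procedure, building the consensus matrix from the per-node partitions, can be sketched as follows. This is a minimal illustration assuming one label array per node, not the authors' implementation:

```python
import numpy as np

def consensus_matrix(partitions):
    """Consensus (co-assignment) matrix from a list of partitions.

    partitions: list of label arrays, one per node, each of length
    n_subjects. Entry (i, j) of the result is the fraction of partitions
    that place subjects i and j in the same cluster.
    """
    n = len(partitions[0])
    C = np.zeros((n, n))
    for labels in partitions:
        labels = np.asarray(labels)
        # co-assignment indicator for this node's partition
        C += (labels[:, None] == labels[None, :]).astype(float)
    return C / len(partitions)

# Toy example: six subjects in two groups; every node's partition agrees,
# so the consensus matrix is block-diagonal with entries 0 or 1.
parts = [np.array([0, 0, 0, 1, 1, 1]) for _ in range(5)]
C = consensus_matrix(parts)
```

Community detection on the weighted network C then yields the groups of subjects (step (d)).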

  15. Modelling heterogeneous ice nucleation on mineral dust and soot with parameterizations based on laboratory experiments

    Science.gov (United States)

    Hoose, C.; Hande, L. B.; Mohler, O.; Niemand, M.; Paukert, M.; Reichardt, I.; Ullrich, R.

    2016-12-01

    Between 0 and -37°C, ice formation in clouds is triggered by aerosol particles acting as heterogeneous ice nuclei. At lower temperatures, heterogeneous ice nucleation on aerosols can occur at lower supersaturations than homogeneous freezing of solutes. In laboratory experiments, the ice nucleation ability of different aerosol species (e.g. desert dusts, soot, biological particles) has been studied in detail and quantified via various theoretical or empirical parameterization approaches. For experiments in the AIDA cloud chamber, we have quantified the ice nucleation efficiency via a temperature- and supersaturation-dependent ice nucleation active site density. Here we present a new empirical parameterization scheme for immersion and deposition ice nucleation on desert dust and soot based on these experimental data. The application of this parameterization to the simulation of cirrus clouds, deep convective clouds and orographic clouds will be shown, including the extension of the scheme to the treatment of freezing of rain drops. The results are compared to other heterogeneous ice nucleation schemes. Furthermore, an aerosol-dependent parameterization of contact ice nucleation is presented.

  16. Inheritance versus parameterization

    DEFF Research Database (Denmark)

    Ernst, Erik

    2013-01-01

    This position paper argues that inheritance and parameterization differ in their fundamental structure, even though they may emulate each other in many ways. Based on this, we claim that certain mechanisms, e.g., final classes, are in conflict with the nature of inheritance, and hence cause...

  17. The connection-set algebra--a novel formalism for the representation of connectivity structure in neuronal network models.

    Science.gov (United States)

    Djurfeldt, Mikael

    2012-07-01

    The connection-set algebra (CSA) is a novel and general formalism for the description of connectivity in neuronal network models, from small-scale to large-scale structure. The algebra provides operators to form more complex sets of connections from simpler ones and also provides parameterization of such sets. CSA is expressive enough to describe a wide range of connection patterns, including multiple types of random and/or geometrically dependent connectivity, and can serve as a concise notation for network structure in scientific writing. CSA implementations allow for scalable and efficient representation of connectivity in parallel neuronal network simulators and could even allow for avoiding explicit representation of connections in computer memory. The expressiveness of CSA makes prototyping of network structure easy. A C++ version of the algebra has been implemented and used in a large-scale neuronal network simulation (Djurfeldt et al., IBM J Res Dev 52(1/2):31-42, 2008b) and an implementation in Python has been publicly released.
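The flavour of the algebra, forming complex connection sets from simpler ones with set operators, can be imitated with ordinary Python sets of (source, target) pairs. This toy sketch does not reproduce the released CSA library's API; the generator names are invented for illustration:

```python
import itertools
import random

def all_to_all(sources, targets):
    """Full connectivity: every source connects to every target."""
    return set(itertools.product(sources, targets))

def one_to_one(sources, targets):
    """Connect source k to target k."""
    return set(zip(sources, targets))

def bernoulli(sources, targets, p, seed=0):
    """Random connectivity with independent connection probability p."""
    rng = random.Random(seed)
    return {c for c in itertools.product(sources, targets)
            if rng.random() < p}

# Operators on connection sets are then plain set algebra; for example,
# full connectivity minus self-matching connections:
pre, post = range(4), range(4)
mask = all_to_all(pre, post) - one_to_one(pre, post)
```

Intersection, union and difference compose in the same way, which is what makes the algebraic notation concise for describing network structure.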

  18. A general framework for thermodynamically consistent parameterization and efficient sampling of enzymatic reactions.

    Directory of Open Access Journals (Sweden)

    Pedro Saa

    2015-04-01

    Full Text Available Kinetic models provide the means to understand and predict the dynamic behaviour of enzymes upon different perturbations. Despite their obvious advantages, classical parameterizations require large amounts of data to fit their parameters. Particularly, enzymes displaying complex reaction and regulatory (allosteric) mechanisms require a great number of parameters and are therefore often represented by approximate formulae, thereby facilitating the fitting but ignoring many real kinetic behaviours. Here, we show that full exploration of the plausible kinetic space for any enzyme can be achieved using sampling strategies provided a thermodynamically feasible parameterization is used. To this end, we developed a General Reaction Assembly and Sampling Platform (GRASP) capable of consistently parameterizing and sampling accurate kinetic models using minimal reference data. The former integrates the generalized MWC model and the elementary reaction formalism. By formulating the appropriate thermodynamic constraints, our framework enables parameterization of any oligomeric enzyme kinetics without sacrificing complexity or using simplifying assumptions. This thermodynamically safe parameterization relies on the definition of a reference state upon which feasible parameter sets can be efficiently sampled. Uniform sampling of the kinetics space enabled dissecting enzyme catalysis and revealing the impact of thermodynamics on reaction kinetics. Our analysis distinguished three reaction elasticity regions for common biochemical reactions: a steep linear region (0 > ΔGr > -2 kJ/mol), a transition region (-2 > ΔGr > -20 kJ/mol) and a constant elasticity region (ΔGr < -20 kJ/mol). We also applied this framework to model more complex kinetic behaviours such as the monomeric cooperativity of the mammalian glucokinase and the ultrasensitive response of the phosphoenolpyruvate carboxylase of Escherichia coli. In both cases, our approach described appropriately not only

  19. A stochastic parameterization for deep convection using cellular automata

    Science.gov (United States)

    Bengtsson, L.; Steinheimer, M.; Bechtold, P.; Geleyn, J.

    2012-12-01

    Cumulus parameterizations used in most operational weather and climate models today are based on the mass-flux concept which took form in the early 1970s. In such schemes it is assumed that a unique relationship exists between the ensemble-average of the sub-grid convection, and the instantaneous state of the atmosphere in a vertical grid box column. However, such a relationship is unlikely to be described by a simple deterministic function (Palmer, 2011). Thus, because of the statistical nature of the parameterization challenge, it has been recognized by the community that it is important to introduce stochastic elements to the parameterizations (for instance: Plant and Craig, 2008, Khouider et al. 2010, Frenkel et al. 2011, Bengtsson et al. 2011, but the list is far from exhaustive). There are undoubtedly many ways in which stochasticity can enter new developments. In this study we use a two-way interacting cellular automaton (CA), as its intrinsic nature possesses many qualities interesting for deep convection parameterization. In the one-dimensional entraining plume approach, there is no parameterization of horizontal transport of heat, moisture or momentum due to cumulus convection. In reality, mass transport due to gravity waves that propagate in the horizontal can trigger new convection, important for the organization of deep convection (Huang, 1988). The self-organizational characteristics of the CA allow for lateral communication between adjacent NWP model grid-boxes, and temporal memory. Thus the CA scheme used in this study contains three interesting components for representation of cumulus convection, which are not present in the traditional one-dimensional bulk entraining plume method: horizontal communication, memory and stochasticity. The scheme is implemented in the high resolution regional NWP model ALARO, and simulations show enhanced organization of convective activity along squall-lines.
    Probabilistic evaluation demonstrates an enhanced spread in
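A minimal cellular automaton with the lateral communication, memory and stochasticity described above can be sketched as follows. The update rule, a Game-of-Life-style neighbour count plus an optional stochastic birth term, is an illustrative stand-in, not the scheme implemented in ALARO:

```python
import numpy as np

def ca_step(state, rng, p_spawn=0.0):
    """One update of a toy convection CA on a 2D grid of NWP grid boxes.

    A cell stays or becomes active depending on its 8 neighbours
    (lateral communication) and its own current state (memory);
    p_spawn adds a small stochastic birth probability.
    """
    # count the 8 neighbours with periodic boundaries
    nbr = sum(np.roll(np.roll(state, i, 0), j, 1)
              for i in (-1, 0, 1) for j in (-1, 0, 1)
              if (i, j) != (0, 0))
    alive = (state == 1) & ((nbr == 2) | (nbr == 3))
    born = (state == 0) & ((nbr == 3) | (rng.random(state.shape) < p_spawn))
    return (alive | born).astype(int)

# Evolve a randomly seeded grid; active clusters self-organize over time.
rng = np.random.default_rng(1)
state = (rng.random((32, 32)) < 0.3).astype(int)
for _ in range(10):
    state = ca_step(state, rng)
```

In a parameterization context, each CA cell would sit inside (or below) an NWP grid box, and the active-cell fraction could feed back on the convection scheme.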

  20. Elastic orthorhombic anisotropic parameter inversion: An analysis of parameterization

    KAUST Repository

    Oh, Juwon

    2016-09-15

    The resolution of a multiparameter full-waveform inversion (FWI) is highly influenced by the parameterization used in the inversion algorithm, as well as by the data quality and the sensitivity of the data to the elastic parameters, because the scattering patterns of the partial derivative wavefields (PDWs) vary with parameterization. For this reason, it is important to identify an optimal parameterization for elastic orthorhombic FWI by analyzing the radiation patterns of the PDWs for many reasonable model parameterizations. We have promoted a parameterization that allows for the separation of the anisotropic properties in the radiation patterns. The central parameter of this parameterization is the horizontal P-wave velocity, with an isotropic scattering potential, influencing the data at all scales and directions. This parameterization decouples the influence of the scattering potential given by the P-wave velocity perturbation from the polar changes described by two dimensionless parameter perturbations and from the azimuthal variation given by three additional dimensionless parameter perturbations. In addition, the scattering potentials of the P-wave velocity perturbation are also decoupled from the elastic influences given by one S-wave velocity and two additional dimensionless parameter perturbations. The vertical S-wave velocity is chosen because it offers the best resolution, obtained from S-wave reflections and converted waves, and has little influence on P-waves in conventional surface seismic acquisition. The influence of the density on observed data can be absorbed by one anisotropic parameter that has a similar radiation pattern. The additional seven dimensionless parameters describe the polar and azimuth variations in the P- and S-waves that we may acquire, with some of the parameters having distinct influences on the recorded data on the earth's surface. These characteristics of the new parameterization offer the potential for a multistage inversion from high symmetry

  2. New Parameterizations for Neutral and Ion-Induced Sulfuric Acid-Water Particle Formation in Nucleation and Kinetic Regimes

    Science.gov (United States)

    Määttänen, Anni; Merikanto, Joonas; Henschel, Henning; Duplissy, Jonathan; Makkonen, Risto; Ortega, Ismael K.; Vehkamäki, Hanna

    2018-01-01

    We have developed new parameterizations of electrically neutral homogeneous and ion-induced sulfuric acid-water particle formation for large ranges of environmental conditions, based on an improved model that has been validated against a particle formation rate data set produced by Cosmics Leaving OUtdoor Droplets (CLOUD) experiments at the European Organization for Nuclear Research (CERN). The model uses a thermodynamically consistent version of the Classical Nucleation Theory normalized using quantum chemical data. Unlike the earlier parameterizations for H2SO4-H2O nucleation, the model is applicable to extreme dry conditions where the one-component sulfuric acid limit is approached. Parameterizations are presented for the critical cluster sulfuric acid mole fraction, the critical cluster radius, the total number of molecules in the critical cluster, and the particle formation rate. If the critical cluster contains only one sulfuric acid molecule, a simple formula for kinetic particle formation can be used; this threshold has also been parameterized. The parameterization for electrically neutral particle formation is valid for the following ranges: temperatures 165-400 K, sulfuric acid concentrations 10^4-10^13 cm^-3, and relative humidities 0.001-100%. The ion-induced particle formation parameterization is valid for temperatures 195-400 K, sulfuric acid concentrations 10^4-10^16 cm^-3, and relative humidities 10^-5-100%. The new parameterizations are thus applicable for the full range of conditions in the Earth's atmosphere relevant for binary sulfuric acid-water particle formation, including both tropospheric and stratospheric conditions. They are also suitable for describing particle formation in the atmosphere of Venus.
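The quoted validity ranges lend themselves to a simple guard before evaluating the neutral-formation branch of such a scheme. This helper is an illustrative sketch based only on the ranges stated above, not code from the paper:

```python
def in_neutral_range(T, c_h2so4, rh):
    """Check the stated validity range of the neutral particle formation
    parameterization: T in 165-400 K, sulfuric acid concentration in
    1e4-1e13 cm^-3, relative humidity in 0.001-100 %."""
    return (165.0 <= T <= 400.0
            and 1e4 <= c_h2so4 <= 1e13
            and 1e-3 <= rh <= 100.0)

# Typical boundary-layer conditions fall inside the range;
# very cold or very clean conditions fall outside.
ok = in_neutral_range(298.0, 1e7, 50.0)
```

An analogous guard with the 195-400 K, 1e4-1e16 cm^-3 and 1e-5-100 % bounds would cover the ion-induced branch.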

  3. A Solar Radiation Parameterization for Atmospheric Studies. Volume 15

    Science.gov (United States)

    Chou, Ming-Dah; Suarez, Max J. (Editor)

    1999-01-01

    The solar radiation parameterization (CLIRAD-SW) developed at the Goddard Climate and Radiation Branch for application to atmospheric models is described. It includes the absorption by water vapor, O3, O2, CO2, clouds, and aerosols and the scattering by clouds, aerosols, and gases. Depending upon the nature of absorption, different approaches are applied to different absorbers. In the ultraviolet and visible regions, the spectrum is divided into 8 bands, and a single O3 absorption coefficient and Rayleigh scattering coefficient are used for each band. In the infrared, the spectrum is divided into 3 bands, and the k-distribution method is applied for water vapor absorption. The flux reduction due to O2 is derived from a simple function, while the flux reduction due to CO2 is derived from precomputed tables. Cloud single-scattering properties are parameterized, separately for liquid drops and ice, as functions of water amount and effective particle size. A maximum-random approximation is adopted for the overlapping of clouds at different heights. Fluxes are computed using the Delta-Eddington approximation.

  4. Parameterization analysis and inversion for orthorhombic media

    KAUST Repository

    Masmoudi, Nabil

    2018-05-01

    Accounting for azimuthal anisotropy is necessary for the processing and inversion of wide-azimuth and wide-aperture seismic data because wave speeds naturally depend on the wave propagation direction. Orthorhombic anisotropy is considered the most effective anisotropic model that approximates the azimuthal anisotropy we observe in seismic data. In the framework of full-waveform inversion (FWI), the large number of parameters describing orthorhombic media exerts a considerable trade-off and increases the non-linearity of the inversion problem. Choosing a suitable parameterization for the model, and identifying which parameters in that parameterization could be well resolved, are essential to a successful inversion. In this thesis, I derive the radiation patterns for different acoustic orthorhombic parameterizations. Analyzing the angular dependence of the scattering of the parameters of different parameterizations, starting with the conventionally used notation, I assess the potential trade-off between the parameters and the resolution in describing the data and inverting for the parameters. In order to build practical inversion strategies, I suggest new parameters (called deviation parameters) for a new parameterization style in orthorhombic media. The novel parameters, denoted εd, ηd and δd, are dimensionless and represent a measure of deviation between the vertical planes in orthorhombic anisotropy. The main feature of the deviation parameters consists of keeping the scattering of the vertical transversely isotropic (VTI) parameters stationary with azimuth. Using these scattering features, we can condition FWI to invert for the parameters which the data are sensitive to, at different stages, scales, and locations in the model. With this parameterization, the data are mainly sensitive to the scattering of 3 parameters (out of the six that describe an acoustic orthorhombic medium): the horizontal velocity in the x1 direction, ε1, which provides scattering mainly near

  5. Droplet Nucleation: Physically-Based Parameterizations and Comparative Evaluation

    Directory of Open Access Journals (Sweden)

    Steve Ghan

    2011-10-01

    Full Text Available One of the greatest sources of uncertainty in simulations of climate and climate change is the influence of aerosols on the optical properties of clouds. The root of this influence is the droplet nucleation process, which involves the spontaneous growth of aerosol into cloud droplets at cloud edges, during the early stages of cloud formation, and in some cases within the interior of mature clouds. Numerical models of droplet nucleation represent much of the complexity of the process, but at a computational cost that limits their application to simulations of hours or days. Physically-based parameterizations of droplet nucleation are designed to quickly estimate the number nucleated as a function of the primary controlling parameters: the aerosol number size distribution, hygroscopicity and cooling rate. Here we compare and contrast the key assumptions used in developing each of the most popular parameterizations and compare their performances under a variety of conditions. We find that the more complex parameterizations perform well under a wider variety of nucleation conditions, but all parameterizations perform well under the most common conditions. We then discuss the various applications of the parameterizations to cloud-resolving, regional and global models to study aerosol effects on clouds at a wide range of spatial and temporal scales. We compare estimates of anthropogenic aerosol indirect effects using two different parameterizations applied to the same global climate model, and find that the estimates of indirect effects differ by only 10%. We conclude with a summary of the outstanding challenges remaining for further development and application.
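As a concrete flavour of such parameterizations, the activated fraction of a lognormal aerosol number distribution above a critical dry diameter reduces to a complementary error function. This standard building block appears in several physically based activation schemes; it is shown here only as an illustration, with hypothetical argument names and values:

```python
import math

def activated_fraction(d_crit, d_median, sigma_g):
    """Fraction of a lognormal aerosol number-size distribution whose dry
    diameter exceeds the critical activation diameter d_crit.

    d_median is the geometric median diameter, sigma_g the geometric
    standard deviation (> 1). Units cancel, so any consistent diameter
    unit works.
    """
    u = math.log(d_crit / d_median) / (math.sqrt(2.0) * math.log(sigma_g))
    return 0.5 * math.erfc(u)

# With d_crit at the median diameter, exactly half the particles activate.
half = activated_fraction(0.1, 0.1, 2.0)
```

In a full parameterization, d_crit itself would come from the maximum supersaturation, which depends on the cooling rate and the hygroscopicity mentioned above.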

  6. Single-Column Modeling, GCM Parameterizations and Atmospheric Radiation Measurement Data

    International Nuclear Information System (INIS)

    Somerville, R.C.J.; Iacobellis, S.F.

    2005-01-01

    Our overall goal is identical to that of the Atmospheric Radiation Measurement (ARM) Program: the development of new and improved parameterizations of cloud-radiation effects and related processes, using ARM data at all three ARM sites, and the implementation and testing of these parameterizations in global and regional models. To test recently developed prognostic parameterizations based on detailed cloud microphysics, we have first compared single-column model (SCM) output with ARM observations at the Southern Great Plains (SGP), North Slope of Alaska (NSA) and Tropical Western Pacific (TWP) sites. We focus on the predicted cloud amounts and on a suite of radiative quantities strongly dependent on clouds, such as downwelling surface shortwave radiation. Our results demonstrate the superiority of parameterizations based on comprehensive treatments of cloud microphysics and cloud-radiative interactions. At the SGP and NSA sites, the SCM results simulate the ARM measurements well and are demonstrably more realistic than typical parameterizations found in conventional operational forecasting models. At the TWP site, the model performance depends strongly on details of the scheme, and the results of our diagnostic tests suggest ways to develop improved parameterizations better suited to simulating cloud-radiation interactions in the tropics generally. These advances have made it possible to take the next step and build on this progress, by incorporating our parameterization schemes in state-of-the-art 3D atmospheric models, and diagnosing and evaluating the results using independent data. Because the improved cloud-radiation results have been obtained largely via implementing detailed and physically comprehensive cloud microphysics, we anticipate that improved predictions of hydrologic cycle components, and hence of precipitation, may also be achievable.
    We are currently testing the performance of our ARM-based parameterizations in state-of-the-art global and regional

  7. A multiresolution spatial parameterization for the estimation of fossil-fuel carbon dioxide emissions via atmospheric inversions

    Directory of Open Access Journals (Sweden)

    J. Ray

    2014-09-01

    Full Text Available The characterization of fossil-fuel CO2 (ffCO2) emissions is paramount to carbon cycle studies, but the use of atmospheric inverse modeling approaches for this purpose has been limited by the highly heterogeneous and non-Gaussian spatiotemporal variability of emissions. Here we explore the feasibility of capturing this variability using a low-dimensional parameterization that can be implemented within the context of atmospheric CO2 inverse problems aimed at constraining regional-scale emissions. We construct a multiresolution (i.e., wavelet-based) spatial parameterization for ffCO2 emissions using the Vulcan inventory, and examine whether such a parameterization can capture a realistic representation of the expected spatial variability of actual emissions. We then explore whether sub-selecting wavelets using two easily available proxies of human activity (images of lights at night and maps of built-up areas) yields a low-dimensional alternative. We finally implement this low-dimensional parameterization within an idealized inversion, where a sparse reconstruction algorithm, an extension of stagewise orthogonal matching pursuit (StOMP), is used to identify the wavelet coefficients. We find that (i) the spatial variability of fossil-fuel emissions can indeed be represented using a low-dimensional wavelet-based parameterization, (ii) that images of lights at night can be used as a proxy for sub-selecting wavelets for such analysis, and (iii) that implementing this parameterization within the described inversion framework makes it possible to quantify fossil-fuel emissions at regional scales if fossil-fuel-only CO2 observations are available.
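The premise that a spatially sparse emissions field is captured by a few wavelet coefficients can be demonstrated with a single-level 2D Haar transform. This sketch uses a toy field, not the authors' Vulcan-based construction:

```python
import numpy as np

def haar2d(a):
    """One level of a 2D Haar transform: average plus three detail bands.

    a must have even dimensions; each output band has half the resolution.
    """
    ll = (a[0::2, 0::2] + a[0::2, 1::2] + a[1::2, 0::2] + a[1::2, 1::2]) / 4
    lh = (a[0::2, 0::2] - a[0::2, 1::2] + a[1::2, 0::2] - a[1::2, 1::2]) / 4
    hl = (a[0::2, 0::2] + a[0::2, 1::2] - a[1::2, 0::2] - a[1::2, 1::2]) / 4
    hh = (a[0::2, 0::2] - a[0::2, 1::2] - a[1::2, 0::2] + a[1::2, 1::2]) / 4
    return ll, lh, hl, hh

# A field with one localized source needs only one nonzero coefficient.
field = np.zeros((8, 8))
field[2:4, 2:4] = 5.0                      # one aligned 2x2 "emitter"
ll, lh, hl, hh = haar2d(field)
coeffs = np.concatenate([c.ravel() for c in (ll, lh, hl, hh)])
n_big = int(np.sum(np.abs(coeffs) > 1e-9))
```

A sparse reconstruction algorithm such as StOMP exploits exactly this property: only the few large coefficients need to be identified from the observations.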

  8. An energetically consistent vertical mixing parameterization in CCSM4

    DEFF Research Database (Denmark)

    Nielsen, Søren Borg; Jochum, Markus; Eden, Carsten

    2018-01-01

    An energetically consistent stratification-dependent vertical mixing parameterization is implemented in the Community Climate System Model 4 and forced with energy conversion from the barotropic tides to internal waves. The structures of the resulting dissipation and diffusivity fields are compared.... The resulting ocean state, however, depends greatly on the details of the vertical mixing parameterizations, where the new energetically consistent parameterization results in low thermocline diffusivities and a sharper and shallower thermocline. It is also investigated if the ocean state is more sensitive to a change in forcing...

  9. Parameterizing the Spatial Markov Model from Breakthrough Curve Data Alone

    Science.gov (United States)

    Sherman, T.; Bolster, D.; Fakhari, A.; Miller, S.; Singha, K.

    2017-12-01

    The spatial Markov model (SMM) uses a correlated random walk and has been shown to effectively capture anomalous transport in porous media systems; in the SMM, particles' future trajectories are correlated to their current velocity. It is common practice to use a priori Lagrangian velocity statistics obtained from high resolution simulations to determine a distribution of transition probabilities (correlation) between velocity classes that govern predicted transport behavior; however, this approach is computationally cumbersome. Here, we introduce a methodology to quantify velocity correlation from breakthrough curve (BTC) data alone; discretizing two measured BTCs into a set of arrival times and reverse engineering the rules of the SMM allows for prediction of velocity correlation, thereby enabling parameterization of the SMM in studies where Lagrangian velocity statistics are not available. The introduced methodology is applied to estimate velocity correlation from BTCs measured in high resolution simulations, thus allowing for a comparison of estimated parameters with known simulated values. Results show that 1) estimated transition probabilities agree with simulated values and 2) using the SMM with the estimated parameterization accurately predicts BTCs downstream. Additionally, we include uncertainty measurements by calculating lower and upper estimates of velocity correlation, which allow for prediction of a range of BTCs. The simulated BTCs fall in the range of predicted BTCs. This research proposes a novel method to parameterize the SMM from BTC data alone, thereby reducing the SMM's computational costs and widening its applicability.
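The forward rules of an SMM, a fixed-distance hop whose velocity class is drawn from a transition matrix conditioned on the current class, can be sketched as follows. The two-class setup and all parameter values are hypothetical, chosen only to show how a transition matrix shapes arrival times:

```python
import numpy as np

def smm_arrival_times(P, velocities, n_particles, n_steps, dx=1.0, seed=0):
    """Toy spatial Markov model: each particle hops a fixed distance dx
    per step while carrying a velocity class; the next class is drawn
    from transition matrix P conditioned on the current class
    (a correlated random walk)."""
    rng = np.random.default_rng(seed)
    n_classes = len(velocities)
    times = np.zeros(n_particles)
    for p in range(n_particles):
        c = rng.integers(n_classes)            # initial class, uniform
        t = 0.0
        for _ in range(n_steps):
            t += dx / velocities[c]            # travel time for this hop
            c = rng.choice(n_classes, p=P[c])  # correlated transition
        times[p] = t
    return times

# A diagonal-heavy P (strong correlation) keeps particles in their class,
# broadening the distribution of arrival times (the breakthrough curve).
P = np.array([[0.9, 0.1], [0.1, 0.9]])
bt = smm_arrival_times(P, velocities=np.array([0.1, 1.0]),
                       n_particles=200, n_steps=50)
```

Parameterizing the model from BTC data then amounts to inverting for P given two measured arrival-time distributions at different distances.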

  10. Parameterization of the dielectric function of semiconductor nanocrystals

    Energy Technology Data Exchange (ETDEWEB)

    Petrik, P., E-mail: petrik@mfa.kfki.hu

    2014-11-15

    Optical methods like spectroscopic ellipsometry are sensitive to the structural properties of semiconductor films such as crystallinity or grain size. The imaginary part of the dielectric function is proportional to the joint density of electronic states. Consequently, the analysis of the dielectric function around the critical point energies provides useful information about the electron band structure and all related parameters like the grain structure, band gap, temperature, composition, phase structure, and carrier mobility. In this work an attempt is made to present a selection of the approaches to parameterize and analyze the dielectric function of semiconductors, as well as some applications.
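A common starting point for such parameterizations is a Lorentz oscillator centred on a critical-point energy. The sketch below is a generic illustration with hypothetical parameter values, not a specific model from the work:

```python
import numpy as np

def lorentz_eps(E, amp, E0, gamma, eps_inf=1.0):
    """Lorentz-oscillator model of the complex dielectric function near a
    critical-point energy E0 (oscillator amplitude amp, broadening gamma,
    all energies in eV). Returns eps(E) = eps1 + i*eps2."""
    return eps_inf + amp * E0**2 / (E0**2 - E**2 - 1j * gamma * E)

# Evaluate over the visible/UV range; the imaginary part peaks near E0,
# mimicking the joint density of states around a critical point.
E = np.linspace(1.0, 5.0, 401)
eps = lorentz_eps(E, amp=2.0, E0=3.4, gamma=0.1)
```

In practice a sum of such oscillators (or more refined critical-point line shapes) is fitted to ellipsometric spectra, and the fitted E0 and gamma carry the information about grain size, composition and temperature mentioned above.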

  11. Parameterization and measurements of helical magnetic fields

    International Nuclear Information System (INIS)

    Fischer, W.; Okamura, M.

    1997-01-01

    Magnetic fields with helical symmetry can be parameterized using multipole coefficients (a_n, b_n). We present a parameterization that gives the familiar multipole coefficients (a_n, b_n) for straight magnets when the helical wavelength tends to infinity. To measure helical fields, all methods used for straight magnets can be employed. We show how to convert the results of those measurements to obtain the desired helical multipole coefficients (a_n, b_n).
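For reference, the straight-magnet multipole expansion that the helical coefficients must reduce to in the long-wavelength limit can be written, in a standard accelerator-physics convention (assumed here, with reference radius r_0 and reference field B_ref), as:

```latex
B_y + i B_x = B_{\mathrm{ref}} \sum_{n=0}^{\infty} \left( b_n + i\, a_n \right)
              \left( \frac{x + i y}{r_0} \right)^{n}
```

Here b_n are the normal and a_n the skew multipole coefficients; the helical parameterization generalizes these to fields whose transverse pattern rotates along the magnet axis.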

  12. Focal species and landscape "naturalness" corridor models offer complementary approaches for connectivity conservation planning

    Science.gov (United States)

    Meade Krosby; Ian Breckheimer; D. John Pierce; Peter H. Singleton; Sonia A. Hall; Karl C. Halupka; William L. Gaines; Robert A. Long; Brad H. McRae; Brian L. Cosentino; Joanne P. Schuett-Hames

    2015-01-01

Context: The dual threats of habitat fragmentation and climate change have led to a proliferation of approaches for connectivity conservation planning. Corridor analyses have traditionally taken a focal species approach, but the landscape "naturalness" approach of modeling connectivity among areas of low human modification has gained popularity...

  13. Stochastic parameterizing manifolds and non-Markovian reduced equations: stochastic manifolds for nonlinear SPDEs II

    CERN Document Server

    Chekroun, Mickaël D; Wang, Shouhong

    2015-01-01

In this second volume, a general approach is developed to provide approximate parameterizations of the "small" scales by the "large" ones for a broad class of stochastic partial differential equations (SPDEs). This is accomplished via the concept of parameterizing manifolds (PMs), which are stochastic manifolds that improve, in mean square error and for a given realization of the noise, the partial knowledge of the full SPDE solution compared to its projection onto some resolved modes. Backward-forward systems are designed to give access to such PMs in practice. The key idea consists of representing the modes with high wave numbers as a pullback limit depending on the time history of the modes with low wave numbers. Non-Markovian stochastic reduced systems are then derived based on such a PM approach. The reduced systems take the form of stochastic differential equations involving random coefficients that convey memory effects. The theory is illustrated on a stochastic Burgers-type equation.

  14. Amplification of intrinsic emittance due to rough metal cathodes: Formulation of a parameterization model

    Energy Technology Data Exchange (ETDEWEB)

    Charles, T.K. [School of Physics and Astronomy, Monash University, Clayton, Victoria, 3800 (Australia); Australian Synchrotron, 800 Blackburn Road, Clayton, Victoria, 3168 (Australia); Paganin, D.M. [School of Physics and Astronomy, Monash University, Clayton, Victoria, 3800 (Australia); Dowd, R.T. [Australian Synchrotron, 800 Blackburn Road, Clayton, Victoria, 3168 (Australia)

    2016-08-21

Intrinsic emittance is often the limiting factor for brightness in fourth-generation light sources, and as such, a good understanding of the factors affecting intrinsic emittance is essential in order to be able to decrease it. Here we present a parameterization model describing the proportional increase in emittance induced by cathode surface roughness. One major benefit of the parameterization approach presented here is that it takes the complexity of a Monte Carlo model and reduces the results to a straightforward empirical model. The resulting models describe the proportional increase in transverse momentum introduced by surface roughness, and are applicable to various metal types, photon wavelengths, applied electric fields, and cathode surface terrains. The analysis includes the increase in emittance due to changes in the electric field induced by roughness, as well as the increase in transverse momentum resulting from the spatially varying surface normal. We also compare the results of the parameterization model to an analytical model which employs various approximations to produce a more compact expression at the cost of reduced accuracy.

  15. Parameterized data-driven fuzzy model based optimal control of a semi-batch reactor.

    Science.gov (United States)

    Kamesh, Reddi; Rani, K Yamuna

    2016-09-01

A parameterized data-driven fuzzy (PDDF) model structure is proposed for semi-batch processes, and its application to optimal control is illustrated. The orthonormally parameterized input trajectories, initial states, and process parameters are the inputs to the model, which predicts the output trajectories in terms of Fourier coefficients. Fuzzy rules are formulated based on the signs of a linear data-driven model, while the defuzzification step incorporates a linear regression model to shift from the input domain to the output domain. The fuzzy model is employed to formulate an optimal control problem for single-rate as well as multi-rate systems. A simulation study on a multivariable semi-batch reactor system reveals that the proposed PDDF modeling approach captures the nonlinear and time-varying behavior inherent in the semi-batch system fairly accurately. The results of operating-trajectory optimization using the proposed model are comparable to those obtained using the exact first-principles model, and comparable to or better than results based on a parameterized data-driven artificial neural network model. Copyright © 2016 ISA. Published by Elsevier Ltd. All rights reserved.

  16. Menangkal Serangan SQL Injection Dengan Parameterized Query [Countering SQL Injection Attacks with Parameterized Queries]

    Directory of Open Access Journals (Sweden)

    Yulianingsih Yulianingsih

    2016-06-01

Full Text Available As the growth of information services accelerates, so does the security vulnerability of information sources. This article presents experimental research on SQL injection attacks against databases. The attacks were carried out through the authentication page, since this page is the first gate of access and should therefore have adequate defenses. Experiments were then conducted on the parameterized query method to obtain a solution to this problem.   Keywords: information services, attacks, experiment, SQL Injection, Parameterized Query.
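A minimal sketch of the contrast the article investigates, using Python's sqlite3 module (the table, credentials, and function names are hypothetical):

```python
import sqlite3

# In-memory demo database with one user (illustrative only).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, password TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cret')")

def login_vulnerable(name, password):
    # DANGEROUS: string concatenation lets input rewrite the SQL itself.
    sql = ("SELECT COUNT(*) FROM users WHERE name = '%s' "
           "AND password = '%s'" % (name, password))
    return conn.execute(sql).fetchone()[0] > 0

def login_parameterized(name, password):
    # SAFE: placeholders keep user input as data, never as SQL code.
    sql = "SELECT COUNT(*) FROM users WHERE name = ? AND password = ?"
    return conn.execute(sql, (name, password)).fetchone()[0] > 0

payload = "' OR '1'='1"   # classic SQL-injection string
```

The payload bypasses the concatenated query (the `OR '1'='1'` clause makes the WHERE condition always true) but is treated as a literal string by the parameterized version.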

  17. A Parameterization of Dry Thermals and Shallow Cumuli for Mesoscale Numerical Weather Prediction

    Science.gov (United States)

    Pergaud, Julien; Masson, Valéry; Malardel, Sylvie; Couvreux, Fleur

    2009-07-01

For numerical weather prediction models and models resolving deep convection, shallow convective ascents are subgrid processes that are not parameterized by classical local turbulent schemes. The mass flux formulation of convective mixing is now largely accepted as an efficient approach for parameterizing the contribution of larger plumes in convective dry and cloudy boundary layers. We propose a new formulation of the EDMF scheme (for Eddy Diffusivity/Mass Flux) based on a single updraft that improves the representation of dry thermals and shallow convective clouds and conserves a correct representation of stratocumulus in mesoscale models. The definition of entrainment and detrainment in the dry part of the updraft is original, and is specified as proportional to the ratio of buoyancy to vertical velocity. In the cloudy part of the updraft, the classical buoyancy sorting approach is chosen. The main closure of the scheme is based on the mass flux near the surface, which is proportional to the sub-cloud layer convective velocity scale w*. The link with the prognostic grid-scale cloud content and cloud cover and the projection on the non-conservative variables is processed by the cloud scheme. The validation of this new formulation using large-eddy simulations focused on showing the robustness of the scheme in representing three different boundary layer regimes. For dry convective cases, this parameterization enables a correct representation of the countergradient zone where the mass flux part represents the top entrainment (IHOP case). It can also handle the diurnal cycle of boundary-layer cumulus clouds (EUROCS/ARM) and conserve a realistic evolution of stratocumulus (EUROCS/FIRE).

  18. ANALYSIS OF PARAMETERIZATION VALUE REDUCTION OF SOFT SETS AND ITS ALGORITHM

    Directory of Open Access Journals (Sweden)

    Mohammed Adam Taheir Mohammed

    2016-02-01

Full Text Available In this paper, the parameterization value reduction of soft sets and its algorithm for decision making are studied and described. It is based on the parameterization reduction of soft sets. The purpose of this study is to investigate the inherited disadvantages of parameterization reduction of soft sets and its algorithm. The algorithms presented in this study attempt to remove the least significant parameter values from the soft set. Through the analysis, two techniques are described. This study finds that parameterization reduction of soft sets and its algorithm yields inconsistent and suboptimal results.

  19. An explicit parameterization for casting constraints in gradient driven topology optimization

    DEFF Research Database (Denmark)

    Gersborg, Allan Roulund; Andreasen, Casper Schousboe

    2011-01-01

From a practical point of view it is often desirable to limit the complexity of a topology optimization design such that casting/milling type manufacturing techniques can be applied. In the context of gradient driven topology optimization this work studies how castable designs can be obtained by use of a Heaviside design parameterization in a specified casting direction. This reduces the number of design variables considerably and the approach is simple to implement.

  20. Automatic Generation of Symbolic Model for Parameterized Synchronous Systems

    Institute of Scientific and Technical Information of China (English)

    Wei-Wen Xu

    2004-01-01

With the purpose of making the verification of parameterized systems more general and easier, in this paper a new and intuitive language, PSL (Parameterized-system Specification Language), is proposed to specify a class of parameterized synchronous systems. From a PSL script, an automatic method is proposed to generate a constraint-based symbolic model. The model can concisely represent the collection of global states symbolically by counting the number of processes in a given state. Moreover, a theorem has been proved that there is a simulation relation between the original system and its symbolic model. Since abstract and symbolic techniques are exploited in the symbolic model, the state-explosion problem of traditional verification methods is avoided. Based on the proposed symbolic model, a reachability analysis procedure is implemented using ANSI C++ on a UNIX platform. Thus, a complete tool for verifying parameterized synchronous systems is obtained and tested on some cases. The experimental results show that the method is satisfactory.
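The counting idea behind the symbolic model, representing a global state by the number of processes in each local state rather than by individual process states, can be sketched for a toy mutual-exclusion protocol (the protocol and all names are illustrative, not the PSL tool itself):

```python
from collections import deque

def reachable_counts(n):
    """Counting abstraction of a toy mutex protocol with n identical
    processes. A global state is a tuple of (idle, trying, critical)
    counts; reachability is explored by breadth-first search."""
    start = (n, 0, 0)
    seen = {start}
    frontier = deque([start])
    while frontier:
        i, t, c = frontier.popleft()
        succs = []
        if i > 0:
            succs.append((i - 1, t + 1, c))    # idle -> trying
        if t > 0 and c == 0:
            succs.append((i, t - 1, c + 1))    # trying -> critical (if free)
        if c > 0:
            succs.append((i + 1, t, c - 1))    # critical -> idle
        for s in succs:
            if s not in seen:
                seen.add(s)
                frontier.append(s)
    return seen

# Mutual exclusion holds in every reachable counter state.
assert all(c <= 1 for _, _, c in reachable_counts(5))
```

The counter representation collapses all permutations of identical processes into one state, which is the mechanism by which such models sidestep state explosion.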

  1. Connecting the failure of K-theory inside and above vegetation canopies and ejection-sweep cycles by a large eddy simulation

    International Nuclear Information System (INIS)

    Banerjee, Tirtha; De Roo, Frederik; Mauder, Matthias

    2017-01-01

Parameterizations of biosphere-atmosphere interaction processes in climate models and other hydrological applications require characterization of turbulent transport of momentum and scalars between vegetation canopies and the atmosphere, which is often modeled using a turbulent analogy to molecular diffusion processes. However, simple flux-gradient approaches (K-theory) fail for canopy turbulence. One cause is turbulent transport by large coherent eddies at the canopy scale, which can be linked to sweep-ejection events and bear signatures of non-local organized eddy motions. K-theory, which parameterizes the turbulent flux or stress as proportional to the local concentration or velocity gradient, fails to account for these non-local organized motions. The connection between sweep-ejection cycles and the local turbulent flux can be traced back to the turbulence triple moment ⟨c′w′w′⟩. In this work, we use large-eddy simulation to investigate the diagnostic connection between the failure of K-theory and sweep-ejection motions. The analyzed schemes are quadrant analysis (QA) and complete and incomplete cumulant expansion methods (CEM and ICEM). The latter approaches introduce a turbulence timescale into the modeling. Furthermore, we find that the momentum flux needs a different formulation of the turbulence timescale than the sensible heat flux. In conclusion, accounting for buoyancy in stratified conditions is also deemed important, in addition to accounting for non-local events, to predict the correct momentum or scalar fluxes.

  2. Parameterizing the Spatial Markov Model From Breakthrough Curve Data Alone

    Science.gov (United States)

    Sherman, Thomas; Fakhari, Abbas; Miller, Savannah; Singha, Kamini; Bolster, Diogo

    2017-12-01

    The spatial Markov model (SMM) is an upscaled Lagrangian model that effectively captures anomalous transport across a diverse range of hydrologic systems. The distinct feature of the SMM relative to other random walk models is that successive steps are correlated. To date, with some notable exceptions, the model has primarily been applied to data from high-resolution numerical simulations and correlation effects have been measured from simulated particle trajectories. In real systems such knowledge is practically unattainable and the best one might hope for is breakthrough curves (BTCs) at successive downstream locations. We introduce a novel methodology to quantify velocity correlation from BTC data alone. By discretizing two measured BTCs into a set of arrival times and developing an inverse model, we estimate velocity correlation, thereby enabling parameterization of the SMM in studies where detailed Lagrangian velocity statistics are unavailable. The proposed methodology is applied to two synthetic numerical problems, where we measure all details and thus test the veracity of the approach by comparison of estimated parameters with known simulated values. Our results suggest that our estimated transition probabilities agree with simulated values and using the SMM with this estimated parameterization accurately predicts BTCs downstream. Our methodology naturally allows for estimates of uncertainty by calculating lower and upper bounds of velocity correlation, enabling prediction of a range of BTCs. The measured BTCs fall within the range of predicted BTCs. This novel method to parameterize the SMM from BTC data alone is quite parsimonious, thereby widening the SMM's practical applicability.

  3. Role of connected mobility concept for twenty-first-century cities—Trial approach for conceptualization of connected mobility through case studies

    Directory of Open Access Journals (Sweden)

    Fumihiko Nakamura

    2014-07-01

The author defined the idea of "mobility design" in the scope of urban transportation and explored the concept of connected mobility through case studies that the author has been involved in or researched. Although many important connections in and approaches to urban transportation have come to light, the process of actually working on such projects has uncovered many issues to address, such as sharing and social capital. The ability to design mobility as a connected entity and pursue our research topics from that perspective will be vital to overcoming these issues and helping the concept of connected mobility flourish.

  4. Parameterizing Coefficients of a POD-Based Dynamical System

    Science.gov (United States)

    Kalb, Virginia L.

    2010-01-01

A method of parameterizing the coefficients of a dynamical system based on a proper orthogonal decomposition (POD) representing the flow dynamics of a viscous fluid has been introduced. (A brief description of POD is presented in the immediately preceding article.) The present parameterization method is intended to enable construction of the dynamical system to accurately represent the temporal evolution of the flow dynamics over a range of Reynolds numbers. The need for this or a similar method arises as follows: A procedure that includes direct numerical simulation followed by POD, followed by Galerkin projection to a dynamical system, has been proven to enable representation of flow dynamics by a low-dimensional model at the Reynolds number of the simulation. However, a more difficult task is to obtain models that are valid over a range of Reynolds numbers. Extrapolation of low-dimensional models by use of straightforward Reynolds-number-based parameter continuation has proven inadequate for successful prediction of flows. A key part of the problem of constructing a dynamical system to accurately represent the temporal evolution of the flow dynamics over a range of Reynolds numbers is understanding and providing for the variation of the coefficients of the dynamical system with the Reynolds number. Prior methods do not enable capture of temporal dynamics over ranges of Reynolds numbers in low-dimensional models, and are not even satisfactory when large numbers of modes are used. The basic idea of the present method is to solve the problem through a suitable parameterization of the coefficients of the dynamical system. The parameterization computations involve utilization of the transfer of kinetic energy between modes as a function of Reynolds number. The thus-parameterized dynamical system accurately predicts the flow dynamics and is applicable to a range of flow problems in the dynamical regime around the Hopf bifurcation.
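As background to the POD step mentioned above, the decomposition itself reduces to a singular value decomposition of a snapshot matrix. A minimal sketch, where the synthetic "flow" and all names are illustrative:

```python
import numpy as np

def pod_modes(snapshots, r):
    """Compute the leading r POD modes of a snapshot matrix.
    snapshots: (n_space, n_time) array, one flow snapshot per column."""
    mean = snapshots.mean(axis=1, keepdims=True)
    X = snapshots - mean                      # fluctuations about the mean
    U, s, _ = np.linalg.svd(X, full_matrices=False)
    modes = U[:, :r]                          # orthonormal spatial POD modes
    energy = s[:r]**2 / (s**2).sum()          # fraction of energy per mode
    return mean, modes, energy

# Synthetic "flow": two traveling-wave components plus small noise.
rng = np.random.default_rng(1)
x = np.linspace(0, 2 * np.pi, 200)
t = np.linspace(0, 10, 80)
X = (np.outer(np.sin(x), np.cos(2 * t)) +
     0.3 * np.outer(np.sin(2 * x), np.sin(3 * t)) +
     0.01 * rng.standard_normal((200, 80)))
mean, modes, energy = pod_modes(X, r=4)
a = modes.T @ (X - mean)                      # temporal coefficients
```

Galerkin projection of the governing equations onto `modes` yields the low-dimensional dynamical system whose coefficients the paper then parameterizes in Reynolds number.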

  5. The Mind-Body Connection - Complementary and Alternative Approaches to Health

    Science.gov (United States)

Also, a 2007 study found that Tai Chi boosts resistance to the shingles virus in older adults.

  6. The CCPP-ARM Parameterization Testbed (CAPT): Where Climate Simulation Meets Weather Prediction

    Energy Technology Data Exchange (ETDEWEB)

    Phillips, T J; Potter, G L; Williamson, D L; Cederwall, R T; Boyle, J S; Fiorino, M; Hnilo, J J; Olson, J G; Xie, S; Yio, J J

    2003-11-21

To significantly improve the simulation of climate by general circulation models (GCMs), systematic errors in representations of relevant processes must first be identified, and then reduced. This endeavor demands, in particular, that the GCM parameterizations of unresolved processes should be tested over a wide range of time scales, not just in climate simulations. Thus, a numerical weather prediction (NWP) methodology for evaluating model parameterizations and gaining insights into their behavior may prove useful, provided that suitable adaptations are made for implementation in climate GCMs. This method entails the generation of short-range weather forecasts by a realistically initialized climate GCM, and the application of six-hourly NWP analyses and observations of parameterized variables to evaluate these forecasts. The behavior of the parameterizations in such a weather-forecasting framework can provide insights on how these schemes might be improved, and modified parameterizations can then be similarly tested. In order to further this method for evaluating and analyzing parameterizations in climate GCMs, the USDOE is funding a joint venture of its Climate Change Prediction Program (CCPP) and Atmospheric Radiation Measurement (ARM) Program: the CCPP-ARM Parameterization Testbed (CAPT). This article elaborates the scientific rationale for CAPT, discusses technical aspects of its methodology, and presents examples of its implementation in a representative climate GCM. Numerical weather prediction methods show promise for improving parameterizations in climate GCMs.

  7. Parameterized Analysis of Paging and List Update Algorithms

    DEFF Research Database (Denmark)

    Dorrigiv, Reza; Ehmsen, Martin R.; López-Ortiz, Alejandro

    2015-01-01

We introduce the working set model and express the performance of well-known algorithms in terms of this parameter, explicitly introducing parameterized-style analysis to online algorithms. The idea is that rather than normalizing the performance of an online algorithm by an (optimal) offline algorithm, we explicitly express the behavior of the algorithm in terms of two more natural parameters: the size of the cache and Denning's working set measure. This technique creates a performance hierarchy of paging algorithms which better reflects their experimentally observed relative strengths. It also reflects the intuition that a larger cache leads to a better performance. We also apply the parameterized analysis framework to list update and show that certain randomized algorithms which are superior to MTF in the classical model are not so in the parameterized case, which matches experimental results.
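The intuition that a larger cache leads to better performance can be illustrated with LRU, which, being a stack algorithm, never incurs more faults as the cache grows. This is a generic sketch, not the paper's framework; the request sequence is invented:

```python
from collections import OrderedDict

def lru_faults(requests, cache_size):
    """Count page faults of LRU on a request sequence (illustrative)."""
    cache = OrderedDict()
    faults = 0
    for page in requests:
        if page in cache:
            cache.move_to_end(page)           # refresh recency
        else:
            faults += 1
            if len(cache) >= cache_size:
                cache.popitem(last=False)     # evict least recently used
            cache[page] = None
    return faults

# A sequence with a small working set plus occasional outlier pages.
requests = [1, 2, 3, 1, 2, 4, 1, 2, 3, 5, 1, 2, 3, 1, 2] * 20
```

Once the cache covers the working set (here, all five distinct pages), only the initial cold faults remain.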

  8. Dimensionless parameterization of lidar for laser remote sensing of the atmosphere and its application to systems with SiPM and PMT detectors.

    Science.gov (United States)

    Agishev, Ravil; Comerón, Adolfo; Rodriguez, Alejandro; Sicard, Michaël

    2014-05-20

In this paper, we show a renewed approach to the generalized methodology for atmospheric lidar assessment, which uses dimensionless parameterization as a core component. It is based on a series of our previous works in which the problem of universal parameterization across many lidar technologies was described and analyzed from different points of view. The modernized dimensionless parameterization concept, applied to relatively new silicon photomultiplier detectors (SiPMs) and traditional photomultiplier (PMT) detectors for remote-sensing instruments, allows prediction of lidar receiver performance in the presence of sky background. The renewed approach can be widely used to evaluate a broad range of lidar system capabilities for a variety of lidar remote-sensing applications, as well as to serve as a basis for selecting appropriate lidar system parameters for a specific application. Such a modernized methodology provides a generalized, uniform, and objective approach for evaluating a broad range of lidar types and systems (aerosol, Raman, DIAL) operating on different targets (backscatter or topographic) and under intense sky background conditions. It can be used within the lidar community to compare different lidar instruments.

  9. Transhepatic approach for extracardiac inferior cavopulmonary connection stent fenestration.

    LENUS (Irish Health Repository)

    Kenny, Damien

    2012-02-01

We report on a 3-year-old male who underwent transcatheter stent fenestration of the inferior portion of an extracardiac total cavopulmonary connection in the setting of hypoplastic left heart syndrome. A transhepatic approach, following an unsuccessful attempt from the femoral vein, facilitated delivery of a diabolo-shaped stent.

  10. Static Decoupling in fault detection

    DEFF Research Database (Denmark)

    Niemann, Hans Henrik

    1998-01-01

An algebraic approach is given for a design of a static residual weighting factor in connection with fault detection. A complete parameterization is given of the weighting factor which will minimize a given performance index.

  11. The package PAKPDF 1.1 of parameterizations of parton distribution functions in the proton

    International Nuclear Information System (INIS)

    Charchula, K.

    1992-01-01

A FORTRAN package containing parameterizations of parton distribution functions (PDFs) in the proton is described. It allows easy access to PDFs provided by several recent parameterizations and to some parameters characterizing a particular parameterization. Some comments about the use of the various parameterizations are also included. (orig.)

  12. Global model comparison of heterogeneous ice nucleation parameterizations in mixed phase clouds

    Science.gov (United States)

    Yun, Yuxing; Penner, Joyce E.

    2012-04-01

A new aerosol-dependent mixed phase cloud parameterization for deposition/condensation/immersion (DCI) ice nucleation and one for contact freezing are compared to the original formulations in a coupled general circulation model and aerosol transport model. The present-day cloud liquid and ice water fields and cloud radiative forcing are analyzed and compared to observations. The new DCI freezing parameterization changes the spatial distribution of the cloud water field. Significant changes are found in the cloud ice water fraction and in the middle cloud fractions. The new DCI freezing parameterization predicts less ice water path (IWP) than the original formulation, especially in the Southern Hemisphere. The smaller IWP leads to a less efficient Bergeron-Findeisen process, resulting in a larger liquid water path, shortwave cloud forcing, and longwave cloud forcing. It is found that contact freezing parameterizations have a greater impact on the cloud water field and radiative forcing than the two DCI freezing parameterizations that we compared. The net solar flux and net longwave flux at the top of the atmosphere change by up to 8.73 and 3.52 W m⁻², respectively, due to the use of different DCI and contact freezing parameterizations in mixed phase clouds. The total climate forcing from anthropogenic black carbon/organic matter in mixed phase clouds is estimated to be 0.16-0.93 W m⁻² using the aerosol-dependent parameterizations. A sensitivity test with the contact ice nuclei concentration in the original parameterization fit to that recommended by Young (1974) gives results that are closer to the new contact freezing parameterization.

  13. Building bridges connections and challenges in modern approaches to numerical partial differential equations

    CERN Document Server

    Brezzi, Franco; Cangiani, Andrea; Georgoulis, Emmanuil

    2016-01-01

This volume contains contributed survey papers from the main speakers at the LMS/EPSRC Symposium "Building bridges: connections and challenges in modern approaches to numerical partial differential equations". This meeting took place on July 8-16, 2014, and its main purpose was to gather specialists in emerging areas of numerical PDEs and explore the connections between the different approaches. The contributions range from the theoretical foundations of these new techniques, to their applications, to new general frameworks and unified approaches that can cover one or more of these emerging techniques.

  14. Parameterized Concurrent Multi-Party Session Types

    Directory of Open Access Journals (Sweden)

    Minas Charalambides

    2012-08-01

    Full Text Available Session types have been proposed as a means of statically verifying implementations of communication protocols. Although prior work has been successful in verifying some classes of protocols, it does not cope well with parameterized, multi-actor scenarios with inherent asynchrony. For example, the sliding window protocol is inexpressible in previously proposed session type systems. This paper describes System-A, a new typing language which overcomes many of the expressiveness limitations of prior work. System-A explicitly supports asynchrony and parallelism, as well as multiple forms of parameterization. We define System-A and show how it can be used for the static verification of a large class of asynchronous communication protocols.

  15. Invariant box-parameterization of neutrino oscillations

    International Nuclear Information System (INIS)

    Weiler, Thomas J.; Wagner, DJ

    1998-01-01

The model-independent 'box' parameterization of neutrino oscillations is examined. The invariant boxes are the classical amplitudes of the individual oscillating terms. Being observables, the boxes are independent of the choice of parameterization of the mixing matrix. Emphasis is placed on the relations among the box parameters due to mixing-matrix unitarity, and on the reduction of the number of boxes to the minimum basis set. Using the box algebra, we show that CP violation may be inferred from measurements of neutrino flavor mixing even when the oscillatory factors have averaged. General analyses of neutrino oscillations among n ≥ 3 flavors can readily determine the boxes, which can then be manipulated to yield magnitudes of mixing-matrix elements.
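The parameterization independence of the boxes can be checked numerically: under a rephasing of the mixing matrix (a pure change of phase convention), each box B[a,b,j,k] = U[a,j] U[b,j]* U[a,k]* U[b,k] is unchanged, because the phases cancel pairwise. The sketch below is illustrative; the angle values and the helper names `boxes` and `mixing_matrix` are assumptions, not from the paper:

```python
import numpy as np

def boxes(U):
    """Box amplitudes B[a,b,j,k] = U[a,j] U[b,j]* U[a,k]* U[b,k]
    (an illustrative rendering of the oscillation 'boxes')."""
    return np.einsum('aj,bj,ak,bk->abjk', U, U.conj(), U.conj(), U)

def mixing_matrix(t12, t13, t23, delta):
    """Standard PDG-style three-flavor mixing matrix."""
    s12, c12 = np.sin(t12), np.cos(t12)
    s13, c13 = np.sin(t13), np.cos(t13)
    s23, c23 = np.sin(t23), np.cos(t23)
    ep, em = np.exp(1j * delta), np.exp(-1j * delta)
    return np.array([
        [c12 * c13, s12 * c13, s13 * em],
        [-s12 * c23 - c12 * s23 * s13 * ep,
         c12 * c23 - s12 * s23 * s13 * ep, s23 * c13],
        [s12 * s23 - c12 * c23 * s13 * ep,
         -c12 * s23 - s12 * c23 * s13 * ep, c23 * c13]])

U = mixing_matrix(0.59, 0.15, 0.84, 1.2)
B = boxes(U)
# Rephasing matrices: arbitrary diagonal phase redefinitions of the
# flavor and mass bases (values chosen arbitrarily for the check).
DL = np.diag(np.exp(1j * np.array([0.3, -1.1, 2.0])))
DR = np.diag(np.exp(1j * np.array([0.7, 0.2, -0.5])))
```

Each box picks up phases e^{i(α_a+β_j)} e^{-i(α_b+β_j)} e^{-i(α_a+β_k)} e^{i(α_b+β_k)} = 1, so B is invariant.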

  16. Parameterized equation for the estimation of the hydraulic conductivity function not saturated in ferralsols south of Havana

    International Nuclear Information System (INIS)

    González Robaina, Felicita; López Seijas, Teresa

    2008-01-01

The modeling of the processes involved in the movement of water and solutes in soil generally requires the general water-flow equation for unsaturated conditions, or the Darcy-Buckingham approach. In this approach the hydraulic conductivity-soil moisture function K(θ) is a fundamental soil property to be determined for each field condition. Several methods reported in the literature for determining the hydraulic conductivity are based on simplifications, assuming a unit gradient or a fixed functional form of K(θ). In recent years, many works have been reported on the search for simple, rapid, and inexpensive methods to measure this relationship in the field. One of these methods is the parameterized equation proposed by Reichardt, which uses the parameters of the equations describing the internal drainage process and exploits the exponential nature of the K(θ) relationship. The objective of this work is to estimate K(θ) with the parameterized-equation method. To do so, the results of internal drainage tests on a Ferralsol in an area south of Havana are used. The results show that the parameterized equation provides an estimate of K(θ) similar to that of methods assuming unit-gradient conditions.
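Under the unit-gradient simplification mentioned above, an exponential conductivity function K(θ) = K0·exp(γ(θ − θ0)) can be fitted from internal-drainage data; the depth, times, and moisture values below are hypothetical and only illustrate the procedure:

```python
import numpy as np

# Hypothetical internal-drainage record: mean profile moisture theta(t)
# over a profile of depth L during redistribution (illustrative numbers).
L = 0.9                                    # profile depth [m]
t = np.array([1., 2., 4., 8., 16., 32.])   # days since ponding ended
theta = np.array([0.42, 0.41, 0.40, 0.39, 0.38, 0.37])

# Unit-gradient assumption: drainage flux at depth L gives
# K(theta) = -L * d(theta)/dt
K = -L * np.gradient(theta, t)

# Fit the exponential form K(theta) = K0 * exp(gamma * (theta - theta0))
theta0 = theta[0]
gamma, lnK0 = np.polyfit(theta - theta0, np.log(K), 1)
K0 = np.exp(lnK0)
```

The positive slope `gamma` reflects the strongly exponential decay of conductivity as the soil drains, which is the behavior the parameterized equation exploits.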

  17. Usage of Parameterized Fatigue Spectra and Physics-Based Systems Engineering Models for Wind Turbine Component Sizing: Preprint

    Energy Technology Data Exchange (ETDEWEB)

    Parsons, Taylor; Guo, Yi; Veers, Paul; Dykes, Katherine; Damiani, Rick

    2016-01-26

Software models that use design-level input variables and physics-based engineering analysis for estimating the mass and geometrical properties of components in large-scale machinery can be very useful for analyzing design trade-offs in complex systems. This study uses DriveSE, an OpenMDAO-based drivetrain model that uses stress and deflection criteria to size drivetrain components within a geared, upwind wind turbine. Because a full lifetime fatigue load spectrum can only be defined using computationally expensive simulations in programs such as FAST, a parameterized fatigue load spectrum that depends on wind conditions, rotor diameter, and turbine design life has been implemented. The parameterized fatigue spectrum is used in this paper only to demonstrate the proposed fatigue analysis approach. This paper details a three-part investigation of the parameterized approach and a comparison of the DriveSE model with and without fatigue analysis on the main shaft system. It compares loads from three turbines of varying size and determines if and when fatigue governs drivetrain sizing compared to extreme load-driven design. It also investigates the model's sensitivity to shaft material parameters. The intent of this paper is to demonstrate how fatigue considerations, in addition to extreme loads, can be brought into a system engineering optimization.
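A generic sketch of how a binned fatigue spectrum feeds a damage estimate via the Palmgren-Miner rule, a standard approach for sizing against fatigue; the S-N constants and spectrum here are invented for illustration and are not DriveSE's:

```python
import numpy as np

def miner_damage(amplitudes, counts, sn_coeff, sn_slope):
    """Palmgren-Miner linear damage sum for a binned load spectrum.
    Cycles to failure follow an S-N power law: N_f = sn_coeff * S**(-sn_slope)."""
    n_fail = sn_coeff * amplitudes**(-sn_slope)
    return np.sum(counts / n_fail)

# Hypothetical parameterized spectrum: cycle counts fall off exponentially
# with load amplitude (all numbers illustrative, not from the paper).
amps = np.linspace(50, 400, 8)             # stress amplitude bins [MPa]
counts = 1e7 * np.exp(-amps / 80.0)        # cycles per bin over design life
D = miner_damage(amps, counts, sn_coeff=1e17, sn_slope=5.0)
# A damage sum D < 1 means the component survives the design life.
```

A sizing loop would adjust the component dimensions (which scale the stress amplitudes) until D drops below 1, then compare against the extreme-load-driven size.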

  18. Functional connectivity analysis of fMRI data using parameterized regions-of-interest.

    NARCIS (Netherlands)

    Weeda, W.D.; Waldorp, L.J.; Grasman, R.P.P.P.; van Gaal, S.; Huizenga, H.M.

    2011-01-01

    Connectivity analysis of fMRI data requires correct specification of regions-of-interest (ROIs). Selection of ROIs based on outcomes of a GLM analysis may be hindered by conservativeness of the multiple comparison correction, while selection based on brain anatomy may be biased due to inconsistent

  19. Multimodel Uncertainty Changes in Simulated River Flows Induced by Human Impact Parameterizations

    Science.gov (United States)

    Liu, Xingcai; Tang, Qiuhong; Cui, Huijuan; Mu, Mengfei; Gerten, Dieter; Gosling, Simon; Masaki, Yoshimitsu; Satoh, Yusuke; Wada, Yoshihide

    2017-01-01

    Human impacts increasingly affect the global hydrological cycle and indeed dominate hydrological changes in some regions. Hydrologists have sought to identify the human-impact-induced hydrological variations by parameterizing anthropogenic water uses in global hydrological models (GHMs). The consequently increased model complexity is likely to introduce additional uncertainty among GHMs. Here, using four GHMs, between-model uncertainties are quantified in terms of the signal-to-noise ratio (SNR) for average river flow during 1971-2000 simulated in two experiments, with representation of human impacts (VARSOC) and without (NOSOC). This is the first quantitative investigation of between-model uncertainty resulting from the inclusion of human impact parameterizations. Results show that the between-model uncertainties in terms of SNRs in the VARSOC annual flow are larger (about 2 for global and of varied magnitude for different basins) than those in the NOSOC, which is particularly significant in most areas of Asia and areas north of the Mediterranean Sea. The SNR differences are mostly negative (-20 to 5, indicating higher uncertainty) for basin-averaged annual flow. The VARSOC high flow shows slightly lower uncertainties than NOSOC simulations, with SNR differences mostly ranging from -20 to 20. The uncertainty differences between the two experiments are significantly related to the fraction of irrigated area in each basin. The large additional uncertainties introduced into VARSOC simulations by the parameterizations of human impacts underscore the urgent need for GHM development toward a better understanding of human impacts. Differences in the parameterizations of irrigation, reservoir regulation and water withdrawals are discussed as potential directions of improvement for future GHM development. 
We also discuss the advantages of statistical approaches to reduce the between-model uncertainties, and the importance of calibration of GHMs for not only
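The between-model signal-to-noise diagnostic described in this record can be sketched as follows; the exact formulation in the paper may differ, and the ensemble values below are invented for illustration:

```python
import numpy as np

def ensemble_snr(flows):
    """Signal-to-noise ratio across a model ensemble: the ensemble-mean
    flow divided by the between-model spread of time-mean flows. This is
    one common definition; the paper's exact formulation may differ."""
    mean_per_model = flows.mean(axis=1)      # time-mean flow of each model
    signal = abs(mean_per_model.mean())      # ensemble-mean flow
    noise = mean_per_model.std(ddof=1)       # between-model standard deviation
    return signal / noise

# Hypothetical annual flows (4 models x 2 years). Adding a process that
# models represent differently (e.g. human water use) widens the
# between-model spread and lowers the SNR.
nosoc = np.array([[100., 102.], [101., 99.], [99., 101.], [100., 98.]])
varsoc = np.array([[100., 102.], [80., 78.], [120., 122.], [60., 58.]])
print(ensemble_snr(nosoc) > ensemble_snr(varsoc))   # → True
```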

  20. Improved Satellite-based Crop Yield Mapping by Spatially Explicit Parameterization of Crop Phenology

    Science.gov (United States)

    Jin, Z.; Azzari, G.; Lobell, D. B.

    2016-12-01

    Field-scale mapping of crop yields with satellite data often relies on the use of crop simulation models. However, these approaches can be hampered by inaccuracies in the simulation of crop phenology. Here we present and test an approach that uses dense time series of Landsat 7 and 8 acquisitions to calibrate various parameters related to crop phenology simulation, such as leaf number and leaf appearance rates. These parameters are then mapped across the Midwestern United States for maize and soybean, and for two different simulation models. We then implement our recently developed Scalable satellite-based Crop Yield Mapper (SCYM) with simulations reflecting the improved phenology parameterizations, and compare to prior estimates based on default phenology routines. Our preliminary results show that the proposed method can effectively alleviate the underestimation of early-season LAI by the default Agricultural Production Systems sIMulator (APSIM), and that spatially explicit parameterization of the phenology model substantially improves the SCYM performance in capturing the spatiotemporal variation in maize and soybean yield. The scheme presented in our study thus preserves the scalability of SCYM, while significantly reducing its uncertainty.

  1. Structural test of the parameterized-backbone method for protein design.

    Science.gov (United States)

    Plecs, Joseph J; Harbury, Pehr B; Kim, Peter S; Alber, Tom

    2004-09-03

    Designing new protein folds requires a method for simultaneously optimizing the conformation of the backbone and the side-chains. One approach to this problem is the use of a parameterized backbone, which allows the systematic exploration of families of structures. We report the crystal structure of RH3, a right-handed, three-helix coiled coil that was designed using a parameterized backbone and detailed modeling of core packing. This crystal structure was determined using another rationally designed feature, a metal-binding site that permitted experimental phasing of the X-ray data. RH3 adopted the intended fold, which has not been observed previously in biological proteins. Unanticipated structural asymmetry in the trimer was a principal source of variation within the RH3 structure. The sequence of RH3 differs from that of a previously characterized right-handed tetramer, RH4, at only one position in each 11 amino acid sequence repeat. This close similarity indicates that the design method is sensitive to the core packing interactions that specify the protein structure. Comparison of the structures of RH3 and RH4 indicates that both steric overlap and cavity formation provide strong driving forces for oligomer specificity.

  2. Model-driven harmonic parameterization of the cortical surface: HIP-HOP.

    Science.gov (United States)

    Auzias, G; Lefèvre, J; Le Troter, A; Fischer, C; Perrot, M; Régis, J; Coulon, O

    2013-05-01

    In the context of inter-subject brain surface matching, we present a parameterization of the cortical surface constrained by a model of cortical organization. The parameterization is defined via a harmonic mapping of each hemisphere surface to a rectangular planar domain that integrates a representation of the model. As opposed to previous landmark-based registration methods, we do not match folds between individuals but instead optimize the fit between cortical sulci and specific iso-coordinate axes in the model. This strategy overcomes some limitations of sulcus-based registration techniques, such as the topological variability of sulcal landmarks across subjects. Experiments on 62 subjects with manually traced sulci are presented and compared with the results of the FreeSurfer software. The evaluation involves a measure of the dispersion of sulci with both angular and area distortions. We show that the model-based strategy can lead to a natural, efficient and very fast (less than 5 min per hemisphere) method for defining inter-subject correspondences. We discuss how this approach also reduces the problems inherent to anatomically defined landmarks and opens the way to the investigation of cortical organization through the notion of orientation and alignment of structures across the cortex.

  3. Invariant box parameterization of neutrino oscillations

    International Nuclear Information System (INIS)

    Weiler, T.J.; Wagner, D.

    1998-01-01

    The model-independent 'box' parameterization of neutrino oscillations is examined. The invariant boxes are the classical amplitudes of the individual oscillating terms. Being observables, the boxes are independent of the choice of parameterization of the mixing matrix. Emphasis is placed on the relations among the box parameters due to mixing matrix unitarity, and on the reduction of the number of boxes to the minimum basis set. Using the box algebra, we show that CP-violation may be inferred from measurements of neutrino flavor mixing even when the oscillatory factors have averaged. General analyses of neutrino oscillations among n≥3 flavors can readily determine the boxes, which can then be manipulated to yield magnitudes of mixing matrix elements. copyright 1998 American Institute of Physics
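The box structure of the oscillation probability can be sketched as follows, with the boxes written as the standard quartic products of mixing-matrix elements; the sign convention for the imaginary (CP-violating) term follows one common form, and the two-flavor check below compares against the textbook formula:

```python
import numpy as np

def oscillation_prob(U, alpha, beta, phases):
    """P(nu_alpha -> nu_beta) in terms of the 'boxes'
    B_ij = U[a,i] U*[b,i] U*[a,j] U[b,j]: each oscillating term is
    weighted by a box, which is invariant under rephasing of the mixing
    matrix. phases[i][j] holds Delta m^2_ij * L / (4E) for i > j."""
    n = U.shape[0]
    p = 1.0 if alpha == beta else 0.0
    for i in range(n):
        for j in range(i):
            box = U[alpha, i] * np.conj(U[beta, i]) * np.conj(U[alpha, j]) * U[beta, j]
            p -= 4.0 * box.real * np.sin(phases[i][j]) ** 2
            p += 2.0 * box.imag * np.sin(2.0 * phases[i][j])
    return p

# Two-flavor check against the textbook result sin^2(2θ) sin^2(Δ).
theta, delta = 0.6, 1.1
U2 = np.array([[np.cos(theta), np.sin(theta)],
               [-np.sin(theta), np.cos(theta)]], dtype=complex)
p = oscillation_prob(U2, 0, 1, {1: {0: delta}})
print(np.isclose(p, np.sin(2 * theta) ** 2 * np.sin(delta) ** 2))   # → True
```

For a real mixing matrix the imaginary parts of all boxes vanish, which is the box-algebra statement that CP violation requires complex mixing elements.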

  4. A Unified Current Loop Tuning Approach for Grid-Connected Photovoltaic Inverters

    Directory of Open Access Journals (Sweden)

    Weiyi Zhang

    2016-09-01

    Full Text Available High-level penetration of renewable energy sources has reshaped modern electrical grids. In the future grid, distributed renewable power generation plants can be integrated on a larger scale. Control of grid-connected converters is required to achieve fast power reference tracking and, further, to provide grid-supporting and fault ride-through performance. Among all aspects of converter control, the inner current loop of a grid-connected converter shapes system performance considerably. This paper proposes a unified current loop tuning approach for grid-connected converters that is generally applicable in different cases. A direct discrete-time domain tuning procedure is used, and in particular, the selection of the phase margin and crossover frequency is analyzed, which is the main difference from existing studies. As a general method, approximation in the modeling of the controller and grid filter is avoided. The effectiveness of the tuning approach is validated in both simulation and experimental results with respect to power reference tracking and frequency and voltage support.

  5. Assessing the performance of wave breaking parameterizations in shallow waters in spectral wave models

    Science.gov (United States)

    Lin, Shangfei; Sheng, Jinyu

    2017-12-01

    Depth-induced wave breaking is the primary dissipation mechanism for ocean surface waves in shallow waters. Different parameterizations have been developed to represent the depth-induced wave breaking process in ocean surface wave models. The performance of six commonly used parameterizations in simulating significant wave heights (SWHs) is assessed in this study. The main differences between these six parameterizations are their representations of the breaker index and the fraction of breaking waves. Laboratory and field observations consisting of 882 cases from 14 sources of published observational data are used in the assessment. We demonstrate that the six parameterizations perform reasonably well in parameterizing depth-induced wave breaking in shallow waters, but each has its own limitations and drawbacks. The widely used parameterization suggested by Battjes and Janssen (1978, BJ78) underpredicts SWHs in locally generated wave conditions and overpredicts them in remotely generated wave conditions over flat bottoms. This drawback of BJ78 was addressed by a parameterization suggested by Salmon et al. (2015, SA15), but SA15 has relatively larger errors in SWHs over sloping bottoms than BJ78. We follow SA15 and propose a new parameterization in which the breaker index depends on the normalized water depth in deep waters, similar to SA15. In shallow waters, the breaker index of the new parameterization has a nonlinear dependence on the local bottom slope rather than the linear dependence used in SA15. Overall, this new parameterization has the best performance, with an average scatter index of ∼8.2%, compared with the three best-performing existing parameterizations, whose average scatter indices range between 9.2% and 13.6%.
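A minimal sketch of the Battjes and Janssen (1978) ingredient discussed in this record, assuming the scheme's standard implicit relation for the fraction of breaking waves; the constant breaker index gamma = 0.73 is the textbook default, not a value from this study:

```python
import math

def breaking_fraction(h_rms, depth, gamma=0.73):
    """Fraction of breaking waves Qb from Battjes & Janssen (1978).

    Solves the implicit relation (1 - Qb) / (-ln Qb) = (Hrms / Hmax)^2
    with Hmax = gamma * depth, by bisection. The parameterizations
    compared in the paper mainly differ in how gamma is computed."""
    b = min(h_rms / (gamma * depth), 1.0)   # Hrms/Hmax, capped at 1
    if b >= 1.0:
        return 1.0                          # saturated surf zone: all waves break
    target = b * b
    lo, hi = 1e-12, 1.0 - 1e-12
    for _ in range(100):                    # bisection on the monotone residual
        mid = 0.5 * (lo + hi)
        val = (1.0 - mid) / (-math.log(mid))
        if val < target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

As expected physically, the fraction of breaking waves grows as the wave height approaches the depth-limited maximum.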

  6. Parameterized representation of macroscopic cross section for PWR reactor

    International Nuclear Information System (INIS)

    Fiel, João Cláudio Batista; Carvalho da Silva, Fernando; Senra Martinez, Aquilino; Leal, Luiz C.

    2015-01-01

    Highlights: • This work describes a parameterized representation of the homogenized macroscopic cross section for a PWR reactor. • Parameterization enables a quick determination of problem-dependent cross sections to be used in few-group calculations. • This work allows generating group cross-section data to perform PWR core calculations without computer code calculations. - Abstract: The purpose of this work is to describe, by means of Chebyshev polynomials, a parameterized representation of the homogenized macroscopic cross section for a PWR fuel element as a function of soluble boron concentration, moderator temperature, fuel temperature, moderator density and ²³⁵U enrichment. The cross-section data analyzed are fission, scattering, total, transport, absorption and capture. The parameterization enables a quick and easy determination of problem-dependent cross sections to be used in few-group calculations. The methodology presented in this paper allows generation of group cross-section data from stored polynomials to perform PWR core calculations without the need to generate them through computer code calculations using standard steps. The results obtained by the proposed methodology, when compared with results from SCALE code calculations, show very good agreement
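The idea of evaluating stored Chebyshev coefficients instead of rerunning a lattice code can be sketched as follows; the cross-section values and the single state variable (boron concentration) are hypothetical stand-ins for the five-variable fits described in the record:

```python
import numpy as np
from numpy.polynomial import chebyshev as C

# Hypothetical few-group absorption cross sections (1/cm) tabulated at
# several soluble boron concentrations (ppm); real data would come from
# lattice-code calculations (e.g. SCALE) over all five state variables.
boron = np.array([0.0, 400.0, 800.0, 1200.0, 1600.0, 2000.0])
sigma_a = np.array([0.0940, 0.0986, 0.1030, 0.1072, 0.1112, 0.1150])

# Map the state variable onto [-1, 1], the natural Chebyshev domain,
# then fit a low-order expansion and store only its coefficients.
x = 2.0 * boron / 2000.0 - 1.0
coeffs = C.chebfit(x, sigma_a, deg=3)

def sigma_a_of_boron(ppm):
    """Evaluate the stored polynomial instead of rerunning the lattice code."""
    return C.chebval(2.0 * ppm / 2000.0 - 1.0, coeffs)

print(round(float(sigma_a_of_boron(1000.0)), 4))   # → 0.1051
```

In the paper's setting one such coefficient set is stored per cross-section type and energy group, so a core calculation only ever evaluates polynomials.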

  7. Parameterization of planetary wave breaking in the middle atmosphere

    Science.gov (United States)

    Garcia, Rolando R.

    1991-01-01

    A parameterization of planetary wave breaking in the middle atmosphere has been developed and tested in a numerical model which includes governing equations for a single wave and the zonal-mean state. The parameterization is based on the assumption that wave breaking represents a steady-state equilibrium between the flux of wave activity and its dissipation by nonlinear processes, and that the latter can be represented as linear damping of the primary wave. With this and the additional assumption that the effect of breaking is to prevent further amplitude growth, the required dissipation rate is readily obtained from the steady-state equation for wave activity; diffusivity coefficients then follow from the dissipation rate. The assumptions made in the derivation are equivalent to those commonly used in parameterizations for gravity wave breaking, but the formulation in terms of wave activity helps highlight the central role of the wave group velocity in determining the dissipation rate. Comparison of model results with nonlinear calculations of wave breaking and with diagnostic determinations of stratospheric diffusion coefficients reveals remarkably good agreement, and suggests that the parameterization could be useful for simulating inexpensively, but realistically, the effects of planetary wave transport.

  8. An Evaluation of Lightning Flash Rate Parameterizations Based on Observations of Colorado Storms during DC3

    Science.gov (United States)

    Basarab, B.; Fuchs, B.; Rutledge, S. A.

    2013-12-01

    to observed flash rates. For the 6 June storm, a preliminary analysis of aircraft observations of storm inflow and outflow is presented in order to place flash rates (and other lightning statistics) in the context of storm chemistry. An approach to a possibly improved LNOx parameterization scheme using different lightning metrics such as flash area will be discussed.

  9. The Grell-Freitas Convective Parameterization: Recent Developments and Applications Within the NASA GEOS Global Model

    Science.gov (United States)

    Freitas, S.; Grell, G. A.; Molod, A.

    2017-12-01

    We implemented and began to evaluate an alternative convection parameterization for the NASA Goddard Earth Observing System (GEOS) global model. The parameterization (Grell and Freitas, 2014) is based on the mass flux approach with several closures, for equilibrium and non-equilibrium convection, and includes scale and aerosol awareness functionalities. Scale dependence for deep convection is implemented either through the method described by Arakawa et al. (2011) or through lateral spreading of the subsidence terms. Aerosol effects are included through the dependence of autoconversion and evaporation on the CCN number concentration. Recently, the scheme has been extended to a tri-modal spectral size approach to simulate the transition from shallow, congestus, and deep convection regimes. In addition, the inclusion of a new closure for non-equilibrium convection resulted in a substantial gain of realism in model simulations of the diurnal cycle of convection over land. Also, a beta-pdf is now employed to represent the normalized mass flux profile. This opens up an additional avenue to apply stochasticity in the scheme.

  10. Evaluating cloud processes in large-scale models: Of idealized case studies, parameterization testbeds and single-column modelling on climate time-scales

    Science.gov (United States)

    Neggers, Roel

    2016-04-01

    Boundary-layer schemes have always formed an integral part of General Circulation Models (GCMs) used for numerical weather and climate prediction. The spatial and temporal scales associated with boundary-layer processes and clouds are typically much smaller than those at which GCMs are discretized, which makes their representation through parameterization a necessity. The need for generally applicable boundary-layer parameterizations has motivated many scientific studies, which in effect has created its own active research field in the atmospheric sciences. Of particular interest has been the evaluation of boundary-layer schemes at "process level". This means that parameterized physics are studied in isolation from the larger-scale circulation, using prescribed forcings and excluding any upscale interaction. Although feedbacks are thus prevented, the benefit is an enhanced model transparency, which might aid an investigator in identifying model errors and understanding model behavior. The popularity and success of the process-level approach is demonstrated by the many past and ongoing model inter-comparison studies that have been organized by initiatives such as GCSS/GASS. A common thread in the results of these studies is that although most schemes somehow manage to capture first-order aspects of boundary layer cloud fields, there certainly remains room for improvement in many areas. Only too often are boundary layer parameterizations still found to be at the heart of problems in large-scale models, negatively affecting forecast skills of NWP models or causing uncertainty in numerical predictions of future climate. How to break this parameterization "deadlock" remains an open problem. This presentation attempts to give an overview of the various existing methods for the process-level evaluation of boundary-layer physics in large-scale models. 
This includes i) idealized case studies, ii) longer-term evaluation at permanent meteorological sites (the testbed approach

  11. A new parameterization for waveform inversion in acoustic orthorhombic media

    KAUST Repository

    Masmoudi, Nabil

    2016-05-26

    Orthorhombic anisotropic model inversion is extra challenging because of the multi-parameter nature of the inversion problem. The high number of parameters required to describe the medium introduces considerable trade-offs and additional nonlinearity into a full-waveform inversion (FWI) application. Choosing a suitable set of parameters to describe the model and designing an effective inversion strategy can help in mitigating this problem. Using the Born approximation, which is the central ingredient of the FWI update process, we have derived radiation patterns for the different acoustic orthorhombic parameterizations. Analyzing the angular dependence of scattering (radiation patterns) of the parameters of different parameterizations, starting with the often used Thomsen-Tsvankin parameterization, we have assessed the potential trade-off between the parameters and the resolution in describing the data and inverting for the parameters. The analysis led us to introduce new parameters ϵd, δd, and ηd, which have azimuthally dependent radiation patterns but keep the scattering potential of the transversely isotropic parameters stationary with azimuth (azimuth independent). The novel parameters ϵd, δd, and ηd are dimensionless and represent a measure of deviation between the vertical planes in orthorhombic anisotropy. Therefore, these deviation parameters offer a new parameterization style for an acoustic orthorhombic medium described by six parameters: three vertical transversely isotropic (VTI) parameters, two deviation parameters, and one parameter describing the anisotropy in the horizontal symmetry plane. The main feature of any parameterization based on the deviation parameters is the azimuthal independence of the modeled data with respect to the VTI parameters, which allowed us to propose practical inversion strategies based on our experience with the VTI parameters. This feature of the new parameterization style holds for even the long-wavelength components of

  12. Comparing parameterized versus measured microphysical properties of tropical convective cloud bases during the ACRIDICON–CHUVA campaign

    Directory of Open Access Journals (Sweden)

    R. C. Braga

    2017-06-01

    Full Text Available The objective of this study is to validate parameterizations that were recently developed for satellite retrievals of cloud condensation nuclei supersaturation spectra, NCCN(S), at cloud base, alongside more traditional parameterizations connecting NCCN(S) with cloud base updrafts and drop concentrations. This was based on the HALO aircraft measurements during the ACRIDICON–CHUVA campaign over the Amazon region, which took place in September 2014. The properties of convective clouds were measured with a cloud combination probe (CCP), a cloud and aerosol spectrometer (CAS-DPOL), and a CCN counter onboard the HALO aircraft. An intercomparison of the cloud drop size distributions (DSDs) and the cloud water content (CWC) derived from the different instruments generally shows good agreement within the instrumental uncertainties. To this end, the directly measured cloud drop concentrations (Nd) near cloud base were compared with inferred values based on the measured cloud base updraft velocity (Wb) and NCCN(S) spectra. The measurements of Nd at cloud base were also compared with drop concentrations (Na) derived on the basis of an adiabatic assumption and obtained from the vertical evolution of cloud drop effective radius (re) above cloud base. The measurements of NCCN(S) and Wb reproduced the observed Nd within the measurement uncertainties when Twomey's old (1959) parameterization was used. The agreement between the measured and calculated Nd was only within a factor of 2 when attempting to use cloud base S, as obtained from the measured Wb, Nd, and NCCN(S). This underscores the yet unresolved challenge of aircraft measurements of S in clouds. Importantly, the vertical evolution of re with height reproduced the observation-based nearly adiabatic cloud base drop concentrations, Na. The combination of these results provides aircraft observational support for the various components of the satellite-retrieved methodology that was recently developed to
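Twomey's (1959) power law referred to in this record, NCCN(S) = C·S^k, can be sketched as follows; the supersaturations and concentrations are invented for illustration:

```python
import math

def twomey_fit(S1, N1, S2, N2):
    """Fit the Twomey (1959) power law N_CCN(S) = C * S**k through two
    measured (supersaturation in %, CCN concentration) points."""
    k = math.log(N2 / N1) / math.log(S2 / S1)
    C = N1 / S1 ** k
    return C, k

# Hypothetical CCN-counter readings at two supersaturations.
C, k = twomey_fit(0.2, 400.0, 0.6, 900.0)

def n_ccn(S):
    return C * S ** k

# If cloud base reaches a peak supersaturation S_max (in practice inferred
# from the measured updraft Wb), the activated drop concentration is
# approximately Nd ≈ N_CCN(S_max).
print(round(n_ccn(0.4)))   # → 667
```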

  13. Automatic Parameterization Strategy for Cardiac Electrophysiology Simulations.

    Science.gov (United States)

    Costa, Caroline Mendonca; Hoetzl, Elena; Rocha, Bernardo Martins; Prassl, Anton J; Plank, Gernot

    2013-10-01

    Driven by recent advances in medical imaging, image segmentation and numerical techniques, computer models of ventricular electrophysiology account for increasingly finer levels of anatomical and biophysical detail. However, considering the large number of model parameters involved, parameterization poses a major challenge. A minimum requirement in combined experimental and modeling studies is to achieve good agreement in activation and repolarization sequences between model and experiment or patient data. In this study, we propose basic techniques which aid in determining bidomain parameters to match activation sequences. An iterative parameterization algorithm is implemented which determines appropriate bulk conductivities that yield prescribed velocities. In addition, a method is proposed for splitting the computed bulk conductivities into individual bidomain conductivities by prescribing anisotropy ratios.
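The iterative conductivity-tuning step can be sketched as below, assuming the common monodomain scaling v ∝ √σ; the names `tune_conductivity` and `measure_velocity` are hypothetical, and the surrogate velocity function stands in for a full bidomain simulation rather than reproducing the paper's implementation:

```python
def tune_conductivity(sigma0, v_target, measure_velocity, tol=1e-3, max_iter=20):
    """Iteratively scale a bulk conductivity until the simulated conduction
    velocity matches a prescribed one.

    Assumes the monodomain relation v ∝ sqrt(sigma), so each step applies
    sigma <- sigma * (v_target / v_measured)**2."""
    sigma = sigma0
    for _ in range(max_iter):
        v = measure_velocity(sigma)
        if abs(v - v_target) / v_target < tol:
            break
        sigma *= (v_target / v) ** 2
    return sigma

# Surrogate model: v = a * sqrt(sigma), with a lumping membrane and
# geometry terms; a real run would launch a tissue-scale simulation here.
surrogate = lambda sigma: 0.5 * sigma ** 0.5
sigma = tune_conductivity(0.1, v_target=0.6, measure_velocity=surrogate)
print(round(surrogate(sigma), 3))   # → 0.6
```

Because the surrogate obeys the assumed square-root scaling exactly, the update converges in a single step; with a real simulator several iterations are typically needed.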

  14. Parameterization of a fuzzy classifier for the diagnosis of an industrial process

    International Nuclear Information System (INIS)

    Toscano, R.; Lyonnet, P.

    2002-01-01

    The aim of this paper is to present a classifier based on a fuzzy inference system. For this classifier, we propose a parameterization method which is not necessarily based on iterative training. This approach can be seen as a pre-parameterization, which allows the determination of the rule base and the parameters of the membership functions. We also present a continuous and differentiable version of the previous classifier and suggest an iterative learning algorithm based on a gradient method. An example using the IRIS dataset, a benchmark for classification problems, is presented to show the performance of this classifier. Finally, this classifier is applied to the diagnosis of a DC motor, showing the utility of this method. However, in many cases the total knowledge necessary for the synthesis of the fuzzy diagnosis system (FDS) is not directly available. It must be extracted from an often considerable mass of information. For this reason, a general methodology for the design of an FDS is presented and illustrated on a non-linear plant

  15. Impact of Physics Parameterization Ordering in a Global Atmosphere Model

    Science.gov (United States)

    Donahue, Aaron S.; Caldwell, Peter M.

    2018-02-01

    Because weather and climate models must capture a wide variety of spatial and temporal scales, they rely heavily on parameterizations of subgrid-scale processes. The goal of this study is to demonstrate that the assumptions used to couple these parameterizations have an important effect on the climate of version 0 of the Energy Exascale Earth System Model (E3SM) General Circulation Model (GCM), a close relative of version 1 of the Community Earth System Model (CESM1). Like most GCMs, parameterizations in E3SM are sequentially split in the sense that parameterizations are called one after another with each subsequent process feeling the effect of the preceding processes. This coupling strategy is noncommutative in the sense that the order in which processes are called impacts the solution. By examining a suite of 24 simulations with deep convection, shallow convection, macrophysics/microphysics, and radiation parameterizations reordered, process order is shown to have a big impact on predicted climate. In particular, reordering of processes induces differences in net climate feedback that are as big as the intermodel spread in phase 5 of the Coupled Model Intercomparison Project. One reason why process ordering has such a large impact is that the effect of each process is influenced by the processes preceding it. Where output is written is therefore an important control on apparent model behavior. Application of k-means clustering demonstrates that the positioning of macro/microphysics and shallow convection plays a critical role in the model solution.
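The noncommutativity of sequentially split parameterizations can be illustrated with two toy processes; all constants below are illustrative, not E3SM values:

```python
def condensation(T, q):
    """Toy 'microphysics': condense any vapor above a temperature-dependent
    threshold, releasing latent heat (constants are illustrative only)."""
    q_sat = 0.01 + 0.0005 * (T - 280.0)      # toy saturation mixing ratio
    excess = max(q - q_sat, 0.0)
    return T + 2500.0 * excess / 1004.0, q - excess

def radiation(T, q):
    """Toy 'radiation': cool the layer by a fixed amount per step."""
    return T - 2.0, q

state = (281.0, 0.012)                        # (temperature K, vapor kg/kg)
a = radiation(*condensation(*state))          # microphysics first, then radiation
b = condensation(*radiation(*state))          # radiation first, then microphysics
print(a != b)   # → True: sequential splitting is order dependent
```

Calling radiation first cools the layer before the saturation check, so more vapor condenses; the two orderings leave different amounts of cloud water and heat, which is exactly the kind of divergence the 24-member reordering suite quantifies at climate scale.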

  16. USING PARAMETERIZATION OF OBJECTS IN AUTODESK INVENTOR IN DESIGNING STRUCTURAL CONNECTORS

    Directory of Open Access Journals (Sweden)

    Gabriel Borowski

    2015-05-01

    Full Text Available The article presents the parameterization of objects used in designing elements such as structural connectors and in modifying their characteristics. The design process was carried out using Autodesk Inventor 2015. We present the latest software tools that were used for parameterizing and modeling selected types of structural connectors. We also show examples of the use of parameterization facilities when constructing details and changing geometry while preserving the shape of the element. The presented use of Inventor enabled fast and efficient creation of new objects based on previously created sketches.

  17. Parameterized Disturbance Observer Based Controller to Reduce Cyclic Loads of Wind Turbine

    Directory of Open Access Journals (Sweden)

    Raja M. Imran

    2018-05-01

    Full Text Available This paper is concerned with the bumpless transfer of a parameterized disturbance-observer-based controller with an individual pitch control strategy to reduce cyclic loads of a wind turbine in full load operation. Cyclic loads are generated due to wind shear and tower shadow effects. Multivariable disturbance-observer-based linear controllers are designed with the objective of reducing output power fluctuation, tower oscillation and drive-train torsion using optimal control theory. Linear parameterized controllers are designed by using a smooth scheduling mechanism between the controllers. The proposed parameterized controller with individual pitch was tested on the nonlinear Fatigue, Aerodynamics, Structures, and Turbulence (FAST) code model of the National Renewable Energy Laboratory (NREL) 5 MW wind turbine. The closed-loop system performance was assessed by comparing the simulation results of the proposed controller with those of a fixed-gain controller and of a parameterized controller with collective pitch for full load operation of the wind turbine. Simulations are performed with step wind to observe the behavior of the system under wind shear and tower shadow effects. Then, turbulent wind is applied to observe the smooth transition of the controllers. It can be concluded from the results that the proposed parameterized control shows a smooth transition from one controller to another. Moreover, 3p and 6p harmonics are well mitigated compared to the fixed-gain DOBC and the parameterized DOBC with collective pitch.

  18. Multisite Evaluation of APEX for Water Quality: I. Best Professional Judgment Parameterization.

    Science.gov (United States)

    Baffaut, Claire; Nelson, Nathan O; Lory, John A; Senaviratne, G M M M Anomaa; Bhandari, Ammar B; Udawatta, Ranjith P; Sweeney, Daniel W; Helmers, Matt J; Van Liew, Mike W; Mallarino, Antonio P; Wortmann, Charles S

    2017-11-01

    The Agricultural Policy Environmental eXtender (APEX) model is capable of estimating edge-of-field water, nutrient, and sediment transport and is used to assess the environmental impacts of management practices. The current practice is to fully calibrate the model for each site simulation, a task that requires resources and data not always available. The objective of this study was to compare model performance for flow, sediment, and phosphorus transport under two parameterization schemes: a best professional judgment (BPJ) parameterization based on readily available data and a fully calibrated parameterization based on site-specific soil, weather, event flow, and water quality data. The analysis was conducted using 12 datasets at four locations representing poorly drained soils and row-crop production under different tillage systems. Model performance was based on the Nash-Sutcliffe efficiency (NSE), the coefficient of determination (R²) and the regression slope between simulated and measured annualized loads across all site years. Although the BPJ model performance for flow was acceptable (NSE = 0.7) at the annual time step, calibration improved it (NSE = 0.9). Acceptable simulation of sediment and total phosphorus transport (NSE = 0.5 and 0.9, respectively) was obtained only after full calibration at each site. Given the unacceptable performance of the BPJ approach, uncalibrated use of APEX for planning or management purposes may be misleading. Model calibration with water quality data prior to using APEX for simulating sediment and total phosphorus loss is essential. Copyright © by the American Society of Agronomy, Crop Science Society of America, and Soil Science Society of America, Inc.
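The Nash-Sutcliffe efficiency used as the performance criterion in this record can be sketched as follows (the data are invented):

```python
import numpy as np

def nse(simulated, observed):
    """Nash-Sutcliffe efficiency: 1 for a perfect fit, 0 for a model no
    better than predicting the observed mean, negative for worse."""
    simulated = np.asarray(simulated, dtype=float)
    observed = np.asarray(observed, dtype=float)
    return 1.0 - np.sum((simulated - observed) ** 2) / np.sum((observed - observed.mean()) ** 2)

obs = np.array([2.0, 4.0, 6.0, 8.0])
print(nse(obs, obs))                        # → 1.0
print(nse(np.full(4, obs.mean()), obs))     # → 0.0
```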

  19. A simple parameterization for the rising velocity of bubbles in a liquid pool

    International Nuclear Information System (INIS)

    Park, Sung Hoon; Park, Chang Hwan; Lee, Jin Yong; Lee, Byung Chul

    2017-01-01

    The determination of the shape and rising velocity of gas bubbles in a liquid pool is of great importance in analyzing the radioactive aerosol emissions from nuclear power plant accidents in terms of the fission product release rate and the pool scrubbing efficiency of radioactive aerosols. This article suggests a simple parameterization for the gas bubble rising velocity as a function of the volume-equivalent bubble diameter; this parameterization does not require prior knowledge of the bubble shape. This is more convenient than previously suggested parameterizations because it is given as a single explicit formula. It is also shown that a bubble shape diagram, very similar to Grace's diagram, can be easily generated using the parameterization suggested in this article. Furthermore, the boundaries among the three bubble shape regimes in the Eo–Re plane and the condition for the bypass of the spheroidal regime can be delineated directly from the parameterization formula. Therefore, the parameterization suggested in this article appears to be useful not only in easily determining the bubble rising velocity (e.g., in postulated severe accident analysis codes) but also in understanding the trend of bubble shape change due to bubble growth

  20. Parameterized neural networks for high-energy physics

    Energy Technology Data Exchange (ETDEWEB)

    Baldi, Pierre; Sadowski, Peter [University of California, Department of Computer Science, Irvine, CA (United States); Cranmer, Kyle [NYU, Department of Physics, New York, NY (United States); Faucett, Taylor; Whiteson, Daniel [University of California, Department of Physics and Astronomy, Irvine, CA (United States)

    2016-05-15

    We investigate a new structure for machine learning classifiers built with neural networks and applied to problems in high-energy physics by expanding the inputs to include not only measured features but also physics parameters. The physics parameters represent a smoothly varying learning task, and the resulting parameterized classifier can smoothly interpolate between them and replace sets of classifiers trained at individual values. This simplifies the training process and gives improved performance at intermediate values, even for complex problems requiring deep learning. Applications include tools parameterized in terms of theoretical model parameters, such as the mass of a particle, which allow for a single network to provide improved discrimination across a range of masses. This concept is simple to implement and allows for optimized interpolatable results. (orig.)

  1. Parameterized neural networks for high-energy physics

    International Nuclear Information System (INIS)

    Baldi, Pierre; Sadowski, Peter; Cranmer, Kyle; Faucett, Taylor; Whiteson, Daniel

    2016-01-01

    We investigate a new structure for machine learning classifiers built with neural networks and applied to problems in high-energy physics by expanding the inputs to include not only measured features but also physics parameters. The physics parameters represent a smoothly varying learning task, and the resulting parameterized classifier can smoothly interpolate between them and replace sets of classifiers trained at individual values. This simplifies the training process and gives improved performance at intermediate values, even for complex problems requiring deep learning. Applications include tools parameterized in terms of theoretical model parameters, such as the mass of a particle, which allow for a single network to provide improved discrimination across a range of masses. This concept is simple to implement and allows for optimized interpolatable results. (orig.)
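    The core trick above — appending the physics parameter (e.g., a particle mass) to the measured features so one classifier interpolates across parameter values — can be sketched in a few lines. A linear least-squares fit stands in for the neural network, and the toy data (signal feature centered on a hypothetical mass parameter) is invented for illustration; only the input construction mirrors the paper's idea.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_set(theta, n=400):
    """Toy events: the signal feature is centered on the physics
    parameter theta (a stand-in for, e.g., a resonance mass); the
    background is centered on zero."""
    x_sig = rng.normal(theta, 0.5, n)
    x_bkg = rng.normal(0.0, 0.5, n)
    x = np.concatenate([x_sig, x_bkg])
    y = np.concatenate([np.ones(n), np.zeros(n)])
    # Parameterized input: the measured feature AND the parameter theta.
    X = np.column_stack([x, np.full(2 * n, theta), np.ones(2 * n)])
    return X, y

# Train at two parameter points only ...
X1, y1 = make_set(1.0)
X2, y2 = make_set(3.0)
X = np.vstack([X1, X2])
y = np.concatenate([y1, y2])
w, *_ = np.linalg.lstsq(X, y, rcond=None)   # linear stand-in for the NN

# ... and evaluate at an intermediate, never-trained value theta = 2.0.
Xt, yt = make_set(2.0)
acc = np.mean((Xt @ w > 0.5) == yt.astype(bool))
```

    Because theta is an input, the single classifier adapts its decision boundary at the intermediate parameter value instead of requiring a separate classifier trained there.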

  2. On the Dependence of Cloud Feedbacks on Physical Parameterizations in WRF Aquaplanet Simulations

    Science.gov (United States)

    Cesana, Grégory; Suselj, Kay; Brient, Florent

    2017-10-01

    We investigate the effects of physical parameterizations on cloud feedback uncertainty in response to climate change. For this purpose, we construct an ensemble of eight aquaplanet simulations using the Weather Research and Forecasting (WRF) model. In each WRF-derived simulation, we replace only one parameterization at a time while all other parameters remain identical. By doing so, we aim to (i) reproduce cloud feedback uncertainty from state-of-the-art climate models and (ii) understand how parameterizations impact cloud feedbacks. Our results demonstrate that this ensemble of WRF simulations, which differ only in physical parameterizations, replicates the range of cloud feedback uncertainty found in state-of-the-art climate models. We show that microphysics and convective parameterizations govern the magnitude and sign of cloud feedbacks, mostly due to tropical low-level clouds in subsidence regimes. Finally, this study highlights the advantages of using WRF to analyze cloud feedback mechanisms owing to its plug-and-play parameterization capability.

  3. A Coordinated Effort to Improve Parameterization of High-Latitude Cloud and Radiation Processes

    International Nuclear Information System (INIS)

    J. O. Pinto; A.H. Lynch

    2004-01-01

    The goal of this project is the development and evaluation of improved parameterization of arctic cloud and radiation processes and implementation of the parameterizations into a climate model. Our research focuses specifically on the following issues: (1) continued development and evaluation of cloud microphysical parameterizations, focusing on issues of particular relevance for mixed phase clouds; and (2) evaluation of the mesoscale simulation of arctic cloud system life cycles

  4. Comparing habitat suitability and connectivity modeling methods for conserving pronghorn migrations.

    Directory of Open Access Journals (Sweden)

    Erin E Poor

    Full Text Available Terrestrial long-distance migrations are declining globally: in North America, nearly 75% have been lost. Yet there has been limited research comparing habitat suitability and connectivity models to identify migration corridors across increasingly fragmented landscapes. Here we use pronghorn (Antilocapra americana) migrations in prairie habitat to compare two types of models that identify habitat suitability: maximum entropy (Maxent) and expert-based (Analytic Hierarchy Process). We used distance to wells, distance to water, NDVI, land cover, distance to roads, terrain shape and fence presence to parameterize the models. We then used the output of these models as cost surfaces to compare two common connectivity models, least-cost modeling (LCM) and circuit theory. Using pronghorn movement data from spring and fall migrations, we identified potential migration corridors by combining each habitat suitability model with each connectivity model. The best performing model combination was Maxent with LCM corridors across both seasons. Maxent out-performed expert-based habitat suitability models for both spring and fall migrations. However, expert-based corridors can perform relatively well and are a cost-effective alternative if species location data are unavailable. Corridors created using LCM out-performed circuit theory, as measured by the number of pronghorn GPS locations present within the corridors. We suggest the use of a tiered approach using different corridor widths for prioritizing conservation and mitigation actions, such as fence removal or conservation easements.

  5. Comparing habitat suitability and connectivity modeling methods for conserving pronghorn migrations.

    Science.gov (United States)

    Poor, Erin E; Loucks, Colby; Jakes, Andrew; Urban, Dean L

    2012-01-01

    Terrestrial long-distance migrations are declining globally: in North America, nearly 75% have been lost. Yet there has been limited research comparing habitat suitability and connectivity models to identify migration corridors across increasingly fragmented landscapes. Here we use pronghorn (Antilocapra americana) migrations in prairie habitat to compare two types of models that identify habitat suitability: maximum entropy (Maxent) and expert-based (Analytic Hierarchy Process). We used distance to wells, distance to water, NDVI, land cover, distance to roads, terrain shape and fence presence to parameterize the models. We then used the output of these models as cost surfaces to compare two common connectivity models, least-cost modeling (LCM) and circuit theory. Using pronghorn movement data from spring and fall migrations, we identified potential migration corridors by combining each habitat suitability model with each connectivity model. The best performing model combination was Maxent with LCM corridors across both seasons. Maxent out-performed expert-based habitat suitability models for both spring and fall migrations. However, expert-based corridors can perform relatively well and are a cost-effective alternative if species location data are unavailable. Corridors created using LCM out-performed circuit theory, as measured by the number of pronghorn GPS locations present within the corridors. We suggest the use of a tiered approach using different corridor widths for prioritizing conservation and mitigation actions, such as fence removal or conservation easements.
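    The least-cost modeling (LCM) step that out-performed circuit theory here is, at its core, a shortest-path search over a resistance raster derived from the habitat suitability model. A minimal sketch, with a tiny hypothetical resistance surface in place of the real Maxent-derived cost layer:

```python
import heapq

def least_cost_path(cost, start, goal):
    """Dijkstra over a resistance raster (list of lists): returns the
    cumulative cost of the cheapest 4-connected path from start to goal,
    paying each entered cell's cost. This is the core of LCM corridor
    delineation."""
    rows, cols = len(cost), len(cost[0])
    best = {start: 0}
    frontier = [(0, start)]
    while frontier:
        c, (r, k) = heapq.heappop(frontier)
        if (r, k) == goal:
            return c
        if c > best.get((r, k), float("inf")):
            continue
        for dr, dk in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nk = r + dr, k + dk
            if 0 <= nr < rows and 0 <= nk < cols:
                nc = c + cost[nr][nk]
                if nc < best.get((nr, nk), float("inf")):
                    best[(nr, nk)] = nc
                    heapq.heappush(frontier, (nc, (nr, nk)))
    return float("inf")

# Hypothetical resistance surface: low resistance along the left and
# bottom edges (a corridor), high resistance in the middle column.
resistance = [[1, 5, 1],
              [1, 5, 1],
              [1, 1, 1]]
corridor_cost = least_cost_path(resistance, (0, 0), (2, 2))
```

    In practice the corridor is the set of cells whose start-to-goal cost stays within some threshold of this minimum; widening that threshold gives the tiered corridor widths suggested in the abstract.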

  6. Monte Carlo-based subgrid parameterization of vertical velocity and stratiform cloud microphysics in ECHAM5.5-HAM2

    Directory of Open Access Journals (Sweden)

    J. Tonttila

    2013-08-01

    Full Text Available A new method for parameterizing the subgrid variations of vertical velocity and cloud droplet number concentration (CDNC) is presented for general circulation models (GCMs). These parameterizations build on top of existing parameterizations that create stochastic subgrid cloud columns inside the GCM grid cells, which can be employed by the Monte Carlo independent column approximation approach for radiative transfer. The new model version adds a description for vertical velocity in individual subgrid columns, which can be used to compute cloud activation and the subgrid distribution of the number of cloud droplets explicitly. Autoconversion is also treated explicitly in the subcolumn space. This provides a consistent way of simulating the cloud radiative effects with two-moment cloud microphysical properties defined at subgrid scale. The primary impact of the new parameterizations is to decrease the CDNC over polluted continents, while over the oceans the impact is smaller. Moreover, the lower CDNC induces a stronger autoconversion of cloud water to rain. The strongest reduction in CDNC and cloud water content over the continental areas promotes weaker shortwave cloud radiative effects (SW CREs) even after retuning the model. However, compared to the reference simulation, a slightly stronger SW CRE is seen e.g. over mid-latitude oceans, where CDNC remains similar to the reference simulation, and the in-cloud liquid water content is slightly increased after retuning the model.
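    The key point — activating droplets per subcolumn with its own sampled vertical velocity, rather than once at the grid-mean velocity — matters because activation is nonlinear in w. A minimal sketch, with a hypothetical saturating activation curve standing in for a real activation scheme:

```python
import numpy as np

rng = np.random.default_rng(1)

def activated_cdnc(w, n_aerosol=500.0):
    """Illustrative activation curve: cloud droplet number (cm^-3)
    activated at updraft speed w (m/s); saturates at the available
    aerosol number. Hypothetical shape, standing in for a real
    activation parameterization."""
    return np.where(w > 0.0, n_aerosol * (1.0 - np.exp(-2.0 * w)), 0.0)

# Grid-mean updraft plus a Gaussian subgrid distribution over subcolumns.
w_mean, w_std, n_sub = 0.2, 0.3, 1000
w_sub = rng.normal(w_mean, w_std, n_sub)

cdnc_subgrid = activated_cdnc(w_sub).mean()            # explicit subcolumn treatment
cdnc_gridmean = activated_cdnc(np.array([w_mean]))[0]  # single grid-mean call
```

    Because downdraft subcolumns activate nothing while strong-updraft subcolumns saturate, the subcolumn average generally differs from the grid-mean estimate, which is exactly the effect the parameterization captures.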

  7. A review of the theoretical basis for bulk mass flux convective parameterization

    Directory of Open Access Journals (Sweden)

    R. S. Plant

    2010-04-01

    Full Text Available Most parameterizations for precipitating convection in use today are bulk schemes, in which an ensemble of cumulus elements with different properties is modelled as a single, representative entraining-detraining plume. We review the underpinning mathematical model for such parameterizations, in particular by comparing it with spectral models in which elements are not combined into the representative plume. The chief merit of a bulk model is that the representative plume can be described by an equation set with the same structure as that which describes each element in a spectral model. The equivalence relies on an ansatz for detrained condensate introduced by Yanai et al. (1973) and on a simplified microphysics. There are also conceptual differences in the closure of bulk and spectral parameterizations. In particular, we show that the convective quasi-equilibrium closure of Arakawa and Schubert (1974) for spectral parameterizations cannot be carried over to a bulk parameterization in a straightforward way. Quasi-equilibrium of the cloud work function assumes a timescale separation between a slow forcing process and a rapid convective response. But, for the natural bulk analogue to the cloud work function, the relevant forcing is characterised by a different timescale, and so its quasi-equilibrium entails a different physical constraint. Closures of bulk parameterizations that use a parcel value of CAPE do not suffer from this timescale issue. However, the Yanai et al. (1973) ansatz must be invoked as a necessary ingredient of those closures.

  8. A simple parameterization for the rising velocity of bubbles in a liquid pool

    Energy Technology Data Exchange (ETDEWEB)

    Park, Sung Hoon [Dept. of Environmental Engineering, Sunchon National University, Suncheon (Korea, Republic of); Park, Chang Hwan; Lee, Jin Yong; Lee, Byung Chul [FNC Technology, Co., Ltd., Yongin (Korea, Republic of)

    2017-06-15

    The determination of the shape and rising velocity of gas bubbles in a liquid pool is of great importance in analyzing the radioactive aerosol emissions from nuclear power plant accidents in terms of the fission product release rate and the pool scrubbing efficiency of radioactive aerosols. This article suggests a simple parameterization for the gas bubble rising velocity as a function of the volume-equivalent bubble diameter; this parameterization does not require prior knowledge of bubble shape. This is more convenient than previously suggested parameterizations because it is given as a single explicit formula. It is also shown that a bubble shape diagram, which is very similar to Grace's diagram, can be easily generated using the parameterization suggested in this article. Furthermore, the boundaries among the three bubble shape regimes in the E_o–R_e plane and the condition for the bypass of the spheroidal regime can be delineated directly from the parameterization formula. Therefore, the parameterization suggested in this article appears to be useful not only in easily determining the bubble rising velocity (e.g., in postulated severe accident analysis codes) but also in understanding the trend of bubble shape change due to bubble growth.

  9. Balancing accuracy, efficiency, and flexibility in a radiative transfer parameterization for dynamical models

    Science.gov (United States)

    Pincus, R.; Mlawer, E. J.

    2017-12-01

    Radiation is a key process in numerical models of the atmosphere. The problem is well-understood and the parameterization of radiation has seen relatively few conceptual advances in the past 15 years. It is nonetheless often the single most expensive component of all physical parameterizations despite being computed less frequently than other terms. This combination of cost and maturity suggests value in a single radiation parameterization that could be shared across models; devoting effort to a single parameterization might allow for fine tuning for efficiency. The challenge lies in the coupling of this parameterization to many disparate representations of clouds and aerosols. This talk will describe RRTMGP, a new radiation parameterization that seeks to balance efficiency and flexibility. This balance is struck by isolating computational tasks in "kernels" that expose as much fine-grained parallelism as possible. These have simple interfaces and are interoperable across programming languages so that they might be replaced by alternative implementations in domain-specific languages. Coupling to the host model makes use of object-oriented features of Fortran 2003, minimizing branching within the kernels and the amount of data that must be transferred. We will show accuracy and efficiency results for a globally-representative set of atmospheric profiles using a relatively high-resolution spectral discretization.

  10. On The Importance of Connecting Laboratory Measurements of Ice Crystal Growth with Model Parameterizations: Predicting Ice Particle Properties

    Science.gov (United States)

    Harrington, J. Y.

    2017-12-01

    Parameterizing the growth of ice particles in numerical models is at an interesting crossroads. Most parameterizations developed in the past, including some that I have developed, parse model ice into numerous categories based primarily on the growth mode of the particle. Models routinely possess smaller ice, snow crystals, aggregates, graupel, and hail. The snow and ice categories in some models are further split into subcategories to account for the various shapes of ice. There has been a relatively recent shift towards a new class of microphysical models that predict the properties of ice particles instead of using multiple categories and subcategories. Particle property models predict the physical characteristics of ice, such as aspect ratio, maximum dimension, effective density, rime density, effective area, and so forth. These models are attractive in the sense that particle characteristics evolve naturally in time and space without the need for numerous (and somewhat artificial) transitions among pre-defined classes. However, particle property models often require fundamental parameters that are typically derived from laboratory measurements. For instance, the evolution of particle shape during vapor depositional growth requires knowledge of the growth efficiencies for the various axes of the crystals, which in turn depend on surface parameters that can only be determined in the laboratory. The evolution of particle shapes and density during riming, aggregation, and melting requires data on the redistribution of mass across a crystal's axes as that crystal collects water drops, ice crystals, or melts. Predicting the evolution of particle properties based on laboratory-determined parameters has a substantial influence on the evolution of some cloud systems. Radiatively-driven cirrus clouds show a broader range of competition between heterogeneous nucleation and homogeneous freezing when ice crystal properties are predicted. Even strongly convective squall

  11. The applicability of the viscous α-parameterization of gravitational instability in circumstellar disks

    Science.gov (United States)

    Vorobyov, E. I.

    2010-01-01

    We study numerically the applicability of the effective-viscosity approach for simulating the effect of gravitational instability (GI) in disks of young stellar objects with different disk-to-star mass ratios ξ. We adopt two α-parameterizations for the effective viscosity based on Lin and Pringle [Lin, D.N.C., Pringle, J.E., 1990. ApJ 358, 515] and Kratter et al. [Kratter, K.M., Matzner, Ch.D., Krumholz, M.R., 2008. ApJ 681, 375] and compare the resultant disk structure, disk and stellar masses, and mass accretion rates with those obtained directly from numerical simulations of self-gravitating disks around low-mass (M∗ ∼ 1.0M⊙) protostars. We find that the effective viscosity can, in principle, simulate the effect of GI in stellar systems with ξ ≲ 0.2–0.3, thus corroborating a similar conclusion by Lodato and Rice [Lodato, G., Rice, W.K.M., 2004. MNRAS 351, 630] that was based on a different α-parameterization. In particular, the Kratter et al. α-parameterization has proven superior to that of Lin and Pringle, because the success of the latter depends crucially on the proper choice of the α-parameter. However, the α-parameterization generally fails in stellar systems with ξ ≳ 0.3, particularly in the Classes 0 and I phases of stellar evolution, yielding too small stellar masses and too large disk-to-star mass ratios. In addition, the time-averaged mass accretion rates onto the star are underestimated in the early disk evolution and greatly overestimated in the late evolution. The failure of the α-parameterization in the case of large ξ is caused by a growing strength of low-order spiral modes in massive disks. Only in the late Class II phase, when the magnitude of spiral modes diminishes and the mode-to-mode interaction ensues, may the effective viscosity be used to simulate the effect of GI in stellar systems with ξ ≳ 0.3. A simple modification of the effective viscosity that takes into account disk fragmentation can somewhat improve

  12. Parameterization of ion-induced nucleation rates based on ambient observations

    Directory of Open Access Journals (Sweden)

    T. Nieminen

    2011-04-01

    Full Text Available Atmospheric ions participate in the formation of new atmospheric aerosol particles, yet their exact role in this process has remained unclear. Here we derive a new simple parameterization for ion-induced nucleation or, more precisely, for the formation rate of charged 2-nm particles. The parameterization is semi-empirical in the sense that it is based on comprehensive results of one-year-long atmospheric cluster and particle measurements in the size range ~1–42 nm within the EUCAARI (European Integrated project on Aerosol Cloud Climate and Air Quality interactions) project. Data from 12 field sites across Europe measured with different types of air ion and cluster mobility spectrometers were used in our analysis, with more in-depth analysis made using data from four stations with concomitant sulphuric acid measurements. The parameterization is given in two slightly different forms: a more accurate one that requires information on sulfuric acid and nucleating organic vapor concentrations, and a simpler one in which this information is replaced with the global radiation intensity. These new parameterizations are applicable to all large-scale atmospheric models that contain size-resolved aerosol microphysics and a scheme to calculate concentrations of sulphuric acid, condensing organic vapours, and cluster ions.

  13. Evaluation of five dry particle deposition parameterizations for incorporation into atmospheric transport models

    Science.gov (United States)

    Khan, Tanvir R.; Perlinger, Judith A.

    2017-10-01

    Despite considerable effort to develop mechanistic dry particle deposition parameterizations for atmospheric transport models, current knowledge has been inadequate to propose quantitative measures of the relative performance of available parameterizations. In this study, we evaluated the performance of five dry particle deposition parameterizations developed by Zhang et al. (2001) (Z01), Petroff and Zhang (2010) (PZ10), Kouznetsov and Sofiev (2012) (KS12), Zhang and He (2014) (ZH14), and Zhang and Shao (2014) (ZS14), respectively. The evaluation was performed in three dimensions: model ability to reproduce observed deposition velocities, Vd (accuracy); the influence of imprecision in input parameter values on the modeled Vd (uncertainty); and identification of the most influential parameter(s) (sensitivity). The accuracy of the modeled Vd was evaluated using observations obtained from five land use categories (LUCs): grass, coniferous and deciduous forests, natural water, and ice/snow. To ascertain the uncertainty in modeled Vd, and quantify the influence of imprecision in key model input parameters, a Monte Carlo uncertainty analysis was performed. The Sobol' sensitivity analysis was conducted with the objective of determining the parameter ranking from the most to the least influential. Comparing the normalized mean bias factors (indicators of accuracy), we find that the ZH14 parameterization is the most accurate for all LUCs except for coniferous forest, for which it is second most accurate. From Monte Carlo simulations, the estimated mean normalized uncertainties in the modeled Vd obtained for seven particle sizes (ranging from 0.005 to 2.5 µm) for the five LUCs are 17, 12, 13, 16, and 27 % for the Z01, PZ10, KS12, ZH14, and ZS14 parameterizations, respectively. From the Sobol' sensitivity results, we suggest that the parameter rankings vary by particle size and LUC for a given parameterization. Overall, for dp = 0.001 to 1.0 µm, friction velocity was one of
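    The accuracy ranking above rests on the normalized mean bias factor. One common symmetric definition (the paper's exact formula may differ) reads +1 as overprediction by a factor of two, -1 as underprediction by a factor of two, and 0 as no bias; the deposition velocities below are hypothetical:

```python
def nmbf(modeled, observed):
    """Normalized mean bias factor: a symmetric score comparing modeled
    and observed means. Positive when the model overpredicts on average,
    negative when it underpredicts; a factor-of-two bias maps to +/-1.
    (One common definition; not necessarily the paper's exact formula.)"""
    m = sum(modeled) / len(modeled)
    o = sum(observed) / len(observed)
    return m / o - 1.0 if m >= o else 1.0 - o / m

vd_obs = [0.12, 0.35, 0.08]   # hypothetical observed Vd (cm/s)
vd_mod = [0.10, 0.30, 0.09]   # hypothetical modeled Vd (cm/s)
score = nmbf(vd_mod, vd_obs)
```

    Unlike the plain normalized mean bias, this factor treats over- and underprediction symmetrically, which is why it is a natural choice for ranking parameterizations against each other.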

  14. Parameterization of radiocaesium soil-plant transfer using soil characteristics

    International Nuclear Information System (INIS)

    Konoplev, A. V.; Drissner, J.; Klemt, E.; Konopleva, I. V.; Zibold, G.

    1996-01-01

    A model of radionuclide soil-plant transfer is proposed to parameterize the transfer factor by soil and soil solution characteristics. The model is tested with experimental data on the aggregated transfer factor T ag and soil parameters for 8 forest sites in Baden-Wuerttemberg. It is shown that the integral soil-plant transfer factor can be parameterized through radiocaesium exchangeability, capacity of selective sorption sites and ion composition of the soil solution or the water extract. A modified technique of (FES) measurement for soils with interlayer collapse is proposed. (author)

  15. A Multimodal Approach for Determining Brain Networks by Jointly Modeling Functional and Structural Connectivity

    Directory of Open Access Journals (Sweden)

Wenqiong Xue

    2015-02-01

    Full Text Available Recent innovations in neuroimaging technology have provided opportunities for researchers to investigate connectivity in the human brain by examining the anatomical circuitry as well as functional relationships between brain regions. Existing statistical approaches for connectivity generally examine resting-state or task-related functional connectivity (FC) between brain regions or separately examine structural linkages. As a means to determine brain networks, we present a unified Bayesian framework for analyzing FC utilizing the knowledge of associated structural connections, which extends an approach by Patel et al. (2006a) that considers only functional data. We introduce an FC measure that rests upon assessments of functional coherence between regional brain activity identified from functional magnetic resonance imaging (fMRI) data. Our structural connectivity (SC) information is drawn from diffusion tensor imaging (DTI) data, which is used to quantify probabilities of SC between brain regions. We formulate a prior distribution for FC that depends upon the probability of SC between brain regions, with this dependence adhering to structural-functional links revealed by our fMRI and DTI data. We further characterize the functional hierarchy of functionally connected brain regions by defining an ascendancy measure that compares the marginal probabilities of elevated activity between regions. In addition, we describe topological properties of the network, which is composed of connected region pairs, by performing graph theoretic analyses. We demonstrate the use of our Bayesian model using fMRI and DTI data from a study of auditory processing. We further illustrate the advantages of our method by comparisons to methods that only incorporate functional information.

  16. Using graph approach for managing connectivity in integrative landscape modelling

    Science.gov (United States)

    Rabotin, Michael; Fabre, Jean-Christophe; Libres, Aline; Lagacherie, Philippe; Crevoisier, David; Moussa, Roger

    2013-04-01

    In cultivated landscapes, a lot of landscape elements such as field boundaries, ditches or banks strongly impact water flows, mass and energy fluxes. At the watershed scale, these impacts are strongly conditioned by the connectivity of these landscape elements. An accurate representation of these elements and of their complex spatial arrangements is therefore of great importance for modelling and predicting these impacts. We developed in the framework of the OpenFLUID platform (Software Environment for Modelling Fluxes in Landscapes) a digital landscape representation that takes into account the spatial variabilities and connectivities of diverse landscape elements through the application of graph theory concepts. The proposed landscape representation considers spatial units connected together to represent the flux exchanges or any other information exchanges. Each spatial unit of the landscape is represented as a node of a graph and relations between units as graph connections. The connections are of two types - parent-child connection and up/downstream connection - which allows OpenFLUID to handle hierarchical graphs. Connections can also carry information, and graph evolution during simulation is possible (connections or elements modifications). This graph approach allows a better genericity of landscape representation, a management of complex connections, and facilitates the development of new landscape representation algorithms. Graph management is fully operational in OpenFLUID for developers or modelers, and several graph tools are available, such as graph traversal algorithms or graph displays. Graph representation can be managed i) manually by the user (for example in simple catchments) through XML-based files in an easily editable and readable format or ii) by using methods of the OpenFLUID-landr library which is an OpenFLUID library relying on common open-source spatial libraries (ogr vector, geos topologic vector and gdal raster libraries). Open
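    The idea of spatial units as graph nodes with up/downstream connections can be sketched with plain dictionaries: a flux-accumulation pass visits units in topological order and forwards each unit's local production downstream. The unit names and runoff values are hypothetical, and this is the graph concept only, not the OpenFLUID API:

```python
from collections import defaultdict

# Spatial units as nodes; downstream connections as directed edges
# (fields drain to a ditch, the ditch drains to the outlet).
downstream = {
    "field_A": ["ditch_1"],
    "field_B": ["ditch_1"],
    "ditch_1": ["outlet"],
    "outlet": [],
}
local_runoff = {"field_A": 2.0, "field_B": 3.0, "ditch_1": 0.5, "outlet": 0.0}

def accumulate(downstream, local):
    """Propagate runoff along up/downstream connections in topological
    order: each unit passes on its local production plus everything it
    receives from upstream."""
    indeg = defaultdict(int)
    for u, outs in downstream.items():
        for v in outs:
            indeg[v] += 1
    total = dict(local)
    ready = [u for u in downstream if indeg[u] == 0]
    while ready:
        u = ready.pop()
        for v in downstream[u]:
            total[v] += total[u]
            indeg[v] -= 1
            if indeg[v] == 0:
                ready.append(v)
    return total

flux = accumulate(downstream, local_runoff)
```

    Because connectivity lives in the edge lists, rerouting a ditch or adding a field boundary is a local edit to the graph rather than a change to the simulation code, which is the genericity argument made above.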

  17. Ozonolysis of α-pinene: parameterization of secondary organic aerosol mass fraction

    Directory of Open Access Journals (Sweden)

    R. K. Pathak

    2007-07-01

    Full Text Available Existing parameterizations tend to underpredict the α-pinene aerosol mass fraction (AMF), or yield, by a factor of 2–5 at low organic aerosol concentrations (<5 µg m−3). A wide range of smog chamber results obtained at various conditions (low/high NOx, presence/absence of UV radiation, dry/humid conditions, and temperatures ranging from 15–40°C) collected by various research teams during the last decade are used to derive new parameterizations of the SOA formation from α-pinene ozonolysis. Parameterizations are developed by fitting experimental data to a basis set of saturation concentrations (from 10^−2 to 10^4 µg m−3) using an absorptive equilibrium partitioning model. Separate parameterizations for α-pinene SOA mass fractions are developed for: (1) Low NOx, dark, and dry conditions; (2) Low NOx, UV, and dry conditions; (3) Low NOx, dark, and high RH conditions; (4) High NOx, dark, and dry conditions; (5) High NOx, UV, and dry conditions. According to the proposed parameterizations the α-pinene SOA mass fractions in an atmosphere with 5 µg m−3 of organic aerosol range from 0.032 to 0.1 for reacted α-pinene concentrations in the 1 ppt to 5 ppb range.
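    The absorptive equilibrium partitioning model used for the fit has a standard closed form: the aerosol mass fraction at organic aerosol concentration C_OA is the sum over basis-set bins of alpha_i / (1 + C*_i / C_OA). The basis-set coefficients below are hypothetical placeholders, not the fitted values from the paper:

```python
def aerosol_mass_fraction(c_oa, basis):
    """Absorptive equilibrium partitioning over a volatility basis set:
    AMF(C_OA) = sum_i alpha_i / (1 + C*_i / C_OA), where alpha_i is the
    mass yield assigned to saturation concentration C*_i (µg m^-3)."""
    return sum(a / (1.0 + c_star / c_oa) for c_star, a in basis)

# Hypothetical basis-set fit (C*_i in µg m^-3, alpha_i dimensionless);
# illustrative only, not the paper's coefficients.
basis = [(1.0, 0.03), (10.0, 0.06), (100.0, 0.15), (1000.0, 0.40)]

amf_5 = aerosol_mass_fraction(5.0, basis)   # AMF at C_OA = 5 µg m^-3
```

    The form makes the low-concentration behaviour transparent: as C_OA shrinks, only the least volatile bins (small C*_i) still partition to the particle phase, which is exactly where older parameterizations underpredicted.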

  18. Distance parameterization for efficient seismic history matching with the ensemble Kalman Filter

    NARCIS (Netherlands)

    Leeuwenburgh, O.; Arts, R.

    2012-01-01

    The Ensemble Kalman Filter (EnKF), in combination with travel-time parameterization, provides a robust and flexible method for quantitative multi-model history matching to time-lapse seismic data. A disadvantage of the parameterization in terms of travel-times is that it requires simulation of
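    The EnKF analysis step at the heart of this matching is standard: a Kalman gain built from ensemble statistics blends each forecast member toward a (perturbed) observation. A scalar sketch with a directly observed state standing in for a travel-time attribute; the numbers are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(2)

def enkf_update(ensemble, y_obs, obs_var):
    """Perturbed-observation EnKF analysis step for a directly observed
    scalar state (e.g., a travel-time-parameterized seismic attribute
    matched to its time-lapse observation). Returns the analysis
    ensemble and the Kalman gain."""
    n = ensemble.size
    c_xx = np.var(ensemble, ddof=1)      # forecast ensemble variance
    gain = c_xx / (c_xx + obs_var)       # Kalman gain, always in (0, 1)
    perturbed = y_obs + rng.normal(0.0, np.sqrt(obs_var), n)
    return ensemble + gain * (perturbed - ensemble), gain

prior = rng.normal(10.0, 2.0, 100)       # forecast ensemble of a parameter
posterior, gain = enkf_update(prior, y_obs=7.0, obs_var=1.0)
```

    In the multi-model history-matching setting the same update is applied jointly to reservoir parameters and predicted travel-times through their ensemble cross-covariance; the scalar case shows the mechanics.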

  19. Device-Free Localization via an Extreme Learning Machine with Parameterized Geometrical Feature Extraction

    Directory of Open Access Journals (Sweden)

    Jie Zhang

    2017-04-01

    Full Text Available Device-free localization (DFL) is becoming one of the new technologies in wireless localization field, due to its advantage that the target to be localized does not need to be attached to any electronic device. In the radio-frequency (RF) DFL system, radio transmitters (RTs) and radio receivers (RXs) are used to sense the target collaboratively, and the location of the target can be estimated by fusing the changes of the received signal strength (RSS) measurements associated with the wireless links. In this paper, we will propose an extreme learning machine (ELM) approach for DFL, to improve the efficiency and the accuracy of the localization algorithm. Different from the conventional machine learning approaches for wireless localization, in which the above differential RSS measurements are trivially used as the only input features, we introduce the parameterized geometrical representation for an affected link, which consists of its geometrical intercepts and differential RSS measurement. Parameterized geometrical feature extraction (PGFE) is performed for the affected links and the features are used as the inputs of ELM. The proposed PGFE-ELM for DFL is trained in the offline phase and performed for real-time localization in the online phase, where the estimated location of the target is obtained through the created ELM. PGFE-ELM has the advantages that the affected links used by ELM in the online phase can be different from those used for training in the offline phase, and can be more robust to deal with the uncertain combination of the detectable wireless links. Experimental results show that the proposed PGFE-ELM can improve the localization accuracy and learning speed significantly compared with a number of the existing machine learning and DFL approaches, including the weighted K-nearest neighbor (WKNN), support vector machine (SVM), back propagation neural network (BPNN), as well as the well-known radio tomographic imaging (RTI) DFL approach.
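    The learning-speed advantage of the ELM comes from its structure: hidden-layer weights are drawn at random and frozen, and only the output weights are solved, in one least-squares shot, with no backpropagation. A minimal sketch; the two-class toy features are hypothetical stand-ins for PGFE features of affected links, not an RF-DFL trace:

```python
import numpy as np

rng = np.random.default_rng(3)

def elm_train(X, T, n_hidden=30):
    """Extreme learning machine: a random, fixed hidden layer followed by
    output weights solved in closed form by least squares."""
    W = rng.normal(size=(X.shape[1], n_hidden))
    b = rng.normal(size=n_hidden)
    H = np.tanh(X @ W + b)        # random feature map (never trained)
    beta = np.linalg.pinv(H) @ T  # Moore-Penrose least-squares solution
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.argmax(np.tanh(X @ W + b) @ beta, axis=1)

# Hypothetical feature vectors for two well-separated target locations.
X = np.vstack([rng.normal(0.0, 0.5, (40, 3)), rng.normal(3.0, 0.5, (40, 3))])
T = np.repeat(np.eye(2), 40, axis=0)   # one-hot location labels

W, b, beta = elm_train(X, T)
accuracy = np.mean(elm_predict(X, W, b, beta) == np.argmax(T, axis=1))
```

    Because training is a single pseudo-inverse, retraining when the set of detectable links changes is cheap, which matches the robustness claim made for PGFE-ELM above.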

  20. Measuring the magnetic connectivity of the geosynchronous region of the magnetosphere

    International Nuclear Information System (INIS)

    Thomsen, M.; Hones, E.; McComas, D.; Reeves, G.; Weiss, L.

    1996-01-01

    This is the final report of a three-year, Laboratory-Directed Research and Development (LDRD) project at the Los Alamos National Laboratory (LANL). The purpose of this project was to determine the magnetic connectivity of the geosynchronous region of the magnetosphere to the auroral zone in the polar ionosphere in order to test and refine current magnetospheric magnetic field models. The authors used plasma data from LANL instruments on three geosynchronous satellites and from USAF instruments on three low-altitude, polar-orbiting, DMSP satellites. Magnetic connectivity is tested by comparing plasma energy spectra at DMSP and geosynchronous satellites when they are in near conjunction. The times of closest conjugacy are used to evaluate the field models. They developed the tools for each step of the process and applied them to the study of a one-week test set of conjunctions. They automated the analysis tools and applied them to four months of two-satellite observations. This produced a database of about 130 definitive magnetic conjunctions. They compared this database with the predictions of the widely-used Tsyganenko magnetic field model and showed that in most cases one of the various parameterizations of the model could reproduce the observed field line connection. Further, they explored various measurables (e.g., magnetospheric activity indices or the geosynchronous field orientation) that might point to the appropriate parameterization of the model for these conjunctions, and ultimately, for arbitrary times

  1. An individual-based modelling approach to estimate landscape connectivity for bighorn sheep (Ovis canadensis)

    Directory of Open Access Journals (Sweden)

    Corrie H. Allen

    2016-05-01

    Full Text Available Background. Preserving connectivity, or the ability of a landscape to support species movement, is among the most commonly recommended strategies to reduce the negative effects of climate change and human land use development on species. Connectivity analyses have traditionally used a corridor-based approach and rely heavily on least-cost path modeling and circuit theory to delineate corridors. Individual-based models are gaining popularity as a potentially more ecologically realistic method of estimating landscape connectivity; however, this remains a relatively unexplored approach. We sought to explore the utility of a simple, individual-based model as a land-use management support tool for identifying and implementing landscape connectivity. Methods. We created an individual-based model of bighorn sheep (Ovis canadensis) that simulates a bighorn sheep traversing a landscape by following simple movement rules. The model was calibrated for bighorn sheep in the Okanagan Valley, British Columbia, Canada, a region containing isolated herds that are vital to conservation of the species in its northern range. Simulations were run to determine baseline connectivity between subpopulations in the study area. We then applied the model to explore the effects of two land management scenarios on simulated connectivity: restoring natural fire regimes and identifying appropriate sites for interventions that would increase road permeability for bighorn sheep. Results. This model suggests there are no continuous areas of good habitat between current subpopulations of sheep in the study area; however, a series of stepping-stones or circuitous routes could facilitate movement between subpopulations and into currently unoccupied, yet suitable, bighorn habitat. Restoring natural fire regimes or mimicking fire with prescribed burns and tree removal could considerably increase bighorn connectivity in this area. Moreover, several key road crossing sites that could benefit from

  2. The importance of parameterization when simulating the hydrologic response of vegetative land-cover change

    Science.gov (United States)

    White, Jeremy; Stengel, Victoria; Rendon, Samuel; Banta, John

    2017-08-01

    Computer models of hydrologic systems are frequently used to investigate the hydrologic response to land-cover change. If the modeling results are used to inform resource-management decisions, then providing robust estimates of uncertainty in the simulated response is an important consideration. Here we examine the importance of parameterization, a necessarily subjective process, on uncertainty estimates of the simulated hydrologic response to land-cover change. Specifically, we applied the soil and water assessment tool (SWAT) model to a 1.4 km² watershed in southern Texas to investigate the simulated hydrologic response to brush management (the mechanical removal of woody plants), a discrete land-cover change. The watershed was instrumented before and after brush-management activities were undertaken, and estimates of precipitation, streamflow, and evapotranspiration (ET) are available; these data were used to condition and verify the model. The role of parameterization in brush-management simulation was evaluated by constructing two models, one with 12 adjustable parameters (reduced parameterization) and one with 1305 adjustable parameters (full parameterization). Both models were subjected to global sensitivity analysis as well as Monte Carlo and generalized likelihood uncertainty estimation (GLUE) conditioning to identify important model inputs and to estimate uncertainty in several quantities of interest related to brush management. Many realizations from both parameterizations were identified as behavioral in that they reproduce daily mean streamflow acceptably well according to the Nash-Sutcliffe model efficiency coefficient, percent bias, and coefficient of determination.
However, the total volumetric ET difference resulting from simulated brush management remains highly uncertain after conditioning to daily mean streamflow, indicating that streamflow data alone are not sufficient to inform the model inputs that influence the simulated outcomes of brush management
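
    The behavioral screening described above rests on standard goodness-of-fit metrics. A minimal sketch of two of them (Nash-Sutcliffe efficiency and percent bias) is given below; the flow values are invented, and sign conventions for percent bias vary across the literature (positive indicates overestimation here):

```python
import numpy as np

def nse(obs, sim):
    """Nash-Sutcliffe model efficiency: 1 is perfect; values <= 0 mean the
    simulation is no better a predictor than the observed mean."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 1.0 - np.sum((sim - obs) ** 2) / np.sum((obs - obs.mean()) ** 2)

def pbias(obs, sim):
    """Percent bias of simulated vs. observed totals."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 100.0 * np.sum(sim - obs) / np.sum(obs)

obs = np.array([1.0, 2.0, 3.0, 4.0, 5.0])  # invented daily mean flows
sim = np.array([1.1, 1.9, 3.2, 3.8, 5.1])
print(nse(obs, sim), pbias(obs, sim))
```

    In a GLUE-style analysis, realizations whose metrics pass chosen thresholds (e.g., NSE above some cutoff) would be retained as behavioral.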

  3. Air quality modeling: evaluation of chemical and meteorological parameterizations

    International Nuclear Information System (INIS)

    Kim, Youngseob

    2011-01-01

    The influence of chemical mechanisms and meteorological parameterizations on pollutant concentrations calculated with an air quality model is studied. The influence of the differences between two gas-phase chemical mechanisms on the formation of ozone and aerosols in Europe is low on average. For ozone, the large local differences are mainly due to the uncertainty associated with the kinetics of nitrogen monoxide (NO) oxidation reactions on the one hand and the representation of different pathways for the oxidation of aromatic compounds on the other hand. The aerosol concentrations are mainly influenced by the selection of all major precursors of secondary aerosols and the explicit treatment of chemical regimes corresponding to the nitrogen oxides (NOx) levels. The influence of the meteorological parameterizations on the concentrations of aerosols and their vertical distribution is evaluated over the Paris region in France by comparison to lidar data. The influence of the parameterization of the dynamics in the atmospheric boundary layer is important; however, it is the use of an urban canopy model that significantly improves the modeling of the pollutant vertical distribution (author) [fr

  4. Towards improved parameterization of a macroscale hydrologic model in a discontinuous permafrost boreal forest ecosystem

    Directory of Open Access Journals (Sweden)

    A. Endalamaw

    2017-09-01

    Full Text Available Modeling hydrological processes in the Alaskan sub-arctic is challenging because of the extreme spatial heterogeneity in soil properties and vegetation communities. Nevertheless, modeling and predicting hydrological processes is critical in this region due to its vulnerability to the effects of climate change. Coarse-spatial-resolution datasets used in land surface modeling pose a new challenge in simulating the spatially distributed and basin-integrated processes since these datasets do not adequately represent the small-scale hydrological, thermal, and ecological heterogeneity. The goal of this study is to improve the prediction capacity of mesoscale to large-scale hydrological models by introducing a small-scale parameterization scheme, which better represents the spatial heterogeneity of soil properties and vegetation cover in the Alaskan sub-arctic. The small-scale parameterization schemes are derived from observations and a sub-grid parameterization method in the two contrasting sub-basins of the Caribou Poker Creek Research Watershed (CPCRW) in Interior Alaska: one nearly permafrost-free (LowP) sub-basin and one permafrost-dominated (HighP) sub-basin. The sub-grid parameterization method used in the small-scale parameterization scheme is derived from the watershed topography. We found that observed soil thermal and hydraulic properties – including the distribution of permafrost and vegetation cover heterogeneity – are better represented in the sub-grid parameterization method than in the coarse-resolution datasets. Parameters derived from the coarse-resolution datasets and from the sub-grid parameterization method are implemented into the variable infiltration capacity (VIC) mesoscale hydrological model to simulate runoff, evapotranspiration (ET), and soil moisture in the two sub-basins of the CPCRW. Simulated hydrographs based on the small-scale parameterization capture most of the peak and low flows, with similar accuracy in both sub

  5. Impact of cloud parameterization on the numerical simulation of a super cyclone

    Energy Technology Data Exchange (ETDEWEB)

    Deshpande, M.S.; Pattnaik, S.; Salvekar, P.S. [Indian Institute of Tropical Meteorology, Pune (India)

    2012-07-01

    This study examines the role of the parameterization of convection and explicit moisture processes on the simulated track, intensity and inner-core structure of the Orissa super cyclone (1999) in the Bay of Bengal (north Indian Ocean). Sensitivity experiments are carried out to examine the impact of cumulus parameterization schemes (CPS) using the MM5 model (Version 3.7) in a two-way nested domain (D1 and D2) configuration at horizontal resolutions of 45 and 15 km. Three different cumulus parameterization schemes, namely Grell (Gr), Betts-Miller (BM) and updated Kain-Fritsch (KF2), are tested. It is noted that both track and intensity are very sensitive to the CPS and, comparatively, KF2 predicts them reasonably well. In particular, the rapid intensification phase of the super cyclone is best simulated by KF2 compared to the other CPS. To examine the effect of the cumulus parameterization scheme at high resolution (5 km), a three-domain configuration (45-15-5 km resolution) is utilized. Based on initial results, the KF2 scheme is used for both outer domains (D1 and D2). Two experiments are conducted: one in which KF2 is used as the CPS and another in which no CPS is used in the third domain. The intensity is well predicted when no CPS is used in the innermost domain. Sensitivity experiments are also carried out to examine the impact of microphysics parameterization schemes (MPS). Four cloud microphysics parameterization schemes, namely mixed phase (MP), Goddard microphysics with Graupel (GG), Reisner Graupel (RG) and Schultz (Sc), are tested in these experiments. It is noted that the tropical cyclone tracks and intensity variation have considerable sensitivity to the varying cloud microphysical parameterization schemes. The MPS of MP and Sc could capture the rapid intensification phase very well. The final intensity is well predicted by MP but overestimated by Sc. The MPS of GG and RG underestimate the intensity. (orig.)

  6. Polynomial parameterized representation of macroscopic cross section for PWR reactor

    International Nuclear Information System (INIS)

    Fiel, Joao Claudio B.

    2015-01-01

    The purpose of this work is to describe, by means of Tchebychev polynomials, a parameterized representation of the homogenized macroscopic cross section for a PWR fuel element as a function of soluble boron concentration, moderator temperature, fuel temperature, moderator density and ²³⁵U enrichment. The analyzed cross sections are: fission, scattering, total, transport, absorption and capture. This parameterization enables a quick and easy determination of the problem-dependent cross sections to be used in few-group calculations. The methodology presented here makes it possible to provide cross-section values for PWR core calculations without the need to generate them through computer code calculations using standard steps. The results obtained with the parameterized cross-section functions, when compared with the cross sections generated by SCALE code calculations, or when compared with k-inf generated by MCNPX code calculations, show a difference of less than 0.7 percent. (author)
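
    The one-dimensional version of such a fit is straightforward with NumPy's Chebyshev utilities; the multivariate case in the record would use tensor products of these basis functions. The sketch below fits an invented, smooth cross-section-like curve against a single variable (boron concentration); the numbers are purely illustrative, not reactor data:

```python
import numpy as np
from numpy.polynomial import chebyshev as C

# Hypothetical dependence of a macroscopic cross section (1/cm) on
# soluble boron concentration (ppm) -- invented coefficients.
x = np.linspace(0.0, 2000.0, 50)
sigma = 0.05 + 1e-5 * x - 2e-9 * x**2

coef = C.chebfit(x, sigma, deg=4)     # Chebyshev series coefficients
sigma_fit = C.chebval(x, coef)        # evaluate the fitted series

max_rel_err = float(np.max(np.abs(sigma_fit - sigma) / sigma))
print(max_rel_err)  # essentially machine precision for this smooth curve
```

    Once the coefficients are stored, evaluating the parameterized cross section at any state point is a cheap polynomial evaluation, which is the practical payoff the abstract describes.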

  7. Analyses of the stratospheric dynamics simulated by a GCM with a stochastic nonorographic gravity wave parameterization

    Science.gov (United States)

    Serva, Federico; Cagnazzo, Chiara; Riccio, Angelo

    2016-04-01

    version of the model, the default and a new stochastic version, in which the value of the perturbation field at launching level is not constant and uniform, but extracted at each time-step and grid-point from a given PDF. With this approach we are trying to add further variability to the effects given by the deterministic NOGW parameterization: the impact on the simulated climate will be assessed focusing on the Quasi-Biennial Oscillation of the equatorial stratosphere (known to be driven also by gravity waves) and on the variability of the mid-to-high latitudes atmosphere. The different characteristics of the circulation will be compared with recent reanalysis products in order to determine the advantages of the stochastic approach over the traditional deterministic scheme.

  8. Developing a stochastic parameterization to incorporate plant trait variability into ecohydrologic modeling

    Science.gov (United States)

    Liu, S.; Ng, G. H. C.

    2017-12-01

    The global plant database has revealed that plant traits can vary more within a plant functional type (PFT) than among different PFTs, indicating that the current paradigm in ecohydrologic models of specifying fixed parameters based solely on PFT could potentially bias simulations. Although some recent modeling studies have attempted to incorporate this observed plant trait variability, many failed to consider uncertainties due to sparse global observations, or they omitted spatial and/or temporal variability in the traits. Here we present a stochastic parameterization for prognostic vegetation simulations that is stochastic in time and space in order to represent plant trait plasticity - the process by which trait differences arise. We have developed the new PFT parameterization within the Community Land Model 4.5 (CLM 4.5) and tested the method for a desert shrubland watershed in the Mojave Desert, where fixed parameterizations cannot represent acclimation to desert conditions. Spatiotemporally correlated plant trait parameters were first generated based on TRY database statistics and were then used to implement ensemble runs for the study area. The new PFT parameterization was then further conditioned on field measurements of soil moisture and remotely sensed observations of leaf area index to constrain uncertainties in the sparse global database. Our preliminary results show that incorporating data-conditioned, variable PFT parameterizations strongly affects simulated soil moisture and water fluxes, compared with default simulations. The results also provide new insights about correlations among plant trait parameters and between traits and environmental conditions in the desert shrubland watershed. Our proposed stochastic PFT parameterization method for ecohydrological models has great potential for advancing our understanding of how terrestrial ecosystems adapt to variable environmental conditions.

  9. Connecting Competences and Pedagogical Approaches for Sustainable Development in Higher Education: A Literature Review and Framework Proposal

    Directory of Open Access Journals (Sweden)

    Rodrigo Lozano

    2017-10-01

    Full Text Available Research into and practice of Higher Education for Sustainable Development (HESD have been increasing during the last two decades. These have focused on providing sustainability education to future generations of professionals. In this context, there has been considerable progress in the incorporation of SD in universities’ curricula. Most of these efforts have focussed on the design and delivery of sustainability-oriented competences. Some peer-reviewed articles have proposed different pedagogical approaches to better deliver SD in these courses; however, there has been limited research on the connection between how courses are delivered (pedagogical approaches and how they may affect sustainability competences. This paper analyses competences and pedagogical approaches, using hermeneutics to connect these in a framework based on twelve competences and twelve pedagogical approaches found in the literature. The framework connects the course aims to delivery in HESD by highlighting the connections between pedagogical approaches and competences in a matrix structure. The framework is aimed at helping educators in creating and updating their courses to provide a more complete, holistic, and systemic sustainability education to future leaders, decision makers, educators, and change agents. To better develop mind-sets and actions of future generations, we must provide students with a complete set of sustainability competences.

  10. Evaluation of five dry particle deposition parameterizations for incorporation into atmospheric transport models

    Directory of Open Access Journals (Sweden)

    T. R. Khan

    2017-10-01

    Full Text Available Despite considerable effort to develop mechanistic dry particle deposition parameterizations for atmospheric transport models, current knowledge has been inadequate to propose quantitative measures of the relative performance of available parameterizations. In this study, we evaluated the performance of five dry particle deposition parameterizations developed by Zhang et al. (2001) (Z01), Petroff and Zhang (2010) (PZ10), Kouznetsov and Sofiev (2012) (KS12), Zhang and He (2014) (ZH14), and Zhang and Shao (2014) (ZS14), respectively. The evaluation was performed in three dimensions: model ability to reproduce observed deposition velocities, Vd (accuracy); the influence of imprecision in input parameter values on the modeled Vd (uncertainty); and identification of the most influential parameter(s) (sensitivity). The accuracy of the modeled Vd was evaluated using observations obtained from five land use categories (LUCs): grass, coniferous and deciduous forests, natural water, and ice/snow. To ascertain the uncertainty in the modeled Vd, and quantify the influence of imprecision in key model input parameters, a Monte Carlo uncertainty analysis was performed. The Sobol' sensitivity analysis was conducted with the objective of determining the parameter ranking from the most to the least influential. Comparing the normalized mean bias factors (indicators of accuracy), we find that the ZH14 parameterization is the most accurate for all LUCs except for coniferous forest, for which it is second most accurate. From Monte Carlo simulations, the estimated mean normalized uncertainties in the modeled Vd obtained for seven particle sizes (ranging from 0.005 to 2.5 µm) for the five LUCs are 17, 12, 13, 16, and 27% for the Z01, PZ10, KS12, ZH14, and ZS14 parameterizations, respectively. From the Sobol' sensitivity results, we suggest that the parameter rankings vary by particle size and LUC for a given parameterization. Overall, for dp = 0.001 to 1.0
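
    The accuracy metric named above, the normalized mean bias factor, is commonly attributed to Yu et al. (2006); a minimal sketch of its usual piecewise form is given below, with invented deposition-velocity values. Verify the exact definition against the paper before relying on it:

```python
import numpy as np

def nmbf(obs, mod):
    """Normalized mean bias factor: symmetric measure of model
    over-/under-prediction; 0 means unbiased on average."""
    o, m = float(np.mean(obs)), float(np.mean(mod))
    return m / o - 1.0 if m >= o else 1.0 - o / m

obs = np.array([0.2, 0.5, 1.0, 1.5])   # invented observed Vd, cm/s
mod = np.array([0.3, 0.4, 1.2, 1.4])   # invented modeled Vd
print(nmbf(obs, mod))
```

    Unlike a simple mean bias, this factor treats factor-of-two overestimation and underestimation symmetrically, which is why it suits cross-parameterization comparisons.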

  11. Dispersal and habitat connectivity in complex heterogeneous landscapes: an analysis with a GIS based random walk model

    NARCIS (Netherlands)

    Schippers, P.; Verboom, J.; Knaapen, J.P.; Apeldoorn, van R.

    1996-01-01

    A grid-based random walk model has been developed to simulate animal dispersal, taking landscape heterogeneity and linear barriers such as roads and rivers into account. The model can be used to estimate connectivity and has been parameterized for the badger in the central part of the Netherlands.
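
    The record gives no model details beyond the description above; as an illustrative sketch of the general idea (a walker on a grid whose steps across barrier cells succeed only with some crossing probability), one might write something like the following. The landscape, movement rules, and crossing probability are all invented:

```python
import numpy as np

def random_walk(grid, start, steps, p_cross=0.1, seed=0):
    """Grid-based random walk; cells equal to 1 are linear barriers
    (roads/rivers) that are crossed only with probability p_cross."""
    rng = np.random.default_rng(seed)
    moves = [(-1, 0), (1, 0), (0, -1), (0, 1)]
    r, c = start
    path = [start]
    for _ in range(steps):
        dr, dc = moves[rng.integers(4)]
        nr, nc = r + dr, c + dc
        if not (0 <= nr < grid.shape[0] and 0 <= nc < grid.shape[1]):
            continue  # stay inside the landscape
        if grid[nr, nc] == 1 and rng.random() > p_cross:
            continue  # barrier deflects the disperser
        r, c = nr, nc
        path.append((r, c))
    return path

landscape = np.zeros((20, 20), dtype=int)
landscape[:, 10] = 1   # a road bisecting the landscape
path = random_walk(landscape, start=(10, 2), steps=500)
# Fraction of visited cells east of the road: a crude connectivity proxy
frac_east = sum(c > 10 for _, c in path) / len(path)
print(frac_east)
```

    Running many such walkers from many start cells and counting arrivals in target patches yields the kind of connectivity estimate the record describes.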

  12. Characterizing the Surface Connectivity of Depressional Wetlands: Linking Remote Sensing and Hydrologic Modeling Approaches

    Science.gov (United States)

    Christensen, J.; Evenson, G. R.; Vanderhoof, M.; Wu, Q.; Golden, H. E.; Lane, C.

    2017-12-01

    Surface connectivity of wetlands in the 700,000 km² Prairie Pothole Region of North America (PPR) can occur through fill-spill and fill-merge mechanisms, with some wetlands eventually spilling into stream/river systems. These wetland-to-wetland and wetland-to-stream connections vary both spatially and temporally in PPR watersheds and are important to understanding hydrologic and biogeochemical processes in the landscape. To explore how best to characterize spatial and temporal variability in aquatic connectivity, we compared three approaches: 1) hydrological modeling alone, 2) remotely sensed data alone, and 3) integrating remotely sensed data into a hydrological model. These approaches were tested in the Pipestem Creek Watershed, North Dakota, across a drought-to-deluge cycle (1990-2011). A Soil and Water Assessment Tool (SWAT) model was modified to include the water storage capacity of individual non-floodplain wetlands identified in the National Wetland Inventory (NWI) dataset. The SWAT-NWI model simulated the water balance and storage of each wetland and the temporal variability of their hydrologic connections between wetlands during the 21-year study period. However, SWAT-NWI only accounted for fill-spill, and did not allow for the expansion and merging of wetlands situated within larger depressions. Alternatively, we assessed the occurrence of fill-merge mechanisms using inundation maps derived from Landsat images on 19 cloud-free days during the 21 years. We found fill-merge mechanisms to be prevalent across the Pipestem watershed during times of deluge. The SWAT-NWI model was then modified to use LiDAR-derived depressions that account for the potential maximum depression extent, including the merging of smaller wetlands. The inundation maps were used to evaluate the ability of the SWAT-depression model to simulate fill-merge dynamics in addition to fill-spill dynamics throughout the study watershed. Ultimately, using remote sensing to inform and validate

  13. Parameterized Linear Longitudinal Airship Model

    Science.gov (United States)

    Kulczycki, Eric; Elfes, Alberto; Bayard, David; Quadrelli, Marco; Johnson, Joseph

    2010-01-01

    A parameterized linear mathematical model of the longitudinal dynamics of an airship is undergoing development. This model is intended to be used in designing control systems for future airships that would operate in the atmospheres of Earth and remote planets. Heretofore, the development of linearized models of the longitudinal dynamics of airships has been costly in that it has been necessary to perform extensive flight testing and to use system-identification techniques to construct models that fit the flight-test data. The present model is a generic one that can be relatively easily specialized to approximate the dynamics of specific airships at specific operating points, without need for further system identification, and with significantly less flight testing. The approach taken in the present development is to merge the linearized dynamical equations of an airship with techniques for estimation of aircraft stability derivatives, and to thereby make it possible to construct a linearized dynamical model of the longitudinal dynamics of a specific airship from geometric and aerodynamic data pertaining to that airship. (It is also planned to develop a model of the lateral dynamics by use of the same methods.) All of the aerodynamic data needed to construct the model of a specific airship can be obtained from wind-tunnel testing and computational fluid dynamics

  14. A parameterization method and application in breast tomosynthesis dosimetry

    Energy Technology Data Exchange (ETDEWEB)

    Li, Xinhua; Zhang, Da; Liu, Bob [Division of Diagnostic Imaging Physics and Webster Center for Advanced Research and Education in Radiation, Department of Radiology, Massachusetts General Hospital, Boston, Massachusetts 02114 (United States)

    2013-09-15

    Purpose: To present a parameterization method based on singular value decomposition (SVD), and to provide analytical parameterization of the mean glandular dose (MGD) conversion factors from eight references for evaluating breast tomosynthesis dose in the Mammography Quality Standards Act (MQSA) protocol and in the UK, European, and IAEA dosimetry protocols.Methods: MGD conversion factor is usually listed in lookup tables for the factors such as beam quality, breast thickness, breast glandularity, and projection angle. The authors analyzed multiple sets of MGD conversion factors from the Hologic Selenia Dimensions quality control manual and seven previous papers. Each data set was parameterized using a one- to three-dimensional polynomial function of 2–16 terms. Variable substitution was used to improve accuracy. A least-squares fit was conducted using the SVD.Results: The differences between the originally tabulated MGD conversion factors and the results computed using the parameterization algorithms were (a) 0.08%–0.18% on average and 1.31% maximum for the Selenia Dimensions quality control manual, (b) 0.09%–0.66% on average and 2.97% maximum for the published data by Dance et al. [Phys. Med. Biol. 35, 1211–1219 (1990); ibid. 45, 3225–3240 (2000); ibid. 54, 4361–4372 (2009); ibid. 56, 453–471 (2011)], (c) 0.74%–0.99% on average and 3.94% maximum for the published data by Sechopoulos et al. [Med. Phys. 34, 221–232 (2007); J. Appl. Clin. Med. Phys. 9, 161–171 (2008)], and (d) 0.66%–1.33% on average and 2.72% maximum for the published data by Feng and Sechopoulos [Radiology 263, 35–42 (2012)], excluding one sample in (d) that does not follow the trends in the published data table.Conclusions: A flexible parameterization method is presented in this paper, and was applied to breast tomosynthesis dosimetry. The resultant data offer easy and accurate computations of MGD conversion factors for evaluating mean glandular breast dose in the MQSA
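
    The core numerical step named in the record, a least-squares polynomial fit solved through the SVD, can be sketched as follows. The "conversion factor" table here is invented for illustration and is not dosimetry data from any of the cited references:

```python
import numpy as np

def svd_lstsq(A, y, rcond=1e-12):
    """Least-squares solve via the SVD pseudo-inverse, truncating
    singular values below rcond * s_max for numerical stability."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    s_inv = np.where(s > rcond * s.max(), 1.0 / s, 0.0)
    return Vt.T @ (s_inv[:, None] * (U.T @ y[:, None])).ravel()

# Hypothetical tabulated factor as a smooth function of thickness t (cm)
t = np.linspace(2.0, 8.0, 25)
f = 0.9 - 0.08 * t + 0.004 * t**2      # invented "lookup table" values

A = np.vander(t, 3, increasing=True)   # design matrix [1, t, t^2]
coef = svd_lstsq(A, f)
max_abs_err = float(np.max(np.abs(A @ coef - f)))
print(max_abs_err)  # fit reproduces the table to machine precision
```

    The same machinery extends to the multi-variable, higher-term polynomials of the paper by widening the design matrix; variable substitution, as the abstract notes, can improve the conditioning of that matrix.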

  15. An approach to the conversion of the power generated by an offshore wind power farm connected into seawave power generator

    Energy Technology Data Exchange (ETDEWEB)

    Franzitta, Vicenzo; Messineo, Antonio; Trapanese, Marco

    2011-07-01

    The development of renewable energy systems has been underway for the past decades, but the sea-wave energy resource remains under-utilized. This under-utilization has several reasons: the energy concentration in sea waves is low, extraction of this energy requires leading-edge technologies, and conversion of the energy into electrical energy is difficult. This study compares two different methods of connecting sea-wave generators to the network and to an offshore wind power farm. The first method is a decentralized approach: each generator is connected to the grid through an AC converter. The second is a partially centralized approach: a rectifier is connected to each generator, all generators are then connected to a common DC bus, and the power is converted to AC for connection to the grid. This study has shown that the partially centralized approach is more reliable and efficient than the decentralized approach.

  16. A probabilistic approach to quantifying spatial patterns of flow regimes and network-scale connectivity

    Science.gov (United States)

    Garbin, Silvia; Alessi Celegon, Elisa; Fanton, Pietro; Botter, Gianluca

    2017-04-01

    The temporal variability of river flow regime is a key feature structuring and controlling fluvial ecological communities and ecosystem processes. In particular, streamflow variability induced by climate/landscape heterogeneities or other anthropogenic factors significantly affects the connectivity between streams with notable implication for river fragmentation. Hydrologic connectivity is a fundamental property that guarantees species persistence and ecosystem integrity in riverine systems. In riverine landscapes, most ecological transitions are flow-dependent and the structure of flow regimes may affect ecological functions of endemic biota (i.e., fish spawning or grazing of invertebrate species). Therefore, minimum flow thresholds must be guaranteed to support specific ecosystem services, like fish migration, aquatic biodiversity and habitat suitability. In this contribution, we present a probabilistic approach aiming at a spatially-explicit, quantitative assessment of hydrologic connectivity at the network-scale as derived from river flow variability. Dynamics of daily streamflows are estimated based on catchment-scale climatic and morphological features, integrating a stochastic, physically based approach that accounts for the stochasticity of rainfall with a water balance model and a geomorphic recession flow model. The non-exceedance probability of ecologically meaningful flow thresholds is used to evaluate the fragmentation of individual stream reaches, and the ensuing network-scale connectivity metrics. A multi-dimensional Poisson Process for the stochastic generation of rainfall is used to evaluate the impact of climate signature on reach-scale and catchment-scale connectivity. The analysis shows that streamflow patterns and network-scale connectivity are influenced by the topology of the river network and the spatial variability of climatic properties (rainfall, evapotranspiration). 
The framework offers a robust basis for the prediction of the impact of
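
    The reach-scale quantity at the heart of this framework, the non-exceedance probability of an ecological flow threshold, is easy to estimate empirically once daily flows are available. The sketch below uses synthetic lognormal flows (a common skewed-flow assumption, not the paper's stochastic rainfall-runoff model):

```python
import numpy as np

def non_exceedance_prob(flows, threshold):
    """Empirical probability that daily streamflow falls below an
    ecologically meaningful threshold: a reach-fragmentation indicator."""
    flows = np.asarray(flows, float)
    return float(np.mean(flows < threshold))

rng = np.random.default_rng(0)
q = rng.lognormal(mean=1.0, sigma=0.8, size=3650)  # ten years of synthetic flows

p_low = non_exceedance_prob(q, threshold=0.5)      # flow units arbitrary
p_high = non_exceedance_prob(q, threshold=2.0)
print(p_low, p_high)  # higher thresholds are failed more often
```

    Aggregating such per-reach probabilities over the river network (e.g., along paths between habitat patches) yields the network-scale connectivity metrics the contribution describes.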

  17. A shallow convection parameterization for the non-hydrostatic MM5 mesoscale model

    Energy Technology Data Exchange (ETDEWEB)

    Seaman, N.L.; Kain, J.S.; Deng, A. [Pennsylvania State Univ., University Park, PA (United States)

    1996-04-01

    A shallow convection parameterization suitable for the Pennsylvania State University (PSU)/National Center for Atmospheric Research nonhydrostatic mesoscale model (MM5) is being developed at PSU. The parameterization is based on parcel perturbation theory developed in conjunction with a 1-D Mellor-Yamada 1.5-order planetary boundary layer scheme and the Kain-Fritsch deep convection model.

  18. Actual and Idealized Crystal Field Parameterizations for the Uranium Ions in UF4

    Science.gov (United States)

    Gajek, Z.; Mulak, J.; Krupa, J. C.

    1993-12-01

    The crystal field parameters for the actual coordination symmetries of the uranium ions in UF4, C2 and C1, and for their idealizations to D2, C2v, D4, D4d, and the Archimedean antiprism point symmetries are given. They have been calculated by means of both the perturbative ab initio model and the angular overlap model and are referenced to the recent results fitted by Carnall's group. The equivalency of some different sets of parameters has been verified with the standardization procedure. The adequacy of several idealized approaches has been tested by comparison of the corresponding splitting patterns of the 3H4 ground state. Our results support the parameterization given by Carnall. Furthermore, the parameterization of the crystal field potential and the splitting diagram for the symmetryless uranium ion U(C1) are given. Having at our disposal the crystal field splittings for the two kinds of uranium ions in UF4, U(C2) and U(C1), we calculate the model plots of the paramagnetic susceptibility χ(T) and the magnetic entropy associated with the Schottky anomaly ΔS(T) for UF4.

  19. Aerosol-Cloud-Precipitation Interactions in WRF Model:Sensitivity to Autoconversion Parameterization

    Institute of Scientific and Technical Information of China (English)

    解小宁; 刘晓东

    2015-01-01

Cloud-to-rain autoconversion process is an important player in aerosol loading, cloud morphology, and precipitation variations because it can modulate cloud microphysical characteristics depending on the participation of aerosols, and affects the spatio-temporal distribution and total amount of precipitation. By applying the Kessler, the Khairoutdinov-Kogan (KK), and the Dispersion autoconversion parameterization schemes in a set of sensitivity experiments, the indirect effects of aerosols on clouds and precipitation are investigated for a deep convective cloud system in Beijing under various aerosol concentration backgrounds from 50 to 10,000 cm^-3. Numerical experiments show that aerosol-induced precipitation change is strongly dependent on autoconversion parameterization schemes. For the Kessler scheme, the average cumulative precipitation is enhanced slightly with increasing aerosols, whereas surface precipitation is reduced significantly with increasing aerosols for the KK scheme. Moreover, precipitation varies non-monotonically for the Dispersion scheme, increasing with aerosols at lower concentrations and decreasing at higher concentrations. These different trends of aerosol-induced precipitation change are mainly ascribed to differences in rain water content under these three autoconversion parameterization schemes. Therefore, this study suggests that accurate parameterization of cloud microphysical processes, particularly the cloud-to-rain autoconversion process, is needed for improving the scientific understanding of aerosol-cloud-precipitation interactions.
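The scheme dependence described above can be illustrated with the textbook forms of two of the named schemes: the Kessler rate depends only on cloud water, while the Khairoutdinov-Kogan (KK) rate also falls with droplet number, so higher aerosol (droplet) concentrations suppress rain formation. A minimal sketch; the coefficient values are the commonly cited ones, used here as illustrative assumptions rather than the exact settings of the study:

```python
# Illustrative autoconversion rates dq_r/dt (kg/kg/s).
# Coefficients are commonly cited textbook values, not the paper's settings.

def kessler(qc, k1=1e-3, qc0=5e-4):
    """Kessler: linear above a cloud-water threshold qc0; blind to droplet number."""
    return k1 * max(qc - qc0, 0.0)

def kk2000(qc, nc):
    """Khairoutdinov-Kogan (2000)-style power law in cloud water qc (kg/kg)
    and droplet number nc (cm^-3); the rate decreases as nc rises."""
    return 1350.0 * qc**2.47 * nc**-1.79

qc = 1e-3                        # 1 g/kg of cloud water
clean, polluted = 50.0, 1000.0   # droplet number concentrations, cm^-3
rate_clean = kk2000(qc, clean)
rate_polluted = kk2000(qc, polluted)  # much smaller: aerosols suppress rain
```

This is exactly the contrast the abstract reports: the Kessler rate is unchanged by aerosol loading, while the KK rate drops sharply in the polluted case.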

  20. Gravitational wave tests of general relativity with the parameterized post-Einsteinian framework

    International Nuclear Information System (INIS)

    Cornish, Neil; Sampson, Laura; Yunes, Nicolas; Pretorius, Frans

    2011-01-01

Gravitational wave astronomy has tremendous potential for studying extreme astrophysical phenomena and exploring fundamental physics. The waves produced by binary black hole mergers will provide a pristine environment in which to study strong-field dynamical gravity. Extracting detailed information about these systems requires accurate theoretical models of the gravitational wave signals. If gravity is not described by general relativity, analyses that are based on waveforms derived from Einstein's field equations could result in parameter biases and a loss of detection efficiency. A new class of "parameterized post-Einsteinian" waveforms has been proposed to cover this eventuality. Here, we apply the parameterized post-Einsteinian approach to simulated data from a network of advanced ground-based interferometers and from a future space-based interferometer. Bayesian inference and model selection are used to investigate parameter biases, and to determine the level at which departures from general relativity can be detected. We find that in some cases the parameter biases from assuming the wrong theory can be severe. We also find that gravitational wave observations will beat the existing bounds on deviations from general relativity derived from the orbital decay of binary pulsars by a large margin across a wide swath of parameter space.
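For orientation, the simplest single-term member of the ppE family modifies a general-relativity frequency-domain waveform with one amplitude and one phase correction. This form is quoted from the ppE literature as a sketch, not a formula stated in this abstract:

```latex
\tilde{h}(f) = \tilde{h}_{\mathrm{GR}}(f)\,\bigl(1 + \alpha\, u^{a}\bigr)\, e^{\,i \beta\, u^{b}},
\qquad u = (\pi \mathcal{M} f)^{1/3},
```

where \(\mathcal{M}\) is the chirp mass and \((\alpha, a, \beta, b)\) are the post-Einsteinian parameters; general relativity is recovered for \(\alpha = \beta = 0\), so bounds on \(\alpha\) and \(\beta\) quantify allowed departures from it.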

  1. Finite element analysis of an extended end-plate connection using the T-stub approach

    Energy Technology Data Exchange (ETDEWEB)

    Muresan, Ioana Cristina; Balc, Roxana [Technical University of Cluj-Napoca, Faculty of Civil Engineering. 15 C Daicoviciu Str., 400020, Cluj-Napoca (Romania)

    2015-03-10

    Beam-to-column end-plate bolted connections are usually used as moment-resistant connections in steel framed structures. For this joint type, the deformability is governed by the deformation capacity of the column flange and end-plate under tension and elongation of the bolts. All these elements around the beam tension flange form the tension region of the joint, which can be modeled by means of equivalent T-stubs. In this paper a beam-to-column end-plate bolted connection is substituted with a T-stub of appropriate effective length and it is analyzed using the commercially available finite element software ABAQUS. The performance of the model is validated by comparing the behavior of the T-stub from the numerical simulation with the behavior of the connection as a whole. The moment-rotation curve of the T-stub obtained from the numerical simulation is compared with the behavior of the whole extended end-plate connection, obtained by numerical simulation, experimental tests and analytical approach.

  2. Parameterization of pion production and reaction cross sections at LAMPF energies

    International Nuclear Information System (INIS)

    Burman, R.L.; Smith, E.S.

    1989-05-01

    A parameterization of pion production and reaction cross sections is developed for eventual use in modeling neutrino production by protons in a beam stop. Emphasis is placed upon smooth parameterizations for proton energies up to 800 MeV, for all pion energies and angles, and for a wide range of materials. The resulting representations of the data are well-behaved and can be used for extrapolation to regions where there are no measurements. 22 refs., 16 figs., 2 tabs

  3. Parameterized Shower Simulation in Lelaps: a Comparison with Geant4

    International Nuclear Information System (INIS)

    Langeveld, Willy G.J.

    2003-01-01

    The detector simulation toolkit Lelaps[1] simulates electromagnetic and hadronic showers in calorimetric detector elements of high-energy particle detectors using a parameterization based on the algorithms originally developed by Grindhammer and Peters[2] and Bock et al.[3]. The primary motivations of the present paper are to verify the implementation of the parameterization, to explore regions of energy where the parameterization is valid and to serve as a basis for further improvement of the algorithm. To this end, we compared the Lelaps simulation to a detailed simulation provided by Geant4[4]. A number of different calorimeters, both electromagnetic and hadronic, were implemented in both programs. Longitudinal and radial shower profiles and their fluctuations were obtained from Geant4 over a wide energy range and compared with those obtained from Lelaps. Generally the longitudinal shower profiles are found to be in good agreement in a large part of the energy range, with poorer results at energies below about 300 MeV. Radial profiles agree well in homogeneous detectors, but are somewhat deficient in segmented ones. These deficiencies are discussed

  4. A test harness for accelerating physics parameterization advancements into operations

    Science.gov (United States)

    Firl, G. J.; Bernardet, L.; Harrold, M.; Henderson, J.; Wolff, J.; Zhang, M.

    2017-12-01

The process of transitioning advances in parameterization of sub-grid scale processes from initial idea to implementation is often much quicker than the transition from implementation to use in an operational setting. After all, considerable work must be undertaken by operational centers to fully test, evaluate, and implement new physics. The process is complicated by the scarcity of like-to-like comparisons, availability of HPC resources, and the "tuning problem" whereby advances in physics schemes are difficult to properly evaluate without first undertaking the expensive and time-consuming process of tuning to other schemes within a suite. To address this process shortcoming, the Global Model TestBed (GMTB), supported by the NWS NGGPS project and undertaken by the Developmental Testbed Center, has developed a physics test harness. It implements the concept of hierarchical testing, where the same code can be tested in model configurations of varying complexity from single column models (SCM) to fully coupled, cycled global simulations. Developers and users may choose at which level of complexity to engage. Several components of the physics test harness have been implemented, including a SCM and an end-to-end workflow that expands upon the one used at NOAA/EMC to run the GFS operationally, although the testbed components will necessarily morph to coincide with changes to the operational configuration (FV3-GFS). A standard, relatively user-friendly interface known as the Interoperable Physics Driver (IPD) is available for physics developers to connect their codes. This prerequisite exercise allows access to the testbed tools and removes a technical hurdle for potential inclusion into the Common Community Physics Package (CCPP). The testbed offers users the opportunity to conduct like-to-like comparisons between the operational physics suite and new development as well as among multiple developments. GMTB staff have demonstrated use of the testbed through a

  5. Parameterized and resolved Southern Ocean eddy compensation

    Science.gov (United States)

    Poulsen, Mads B.; Jochum, Markus; Nuterman, Roman

    2018-04-01

    The ability to parameterize Southern Ocean eddy effects in a forced coarse resolution ocean general circulation model is assessed. The transient model response to a suite of different Southern Ocean wind stress forcing perturbations is presented and compared to identical experiments performed with the same model in 0.1° eddy-resolving resolution. With forcing of present-day wind stress magnitude and a thickness diffusivity formulated in terms of the local stratification, it is shown that the Southern Ocean residual meridional overturning circulation in the two models is different in structure and magnitude. It is found that the difference in the upper overturning cell is primarily explained by an overly strong subsurface flow in the parameterized eddy-induced circulation while the difference in the lower cell is mainly ascribed to the mean-flow overturning. With a zonally constant decrease of the zonal wind stress by 50% we show that the absolute decrease in the overturning circulation is insensitive to model resolution, and that the meridional isopycnal slope is relaxed in both models. The agreement between the models is not reproduced by a 50% wind stress increase, where the high resolution overturning decreases by 20%, but increases by 100% in the coarse resolution model. It is demonstrated that this difference is explained by changes in surface buoyancy forcing due to a reduced Antarctic sea ice cover, which strongly modulate the overturning response and ocean stratification. We conclude that the parameterized eddies are able to mimic the transient response to altered wind stress in the high resolution model, but partly misrepresent the unperturbed Southern Ocean meridional overturning circulation and associated heat transports.

  6. A QCQP Approach for OPF in Multiphase Radial Networks with Wye and Delta Connections: Preprint

    Energy Technology Data Exchange (ETDEWEB)

Zamzam, Ahmed S.; Zhao, Changhong; Dall'Anese, Emiliano; Sidiropoulos, Nicholas D.

    2017-06-27

This paper examines the AC Optimal Power Flow (OPF) problem for multiphase distribution networks featuring renewable energy resources (RESs). We start by outlining a power flow model for radial multiphase systems that accommodates wye-connected and delta-connected RESs and non-controllable energy assets. We then formalize an AC OPF problem that accounts for both types of connections. Similar to various AC OPF renditions, the resultant problem is a nonconvex quadratically-constrained quadratic program. However, the so-called Feasible Point Pursuit-Successive Convex Approximation algorithm is leveraged to obtain a feasible and yet locally-optimal solution. The merits of the proposed solution approach are demonstrated using two unbalanced multiphase distribution feeders with both wye and delta connections.

  7. Parameterizing correlations between hydrometeor species in mixed-phase Arctic clouds

    Science.gov (United States)

    Larson, Vincent E.; Nielsen, Brandon J.; Fan, Jiwen; Ovchinnikov, Mikhail

    2011-01-01

    Mixed-phase Arctic clouds, like other clouds, contain small-scale variability in hydrometeor fields, such as cloud water or snow mixing ratio. This variability may be worth parameterizing in coarse-resolution numerical models. In particular, for modeling multispecies processes such as accretion and aggregation, it would be useful to parameterize subgrid correlations among hydrometeor species. However, one difficulty is that there exist many hydrometeor species and many microphysical processes, leading to complexity and computational expense. Existing lower and upper bounds on linear correlation coefficients are too loose to serve directly as a method to predict subgrid correlations. Therefore, this paper proposes an alternative method that begins with the spherical parameterization framework of Pinheiro and Bates (1996), which expresses the correlation matrix in terms of its Cholesky factorization. The values of the elements of the Cholesky matrix are populated here using a "cSigma" parameterization that we introduce based on the aforementioned bounds on correlations. The method has three advantages: (1) the computational expense is tolerable; (2) the correlations are, by construction, guaranteed to be consistent with each other; and (3) the methodology is fairly general and hence may be applicable to other problems. The method is tested noninteractively using simulations of three Arctic mixed-phase cloud cases from two field experiments: the Indirect and Semi-Direct Aerosol Campaign and the Mixed-Phase Arctic Cloud Experiment. Benchmark simulations are performed using a large-eddy simulation (LES) model that includes a bin microphysical scheme. The correlations estimated by the new method satisfactorily approximate the correlations produced by the LES.
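The Cholesky-based construction referenced above (Pinheiro and Bates 1996) can be sketched generically: angles parameterize the rows of a lower-triangular factor L with unit-norm rows, so C = L Lᵀ is a valid correlation matrix for any angle values. This illustrates the spherical parameterization itself, not the paper's cSigma scheme for populating the angles:

```python
import numpy as np

def correlation_from_angles(theta):
    """Build an n x n correlation matrix from n*(n-1)/2 angles in (0, pi)
    via the spherical (Cholesky) parameterization of Pinheiro & Bates (1996)."""
    m = len(theta)
    n = int((1 + (1 + 8 * m) ** 0.5) / 2)  # recover n from the angle count
    L = np.zeros((n, n))
    L[0, 0] = 1.0
    k = 0
    for i in range(1, n):
        s = 1.0                      # running product of sines keeps rows unit-norm
        for j in range(i):
            L[i, j] = np.cos(theta[k]) * s
            s *= np.sin(theta[k])
            k += 1
        L[i, i] = s
    return L @ L.T                   # unit diagonal, positive semidefinite

C = correlation_from_angles([0.3, 1.2, 2.0])  # a 3x3 example
```

By construction the correlations are mutually consistent for any angles, which is the advantage (2) the abstract highlights.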

  8. Robust parameterization of elastic and absorptive electron atomic scattering factors

    International Nuclear Information System (INIS)

    Peng, L.M.; Ren, G.; Dudarev, S.L.; Whelan, M.J.

    1996-01-01

A robust algorithm and computer program have been developed for the parameterization of elastic and absorptive electron atomic scattering factors. The algorithm is based on a combined modified simulated-annealing and least-squares method, and the computer program works well for fitting both elastic and absorptive atomic scattering factors with five Gaussians. As an application of this program, the elastic electron atomic scattering factors have been parameterized for all neutral atoms and for s up to 6 Å^-1. Error analysis shows that the present results are considerably more accurate than the previous analytical fits in terms of the mean square value of the deviation between the numerical and fitted scattering factors. Parameterization for absorptive atomic scattering factors has been made for 17 important materials with the zinc blende structure over the temperature range 1 to 1000 K, where appropriate, and for temperature ranges for which accurate Debye-Waller factors are available. For other materials, the parameterization of the absorptive electron atomic scattering factors can be made using the program by supplying the atomic number of the element, the Debye-Waller factor and the acceleration voltage. For ions or when more accurate numerical results for neutral atoms are available, the program can read in the numerical values of the elastic scattering factors and return the parameters for both the elastic and absorptive scattering factors. The computer routines developed have been tested both on computer workstations and desktop PC computers, and will be made freely available via electronic mail or on floppy disk upon request. (orig.)
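The five-Gaussian form in question is f(s) = Σᵢ aᵢ exp(-bᵢ s²). Fitting all ten parameters requires nonlinear optimization (the paper's simulated annealing plus least squares); as a simpler sketch, with the exponents bᵢ held fixed the amplitudes aᵢ enter linearly and follow from an ordinary least-squares solve. All numbers below are synthetic placeholders, not tabulated scattering factors:

```python
import numpy as np

def f_fit(s, a, b):
    """Five-Gaussian parameterization f(s) = sum_i a_i exp(-b_i s^2)."""
    s = np.asarray(s)[:, None]
    return (a * np.exp(-b * s**2)).sum(axis=1)

# Synthetic stand-in for a numerically tabulated scattering factor.
a_true = np.array([0.5, 1.0, 1.5, 0.8, 0.2])
b_true = np.array([0.1, 0.5, 2.0, 8.0, 30.0])
s = np.linspace(0.0, 6.0, 200)            # s up to 6 (inverse Angstroms)
f_num = f_fit(s, a_true, b_true)

# With b_i fixed, the model is linear in a_i: solve a 200 x 5 least-squares problem.
design = np.exp(-b_true * s[:, None]**2)
a_est, *_ = np.linalg.lstsq(design, f_num, rcond=None)
```

The paper's contribution is the robust joint fit of both aᵢ and bᵢ; this linear sub-step is what the "least-squares" half of its combined method solves at each stage.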

  9. Improvement in the Modeled Representation of North American Monsoon Precipitation Using a Modified Kain–Fritsch Convective Parameterization Scheme

    KAUST Repository

    Luong, Thang

    2018-01-22

A commonly noted problem in the simulation of warm season convection in the North American monsoon region has been the inability of atmospheric models at the meso-β scales (10s to 100s of kilometers) to simulate organized convection, principally mesoscale convective systems. With the use of convective parameterization, high precipitation biases in model simulations are typically observed over the peaks of mountain ranges. To address this issue, the Kain-Fritsch (KF) cumulus parameterization scheme has been modified with new diagnostic equations to compute the updraft velocity, the convective available potential energy closure assumption, and the convective trigger function. The scheme has been adapted for use in the Weather Research and Forecasting (WRF) model. A numerical weather prediction-type simulation is conducted for the North American Monsoon Experiment Intensive Observing Period 2, and a regional climate simulation is performed by dynamical downscaling. In both of these applications, there are notable improvements in the WRF model-simulated precipitation due to the better representation of organized, propagating convection. The use of the modified KF scheme for atmospheric model simulations may provide a more computationally economical alternative to improve the representation of organized convection, as compared to convective-permitting simulations at the kilometer scale or a super-parameterization approach.

  10. Toward Rigorous Parameterization of Underconstrained Neural Network Models Through Interactive Visualization and Steering of Connectivity Generation

    Directory of Open Access Journals (Sweden)

    Christian Nowke

    2018-06-01

Simulation models in many scientific fields can have non-unique solutions or unique solutions which can be difficult to find. Moreover, in evolving systems, unique final state solutions can be reached by multiple different trajectories. Neuroscience is no exception. Often, neural network models are subject to parameter fitting to obtain desirable output comparable to experimental data. Parameter fitting without sufficient constraints and a systematic exploration of the possible solution space can lead to conclusions valid only around local minima or around non-minima. To address this issue, we have developed an interactive tool for visualizing and steering parameters in neural network simulation models. In this work, we focus particularly on connectivity generation, since finding suitable connectivity configurations for neural network models constitutes a complex parameter search scenario. The development of the tool has been guided by several use cases: the tool allows researchers to steer the parameters of the connectivity generation during the simulation, thus quickly growing networks composed of multiple populations with a targeted mean activity. The flexibility of the software allows scientists to explore other connectivity and neuron variables apart from the ones presented as use cases. With this tool, we enable an interactive exploration of parameter spaces and a better understanding of neural network models, and grapple with the crucial problem of non-unique network solutions and trajectories. In addition, we observe a reduction in turnaround times for the assessment of these models, due to interactive visualization while the simulation is computed.

  11. Statistical dynamical subgrid-scale parameterizations for geophysical flows

    International Nuclear Information System (INIS)

    O'Kane, T J; Frederiksen, J S

    2008-01-01

    Simulations of both atmospheric and oceanic circulations at given finite resolutions are strongly dependent on the form and strengths of the dynamical subgrid-scale parameterizations (SSPs) and in particular are sensitive to subgrid-scale transient eddies interacting with the retained scale topography and the mean flow. In this paper, we present numerical results for SSPs of the eddy-topographic force, stochastic backscatter, eddy viscosity and eddy-mean field interaction using an inhomogeneous statistical turbulence model based on a quasi-diagonal direct interaction approximation (QDIA). Although the theoretical description on which our model is based is for general barotropic flows, we specifically focus on global atmospheric flows where large-scale Rossby waves are present. We compare and contrast the closure-based results with an important earlier heuristic SSP of the eddy-topographic force, based on maximum entropy or statistical canonical equilibrium arguments, developed specifically for general ocean circulation models (Holloway 1992 J. Phys. Oceanogr. 22 1033-46). Our results demonstrate that where strong zonal flows and Rossby waves are present, such as in the atmosphere, maximum entropy arguments are insufficient to accurately parameterize the subgrid contributions due to eddy-eddy, eddy-topographic and eddy-mean field interactions. We contrast our atmospheric results with findings for the oceans. Our study identifies subgrid-scale interactions that are currently not parameterized in numerical atmospheric climate models, which may lead to systematic defects in the simulated circulations.

  12. Elastic full-waveform inversion and parameterization analysis applied to walk-away vertical seismic profile data for unconventional (heavy oil) reservoir characterization

    Science.gov (United States)

    Pan, Wenyong; Innanen, Kristopher A.; Geng, Yu

    2018-03-01

Seismic full-waveform inversion (FWI) methods hold strong potential to recover multiple subsurface elastic properties for hydrocarbon reservoir characterization. Simultaneously updating multiple physical parameters introduces the problem of interparameter tradeoff, arising from the covariance between different physical parameters, which increases the nonlinearity and uncertainty of multiparameter FWI. The coupling effects of different physical parameters are significantly influenced by model parameterization and acquisition arrangement. An appropriate choice of model parameterization is critical to successful field data applications of multiparameter FWI. The objective of this paper is to examine the performance of various model parameterizations in isotropic-elastic FWI with a walk-away vertical seismic profile (W-VSP) dataset for unconventional heavy oil reservoir characterization. Six model parameterizations are considered: velocity-density (α, β and ρ'), modulus-density (κ, μ and ρ), Lamé-density (λ, μ' and ρ'''), impedance-density (IP, IS and ρ''), velocity-impedance-I (α', β' and IP'), and velocity-impedance-II (α'', β'' and IS'). We begin analyzing the interparameter tradeoff by making use of scattering radiation patterns, which is a common strategy for qualitative parameter resolution analysis. In this paper, we discuss the advantages and limitations of the scattering radiation patterns and recommend that interparameter tradeoffs be evaluated using interparameter contamination kernels, which provide quantitative, second-order measurements of the interparameter contaminations and can be constructed efficiently with an adjoint-state approach. Synthetic W-VSP isotropic-elastic FWI experiments in the time domain verify our conclusions about interparameter tradeoffs for various model parameterizations. Density profiles are most strongly influenced by the interparameter contaminations; depending on model parameterization, the inverted density

  13. Firefly Algorithm for Polynomial Bézier Surface Parameterization

    Directory of Open Access Journals (Sweden)

    Akemi Gálvez

    2013-01-01

    reality, medical imaging, computer graphics, computer animation, and many others. Very often, the preferred approximating surface is polynomial, usually described in parametric form. This leads to the problem of determining suitable parametric values for the data points, the so-called surface parameterization. In real-world settings, data points are generally irregularly sampled and subjected to measurement noise, leading to a very difficult nonlinear continuous optimization problem, unsolvable with standard optimization techniques. This paper solves the parameterization problem for polynomial Bézier surfaces by applying the firefly algorithm, a powerful nature-inspired metaheuristic algorithm introduced recently to address difficult optimization problems. The method has been successfully applied to some illustrative examples of open and closed surfaces, including shapes with singularities. Our results show that the method performs very well, being able to yield the best approximating surface with a high degree of accuracy.
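The metaheuristic named above can be sketched generically: each candidate parameter vector is a "firefly" whose brightness is the inverse of the fitting error; dimmer fireflies move toward brighter ones with an attractiveness that decays with distance, plus a small random walk. This is a minimal illustration on a toy objective, not the paper's Bézier-surface fitting setup; all parameter values are illustrative assumptions:

```python
import numpy as np

def firefly_minimize(f, dim, n=15, iters=60, alpha=0.2, beta0=1.0, gamma=1.0, seed=0):
    """Minimal firefly algorithm: each firefly moves toward every brighter one."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(-2.0, 2.0, size=(n, dim))        # firefly positions
    for _ in range(iters):
        cost = np.array([f(xi) for xi in x])          # brightness = -cost
        for i in range(n):
            for j in range(n):
                if cost[j] < cost[i]:                 # j is brighter than i
                    r2 = np.sum((x[i] - x[j]) ** 2)
                    beta = beta0 * np.exp(-gamma * r2)  # attractiveness decays with distance
                    x[i] += beta * (x[j] - x[i]) + alpha * (rng.random(dim) - 0.5)
        alpha *= 0.97                                 # gradually cool the random walk
    cost = np.array([f(xi) for xi in x])
    return x[cost.argmin()], cost.min()

# Toy objective: in the paper this would be the Bezier fitting error
# over the candidate parameter values.
best_x, best_f = firefly_minimize(lambda v: np.sum(v ** 2), dim=2)
```

In the surface-parameterization setting, each firefly encodes candidate parametric values for the data points, and the cost is the least-squares error of the resulting Bézier fit.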

  14. Cumulus parameterizations in chemical transport models

    Science.gov (United States)

    Mahowald, Natalie M.; Rasch, Philip J.; Prinn, Ronald G.

    1995-12-01

    Global three-dimensional chemical transport models (CTMs) are valuable tools for studying processes controlling the distribution of trace constituents in the atmosphere. A major uncertainty in these models is the subgrid-scale parametrization of transport by cumulus convection. This study seeks to define the range of behavior of moist convective schemes and point toward more reliable formulations for inclusion in chemical transport models. The emphasis is on deriving convective transport from meteorological data sets (such as those from the forecast centers) which do not routinely include convective mass fluxes. Seven moist convective parameterizations are compared in a column model to examine the sensitivity of the vertical profile of trace gases to the parameterization used in a global chemical transport model. The moist convective schemes examined are the Emanuel scheme [Emanuel, 1991], the Feichter-Crutzen scheme [Feichter and Crutzen, 1990], the inverse thermodynamic scheme (described in this paper), two versions of a scheme suggested by Hack [Hack, 1994], and two versions of a scheme suggested by Tiedtke (one following the formulation used in the ECMWF (European Centre for Medium-Range Weather Forecasting) and ECHAM3 (European Centre and Hamburg Max-Planck-Institut) models [Tiedtke, 1989], and one formulated as in the TM2 (Transport Model-2) model (M. Heimann, personal communication, 1992). These convective schemes vary in the closure used to derive the mass fluxes, as well as the cloud model formulation, giving a broad range of results. In addition, two boundary layer schemes are compared: a state-of-the-art nonlocal boundary layer scheme [Holtslag and Boville, 1993] and a simple adiabatic mixing scheme described in this paper. Three tests are used to compare the moist convective schemes against observations. Although the tests conducted here cannot conclusively show that one parameterization is better than the others, the tests are a good measure of the

  15. Constraining Unsaturated Hydraulic Parameters Using the Latin Hypercube Sampling Method and Coupled Hydrogeophysical Approach

    Science.gov (United States)

    Farzamian, Mohammad; Monteiro Santos, Fernando A.; Khalil, Mohamed A.

    2017-12-01

The coupled hydrogeophysical approach has proved to be a valuable tool for improving the use of geoelectrical data for hydrological model parameterization. In the coupled approach, hydrological parameters are directly inferred from geoelectrical measurements in a forward manner to eliminate the uncertainty connected to the independent inversion of electrical resistivity data. Several numerical studies have been conducted to demonstrate the advantages of a coupled approach; however, only a few attempts have been made to apply the coupled approach to actual field data. In this study, we developed a 1D coupled hydrogeophysical code to estimate the van Genuchten-Mualem model parameters Ks, n, θr and α from time-lapse vertical electrical sounding data collected during a constant inflow infiltration experiment. van Genuchten-Mualem parameters were sampled using the Latin hypercube sampling method to provide full coverage of the range of each parameter from their distributions. By applying the coupled approach, vertical electrical sounding data were coupled to hydrological models inferred from van Genuchten-Mualem parameter samples to investigate the feasibility of constraining the hydrological model. The key approaches taken in the study are to (1) integrate electrical resistivity and hydrological data while avoiding data inversion, (2) estimate the total water mass recovery of electrical resistivity data and consider it in the evaluation of the van Genuchten-Mualem parameters, and (3) correct for the influence of subsurface temperature fluctuations on electrical resistivity data during the infiltration experiment. The results of the study revealed that the coupled hydrogeophysical approach can improve the value of geophysical measurements in hydrological model parameterization. However, the approach cannot overcome the technical limitations of the geoelectrical method associated with resolution and water mass recovery.
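Latin hypercube sampling, as used above to cover the van Genuchten-Mualem parameter ranges, divides each parameter's range into N equal-probability strata and draws exactly one sample per stratum, pairing strata randomly across parameters. A generic sketch; the parameter bounds below are hypothetical placeholders, not values from the study:

```python
import numpy as np

def latin_hypercube(n_samples, bounds, seed=0):
    """One sample per stratum per dimension; strata shuffled independently."""
    rng = np.random.default_rng(seed)
    d = len(bounds)
    # Each column gets the strata 0..n-1 in random order, jittered within a stratum.
    strata = rng.permuted(np.tile(np.arange(n_samples), (d, 1)), axis=1).T
    u = (strata + rng.random((n_samples, d))) / n_samples   # values in (0, 1)
    lo = np.array([b[0] for b in bounds])
    hi = np.array([b[1] for b in bounds])
    return lo + u * (hi - lo)

# Hypothetical bounds for (Ks [m/s], n [-], theta_r [-], alpha [1/m]):
bounds = [(1e-6, 1e-4), (1.1, 3.0), (0.0, 0.1), (0.5, 5.0)]
samples = latin_hypercube(20, bounds)
```

Unlike plain Monte Carlo sampling, every marginal stratum of every parameter is hit exactly once, which is what gives the "full coverage of the range of each parameter" the abstract describes.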

  16. IR OPTICS MEASUREMENT WITH LINEAR COUPLING'S ACTION-ANGLE PARAMETERIZATION

    International Nuclear Information System (INIS)

    LUO, Y.; BAI, M.; PILAT, R.; SATOGATA, T.; TRBOJEVIC, D.

    2005-01-01

A parameterization of linear coupling in action-angle coordinates is convenient for analytical calculations and interpretation of turn-by-turn (TBT) beam position monitor (BPM) data. We demonstrate how to use this parameterization to extract the Twiss and coupling parameters in interaction regions (IRs), using BPMs on each side of the long IR drift region. Example TBT BPM data were acquired at the Relativistic Heavy Ion Collider (RHIC), using an AC dipole to excite a single eigenmode. Besides the full treatment, a fast estimate of beta*, the beta function at the interaction point (IP), is provided, along with the phase advance between these BPMs. We also calculate and measure the waist of the beta function and the local optics.
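For context on the fast beta* estimate: in the field-free IR drift, the beta function grows quadratically away from its waist, so a BPM reading at a known distance from the IP constrains beta* directly. The sketch below shows only this standard drift-space relation (generic uncoupled optics, not the paper's full coupled action-angle treatment); the numbers are illustrative:

```python
import math

def beta_in_drift(s, beta_star, s_waist=0.0):
    """Beta function in a drift: quadratic growth away from the waist."""
    return beta_star + (s - s_waist) ** 2 / beta_star

# With beta* = 1.0 m at the IP (s = 0), BPMs at +/-8.5 m see equal, large beta:
b_left = beta_in_drift(-8.5, 1.0)
b_right = beta_in_drift(+8.5, 1.0)

def beta_star_from_bpm(beta_bpm, s_bpm):
    """Invert beta_bpm = beta* + s^2/beta* for beta* (taking the focused root)."""
    return (beta_bpm - math.sqrt(beta_bpm**2 - 4 * s_bpm**2)) / 2
```

The asymmetry between the two BPM readings additionally locates the waist when it is displaced from the nominal IP, which is the waist measurement the abstract mentions.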

  17. Evaluating parameterizations of aerodynamic resistance to heat transfer using field measurements

    Directory of Open Access Journals (Sweden)

    Shaomin Liu

    2007-01-01

Parameterizations of aerodynamic resistance to heat and water transfer have a significant impact on the accuracy of models of land-atmosphere interactions and of surface fluxes estimated using spectro-radiometric data collected from aircraft and satellites. We have used measurements from an eddy correlation system to derive the aerodynamic resistance to heat transfer over a bare soil surface as well as over a maize canopy. Diurnal variations of aerodynamic resistance have been analyzed. The results showed that the diurnal variation of aerodynamic resistance during daytime (07:00 h-18:00 h) was significant for both the bare soil surface and the maize canopy, although the range of variation was limited. Based on the measurements made by the eddy correlation system, a comprehensive evaluation of eight popularly used parameterization schemes of aerodynamic resistance was carried out. The roughness length for heat transfer is a crucial parameter in the estimation of aerodynamic resistance to heat transfer and can neither be taken as a constant nor be neglected. Compared with the measurements, the parameterizations by Choudhury et al. (1986), Viney (1991), Yang et al. (2001) and the modified forms of Verma et al. (1976) and Mahrt and Ek (1984), by inclusion of the roughness length for heat transfer, gave good agreement with the measurements, while the parameterizations by Hatfield et al. (1983) and Xie (1988) showed larger errors even though the roughness length for heat transfer had been taken into account.
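Most schemes of this kind are variants of the log-profile bulk formula; in the neutral limit the aerodynamic resistance to heat transfer is r_ah = ln((z-d)/z0m) * ln((z-d)/z0h) / (k^2 u). A sketch of this neutral-limit form only (the stability corrections on which the cited schemes differ are omitted, and the canopy numbers are illustrative assumptions):

```python
import math

VON_KARMAN = 0.41  # von Karman constant

def r_ah_neutral(u, z, d, z0m, z0h):
    """Neutral-stability aerodynamic resistance to heat transfer (s/m).
    u: wind speed at measurement height z; d: displacement height;
    z0m, z0h: roughness lengths for momentum and heat."""
    return (math.log((z - d) / z0m) * math.log((z - d) / z0h)
            / (VON_KARMAN ** 2 * u))

# Illustrative maize-canopy numbers. Setting z0h = z0m (i.e. ignoring a
# separate roughness length for heat) lowers the computed resistance,
# which is why the abstract stresses that z0h cannot be neglected.
r_with_z0h = r_ah_neutral(u=3.0, z=10.0, d=1.3, z0m=0.2, z0h=0.02)
r_without = r_ah_neutral(u=3.0, z=10.0, d=1.3, z0m=0.2, z0h=0.2)
```

Because z0h is typically an order of magnitude smaller than z0m, the two choices differ substantially, matching the evaluation's finding that schemes including z0h agreed better with the eddy-correlation measurements.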

  18. A subgrid parameterization scheme for precipitation

    Directory of Open Access Journals (Sweden)

    S. Turner

    2012-04-01

With increasing computing power, the horizontal resolution of numerical weather prediction (NWP) models is improving and today reaches 1 to 5 km. Nevertheless, cloud and precipitation formation are still subgrid-scale processes for most cloud types, such as cumulus and stratocumulus. Subgrid-scale parameterizations for water vapor condensation have been in use for many years and are based on a prescribed probability density function (PDF) of relative humidity spatial variability within the model grid box, thus providing a diagnosis of the cloud fraction. A similar scheme is developed and tested here. It is based on a prescribed PDF of cloud water variability and a threshold value of liquid water content for droplet collection to derive a rain fraction within the model grid. Precipitation of rainwater raises additional concerns relative to the overlap of cloud and rain fractions, however. The scheme is developed following an analysis of data collected during field campaigns in stratocumulus (DYCOMS-II) and fair weather cumulus (RICO) and tested in a 1-D framework against large eddy simulations of these observed cases. The new parameterization is then implemented in a 3-D NWP model with a horizontal resolution of 2.5 km to simulate real cases of precipitating cloud systems over France.
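The core diagnostic of such a scheme can be sketched as follows: given an assumed subgrid PDF of cloud water and a collection threshold, the rain fraction is the probability mass above the threshold. The Gaussian PDF and the numbers below are placeholder assumptions for illustration; the paper prescribes its PDF from the campaign data:

```python
import math

def rain_fraction(qc_mean, qc_std, qc_crit):
    """Fraction of the grid box where cloud water exceeds the droplet-collection
    threshold, assuming a Gaussian subgrid PDF of cloud water (an assumption
    made here for illustration)."""
    z = (qc_crit - qc_mean) / qc_std
    return 0.5 * math.erfc(z / math.sqrt(2.0))

# Grid-box mean 0.3 g/kg, subgrid std 0.2 g/kg, collection threshold 0.5 g/kg:
frac = rain_fraction(0.3, 0.2, 0.5)   # rain is active over part of the box
```

The same integral with a relative-humidity PDF and a saturation threshold is the classical cloud-fraction diagnosis the abstract refers to; the new scheme applies the idea to rain, with the extra complication of cloud/rain overlap.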

  19. A parameterization scheme for the x-ray linear attenuation coefficient and energy absorption coefficient.

    Science.gov (United States)

    Midgley, S M

    2004-01-21

    A novel parameterization of x-ray interaction cross-sections is developed, and employed to describe the x-ray linear attenuation coefficient and mass energy absorption coefficient for both elements and mixtures. The new parameterization scheme addresses the Z-dependence of elemental cross-sections (per electron) using a simple function of atomic number, Z. This obviates the need for a complicated mathematical formalism. Energy dependent coefficients describe the Z-direction curvature of the cross-sections. The composition dependent quantities are the electron density and statistical moments describing the elemental distribution. We show that it is possible to describe elemental cross-sections for the entire periodic table and at energies above the K-edge (from 6 keV to 125 MeV), with an accuracy of better than 2% using a parameterization containing not more than five coefficients. For the biologically important elements 1 coefficients. At higher energies, the parameterization uses fewer coefficients with only two coefficients needed at megavoltage energies.

  20. Lumped Mass Modeling for Local-Mode-Suppressed Element Connectivity

    DEFF Research Database (Denmark)

    Joung, Young Soo; Yoon, Gil Ho; Kim, Yoon Young

    2005-01-01

    connectivity parameterization (ECP) is employed. On the way to the ultimate crashworthy structure optimization, we are now developing a local mode-free topology optimization formulation that can be implemented in the ECP method. In fact, the local mode-freeing strategy developed here can also be used directly...... experiencing large structural changes, still appears to be poor. In ECP, the nodes of the domain-discretizing elements are connected by zero-length one-dimensional elastic links having varying stiffness. For computational efficiency, every elastic link is now assumed to have two lumped masses at its ends....... Choosing appropriate penalization functions for lumped mass and link stiffness is important for local mode-free results. However, unless the objective and constraint functions are carefully selected, it is difficult to obtain clear black-and-white results. It is shown that the present formulation is also...

  1. Connected Green function approach to symmetry breaking in Φ1+14-theory

    International Nuclear Information System (INIS)

    Haeuser, J.M.; Cassing, W.; Peter, A.; Thoma, M.H.

    1995-01-01

    Using the cluster expansions for n-point Green functions we derive a closed set of dynamical equations of motion for connected equal-time Green functions by neglecting all connected functions higher than 4th order for the λΦ⁴ theory in 1+1 dimensions. We apply the equations to the investigation of spontaneous symmetry breaking, i.e. to the evaluation of the effective potential at temperature T=0. Within our momentum space discretization we obtain a second order phase transition (in agreement with the Simon-Griffiths theorem) and a critical coupling of λ_crit/4m² = 2.446, as compared to a first order phase transition and λ_crit/4m² = 2.568 from the Gaussian effective potential approach. (orig.)

  2. Elastic FWI for VTI media: A synthetic parameterization study

    KAUST Repository

    Kamath, Nishant

    2016-09-06

    A major challenge for multiparameter full-waveform inversion (FWI) is the inherent trade-offs (or cross-talk) between model parameters. Here, we perform FWI of multicomponent data generated for a synthetic VTI (transversely isotropic with a vertical symmetry axis) model based on a geologic section of the Valhall field. A horizontal displacement source, which excites intensive shear waves in the conventional offset range, helps provide more accurate updates to the SV-wave vertical velocity. We test three model parameterizations, which exhibit different radiation patterns and, therefore, create different parameter trade-offs. The results show that the choice of parameterization for FWI depends on the availability of long-offset data, the quality of the initial model for the anisotropy coefficients, and the parameter that needs to be resolved with the highest accuracy.

  3. Complementary Network-Based Approaches for Exploring Genetic Structure and Functional Connectivity in Two Vulnerable, Endemic Ground Squirrels

    Directory of Open Access Journals (Sweden)

    Victoria H. Zero

    2017-06-01

    Full Text Available The persistence of small populations is influenced by genetic structure and functional connectivity. We used two network-based approaches to understand the persistence of the northern Idaho ground squirrel (Urocitellus brunneus) and the southern Idaho ground squirrel (U. endemicus), two congeners of conservation concern. These graph theoretic approaches are conventionally applied to social or transportation networks, but here are used to study population persistence and connectivity. Population graph analyses revealed that local extinction rapidly reduced connectivity for the southern species, while connectivity for the northern species could be maintained following local extinction. Results from gravity models complemented those of population graph analyses, and indicated that potential vegetation productivity and topography drove connectivity in the northern species. For the southern species, development (roads) and small-scale topography reduced connectivity, while greater potential vegetation productivity increased connectivity. Taken together, the results of the two network-based methods (population graph analyses and gravity models) suggest the need for increased conservation action for the southern species, and indicate that management efforts have been effective at maintaining habitat quality throughout the current range of the northern species. To prevent further declines, we encourage the continuation of management efforts for the northern species, whereas conservation of the southern species requires active management and additional measures to curtail habitat fragmentation. Our combination of population graph analyses and gravity models can inform conservation strategies for other species exhibiting patchy distributions.
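At its core, a population-graph analysis of the kind described above asks how graph connectivity changes when patches go locally extinct. A toy sketch with hypothetical patch networks (not the squirrel data):

```python
from collections import deque

def components(adj, removed=frozenset()):
    """Count connected components after simulating local extinction
    (removal of the patches in `removed`). Nodes are habitat patches,
    edges are dispersal links; the graphs below are illustrative only."""
    nodes = [n for n in adj if n not in removed]
    seen, count = set(), 0
    for start in nodes:
        if start in seen:
            continue
        count += 1
        queue = deque([start])
        seen.add(start)
        while queue:
            u = queue.popleft()
            for v in adj[u]:
                if v not in removed and v not in seen:
                    seen.add(v)
                    queue.append(v)
    return count

# A hub-dependent network: losing the hub fragments it (southern-species-like).
hub = {"A": ["H"], "B": ["H"], "C": ["H"], "H": ["A", "B", "C"]}
# A redundant ring: any single loss leaves it connected (northern-species-like).
ring = {"A": ["B", "D"], "B": ["A", "C"], "C": ["B", "D"], "D": ["C", "A"]}
```

The contrast between the two toy graphs mirrors the abstract's finding: whether local extinction rapidly reduces connectivity depends on the redundancy of dispersal links, not just their number.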

  4. Impact of cloud microphysics and cumulus parameterization on ...

    Indian Academy of Sciences (India)

    2007-10-09

    Oct 9, 2007 ... Bangladesh. Weather Research and Forecast (WRF–ARW version) modelling system with six dif- ... tem intensified rapidly into a land depression over southern part of ... Impact of cloud microphysics and cumulus parameterization on heavy rainfall ... tent and temperature and is represented as a sum.

  5. The urban land use in the COSMO-CLM model: a comparison of three parameterizations for Berlin

    Directory of Open Access Journals (Sweden)

    Kristina Trusilova

    2016-05-01

    Full Text Available The regional non-hydrostatic climate model COSMO-CLM is increasingly being used on fine spatial scales of 1–5 km. Such applications require a detailed differentiation between the parameterization for natural and urban land uses. Since 2010, three parameterizations for urban land use have been incorporated into COSMO-CLM. These parameterizations vary in their complexity, required city parameters and their computational cost. We perform model simulations with the COSMO-CLM coupled to these three parameterizations for urban land in the same model domain of Berlin on a 1-km grid and compare results with available temperature observations. While all models capture the urban heat island, they differ in spatial detail, magnitude and the diurnal variation.

  6. Parameterizing the competition between homogeneous and heterogeneous freezing in cirrus cloud formation – monodisperse ice nuclei

    Directory of Open Access Journals (Sweden)

    D. Barahona

    2009-01-01

    Full Text Available We present a parameterization of cirrus cloud formation that computes the ice crystal number and size distribution under the presence of homogeneous and heterogeneous freezing. The parameterization is very simple to apply and is derived from the analytical solution of the cloud parcel equations, assuming that the ice nuclei population is monodisperse and chemically homogeneous. In addition to the ice distribution, an analytical expression is provided for the limiting ice nuclei number concentration that suppresses ice formation from homogeneous freezing. The parameterization is evaluated against a detailed numerical parcel model, and reproduces numerical simulations over a wide range of conditions with an average error of 6±33%. The parameterization also compares favorably against other formulations that require some form of numerical integration.

  7. Development of a cloud microphysical model and parameterizations to describe the effect of CCN on warm cloud

    Directory of Open Access Journals (Sweden)

    N. Kuba

    2006-01-01

    Full Text Available First, a hybrid cloud microphysical model was developed that incorporates both Lagrangian and Eulerian frameworks to study quantitatively the effect of cloud condensation nuclei (CCN) on the precipitation of warm clouds. A parcel model and a grid model comprise the cloud model. The condensation growth of CCN in each parcel is estimated in a Lagrangian framework. Changes in cloud droplet size distribution arising from condensation and coalescence are calculated on grid points using a two-moment bin method in a semi-Lagrangian framework. Sedimentation and advection are estimated in the Eulerian framework between grid points. Results from the cloud model show that an increase in the number of CCN affects both the amount and the area of precipitation. Additionally, results from the hybrid microphysical model and Kessler's parameterization were compared. Second, new parameterizations were developed that estimate the number and size distribution of cloud droplets given the updraft velocity and the number of CCN. The parameterizations were derived from the results of numerous numerical experiments that used the cloud microphysical parcel model. The only CCN input these parameterizations require is a few values of the CCN spectrum (as given, for example, by a CCN counter). This is more convenient than conventional parameterizations, which need quantities characterizing the CCN spectrum, such as C and k in the equation N = C·S^k, or its breadth, total number and median radius. The new parameterizations' predictions of initial cloud droplet size distribution for the bin method were verified by using the aforesaid hybrid microphysical model. The newly developed parameterizations will save computing time, and can effectively approximate components of cloud microphysics in a non-hydrostatic cloud model.
The parameterizations are useful not only in the bin method in the regional cloud-resolving model but also both for a two-moment bulk microphysical model and
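The conventional CCN input mentioned above — C and k in N = C·S^k (Twomey's activation spectrum) — is simple to evaluate directly; the parameter values below are illustrative, not fitted to any campaign:

```python
def ccn_activated(S, C=100.0, k=0.7):
    """Twomey activation spectrum N = C * S**k: number of activated CCN
    (cm^-3) at supersaturation S (%). C and k are illustrative values."""
    return C * S ** k

n_at_1pct = ccn_activated(1.0)   # at S = 1 %, N equals C by construction
```

The parameterizations in the abstract sidestep fitting C and k by taking a few measured points of the spectrum directly, which is what makes them convenient to drive from CCN-counter output.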

  8. A scheme for parameterizing ice cloud water content in general circulation models

    Science.gov (United States)

    Heymsfield, Andrew J.; Donner, Leo J.

    1989-01-01

    A method for specifying ice water content in GCMs is developed, based on theory and in-cloud measurements. A theoretical development of the conceptual precipitation model is given and the aircraft flights used to characterize the ice mass distribution in deep ice clouds is discussed. Ice water content values derived from the theoretical parameterization are compared with the measured values. The results demonstrate that a simple parameterization for atmospheric ice content can account for ice contents observed in several synoptic contexts.

  9. Comparison of Four Mixed Layer Mesoscale Parameterizations and the Equation for an Arbitrary Tracer

    Science.gov (United States)

    Canuto, V. M.; Dubovikov, M. S.

    2011-01-01

    In this paper we discuss two issues, the inter-comparison of four mixed layer mesoscale parameterizations and the search for the eddy induced velocity for an arbitrary tracer. It must be stressed that our analysis is limited to mixed layer mesoscales since we do not treat sub-mesoscales and small turbulent mixing. As for the first item, since three of the four parameterizations are expressed in terms of a stream function and a residual flux of the RMT formalism (residual mean theory), while the fourth is expressed in terms of vertical and horizontal fluxes, we needed a formalism to connect the two formulations. The standard RMT representation developed for the deep ocean cannot be extended to the mixed layer since its stream function does not vanish at the ocean's surface. We develop a new RMT representation that satisfies the surface boundary condition. As for the general form of the eddy induced velocity for an arbitrary tracer, thus far, it has been assumed that there is only the one that originates from the curl of the stream function. This is because it was assumed that the tracer residual flux is purely diffusive. On the other hand, we show that in the case of an arbitrary tracer, the residual flux also has a skew component that gives rise to an additional bolus velocity. Therefore, instead of only one bolus velocity, there are now two, one coming from the curl of the stream function and the other from the skew part of the residual flux. In the buoyancy case, only one bolus velocity contributes to the mean buoyancy equation since the residual flux is indeed only diffusive.
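The decomposition described above can be stated compactly. As a hedged outline (sign conventions vary between authors), a residual tracer flux with an antisymmetric (skew) component is equivalent to advection by an additional divergence-free bolus velocity:

```latex
% Residual flux of a tracer \tau: a diffusive part plus a skew part
% built from an antisymmetric tensor \mathbb{S}:
\mathbf{F}_\tau \;=\; -\,K\,\nabla\tau \;+\; \mathbb{S}\,\nabla\tau ,
\qquad \mathbb{S}^{\mathsf T} = -\,\mathbb{S} .
% Because \mathbb{S} is antisymmetric, the divergence of the skew flux
% acts as advection by a second, divergence-free bolus velocity:
\nabla\!\cdot\!\left(\mathbb{S}\,\nabla\tau\right) \;=\; \mathbf{u}^{*}\!\cdot\nabla\tau ,
\qquad \mathbf{u}^{*} = \nabla\!\cdot\mathbb{S} ,
\qquad \nabla\!\cdot\mathbf{u}^{*} = 0 .
```

This is why the buoyancy case is special: when the residual flux is purely diffusive (S = 0), the second bolus velocity vanishes and only the stream-function contribution remains.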

  10. An analysis of MM5 sensitivity to different parameterizations for high-resolution climate simulations

    Science.gov (United States)

    Argüeso, D.; Hidalgo-Muñoz, J. M.; Gámiz-Fortis, S. R.; Esteban-Parra, M. J.; Castro-Díez, Y.

    2009-04-01

    An evaluation of MM5 mesoscale model sensitivity to different parameterization schemes is presented in terms of temperature and precipitation for high-resolution integrations over Andalusia (South of Spain). ERA-40 reanalysis data are used as initial and boundary conditions. Two domains were used: a coarse one of 55 by 60 grid points with 30 km spacing, and a nested domain of 48 by 72 grid points with 10 km spacing. The coarse domain fully covers the Iberian Peninsula, and Andalusia fits loosely within the finer one. In addition to parameterization tests, two dynamical downscaling techniques have been applied in order to examine the influence of initial conditions on RCM long-term studies. Regional climate studies usually employ continuous integration for the period under survey, initializing atmospheric fields only at the starting point and feeding boundary conditions regularly. An alternative approach is based on frequent re-initialization of atmospheric fields; the simulation is thus divided into several independent integrations. Altogether, 20 simulations have been performed using varying physics options, of which 4 applied the re-initialization technique. Surface temperature and accumulated precipitation (daily and monthly scale) were analyzed for a 5-year period from 1990 to 1994. Results have been compared with daily observational data series from 110 stations for temperature and 95 for precipitation. Both daily and monthly average temperatures are generally well represented by the model. Conversely, daily precipitation results present larger deviations from observational data. However, noticeable accuracy is gained when comparing with monthly precipitation observations. There are some particularly problematic subregions where precipitation is poorly captured, such as the Southeast of the Iberian Peninsula, mainly due to its extremely convective nature. Regarding parameterization schemes performance, every set provides very

  11. Automatic Parameterization Strategy for Cardiac Electrophysiology Simulations

    OpenAIRE

    Costa, Caroline Mendonca; Hoetzl, Elena; Rocha, Bernardo Martins; Prassl, Anton J; Plank, Gernot

    2013-01-01

    Driven by recent advances in medical imaging, image segmentation and numerical techniques, computer models of ventricular electrophysiology account for increasingly finer levels of anatomical and biophysical detail. However, considering the large number of model parameters involved, parameterization poses a major challenge. A minimum requirement in combined experimental and modeling studies is to achieve good agreement in activation and repolarization sequences between model and experiment or ...

  12. Integrated cumulus ensemble and turbulence (ICET): An integrated parameterization system for general circulation models (GCMs)

    Energy Technology Data Exchange (ETDEWEB)

    Evans, J.L.; Frank, W.M.; Young, G.S. [Pennsylvania State Univ., University Park, PA (United States)

    1996-04-01

    Successful simulations of the global circulation and climate require accurate representation of the properties of shallow and deep convective clouds, stable-layer clouds, and the interactions between various cloud types, the boundary layer, and the radiative fluxes. Each of these phenomena plays an important role in the global energy balance, and each must be parameterized in a global climate model. These processes are highly interactive. One major problem limiting the accuracy of parameterizations of clouds and other processes in general circulation models (GCMs) is that most of the parameterization packages are not linked with a common physical basis. Further, these schemes have not, in general, been rigorously verified against observations adequate to the task of resolving subgrid-scale effects. To address these problems, we are designing a new Integrated Cumulus Ensemble and Turbulence (ICET) parameterization scheme, installing it in a climate model (CCM2), and evaluating the performance of the new scheme using data from Atmospheric Radiation Measurement (ARM) Program Cloud and Radiation Testbed (CART) sites.

  13. A unified spectral parameterization for wave breaking: From the deep ocean to the surf zone

    Science.gov (United States)

    Filipot, J.-F.; Ardhuin, F.

    2012-11-01

    A new wave-breaking dissipation parameterization designed for phase-averaged spectral wave models is presented. It combines wave breaking basic physical quantities, namely, the breaking probability and the dissipation rate per unit area. The energy lost by waves is first explicitly calculated in physical space before being distributed over the relevant spectral components. The transition from deep to shallow water is made possible by using a dissipation rate per unit area of breaking waves that varies with the wave height, wavelength and water depth. This parameterization is implemented in the WAVEWATCH III modeling framework, which is applied to a wide range of conditions and scales, from the global ocean to the beach scale. Wave height, peak and mean periods, and spectral data are validated using in situ and remote sensing data. Model errors are comparable to those of other specialized deep or shallow water parameterizations. This work shows that it is possible to have a seamless parameterization from the deep ocean to the surf zone.

  14. Regional modelling of tracer transport by tropical convection – Part 1: Sensitivity to convection parameterization

    Directory of Open Access Journals (Sweden)

    J. Arteta

    2009-09-01

    Full Text Available The general objective of this series of papers is to evaluate long duration limited area simulations with idealised tracers as a tool to assess tracer transport in chemistry-transport models (CTMs). In this first paper, we analyse the results of six simulations using different convection closures and parameterizations. The simulations use the Grell and Dévényi (2002) mass-flux framework for the convection parameterization with different closures (Grell = GR, Arakawa-Schubert = AS, Kain-Fritsch = KF, Low omega = LO, Moisture convergence = MC) and an ensemble parameterization (EN) based on the other five closures. The simulations are run for one month during the SCOUT-O3 field campaign led from Darwin (Australia). They have a 60 km horizontal resolution and a fine vertical resolution in the upper troposphere/lower stratosphere. Meteorological results are compared with satellite products, radiosoundings and SCOUT-O3 aircraft campaign data. They show that the model is generally in good agreement with the measurements, with less variability in the model. Except for the precipitation field, the differences between the six simulations are small on average with respect to the differences with the meteorological observations. The comparison with TRMM rain rates shows that the six parameterizations or closures have similar behaviour concerning convection triggering times and locations. However, the six simulations exhibit two different behaviours for rainfall values, with the EN, AS and KF parameterizations (Group 1) modelling better rain fields than LO, MC and GR (Group 2). The vertical distribution of tropospheric tracers is very different for the two groups, showing significantly more transport into the TTL for Group 1, related to the larger average values of the upward velocities. Nevertheless, the low values of the Group 1 fluxes at and above the cold point level indicate that the model does not simulate significant overshooting. For stratospheric tracers

  15. How do dispersal costs and habitat selection influence realized population connectivity?

    Science.gov (United States)

    Burgess, Scott C; Treml, Eric A; Marshall, Dustin J

    2012-06-01

    Despite the importance of dispersal for population connectivity, dispersal is often costly to the individual. A major impediment to understanding connectivity has been a lack of data combining the movement of individuals and their survival to reproduction in the new habitat (realized connectivity). Although mortality often occurs during dispersal (an immediate cost), in many organisms costs are paid after dispersal (deferred costs). It is unclear how such deferred costs influence the mismatch between dispersal and realized connectivity. Through a series of experiments in the field and laboratory, we estimated both direct and indirect deferred costs in a marine bryozoan (Bugula neritina). We then used the empirical data to parameterize a theoretical model in order to formalize predictions about how dispersal costs influence realized connectivity. Individuals were more likely to colonize poor-quality habitat after prolonged dispersal durations. Individuals that colonized poor-quality habitat performed poorly after colonization because of some property of the habitat (an indirect deferred cost) rather than from prolonged dispersal per se (a direct deferred cost). Our theoretical model predicted that indirect deferred costs could result in nonlinear mismatches between spatial patterns of potential and realized connectivity. The deferred costs of dispersal are likely to be crucial for determining how well patterns of dispersal reflect realized connectivity. Ignoring these deferred costs could lead to inaccurate predictions of spatial population dynamics.
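The structure of the theoretical model — an immediate cost paid in transit plus an indirect deferred cost paid through habitat quality — can be sketched with a toy function. All parameter values and functional forms here are illustrative assumptions, not the Bugula neritina estimates:

```python
import math

def realized_connectivity(t, mu=0.1, p_good0=0.9, decay=0.3,
                          perf_good=1.0, perf_poor=0.2):
    """Toy realized connectivity after dispersal duration t.

    exp(-mu*t)           : immediate cost (mortality in transit)
    p_good(t)            : chance of settling in good habitat, falls with t
    perf_good / perf_poor: post-settlement performance by habitat quality
                           (the indirect deferred cost)
    """
    survive = math.exp(-mu * t)
    p_good = p_good0 * math.exp(-decay * t)
    performance = p_good * perf_good + (1.0 - p_good) * perf_poor
    return survive * performance
```

Because the habitat-quality term decays separately from in-transit survival, realized connectivity falls off faster than the dispersal kernel alone would predict — the kind of nonlinear mismatch between potential and realized connectivity the model formalizes.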

  16. Influence of ROI selection on Resting Functional Connectivity: An Individualized Approach for Resting fMRI Analysis

    Directory of Open Access Journals (Sweden)

    William Seunghyun Sohn

    2015-08-01

    Full Text Available The differences in how our brains are connected are often thought to reflect the differences in our individual personalities and cognitive abilities. Individual differences in brain connectivity have long been recognized in the neuroscience community; however, they have yet to manifest themselves in the methodology of resting state analysis. This is evident as previous studies use the same regions of interest (ROIs) for all subjects. In this paper we demonstrate that the use of ROIs which are standardized across individuals leads to inaccurate calculations of functional connectivity. We also show that this problem can be addressed by taking an individualized approach using subject-specific ROIs. Finally, we show that ROI selection can affect the way we interpret our data, by showing different changes in functional connectivity with ageing.
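The core computation behind such an analysis — correlating the mean time series of two ROIs — is the same whether the ROIs are standardized or subject-specific; only the voxel assignment changes. A minimal sketch with made-up voxel time series:

```python
def pearson(x, y):
    """Pearson correlation between two equal-length time series."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x)
    vy = sum((b - my) ** 2 for b in y)
    return cov / (vx * vy) ** 0.5

def roi_timeseries(voxels):
    """Mean time series over the voxels assigned to one ROI."""
    return [sum(v[t] for v in voxels) / len(voxels)
            for t in range(len(voxels[0]))]

# Hypothetical voxel time series; subject-specific ROI selection would
# change which voxels enter each list, and hence the correlation.
roi_a = roi_timeseries([[1.0, 2.0, 3.0, 4.0], [1.2, 1.9, 3.1, 4.1]])
roi_b = roi_timeseries([[0.9, 2.2, 2.8, 4.3]])
connectivity = pearson(roi_a, roi_b)   # one edge of the connectivity matrix
```

The paper's point follows directly: if a standardized ROI includes voxels that belong to a neighbouring functional region for a given subject, the averaged time series — and thus the computed connectivity — is biased for that subject.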

  17. Parameterization of Mixed Layer and Deep-Ocean Mesoscales Including Nonlinearity

    Science.gov (United States)

    Canuto, V. M.; Cheng, Y.; Dubovikov, M. S.; Howard, A. M.; Leboissetier, A.

    2018-01-01

    In 2011, Chelton et al. carried out a comprehensive census of mesoscales using altimetry data and reached the following conclusions: "essentially all of the observed mesoscale features are nonlinear" and "mesoscales do not move with the mean velocity but with their own drift velocity," which is "the most germane of all the nonlinear metrics." Accounting for these results in a mesoscale parameterization presents conceptual and practical challenges since linear analysis is no longer usable and one needs a model of nonlinearity. A mesoscale parameterization is presented that has the following features: 1) it is based on the solutions of the nonlinear mesoscale dynamical equations, 2) it describes arbitrary tracers, 3) it includes adiabatic (A) and diabatic (D) regimes, 4) the eddy-induced velocity is the sum of a Gent and McWilliams (GM) term plus a new term representing the difference between drift and mean velocities, 5) the new term lowers the transfer of mean potential energy to mesoscales, 6) the isopycnal slopes are not as flat as in the GM case, 7) deep-ocean stratification is enhanced compared to previous parameterizations where being more weakly stratified allowed a large heat uptake that is not observed, 8) the strength of the Deacon cell is reduced. The numerical results are from a stand-alone ocean code with Coordinated Ocean-Ice Reference Experiment I (CORE-I) normal-year forcing.

  18. Understanding and Improving Ocean Mixing Parameterizations for modeling Climate Change

    Science.gov (United States)

    Howard, A. M.; Fells, J.; Clarke, J.; Cheng, Y.; Canuto, V.; Dubovikov, M. S.

    2017-12-01

    Climate is vital. Earth is only habitable due to the atmosphere and oceans' distribution of energy. Our Greenhouse Gas emissions shift the overall balance between absorbed and emitted radiation, causing Global Warming. How much of these emissions are stored in the ocean vs. entering the atmosphere to cause warming, and how the extra heat is distributed, depends on atmosphere and ocean dynamics, which we must understand to know the risks of both progressive Climate Change and Climate Variability, which affect us all in many ways including extreme weather, floods, droughts, sea-level rise and ecosystem disruption. Citizens must be informed to make decisions such as "business as usual" vs. mitigating emissions to avert catastrophe. Simulations of Climate Change provide needed knowledge but in turn need reliable parameterizations of key physical processes, including ocean mixing, which greatly impacts the transport and storage of heat and dissolved CO2. The turbulence group at NASA-GISS seeks to use physical theory to improve parameterizations of ocean mixing, including small-scale convective, shear driven, double diffusive, internal wave and tidal driven vertical mixing, as well as mixing by submesoscale eddies, and lateral mixing along isopycnals by mesoscale eddies. Medgar Evers undergraduates aid NASA research while learning climate science and developing computer and math skills. We write our own programs in MATLAB and FORTRAN to visualize and process output of ocean simulations, including producing statistics to help judge the impacts of different parameterizations on fidelity in reproducing realistic temperatures and salinities, diffusivities and turbulent power. The results can help upgrade the parameterizations. Students are introduced to complex system modeling and gain deeper appreciation of climate science and programming skills, while furthering climate science. We are incorporating climate projects into the Medgar Evers college curriculum. The PI is both a member of the turbulence group at

  19. Current state of aerosol nucleation parameterizations for air-quality and climate modeling

    Science.gov (United States)

    Semeniuk, Kirill; Dastoor, Ashu

    2018-04-01

    Aerosol nucleation parameterization models commonly used in 3-D air quality and climate models have serious limitations. These include classical nucleation theory based variants, empirical models and other formulations. Recent work based on detailed and extensive laboratory measurements and improved quantum chemistry computation has substantially advanced the state of nucleation parameterizations. For inorganic nucleation involving BHN and THN, including ion effects, these new models should be considered worthwhile replacements for the old ones. However, the contribution of organic species to nucleation remains poorly quantified. New particle formation involves a distinct post-nucleation growth regime which is characterized by a strong Kelvin curvature effect and is thus dependent on the availability of very low volatility organic species or sulfuric acid. There have been advances in the understanding of the multiphase chemistry of biogenic and anthropogenic organic compounds which help to overcome the initial aerosol growth barrier. Implementation of processes influencing new particle formation is challenging in 3-D models, and comprehensive parameterizations are lacking. This review considers the existing models and recent innovations.

  20. Parameterized entropy analysis of EEG following hypoxic-ischemic brain injury

    International Nuclear Information System (INIS)

    Tong Shanbao; Bezerianos, Anastasios; Malhotra, Amit; Zhu Yisheng; Thakor, Nitish

    2003-01-01

    In the present study, Tsallis and Rényi entropy methods were used to study the electric activity of the brain following hypoxic-ischemic (HI) injury. We investigated the performance of these parameterized information measures in describing the electroencephalogram (EEG) signal of controlled experimental animal HI injury. The results show that (a) compared with Shannon and Rényi entropy, the parameterized Tsallis entropy acts like a spatial filter, and the information rate can tune either to long-range rhythms or to short abrupt changes, such as bursts or spikes during the beginning of recovery, via the entropic index q; and (b) Rényi entropy is a compact and predictive indicator for monitoring the physiological changes during recovery from brain injury. There is a reduction in the Rényi entropy after brain injury, followed by a gradual recovery upon resuscitation
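The two parameterized measures can be stated compactly; both reduce to Shannon entropy as the entropic index q approaches 1. A sketch over a discrete probability distribution (the EEG application would first estimate such a distribution from the signal):

```python
import math

def shannon(p):
    """Shannon entropy of a discrete distribution (natural log)."""
    return -sum(pi * math.log(pi) for pi in p if pi > 0)

def tsallis(p, q):
    """Tsallis entropy S_q = (1 - sum p_i^q) / (q - 1); Shannon as q -> 1."""
    if abs(q - 1.0) < 1e-12:
        return shannon(p)
    return (1.0 - sum(pi ** q for pi in p)) / (q - 1.0)

def renyi(p, q):
    """Renyi entropy H_q = ln(sum p_i^q) / (1 - q); Shannon as q -> 1."""
    if abs(q - 1.0) < 1e-12:
        return shannon(p)
    return math.log(sum(pi ** q for pi in p)) / (1.0 - q)

uniform = [0.25] * 4   # e.g. a normalized EEG amplitude histogram
```

The filtering behaviour the abstract describes comes from q: for q > 1 the p_i^q term weights the dominant probabilities (long-range rhythms), while q < 1 amplifies rare events (bursts and spikes).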

  1. A new conceptual framework for water and sediment connectivity

    Science.gov (United States)

    Keesstra, Saskia; Cerdà, Artemi; Parsons, Tony; Nunes, Joao Pedro; Saco, Patricia

    2016-04-01

    hydrological and sediment connectivity as described in previous research by Bracken et al. (2013, 2015). By looking at the individual parts of the system, it becomes more manageable and less conceptual, which is important because we have to indicate where research on connectivity should focus. With this approach, processes and feedbacks in the catchment system can be pulled apart and studied separately, making the system understandable and measurable, which will enable parameterization of models with actual measured data. The approach we took in describing water and sediment transfer is to first assess how they work in a system in dynamic equilibrium. After describing this, an assessment is made of how such dynamic equilibria can be taken out of balance by an external push. Baartman, J.E.M., Masselink, R.H., Keesstra, S.D., Temme, A.J.A.M., 2013. Linking landscape morphological complexity and sediment connectivity. Earth Surface Processes and Landforms 38: 1457-1471. Bracken, L.J., Wainwright, J., Ali, G.A., Tetzlaff, D., Smith, M.W., Reaney, S.M., and Roy, A.G. 2013. Concepts of hydrological connectivity: research approaches, pathways and future agendas. Earth Science Reviews, 119, 17-34. Bracken, L.J., Turnbull, L., Wainwright, J. and Boogart, P. Submitted. Sediment Connectivity: A Framework for Understanding Sediment Transfer at Multiple Scales. Earth Surface Processes and Landforms. Cerdà, A., Brazier, R., Nearing, M., and de Vente, J. 2012. Scales and erosion. Catena, 102, 1-2. doi:10.1016/j.catena.2011.09.006. Parsons, A.J., Bracken, L., Poeppl, R., Wainwright, J., Keesstra, S.D., 2015. Editorial: Introduction to special issue on connectivity in water and sediment dynamics. In press in Earth Surface Processes and Landforms. DOI: 10.1002/esp.3714

  2. Systems Pharmacology-Based Approach of Connecting Disease Genes in Genome-Wide Association Studies with Traditional Chinese Medicine.

    Science.gov (United States)

    Kim, Jihye; Yoo, Minjae; Shin, Jimin; Kim, Hyunmin; Kang, Jaewoo; Tan, Aik Choon

    2018-01-01

    Traditional Chinese medicine (TCM), which originated in ancient China, has been practiced for thousands of years to treat various symptoms and diseases. However, the molecular mechanisms by which TCM treats these diseases remain unknown. In this study, we employ a systems pharmacology-based approach to connect diseases from genome-wide association studies (GWAS) with TCM for potential drug repurposing and repositioning. We studied 102 TCM components and their target genes by analyzing microarray gene expression experiments, constructed disease-gene networks from 2558 GWAS studies, and applied a systems pharmacology approach to prioritize disease-target genes. Using this bioinformatics approach, we analyzed 14,713 GWAS disease-TCM-target gene pairs and identified 115 disease-gene pairs with q value < 0.2. We validated several of these GWAS disease-TCM-target gene pairs with literature evidence, demonstrating that this computational approach can reveal novel indications for TCM. We also developed the TCM-Disease web application to facilitate TCM drug repurposing efforts. Systems pharmacology is a promising approach for connecting GWAS diseases with TCM, and the computational approaches described in this study could easily be extended to other disease-gene network analyses.

  3. ConnectViz: Accelerated Approach for Brain Structural Connectivity Using Delaunay Triangulation.

    Science.gov (United States)

    Adeshina, A M; Hashim, R

    2016-03-01

    Stroke is a cardiovascular disease associated with high mortality and long-term disability worldwide. Normal functioning of the brain depends on an adequate supply of oxygen and nutrients to the brain's complex network through the blood vessels. During a cerebrovascular incident, patients may be affected by hemorrhagic stroke, ischemia or other blood vessel dysfunctions. Structurally, the left and right carotid arteries and the left and right vertebral arteries are responsible for supplying blood to the brain, scalp and face. However, a number of impairments in the function of the frontal lobes may occur as a result of any decrease in the flow of blood through one of the internal carotid arteries; such impairment commonly results in numbness, weakness or paralysis. Recently, the concept of the brain's wiring representation, the connectome, was introduced. However, construction and visualization of such a brain network requires tremendous computation, and previously proposed approaches suffer from the common problems of high memory consumption and slow execution. Interactivity in previously proposed frameworks for brain networks is also an outstanding issue. This study proposes an accelerated approach for brain connectomic visualization based on the graph theory paradigm using the compute unified device architecture (CUDA), extending the previously proposed SurLens Visualization and computer-aided hepatocellular carcinoma frameworks. The accelerated brain structural connectivity framework was evaluated with stripped brain datasets from the Department of Surgery, University of North Carolina, Chapel Hill, USA. Significantly, our proposed framework is able to generate and extract points and edges from the datasets, display nodes and edges in the datasets in the form of a network, and clearly map data volume to the corresponding brain surface. 
Moreover, with the framework, surfaces of the dataset were simultaneously displayed with the

  4. Evaluating the effectiveness of restoring longitudinal connectivity for stream fish communities: towards a more holistic approach.

    Science.gov (United States)

    Tummers, Jeroen S; Hudson, Steve; Lucas, Martyn C

    2016-11-01

    A more holistic approach towards testing longitudinal connectivity restoration is needed in order to establish that the intended ecological functions of such restoration are achieved. We illustrate the use of a multi-method scheme to evaluate the effectiveness of 'nature-like' connectivity restoration for stream fish communities in the River Deerness, NE England. Electric-fishing, capture-mark-recapture, PIT telemetry and radio-telemetry were used to measure fish community composition, dispersal, fishway efficiency and upstream migration, respectively. For measuring passage and dispersal, our rationale was to evaluate a wide size range of strong swimmers (exemplified by brown trout Salmo trutta) and weak swimmers (exemplified by bullhead Cottus perifretum) in situ in the stream ecosystem. Radio-tracking of adult trout during the spawning migration showed that passage efficiency at each of five connectivity-restored sites was 81.3-100%. Unaltered (experimental control) structures on the migration route had a bottleneck effect on upstream migration, especially during low flows. However, even during low flows, displaced PIT-tagged juvenile trout (total n=153) exhibited a passage efficiency of 70.1-93.1% at two nature-like passes. In mark-recapture experiments, juvenile brown trout and bullhead tagged (total n=5303) succeeded in dispersing upstream more often at most structures following obstacle modification, but not at the two control sites, based on a Laplace kernel modelling approach of observed dispersal distance and barrier traverses. Medium-term post-restoration data (2-3 years) showed that the fish assemblage remained similar at five of six connectivity-restored sites and two control sites, but at one connectivity-restored headwater site previously inhabited by trout only, three native non-salmonid species colonized. We conclude that stream habitat reconnection should support free movement of a wide range of species and life stages, wherever retention of such

  5. Improving social connection through a communities-of-practice-inspired cognitive work analysis approach.

    Science.gov (United States)

    Euerby, Adam; Burns, Catherine M

    2014-03-01

    Increasingly, people work in socially networked environments. With growing adoption of enterprise social network technologies, supporting effective social community is becoming an important factor in organizational success. Relatively few human factors methods have been applied to social connection in communities. Although team methods provide a contribution, they do not suit design for communities. Wenger's community of practice concept, combined with cognitive work analysis, provided one way of designing for community. We used a cognitive work analysis approach modified with principles for supporting communities of practice to generate a new website design. Over several months, the community using the site was studied to examine their degree of social connectedness and communication levels. Social network analysis and communications analysis, conducted at three different intervals, showed increases in connections between people and between people and organizations, as well as increased communication following the launch of the new design. In this work, we suggest that human factors approaches can be effective in social environments, when applied considering social community principles. This work has implications for the development of new human factors methods as well as the design of interfaces for sociotechnical systems that have community building requirements.

  6. Finding significantly connected voxels based on histograms of connection strengths

    DEFF Research Database (Denmark)

    Kasenburg, Niklas; Pedersen, Morten Vester; Darkner, Sune

    2016-01-01

    We explore a new approach for structural connectivity based segmentations of subcortical brain regions. Connectivity based segmentations are usually based on fibre connections from a seed region to predefined target regions. We present a method for finding significantly connected voxels based on the distribution of connection strengths. Paths from seed voxels to all voxels in a target region are obtained from a shortest-path tractography. For each seed voxel we approximate the distribution with a histogram of path scores. We hypothesise that the majority of estimated connections are false-positives and that their connection strength is distributed differently from true-positive connections. Therefore, an empirical null-distribution is defined for each target region as the average normalized histogram over all voxels in the seed region. Single histograms are then tested against the corresponding null...
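The null-distribution idea in this record can be illustrated on synthetic data. The test statistic below (an L1 histogram distance with a percentile threshold) is an illustrative assumption, since the abstract does not specify the test used:

```python
import numpy as np

rng = np.random.default_rng(1)
n_voxels, n_paths, n_bins = 200, 50, 20

# Hypothetical path scores: mostly "false-positive" noise, plus a few
# voxels (indices 0-9) with a shifted, putatively true-positive distribution.
scores = rng.beta(2, 5, size=(n_voxels, n_paths))
scores[:10] = rng.beta(5, 2, size=(10, n_paths))

edges = np.linspace(0, 1, n_bins + 1)
hists = np.stack([np.histogram(s, bins=edges)[0] for s in scores])
hists = hists / hists.sum(axis=1, keepdims=True)   # normalized per voxel

null = hists.mean(axis=0)                          # empirical null histogram

# Distance of each voxel's histogram from the null (L1 distance is an
# assumption for illustration, not the paper's statistic).
dist = np.abs(hists - null).sum(axis=1)
threshold = np.percentile(dist, 95)
significant = np.where(dist > threshold)[0]
print(significant)
```

Voxels whose score histograms deviate strongly from the averaged null are flagged as significantly connected.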

  7. submitter Data-driven RBE parameterization for helium ion beams

    CERN Document Server

    Mairani, A; Dokic, I; Valle, S M; Tessonnier, T; Galm, R; Ciocca, M; Parodi, K; Ferrari, A; Jäkel, O; Haberer, T; Pedroni, P; Böhlen, T T

    2016-01-01

    Helium ion beams are expected to be available again in the near future for clinical use. A suitable formalism to obtain relative biological effectiveness (RBE) values for treatment planning (TP) studies is needed. In this work we developed a data-driven RBE parameterization based on published in vitro experimental values. The RBE parameterization has been developed within the framework of the linear-quadratic (LQ) model as a function of the helium linear energy transfer (LET), dose and the tissue-specific parameter $(\alpha/\beta)_{\mathrm{ph}}$ of the LQ model for the reference radiation. Analytic expressions are provided, derived from the collected database, describing the $\mathrm{RBE}_{\alpha} = \alpha_{\mathrm{He}}/\alpha_{\mathrm{ph}}$ and $R_{\beta} = \beta_{\mathrm{He}}/\beta_{\mathrm{ph}}$ ratios as a function of LET. Calculated RBE values at 2 Gy photon dose and at 10% survival ($\mathrm{RBE}_{10}$) are compared with the experimental ones. Pearson's correlati...
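A minimal sketch of how an RBE at 10% survival follows from the LQ model. The alpha/beta values and the ratio values below are hypothetical placeholders, not the paper's fitted parameterization:

```python
import numpy as np

def dose_at_survival(alpha, beta, s):
    # Solve s = exp(-(alpha*d + beta*d**2)) for the dose d (Gy).
    return (-alpha + np.sqrt(alpha**2 - 4 * beta * np.log(s))) / (2 * beta)

# Hypothetical LQ parameters (Gy^-1, Gy^-2); illustrative only.
alpha_ph, beta_ph = 0.15, 0.05   # reference photon radiation
rbe_alpha, r_beta = 1.8, 1.0     # LET-dependent ratios from a parameterization
alpha_he = rbe_alpha * alpha_ph
beta_he = r_beta * beta_ph

# RBE at 10% survival: ratio of photon to helium dose for the same effect.
rbe10 = (dose_at_survival(alpha_ph, beta_ph, 0.1)
         / dose_at_survival(alpha_he, beta_he, 0.1))
print(rbe10)
```

For these inputs the RBE at 10% survival lies between the high-dose limit (set by the beta ratio) and the low-dose limit RBE_alpha, as the LQ formalism requires.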

  8. Further study on parameterization of reactor NAA: Pt. 2

    International Nuclear Information System (INIS)

    Tian Weizhi; Zhang Shuxin

    1989-01-01

    In the previous paper, the Ik0 method was proposed for fission interference corrections. Another important kind of interference in reactor NAA is due to threshold reactions induced by reactor fast neutrons. In view of the increasing importance of this kind of interference, and the difficulties encountered in using the relative comparison method, a parameterized method has been introduced. Typical channels in the heavy water reflector and the No. 2 horizontal channel of the Heavy Water Research Reactor at the Institute of Atomic Energy have been shown, using multi-threshold detectors, to have fast neutron energy distributions (E > 4 MeV) close to the primary fission neutron spectrum. On this basis, a Ti foil is used as an 'instant fast neutron flux monitor' in parameterized corrections for threshold reaction interferences in long irradiations. A constant value of φ_f/φ_s = 0.70 ± 0.02% has been obtained for the No. 2 rabbit channel; this value can be directly used for threshold reaction interference corrections in short irradiations

  9. Making Connections

    Science.gov (United States)

    Pien, Cheng Lu; Dongsheng, Zhao

    2011-01-01

    Effective teaching includes enabling learners to make connections within mathematics. It is easy to accord with this statement, but how often is it a reality in the mathematics classroom? This article describes an approach in "connecting equivalent" fractions and whole number operations. The authors illustrate how a teacher can combine a common…

  10. Climate impacts of parameterized Nordic Sea overflows

    Science.gov (United States)

    Danabasoglu, Gokhan; Large, William G.; Briegleb, Bruce P.

    2010-11-01

    A new overflow parameterization (OFP) of density-driven flows through ocean ridges via narrow, unresolved channels has been developed and implemented in the ocean component of the Community Climate System Model version 4. It represents exchanges from the Nordic Seas and the Antarctic shelves, associated entrainment, and subsequent injection of overflow product waters into the abyssal basins. We investigate the effects of the parameterized Denmark Strait (DS) and Faroe Bank Channel (FBC) overflows on the ocean circulation, showing their impacts on the Atlantic Meridional Overturning Circulation and the North Atlantic climate. The OFP is based on the Marginal Sea Boundary Condition scheme of Price and Yang (1998), but there are significant differences that are described in detail. Two uncoupled (ocean-only) and two fully coupled simulations are analyzed. Each pair consists of one case with the OFP and a control case without this parameterization. In both uncoupled and coupled experiments, the parameterized DS and FBC source volume transports are within the range of observed estimates. The entrainment volume transports remain lower than observational estimates, leading to lower than observed product volume transports. Due to low entrainment, the product and source water properties are too similar. The DS and FBC overflow temperature and salinity properties are in better agreement with observations in the uncoupled case than in the coupled simulation, likely reflecting surface flux differences. The most significant impact of the OFP is the improved North Atlantic Deep Water penetration depth, leading to a much better comparison with the observational data and significantly reducing the chronic, shallow penetration depth bias in level coordinate models. This improvement is due to the deeper penetration of the southward flowing Deep Western Boundary Current. In comparison with control experiments without the OFP, the abyssal ventilation rates increase in the North
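The entrainment effect described above (low entrainment leaving the product water too similar to the source water) can be sketched with a simple transport-weighted mixing calculation. The transports and temperatures below are illustrative stand-ins, not the model's diagnosed values:

```python
def mix(q_src, t_src, q_ent, t_ent):
    # Volume-transport-weighted mixing of source and entrained water:
    # product transport is the sum, product temperature the weighted mean.
    q_prod = q_src + q_ent
    t_prod = (q_src * t_src + q_ent * t_ent) / q_prod
    return q_prod, t_prod

# Hypothetical Denmark-Strait-like numbers (Sv, degC); illustrative only.
q_prod_low, t_prod_low = mix(3.0, 0.5, 1.0, 6.0)  # weak entrainment
q_prod_hi, t_prod_hi = mix(3.0, 0.5, 3.0, 6.0)    # stronger entrainment
print(q_prod_low, t_prod_low, q_prod_hi, t_prod_hi)
```

With weak entrainment the product water keeps nearly the source properties and the product transport stays small, which is the bias pattern the abstract reports.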

  11. Evaluating and Improving Wind Forecasts over South China: The Role of Orographic Parameterization in the GRAPES Model

    Science.gov (United States)

    Zhong, Shuixin; Chen, Zitong; Xu, Daosheng; Zhang, Yanxia

    2018-06-01

    Unresolved small-scale orographic (SSO) drags are parameterized in a regional model based on the Global/Regional Assimilation and Prediction System for the Tropical Mesoscale Model (GRAPES TMM). The SSO drags are represented by adding a sink term in the momentum equations. The maximum height of the mountain within the grid box is adopted in the SSO parameterization (SSOP) scheme as compensation for the drag. The effects of the unresolved topography are parameterized as the feedbacks to the momentum tendencies on the first model level in planetary boundary layer (PBL) parameterization. The SSOP scheme has been implemented and coupled with the PBL parameterization scheme within the model physics package. A monthly simulation is designed to examine the performance of the SSOP scheme over the complex terrain areas located in the southwest of Guangdong. The verification results show that the surface wind speed bias has been much alleviated by adopting the SSOP scheme, in addition to reduction of the wind bias in the lower troposphere. The target verification over Xinyi shows that the simulations with the SSOP scheme provide improved wind estimation over the complex regions in the southwest of Guangdong.

  12. Uncertainties of parameterized surface downward clear-sky shortwave and all-sky longwave radiation.

    Science.gov (United States)

    Gubler, S.; Gruber, S.; Purves, R. S.

    2012-06-01

    As many environmental models rely on simulating the energy balance at the Earth's surface based on parameterized radiative fluxes, knowledge of the inherent model uncertainties is important. In this study we evaluate one parameterization of clear-sky direct, diffuse and global shortwave downward radiation (SDR) and diverse parameterizations of clear-sky and all-sky longwave downward radiation (LDR). In a first step, SDR is estimated based on measured input variables and estimated atmospheric parameters for hourly time steps during the years 1996 to 2008. Model behaviour is validated using the high-quality measurements of six Alpine Surface Radiation Budget (ASRB) stations in Switzerland covering different elevations, and measurements of the Swiss Alpine Climate Radiation Monitoring network (SACRaM) in Payerne. In a next step, twelve clear-sky LDR parameterizations are calibrated using the ASRB measurements. One of the best performing parameterizations is selected to estimate all-sky LDR, where cloud transmissivity is estimated using measured and modeled global SDR during daytime. In a final step, the performance of several interpolation methods is evaluated to determine the cloud transmissivity at night. We show that clear-sky direct, diffuse and global SDR is adequately represented by the model when using measurements of the atmospheric parameters precipitable water and aerosol content at Payerne. If the atmospheric parameters are estimated and used as fixed values, the relative mean bias deviance (MBD) and the relative root mean squared deviance (RMSD) of the clear-sky global SDR lie between -2 and 5% and between 7 and 13%, respectively, across the six locations. The small errors in clear-sky global SDR can be attributed to compensating effects of modeled direct and diffuse SDR, since an overestimation of aerosol content in the atmosphere results in underestimating the direct, but overestimating the diffuse SDR. Calibration of LDR parameterizations to local conditions
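A sketch of the two deviance measures used above, assuming (as is common, though not stated in this record) normalization by the mean observed flux:

```python
import numpy as np

def relative_mbd(model, obs):
    # Relative mean bias deviance, in percent of the mean observation.
    return 100.0 * np.mean(model - obs) / np.mean(obs)

def relative_rmsd(model, obs):
    # Relative root mean squared deviance, in percent of the mean observation.
    return 100.0 * np.sqrt(np.mean((model - obs) ** 2)) / np.mean(obs)

# Hypothetical hourly clear-sky global SDR (W m-2): model vs. measurement.
rng = np.random.default_rng(2)
obs = rng.uniform(200, 900, size=1000)
model = obs * 1.02 + rng.normal(0, 30, size=1000)  # ~2% bias plus scatter

print(relative_mbd(model, obs), relative_rmsd(model, obs))
```

The MBD captures systematic over- or underestimation, while the RMSD additionally penalizes scatter, so the RMSD is always at least as large as the absolute MBD.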

  13. Uncertainties of parameterized surface downward clear-sky shortwave and all-sky longwave radiation.

    Directory of Open Access Journals (Sweden)

    S. Gubler

    2012-06-01

    Full Text Available As many environmental models rely on simulating the energy balance at the Earth's surface based on parameterized radiative fluxes, knowledge of the inherent model uncertainties is important. In this study we evaluate one parameterization of clear-sky direct, diffuse and global shortwave downward radiation (SDR) and diverse parameterizations of clear-sky and all-sky longwave downward radiation (LDR). In a first step, SDR is estimated based on measured input variables and estimated atmospheric parameters for hourly time steps during the years 1996 to 2008. Model behaviour is validated using the high-quality measurements of six Alpine Surface Radiation Budget (ASRB) stations in Switzerland covering different elevations, and measurements of the Swiss Alpine Climate Radiation Monitoring network (SACRaM) in Payerne. In a next step, twelve clear-sky LDR parameterizations are calibrated using the ASRB measurements. One of the best performing parameterizations is selected to estimate all-sky LDR, where cloud transmissivity is estimated using measured and modeled global SDR during daytime. In a final step, the performance of several interpolation methods is evaluated to determine the cloud transmissivity at night.

    We show that clear-sky direct, diffuse and global SDR is adequately represented by the model when using measurements of the atmospheric parameters precipitable water and aerosol content at Payerne. If the atmospheric parameters are estimated and used as fixed values, the relative mean bias deviance (MBD) and the relative root mean squared deviance (RMSD) of the clear-sky global SDR lie between −2 and 5% and between 7 and 13%, respectively, across the six locations. The small errors in clear-sky global SDR can be attributed to compensating effects of modeled direct and diffuse SDR, since an overestimation of aerosol content in the atmosphere results in underestimating the direct, but overestimating the diffuse SDR. Calibration of LDR parameterizations

  14. Parameterizing Size Distribution in Ice Clouds

    Energy Technology Data Exchange (ETDEWEB)

    DeSlover, Daniel; Mitchell, David L.

    2009-09-25

    cloud optical properties formulated in terms of PSD parameters in combination with remote measurements of thermal radiances to characterize the small mode. This is possible since the absorption efficiency (Qabs) of small-mode crystals is larger at 12 µm wavelength than at 11 µm wavelength due to the process of wave resonance or photon tunneling, which is more active at 12 µm. This makes the 12/11 µm absorption optical depth ratio (or equivalently the 12/11 µm Qabs ratio) a means for detecting the relative concentration of small ice particles in cirrus. Using this principle, this project tested and developed PSD schemes that can help characterize cirrus clouds at each of the three ARM sites: SGP, NSA and TWP. This was the main effort of this project. These PSD schemes, and the ice sedimentation velocities predicted from them, have been used to test the new cirrus microphysics parameterization in the GCM known as the Community Climate System Model (CCSM) as part of an ongoing collaboration with NCAR. Regarding the second problem, we developed and did preliminary testing on a passive thermal method for retrieving the total water path (TWP) of Arctic mixed-phase clouds, where TWPs are often in the range of 20 to 130 g m-2 (difficult for microwave radiometers to measure accurately). We also developed a new radar method for retrieving the cloud ice water content (IWC), which can be vertically integrated to yield the ice water path (IWP). These techniques were combined to determine the IWP and liquid water path (LWP) in Arctic clouds, and hence the fractions of ice and liquid water. We have tested this approach using a case study from the ARM field campaign called M-PACE (Mixed-Phase Arctic Cloud Experiment). This research led to a new satellite remote sensing method that appears promising for detecting low levels of liquid water in high clouds, typically between -20 and -36 °C. We hope to develop this method in future research.

  15. Field Investigation of the Turbulent Flux Parameterization and Scalar Turbulence Structure over a Melting Valley Glacier

    Science.gov (United States)

    Guo, X.; Yang, K.; Yang, W.; Li, S.; Long, Z.

    2011-12-01

    We present a field investigation over a melting valley glacier on the Tibetan Plateau. One particular aspect is that three melt phases are distinguished during the glacier's ablation season, which enables us to compare results over snow, bare-ice, and hummocky surfaces [with aerodynamic roughness lengths (z0M) varying on the order of 10^-4 to 10^-2 m]. We address two issues of common concern in the study of glacio-meteorology and micrometeorology. First, we study turbulent energy flux estimation through a critical evaluation of three parameterizations of the scalar roughness lengths (z0T for temperature and z0q for humidity), viz. key factors for the accurate estimation of sensible and latent heat fluxes using the bulk aerodynamic method. The first approach (Andreas 1987, Boundary-Layer Meteorol 38:159-184) is based on surface-renewal models and has been very widely applied in glaciated areas; the second (Yang et al. 2002, Q J Roy Meteorol Soc 128:2073-2087) has never been applied over an ice/snow surface, despite its validity in arid regions; the third (Smeets and van den Broeke 2008, Boundary-Layer Meteorol 128:339-355) is proposed for use specifically over rough ice, defined as z0M > 10^-3 m or so. This empirical z0M threshold value is deemed of general relevance to glaciated areas (e.g. ice sheets/caps and valley/outlet glaciers), above which the first approach gives underestimated z0T and z0q. The first and the third approaches tend to underestimate and overestimate turbulent heat/moisture exchange, respectively (relative errors often > 30%). Overall, the second approach produces fairly low errors in energy flux estimates; it thus emerges as a practically useful choice for parameterizing z0T and z0q over an ice/snow surface. Our evaluation of z0T and z0q parameterizations hopefully serves as a useful source of reference for physically based modeling of land-ice surface energy budget and mass balance. Second, we explore how scalar turbulence
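The role of z0T in the bulk aerodynamic method can be sketched as follows. This neutral-stability version omits the stability corrections used in the cited schemes, and all numbers are illustrative assumptions:

```python
import numpy as np

KAPPA = 0.4    # von Karman constant
RHO = 1.0      # air density (kg m-3), an illustrative value
CP = 1005.0    # specific heat of air (J kg-1 K-1)

def sensible_heat_flux(u, t_air, t_surf, z, z0m, z0t):
    # Bulk aerodynamic sensible heat flux (W m-2), neutral stability:
    # exchange coefficient from log profiles for momentum and heat.
    ch = KAPPA**2 / (np.log(z / z0m) * np.log(z / z0t))
    return RHO * CP * ch * u * (t_air - t_surf)

# Melting ice surface (0 degC), 2 m air temperature of 5 degC, 3 m/s wind.
for z0t in (1e-3, 1e-4, 1e-5):  # z0T is the uncertain, parameterized input
    print(z0t, sensible_heat_flux(3.0, 5.0, 0.0, 2.0, z0m=1e-3, z0t=z0t))
```

Varying z0T over two orders of magnitude changes the estimated flux by tens of percent, which is why the choice of z0T/z0q parameterization matters so much in the record above.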

  16. A Stochastic Lagrangian Basis for a Probabilistic Parameterization of Moisture Condensation in Eulerian Models

    OpenAIRE

    Tsang, Yue-Kin; Vallis, Geoffrey K.

    2018-01-01

    In this paper we describe the construction of an efficient probabilistic parameterization that could be used in a coarse-resolution numerical model in which the variation of moisture is not properly resolved. An Eulerian model using a coarse-grained field on a grid cannot properly resolve regions of saturation---in which condensation occurs---that are smaller than the grid boxes. Thus, in the absence of a parameterization scheme, either the grid box must become saturated or condensation will ...

  17. Effects of model resolution and parameterizations on the simulations of clouds, precipitation, and their interactions with aerosols

    Science.gov (United States)

    Lee, Seoung Soo; Li, Zhanqing; Zhang, Yuwei; Yoo, Hyelim; Kim, Seungbum; Kim, Byung-Gon; Choi, Yong-Sang; Mok, Jungbin; Um, Junshik; Ock Choi, Kyoung; Dong, Danhong

    2018-01-01

    This study investigates the roles played by model resolution and microphysics parameterizations in the well-known uncertainties or errors in simulations of clouds, precipitation, and their interactions with aerosols by numerical weather prediction (NWP) models. For this investigation, we used cloud-system-resolving model (CSRM) simulations as benchmark simulations that adopt high resolution and full-fledged microphysical processes. These simulations were evaluated against observations, and this evaluation demonstrated that the CSRM simulations can function as benchmarks. Comparisons between the CSRM simulations and simulations at the coarse resolutions generally adopted by current NWP models indicate that the use of coarse resolutions can lower not only updrafts and other cloud variables (e.g., cloud mass, condensation, deposition, and evaporation) but also their sensitivity to increasing aerosol concentration. The parameterization of the saturation process plays an important role in the sensitivity of cloud variables to aerosol concentrations, while the parameterization of the sedimentation process has a substantial impact on how cloud variables are distributed vertically. The variation in cloud variables with resolution is much greater than that with varying microphysics parameterizations, which suggests that the uncertainties in the NWP simulations are associated much more with resolution than with microphysics parameterizations.

  18. A Novel Synchronization-Based Approach for Functional Connectivity Analysis

    Directory of Open Access Journals (Sweden)

    Angela Lombardi

    2017-01-01

    Full Text Available Complex network analysis has become a gold standard for investigating functional connectivity in the human brain. Popular approaches for quantifying functional coupling between fMRI time series are linear zero-lag correlation methods; however, they may reveal only partial aspects of the functional links between brain areas. In this work, we propose a novel approach for assessing functional coupling between fMRI time series and constructing functional brain networks. A phase space framework is used to map pairs of signals, exploiting their cross recurrence plots (CRPs) to compare the trajectories of the interacting systems. A synchronization metric is extracted from the CRP to assess the coupling behavior of the time series. Since the functional communities of a healthy population are expected to be highly consistent for the same task, we defined functional networks of task-related fMRI data for a cohort of healthy subjects and applied a modularity algorithm to determine the community structures of the networks. The within-group similarity of communities is evaluated to verify whether the new metric is robust enough against noise. The synchronization metric is also compared with Pearson's correlation coefficient, and the detected communities seem to better reflect the functional brain organization during the specific task.
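A toy version of the CRP-based coupling idea described above. The embedding parameters and the recurrence-rate metric are simplifying assumptions for illustration, not the paper's exact synchronization measure:

```python
import numpy as np

def embed(x, dim=3, tau=2):
    # Time-delay embedding of a 1-D signal into a dim-dimensional phase space.
    n = len(x) - (dim - 1) * tau
    return np.stack([x[i * tau : i * tau + n] for i in range(dim)], axis=1)

def cross_recurrence(x, y, eps):
    # Cross recurrence plot: which pairs of embedded states are within eps.
    ex, ey = embed(x), embed(y)
    d = np.linalg.norm(ex[:, None, :] - ey[None, :, :], axis=2)
    return d < eps

def sync_metric(x, y, eps=0.5):
    # Recurrence rate of the CRP, used here as a simple coupling score.
    return cross_recurrence(x, y, eps).mean()

t = np.linspace(0, 8 * np.pi, 400)
rng = np.random.default_rng(3)
a = np.sin(t) + 0.1 * rng.normal(size=t.size)
b = np.sin(t + 0.3) + 0.1 * rng.normal(size=t.size)  # phase-shifted copy
c = rng.normal(size=t.size)                          # unrelated noise

print(sync_metric(a, b), sync_metric(a, c))
```

Two coupled signals share trajectories in phase space even under a zero-lag phase shift, so their CRP is dense, while an unrelated signal yields a sparse CRP and a low score.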

  19. Tsunami damping by mangrove forest: a laboratory study using parameterized trees

    Directory of Open Access Journals (Sweden)

    A. Strusińska-Correia

    2013-02-01

    Full Text Available Tsunami attenuation by coastal vegetation was examined under laboratory conditions for mature mangroves Rhizophora sp. The novel tree parameterization concept developed here, accounting for both bio-mechanical and structural tree properties, allowed the complex tree structure to be substituted by a simplified tree model of identical hydraulic resistance. The most representative parameterized mangrove model was selected, based on hydraulic test results, from the tested models with different frontal areas and root densities. The selected parameterized tree models were arranged in forest models of different widths and tested systematically under varying incident tsunami conditions (solitary waves and tsunami bores). The damping performance of the forest models under these two flow regimes was compared in terms of wave height and force envelopes, wave transmission coefficient, and drag and inertia coefficients. Unlike previous studies, the results indicate a significant contribution of the foreshore topography to solitary wave energy reduction through wave breaking, in comparison to that attributed to the forest itself. A similar rate of tsunami transmission (ca. 20%) was achieved for both flow conditions (solitary waves and tsunami bores) and the widest forest (75 m in prototype) investigated. The drag coefficient CD attributed to the solitary waves tends to be constant (CD = 1.5) over the investigated range of the Reynolds number.
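Drag and inertia coefficients of the kind reported here typically enter a Morison-type force formulation; below is a sketch with hypothetical model-scale geometry and a synthetic velocity pulse, not the paper's measured values:

```python
import numpy as np

RHO_W = 1000.0  # water density (kg m-3)

def morison_force(u, dudt, cd, cm, area, volume):
    # Morison-type force on a parameterized tree: drag term + inertia term.
    return (0.5 * RHO_W * cd * area * u * np.abs(u)
            + RHO_W * cm * volume * dudt)

# Hypothetical model-scale values (assumed, not from the study).
area, volume = 0.02, 1e-3                 # frontal area (m2), volume (m3)
t = np.linspace(0, 2, 201)
u = 0.8 * np.exp(-((t - 1) / 0.3) ** 2)   # solitary-wave-like velocity pulse
dudt = np.gradient(u, t)

f = morison_force(u, dudt, cd=1.5, cm=2.0, area=area, volume=volume)
print(f.max())
```

With CD = 1.5 (the roughly constant value reported above) the drag term dominates near the velocity peak, while the inertia term matters on the accelerating front of the pulse.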

  20. Parameterization of solar flare dose

    International Nuclear Information System (INIS)

    Lamarche, A.H.; Poston, J.W.

    1996-01-01

    A critical aspect of missions to the moon or Mars will be the safety and health of the crew. Radiation in space is a hazard for astronauts, especially high-energy radiation following certain types of solar flares. A solar flare event can be very dangerous if astronauts are not adequately shielded because flares can deliver a very high dose in a short period of time. The goal of this research was to parameterize solar flare dose as a function of time to see if it was possible to predict solar flare occurrence, thus providing a warning time. This would allow astronauts to take corrective action and avoid receiving a dose greater than the recommended limit set by the National Council on Radiation Protection and Measurements (NCRP)

  1. The Topology Optimization of Three-dimensional Cooling Fins by the Internal Element Connectivity Parameterization Method

    International Nuclear Information System (INIS)

    Yoo, Sung Min; Kim, Yoon Young

    2007-01-01

    This work is concerned with the topology optimization of three-dimensional cooling fins or heat sinks. Motivated by the earlier success of the Internal Element Connectivity Parameterization (I-ECP) method in two-dimensional problems, the extension of I-ECP to three-dimensional problems is carried out. The main effort was to maintain the numerically trouble-free characteristics of I-ECP for full three-dimensional problems; a serious numerical problem appearing in thermal topology optimization is erroneous temperature undershooting. The effectiveness of the present implementation was checked through the design optimization of three-dimensional fins

  2. Model parameterization as method for data analysis in dendroecology

    Science.gov (United States)

    Tychkov, Ivan; Shishov, Vladimir; Popkova, Margarita

    2017-04-01

    There is no argument about the usefulness of process-based models in ecological studies; the only limitations are how well a model's algorithm is developed and how it is applied in research. Simulation of tree-ring growth from climate provides valuable information on the response of tree-ring growth to different environmental conditions, and also sheds light on species-specific features of the tree-ring growth process. Visual parameterization of the Vaganov-Shashkin (VS) model allows estimation of the non-linear response of tree-ring growth from daily climate data: daily temperature, estimated daylight and soil moisture. Previous use of the VS-Oscilloscope (a software tool for visual parameterization) has shown a good ability to recreate unique patterns of tree-ring growth for coniferous species in Siberian Russia, the USA, China, Mediterranean Spain and Tunisia. However, models are mostly used one-sidedly, to better understand tree growth processes, in contrast to statistical methods of analysis (e.g. generalized linear models, mixed models, structural equation models), which can also be used for reconstruction and forecasting. Usually, models are used either to check new hypotheses or for quantitative assessment of physiological tree growth data to reveal growth process mechanisms, while statistical methods are used for data-mining assessment and as a study tool in themselves. The high sensitivity of the model's VS-parameters reflects the ability of the model to simulate tree-ring growth and to evaluate the limiting climatic factors of growth. Precise parameterization with VS-Oscilloscope provides valuable information about the growth processes of trees and the conditions under which these processes occur (e.g. day of growing season onset, length of the season, minimum/maximum temperatures for tree-ring growth, formation of wide or narrow rings, etc.). The work was supported by the Russian Science Foundation (RSF # 14-14-00219)
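Growth-limiting responses in VS-type models are commonly piecewise-linear in each climate variable, with overall growth limited by the scarcest factor. A sketch with hypothetical parameter values (real values come from calibration against site chronologies):

```python
import numpy as np

def growth_response(x, x_min, x_opt1, x_opt2, x_max):
    # Trapezoidal partial growth rate in [0, 1]: zero below x_min and above
    # x_max, one on the optimal plateau [x_opt1, x_opt2], linear in between.
    return np.clip(np.minimum((x - x_min) / (x_opt1 - x_min),
                              (x_max - x) / (x_max - x_opt2)), 0.0, 1.0)

# Hypothetical parameter values for a conifer (assumed for illustration).
t_daily = np.array([2.0, 8.0, 15.0, 22.0, 28.0])    # temperature, degC
w_daily = np.array([0.05, 0.15, 0.25, 0.30, 0.10])  # soil moisture, v/v

g_t = growth_response(t_daily, 4.0, 14.0, 20.0, 30.0)
g_w = growth_response(w_daily, 0.02, 0.12, 0.28, 0.40)
g = np.minimum(g_t, g_w)  # growth limited by the scarcer factor
print(g)
```

Tuning the breakpoints (the "VS-parameters") against observed ring widths is what the visual parameterization in VS-Oscilloscope does interactively.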

  3. Systematic Parameterization of Lignin for the CHARMM Force Field

    Energy Technology Data Exchange (ETDEWEB)

    Vermaas, Joshua; Petridis, Loukas; Beckham, Gregg; Crowley, Michael

    2017-07-06

Plant cell walls have three primary components: cellulose, hemicellulose, and lignin, the latter of which is a recalcitrant, aromatic heteropolymer that provides structure to plants, water and nutrient transport through plant tissues, and a highly effective defense against pathogens. Overcoming the recalcitrance of lignin is key to effective biomass deconstruction, which would in turn enable the use of biomass as a feedstock for industrial processes. Our understanding of lignin structure in the plant cell wall is hampered by the limitations of the available lignin force fields, which currently account for only a single linkage between lignins and lack explicit parameterization for emerging lignin structures, both from natural variants and from engineered lignins. Since polymerization of lignin occurs via radical intermediates, multiple C-O and C-C linkages have been isolated, and the current force field represents only a small subset of the diverse lignin structures found in plants. In order to take into account the wide range of lignin polymerization chemistries, monomers and dimer combinations of C-, H-, G-, and S-lignins, as well as hydroxycinnamic acid linkages, were subjected to extensive quantum mechanical calculations to establish target data from which to build a complete molecular mechanics force field tuned specifically for diverse lignins. This was carried out in a GPU-accelerated global optimization process, whereby all molecules were parameterized simultaneously using the same internal parameter set. By parameterizing lignin specifically, we are able to more accurately represent the interactions and conformations of lignin monomers and dimers relative to a general force field. This new force field will enable computational researchers to study the effects of different linkages on the structure of lignin, as well as construct more accurate plant cell wall models based on observed statistical distributions of lignin that differ between
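The target-data fitting step the abstract describes can be illustrated with a toy example: recovering a harmonic bond force constant and equilibrium length from a quantum-mechanical bond-stretch scan. The function name and the single-term energy model are illustrative, not the actual CHARMM parameterization workflow:

```python
import numpy as np

def fit_bond_parameters(lengths, qm_energies):
    """Least-squares fit of a harmonic bond term E = 0.5*k*(r - r0)**2
    to a QM bond-stretch energy scan, a toy one-parameter analogue of
    fitting force-field terms to quantum target data."""
    # quadratic fit E ~ a*r^2 + b*r + c, then convert to (k, r0)
    a, b, c = np.polyfit(lengths, qm_energies, 2)
    r0 = -b / (2.0 * a)
    k = 2.0 * a
    return k, r0
```

In a real global optimization, many such terms (bonds, angles, dihedrals, charges) across all monomers and dimers would be fit simultaneously against the combined QM target set.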

  4. An Integrated Approach for Non-Recursive Formulation of Connection-Coefficients of Orthogonal Functions

    Directory of Open Access Journals (Sweden)

    Monika GARG

    2012-08-01

Full Text Available In this paper, an integrated approach is proposed for the non-recursive formulation of connection coefficients of different orthogonal functions in terms of a generic orthogonal function. These coefficients arise when the product of two orthogonal basis functions is to be expressed in terms of single basis functions. Two significant advantages are achieved: first, the non-recursive formulations avoid memory and stack overflows in computer implementations; second, the integrated approach allows digital hardware, once designed, to be reused for different functions. Computational savings achieved with the proposed non-recursive formulation vis-à-vis the recursive formulations reported in the literature so far are demonstrated using the MATLAB PROFILER.
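For a concrete instance of the product-to-single-basis expansion these connection coefficients serve, NumPy's Legendre module computes the expansion directly:

```python
import numpy as np
from numpy.polynomial import legendre as L

# Express the product of two Legendre basis functions back in the
# Legendre basis: legmul returns the expansion coefficients directly,
# i.e. the connection coefficients for this particular product.
p1 = [0.0, 1.0]            # coefficients of P1(x) = x
coeffs = L.legmul(p1, p1)  # P1*P1 = x^2 = (1/3) P0 + (2/3) P2
```

Here `coeffs` is `[1/3, 0, 2/3]`, since x^2 = (1/3)P0 + (2/3)P2; a hardware-oriented non-recursive formulation would tabulate such coefficients once for the chosen basis.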

  5. A review of recent research on improvement of physical parameterizations in the GLA GCM

    Science.gov (United States)

    Sud, Y. C.; Walker, G. K.

    1990-01-01

A systematic assessment of the effects of a series of improvements in the physical parameterizations of the Goddard Laboratory for Atmospheres (GLA) general circulation model (GCM) is summarized. The implementation of the Simple Biosphere Model (SiB) in the GCM is followed by a comparison of SiB-GCM simulations with the earlier slab soil hydrology GCM (SSH-GCM) simulations. In the Sahelian context, the biogeophysical component of desertification was analyzed for SiB-GCM simulations. Cumulus parameterization is found to be the primary determinant of the organization of the simulated tropical rainfall of the GLA GCM using the Arakawa-Schubert cumulus parameterization. A comparison of model simulations with station data revealed excessive shortwave radiation accompanied by excessive drying and heating of the land. The perpetual July simulations with and without interactive soil moisture show that 30- to 40-day oscillations may be a natural mode of the simulated earth-atmosphere system.

  6. THOR: A New Higher-Order Closure Assumed PDF Subgrid-Scale Parameterization; Evaluation and Application to Low Cloud Feedbacks

    Science.gov (United States)

    Firl, G. J.; Randall, D. A.

    2013-12-01

The so-called "assumed probability density function (PDF)" approach to subgrid-scale (SGS) parameterization has been shown to be a promising method for more accurately representing boundary layer cloudiness under a wide range of conditions. A new parameterization has been developed, named the Two-and-a-Half ORder closure (THOR), that combines this approach with a higher-order turbulence closure. THOR predicts the time evolution of the turbulence kinetic energy components, the variances of ice-liquid water potential temperature (θil) and total non-precipitating water mixing ratio (qt) and the covariance between the two, and the vertical fluxes of horizontal momentum, θil, and qt. Ten corresponding third-order moments, in addition to the skewnesses of θil and qt, are calculated using diagnostic functions assuming negligible time tendencies. The statistical moments are used to define a trivariate double-Gaussian PDF among vertical velocity, θil, and qt. The first three statistical moments of each variable are used to estimate the two Gaussian plume means, variances, and weights. Unlike previous similar models, plume variances are not assumed to be equal or zero. Instead, they are parameterized using the idea that the less dominant Gaussian plume (typically representing the updraft-containing portion of a grid cell) has greater variance than the dominant plume (typically representing the "environmental" or slowly subsiding portion of a grid cell). Correlations among the three variables are calculated using the appropriate covariance moments, and both plume correlations are assumed to be equal. The diagnosed PDF in each grid cell is used to calculate SGS condensation, SGS fluxes of cloud water species, and SGS buoyancy terms, and to inform other physical parameterizations about SGS variability. SGS condensation is extended from previous similar models to include condensation over both liquid and ice substrates, dependent on the grid cell temperature. Implementations have been
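The way a diagnosed double-Gaussian PDF yields SGS cloudiness can be sketched as below: cloud fraction is the total-water probability mass above saturation, summed over the two plumes. This is a simplified single-variable version of the trivariate PDF described above, with all inputs assumed given:

```python
from math import erf, sqrt

def cloud_fraction(weights, means, sigmas, q_sat):
    """SGS cloud fraction from a two-Gaussian PDF of total water:
    the saturated probability mass of each Gaussian plume, weighted
    by the plume weights (weights should sum to 1)."""
    frac = 0.0
    for w, mu, sig in zip(weights, means, sigmas):
        # P(q > q_sat) for one Gaussian plume with mean mu, std sig
        frac += w * 0.5 * (1.0 - erf((q_sat - mu) / (sqrt(2.0) * sig)))
    return frac
```

In a full closure the same diagnosed PDF would also supply SGS condensate amounts and buoyancy fluxes, not just the cloud fraction.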

  7. Radiative flux and forcing parameterization error in aerosol-free clear skies.

    Science.gov (United States)

    Pincus, Robert; Mlawer, Eli J; Oreopoulos, Lazaros; Ackerman, Andrew S; Baek, Sunghye; Brath, Manfred; Buehler, Stefan A; Cady-Pereira, Karen E; Cole, Jason N S; Dufresne, Jean-Louis; Kelley, Maxwell; Li, Jiangnan; Manners, James; Paynter, David J; Roehrig, Romain; Sekiguchi, Miho; Schwarzkopf, Daniel M

    2015-07-16

Key points: (1) Radiation parameterizations in GCMs are more accurate than their predecessors. (2) Errors in estimates of 4×CO2 forcing are large, especially for solar radiation. (3) Errors depend on atmospheric state, so the global mean error is unknown.

  8. The parameterization of microchannel-plate-based detection systems

    Science.gov (United States)

    Gershman, Daniel J.; Gliese, Ulrik; Dorelli, John C.; Avanov, Levon A.; Barrie, Alexander C.; Chornay, Dennis J.; MacDonald, Elizabeth A.; Holland, Matthew P.; Giles, Barbara L.; Pollock, Craig J.

    2016-10-01

    The most common instrument for low-energy plasmas consists of a top-hat electrostatic analyzer (ESA) geometry coupled with a microchannel-plate-based (MCP-based) detection system. While the electrostatic optics for such sensors are readily simulated and parameterized during the laboratory calibration process, the detection system is often less well characterized. Here we develop a comprehensive mathematical description of particle detection systems. As a function of instrument azimuthal angle, we parameterize (1) particle scattering within the ESA and at the surface of the MCP, (2) the probability distribution of MCP gain for an incident particle, (3) electron charge cloud spreading between the MCP and anode board, and (4) capacitive coupling between adjacent discrete anodes. Using the Dual Electron Spectrometers on the Fast Plasma Investigation on NASA's Magnetospheric Multiscale mission as an example, we demonstrate a method for extracting these fundamental detection system parameters from laboratory calibration. We further show that parameters that will evolve in flight, namely, MCP gain, can be determined through application of this model to specifically tailored in-flight calibration activities. This methodology provides a robust characterization of sensor suite performance throughout mission lifetime. The model developed in this work is not only applicable to existing sensors but also can be used as an analytical design tool for future particle instrumentation.
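As a hedged illustration of item (2), the MCP gain probability distribution, one common modeling choice is a Polya (gamma-shaped) pulse-height distribution; the function below integrates it numerically to estimate the fraction of pulses that clear a discrimination threshold. The distribution choice, parameter names, and values are assumptions for illustration, not the instrument team's actual model:

```python
import math

def mcp_efficiency(mean_gain, theta, threshold, n=20000):
    """Fraction of MCP pulses above a discrimination threshold,
    assuming a Polya (gamma) pulse-height distribution with shape
    parameter theta and mean mean_gain (an illustrative model)."""
    k = theta                       # gamma shape
    scale = mean_gain / theta       # gamma scale, so mean = k * scale
    def pdf(g):
        return g ** (k - 1) * math.exp(-g / scale) / (math.gamma(k) * scale ** k)
    # trapezoidal integral of the pdf from threshold to deep in the tail
    hi = mean_gain + 10.0 * mean_gain / math.sqrt(theta)
    xs = [threshold + (hi - threshold) * i / n for i in range(n + 1)]
    ys = [pdf(x) for x in xs]
    return sum((ys[i] + ys[i + 1]) * (xs[i + 1] - xs[i]) / 2.0
               for i in range(n))
```

Tracking how this efficiency drifts as `mean_gain` degrades in flight is the kind of quantity the in-flight calibration activities described above would constrain.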

  9. Does age matter? Controls on the spatial organization of age and life expectancy in hillslopes, and implications for transport parameterization using rSAS

    Science.gov (United States)

    Kim, M.; Harman, C. J.; Troch, P. A. A.

    2017-12-01

Hillslopes have been extensively explored as a natural fundamental unit for spatially integrated hydrologic models. Much of this attention has focused on their use in predicting the quantity of discharge, but hillslope-based models can potentially be used to predict the composition of discharge (in terms of age and chemistry) if they can be parameterized in terms of measurable physical properties. Here we present advances in the use of rank StorAge Selection (rSAS) functions to parameterize transport through hillslopes. These functions provide a mapping between the distribution of water ages in storage and in outfluxes in terms of a probability distribution over storage. It has previously been shown that rSAS functions are related to the relative partitioning and arrangement of flow pathways (and variabilities in that arrangement), while separating out the effect of changes in the overall rate of fluxes in and out. This suggests that rSAS functions should have a connection to the internal organization of flow paths in a hillslope. Using a combination of numerical modeling and theoretical analysis, we examined, first, the controls of physical properties on the internal spatial organization of age (time since entry), life expectancy (time to exit), and the emergent transit time distribution and rSAS functions; and second, the possible parameterization of the rSAS function using those physical properties. The numerical modeling results showed a clear dependence of the rSAS function forms on the physical properties, and relations between the internal organization and the rSAS functions. For different rates of exponential decline of saturated hydraulic conductivity with depth, the spatial organization of life expectancy varied dramatically and determined the rSAS function forms, while the organization of age showed fewer qualitative differences. Analytical solutions predicting this spatial organization and the resulting rSAS function were derived for simplified systems. 
These
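A minimal numerical illustration of the rSAS idea: with a uniform rSAS function (perfectly mixed storage) and steady flow, exit ages are geometrically/exponentially distributed with mean storage/flow. This toy simulation is an assumption-laden sketch, not the paper's hillslope model:

```python
import random

def simulate_transit_times(storage, flow, n_steps=50000, seed=1):
    """Discrete-time sketch: each step, outflow removes water sampled
    uniformly over age-ranked storage (a uniform rSAS function, i.e.
    well-mixed), so a parcel exits each step with probability Q/S and
    the mean transit time should approach storage/flow."""
    random.seed(seed)
    p_exit = flow / storage            # per-step exit probability
    ages = []
    for _ in range(n_steps):
        age = 1
        while random.random() > p_exit:
            age += 1                   # parcel survives another step
        ages.append(age)
    return sum(ages) / len(ages)       # sample mean transit time
```

Non-uniform rSAS functions (e.g. preferring young storage) would replace the uniform per-step exit probability with an age-rank-dependent one, which is exactly the structure the physical properties above would parameterize.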

  10. Inclusion of Solar Elevation Angle in Land Surface Albedo Parameterization Over Bare Soil Surface.

    Science.gov (United States)

    Zheng, Zhiyuan; Wei, Zhigang; Wen, Zhiping; Dong, Wenjie; Li, Zhenchao; Wen, Xiaohang; Zhu, Xian; Ji, Dong; Chen, Chen; Yan, Dongdong

    2017-12-01

Land surface albedo is a significant parameter for maintaining the surface energy balance. Parameterizing bare soil surface albedo is also important for developing land surface process models that accurately reflect the diurnal variation characteristics and mechanism of solar spectral radiation albedo on bare soil surfaces, and for understanding the relationships between climate factors and spectral radiation albedo. Using a data set of field observations, we conducted experiments to analyze the variation characteristics of land surface solar spectral radiation and the corresponding albedo over a typical Gobi bare soil underlying surface, and to investigate the relationships between the land surface solar spectral radiation albedo, solar elevation angle, and soil moisture. Based on simultaneous measurements of solar elevation angle and soil moisture, we propose a new two-factor parameterization scheme for spectral radiation albedo over bare soil underlying surfaces. The results of numerical simulation experiments show that the new parameterization scheme depicts the diurnal variation characteristics of bare soil surface albedo more accurately than previous schemes. Solar elevation angle is one of the most important factors for parameterizing bare soil surface albedo and must be considered in the parameterization scheme, especially in arid and semiarid areas with low soil moisture content. This study reveals the characteristics and mechanism of the diurnal variation of bare soil surface solar spectral radiation albedo and is helpful for developing land surface process models, weather models, and climate models.
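A minimal form such a two-factor scheme could take is sketched below; the functional shape and the coefficients a-d are hypothetical illustrations, not the fitted values from the Gobi observations:

```python
import math

def bare_soil_albedo(solar_elev_deg, soil_moisture,
                     a=0.25, b=0.10, c=3.0, d=0.35):
    """Illustrative two-factor scheme: albedo decays exponentially
    with the sine of solar elevation (strong morning/evening rise)
    and decreases linearly with volumetric soil moisture.
    Coefficients a-d are hypothetical and would be fit to data."""
    mu = math.sin(math.radians(solar_elev_deg))
    return (a + b * math.exp(-c * mu)) * (1.0 - d * soil_moisture)
```

A one-factor (moisture-only) scheme would miss the diurnal cycle entirely, which is the motivation for including solar elevation angle.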

  11. Parameterizing Subgrid-Scale Orographic Drag in the High-Resolution Rapid Refresh (HRRR) Atmospheric Model

    Science.gov (United States)

    Toy, M. D.; Olson, J.; Kenyon, J.; Smirnova, T. G.; Brown, J. M.

    2017-12-01

The accuracy of wind forecasts in numerical weather prediction (NWP) models is improved when the drag forces imparted on atmospheric flow by subgrid-scale orography are included. Without such parameterizations, only the terrain resolved by the model grid, along with the small-scale obstacles parameterized by the roughness lengths, can have an effect on the flow. This neglects the impacts of subgrid-scale terrain variations, which typically leads to wind speeds that are too strong. Using statistical information about the subgrid-scale orography, such as the mean and variance of the topographic height within a grid cell, the drag forces due to flow blocking, gravity wave drag, and turbulent form drag are estimated and distributed vertically throughout the grid cell column. We recently implemented the small-scale gravity wave drag parameterization of Steeneveld et al. (2008) and Tsiringakis et al. (2017) for stable planetary boundary layers, and the turbulent form drag parameterization of Beljaars et al. (2004), in the High-Resolution Rapid Refresh (HRRR) NWP model developed at the National Oceanic and Atmospheric Administration (NOAA). As a result, a high surface wind speed bias in the model has been reduced, and a small improvement in the maintenance of stable layers has also been found. We present the results of experiments with the subgrid-scale orographic drag parameterization for the regional HRRR model, as well as for a global model in development at NOAA, showing the direct and indirect impacts.

  12. The parameterized post-Newtonian limit of bimetric theories of gravity

    International Nuclear Information System (INIS)

    Clifton, Timothy; Banados, Maximo; Skordis, Constantinos

    2010-01-01

    We consider the post-Newtonian limit of a general class of bimetric theories of gravity, in which both metrics are dynamical. The established parameterized post-Newtonian approach is followed as closely as possible, although new potentials are found that do not exist within the standard framework. It is found that these theories can evade solar system tests of post-Newtonian gravity remarkably well. We show that perturbations about Minkowski space in these theories contain both massless and massive degrees of freedom, and that in general there are two different types of massive mode, each with a different mass parameter. If both of these masses are sufficiently large then the predictions of the most general class of theories we consider are indistinguishable from those of general relativity, up to post-Newtonian order in a weak-field, low-velocity expansion. In the limit that the massive modes become massless, we find that these general theories do not exhibit a van Dam-Veltman-Zakharov-like discontinuity in their γ parameter, although there are discontinuities in other post-Newtonian parameters as the massless limit is approached. This smooth behaviour in γ is due to the discontinuities from each of the two different massive modes cancelling each other out. Such cancellations cannot occur in special cases with only one massive mode, such as the Isham-Salam-Strathdee theory.

  13. A parameterization for the absorption of solar radiation by water vapor in the earth's atmosphere

    Science.gov (United States)

    Wang, W.-C.

    1976-01-01

    A parameterization for the absorption of solar radiation as a function of the amount of water vapor in the earth's atmosphere is obtained. Absorption computations are based on the Goody band model and the near-infrared absorption band data of Ludwig et al. A two-parameter Curtis-Godson approximation is used to treat the inhomogeneous atmosphere. Heating rates based on a frequently used one-parameter pressure-scaling approximation are also discussed and compared with the present parameterization.
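The pressure-scaling idea can be sketched as follows: scale each layer's water-vapor increment by a power of pressure, sum to an effective path, and apply a power-law band absorption. The exponent and coefficients here are illustrative stand-ins, not the paper's fitted values:

```python
def absorbed_fraction(layers, p_ref=1013.25, n=0.9, a=0.029, b=0.303):
    """One-parameter pressure-scaling sketch.  layers is a list of
    (dw, p) pairs: dw = water-vapor path increment (cm of precipitable
    water), p = layer pressure (hPa).  The effective path is
    w* = sum dw * (p/p_ref)**n, and the absorbed fraction of solar
    flux is taken as A = a * w***b (coefficients are illustrative)."""
    w_star = sum(dw * (p / p_ref) ** n for dw, p in layers)
    return a * w_star ** b
```

A two-parameter Curtis-Godson treatment would additionally carry a path-weighted mean pressure and temperature per band, which is what the paper compares this simpler scaling against.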

  14. Morphing methods to parameterize specimen-specific finite element model geometries.

    Science.gov (United States)

    Sigal, Ian A; Yang, Hongli; Roberts, Michael D; Downs, J Crawford

    2010-01-19

    Shape plays an important role in determining the biomechanical response of a structure. Specimen-specific finite element (FE) models have been developed to capture the details of the shape of biological structures and predict their biomechanics. Shape, however, can vary considerably across individuals or change due to aging or disease, and analysis of the sensitivity of specimen-specific models to these variations has proven challenging. An alternative to specimen-specific representation has been to develop generic models with simplified geometries whose shape is relatively easy to parameterize, and can therefore be readily used in sensitivity studies. Despite many successful applications, generic models are limited in that they cannot make predictions for individual specimens. We propose that it is possible to harness the detail available in specimen-specific models while leveraging the power of the parameterization techniques common in generic models. In this work we show that this can be accomplished by using morphing techniques to parameterize the geometry of specimen-specific FE models such that the model shape can be varied in a controlled and systematic way suitable for sensitivity analysis. We demonstrate three morphing techniques by using them on a model of the load-bearing tissues of the posterior pole of the eye. We show that using relatively straightforward procedures these morphing techniques can be combined, which allows the study of factor interactions. Finally, we illustrate that the techniques can be used in other systems by applying them to morph a femur. Morphing techniques provide an exciting new possibility for the analysis of the biomechanical role of shape, independently or in interaction with loading and material properties. Copyright 2009 Elsevier Ltd. All rights reserved.
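The simplest morphing technique of this kind is a linear blend between two node sets with a single shape parameter, which can then be swept in a sensitivity study. This is a bare-bones sketch, not the paper's actual morphing operators:

```python
import numpy as np

def morph(base_nodes, target_nodes, alpha):
    """Linear morph between a baseline specimen-specific mesh and a
    target geometry.  alpha parameterizes shape for sensitivity
    analysis: alpha=0 returns the baseline, alpha=1 the target, and
    values outside [0, 1] extrapolate the shape difference."""
    base = np.asarray(base_nodes, dtype=float)
    target = np.asarray(target_nodes, dtype=float)
    return base + alpha * (target - base)
```

Combining several such morphs, each with its own alpha, is what allows factor-interaction studies of the kind described above.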

  15. Multisite Evaluation of APEX for Water Quality: II. Regional Parameterization.

    Science.gov (United States)

    Nelson, Nathan O; Baffaut, Claire; Lory, John A; Anomaa Senaviratne, G M M M; Bhandari, Ammar B; Udawatta, Ranjith P; Sweeney, Daniel W; Helmers, Matt J; Van Liew, Mike W; Mallarino, Antonio P; Wortmann, Charles S

    2017-11-01

Phosphorus (P) Index assessment requires independent estimates of long-term average annual P loss from fields, representing multiple climatic scenarios, management practices, and landscape positions. Because currently available measured data are insufficient to evaluate P Index performance, calibrated and validated process-based models have been proposed as tools to generate the required data. The objectives of this research were to develop a regional parameterization for the Agricultural Policy Environmental eXtender (APEX) model to estimate edge-of-field runoff, sediment, and P losses in restricted-layer soils of Missouri and Kansas and to assess the performance of this parameterization using monitoring data from multiple sites in this region. Five site-specific calibrated models (SSCMs) from within the region were used to develop a regionally calibrated model (RCM), which was further calibrated and validated with measured data. Performance of the RCM was similar to that of the SSCMs for runoff simulation, with Nash-Sutcliffe efficiency (NSE) > 0.72 and acceptable absolute percent bias (|PBIAS|). The RCM performed poorly for sediment loss (|PBIAS| > 90%) and was particularly ineffective at simulating sediment loss from locations with small sediment loads. The RCM had acceptable performance for simulation of total P loss (NSE > 0.74, |PBIAS| < 30%) but underperformed the SSCMs. Total P-loss estimates should be used with caution due to the poor simulation of sediment loss. Although we did not attain our goal of a robust regional parameterization of APEX for estimating sediment and total P losses, runoff estimates with the RCM were acceptable for P Index evaluation. Copyright © by the American Society of Agronomy, Crop Science Society of America, and Soil Science Society of America, Inc.

  16. Response of Saturn's ionosphere to solar radiation: Testing parameterizations for thermal electron heating and secondary ionization processes

    Science.gov (United States)

    Moore, Luke; Galand, Marina; Mueller-Wodarg, Ingo; Mendillo, Michael

    2009-12-01

    We evaluate the effectiveness of two parameterizations in Saturn's ionosphere over a range of solar fluxes, seasons, and latitudes. First, the parameterization of the thermal electron heating rate, Q* e, introduced in [Moore, L., Galand, M., Mueller-Wodarg, I., Yelle, R.V., Mendillo, M., 2008. Plasma temperatures in Saturn's ionosphere. J. Geophys. Res. 113, A10306. doi:10.1029/2008JA013373.] for one specific set of conditions, is found to produce ion and electron temperatures that agree with self-consistent suprathermal electron calculations to within 2% on average under all conditions considered. Next, we develop a new parameterization of the secondary ion production rate at Saturn based on the calculations of [Galand, M., Moore, L., Mueller-Wodarg, I., Mendillo, M., 2009. Modeling the photoelectron secondary ionization process at Saturn. accepted. J. Geophys. Res.]; it is found to be accurate to within 4% on average. The demonstrated effectiveness of these two parameterizations over a wide range of input conditions makes them good candidates for inclusion in 3D Saturn thermosphere-ionosphere general circulation models (TIGCMs).

  17. A MODIFIED CUMULUS PARAMETERIZATION SCHEME AND ITS APPLICATION IN THE SIMULATIONS OF THE HEAVY PRECIPITATION CASES

    Institute of Scientific and Technical Information of China (English)

    PING Fan; TANG Xi-ba; YIN Lei

    2016-01-01

According to the characteristics of organized cumulus convective precipitation in China, a cumulus parameterization scheme suitable for describing organized convective precipitation in East Asia is presented and modified. The Kain-Fritsch scheme is chosen as the scheme to be modified, based on analyses and comparisons of precipitation in East Asia simulated by several commonly used mesoscale parameterization schemes. A key dynamic parameter to dynamically control the cumulus parameterization is then proposed to improve the Kain-Fritsch scheme. Numerical simulations of a typhoon case and a Mei-yu front rainfall case are carried out with the improved scheme, and the results show that the improved version performs better than the original in simulating the track and intensity of typhoons, as well as the distribution of Mei-yu front precipitation.

  18. Hierarchical organization of functional connectivity in the mouse brain: a complex network approach.

    Science.gov (United States)

    Bardella, Giampiero; Bifone, Angelo; Gabrielli, Andrea; Gozzi, Alessandro; Squartini, Tiziano

    2016-08-18

This paper contributes to the study of brain functional connectivity from the perspective of complex network theory. More specifically, we apply graph theoretical analyses to provide evidence of the modular structure of the mouse brain and to shed light on its hierarchical organization. We propose a novel percolation analysis and apply our approach to a resting-state functional MRI data set from 41 mice. This approach reveals a robust hierarchical structure of modules persistent across different subjects. Importantly, we test this approach against a statistical benchmark (or null model) which constrains only the distributions of empirical correlations. Our results unambiguously show that the hierarchical character of the mouse brain modular structure is not trivially encoded in this lower-order constraint. Finally, we investigate the modular structure of the mouse brain by computing the Minimal Spanning Forest, a technique that identifies subnetworks characterized by the strongest internal correlations. This approach represents a faster alternative to other community detection methods and provides a means to rank modules on the basis of the strength of their internal edges.
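The spanning-forest construction can be sketched with Kruskal's algorithm run on correlation weights in descending order, so the retained edges are the strongest ones that do not close a cycle. This is a self-contained toy version of the technique named above, not the paper's implementation:

```python
def max_spanning_tree(corr):
    """Kruskal's algorithm with edge weight = correlation, keeping the
    strongest edges that do not form a cycle (the maximum-weight
    analogue used to rank modules by internal edge strength)."""
    n = len(corr)
    parent = list(range(n))
    def find(i):
        # union-find with path compression
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i
    edges = sorted(((corr[i][j], i, j) for i in range(n)
                    for j in range(i + 1, n)), reverse=True)
    tree = []
    for w, i, j in edges:
        ri, rj = find(i), find(j)
        if ri != rj:            # edge joins two components: keep it
            parent[ri] = rj
            tree.append((i, j, w))
    return tree
```

Run over a whole-brain correlation matrix, the order in which edges enter the tree provides the module ranking by internal correlation strength.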

  19. Mesoscale model parameterizations for radiation and turbulent fluxes at the lower boundary

    International Nuclear Information System (INIS)

    Somieski, F.

    1988-11-01

A radiation parameterization scheme for use in mesoscale models with orography and clouds has been developed. Broadband parameterizations are presented for the solar and the terrestrial spectral ranges. They account for clear, turbid or cloudy atmospheres. The scheme is one-dimensional in the atmosphere, but the effects of mountains (inclination, shading, elevated horizon) are taken into account at the surface. In the terrestrial band, grey and black clouds are considered. Furthermore, the calculation of turbulent fluxes of sensible and latent heat and momentum at an inclined lower model boundary is described. Surface-layer similarity and the surface energy budget are used to evaluate the ground surface temperature. The total scheme is part of the mesoscale model MESOSCOP. (orig.)

  20. Improving microphysics in a convective parameterization: possibilities and limitations

    Science.gov (United States)

    Labbouz, Laurent; Heikenfeld, Max; Stier, Philip; Morrison, Hugh; Milbrandt, Jason; Protat, Alain; Kipling, Zak

    2017-04-01

    The convective cloud field model (CCFM) is a convective parameterization implemented in the climate model ECHAM6.1-HAM2.2. It represents a population of clouds within each ECHAM-HAM model column, simulating up to 10 different convective cloud types with individual radius, vertical velocities and microphysical properties. Comparisons between CCFM and radar data at Darwin, Australia, show that in order to reproduce both the convective cloud top height distribution and the vertical velocity profile, the effect of aerodynamic drag on the rising parcel has to be considered, along with a reduced entrainment parameter. A new double-moment microphysics (the Predicted Particle Properties scheme, P3) has been implemented in the latest version of CCFM and is compared to the standard single-moment microphysics and the radar retrievals at Darwin. The microphysical process rates (autoconversion, accretion, deposition, freezing, …) and their response to changes in CDNC are investigated and compared to high resolution CRM WRF simulations over the Amazon region. The results shed light on the possibilities and limitations of microphysics improvements in the framework of CCFM and in convective parameterizations in general.

  1. The relationship between a deformation-based eddy parameterization and the LANS-α turbulence model

    Science.gov (United States)

    Bachman, Scott D.; Anstey, James A.; Zanna, Laure

    2018-06-01

    A recent class of ocean eddy parameterizations proposed by Porta Mana and Zanna (2014) and Anstey and Zanna (2017) modeled the large-scale flow as a non-Newtonian fluid whose subgridscale eddy stress is a nonlinear function of the deformation. This idea, while largely new to ocean modeling, has a history in turbulence modeling dating at least back to Rivlin (1957). The new class of parameterizations results in equations that resemble the Lagrangian-averaged Navier-Stokes-α model (LANS-α, e.g., Holm et al., 1998a). In this note we employ basic tensor mathematics to highlight the similarities between these turbulence models using component-free notation. We extend the Anstey and Zanna (2017) parameterization, which was originally presented in 2D, to 3D, and derive variants of this closure that arise when the full non-Newtonian stress tensor is used. Despite the mathematical similarities between the non-Newtonian and LANS-α models which might provide insight into numerical implementation, the input and dissipation of kinetic energy between these two turbulent models differ.
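A schematic of the deformation-dependent stress idea: compute the 2D strain-rate tensor by finite differences and return a stress that is quadratic in the deformation magnitude. The coefficient and the exact nonlinear form are illustrative, not the closures derived in the papers above:

```python
import numpy as np

def nonlinear_eddy_stress(u, v, dx, c=0.1):
    """Sketch of a deformation-based closure on a 2D grid (rows = y,
    cols = x): build the strain-rate tensor D from finite differences
    and return a stress ~ c * |D| * D, i.e. a non-Newtonian stress
    that is nonlinear in the deformation.  c is illustrative."""
    u_y, u_x = np.gradient(u, dx)      # gradients along axis 0 (y), axis 1 (x)
    v_y, v_x = np.gradient(v, dx)
    d11 = u_x                          # strain-rate tensor components
    d12 = 0.5 * (u_y + v_x)
    d22 = v_y
    mag = np.sqrt(d11**2 + 2.0 * d12**2 + d22**2)   # |D|
    return -c * mag * d11, -c * mag * d12, -c * mag * d22
```

A Newtonian (eddy-viscosity) closure would instead be linear in D; the quadratic dependence is what makes the stress "non-Newtonian" in the sense discussed above.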

  2. Climate Simulations from Super-parameterized and Conventional General Circulation Models with a Third-order Turbulence Closure

    Science.gov (United States)

    Xu, Kuan-Man; Cheng, Anning

    2014-05-01

A high-resolution cloud-resolving model (CRM) embedded in a general circulation model (GCM) is an attractive alternative for climate modeling because it replaces all traditional cloud parameterizations and explicitly simulates cloud physical processes in each grid column of the GCM. Such an approach is called the "Multiscale Modeling Framework" (MMF). The MMF still needs to parameterize the subgrid-scale (SGS) processes associated with clouds and large turbulent eddies, because circulations associated with the planetary boundary layer (PBL) and in-cloud turbulence are unresolved by CRMs with horizontal grid sizes on the order of a few kilometers. A third-order turbulence closure (IPHOC) has been implemented in the CRM component of the super-parameterized Community Atmosphere Model (SPCAM). IPHOC is used to predict (or diagnose) fractional cloudiness and the variability of temperature and water vapor at scales that are not resolved on the CRM's grid. This model has produced promising results, especially for low-level cloud climatology, seasonal variations and diurnal variations (Cheng and Xu 2011, 2013a, b; Xu and Cheng 2013a, b). Because of the enormous computational cost of SPCAM-IPHOC, which is 400 times that of a conventional CAM, we decided to bypass the CRM and implement IPHOC directly in CAM version 5 (CAM5). IPHOC replaces the PBL/stratocumulus, shallow convection, and cloud macrophysics parameterizations in CAM5. Since there are large discrepancies in the spatial and temporal scales between the CRM and CAM5, IPHOC as used in CAM5 has to be modified from that used in SPCAM. In particular, we diagnose all second- and third-order moments except for the fluxes. These prognostic and diagnostic moments are used to select a double-Gaussian probability density function to describe the SGS variability. We also incorporate a diagnostic PBL height parameterization to represent the strong inversion above the PBL. 
The goal of this study is to compare the simulation of the climatology from these three

  3. Heuristic Optimization Approach to Selecting a Transport Connection in City Public Transport

    Directory of Open Access Journals (Sweden)

    Kul’ka Jozef

    2017-02-01

Full Text Available The article presents a heuristic optimization approach to selecting a suitable transport connection within a city public transport network. The methodology was applied to part of the public transport system in Košice, the second largest city in the Slovak Republic, whose public transport network forms a complex transport system consisting of three different transport modes: bus, tram and trolley-bus. The solution focused on examining the individual transport services and their interconnection at relevant interchange points.
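One way to formalize the interchange-point search is a shortest-path computation over a multimodal graph in which changing mode costs a fixed transfer penalty. The sketch below uses Dijkstra's algorithm; the edge format and penalty value are assumptions for illustration, not the article's heuristic criteria:

```python
import heapq
import itertools

def best_connection(edges, start, goal, transfer_penalty=5.0):
    """Dijkstra search over a multimodal network.  edges are
    (from_stop, to_stop, mode, minutes) tuples; states are
    (stop, mode), and switching mode (e.g. bus -> tram) adds a
    fixed transfer penalty in minutes."""
    graph = {}
    for a, b, mode, t in edges:
        graph.setdefault(a, []).append((b, mode, t))
    tie = itertools.count()                  # heap tie-breaker
    pq = [(0.0, next(tie), start, None)]
    seen = {}
    while pq:
        cost, _, node, mode = heapq.heappop(pq)
        if node == goal:
            return cost                      # first pop of goal is optimal
        key = (node, mode)
        if key in seen and seen[key] <= cost:
            continue
        seen[key] = cost
        for nxt, m, t in graph.get(node, []):
            extra = transfer_penalty if mode not in (None, m) else 0.0
            heapq.heappush(pq, (cost + t + extra, next(tie), nxt, m))
    return None                              # goal unreachable
```

Varying the transfer penalty is one way to trade fewer interchanges against total travel time, which mirrors the interchange-point analysis described above.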

  4. A parameterization of cloud droplet nucleation

    International Nuclear Information System (INIS)

    Ghan, S.J.; Chuang, C.; Penner, J.E.

    1993-01-01

    Droplet nucleation is a fundamental cloud process. The number of aerosols activated to form cloud droplets influences not only the number of aerosols scavenged by clouds but also the size of the cloud droplets. Cloud droplet size influences the cloud albedo and the conversion of cloud water to precipitation. Global aerosol models are presently being developed with the intention of coupling with global atmospheric circulation models to evaluate the influence of aerosols and aerosol-cloud interactions on climate. If these and other coupled models are to address issues of aerosol-cloud interactions, the droplet nucleation process must be adequately represented. Here we introduce a droplet nucleation parameterization that offers certain advantages over the popular Twomey (1959) parameterization.
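
For context, the Twomey (1959) parameterization referenced above expresses the number of activated droplets as a power law in the peak supersaturation, N = C·s^k. A minimal sketch; the coefficient values below are illustrative placeholders, not values from the record:

```python
def twomey_activated(s_max, C=100.0, k=0.7):
    """Twomey (1959): activated CCN number concentration as N = C * s_max**k,
    where s_max is the peak supersaturation (percent) and C (cm^-3), k are
    empirical, air-mass-dependent coefficients (illustrative values here)."""
    return C * s_max ** k
```

Higher peak supersaturation activates more nuclei; the parameterization introduced in this record aims to improve on this simple power-law form.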

  5. Advances in Domain Connectivity for Overset Grids Using the X-Rays Approach

    Science.gov (United States)

    Chan, William M.; Kim, Noah; Pandya, Shishir A.

    2012-01-01

    Advances in automation and robustness of the X-rays approach to domain connectivity for overset grids are presented. Given the surface definition for each component that makes up a complex configuration, the determination of hole points with appropriate hole boundaries is automatically and efficiently performed. Improvements made to the original X-rays approach for identifying the minimum hole include an automated closure scheme for hole-cutters with open boundaries, automatic determination of grid points to be considered for blanking by each hole-cutter, and an adaptive X-ray map to economically handle components in close proximity. Furthermore, an automated spatially varying offset of the hole boundary from the minimum hole is achieved using a dual wall-distance function and an orphan point removal iteration process. Results using the new scheme are presented for a number of static and relative motion test cases on a variety of aerospace applications.

  6. New grid-planning and certification approaches for the large-scale offshore-wind farm grid-connection systems

    Energy Technology Data Exchange (ETDEWEB)

    Heising, C.; Bartelt, R. [Avasition GmbH, Dortmund (Germany); Zadeh, M. Koochack; Lebioda, T.J.; Jung, J. [TenneT Offshore GmbH, Bayreuth (Germany)

    2012-07-01

    Stable operation of offshore wind farms (OWFs) and a stable grid connection under stationary and dynamic conditions are essential to achieve a stable public power supply, and adequate grid-planning and certification approaches are a major advantage in reaching this aim. Within this paper, the fundamental characteristics of offshore wind farms and their grid-connection systems are given. The main goal of this research project is to study the stability of the offshore grid, especially in terms of subharmonic stability, for the likely future extension stage of offshore grids, i.e., the parallel connection of two or more HVDC links, and for certain operating scenarios, e.g., an overload scenario. The current requirements according to the grid code are not the focus of this research project; the goal is to study and define potential additional grid-code requirements, simulations, tests, and grid-planning methods for the future. (orig.)

  7. Parameterization of Rocket Dust Storms on Mars in the LMD Martian GCM: Modeling Details and Validation

    Science.gov (United States)

    Wang, Chao; Forget, François; Bertrand, Tanguy; Spiga, Aymeric; Millour, Ehouarn; Navarro, Thomas

    2018-04-01

    The origin of the detached dust layers observed by the Mars Climate Sounder aboard the Mars Reconnaissance Orbiter is still debated. Spiga et al. (2013, https://doi.org/10.1002/jgre.20046) revealed that deep mesoscale convective "rocket dust storms" are likely to play an important role in forming these dust layers. To investigate how the detached dust layers are generated by this mesoscale phenomenon and subsequently evolve at larger scales, a parameterization of rocket dust storms to represent the mesoscale dust convection is designed and included into the Laboratoire de Météorologie Dynamique (LMD) Martian Global Climate Model (GCM). The new parameterization allows dust particles in the GCM to be transported to higher altitudes than in traditional GCMs. Combined with the horizontal transport by large-scale winds, the dust particles spread out and form detached dust layers. During the Martian dusty seasons, the LMD GCM with the new parameterization is able to form detached dust layers. The formation, evolution, and decay of the simulated dust layers are largely in agreement with the Mars Climate Sounder observations. This suggests that mesoscale rocket dust storms are among the key factors to explain the observed detached dust layers on Mars. However, the detached dust layers remain absent in the GCM during the clear seasons, even with the new parameterization. This implies that other relevant atmospheric processes, operating when no dust storms are occurring, are needed to explain the Martian detached dust layers. More observations of local dust storms could improve the ad hoc aspects of this parameterization, such as the trigger and timing of dust injection.

  8. A clinically parameterized mathematical model of Shigella immunity to inform vaccine design.

    Directory of Open Access Journals (Sweden)

    Courtney L Davis

    Full Text Available We refine and clinically parameterize a mathematical model of the humoral immune response against Shigella, a diarrheal bacterium that infects 80-165 million people and kills an estimated 600,000 people worldwide each year. Using Latin hypercube sampling and Monte Carlo simulations for parameter estimation, we fit our model to human immune data from two Shigella EcSf2a-2 vaccine trials and a rechallenge study in which antibody and B-cell responses against Shigella's lipopolysaccharide (LPS) and O-membrane proteins (OMP) were recorded. The clinically grounded model is used to mathematically investigate which key immune mechanisms and bacterial targets confer immunity against Shigella and to predict which humoral immune components should be elicited to create a protective vaccine against Shigella. The model offers insight into why the EcSf2a-2 vaccine had low efficacy and demonstrates that at a group level a humoral immune response induced by EcSf2a-2 vaccine or wild-type challenge against Shigella's LPS or OMP does not appear sufficient for protection. That is, the model predicts an uncontrolled infection of gut epithelial cells that is present across all best-fit model parameterizations when fit to EcSf2a-2 vaccine or wild-type challenge data. Using sensitivity analysis, we explore which model parameter values must be altered to prevent the destructive epithelial invasion by Shigella bacteria and identify four key parameter groups as potential vaccine targets or immune correlates: 1) the rate that Shigella migrates into the lamina propria or epithelium, 2) the rate that memory B cells (BM) differentiate into antibody-secreting cells (ASC), 3) the rate at which antibodies are produced by activated ASC, and 4) the Shigella-specific BM carrying capacity. This paper underscores the need for a multifaceted approach in ongoing efforts to design an effective Shigella vaccine.

  9. A clinically parameterized mathematical model of Shigella immunity to inform vaccine design.

    Science.gov (United States)

    Davis, Courtney L; Wahid, Rezwanul; Toapanta, Franklin R; Simon, Jakub K; Sztein, Marcelo B

    2018-01-01

    We refine and clinically parameterize a mathematical model of the humoral immune response against Shigella, a diarrheal bacterium that infects 80-165 million people and kills an estimated 600,000 people worldwide each year. Using Latin hypercube sampling and Monte Carlo simulations for parameter estimation, we fit our model to human immune data from two Shigella EcSf2a-2 vaccine trials and a rechallenge study in which antibody and B-cell responses against Shigella's lipopolysaccharide (LPS) and O-membrane proteins (OMP) were recorded. The clinically grounded model is used to mathematically investigate which key immune mechanisms and bacterial targets confer immunity against Shigella and to predict which humoral immune components should be elicited to create a protective vaccine against Shigella. The model offers insight into why the EcSf2a-2 vaccine had low efficacy and demonstrates that at a group level a humoral immune response induced by EcSf2a-2 vaccine or wild-type challenge against Shigella's LPS or OMP does not appear sufficient for protection. That is, the model predicts an uncontrolled infection of gut epithelial cells that is present across all best-fit model parameterizations when fit to EcSf2a-2 vaccine or wild-type challenge data. Using sensitivity analysis, we explore which model parameter values must be altered to prevent the destructive epithelial invasion by Shigella bacteria and identify four key parameter groups as potential vaccine targets or immune correlates: 1) the rate that Shigella migrates into the lamina propria or epithelium, 2) the rate that memory B cells (BM) differentiate into antibody-secreting cells (ASC), 3) the rate at which antibodies are produced by activated ASC, and 4) the Shigella-specific BM carrying capacity. This paper underscores the need for a multifaceted approach in ongoing efforts to design an effective Shigella vaccine.
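
The Latin hypercube sampling step used here for parameter estimation can be sketched generically: each parameter's range is split into equal strata and exactly one point is drawn per stratum, giving better coverage than plain Monte Carlo. This is a generic numpy implementation under assumed uniform parameter bounds, not the authors' code.

```python
import numpy as np

def latin_hypercube(n_samples, bounds, rng=None):
    """Draw a Latin hypercube sample.

    bounds: sequence of (low, high) pairs, one per parameter.
    Returns an (n_samples, n_params) array in which every parameter's
    range is covered by exactly one sample per stratum.
    """
    rng = np.random.default_rng(rng)
    bounds = np.asarray(bounds, dtype=float)          # shape (n_params, 2)
    n_params = bounds.shape[0]
    # One uniform draw inside each of n_samples equal strata, per parameter.
    u = (rng.random((n_samples, n_params)) + np.arange(n_samples)[:, None]) / n_samples
    for j in range(n_params):                          # decouple the columns
        rng.shuffle(u[:, j])
    return bounds[:, 0] + u * (bounds[:, 1] - bounds[:, 0])
```

Each sampled parameter vector would then be scored against the immune data (e.g., by a Monte Carlo likelihood) to locate best-fit parameterizations.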

  10. Sensitivity of Glacier Mass Balance Estimates to the Selection of WRF Cloud Microphysics Parameterization in the Indus River Watershed

    Science.gov (United States)

    Johnson, E. S.; Rupper, S.; Steenburgh, W. J.; Strong, C.; Kochanski, A.

    2017-12-01

    Climate model outputs are often used as inputs to glacier energy and mass balance models, which are essential glaciological tools for testing glacier sensitivity, providing mass balance estimates in regions with little glaciological data, and providing a means to model future changes. Climate model outputs, however, are sensitive to the choice of physical parameterizations, such as those for cloud microphysics, land-surface schemes, surface layer options, etc. Furthermore, glacier mass balance (MB) estimates that use these climate model outputs as inputs are likely sensitive to the specific parameterization schemes, but this sensitivity has not been carefully assessed. Here we evaluate the sensitivity of glacier MB estimates across the Indus Basin to the selection of cloud microphysics parameterizations in the Weather Research and Forecasting Model (WRF). Cloud microphysics parameterizations differ in how they specify the size distributions of hydrometeors, the rate of graupel and snow production, their fall speed assumptions, the rates at which they convert from one hydrometeor type to the other, etc. While glacier MB estimates are likely sensitive to other parameterizations in WRF, our preliminary results suggest that glacier MB is highly sensitive to the timing, frequency, and amount of snowfall, which is influenced by the cloud microphysics parameterization. To this end, the Indus Basin is an ideal study site, as it has both westerly (winter) and monsoonal (summer) precipitation influences, is a data-sparse region (so models are critical), and still has lingering questions as to glacier importance for local and regional resources. WRF is run at a 4 km grid scale using two commonly used parameterizations: the Thompson scheme and the Goddard scheme. On average, these parameterizations result in minimal differences in annual precipitation. However, localized regions exhibit differences in precipitation of up to 3 m w.e. a⁻¹.
The different schemes also impact the

  11. Application of a planetary wave breaking parameterization to stratospheric circulation statistics

    Science.gov (United States)

    Randel, William J.; Garcia, Rolando R.

    1994-01-01

    The planetary wave parameterization scheme developed recently by Garcia is applied to stratospheric circulation statistics derived from 12 years of National Meteorological Center operational stratospheric analyses. From the data a planetary wave breaking criterion (based on the ratio of the eddy to zonal-mean meridional potential vorticity (PV) gradients), a wave damping rate, and a meridional diffusion coefficient are calculated. The equatorward flank of the polar night jet during winter is identified as a wave breaking region from the observed PV gradients; the region moves poleward with season, covering all high latitudes in spring. Derived damping rates maximize in the subtropical upper stratosphere (the 'surf zone'), with damping time scales of 3-4 days. Maximum diffusion coefficients follow the spatial patterns of the wave breaking criterion, with magnitudes comparable to prior published estimates. Overall, the observed results agree well with the parameterized calculations of Garcia.

  12. Parameterization of MARVELS Spectra Using Deep Learning

    Science.gov (United States)

    Gilda, Sankalp; Ge, Jian; MARVELS

    2018-01-01

    Like many large-scale surveys, the Multi-Object APO Radial Velocity Exoplanet Large-area Survey (MARVELS) was designed to operate at a moderate spectral resolution (~12,000) for efficiency in observing large samples, which makes stellar parameterization difficult due to the high degree of blending of spectral features. Two extant solutions to this issue are to utilize spectral synthesis and to utilize spectral indices [Ghezzi et al. 2014]. While the former is a powerful and tested technique, it can often yield strongly coupled atmospheric parameters, and often requires high spectral resolution (Valenti & Piskunov 1996). The latter, though a promising technique utilizing measurements of equivalent widths of spectral indices, has only been employed for FGK dwarfs and sub-giants and not red-giant-branch stars, which constitute ~30% of MARVELS targets. In this work, we tackle this problem using a convolutional neural network (CNN). In particular, we train a one-dimensional CNN on appropriately processed PHOENIX synthetic spectra using supervised training to automatically distinguish the features relevant for the determination of each of the three atmospheric parameters (T_eff, log(g), [Fe/H]), and use the knowledge thus gained by the network to parameterize 849 MARVELS giants. When tested on the synthetic spectra themselves, our estimates of the parameters were consistent to within 11 K, 0.02 dex, and 0.02 dex (in terms of mean absolute errors), respectively. For MARVELS dwarfs, the accuracies are 80 K, 0.16 dex, and 0.10 dex, respectively.
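
The core operation such a network applies to a spectrum can be sketched in plain numpy: a one-dimensional convolution, a ReLU nonlinearity, and max pooling. The kernels below are placeholders; in the actual work they would be learned from PHOENIX synthetic spectra during supervised training.

```python
import numpy as np

def conv1d_relu_pool(spectrum, kernels, pool=4):
    """One CNN-style feature stage applied to a 1-D spectrum:
    valid-mode cross-correlation with each kernel, ReLU, then max pooling.
    Returns an array of shape (n_kernels, n_pooled_features)."""
    feats = []
    for k in kernels:
        c = np.convolve(spectrum, k[::-1], mode="valid")   # cross-correlation
        c = np.maximum(c, 0.0)                             # ReLU nonlinearity
        n = (len(c) // pool) * pool                        # trim to pool size
        feats.append(c[:n].reshape(-1, pool).max(axis=1))  # max pooling
    return np.stack(feats)
```

Stacking several such stages and ending with a fully connected regression head is what lets a trained CNN map blended spectral features to T_eff, log(g), and [Fe/H].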

  13. Examining Chaotic Convection with Super-Parameterization Ensembles

    Science.gov (United States)

    Jones, Todd R.

    This study investigates a variety of features present in a new configuration of the Community Atmosphere Model (CAM) variant, SP-CAM 2.0. The new configuration (multiple-parameterization-CAM, MP-CAM) changes the manner in which the super-parameterization (SP) concept represents physical tendency feedbacks to the large scale by using the mean of 10 independent two-dimensional cloud-permitting model (CPM) curtains in each global model column instead of the conventional single CPM curtain. The climates of the SP and MP configurations are examined to investigate any significant differences caused by the application of convective physical tendencies that are more deterministic in nature, paying particular attention to extreme precipitation events and large-scale weather systems such as the Madden-Julian Oscillation (MJO). A number of small but significant changes in the mean-state climate are uncovered, and it is found that the new formulation degrades MJO performance. Despite these deficiencies, the ensemble of possible realizations of convective states in the MP configuration allows for analysis of uncertainty in the small-scale solution, permitting examination of the weather regimes and physical mechanisms associated with strong, chaotic convection. Methods of quantifying precipitation predictability are explored, and use of the most reliable of these leads to the conclusion that poor precipitation predictability is most directly related to the proximity of the global climate model column state to atmospheric critical points. Secondarily, the predictability is tied to the availability of potential convective energy, the presence of mesoscale convective organization on the CPM grid, and the directive power of the large scale.

  14. An integrative approach to the design methodology for 3-phase power conditioners in Photovoltaic Grid-Connected systems

    International Nuclear Information System (INIS)

    Rey-Boué, Alexis B.; García-Valverde, Rafael; Ruz-Vila, Francisco de A.; Torrelo-Ponce, José M.

    2012-01-01

    Highlights: ► A design methodology for Photovoltaic grid-connected systems is presented. ► Models of the Photovoltaic Generator and the 3-phase Inverter are described. ► The power factor and the power quality are regulated with vector control. ► Simulation and experimental results validate the design methodology. ► The proposed methodology can be extended to any Renewable or Industrial System. - Abstract: A novel methodology is presented in this paper, for the design of the Power and Control Subsystems of a 3-phase Photovoltaic Grid-Connected system in an easy and comprehensive way, as an integrative approach. At the DC side of the Power Subsystem, the Photovoltaic Generator modeling is revised and a simple model is proposed, whereas at the AC side, a vector analysis is done to deal with the instantaneous 3-phase variables of the grid-connected Voltage Source Inverter. A d–q control approach is established in the Control Subsystem, along with its specific tuned parameters, as a vector control alternative which will allow the decoupled control of the instantaneous active and reactive powers. A particular Case of Study is presented to illustrate the behavior of the design methodology regarding the fulfillment of the Photovoltaic plant specifications. Some simulations are run to study the performance of the Photovoltaic Generator together with the exerted d–q control to the grid-connected 3-phase inverter, and some experimental results, obtained from a built flexible platform, are also shown. The simulations and the experimental results validate the overall performance of the 3-phase Photovoltaic Grid-Connected system due to the attained unitary power factor operation together with good power quality. The final validation of the proposed design methodology is also achieved.
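
The d-q vector control described above rests on the Park transform, which maps balanced three-phase quantities to constants in a frame rotating at the grid angle, enabling decoupled active/reactive power control. A minimal, amplitude-invariant sketch; the PLL that supplies theta in a real converter is omitted:

```python
import numpy as np

def abc_to_dq(ia, ib, ic, theta):
    """Amplitude-invariant Park transform: project instantaneous 3-phase
    currents onto a rotating d-q frame aligned by the grid angle theta.
    Balanced sinusoids map to constant d, q values."""
    two_thirds = 2.0 / 3.0
    d = two_thirds * (ia * np.cos(theta)
                      + ib * np.cos(theta - 2 * np.pi / 3)
                      + ic * np.cos(theta + 2 * np.pi / 3))
    q = -two_thirds * (ia * np.sin(theta)
                       + ib * np.sin(theta - 2 * np.pi / 3)
                       + ic * np.sin(theta + 2 * np.pi / 3))
    return d, q
```

With the frame aligned to the grid voltage, the d-axis current component controls active power and the q-axis component reactive power, which is the basis of the tuned d-q controllers in the paper.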

  15. Direct spondylolisthesis identification and measurement in MR/CT using detectors trained by articulated parameterized spine model

    Science.gov (United States)

    Cai, Yunliang; Leung, Stephanie; Warrington, James; Pandey, Sachin; Shmuilovich, Olga; Li, Shuo

    2017-02-01

    The identification of spondylolysis and spondylolisthesis is important in spinal diagnosis, rehabilitation, and surgery planning. Accurate and automatic detection of the spinal portion affected by spondylolisthesis will significantly reduce the manual work of physicians and provide a more robust evaluation of the spine condition. Most existing automatic identification methods adopt an indirect approach that uses vertebra locations to measure spondylolisthesis. However, these methods rely heavily on automatic vertebra detection, which often suffers from poor spatial accuracy and a lack of validated pathological training samples. In this study, we present a novel spondylolisthesis detection method that can directly locate the irregular spine portion and output the corresponding grading. The detection is done by a set of learning-based detectors that are discriminatively trained on synthesized spondylolisthesis image samples. To provide sufficient pathological training samples, we used a parameterized spine model to synthesize different types of spondylolysis images from real MR/CT scans. The parameterized model can automatically locate the vertebrae in spine images and estimate their pose orientations, and can inversely alter the vertebrae locations and poses by changing the corresponding parameters. Various training samples can then be generated from only a few spine MR/CT images. The preliminary results suggest great potential for fast and efficient spondylolisthesis identification and measurement in both MR and CT spine images.

  16. Approaches in highly parameterized inversion - PEST++, a Parameter ESTimation code optimized for large environmental models

    Science.gov (United States)

    Welter, David E.; Doherty, John E.; Hunt, Randall J.; Muffels, Christopher T.; Tonkin, Matthew J.; Schreuder, Willem A.

    2012-01-01

    An object-oriented parameter estimation code was developed to incorporate benefits of object-oriented programming techniques for solving large parameter estimation modeling problems. The code is written in C++ and is a formulation and expansion of the algorithms included in PEST, a widely used parameter estimation code written in Fortran. The new code is called PEST++ and is designed to lower the barriers of entry for users and developers while providing efficient algorithms that can accommodate large, highly parameterized problems. This effort has focused on (1) implementing the most popular features of PEST in a fashion that is easy for novice or experienced modelers to use and (2) creating a software design that is easy to extend; that is, this effort provides a documented object-oriented framework designed from the ground up to be modular and extensible. In addition, all PEST++ source code and its associated libraries, as well as the general run manager source code, have been integrated in the Microsoft Visual Studio® 2010 integrated development environment. The PEST++ code is designed to provide a foundation for an open-source development environment capable of producing robust and efficient parameter estimation tools for the environmental modeling community into the future.
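
The numerical heart of such highly parameterized inversion is a regularized least-squares step. The sketch below shows a generic Tikhonov-regularized linear solve of the kind that appears inside Gauss-Levenberg-Marquardt iterations; it illustrates the idea only and is not PEST++'s actual implementation:

```python
import numpy as np

def regularized_inversion(J, d, reg=1e-2):
    """Solve min ||J p - d||^2 + reg * ||p||^2 via the normal equations.

    J: sensitivity (Jacobian) matrix, d: observed-minus-simulated data.
    The regularization term keeps highly parameterized problems well-posed
    when J has more columns (parameters) than independent information."""
    n = J.shape[1]
    return np.linalg.solve(J.T @ J + reg * np.eye(n), J.T @ d)
```

In a nonlinear model this solve is repeated with an updated Jacobian each iteration; increasing `reg` trades fit quality for parameter stability.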

  17. Evaluation of surface layer flux parameterizations using in-situ observations

    Science.gov (United States)

    Katz, Jeremy; Zhu, Ping

    2017-09-01

    Appropriate calculation of surface turbulent fluxes between the atmosphere and the underlying ocean/land surface is one of the major challenges in geosciences. In practice, the surface turbulent fluxes are estimated from the mean surface meteorological variables based on the bulk transfer model combined with Monin-Obukhov Similarity (MOS) theory. Few studies have examined the extent to which such a flux parameterization can be applied across different weather and surface conditions. A novel validation method is developed in this study to evaluate the surface flux parameterization using in-situ observations collected at a station off the coast of the Gulf of Mexico. The main findings are: (a) theoretical predictions based on MOS theory do not match well with fluxes computed directly from the observations; (b) the largest spread in exchange coefficients occurs in strongly stable conditions with calm winds; and (c) large turbulent eddies, which depend strongly on the mean flow pattern and surface conditions, tend to break the constant-flux assumption in the surface layer.
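
The bulk transfer model being evaluated can be written in a few lines. The exchange coefficients below are fixed, roughly neutral placeholder values, whereas a MOS-based scheme would vary them with atmospheric stability:

```python
RHO_AIR = 1.2    # air density, kg m^-3 (near-surface placeholder)
CP_AIR = 1004.0  # specific heat of air, J kg^-1 K^-1

def bulk_fluxes(U, Ts, Ta, Cd=1.2e-3, Ch=1.1e-3):
    """Bulk transfer estimates of surface fluxes from mean variables:
    momentum flux  tau = rho * Cd * U^2          (N m^-2)
    sensible heat  H   = rho * cp * Ch * U * (Ts - Ta)  (W m^-2)
    Cd, Ch are dimensionless exchange coefficients (illustrative values)."""
    tau = RHO_AIR * Cd * U**2
    H = RHO_AIR * CP_AIR * Ch * U * (Ts - Ta)
    return tau, H
```

The study's validation amounts to comparing fluxes predicted this way (with MOS-adjusted coefficients) against fluxes computed directly from high-frequency turbulence observations.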

  18. The Effectiveness of Problem-Based Learning Approach Based on Multiple Intelligences in Terms of Student’s Achievement, Mathematical Connection Ability, and Self-Esteem

    Science.gov (United States)

    Kartikasari, A.; Widjajanti, D. B.

    2017-02-01

    The aim of this study is to explore the effectiveness of a learning approach using problem-based learning based on multiple intelligences in developing student's achievement, mathematical connection ability, and self-esteem. This study is experimental research; the sample was 30 Grade X MIA III students of MAN Yogyakarta III. The learning materials covered trigonometry and geometry. For the purpose of this study, the researchers designed an achievement test made up of 44 multiple-choice questions, 24 on trigonometry and 20 on geometry. The researchers also designed a mathematical connection test consisting of 7 essay questions and a 30-item self-esteem questionnaire. The learning approach was considered effective if the proportion of students who achieved the KKM on the achievement test, and the proportions of students who achieved at least a high-category score on the mathematical connection test and the self-esteem questionnaire, were each greater than or equal to 70%. Based on hypothesis testing at the 5% significance level, it can be concluded that the learning approach using problem-based learning based on multiple intelligences was effective in terms of student's achievement, mathematical connection ability, and self-esteem.
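
The 70%-proportion criterion tested at the 5% significance level corresponds to a one-sided one-proportion test. A sketch using the normal approximation; the study's exact test procedure is not specified in the record, so this is illustrative:

```python
from math import erf, sqrt

def prop_test_at_least(successes, n, p0=0.70, alpha=0.05):
    """One-sided one-proportion z-test (normal approximation):
    H0: p = p0 versus H1: p > p0. Returns (z, p_value, reject_H0)."""
    phat = successes / n
    z = (phat - p0) / sqrt(p0 * (1 - p0) / n)
    p_value = 1 - 0.5 * (1 + erf(z / sqrt(2)))  # 1 - Phi(z)
    return z, p_value, p_value < alpha
```

For example, 27 of 30 students (90%) reaching the criterion yields z of about 2.4 and a p-value well below 0.05, so the 70% effectiveness threshold would be judged exceeded.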

  19. Connectivity-based parcellation reveals distinct cortico-striatal connectivity fingerprints in Autism Spectrum Disorder.

    Science.gov (United States)

    Balsters, Joshua H; Mantini, Dante; Wenderoth, Nicole

    2018-04-15

    Autism Spectrum Disorder (ASD) has been associated with abnormal synaptic development causing a breakdown in functional connectivity. However, when measured at the macro scale using resting state fMRI, these alterations are subtle and often difficult to detect due to the large heterogeneity of the pathology. Recently, we outlined a novel approach for generating robust biomarkers of resting state functional magnetic resonance imaging (RS-fMRI) using connectivity based parcellation of gross morphological structures to improve single-subject reproducibility and generate more robust connectivity fingerprints. Here we apply this novel approach to investigating the organization and connectivity strength of the cortico-striatal system in a large sample of ASD individuals and typically developed (TD) controls (N=130 per group). Our results showed differences in the parcellation of the striatum in ASD. Specifically, the putamen was found to be one single structure in ASD, whereas this was split into anterior and posterior segments in an age, IQ, and head movement matched TD group. An analysis of the connectivity fingerprints revealed that the group differences in clustering were driven by differential connectivity between striatum and the supplementary motor area, posterior cingulate cortex, and posterior insula. Our approach for analysing RS-fMRI in clinical populations has provided clear evidence that cortico-striatal circuits are organized differently in ASD. Based on previous task-based segmentations of the striatum, we believe that the anterior putamen cluster present in TD, but not in ASD, likely contributes to social and language processes. Copyright © 2017 The Authors. Published by Elsevier Inc. All rights reserved.

  20. An IoT Based Predictive Connected Car Maintenance Approach

    Directory of Open Access Journals (Sweden)

    Rohit Dhall

    2017-03-01

    Full Text Available Internet of Things (IoT) is fast emerging and becoming an almost basic necessity in general life. The concept of using technology in our daily life is not new, but with advancements in technology, the impact of technology on a person's daily activities can be seen in almost all aspects of life. Today, all aspects of our daily life, be it a person's health, location, movement, etc., can be monitored and analyzed using information captured from various connected devices. This paper discusses one such use case, which can be implemented by the automobile industry using technological advancements in the areas of IoT and analytics. 'Connected car' is a terminology often associated with cars and other passenger vehicles that are capable of internet connectivity and of sharing various kinds of data with backend applications. The data being shared can be about the location and speed of the car, the status of various parts/lubricants of the car, and whether the car needs urgent service or not. Once data are transmitted to the backend services, various workflows can be created to take necessary actions, e.g., scheduling a service with the car service provider; or, if large numbers of cars are in the same location, the traffic management system can take necessary action. Connected cars can also communicate with each other, and can send alerts to each other in certain scenarios, such as a possible crash. This paper talks about how the concept of connected cars can be used to perform predictive car maintenance. It also discusses how certain technology components, i.e., Eclipse Mosquitto and Eclipse Paho, can be used to implement a predictive connected car use case.
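
A toy illustration of the predictive-maintenance rules a backend service might apply to a telemetry message received over MQTT (e.g., published by an Eclipse Paho client to a Mosquitto broker). The field names and thresholds are invented for illustration, not manufacturer specifications:

```python
def maintenance_alerts(telemetry):
    """Apply simple rule-based checks to one connected-car telemetry
    message (a decoded JSON payload represented as a dict) and return
    a list of recommended maintenance actions."""
    alerts = []
    if telemetry.get("oil_life_pct", 100) < 15:
        alerts.append("schedule oil service")
    if telemetry.get("brake_pad_mm", 10) < 3:
        alerts.append("replace brake pads")
    if telemetry.get("battery_v", 12.6) < 11.8:
        alerts.append("check battery/charging system")
    return alerts
```

A production system would replace these fixed thresholds with models trained on fleet history, but the message-in, workflow-out shape of the pipeline is the same.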

  1. Joyce and Ulysses: integrated and user-friendly tools for the parameterization of intramolecular force fields from quantum mechanical data.

    Science.gov (United States)

    Barone, Vincenzo; Cacelli, Ivo; De Mitri, Nicola; Licari, Daniele; Monti, Susanna; Prampolini, Giacomo

    2013-03-21

    The Joyce program is augmented with several new features, including the user-friendly Ulysses GUI, the possibility of complete excited-state parameterization, and a more flexible treatment of the force field electrostatic terms. A first validation is achieved by successfully comparing results obtained with Joyce2.0 to literature values for the same set of benchmark molecules. The parameterization protocol is also applied to two other, larger molecules, namely nicotine and a coumarin-based dye. In the former case, the parameterized force field is employed in molecular dynamics simulations of solvated nicotine, and the solute conformational distribution at room temperature is discussed. Force fields parameterized with Joyce2.0, for both the dye's ground and first excited electronic states, are validated through the calculation of absorption and emission vertical energies with molecular-mechanics-optimized structures. Finally, the newly implemented procedure to handle polarizable force fields is discussed and applied to the pyrimidine molecule as a test case.
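
The basic idea of deriving an intramolecular force-field term from quantum mechanical data can be sketched by fitting a harmonic bond term to a QM bond-stretch scan. This illustrates the concept only; Joyce's actual protocol is a more general least-squares fit over internal coordinates:

```python
import numpy as np

def fit_harmonic_bond(r, E):
    """Fit E(r) = 0.5 * k * (r - r0)^2 + E0 to a QM bond-stretch scan
    by quadratic least squares. Returns (k, r0): the force constant and
    equilibrium bond length, i.e., the two parameters a harmonic
    intramolecular bond term needs."""
    a, b, _ = np.polyfit(r, E, 2)  # E ~ a r^2 + b r + c
    r0 = -b / (2 * a)              # vertex of the parabola
    k = 2 * a                      # since 0.5 * k = a
    return k, r0
```

Repeating such fits for every bond, angle, and torsion (against QM energies and Hessians) is, conceptually, what populates an intramolecular force field.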

  2. The Barbero connection and its relation to the histories connection formalism without gauge fixing

    International Nuclear Information System (INIS)

    Savvidou, Ntina

    2006-01-01

    We present a histories version of the connection formalism of general relativity. Such an approach introduces a spacetime description, a characteristic feature of the histories approach, and we discuss the extent to which the usual loop variables are compatible with a spacetime description. In particular, we discuss the definability of the Barbero connection without any gauge fixing. Although it is not the pullback of a spacetime connection onto the 3-surface and it does not have a natural spacetime interpretation, this does not mean that the Barbero connection is not a suitable variable for quantization; it appears naturally in the formalism even in the absence of gauge fixing. It may therefore be employed to define loop variables similar to those employed in loop quantum gravity. However, the loop algebra would have to be augmented by the introduction of additional variables.

  3. Ocean's response to Hurricane Frances and its implications for drag coefficient parameterization at high wind speeds

    KAUST Repository

    Zedler, S. E.

    2009-04-25

    The drag coefficient parameterization of wind stress is investigated for tropical storm conditions using model sensitivity studies. The Massachusetts Institute of Technology (MIT) Ocean General Circulation Model was run in a regional setting with realistic stratification and forcing fields representing Hurricane Frances, which in early September 2004 passed east of the Caribbean Leeward Island chain. The model was forced with a NOAA-HWIND wind speed product after converting it to wind stress using four different drag coefficient parameterizations. Respective model results were tested against in situ measurements of temperature profiles and velocity, available from an array of 22 surface drifters and 12 subsurface floats. Changing the drag coefficient parameterization from one that saturated at a value of 2.3 × 10⁻³ to a constant drag coefficient of 1.2 × 10⁻³ reduced the standard deviation difference between the simulated minus the measured sea surface temperature change from 0.8°C to 0.3°C. Additionally, the standard deviation in the difference between simulated minus measured high-pass-filtered 15-m current speed reduced from 15 cm/s to 5 cm/s. The maximum difference in sea surface temperature response when two different turbulent mixing parameterizations were implemented was 0.3°C, i.e., only 11% of the maximum change of sea surface temperature caused by the storm. Copyright 2009 by the American Geophysical Union.
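
The two bounding choices tested above can be sketched as follows. The linear wind-speed dependence below is a Large-and-Pond-style assumption used only to illustrate where saturation at 2.3 × 10⁻³ kicks in; it is not the exact form used in the study:

```python
def cd_saturated(U, cd_cap=2.3e-3):
    """Wind-speed-dependent drag coefficient: linear growth with wind
    speed U (m/s), capped at cd_cap (the saturation value cited in the
    study). The linear form is an assumed illustration."""
    return min((0.49 + 0.065 * U) * 1e-3, cd_cap)

def wind_stress(U, cd, rho_air=1.2):
    """Wind stress tau = rho_air * Cd * U^2, in N m^-2."""
    return rho_air * cd * U**2
```

At hurricane-force winds the saturated parameterization still delivers nearly twice the stress of the constant Cd = 1.2 × 10⁻³ alternative, which is why the choice matters so much for the simulated sea surface temperature response.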

  4. Biological engineering applications of feedforward neural networks designed and parameterized by genetic algorithms.

    Science.gov (United States)

    Ferentinos, Konstantinos P

    2005-09-01

    Two neural network (NN) applications in the field of biological engineering are developed, designed, and parameterized by an evolutionary method based on genetic algorithms. The developed systems are a fault detection NN model and a predictive modeling NN system. An indirect or 'weak specification' representation was used for the encoding of NN topologies and training parameters into genes of the genetic algorithm (GA). Some a priori knowledge of the demands in network topology for specific application cases is required by this approach, so that the infinite search space of the problem is limited to some reasonable degree. Both one-hidden-layer and two-hidden-layer network architectures were explored by the GA. In addition to the network architecture, each gene of the GA also encoded the type of activation functions in both hidden and output nodes of the NN and the type of minimization algorithm used by the backpropagation algorithm for the training of the NN. Both models achieved satisfactory performance, while the GA system proved to be a powerful tool that can successfully replace the problematic trial-and-error approach that is usually used for these tasks.
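
    A "weak specification" encoding of this kind can be sketched as follows. The gene fields, allowed values, and crossover rule below are hypothetical illustrations of the idea (encoding design choices rather than weights), not the paper's actual encoding:

```python
import random

random.seed(42)

# Hypothetical design-choice alphabet for the weak-specification gene
HIDDEN_SIZES = [4, 8, 16, 32]                  # allowed nodes per hidden layer
ACTIVATIONS = ["tanh", "logistic", "relu"]
TRAINERS = ["gradient-descent", "conjugate-gradient", "levenberg-marquardt"]

def random_gene():
    """A gene encodes topology and training choices, not weights."""
    n_layers = random.choice([1, 2])           # one- or two-hidden-layer nets
    return {
        "hidden": [random.choice(HIDDEN_SIZES) for _ in range(n_layers)],
        "activation_hidden": random.choice(ACTIVATIONS),
        "activation_output": random.choice(ACTIVATIONS),
        "trainer": random.choice(TRAINERS),
    }

def crossover(a, b):
    """Uniform crossover: the child takes each design choice from a parent."""
    return {k: random.choice([a[k], b[k]]) for k in a}

population = [random_gene() for _ in range(10)]
child = crossover(population[0], population[1])
```

A fitness function (e.g. validation error of the decoded, trained network) would then drive selection over such genes.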

  5. Parameterization of light absorption by components of seawater in optically complex coastal waters of the Crimea Peninsula (Black Sea).

    Science.gov (United States)

    Dmitriev, Egor V; Khomenko, Georges; Chami, Malik; Sokolov, Anton A; Churilova, Tatyana Y; Korotaev, Gennady K

    2009-03-01

    The absorption of sunlight by oceanic constituents significantly contributes to the spectral distribution of the water-leaving radiance. Here it is shown that current parameterizations of absorption coefficients do not apply to the optically complex waters of the Crimea Peninsula. Based on in situ measurements, parameterizations of phytoplankton, nonalgal, and total particulate absorption coefficients are proposed. Their performance is evaluated using a log-log regression combined with a low-pass filter and the nonlinear least-squares method. Statistical significance of the estimated parameters is verified using the bootstrap method. The parameterizations are relevant for chlorophyll a concentrations ranging from 0.45 to 2 mg/m³.
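
    The log-log regression and bootstrap steps can be sketched on synthetic data. The power-law form a_ph = A · Chl^E and all numerical values below are illustrative assumptions, not the paper's fitted coefficients:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for in situ data: absorption assumed to follow
# a_ph = A * Chl^E with multiplicative noise (A, E chosen arbitrarily).
A_true, E_true = 0.05, 0.6
chl = rng.uniform(0.45, 2.0, size=100)        # mg/m^3, the paper's range
a_ph = A_true * chl**E_true * np.exp(rng.normal(0, 0.05, size=100))

def fit_power_law(x, y):
    """Log-log least-squares fit of y = A * x^E; returns (A, E)."""
    slope, intercept = np.polyfit(np.log(x), np.log(y), 1)
    return np.exp(intercept), slope

def bootstrap_params(x, y, n_boot=500):
    """Bootstrap resampling to assess spread of the fitted parameters."""
    params = np.empty((n_boot, 2))
    for i in range(n_boot):
        idx = rng.integers(0, len(x), size=len(x))
        params[i] = fit_power_law(x[idx], y[idx])
    return params.mean(axis=0), params.std(axis=0)

A_hat, E_hat = fit_power_law(chl, a_ph)
(mean_A, mean_E), (sd_A, sd_E) = bootstrap_params(chl, a_ph)
```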

  6. Initial Validation for the Estimation of Resting-State fMRI Effective Connectivity by a Generalization of the Correlation Approach

    Directory of Open Access Journals (Sweden)

    Nan Xu

    2017-05-01

    Full Text Available Resting-state functional MRI (rs-fMRI) is widely used to noninvasively study human brain networks. Network functional connectivity is often estimated by calculating the timeseries correlation between blood-oxygen-level dependent (BOLD) signals from different regions of interest (ROIs). However, standard correlation cannot characterize the direction of information flow between regions. In this paper, we introduce and test a new concept, prediction correlation, to estimate effective connectivity in functional brain networks from rs-fMRI. In this approach, the correlation between two BOLD signals is replaced by a correlation between one BOLD signal and a prediction of this signal via a causal system driven by another BOLD signal. Three validations are described: (1) prediction correlation performed well on simulated data where the ground truth was known, and outperformed four other methods; (2) on simulated data designed to display the “common driver” problem, prediction correlation did not introduce false connections between non-interacting driven ROIs; (3) on experimental data, prediction correlation recovered the previously identified network organization of the human brain. Prediction correlation scales well to work with hundreds of ROIs, enabling it to assess whole-brain interregional connectivity at the single-subject level. These results provide an initial validation that prediction correlation can capture the direction of information flow and estimate the duration of extended temporal delays in information flow between ROIs based on the BOLD signal. This approach not only maintains the high sensitivity to network connectivity provided by correlation analysis, but also performs well in the estimation of causal information flow in the brain.
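
    The core idea, replacing corr(x, y) with the correlation between y and a causal prediction of y driven by x, can be sketched with a least-squares FIR predictor standing in for the paper's causal system. The filter order and test signals are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)

def prediction_correlation(x, y, order=5):
    """Correlate y with a causal prediction of y driven by x.

    The prediction here is a least-squares FIR filter on past lags of x,
    a simple stand-in for the causal system described in the paper.
    """
    n = len(x)
    # Design matrix of lagged drivers: columns x[t-1], ..., x[t-order]
    X = np.column_stack([x[order - k - 1 : n - k - 1] for k in range(order)])
    target = y[order:]
    coef, *_ = np.linalg.lstsq(X, target, rcond=None)
    y_hat = X @ coef
    return np.corrcoef(target, y_hat)[0, 1]

# Toy example: x drives y with a 2-sample delay, plus noise
x = rng.normal(size=2000)
y = np.zeros_like(x)
y[2:] = 0.8 * x[:-2]
y += 0.1 * rng.normal(size=len(x))

r_xy = prediction_correlation(x, y)   # x -> y: high (true direction)
r_yx = prediction_correlation(y, x)   # y -> x: near zero (no causal path)
```

The asymmetry between r_xy and r_yx is what lets the measure indicate direction, something the plain correlation corr(x, y) cannot do.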

  7. Parameterization of sheared entrainment in a well-developed CBL. Part I: Evaluation of the scheme through large-eddy simulations

    Science.gov (United States)

    Liu, Peng; Sun, Jianning; Shen, Lidu

    2016-10-01

    The entrainment flux ratio Ae and the inversion layer (IL) thickness are two key parameters in a mixed layer model. Ae is defined as the ratio of the entrainment heat flux at the mixed layer top to the surface heat flux. The IL is the layer between the mixed layer and the free atmosphere. In this study, a parameterization of Ae is derived from the TKE budget in the first-order model for a well-developed CBL under the condition of linearly sheared geostrophic velocity with a zero value at the surface. It is also appropriate for a CBL under the condition of geostrophic velocity remaining constant with height. LESs are conducted under the above two conditions to determine the coefficients in the parameterization scheme. Results suggest that about 43% of the shear-produced TKE in the IL is available for entrainment, while the shear-produced TKE in the mixed layer and surface layer has little effect on entrainment. Based on this scheme, a new scale of convective turbulence velocity is proposed and applied to parameterize the IL thickness. The LES outputs for the CBLs under the condition of linearly sheared geostrophic velocity with a non-zero surface value are used to verify the performance of the parameterization scheme. It is found that the parameterized Ae and IL thickness agree well with the LES outputs.

  8. Correction of Excessive Precipitation over Steep and High Mountains in a GCM: A Simple Method of Parameterizing the Thermal Effects of Subgrid Topographic Variation

    Science.gov (United States)

    Chao, Winston C.

    2015-01-01

    The excessive precipitation over steep and high mountains (EPSM) in GCMs and meso-scale models is due to a lack of parameterization of the thermal effects of the subgrid-scale topographic variation. These thermal effects drive subgrid-scale heated slope induced vertical circulations (SHVC). SHVC provide a ventilation effect of removing heat from the boundary layer of resolvable-scale mountain slopes and depositing it higher up. The lack of SHVC parameterization is the cause of EPSM. The author has previously proposed a method of parameterizing SHVC, here termed SHVC.1. Although this has been successful in avoiding EPSM, the drawback of SHVC.1 is that it suppresses convective type precipitation in the regions where it is applied. In this article we propose a new method of parameterizing SHVC, here termed SHVC.2. In SHVC.2 the potential temperature and mixing ratio of the boundary layer are changed when used as input to the cumulus parameterization scheme over mountainous regions. This allows the cumulus parameterization to assume the additional function of SHVC parameterization. SHVC.2 has been tested in NASA Goddard's GEOS-5 GCM. It achieves the primary goal of avoiding EPSM while also avoiding the suppression of convective-type precipitation in regions where it is applied.

  9. Parameterizing the competition between homogeneous and heterogeneous freezing in ice cloud formation – polydisperse ice nuclei

    Directory of Open Access Journals (Sweden)

    D. Barahona

    2009-08-01

    Full Text Available This study presents a comprehensive ice cloud formation parameterization that computes the ice crystal number, size distribution, and maximum supersaturation from precursor aerosol and ice nuclei. The parameterization provides an analytical solution of the cloud parcel model equations and accounts for the competition effects between homogeneous and heterogeneous freezing, and between heterogeneous freezing in different modes. The diversity of heterogeneous nuclei is described through a nucleation spectrum function which is allowed to follow any form (i.e., derived from classical nucleation theory or from observations). The parameterization reproduces the predictions of a detailed numerical parcel model over a wide range of conditions and several expressions for the nucleation spectrum. The average error in ice crystal number concentration was −2.0 ± 8.5% for conditions of pure heterogeneous freezing, and 4.7 ± 21% when both homogeneous and heterogeneous freezing were active. The formulation presented is fast and free from requirements of numerical integration.

  10. Incorporating field wind data to improve crop evapotranspiration parameterization in heterogeneous regions

    Science.gov (United States)

    Accurate parameterization of reference evapotranspiration (ET0) is necessary for optimizing irrigation scheduling and avoiding costs associated with over-irrigation (water expense, loss of water productivity, energy costs, pollution) or with under-irrigation (crop stress and suboptimal yields or qua...

  11. Basic Concepts for Convection Parameterization in Weather Forecast and Climate Models: COST Action ES0905 Final Report

    Directory of Open Access Journals (Sweden)

    Jun–Ichi Yano

    2014-12-01

    Full Text Available The research network “Basic Concepts for Convection Parameterization in Weather Forecast and Climate Models” was organized with European funding (COST Action ES0905) for the period 2010–2014. Its extensive brainstorming suggests how the subgrid-scale parameterization problem in atmospheric modeling, especially for convection, can be examined and developed from the point of view of a robust theoretical basis. We caution against the current emphasis on massive observational data analyses and process studies. The closure and the entrainment–detrainment problems are identified as the two highest priorities for convection parameterization under the mass-flux formulation. The need for a drastic change in the current European research culture, as concerns policies and funding, is emphasized, so as not to further deplete the visions of the European researchers focusing on those basic issues.

  12. Impact of different parameterization schemes on simulation of mesoscale convective system over south-east India

    Science.gov (United States)

    Madhulatha, A.; Rajeevan, M.

    2018-02-01

    The main objective of the present paper is to examine the role of various parameterization schemes in simulating the evolution of a mesoscale convective system (MCS) that occurred over south-east India. Using the Weather Research and Forecasting (WRF) model, numerical experiments are conducted by considering various planetary boundary layer, microphysics, and cumulus parameterization schemes. Performances of different schemes are evaluated by examining boundary layer, reflectivity, and precipitation features of the MCS using ground-based and satellite observations. Among the various physical parameterization schemes, the Mellor-Yamada-Janjic (MYJ) boundary layer scheme is able to produce a deep boundary layer by simulating the warm temperatures necessary for storm initiation; the Thompson (THM) microphysics scheme is able to simulate the reflectivity through a reasonable distribution of different hydrometeors during the various stages of the system; and the Betts-Miller-Janjic (BMJ) cumulus scheme is able to capture the precipitation through a proper representation of the convective instability associated with the MCS. The present analysis suggests that MYJ, a local turbulent kinetic energy boundary layer scheme, which accounts for strong vertical mixing; THM, a six-class hybrid moment microphysics scheme, which considers number concentration along with the mixing ratio of rain hydrometeors; and BMJ, a closure cumulus scheme, which adjusts thermodynamic profiles based on climatological profiles, might have contributed to the better performance of the respective model simulations. A numerical simulation carried out using the above combination of schemes is able to capture storm initiation, propagation, surface variations, thermodynamic structure, and precipitation features reasonably well. This study clearly demonstrates that the simulation of MCS characteristics is highly sensitive to the choice of parameterization schemes.
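
    In WRF, the scheme combination identified above is selected through the &physics namelist. Assuming WRF's standard option numbering (mp_physics = 8 for Thompson, cu_physics = 2 for Betts-Miller-Janjic, bl_pbl_physics = 2 for Mellor-Yamada-Janjic, with sf_sfclay_physics = 2 for the Eta similarity surface layer that the MYJ scheme requires), the relevant fragment of namelist.input would look like:

```fortran
&physics
 mp_physics        = 8,
 cu_physics        = 2,
 bl_pbl_physics    = 2,
 sf_sfclay_physics = 2,
/
```

Other physics options (radiation, land surface) would of course also need to be set; only the entries discussed in the abstract are shown here.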

  13. Implementation of a generalized actuator line model for wind turbine parameterization in the Weather Research and Forecasting model

    Energy Technology Data Exchange (ETDEWEB)

    Marjanovic, Nikola [Department of Civil and Environmental Engineering, University of California, Berkeley, MC 1710, Berkeley, California 94720-1710, USA; Atmospheric, Earth and Energy Division, Lawrence Livermore National Laboratory, PO Box 808, L-103, Livermore, California 94551, USA; Mirocha, Jeffrey D. [Atmospheric, Earth and Energy Division, Lawrence Livermore National Laboratory, PO Box 808, L-103, Livermore, California 94551, USA; Kosović, Branko [Research Applications Laboratory, Weather Systems and Assessment Program, University Corporation for Atmospheric Research, PO Box 3000, Boulder, Colorado 80307, USA; Lundquist, Julie K. [Department of Atmospheric and Oceanic Sciences, University of Colorado, Boulder, Campus Box 311, Boulder, Colorado 80309, USA; National Renewable Energy Laboratory, 15013 Denver West Parkway, Golden, Colorado 80401, USA; Chow, Fotini Katopodes [Department of Civil and Environmental Engineering, University of California, Berkeley, MC 1710, Berkeley, California 94720-1710, USA

    2017-11-01

    A generalized actuator line (GAL) wind turbine parameterization is implemented within the Weather Research and Forecasting model to enable high-fidelity large-eddy simulations of wind turbine interactions with boundary layer flows under realistic atmospheric forcing conditions. Numerical simulations using the GAL parameterization are evaluated against both an already implemented generalized actuator disk (GAD) wind turbine parameterization and two field campaigns that measured the inflow and near-wake regions of a single turbine. The representation of wake wind speed, variance, and vorticity distributions is examined by comparing fine-resolution GAL and GAD simulations and GAD simulations at both fine and coarse-resolutions. The higher-resolution simulations show slightly larger and more persistent velocity deficits in the wake and substantially increased variance and vorticity when compared to the coarse-resolution GAD. The GAL generates distinct tip and root vortices that maintain coherence as helical tubes for approximately one rotor diameter downstream. Coarse-resolution simulations using the GAD produce similar aggregated wake characteristics to both fine-scale GAD and GAL simulations at a fraction of the computational cost. The GAL parameterization provides the capability to resolve near wake physics, including vorticity shedding and wake expansion.

  14. Infrared radiation parameterizations for the minor CO2 bands and for several CFC bands in the window region

    Science.gov (United States)

    Kratz, David P.; Chou, Ming-Dah; Yan, Michael M.-H.

    1993-01-01

    Fast and accurate parameterizations have been developed for the transmission functions of the CO2 9.4- and 10.4-micron bands, as well as the CFC-11, CFC-12, and CFC-22 bands located in the 8-12-micron region. The parameterizations are based on line-by-line calculations of transmission functions for the CO2 bands and on high spectral resolution laboratory measurements of the absorption coefficients for the CFC bands. Also developed are the parameterizations for the H2O transmission functions for the corresponding spectral bands. Compared to the high-resolution calculations, fluxes at the tropopause computed with the parameterizations are accurate to within 10 percent when overlapping of gas absorptions within a band is taken into account. For individual gas absorption, the accuracy is of order 0-2 percent. The climatic effects of these trace gases have been studied using a zonally averaged multilayer energy balance model, which includes seasonal cycles and a simplified deep ocean. With the trace gas abundances taken to follow the Intergovernmental Panel on Climate Change Low Emissions 'B' scenario, the transient response of the surface temperature is simulated for the period 1900-2060.

  15. Zlib: A numerical library for optimal design of truncated power series algebra and map parameterization routines

    International Nuclear Information System (INIS)

    Yan, Y.T.

    1996-11-01

    A brief review of the Zlib development is given. Emphasized is the Zlib nerve system, which uses One-Step Index Pointers (OSIPs) for efficient computation and flexible use of the Truncated Power Series Algebra (TPSA). Also emphasized is the treatment of parameterized maps with an object-oriented language (e.g. C++). A parameterized map can be a Vector Power Series (Vps) or a Lie generator represented by an exponent of a Truncated Power Series (Tps), each coefficient of which is itself a truncated power series object.
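
    The truncation rule at the heart of a TPSA can be illustrated with a minimal one-variable Tps class. Zlib itself handles multivariate series and uses OSIPs for coefficient indexing; this sketch only shows the truncated-multiplication idea:

```python
from itertools import product

class Tps:
    """Minimal one-variable truncated power series (illustrative only)."""

    def __init__(self, coeffs, order):
        self.order = order
        # Pad or truncate the coefficient list to length order + 1
        self.c = list(coeffs[: order + 1]) + [0.0] * (order + 1 - len(coeffs))

    def __add__(self, other):
        return Tps([a + b for a, b in zip(self.c, other.c)], self.order)

    def __mul__(self, other):
        # Convolution of coefficients, discarding terms beyond the order
        out = [0.0] * (self.order + 1)
        for i, j in product(range(self.order + 1), repeat=2):
            if i + j <= self.order:
                out[i + j] += self.c[i] * other.c[j]
        return Tps(out, self.order)

# (1 + x)^2 truncated at order 2 -> 1 + 2x + x^2
p = Tps([1.0, 1.0], order=2)
q = p * p
```

In a real TPSA library the index arithmetic for many variables dominates the design, which is exactly what the OSIP scheme in Zlib addresses.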

  16. A simple parameterization of aerosol emissions in RAMS

    Science.gov (United States)

    Letcher, Theodore

    Throughout the past decade, a high degree of attention has been focused on determining the microphysical impact of anthropogenically enhanced concentrations of Cloud Condensation Nuclei (CCN) on orographic snowfall in the mountains of the western United States. This area has garnered a lot of attention due to the implications this effect may have on local water resource distribution within the region. Recent advances in computing power and the development of highly advanced microphysical schemes within numerical models have provided an estimation of the sensitivity that orographic snowfall has to changes in atmospheric CCN concentrations. However, what is still lacking is a coupling between these advanced microphysical schemes and a real-world representation of CCN sources. Previously, an attempt to represent the heterogeneous evolution of aerosol was made by coupling three-dimensional aerosol output from the WRF Chemistry model to the Colorado State University (CSU) Regional Atmospheric Modeling System (RAMS) (Ward et al. 2011). The biggest problem associated with this scheme was its computational expense. In fact, the computational expense associated with this scheme was so high that it was prohibitive for simulations with fine enough resolution to accurately represent microphysical processes. To improve upon this method, a new parameterization for aerosol emission was developed in such a way that it was fully contained within RAMS. Several assumptions went into generating a computationally efficient aerosol emissions parameterization in RAMS. The most notable assumption was the decision to neglect the chemical processes involved in the formation of Secondary Aerosol (SA) and instead treat SA as primary aerosol via short-term WRF-CHEM simulations.
    While SA makes up a substantial portion of the total aerosol burden (much of which is made up of organic material), the representation of this process is highly complex and highly expensive within a numerical

  17. Parameterization of ionization rate by auroral electron precipitation in Jupiter

    Directory of Open Access Journals (Sweden)

    Y. Hiraki

    2008-02-01

    Full Text Available We simulate auroral electron precipitation into the Jovian atmosphere in which electron multi-directional scattering and energy degradation processes are treated exactly with a Monte Carlo technique. We make a parameterization of the calculated ionization rate of the neutral gas by electron impact in a similar way as used for the Earth's aurora. Our method allows the altitude distribution of the ionization rate to be obtained as a function of an arbitrary initial energy spectrum in the range of 1–200 keV. It also includes incident angle dependence and an arbitrary density distribution of molecular hydrogen. We show that there is little dependence of the estimated ionospheric conductance on atomic species such as H and He. We compare our results with those of recent studies with different electron transport schemes by adapting our parameterization to their atmospheric conditions, and we discuss the intrinsic problem with their simplified assumptions. The ionospheric conductance, which is important for Jupiter's magnetosphere-ionosphere coupling system, is estimated to vary by a factor that depends on the electron energy spectrum, based on recent observation and modeling. We discuss this difference through its relation with the field-aligned current and the electron spectrum.

  18. Stochastic Least-Squares Petrov--Galerkin Method for Parameterized Linear Systems

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Kookjin [Univ. of Maryland, College Park, MD (United States). Dept. of Computer Science; Carlberg, Kevin [Sandia National Lab. (SNL-CA), Livermore, CA (United States); Elman, Howard C. [Univ. of Maryland, College Park, MD (United States). Dept. of Computer Science and Inst. for Advanced Computer Studies

    2018-03-29

    Here, we consider the numerical solution of parameterized linear systems where the system matrix, the solution, and the right-hand side are parameterized by a set of uncertain input parameters. We explore spectral methods in which the solutions are approximated in a chosen finite-dimensional subspace. It has been shown that the stochastic Galerkin projection technique fails to minimize any measure of the solution error. As a remedy for this, we propose a novel stochastic least-squares Petrov--Galerkin (LSPG) method. The proposed method is optimal in the sense that it produces the solution that minimizes a weighted $\ell^2$-norm of the residual over all solutions in a given finite-dimensional subspace. Moreover, the method can be adapted to minimize the solution error in different weighted $\ell^2$-norms by simply applying a weighting function within the least-squares formulation. In addition, a goal-oriented seminorm induced by an output quantity of interest can be minimized by defining a weighting function as a linear functional of the solution. We establish optimality and error bounds for the proposed method, and extensive numerical experiments show that the weighted LSPG method outperforms other spectral methods in minimizing corresponding target weighted norms.
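
    The deterministic core of the weighted LSPG idea, minimizing a weighted residual norm over a subspace, can be sketched with dense linear algebra. The system, basis, and diagonal weighting below are illustrative stand-ins for the stochastic discretization in the paper:

```python
import numpy as np

rng = np.random.default_rng(2)

def lspg_solve(A, b, Phi, w):
    """Weighted least-squares Petrov-Galerkin sketch.

    Finds x = Phi @ c minimizing || diag(w) (A x - b) ||_2 over the
    subspace spanned by the columns of Phi.
    """
    W = np.diag(w)
    c, *_ = np.linalg.lstsq(W @ A @ Phi, W @ b, rcond=None)
    return Phi @ c

n, k = 20, 5
A = rng.normal(size=(n, n)) + n * np.eye(n)      # well-conditioned system
b = rng.normal(size=n)
Phi = np.linalg.qr(rng.normal(size=(n, k)))[0]   # orthonormal subspace basis
w = np.ones(n)                                   # uniform weighting

x_lspg = lspg_solve(A, b, Phi, w)
# The subspace minimizer's residual can be no larger than that of any
# other subspace element, e.g. the zero vector (residual ||b||).
res = np.linalg.norm(A @ x_lspg - b)
```

Swapping in a non-uniform w changes which weighted norm of the residual is minimized, which is the adaptation mechanism the abstract describes.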

  19. Parameterization of Time-Averaged Suspended Sediment Concentration in the Nearshore

    Directory of Open Access Journals (Sweden)

    Hyun-Doug Yoon

    2015-11-01

    Full Text Available To quantify the effect of wave breaking turbulence on sediment transport in the nearshore, the vertical distribution of time-averaged suspended sediment concentration (SSC) in the surf zone was parameterized in terms of the turbulent kinetic energy (TKE) at different cross-shore locations, including the bar crest, bar trough, and inner surf zone. Using data from a large-scale laboratory experiment, a simple relationship was developed between the time-averaged SSC and the time-averaged TKE. The vertical variation of the time-averaged SSC was fitted to an equation analogous to the turbulent dissipation rate term. At the bar crest, the proposed equation was slightly modified to incorporate the effect of near-bed sediment processes and yielded reasonable agreement. This parameterization yielded the best agreement at the bar trough, with a coefficient of determination R² ≥ 0.72 above the bottom boundary layer. The time-averaged SSC in the inner surf zone showed good agreement near the bed but poor agreement near the water surface, suggesting that a different sedimentation mechanism controls the SSC in the inner surf zone.

  20. The nuisance of nuisance regression: spectral misspecification in a common approach to resting-state fMRI preprocessing reintroduces noise and obscures functional connectivity.

    Science.gov (United States)

    Hallquist, Michael N; Hwang, Kai; Luna, Beatriz

    2013-11-15

    Recent resting-state functional connectivity fMRI (RS-fcMRI) research has demonstrated that head motion during fMRI acquisition systematically influences connectivity estimates despite bandpass filtering and nuisance regression, which are intended to reduce such nuisance variability. We provide evidence that the effects of head motion and other nuisance signals are poorly controlled when the fMRI time series are bandpass-filtered but the regressors are unfiltered, resulting in the inadvertent reintroduction of nuisance-related variation into frequencies previously suppressed by the bandpass filter, as well as suboptimal correction for noise signals in the frequencies of interest. This is important because many RS-fcMRI studies, including some focusing on motion-related artifacts, have applied this approach. In two cohorts of individuals (n=117 and 22) who completed resting-state fMRI scans, we found that the bandpass-regress approach consistently overestimated functional connectivity across the brain, typically on the order of r=.10-.35, relative to a simultaneous bandpass filtering and nuisance regression approach. Inflated correlations under the bandpass-regress approach were associated with head motion and cardiac artifacts. Furthermore, distance-related differences in the association of head motion and connectivity estimates were much weaker for the simultaneous filtering approach. We recommend that future RS-fcMRI studies ensure that the frequencies of nuisance regressors and fMRI data match prior to nuisance regression, and we advocate a simultaneous bandpass filtering and nuisance regression strategy that better controls nuisance-related variability. Copyright © 2013 Elsevier Inc. All rights reserved.
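
    The spectral misspecification the authors describe can be reproduced in a few lines: filtering the data but not the nuisance regressor reintroduces out-of-band nuisance variance, while filtering both removes it. The signals and frequencies below are illustrative, and an ideal FFT bandpass stands in for the Butterworth filters typically used in RS-fcMRI preprocessing:

```python
import numpy as np

rng = np.random.default_rng(3)
fs = 0.5                           # Hz (TR = 2 s, typical for fMRI)
t = np.arange(0, 1200, 1 / fs)     # 1200 s of data

def bandpass(y, fs, lo=0.009, hi=0.08):
    """Ideal FFT bandpass over the standard RS-fcMRI frequency band."""
    f = np.fft.rfftfreq(y.size, d=1 / fs)
    Y = np.fft.rfft(y)
    Y[(f < lo) | (f > hi)] = 0.0
    return np.fft.irfft(Y, n=y.size)

def regress_out(y, reg):
    """Remove the least-squares projection of y onto a nuisance regressor."""
    X = np.column_stack([np.ones_like(reg), reg])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return y - X @ beta

# Nuisance with power inside (0.04 Hz) and outside (0.15 Hz) the passband
nuisance = np.sin(2 * np.pi * 0.04 * t) + np.sin(2 * np.pi * 0.15 * t)
data = rng.normal(size=t.size) + 2.0 * nuisance

# Problematic order: filter the data, regress the UNFILTERED nuisance;
# the mismatched regressor reintroduces 0.15 Hz variance the filter removed.
bad = regress_out(bandpass(data, fs), nuisance)
# Recommended: filter data and regressor identically before regression.
good = regress_out(bandpass(data, fs), bandpass(nuisance, fs))
```

Correlating the two residuals against the out-of-band 0.15 Hz component shows that only the mismatched pipeline carries it, mirroring the artifact reintroduction reported in the paper.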

  1. Connectivity Neurofeedback Training Can Differentially Change Functional Connectivity and Cognitive Performance.

    Science.gov (United States)

    Yamashita, Ayumu; Hayasaka, Shunsuke; Kawato, Mitsuo; Imamizu, Hiroshi

    2017-10-01

    Advances in functional magnetic resonance imaging have made it possible to provide real-time feedback on brain activity. Neurofeedback has been applied to therapeutic interventions for psychiatric disorders. Since many studies have shown that most psychiatric disorders exhibit abnormal brain networks, a novel experimental paradigm named connectivity neurofeedback, which can directly modulate a brain network, has emerged as a promising approach to treat psychiatric disorders. Here, we investigated the hypothesis that connectivity neurofeedback can induce the aimed direction of change in functional connectivity, and the differential change in cognitive performance according to the direction of change in connectivity. We selected the connectivity between the left primary motor cortex and the left lateral parietal cortex as the target. Subjects were divided into 2 groups, in which only the direction of change (an increase or a decrease in correlation) in the experimentally manipulated connectivity differed between the groups. As a result, subjects successfully induced the expected connectivity changes in either of the 2 directions. Furthermore, cognitive performance significantly and differentially changed from preneurofeedback to postneurofeedback training between the 2 groups. These findings indicate that connectivity neurofeedback can induce the aimed direction of change in connectivity and also a differential change in cognitive performance. © The Author 2017. Published by Oxford University Press.

  2. Solvation of monovalent anions in formamide and methanol: Parameterization of the IEF-PCM model

    International Nuclear Information System (INIS)

    Boees, Elvis S.; Bernardi, Edson; Stassen, Hubert; Goncalves, Paulo F.B.

    2008-01-01

    The thermodynamics of solvation for a series of monovalent anions in formamide and methanol has been studied using the polarizable continuum model (PCM). The parameterization of this continuum model was guided by molecular dynamics simulations. The parameterized PCM model predicts the Gibbs free energies of solvation for 13 anions in formamide and 16 anions in methanol in very good agreement with experimental data. Two sets of atomic radii were tested in the definition of the solute cavities in the PCM, and their performances are evaluated and discussed. Mean absolute deviations of the calculated free energies of solvation from the experimental values are in the range of 1.3–2.1 kcal/mol.

  3. CloudSat 2C-ICE product update with a new Ze parameterization in lidar-only region.

    Science.gov (United States)

    Deng, Min; Mace, Gerald G; Wang, Zhien; Berry, Elizabeth

    2015-12-16

    The CloudSat 2C-ICE data product is derived from a synergetic ice cloud retrieval algorithm that takes as input a combination of CloudSat radar reflectivity (Ze) and Cloud-Aerosol Lidar and Infrared Pathfinder Satellite Observation lidar attenuated backscatter profiles. The algorithm uses a variational method for retrieving profiles of visible extinction coefficient, ice water content, and ice particle effective radius in ice or mixed-phase clouds. Because of the nature of the measurements, and to maintain consistency in the algorithm numerics, we choose to parameterize (with appropriately large specification of uncertainty) Ze and lidar attenuated backscatter in the regions of a cirrus layer where only the lidar provides data and where only the radar provides data, respectively. To improve the Ze parameterization in the lidar-only region, the relations among Ze, extinction, and temperature have been more thoroughly investigated using Atmospheric Radiation Measurement long-term millimeter cloud radar and Raman lidar measurements. This Ze parameterization provides a first-order estimate of Ze as a function of extinction and temperature in the lidar-only regions of cirrus layers. The effects of this new parameterization have been evaluated for consistency using radiation closure methods, in which the radiative fluxes derived from retrieved cirrus profiles compare favorably with Clouds and the Earth's Radiant Energy System measurements. Results will be made publicly available for the entire CloudSat record (since 2006) in the most recent product release, known as R05.

  4. A Heuristic Parameterization for the Integrated Vertical Overlap of Cumulus and Stratus

    Science.gov (United States)

    Park, Sungsu

    2017-10-01

    The author developed a heuristic parameterization to handle the contrasting vertical overlap structures of cumulus and stratus in an integrated way. The parameterization assumes that cumulus is maximum-randomly overlapped with adjacent cumulus; stratus is maximum-randomly overlapped with adjacent stratus; and radiation and precipitation areas at each model interface are grouped into four categories, that is, convective, stratiform, mixed, and clear areas. For simplicity, thermodynamic scalars within individual portions of cloud, radiation, and precipitation areas are assumed to be internally homogeneous. The parameterization was implemented into the Seoul National University Atmosphere Model version 0 (SAM0) in an offline mode and tested over the globe. The offline control simulation reasonably reproduces the online surface precipitation flux and longwave cloud radiative forcing (LWCF). Although the cumulus fraction is much smaller than the stratus fraction, cumulus dominantly contributes to precipitation production in the tropics. For radiation, however, stratus is dominant. Compared with the maximum overlap, the random overlap of stratus produces stronger LWCF and, surprisingly, more precipitation flux due to less evaporation of convective precipitation. Compared with the maximum overlap, the random overlap of cumulus simulates stronger LWCF and weaker precipitation flux. Compared with the control simulation with separate cumulus and stratus, the simulation with a single-merged cloud substantially enhances the LWCF in the tropical deep convection and midlatitude storm track regions. The process-splitting treatment of convective and stratiform precipitation with an independent precipitation approximation (IPA) simulates weaker surface precipitation flux than the control simulation in the tropical region.
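
    The maximum-random overlap rule that this parameterization builds on can be sketched as the standard recursion for total cloud cover from a column of layer cloud fractions: adjacent cloudy layers overlap maximally, while layers separated by clear air overlap randomly. This is an illustration of the overlap rule itself, not of the SAM0 implementation:

```python
def total_cloud_cover_maxran(frac):
    """Total cloud cover under maximum-random overlap.

    frac: layer cloud fractions ordered from one end of the column to the
    other. Adjacent layers overlap maximally; a clear layer in between
    makes the cloud blocks above and below overlap randomly.
    """
    clear = 1.0 - frac[0]
    for k in range(1, len(frac)):
        denom = 1.0 - frac[k - 1]
        if denom <= 0.0:
            return 1.0  # a fully cloudy layer covers the whole column
        clear *= (1.0 - max(frac[k], frac[k - 1])) / denom
    return 1.0 - clear

# Two adjacent layers of 0.3 overlap maximally: total cover stays 0.3
cover_adjacent = total_cloud_cover_maxran([0.3, 0.3])        # 0.3
# The same two layers separated by clear air overlap randomly:
cover_separated = total_cloud_cover_maxran([0.3, 0.0, 0.3])  # 1 - 0.7^2 = 0.51
```

The contrast between the two calls is exactly the maximum-versus-random sensitivity the abstract reports for LWCF and precipitation.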

  5. Nitrous Oxide Emissions from Biofuel Crops and Parameterization in the EPIC Biogeochemical Model

    Science.gov (United States)

    This presentation describes year 1 field measurements of N2O fluxes and crop yields which are used to parameterize the EPIC biogeochemical model for the corresponding field site. Initial model simulations are also presented.

  6. Multi-sensor remote sensing parameterization of heat fluxes over heterogeneous land surfaces

    NARCIS (Netherlands)

    Faivre, R.D.

    2014-01-01

    The parameterization of heat transfer by remote sensing, and based on SEBS scheme for turbulent heat fluxes retrieval, already proved to be very convenient for estimating evapotranspiration (ET) over homogeneous land surfaces. However, the use of such a method over heterogeneous landscapes (e.g.

  7. A parameterization study for elastic VTI Full Waveform Inversion of hydrophone components: synthetic and North Sea field data examples

    KAUST Repository

    Guitton, Antoine; Alkhalifah, Tariq Ali

    2017-01-01

Choosing the right parameterization to describe a transversely isotropic medium with a vertical symmetry axis (VTI) allows us to match the scattering potential of these parameters to the available data in a way that avoids potential tradeoffs and focuses on the parameters to which the data are sensitive. For 2-D elastic full waveform inversion of pressure components in VTI media, and for data with a reasonable range of offsets (as with those found in conventional streamer data acquisition systems), assuming that we have a kinematically accurate NMO velocity (vnmo) and anellipticity parameter η (or horizontal velocity, vh) obtained from tomographic methods, a parameterization in terms of horizontal velocity vh, η and ε is preferred to the more conventional parameterization in terms of vertical velocity vv, δ and ε. In the vh, η, ε parameterization and for reasonable scattering angles (<60°), ε acts as a “garbage collector” and absorbs most of the amplitude discrepancies between modeled and observed data, more so when density ρ and shear-wave velocity vs are not inverted for (a standard practice with streamer data). On the contrary, in the vv, δ, ε parameterization, ε is mostly sensitive to large scattering angles, leaving vv exposed to strong leakages, mainly from ρ. These assertions are demonstrated on the synthetic Marmousi II model as well as a North Sea OBC dataset, where inverting for the horizontal velocity rather than the vertical velocity yields more accurate models and migrated images.

  8. A parameterization study for elastic VTI Full Waveform Inversion of hydrophone components: synthetic and North Sea field data examples

    KAUST Repository

    Guitton, Antoine

    2017-08-15

Choosing the right parameterization to describe a transversely isotropic medium with a vertical symmetry axis (VTI) allows us to match the scattering potential of these parameters to the available data in a way that avoids potential tradeoffs and focuses on the parameters to which the data are sensitive. For 2-D elastic full waveform inversion of pressure components in VTI media, and for data with a reasonable range of offsets (as with those found in conventional streamer data acquisition systems), assuming that we have a kinematically accurate NMO velocity (vnmo) and anellipticity parameter η (or horizontal velocity, vh) obtained from tomographic methods, a parameterization in terms of horizontal velocity vh, η and ε is preferred to the more conventional parameterization in terms of vertical velocity vv, δ and ε. In the vh, η, ε parameterization and for reasonable scattering angles (<60°), ε acts as a “garbage collector” and absorbs most of the amplitude discrepancies between modeled and observed data, more so when density ρ and shear-wave velocity vs are not inverted for (a standard practice with streamer data). On the contrary, in the vv, δ, ε parameterization, ε is mostly sensitive to large scattering angles, leaving vv exposed to strong leakages, mainly from ρ. These assertions are demonstrated on the synthetic Marmousi II model as well as a North Sea OBC dataset, where inverting for the horizontal velocity rather than the vertical velocity yields more accurate models and migrated images.

  9. Mass-flux subgrid-scale parameterization in analogy with multi-component flows: a formulation towards scale independence

    Directory of Open Access Journals (Sweden)

    J.-I. Yano

    2012-11-01

A generalized mass-flux formulation is presented, which no longer takes a limit of vanishing fractional areas for subgrid-scale components. The presented formulation is applicable to a situation in which the scale separation is still satisfied, but fractional areas occupied by individual subgrid-scale components are no longer small. A self-consistent formulation is presented by generalizing the mass-flux formulation under the segmentally-constant approximation (SCA) to the grid-scale variabilities. The present formulation is expected to alleviate problems arising from increasing resolutions of operational forecast models without invoking a more extensive overhaul of parameterizations.

The present formulation leads to an analogy between the large-scale atmospheric flow and multi-component flows. This analogy allows any subgrid-scale variability to be included in the mass-flux parameterization under SCA, including stratiform clouds as well as cold pools in the boundary layer.

An important finding under the present formulation is that the subgrid-scale quantities are advected by the large-scale velocities characteristic of given subgrid-scale components (large-scale subcomponent flows), rather than by the total large-scale flows as simply defined by the grid-box average. In this manner, each subgrid-scale component behaves like a component of a multi-component flow. As a result, this formulation ensures the lateral interaction of subgrid-scale variability across grid boxes, which is missing in current parameterizations based on vertical one-dimensional models, thereby reducing the grid-size dependence of its performance. It is shown that the large-scale subcomponent flows are driven by large-scale subcomponent pressure gradients. The formulation, as a result, furthermore includes a self-contained description of subgrid-scale momentum transport.

    The main purpose of the present paper
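
The segmentally-constant decomposition underlying this formulation can be sketched as follows. This is an illustrative reading, not the paper's code; the component names and values are hypothetical:

```python
# Segmentally-constant approximation (SCA): the grid box is partitioned
# into subcomponents (e.g. updraft, environment), each internally
# homogeneous, with fractional areas sigma_i summing to one.
def grid_mean(sigmas, values):
    """Grid-box mean of a quantity as the area-weighted sum over
    subcomponents: phi_bar = sum_i sigma_i * phi_i."""
    assert abs(sum(sigmas) - 1.0) < 1e-9, "fractional areas must sum to 1"
    return sum(s * v for s, v in zip(sigmas, values))

def mass_flux(rho, sigma, w):
    """Component mass flux without the vanishing-area limit:
    M_i = rho * sigma_i * w_i, valid even when sigma_i is not small."""
    return rho * sigma * w

# updraft (20% of the box, w = 2 m/s), environment (80%, w = -0.5 m/s)
sigmas = [0.2, 0.8]
w = [2.0, -0.5]
print(grid_mean(sigmas, w))  # grid-mean vertical velocity: 0.0 m/s
```

The point of the generalized formulation is precisely that sigma_i need not be small, so the subcomponent means and fluxes above remain well defined at high resolution.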

  10. APPROACHES AND METHODS FOR OVERCOMING OF POSTTRAUMATIC STRESS IN CONNECTION WITH DISASTER SITUATIONS

    Directory of Open Access Journals (Sweden)

    Desislava Todorova

    2018-03-01

Disaster situations have become more and more frequent over the last decade; they intensify the fragmentation of modern society and bring about a sense of helplessness. Under those conditions, art-therapeutic groups provide a sense of connection with other people and interpersonal support. The aim of the current study is to examine the way in which the brain and body react to events causing an acute stress reaction, to assess the applicability and helpfulness of art therapy in connection with disaster situations, and to compare the main approaches and methods used in practical work with people who have suffered traumatic events. The results of the study show that art therapists may disclose an emotional problem, related to the trauma sustained, that the client cannot cope with on his or her own. The choice of method is guided by the therapeutic needs of the person. For the individual, the different methods of art therapy create a medium for relief from overwhelming emotions or traumas; in social terms, they help increase the sense of social adaptation of people of all ages and of whole families. In conclusion, the use of different forms of support after disaster situations is of major significance for the recovery and maintenance of the physical, emotional, and mental health of the population.

  11. Efficient Parameterization for Grey-box Model Identification of Complex Physical Systems

    DEFF Research Database (Denmark)

    Blanke, Mogens; Knudsen, Morten Haack

    2006-01-01

    Grey box model identification preserves known physical structures in a model but with limits to the possible excitation, all parameters are rarely identifiable, and different parametrizations give significantly different model quality. Convenient methods to show which parameterizations are the be...... that need be constrained to achieve satisfactory convergence. Identification of nonlinear models for a ship illustrate the concept....

  12. The constellation of dietary factors in adolescent acne: a semantic connectivity map approach.

    Science.gov (United States)

    Grossi, E; Cazzaniga, S; Crotti, S; Naldi, L; Di Landro, A; Ingordo, V; Cusano, F; Atzori, L; Tripodi Cutrì, F; Musumeci, M L; Pezzarossa, E; Bettoli, V; Caproni, M; Bonci, A

    2016-01-01

Different lifestyle and dietetic factors have been linked with the onset and severity of acne. The aim was to assess the complex interconnection between dietetic variables and acne through a reanalysis of data from a case-control study using a semantic connectivity map approach. 563 subjects, aged 10-24 years, involved in a case-control study of acne between March 2009 and February 2010, were considered in this study. The analysis evaluated the link between moderate to severe acne and anthropometric variables, family history and dietetic factors. Analyses were conducted by relying on an artificial adaptive system, the Auto Semantic Connectivity Map (AutoCM). The AutoCM map showed that moderate-severe acne was closely associated with a family history of acne in first-degree relatives, obesity (BMI ≥ 30), and high consumption of milk, in particular skim milk, cheese/yogurt, sweets/cakes and chocolate, together with a low consumption of fish and limited intake of fruits/vegetables. Our analyses confirm the link between several dietetic items and acne. When providing care, dermatologists should also be aware of the complex interconnection between dietetic factors and acne. © 2014 European Academy of Dermatology and Venereology.
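
AutoCM itself is a proprietary artificial adaptive system, but semantic connectivity maps are conventionally drawn as the minimum spanning tree of the learned link-strength matrix. A minimal sketch of that final graph-filtering step, with hypothetical variables and weights (not the study's data):

```python
import heapq

def minimum_spanning_tree(dist):
    """Prim's algorithm on a symmetric distance matrix; returns MST
    edges as (i, j) pairs. In a semantic connectivity map, only these
    strongest links between variables are drawn."""
    n = len(dist)
    in_tree = [False] * n
    in_tree[0] = True
    heap = [(dist[0][j], 0, j) for j in range(1, n)]
    heapq.heapify(heap)
    edges = []
    while heap and len(edges) < n - 1:
        d, i, j = heapq.heappop(heap)
        if in_tree[j]:
            continue
        in_tree[j] = True
        edges.append((i, j))
        for k in range(n):
            if not in_tree[k]:
                heapq.heappush(heap, (dist[j][k], j, k))
    return edges

# Hypothetical "distances" (1 - link strength) among 4 variables:
# 0 = severe acne, 1 = family history, 2 = skim milk, 3 = fish intake
D = [[0.0, 0.2, 0.4, 0.9],
     [0.2, 0.0, 0.7, 0.8],
     [0.4, 0.7, 0.0, 0.6],
     [0.9, 0.8, 0.6, 0.0]]
print(minimum_spanning_tree(D))  # [(0, 1), (0, 2), (2, 3)]
```

The tree keeps only the strongest association for each variable, which is what makes the resulting map readable despite a fully connected weight matrix.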

  13. Parameterized Finite Element Modeling and Buckling Analysis of Six Typical Composite Grid Cylindrical Shells

    Science.gov (United States)

    Lai, Changliang; Wang, Junbiao; Liu, Chuang

    2014-10-01

Six typical composite grid cylindrical shells are constructed by superimposing three basic types of ribs. The buckling behavior and structural efficiency of these shells are then analyzed under axial compression, pure bending, torsion, and transverse bending using finite element (FE) models. The FE models are created by a parametric FE modeling approach that defines FE models with the original naturally twisted geometry and orients the cross-sections of beam elements exactly; the approach is parameterized and coded in the Patran Command Language (PCL). Demonstrations of the FE modeling indicate that the program enables efficient generation of FE models and facilitates parametric studies and the design of grid shells. Using the program, the effects of helical angles on the buckling behavior of the six typical grid cylindrical shells are determined. The results of these studies indicate that the triangle grid and rotated triangle grid cylindrical shells are more efficient than the others under axial compression and pure bending, whereas under torsion and transverse bending, the hexagon grid cylindrical shell is most efficient. Additionally, buckling mode shapes are compared and provide an understanding of composite grid cylindrical shells that is useful in the preliminary design of such structures.

  14. Intercomparison of Martian Lower Atmosphere Simulated Using Different Planetary Boundary Layer Parameterization Schemes

    Science.gov (United States)

    Natarajan, Murali; Fairlie, T. Duncan; Dwyer Cianciolo, Alicia; Smith, Michael D.

    2015-01-01

We use the mesoscale modeling capability of the Mars Weather Research and Forecasting (MarsWRF) model to study the sensitivity of the simulated Martian lower atmosphere to differences in the parameterization of the planetary boundary layer (PBL). Characterization of the Martian atmosphere and realistic representation of processes such as the mixing of tracers like dust depend on how well the model reproduces the evolution of the PBL structure. MarsWRF is based on the NCAR WRF model and retains some of the PBL schemes available in the Earth version. Published studies have examined the performance of different PBL schemes in NCAR WRF with the help of observations. Currently such assessments are not feasible for Martian atmospheric models due to a lack of observations, but it is of interest to study the sensitivity of the model to the PBL parameterization. Typically, for standard Martian atmospheric simulations, we have used the Medium Range Forecast (MRF) PBL scheme, which adds a correction term to the vertical gradients to incorporate nonlocal effects. For this study, we have also used two other parameterizations: a nonlocal closure scheme called the Yonsei University (YSU) PBL scheme and a turbulent kinetic energy closure scheme called the Mellor-Yamada-Janjic (MYJ) PBL scheme. We present intercomparisons of the near-surface temperature profiles, boundary layer heights, and winds obtained from the different simulations. We plan to use available temperature observations from the Mini-TES instrument onboard the rovers Spirit and Opportunity in evaluating the model results.

  15. Parameterization of phase change of water in a mesoscale model

    Energy Technology Data Exchange (ETDEWEB)

    Levkov, L; Eppel, D; Grassl, H

    1987-01-01

A parameterization scheme for the phase change of water is suggested for use in the 3-D numerical nonhydrostatic model GESIMA. The microphysical formulation follows the so-called bulk technique. With this procedure, the net production rates in the balance equations for water and potential temperature are given for both the liquid and the ice phase. Convectively stable as well as convectively unstable mesoscale systems are considered.

  16. A Comparative Study of Nucleation Parameterizations: 2. Three-Dimensional Model Application and Evaluation

    Science.gov (United States)

    Following the examination and evaluation of 12 nucleation parameterizations presented in part 1, 11 of them representing binary, ternary, kinetic, and cluster‐activated nucleation theories are evaluated in the U.S. Environmental Protection Agency Community Multiscale Air Quality ...

  17. Estimating wetland connectivity to streams in the Prairie Pothole Region: An isotopic and remote sensing approach

    Science.gov (United States)

    Brooks, J. R.; Mushet, David M.; Vanderhoof, Melanie; Leibowitz, Scott G.; Neff, Brian; Christensen, J. R.; Rosenberry, Donald O.; Rugh, W. D.; Alexander, L.C.

    2018-01-01

    Understanding hydrologic connectivity between wetlands and perennial streams is critical to understanding the reliance of stream flow on inputs from wetlands. We used the isotopic evaporation signal in water and remote sensing to examine wetland‐stream hydrologic connectivity within the Pipestem Creek watershed, North Dakota, a watershed dominated by prairie‐pothole wetlands. Pipestem Creek exhibited an evaporated‐water signal that had approximately half the isotopic‐enrichment signal found in most evaporatively enriched prairie‐pothole wetlands. Groundwater adjacent to Pipestem Creek had isotopic values that indicated recharge from winter precipitation and had no significant evaporative enrichment, indicating that enriched surface water did not contribute significantly to groundwater discharging into Pipestem Creek. The estimated surface water area necessary to generate the evaporation signal within Pipestem Creek was highly dynamic, varied primarily with the amount of discharge, and was typically greater than the immediate Pipestem Creek surface water area, indicating that surficial flow from wetlands contributed to stream flow throughout the summer. We propose a dynamic range of spilling thresholds for prairie‐pothole wetlands across the watershed allowing for wetland inputs even during low‐flow periods. Combining Landsat estimates with the isotopic approach allowed determination of potential (Landsat) and actual (isotope) contributing areas in wetland‐dominated systems. This combined approach can give insights into the changes in location and magnitude of surface water and groundwater pathways over time. This approach can be used in other areas where evaporation from wetlands results in a sufficient evaporative isotopic signal.
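
The isotopic reasoning can be reduced to a two-end-member mixing calculation. This is a simplified sketch with hypothetical δ18O values; the study's actual mass balance is more involved (it also accounts for evaporative enrichment along the stream itself):

```python
def evaporated_fraction(d_stream, d_gw, d_wetland):
    """Two-end-member mixing: fraction of stream water carrying the
    wetland evaporation signal, from delta-18O (per mil):
        f = (d_stream - d_gw) / (d_wetland - d_gw)
    where d_gw is unevaporated groundwater and d_wetland is the
    evaporatively enriched wetland end member."""
    return (d_stream - d_gw) / (d_wetland - d_gw)

# Hypothetical values (per mil): groundwater -12.0, evaporatively
# enriched wetland -6.0, observed stream -9.0
print(evaporated_fraction(-9.0, -12.0, -6.0))  # 0.5
```

A stream signal halfway between the two end members, as in the paper's observation that Pipestem Creek carries roughly half the wetland enrichment, implies a substantial wetland contribution to flow.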

  18. The feasibility of parameterizing four-state equilibria using relaxation dispersion measurements

    International Nuclear Information System (INIS)

    Li Pilong; Martins, Ilídio R. S.; Rosen, Michael K.

    2011-01-01

    Coupled equilibria play important roles in controlling information flow in biochemical systems, including allosteric molecules and multidomain proteins. In the simplest case, two equilibria are coupled to produce four interconverting states. In this study, we assessed the feasibility of determining the degree of coupling between two equilibria in a four-state system via relaxation dispersion measurements. A major bottleneck in this effort is the lack of efficient approaches to data analysis. To this end, we designed a strategy to efficiently evaluate the smoothness of the target function surface (TFS). Using this approach, we found that the TFS is very rough when fitting benchmark CPMG data to all adjustable variables of the four-state equilibria. After constraining a portion of the adjustable variables, which can often be achieved through independent biochemical manipulation of the system, the smoothness of TFS improves dramatically, although it is still insufficient to pinpoint the solution. The four-state equilibria can be finally solved with further incorporation of independent chemical shift information that is readily available. We also used Monte Carlo simulations to evaluate how well each adjustable parameter can be determined in a large kinetic and thermodynamic parameter space and how much improvement can be achieved in defining the parameters through additional measurements. The results show that in favorable conditions the combination of relaxation dispersion and biochemical manipulation allow the four-state equilibrium to be parameterized, and thus coupling strength between two processes to be determined.
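
The notion of coupling between two equilibria can be made concrete with a toy four-state model. This is a hypothetical sketch of the thermodynamic cycle, not the authors' fitting code:

```python
def four_state_populations(K1, K2, alpha):
    """Equilibrium populations of a four-state cycle A<->B (K1) and
    A<->C (K2), with the two equilibria coupled by a factor alpha:
    the doubly converted state D has Boltzmann weight alpha*K1*K2.
    alpha == 1 corresponds to independent (uncoupled) equilibria."""
    w = [1.0, K1, K2, alpha * K1 * K2]   # weights of A, B, C, D
    Z = sum(w)                            # partition function
    return [x / Z for x in w]

# Positive coupling (alpha = 4) favours the doubly converted state D
pops = four_state_populations(K1=1.0, K2=1.0, alpha=4.0)
print(pops)  # D carries 4/7 of the population; A, B, C carry 1/7 each
```

Fitting relaxation dispersion data amounts to constraining K1, K2, alpha, and the associated rates, which is why the paper emphasizes independently fixing some of these variables to smooth the target function surface.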

  19. Connected Traveler

    Energy Technology Data Exchange (ETDEWEB)

    2016-06-01

    The Connected Traveler framework seeks to boost the energy efficiency of personal travel and the overall transportation system by maximizing the accuracy of predicted traveler behavior in response to real-time feedback and incentives. It is anticipated that this approach will establish a feedback loop that 'learns' traveler preferences and customizes incentives to meet or exceed energy efficiency targets by empowering individual travelers with information needed to make energy-efficient choices and reducing the complexity required to validate transportation system energy savings. This handout provides an overview of NREL's Connected Traveler project, including graphics, milestones, and contact information.

  20. Weighing neutrinos in the scenario of vacuum energy interacting with cold dark matter: application of the parameterized post-Friedmann approach

    Energy Technology Data Exchange (ETDEWEB)

    Guo, Rui-Yun; Li, Yun-He; Zhang, Jing-Fei; Zhang, Xin, E-mail: guoruiyun110@163.com, E-mail: liyh19881206@126.com, E-mail: jfzhang@mail.neu.edu.cn, E-mail: zhangxin@mail.neu.edu.cn [Department of Physics, College of Sciences, Northeastern University, Shenyang 110004 (China)

    2017-05-01

We constrain the neutrino mass in the scenario of vacuum energy interacting with cold dark matter by using current cosmological observations. To avoid the large-scale instability problem in interacting dark energy models, we employ the parameterized post-Friedmann (PPF) approach to calculate the perturbation evolution for the Q = βHρ_c and Q = βHρ_Λ models. The current observational data sets used in this work include Planck (cosmic microwave background), BSH (baryon acoustic oscillations, type Ia supernovae, and Hubble constant), and LSS (redshift-space distortions and weak lensing). According to the constraint results, we find that β > 0 at more than the 1σ level for the Q = βHρ_c model, which indicates that cold dark matter decays into vacuum energy, while β = 0 is consistent with the current data at the 1σ level for the Q = βHρ_Λ model. Taking the ΛCDM model as a baseline model, we find that a smaller upper limit, ∑m_ν < 0.11 eV (2σ), is induced by the latest BAO BOSS DR12 data and the Hubble constant measurement H_0 = 73.00 ± 1.75 km s⁻¹ Mpc⁻¹. For the Q = βHρ_c model, we obtain ∑m_ν < 0.20 eV (2σ) from Planck+BSH. For the Q = βHρ_Λ model, ∑m_ν < 0.10 eV (2σ) and ∑m_ν < 0.14 eV (2σ) are derived from Planck+BSH and Planck+BSH+LSS, respectively. We show that these smaller upper limits on ∑m_ν are affected more or less by the tension between H_0 and other observational data.

  1. Test Driven Development of a Parameterized Ice Sheet Component

    Science.gov (United States)

    Clune, T.

    2011-12-01

Test driven development (TDD) is a software development methodology that offers many advantages over traditional approaches, including reduced development and maintenance costs, improved reliability, and superior design quality. Although TDD is widely accepted in many software communities, its suitability to scientific software is largely undemonstrated and warrants a degree of skepticism. Indeed, numerical algorithms pose several challenges to unit testing in general, and TDD in particular. Among these challenges are the need for simple, non-redundant closed-form expressions to compare against the results obtained from the implementation, as well as realistic error estimates. The necessity for serial and parallel performance raises additional concerns for many scientific applications. In previous work I demonstrated that TDD performed well for the development of a relatively simple numerical model that simulates the growth of snowflakes, but the results were anecdotal and of limited relevance to the far more complex software components typical of climate models. This investigation has now been extended by successfully applying TDD to the implementation of a substantial portion of a new parameterized ice sheet component within a full climate model. After a brief introduction to TDD, I will present techniques that address some of the obstacles encountered with numerical algorithms. I will conclude with some quantitative and qualitative comparisons against climate components developed in a more traditional manner.
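
The core TDD pattern for numerics, testing an implementation against a simple closed-form solution with a realistic error estimate, might look like this (a hypothetical example, not the ice sheet component itself):

```python
import math

def integrate_decay(y0, k, dt, nsteps):
    """Forward-Euler integration of dy/dt = -k*y, standing in for a
    numerical model component under test."""
    y = y0
    for _ in range(nsteps):
        y += dt * (-k * y)
    return y

def test_against_closed_form():
    # Compare the implementation with the closed form y(t) = y0*exp(-k*t),
    # using an error bound consistent with the scheme's order of accuracy
    # (forward Euler is first order in dt).
    y0, k, t, n = 1.0, 2.0, 1.0, 10000
    approx = integrate_decay(y0, k, t / n, n)
    exact = y0 * math.exp(-k * t)
    assert abs(approx - exact) < 50 * (t / n), "accuracy regression"

# In TDD this test is written first and fails until integrate_decay
# is implemented correctly.
test_against_closed_form()
print("test passed")
```

The difficulty the abstract points to is precisely in choosing the closed-form comparison and the tolerance: too loose and the test catches nothing, too tight and legitimate truncation error fails it.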

  2. Parameterization of ionization rate by auroral electron precipitation in Jupiter

    Directory of Open Access Journals (Sweden)

    Y. Hiraki

    2008-02-01

We simulate auroral electron precipitation into the Jovian atmosphere, treating electron multi-directional scattering and energy degradation processes exactly with a Monte Carlo technique. We parameterize the calculated electron-impact ionization rate of the neutral gas in a similar way to that used for the Earth's aurora. Our method allows the altitude distribution of the ionization rate to be obtained as a function of an arbitrary initial energy spectrum in the range of 1–200 keV. It also includes incident-angle dependence and an arbitrary density distribution of molecular hydrogen. We show that there is little dependence of the estimated ionospheric conductance on atomic species such as H and He. We compare our results with those of recent studies that use different electron transport schemes by adapting our parameterization to their atmospheric conditions, and we discuss the intrinsic problem of their simplified assumptions. The ionospheric conductance, which is important for Jupiter's magnetosphere-ionosphere coupling system, is estimated to vary by a factor that depends on the electron energy spectrum based on recent observation and modeling. We discuss this difference through its relation with the field-aligned current and the electron spectrum.

  3. Rapid parameterization of small molecules using the Force Field Toolkit.

    Science.gov (United States)

    Mayne, Christopher G; Saam, Jan; Schulten, Klaus; Tajkhorshid, Emad; Gumbart, James C

    2013-12-15

    The inability to rapidly generate accurate and robust parameters for novel chemical matter continues to severely limit the application of molecular dynamics simulations to many biological systems of interest, especially in fields such as drug discovery. Although the release of generalized versions of common classical force fields, for example, General Amber Force Field and CHARMM General Force Field, have posited guidelines for parameterization of small molecules, many technical challenges remain that have hampered their wide-scale extension. The Force Field Toolkit (ffTK), described herein, minimizes common barriers to ligand parameterization through algorithm and method development, automation of tedious and error-prone tasks, and graphical user interface design. Distributed as a VMD plugin, ffTK facilitates the traversal of a clear and organized workflow resulting in a complete set of CHARMM-compatible parameters. A variety of tools are provided to generate quantum mechanical target data, setup multidimensional optimization routines, and analyze parameter performance. Parameters developed for a small test set of molecules using ffTK were comparable to existing CGenFF parameters in their ability to reproduce experimentally measured values for pure-solvent properties (<15% error from experiment) and free energy of solvation (±0.5 kcal/mol from experiment). Copyright © 2013 Wiley Periodicals, Inc.

  4. Impact of climate seasonality on catchment yield: A parameterization for commonly-used water balance formulas

    Science.gov (United States)

    de Lavenne, Alban; Andréassian, Vazken

    2018-03-01

    This paper examines the hydrological impact of the seasonality of precipitation and maximum evaporation: seasonality is, after aridity, a second-order determinant of catchment water yield. Based on a data set of 171 French catchments (where aridity ranged between 0.2 and 1.2), we present a parameterization of three commonly-used water balance formulas (namely, Turc-Mezentsev, Tixeront-Fu and Oldekop formulas) to account for seasonality effects. We quantify the improvement of seasonality-based parameterization in terms of the reconstitution of both catchment streamflow and water yield. The significant improvement obtained (reduction of RMSE between 9 and 14% depending on the formula) demonstrates the importance of climate seasonality in the determination of long-term catchment water balance.
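
As a concrete reference point, the unparameterized Turc-Mezentsev formula (one of the three formulas above) can be sketched as follows; the seasonality-dependent adjustment introduced in the paper is not reproduced here:

```python
def turc_mezentsev_yield(P, E0, n=2.0):
    """Long-term water balance after Turc-Mezentsev:
        AE = P * (1 + (P/E0)**n) ** (-1/n)
        Q  = P - AE
    where P is mean annual precipitation, E0 maximum (potential)
    evaporation, and n a shape parameter (n = 2 gives the classic
    Turc form). The paper's contribution is to make such formulas
    sensitive to climate seasonality, not just aridity E0/P."""
    AE = P * (1.0 + (P / E0) ** n) ** (-1.0 / n)
    return P - AE

# Humid example: P = 800 mm/yr, E0 = 600 mm/yr (aridity 0.75)
print(round(turc_mezentsev_yield(800.0, 600.0), 6))  # 320.0 mm/yr
```

Two catchments with the same aridity but different seasonality can have noticeably different yields, which is the second-order effect the parameterization captures.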

  5. Integrated Care and Connected Health Approaches Leveraging Personalised Health through Big Data Analytics.

    Science.gov (United States)

    Maglaveras, Nicos; Kilintzis, Vassilis; Koutkias, Vassilis; Chouvarda, Ioanna

    2016-01-01

Integrated care and connected health are two fast-evolving concepts that have the potential to leverage personalised health. On the one hand, the restructuring of care models and the implementation of new systems and integrated care programs providing coaching and advanced intervention possibilities enable medical decision support and personalised healthcare services. On the other hand, the connected health ecosystem builds the means to follow and support citizens via personal health systems in their everyday activities and, thus, gives rise to an unprecedented wealth of data. These approaches lead to a deluge of complex data, as well as to new types of interactions with and among users of the healthcare ecosystem. The main challenges concern the data layer, the information layer, and the output of information processing and analytics. In all of the above layers, the primary concern is quality, both of data and of information, which increases the need for filtering mechanisms. Especially in the data layer, the big biodata management and analytics ecosystem is evolving; telemonitoring is a step forward for leveraging data quality, with numerous challenges still left to address, partly due to the large number of micro- and nano-sensors and technologies available today, as well as the heterogeneity in users' backgrounds and data sources. This opens new R&D pathways in biomedical information processing and management, as well as in the design of new intelligent decision support systems (DSS) and interventions for patients. In this paper, we illustrate these issues through exemplar research targeting chronic patients, illustrating the current status and trends in PHS within the integrated care and connected care world.

  6. Approaches to highly parameterized inversion: A guide to using PEST for model-parameter and predictive-uncertainty analysis

    Science.gov (United States)

    Doherty, John E.; Hunt, Randall J.; Tonkin, Matthew J.

    2010-01-01

    Analysis of the uncertainty associated with parameters used by a numerical model, and with predictions that depend on those parameters, is fundamental to the use of modeling in support of decisionmaking. Unfortunately, predictive uncertainty analysis with regard to models can be very computationally demanding, due in part to complex constraints on parameters that arise from expert knowledge of system properties on the one hand (knowledge constraints) and from the necessity for the model parameters to assume values that allow the model to reproduce historical system behavior on the other hand (calibration constraints). Enforcement of knowledge and calibration constraints on parameters used by a model does not eliminate the uncertainty in those parameters. In fact, in many cases, enforcement of calibration constraints simply reduces the uncertainties associated with a number of broad-scale combinations of model parameters that collectively describe spatially averaged system properties. The uncertainties associated with other combinations of parameters, especially those that pertain to small-scale parameter heterogeneity, may not be reduced through the calibration process. To the extent that a prediction depends on system-property detail, its postcalibration variability may be reduced very little, if at all, by applying calibration constraints; knowledge constraints remain the only limits on the variability of predictions that depend on such detail. Regrettably, in many common modeling applications, these constraints are weak. Though the PEST software suite was initially developed as a tool for model calibration, recent developments have focused on the evaluation of model-parameter and predictive uncertainty. As a complement to functionality that it provides for highly parameterized inversion (calibration) by means of formal mathematical regularization techniques, the PEST suite provides utilities for linear and nonlinear error-variance and uncertainty analysis in
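
The role of mathematical regularization in highly parameterized inversion can be illustrated with a minimal Tikhonov example. This is a sketch of the general idea only, not PEST's actual algorithm, which provides far richer regularization and uncertainty machinery:

```python
import numpy as np

def tikhonov_solve(J, r, lam):
    """Regularized linear inversion: minimize ||J p - r||^2 + lam*||p||^2.
    The penalty term plays the role of 'knowledge constraints' that keep
    a highly parameterized, otherwise ill-posed problem well-behaved."""
    n = J.shape[1]
    return np.linalg.solve(J.T @ J + lam * np.eye(n), J.T @ r)

# Toy ill-posed problem: two nearly collinear parameters, noisy data.
# The exact-fit solution is [0, 2]; regularization prefers the
# physically plausible balanced solution near [1, 1].
J = np.array([[1.0, 1.0], [1.0, 1.0001]])
r = np.array([2.0, 2.0002])
p = tikhonov_solve(J, r, lam=1e-6)
print(p)  # approximately [0.995, 1.005]
```

The example shows the abstract's point in miniature: calibration data alone constrain only the broad combination p1 + p2, while the small-scale split between the two parameters is fixed by the regularization (knowledge) term.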

  7. Ecological connectivity networks in rapidly expanding cities.

    Science.gov (United States)

    Nor, Amal Najihah M; Corstanje, Ron; Harris, Jim A; Grafius, Darren R; Siriwardena, Gavin M

    2017-06-01

Urban expansion increases fragmentation of the landscape. In effect, fragmentation decreases connectivity, causes green space loss and impacts upon the ecology and function of green space. Restoration of the functionality of green space often requires restoring the ecological connectivity of this green space within the city matrix. However, identifying ecological corridors that integrate different structural and functional connectivity of green space remains vague. Assessing connectivity for developing an ecological network by using efficient models is essential to improve these networks under rapid urban expansion. This paper presents a novel methodological approach to assess and model connectivity for the Eurasian tree sparrow (Passer montanus) and Yellow-vented bulbul (Pycnonotus goiavier) in three cities (Kuala Lumpur, Malaysia; Jakarta, Indonesia and Metro Manila, Philippines). The approach identifies potential priority corridors for ecological connectivity networks. The study combined circuit models, connectivity analysis and least-cost models to identify potential corridors by integrating structure and function of green space patches to provide reliable ecological connectivity network models in the cities. Relevant parameters such as landscape resistance and green space structure (vegetation density, patch size and patch distance) were derived from an expert and literature-based approach based on the preference of bird behaviour. The integrated models allowed the assessment of connectivity for both species using different measures of green space structure revealing the potential corridors and least-cost pathways for both bird species at the patch sites. The implementation of improvements to the identified corridors could increase the connectivity of green space. This study provides examples of how combining models can contribute to the improvement of ecological networks in rapidly expanding cities and demonstrates the usefulness of such models for
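
The least-cost half of such an analysis reduces to a shortest-path computation over a landscape resistance raster. A minimal sketch, with hypothetical resistance values rather than the study's expert-derived parameterization:

```python
import heapq

def least_cost_path(resistance, start, goal):
    """Dijkstra's algorithm over a raster resistance surface with
    4-neighbour moves; the cost of a step is the resistance of the
    cell entered. This is the 'least-cost model' side of corridor
    analysis; circuit models additionally weigh all possible paths."""
    rows, cols = len(resistance), len(resistance[0])
    dist = {start: 0.0}
    heap = [(0.0, start)]
    while heap:
        d, (r, c) = heapq.heappop(heap)
        if (r, c) == goal:
            return d
        if d > dist[(r, c)]:
            continue
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols:
                nd = d + resistance[nr][nc]
                if nd < dist.get((nr, nc), float("inf")):
                    dist[(nr, nc)] = nd
                    heapq.heappush(heap, (nd, (nr, nc)))
    return float("inf")

# Hypothetical raster: 1 = green space, 10 = built-up matrix
grid = [[1, 10, 1],
        [1, 10, 1],
        [1,  1, 1]]
print(least_cost_path(grid, (0, 0), (0, 2)))  # 6.0, routed via the bottom row
```

The corridor follows the low-resistance green cells around the built-up column rather than crossing it, which is exactly the behaviour used to flag priority corridors between patches.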

  8. Ecological connectivity networks in rapidly expanding cities

    Directory of Open Access Journals (Sweden)

    Amal Najihah M. Nor

    2017-06-01

    Full Text Available Urban expansion increases fragmentation of the landscape. In effect, fragmentation decreases connectivity, causes green space loss and impacts upon the ecology and function of green space. Restoration of the functionality of green space often requires restoring the ecological connectivity of this green space within the city matrix. However, identifying ecological corridors that integrate different structural and functional connectivity of green space remains vague. Assessing connectivity for developing an ecological network by using efficient models is essential to improve these networks under rapid urban expansion. This paper presents a novel methodological approach to assess and model connectivity for the Eurasian tree sparrow (Passer montanus) and Yellow-vented bulbul (Pycnonotus goiavier) in three cities (Kuala Lumpur, Malaysia; Jakarta, Indonesia; and Metro Manila, Philippines). The approach identifies potential priority corridors for ecological connectivity networks. The study combined circuit models, connectivity analysis and least-cost models to identify potential corridors by integrating structure and function of green space patches to provide reliable ecological connectivity network models in the cities. Relevant parameters such as landscape resistance and green space structure (vegetation density, patch size and patch distance) were derived from an expert- and literature-based approach based on the preference of bird behaviour. The integrated models allowed the assessment of connectivity for both species using different measures of green space structure, revealing the potential corridors and least-cost pathways for both bird species at the patch sites. The implementation of improvements to the identified corridors could increase the connectivity of green space. This study provides examples of how combining models can contribute to the improvement of ecological networks in rapidly expanding cities and demonstrates the usefulness of such
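The least-cost component of the workflow above can be illustrated with a minimal sketch (not the authors' code): Dijkstra's algorithm over a toy landscape-resistance raster, where low-resistance cells stand in for permeable green space and high-resistance cells for the hostile urban matrix.

```python
import heapq

def least_cost_path(resistance, start, goal):
    """Dijkstra's algorithm over a resistance raster: the accumulated cost
    of a path is the sum of cell resistances along it (4-neighbour moves,
    start cell included). Returns (total_cost, path) or (inf, [])."""
    rows, cols = len(resistance), len(resistance[0])
    dist = {start: resistance[start[0]][start[1]]}
    prev = {}
    pq = [(dist[start], start)]
    while pq:
        d, (r, c) = heapq.heappop(pq)
        if (r, c) == goal:
            # Reconstruct the corridor by walking predecessors back.
            path, node = [], goal
            while node in prev:
                path.append(node)
                node = prev[node]
            path.append(start)
            return d, path[::-1]
        if d > dist.get((r, c), float("inf")):
            continue
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols:
                nd = d + resistance[nr][nc]
                if nd < dist.get((nr, nc), float("inf")):
                    dist[(nr, nc)] = nd
                    prev[(nr, nc)] = (r, c)
                    heapq.heappush(pq, (nd, (nr, nc)))
    return float("inf"), []

# Toy resistance surface: 1 = green space, 9 = built-up matrix.
grid = [
    [1, 1, 9, 1],
    [9, 1, 9, 1],
    [9, 1, 1, 1],
]
cost, path = least_cost_path(grid, (0, 0), (2, 3))
print(cost, path)
```

A real corridor analysis would derive the resistance values from the expert/literature scores mentioned in the abstract rather than from a hand-written grid.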

  9. Prioritizing connection requests in GMPLS-controlled optical networks

    DEFF Research Database (Denmark)

    Ruepp, Sarah Renée; Koster, A.; Andriolli, N.

    2009-01-01

    We prioritize bidirectional connection requests by combining dynamic connection provisioning with off-line optimization. Results show that the proposed approach decreases wavelength-converter usage, thereby allowing operators to reduce blocking probability under bulk connection assignment or network...

  10. Recent developments and assessment of a three-dimensional PBL parameterization for improved wind forecasting over complex terrain

    Science.gov (United States)

    Kosovic, B.; Jimenez, P. A.; Haupt, S. E.; Martilli, A.; Olson, J.; Bao, J. W.

    2017-12-01

    At present, the planetary boundary layer (PBL) parameterizations available in most numerical weather prediction (NWP) models are one-dimensional. One-dimensional parameterizations are based on the assumption of horizontal homogeneity. This homogeneity assumption is appropriate for grid cell sizes greater than 10 km. However, for mesoscale simulations of flows in complex terrain with grid cell sizes below 1 km, the assumption of horizontal homogeneity is violated. Applying a one-dimensional PBL parameterization to high-resolution mesoscale simulations in complex terrain could result in significant error. For high-resolution mesoscale simulations of flows in complex terrain, we have therefore developed and implemented a three-dimensional (3D) PBL parameterization in the Weather Research and Forecasting (WRF) model. The implementation of the 3D PBL scheme is based on the developments outlined by Mellor and Yamada (1974, 1982). Our implementation in the WRF model uses a purely algebraic model (level 2) to diagnose the turbulent fluxes. To evaluate the performance of the 3D PBL model, we use observations from the Wind Forecast Improvement Project 2 (WFIP2). The WFIP2 field study took place in the Columbia River Gorge area from 2015 to 2017. We focus on selected cases in which physical phenomena of significance for wind energy applications, such as mountain waves, topographic wakes, and gap flows, were observed. Our assessment of the 3D PBL parameterization also considers a large-eddy simulation (LES). We carried out a nested LES with grid cell sizes of 30 m and 10 m covering a large fraction of the WFIP2 study area. Both LES domains were discretized using 6000 x 3000 x 200 grid cells in the zonal, meridional, and vertical directions, respectively. The LES results are used to assess the relative magnitude of horizontal gradients of turbulent stresses and fluxes in comparison to vertical gradients.
The presentation will highlight the advantages of the 3

  11. Interconnection of subsystems in closed-loop systems

    DEFF Research Database (Denmark)

    Niemann, Hans Henrik; Poulsen, Niels Kjølstad

    2009-01-01

    Youla-Jabr-Bongiorno-Kucera (YJBK) parameterization. The dual YJBK transfer function is applied in connection with the closed-loop stability analysis. The primary YJBK parameterization is applied in connection with the design of controllers. Further, it is shown how it is possible to obtain a direct estimation of a connected sub...
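The abstract is truncated, but the YJBK result it refers to is standard; as a reminder of its form (stated here for context, not taken from the record): given a right coprime factorization P = N M^{-1} of the plant and a nominal stabilizing controller K_0 = U V^{-1}, every stabilizing controller is generated by a stable parameter Q,

```latex
K(Q) = \left(U + M\,Q\right)\left(V + N\,Q\right)^{-1}, \qquad Q \in \mathcal{RH}_\infty ,
```

with Q = 0 recovering the nominal controller K_0.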

  12. Bone and fat connection in aging bone.

    Science.gov (United States)

    Duque, Gustavo

    2008-07-01

    The fat and bone connection plays an important role in the pathophysiology of age-related bone loss. This review will focus on the age-induced mechanisms regulating the predominant differentiation of mesenchymal stem cells into adipocytes. Additionally, bone marrow fat will be considered as a diagnostic and therapeutic approach to osteoporosis. There are two types of bone and fat connection. The 'systemic connection', usually seen in obese patients, is hormonally regulated and associated with high bone mass and strength. The 'local connection' happens inside the bone marrow. Increasing amounts of bone marrow fat affect bone turnover through the inhibition of osteoblast function and survival and the promotion of osteoclast differentiation and activation. This interaction is regulated by paracrine secretion of fatty acids and adipokines. Additionally, bone marrow fat could be quantified using noninvasive methods and could be used as a therapeutic approach due to its capacity to transdifferentiate into bone without affecting other types of fat in the body. The bone and fat connection within the bone marrow constitutes a typical example of lipotoxicity. Additionally, bone marrow fat could be used as a new diagnostic and therapeutic approach for osteoporosis in older persons.

  13. Parameterization of Cherenkov Light Lateral Distribution Function as a Function of the Zenith Angle around the Knee Region

    OpenAIRE

    Abdulsttar, Marwah M.; Al-Rubaiee, A. A.; Ali, Abdul Halim Kh.

    2016-01-01

    Cherenkov light lateral distribution function (CLLDF) simulation was fulfilled using the CORSIKA code for configurations of the Tunka EAS array at different zenith angles. The parameterization of the CLLDF was carried out as a function of the distance from the shower core in extensive air showers (EAS) and zenith angle, on the basis of CORSIKA simulations of primary protons around the knee region with the energy 3×10^15 eV at different zenith angles. The parameterized CLLDF is verified in comparison...

  14. Comparison of different Kalman filter approaches in deriving time varying connectivity from EEG data.

    Science.gov (United States)

    Ghumare, Eshwar; Schrooten, Maarten; Vandenberghe, Rik; Dupont, Patrick

    2015-08-01

    Kalman filter approaches are widely applied to derive time-varying effective connectivity from electroencephalographic (EEG) data. For multi-trial data, a classical Kalman filter (CKF), designed for the estimation of single-trial data, can be implemented by trial-averaging the data or by averaging single-trial estimates. A general linear Kalman filter (GLKF) provides an extension for multi-trial data. In this work, we studied the performance of the different Kalman filtering approaches for different values of signal-to-noise ratio (SNR), number of trials and number of EEG channels. We used a simulated model from which we calculated scalp recordings. From these recordings, we estimated cortical sources. Multivariate autoregressive model parameters and partial directed coherence were calculated for these estimated sources and compared with the ground truth. The results showed an overall superior performance of the GLKF except for low SNR and low numbers of trials.
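The classical Kalman filter at the core of the CKF approach can be sketched for a single channel (an illustrative simplification, not the authors' multi-trial multivariate estimator): the AR coefficients form the state vector, evolving as a random walk, and each new sample is the scalar observation.

```python
import random

def kalman_tvar(y, p=2, q=1e-4, r=1.0):
    """Kalman filter tracking time-varying AR(p) coefficients.
    State x_t: the p AR coefficients (random-walk model, process noise q).
    Observation: y_t = h_t^T x_t + noise, h_t = [y_{t-1}, ..., y_{t-p}]."""
    n = len(y)
    x = [0.0] * p                                    # coefficient estimate
    P = [[1.0 if i == j else 0.0 for j in range(p)] for i in range(p)]
    est = []
    for t in range(p, n):
        h = [y[t - 1 - i] for i in range(p)]
        for i in range(p):                           # predict: P <- P + q*I
            P[i][i] += q
        yhat = sum(h[i] * x[i] for i in range(p))
        S = r + sum(h[i] * P[i][j] * h[j] for i in range(p) for j in range(p))
        K = [sum(P[i][j] * h[j] for j in range(p)) / S for i in range(p)]
        e = y[t] - yhat                              # innovation
        x = [x[i] + K[i] * e for i in range(p)]
        P = [[P[i][j] - K[i] * sum(h[k] * P[k][j] for k in range(p))
              for j in range(p)] for i in range(p)]
        est.append(list(x))
    return est

# Simulate a stationary AR(2) process and check the filter converges
# toward the true coefficients (0.5, -0.3).
random.seed(0)
a1, a2 = 0.5, -0.3
y = [0.0, 0.0]
for _ in range(2000):
    y.append(a1 * y[-1] + a2 * y[-2] + random.gauss(0, 1))
coeffs = kalman_tvar(y, p=2)
print(coeffs[-1])
```

The GLKF extends this idea by stacking all trials into one observation equation instead of averaging them.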

  15. Oral manifestations of connective tissue disease and novel therapeutic approaches.

    Science.gov (United States)

    Heath, Kenisha R; Rogers, Roy S; Fazel, Nasim

    2015-10-16

    Connective tissue diseases such as systemic lupus erythematosus (SLE), systemic sclerosis (SSc), and Sjögren syndrome (SS) have presented many difficulties both in their diagnosis and treatment. Known causes for this difficulty include uncertainty of disease etiology, the multitude of clinical presentations, the unpredictable disease course, and the variable cell types, soluble mediators, and tissue factors that are believed to play a role in the pathogenesis of connective tissue diseases. The characteristic oral findings seen with these specific connective tissue diseases may assist with more swift diagnostic capability. Additionally, the recent use of biologics may redefine the success rate in the treatment and management of the disease. In this review we describe the oral manifestations associated with SLE, SSc, and SS and review the novel biologic drugs used to treat these conditions.

  16. Modelling and parameterizing the influence of tides on ice-shelf melt rates

    Science.gov (United States)

    Jourdain, N.; Molines, J. M.; Le Sommer, J.; Mathiot, P.; de Lavergne, C.; Gurvan, M.; Durand, G.

    2017-12-01

    Significant Antarctic ice sheet thinning is observed in several sectors of Antarctica, in particular in the Amundsen Sea sector, where warm circumpolar deep waters affect basal melting. The latter has the potential to trigger marine ice sheet instabilities, with an associated potential for rapid sea level rise. It is therefore crucial to simulate and understand the processes associated with ice-shelf melt rates. In particular, the absence of a tidal representation remains a caveat of numerous ocean hindcasts and climate projections. In the Amundsen Sea, tides are relatively weak and the melt-induced circulation is stronger than the tidal circulation. Using a regional 1/12° ocean model of the Amundsen Sea, we nonetheless find that tides can increase melt rates by up to 36% in some ice-shelf cavities. Among the processes that can possibly affect melt rates, the most important is an increased exchange at the ice/ocean interface resulting from the presence of strong tidal currents along the ice drafts. Approximately a third of this effect is compensated by a decrease in thermal forcing along the ice draft, which is related to enhanced vertical mixing in the ocean interior in the presence of tides. Parameterizing the effect of tides is an alternative to representing explicit tides in an ocean model, and has the advantage of not requiring any filtering of ocean model outputs. We therefore explore different ways to parameterize the effects of tides on ice-shelf melt. First, we compare several methods to impose tidal velocities along the ice draft. We show that obtaining a realistic spatial distribution of tidal velocities is important, and that it can be deduced from the barotropic velocities of a tide model. Then, we explore several aspects of parameterized tidal mixing to reproduce the tide-induced decrease in thermal forcing along the ice drafts.
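The leading mechanism described above, tidal currents enhancing ice/ocean exchange, can be sketched with a velocity-dependent melt law (all constants are illustrative placeholders, not the paper's values): the friction velocity combines the mean and tidal currents in quadrature.

```python
import math

def melt_rate(u_mean, t_forcing, u_tide=0.0, cd=2.5e-3, gamma_t=1.1e-2):
    """Velocity-dependent ice-shelf melt sketch (hypothetical constants):
    melt ~ gamma_T * u_star * thermal_forcing, with the friction velocity
    u_star = sqrt(cd) * sqrt(u_mean^2 + u_tide^2), so that tidal currents
    enhance the turbulent exchange at the ice/ocean interface."""
    u_star = math.sqrt(cd) * math.sqrt(u_mean ** 2 + u_tide ** 2)
    return gamma_t * u_star * t_forcing

base = melt_rate(0.05, 2.0)                # melt-driven circulation only
tidal = melt_rate(0.05, 2.0, u_tide=0.03)  # add a modest tidal current
print(f"tides increase melt by {100 * (tidal / base - 1):.0f}%")
```

Prescribing `u_tide` from the barotropic velocities of a tide model, as the abstract suggests, would give this term a realistic spatial pattern.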

  17. Modeling the energy balance in Marseille: Sensitivity to roughness length parameterizations and thermal admittance

    Science.gov (United States)

    Demuzere, M.; De Ridder, K.; van Lipzig, N. P. M.

    2008-08-01

    During the ESCOMPTE campaign (Experience sur Site pour COntraindre les Modeles de Pollution atmospherique et de Transport d'Emissions), a 4-day intensive observation period was selected to evaluate the Advanced Regional Prediction System (ARPS), a nonhydrostatic meteorological mesoscale model that was optimized with a parameterization for thermal roughness length to better represent urban surfaces. The evaluation shows that the ARPS model is able to correctly reproduce temperature, wind speed, and direction for one urban and two rural measurement stations. Furthermore, simulated heat fluxes show good agreement with the observations, although simulated sensible heat fluxes were initially too low for the urban stations. In order to improve the latter, different roughness length parameterization schemes were tested, combined with various thermal admittance values. This sensitivity study showed that the Zilitinkevich scheme combined with an intermediate value of thermal admittance performs best.

  18. PHOTOGRAMMETRIC APPROACH IN DETERMINING BEAM-COLUMN CONNECTION DEFORMATIONS

    Directory of Open Access Journals (Sweden)

    Ali Koken

    Full Text Available In accordance with the advances in technology, displacement calculation techniques are ever developing. Photogrammetry has become preferable in some new disciplines with the advances in image processing methods. In this study, the authors have used two different measurement techniques to determine the angles of rotation in beam-column connections that are subjected to reversible cyclic loading. The first is the conventional method widely used in structural mechanics experiments, where Linear Variable Differential Transformers (LVDTs) are utilized; the second is the photogrammetric measurement technique. The rotation angles were determined using these techniques in a total of ten steel beam-column connection experiments. After discussing the test procedures of the aforementioned methods, the results were presented. It was observed that the rotation angles measured by each method were very close to each other. It was concluded that the photogrammetric measurement technique could be used as an alternative to conventional methods, where electronic LVDTs are used.
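The geometric core of the photogrammetric measurement can be sketched as follows (a simplified illustration, not the authors' procedure): once two targets on the member are tracked in image coordinates, the rigid-body rotation between two load steps is the angle between the before and after target vectors.

```python
import math

def rotation_angle(p1, p2, q1, q2):
    """Rigid-body rotation (radians) of a member between two load steps,
    from the image coordinates of two tracked targets: the angle between
    the vector p1->p2 (before) and q1->q2 (after), wrapped to (-pi, pi]."""
    a1 = math.atan2(p2[1] - p1[1], p2[0] - p1[0])
    a2 = math.atan2(q2[1] - q1[1], q2[0] - q1[0])
    d = a2 - a1
    return math.atan2(math.sin(d), math.cos(d))

# Two hypothetical targets on a beam flange, rotated 0.01 rad about the origin.
theta = 0.01
p1, p2 = (100.0, 0.0), (300.0, 0.0)
rot = lambda p: (p[0] * math.cos(theta) - p[1] * math.sin(theta),
                 p[0] * math.sin(theta) + p[1] * math.cos(theta))
print(rotation_angle(p1, p2, rot(p1), rot(p2)))
```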

  19. Hawaiian Winter Workshop Proceedings of Parameterization of Small-Scale Processes Held in Manoa, Hawaii on 17-20 January 1989

    Science.gov (United States)

    1989-01-01

    scale separation, the small scales would provide a diffusivity given by the Green-Kubo formula D = ∫₀^∞ ⟨V(0)·V(t)⟩ dt, where V is the velocity followed by...the Green-Kubo convergence time can be considerably larger than the eddy under consideration. It is the nature of diffusion that if there isn't any...adopt an eddy viscosity parameterization based on SCM or even a lower level parameterization, then one does find that, by integrating the mean

  20. Graphical Derivatives and Stability Analysis for Parameterized Equilibria with Conic Constraints

    Czech Academy of Sciences Publication Activity Database

    Mordukhovich, B. S.; Outrata, Jiří; Ramírez, H. C.

    2015-01-01

    Roč. 23, č. 4 (2015), s. 687-704 ISSN 1877-0533 R&D Projects: GA ČR(CZ) GAP201/12/0671 Institutional support: RVO:67985556 Keywords : Variational analysis and optimization * Parameterized equilibria * Conic constraints * Sensitivity and stability analysis * Solution maps * Graphical derivatives * Normal and tangent cones Subject RIV: BA - General Mathematics Impact factor: 0.973, year: 2015 http://library.utia.cas.cz/separaty/2015/MTR/outrata-0449259.pdf

  1. Evaluation of two MM5-PBL parameterization for solar radiation and temperature estimation in the South-Eastern area of the Iberian Peninsula

    International Nuclear Information System (INIS)

    Ruiz-Arias, J.A.; Pozo-Vasquez, D.; Sanchez-Sanchez, N.; Hayas-Barru, A.; Tovar-Pescador, J.; Montavez, J.P.

    2008-01-01

    We study the relative performance of two different MM5-PBL parameterizations (Blackadar and MRF) in simulating hourly values of solar irradiance and temperature in the south-eastern part of the Iberian Peninsula. The evaluation was carried out throughout the different seasons of the year 2005 and for three different sky conditions: clear-sky, broken-clouds and overcast conditions. Two integrations, one per PBL parameterization, were carried out for every sky condition and season of the year, and the results were compared with observational data. Overall, the MM5 model, using either the Blackadar or the MRF PBL parameterization, proved to be a valid tool to estimate hourly values of solar radiation and temperature over the study area. The influence of the PBL parameterization on the model estimates was found to be more important for solar radiation than for temperature, and highly dependent on the season and sky conditions. Particularly, a detailed analysis revealed that, during broken-clouds conditions, the ability of the model to reproduce hourly changes in solar radiation strongly depends upon the selected PBL parameterization. Additionally, it was found that solar radiation RMSE values are about one order of magnitude higher during broken-clouds and overcast conditions than under clear-sky conditions. For temperature, the two PBL parameterizations provide very similar estimates. Only under overcast conditions and during autumn does the MRF provide significantly better estimates.

  2. Identification Of The Epileptogenic Zone From Stereo-EEG Signals: A Connectivity-Graph Theory Approach

    Directory of Open Access Journals (Sweden)

    Ferruccio ePanzica

    2013-11-01

    Full Text Available In the context of focal drug-resistant epilepsies, the surgical resection of the epileptogenic zone (EZ), the cortical region responsible for seizure onset, early organization and propagation, may be the only therapeutic option for reducing or suppressing seizures. The rather high rate of failure in epilepsy surgery of extra-temporal epilepsies highlights that the precise identification of the EZ, a mandatory objective to achieve seizure freedom, is still an unsolved problem that requires more sophisticated methods of investigation. Despite the wide range of non-invasive investigations, intracranial stereo-EEG (SEEG) recordings still represent, in many patients, the gold standard for EZ identification. In this context, EZ localization is still based on visual analysis of SEEG, which is inevitably subjective and strongly time-consuming. Over the last years, considerable efforts have been made to develop advanced signal analysis techniques able to improve the identification of the EZ. Particular attention has been paid to those methods aimed at quantifying and characterising the interactions and causal relationships between neuronal populations, since it is nowadays well established that epileptic phenomena are associated with abnormal changes in brain synchronisation mechanisms, and initial evidence has shown the suitability of this approach for EZ localisation. The aim of this review is to provide an overview of the different EEG signal processing methods applied to study connectivity between distinct brain cortical regions, namely in focal epilepsies. In addition, with the aim of localizing the EZ, the approach based on graph theory will be described, since the study of the topological properties of networks has strongly improved the study of brain connectivity mechanisms.

  3. Parameterization of cloud droplet formation for global and regional models: including adsorption activation from insoluble CCN

    Directory of Open Access Journals (Sweden)

    P. Kumar

    2009-04-01

    Full Text Available Dust and black carbon aerosol have long been known to exert potentially important and diverse impacts on cloud droplet formation. Most studies to date focus on the soluble fraction of these particles, and overlook interactions of the insoluble fraction with water vapor (even if known to be hydrophilic). To address this gap, we developed a new parameterization that considers cloud droplet formation within an ascending air parcel containing insoluble (but wettable) particles externally mixed with aerosol containing an appreciable soluble fraction. Activation of particles with a soluble fraction is described through well-established Köhler theory, while the activation of hydrophilic insoluble particles is treated by "adsorption-activation" theory. In the latter, water vapor is adsorbed onto insoluble particles, the activity of which is described by a multilayer Frenkel-Halsey-Hill (FHH) adsorption isotherm modified to account for particle curvature. We further develop FHH activation theory to (i) find combinations of the adsorption parameters A_FHH and B_FHH which yield atmospherically relevant behavior, and (ii) express activation properties (critical supersaturation) that follow a simple power law with respect to dry particle diameter.

    The new parameterization is tested by comparing the parameterized cloud droplet number concentration against predictions with a detailed numerical cloud model, considering a wide range of particle populations, cloud updraft conditions, water vapor condensation coefficient and FHH adsorption isotherm characteristics. The agreement between parameterization and parcel model is excellent, with an average error of 10% and R2~0.98. A preliminary sensitivity study suggests that the sublinear response of droplet number to Köhler particle concentration is not as strong for FHH particles.
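The adsorption-activation idea can be sketched numerically (an illustrative toy, with made-up constants rather than the paper's fitted A_FHH, B_FHH values): the equilibrium supersaturation over a wetted insoluble particle is a Kelvin term minus an FHH adsorption term, and the critical supersaturation is the maximum of that curve over wet diameter.

```python
def fhh_equilibrium_ss(dp, d_dry, a_kelvin=2.1e-9, a_fhh=2.25, b_fhh=1.2,
                       d_w=2.75e-10):
    """Equilibrium (fractional) supersaturation over a wetted insoluble
    particle in the FHH picture: Kelvin curvature term minus the FHH
    adsorption term, theta = number of adsorbed water monolayers.
    All constants are illustrative, not the paper's values."""
    theta = (dp - d_dry) / (2.0 * d_w)
    return a_kelvin / dp - a_fhh * theta ** (-b_fhh)

def critical_ss(d_dry):
    """Scan wet diameters for the maximum of the equilibrium curve."""
    best, dp = -1.0, d_dry * 1.0001
    while dp < 100 * d_dry:
        best = max(best, fhh_equilibrium_ss(dp, d_dry))
        dp *= 1.001
    return best

# Critical supersaturation falls with dry diameter, roughly as a power law.
for d in (50e-9, 100e-9, 200e-9):
    print(f"{d * 1e9:.0f} nm  s_c = {100 * critical_ss(d):.3f}%")
```

A production parameterization would replace the brute-force scan with the closed-form power law the paper derives.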

  4. Radiological approach to systemic connective tissue diseases

    Energy Technology Data Exchange (ETDEWEB)

    Wiesmann, W; Schneider, M

    1988-07-01

    Systemic lupus erythematosus (SLE) and progressive systemic sclerosis (PSS) represent the most frequent manifestations of systemic connective tissue diseases (collagen diseases). Radiological examinations are employed to estimate the extension and degree of the pathological process. In addition, progression of the disease can be verified. In both of the above collagen diseases, specific radiological findings can be observed that permit them to be differentiated from other entities. An algorithm for the adequate radiological work-up of collagen diseases is presented.

  5. GHI calculation sensitivity on microphysics, land- and cumulus parameterization in WRF over the Reunion Island

    Science.gov (United States)

    De Meij, A.; Vinuesa, J.-F.; Maupas, V.

    2018-05-01

    The sensitivity of calculated global horizontal irradiation (GHI) values to different microphysics and dynamics schemes in the Weather Research and Forecasting (WRF) model is studied. Thirteen sensitivity simulations were performed in which the microphysics, cumulus parameterization schemes and land surface models were changed. Firstly, we evaluated the model's performance by comparing calculated GHI values for the Base Case with observations for the Reunion Island for 2014. In general, the model calculates the largest bias during the austral summer. This indicates that the model is less accurate in timing the formation and dissipation of clouds during the summer, when higher water vapor quantities are present in the atmosphere than during the austral winter. Secondly, the model's sensitivity to changing the microphysics, cumulus parameterization and land surface models is evaluated for the calculated GHI values. The sensitivity simulations showed that changing the microphysics from the Thompson scheme (or the Single-Moment 6-class scheme) to the Morrison double-moment scheme improves the relative bias from 45% to 10%. The underlying reason for this improvement is that the Morrison double-moment scheme predicts both the mass and number concentrations of five hydrometeors, which helps to improve the calculation of the densities, size and lifetime of the cloud droplets, while the single-moment schemes predict only the mass, and for fewer hydrometeors. Changing the cumulus parameterization schemes and land surface models does not have a large impact on the GHI calculations.
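The skill score quoted above (45% improving to 10%) can be computed with one common definition of relative bias; the abstract does not state the exact formula used, so this is an assumption, with made-up sample values.

```python
def relative_bias(model, obs):
    """Relative bias (%) of modelled GHI against observations:
    sum of (model - obs) over the sum of obs. One common convention;
    the paper may define its metric differently."""
    return 100.0 * sum(m - o for m, o in zip(model, obs)) / sum(obs)

# Hypothetical hourly GHI values in W/m^2.
obs = [450.0, 600.0, 550.0, 400.0]
model = [500.0, 680.0, 600.0, 420.0]
print(f"{relative_bias(model, obs):.1f}%")
```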

  6. A stratiform cloud parameterization for General Circulation Models

    International Nuclear Information System (INIS)

    Ghan, S.J.; Leung, L.R.; Chuang, C.C.; Penner, J.E.; McCaa, J.

    1994-01-01

    The crude treatment of clouds in General Circulation Models (GCMs) is widely recognized as a major limitation in the application of these models to predictions of global climate change. The purpose of this project is to develop a parameterization for stratiform clouds in GCMs that expresses stratiform clouds in terms of bulk microphysical properties and their subgrid variability. In this parameterization, precipitating cloud species are distinguished from non-precipitating species, and the liquid phase is distinguished from the ice phase. The size of the non-precipitating cloud particles (which influences both the cloud radiative properties and the conversion of non-precipitating cloud species to precipitating species) is determined by predicting both the mass and number concentrations of each species

  7. Phenomenology of convection-parameterization closure

    Directory of Open Access Journals (Sweden)

    J.-I. Yano

    2013-04-01

    Full Text Available Closure is a problem of defining the convective intensity in a given parameterization. In spite of many years of efforts and progress, it is still considered an overall unresolved problem. The present article reviews this problem from phenomenological perspectives. The physical variables that may contribute in defining the convective intensity are listed, and their statistical significances identified by observational data analyses are reviewed. A possibility is discussed for identifying a correct closure hypothesis by performing a linear stability analysis of tropical convectively coupled waves with various different closure hypotheses. Various individual theoretical issues are considered from various different perspectives. The review also emphasizes that the dominant physical factors controlling convection differ between the tropics and extra-tropics, as well as between oceanic and land areas. Both observational as well as theoretical analyses, often focused on the tropics, do not necessarily lead to conclusions consistent with our operational experiences focused on midlatitudes. Though we emphasize the importance of the interplays between these observational, theoretical and operational perspectives, we also face challenges for establishing a solid research framework that is universally applicable. An energy cycle framework is suggested as such a candidate.

  8. IR Optics Measurement with Linear Coupling's Action-Angle Parameterization

    CERN Document Server

    Luo, Yun; Pilat, Fulvia Caterina; Satogata, Todd; Trbojevic, Dejan

    2005-01-01

    The interaction region (IR) optics are measured with the two DX/BPMs close to the IPs at the Relativistic Heavy Ion Collider (RHIC). The beta functions at the IP are measured via the two eigenmodes' phase advances between the two BPMs, and the beta waists are also determined from the beta functions at the two BPMs. The coupling parameters at the IPs are likewise obtained through the linear coupling's action-angle parameterization. All the experimental data are taken during driven oscillations with the AC dipole. The methods used for these measurements are discussed, along with the measurement results during the beta*

  9. Non-perturbative Aspects of QCD and Parameterized Quark Propagator

    Institute of Scientific and Technical Information of China (English)

    HAN Ding-An; ZHOU Li-Juan; ZENG Ya-Guang; GU Yun-Ting; CAO Hui; MA Wei-Xing; MENG Cheng-Ju; PAN Ji-Huan

    2008-01-01

    Based on the Global Color Symmetry Model, the non-perturbative QCD vacuum is investigated with the parameterized fully dressed quark propagator. Our theoretical predictions for various quantities characterizing the QCD vacuum are in agreement with those predicted by many other phenomenological QCD-inspired models. The successful predictions clearly indicate the extensive validity of the parameterized quark propagator used here. A detailed discussion of the arbitrariness in determining the integration cut-off parameter in calculating QCD vacuum condensates is given, and a method that avoids the dependence of the calculated results on the cut-off parameter is also strongly recommended to readers.

  10. CHEM2D-OPP: A new linearized gas-phase ozone photochemistry parameterization for high-altitude NWP and climate models

    Directory of Open Access Journals (Sweden)

    J. P. McCormack

    2006-01-01

    Full Text Available The new CHEM2D-Ozone Photochemistry Parameterization (CHEM2D-OPP) for high-altitude numerical weather prediction (NWP) systems and climate models specifies the net ozone photochemical tendency and its sensitivity to changes in ozone mixing ratio, temperature and overhead ozone column based on calculations from the CHEM2D interactive middle atmospheric photochemical transport model. We evaluate CHEM2D-OPP performance using both short-term (6-day) and long-term (1-year) stratospheric ozone simulations with the prototype high-altitude NOGAPS-ALPHA forecast model. An inter-comparison of NOGAPS-ALPHA 6-day ozone hindcasts for 7 February 2005 with ozone photochemistry parameterizations currently used in operational NWP systems shows that CHEM2D-OPP yields the best overall agreement with both individual Aura Microwave Limb Sounder ozone profile measurements and independent hemispheric (10°–90° N) ozone analysis fields. A 1-year free-running NOGAPS-ALPHA simulation using CHEM2D-OPP produces a realistic seasonal cycle in zonal mean ozone throughout the stratosphere. We find that the combination of a model cold temperature bias at high latitudes in winter and a warm bias in the CHEM2D-OPP temperature climatology can degrade the performance of the linearized ozone photochemistry parameterization over seasonal time scales despite the fact that the parameterized temperature dependence is weak in these regions.
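A linearized ozone photochemistry parameterization of this kind is, in essence, a first-order Taylor expansion of the net production-minus-loss rate about a climatological state. A sketch of that structure (coefficient values are illustrative placeholders, not CHEM2D-OPP's tables):

```python
def ozone_tendency(o3, t, col, clim):
    """Linearized net photochemical tendency, CHEM2D-OPP style:
    (P - L) expanded to first order about climatological ozone mixing
    ratio, temperature, and overhead column. `clim` holds the reference
    state and the three sensitivity coefficients (illustrative numbers)."""
    return (clim["pl0"]
            + clim["d_o3"] * (o3 - clim["o3_0"])
            + clim["d_t"] * (t - clim["t_0"])
            + clim["d_col"] * (col - clim["col_0"]))

clim = dict(pl0=0.0, o3_0=5.0e-6, t_0=230.0, col_0=250.0,
            d_o3=-1.0 / (30 * 86400),   # photochemical relaxation, ~30 days
            d_t=-1.0e-13, d_col=-2.0e-15)

# A positive ozone anomaly relaxes back toward climatology (negative tendency).
print(ozone_tendency(6.0e-6, 230.0, 250.0, clim))
```

In an NWP model the `clim` coefficients would be looked up per latitude, altitude, and month from the offline photochemical model.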

  11. Observations of surface momentum exchange over the marginal-ice-zone and recommendations for its parameterization

    Science.gov (United States)

    Elvidge, A. D.; Renfrew, I. A.; Weiss, A. I.; Brooks, I. M.; Lachlan-Cope, T. A.; King, J. C.

    2015-10-01

    Comprehensive aircraft observations are used to characterise surface roughness over the Arctic marginal ice zone (MIZ) and consequently make recommendations for the parameterization of surface momentum exchange in the MIZ. These observations were gathered in the Barents Sea and Fram Strait from two aircraft as part of the Aerosol-Cloud Coupling And Climate Interactions in the Arctic (ACCACIA) project. They represent a doubling of the total number of such aircraft observations currently available over the Arctic MIZ. The eddy covariance method is used to derive estimates of the 10 m neutral drag coefficient (CDN10) from turbulent wind velocity measurements, and a novel method using albedo and surface temperature is employed to derive ice fraction. Peak surface roughness is found at ice fractions in the range 0.6 to 0.8 (with a mean interquartile range in CDN10 of 1.25 to 2.85 × 10^-3). CDN10 as a function of ice fraction is found to be well approximated by the negatively skewed distribution provided by a leading parameterization scheme (Lüpkes et al., 2012) tailored for sea ice drag over the MIZ, in which the two constituent components of drag - skin and form drag - are separately quantified. Current parameterization schemes used in weather and climate models are compared with our results, and the majority are found to be physically unjustified and unrepresentative. The Lüpkes et al. (2012) scheme is recommended in a computationally simple form, with adjusted parameter settings. A good agreement is found to hold for subsets of the data from different locations despite differences in sea ice conditions. Ice conditions in the Barents Sea, characterised by small, unconsolidated ice floes, are found to be associated with higher CDN10 values - especially at the higher ice fractions - than those of Fram Strait, where typically larger, smoother floes are observed. Consequently, the important influence of sea ice morphology and floe size on surface roughness is
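The skin-plus-form-drag decomposition can be sketched as follows (a simplified toy in the spirit of, but not identical to, the Lüpkes et al. (2012) scheme; the shape `a*(1-a)**beta` and all coefficients are placeholders): skin drag blends open-water and pack-ice values by area, while form drag from floe edges peaks at intermediate ice fraction.

```python
def cdn10(ice_fraction, cd_water=1.5e-3, cd_ice=1.6e-3,
          cd_form_max=2.0e-3, beta=0.4):
    """Mosaic 10 m neutral drag coefficient sketch: area-weighted skin
    drag plus an edge form-drag term a*(1-a)**beta that vanishes over
    open water (a=0) and consolidated pack ice (a=1). With beta=0.4 the
    form-drag peak sits near a = 1/(1+beta) ~ 0.71, i.e. skewed toward
    the 0.6-0.8 range reported in the abstract. Constants illustrative."""
    a = ice_fraction
    skin = (1.0 - a) * cd_water + a * cd_ice
    form = cd_form_max * a * (1.0 - a) ** beta
    return skin + form

for a in (0.0, 0.4, 0.7, 1.0):
    print(f"A = {a:.1f}  CDN10 = {cdn10(a):.2e}")
```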

  12. Radiological approach to systemic connective tissue diseases

    International Nuclear Information System (INIS)

    Wiesmann, W.; Schneider, M.

    1988-01-01

    Systemic lupus erythematosus (SLE) and progressive systemic sclerosis (PSS) represent the most frequent manifestations of systemic connective tissue diseases (collagen diseases). Radiological examinations are employed to estimate the extent and degree of the pathological process. In addition, progression of the disease can be verified. In both of the above collagen diseases, specific radiological findings can be observed that permit them to be differentiated from other entities. An algorithm for the adequate radiological work-up of collagen diseases is presented. (orig.)

  13. Anticipation-related brain connectivity in bipolar and unipolar depression: a graph theory approach.

    Science.gov (United States)

    Manelis, Anna; Almeida, Jorge R C; Stiffler, Richelle; Lockovich, Jeanette C; Aslam, Haris A; Phillips, Mary L

    2016-09-01

    Bipolar disorder is often misdiagnosed as major depressive disorder, which leads to inadequate treatment. Depressed individuals, compared with healthy control subjects, show increased expectation of negative outcomes. Due to increased impulsivity and risk for mania, however, depressed individuals with bipolar disorder may differ from those with major depressive disorder in the neural mechanisms underlying anticipation processes. Graph theory methods for neuroimaging data analysis allow the identification of connectivity between multiple brain regions without prior model specification, and may help to identify neurobiological markers differentiating these disorders, thereby facilitating development of better therapeutic interventions. This study aimed to compare brain connectivity among regions involved in win/loss anticipation in depressed individuals with bipolar disorder (BDD) versus depressed individuals with major depressive disorder (MDD) versus healthy control subjects using graph theory methods. The study was conducted at the University of Pittsburgh Medical Center and included 31 BDD, 39 MDD, and 36 healthy control subjects. Participants were scanned while performing a number guessing reward task that included periods of win and loss anticipation. We first identified the anticipatory network across all 106 participants by contrasting brain activation during all anticipation periods (win anticipation + loss anticipation) versus baseline, and win anticipation versus loss anticipation. Brain connectivity within the identified network was determined using the Independent Multiple sample Greedy Equivalence Search (IMaGES) and Linear non-Gaussian Orientation, Fixed Structure (LOFS) algorithms. Density of connections (the number of connections in the network), path length, and the global connectivity direction ('top-down' versus 'bottom-up') were compared across groups (BDD/MDD/healthy control subjects) and conditions (win/loss anticipation). These analyses showed that
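The two scalar network summaries compared across groups here, density of connections and path length, can be computed from a directed graph with only the standard library. The regions and edges below are hypothetical stand-ins for the IMaGES/LOFS output, which the study derives from fMRI data.

```python
# Minimal sketch of the graph summaries compared across groups: density
# of directed connections and characteristic (mean shortest) path length.
from collections import deque

def density(nodes, edges):
    """Fraction of possible directed connections that are present."""
    n = len(nodes)
    return len(edges) / (n * (n - 1))

def avg_path_length(nodes, edges):
    """Mean shortest-path length over all connected ordered pairs (BFS)."""
    adj = {u: [] for u in nodes}
    for u, v in edges:
        adj[u].append(v)
    total, pairs = 0, 0
    for src in nodes:
        dist = {src: 0}
        q = deque([src])
        while q:
            u = q.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    q.append(v)
        for tgt, d in dist.items():
            if tgt != src:
                total += d
                pairs += 1
    return total / pairs

# Toy anticipatory network (hypothetical regions and edges).
nodes = ["vStr", "ACC", "dlPFC", "OFC"]
edges = [("vStr", "ACC"), ("ACC", "dlPFC"), ("vStr", "OFC")]
print(density(nodes, edges), avg_path_length(nodes, edges))
```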

  14. Impact of APEX parameterization and soil data on runoff, sediment, and nutrients transport assessment

    Science.gov (United States)

    Hydrological models have become essential tools for environmental assessments. This study’s objective was to evaluate a best professional judgment (BPJ) parameterization of the Agricultural Policy and Environmental eXtender (APEX) model with soil-survey data against the calibrated model with either ...

  15. Parameterization of interatomic potential by genetic algorithms: A case study

    Energy Technology Data Exchange (ETDEWEB)

    Ghosh, Partha S., E-mail: psghosh@barc.gov.in; Arya, A.; Dey, G. K. [Materials Science Division, Bhabha Atomic Research Centre, Mumbai-400085 (India); Ranawat, Y. S. [Department of Ceramic Engineering, Indian Institute of Technology (BHU), Varanasi-221005 (India)

    2015-06-24

    A framework for a Genetic Algorithm (GA) based methodology is developed to systematically obtain and optimize parameters of interatomic force field functions for MD simulations by fitting to a reference database. This methodology is applied to the fitting of ThO{sub 2} (CaF{sub 2} prototype) – a representative of ceramic-based potential fuel for nuclear applications. The resulting GA-optimized parameterization of ThO{sub 2} is able to capture basic structural, mechanical and thermo-physical properties, and also describes defect structures within the permissible range.
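A toy version of such a GA fitting loop might look as follows. The two-parameter pair potential and the synthetic "reference database" are hypothetical stand-ins for the ThO2 force field and its fitting data; the selection/crossover/mutation structure is the generic GA pattern, not the authors' specific implementation.

```python
# Toy sketch of GA-based potential fitting: candidate parameter sets are
# scored against a reference database and evolved by elitist selection,
# averaging crossover, and Gaussian mutation.
import random

random.seed(0)

def pair_energy(r, eps, sigma):
    # Lennard-Jones-form pair potential used as the fitting target.
    return 4 * eps * ((sigma / r) ** 12 - (sigma / r) ** 6)

# Synthetic reference database: energies generated from known parameters.
TRUE = (1.0, 2.0)
RS = [1.9 + 0.1 * i for i in range(12)]
REF = [pair_energy(r, *TRUE) for r in RS]

def fitness(p):
    # Negative sum of squared errors against the reference data.
    return -sum((pair_energy(r, *p) - e) ** 2 for r, e in zip(RS, REF))

def evolve(pop_size=40, generations=60):
    pop = [(random.uniform(0.5, 2.0), random.uniform(1.5, 2.5))
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 4]            # elitist selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            child = ((a[0] + b[0]) / 2, (a[1] + b[1]) / 2)   # crossover
            child = (child[0] + random.gauss(0, 0.02),       # mutation
                     child[1] + random.gauss(0, 0.02))
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

best = evolve()
print(best)
```

Because the elite parents are carried over unchanged, the best fitness is monotonically non-decreasing across generations.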

  16. Impact of a Stochastic Parameterization Scheme on El Nino-Southern Oscillation in the Community Climate System Model

    Science.gov (United States)

    Christensen, H. M.; Berner, J.; Sardeshmukh, P. D.

    2017-12-01

    Stochastic parameterizations have been used for more than a decade in atmospheric models. They provide a way to represent model uncertainty through representing the variability of unresolved sub-grid processes, and have been shown to have a beneficial effect on the spread and mean state for medium- and extended-range forecasts. There is increasing evidence that stochastic parameterization of unresolved processes can improve the bias in mean and variability, e.g. by introducing a noise-induced drift (nonlinear rectification), and by changing the residence time and structure of flow regimes. We present results showing the impact of including the Stochastically Perturbed Parameterization Tendencies scheme (SPPT) in coupled runs of the National Center for Atmospheric Research (NCAR) Community Atmosphere Model, version 4 (CAM4) with historical forcing. SPPT results in a significant improvement in the representation of the El Nino-Southern Oscillation in CAM4, improving the power spectrum, as well as both the inter- and intra-annual variability of tropical Pacific sea surface temperatures. We use a Linear Inverse Modelling framework to gain insight into the mechanisms by which SPPT has improved ENSO variability.
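The core SPPT idea, multiplying the total parameterized tendency by (1 + r) where r is a temporally correlated, mean-zero random pattern, can be sketched in one dimension. The AR(1) coefficients below are illustrative, not the operational SPPT settings (which also impose spatial correlation and vertical tapering).

```python
# 1-D sketch of SPPT: perturb the physics tendency multiplicatively with
# a mean-zero AR(1) process so perturbations are correlated in time.
import math
import random

random.seed(42)

phi = 0.9      # lag-1 autocorrelation of the perturbation (illustrative)
sigma = 0.3    # stationary standard deviation of r (illustrative)

def ar1_series(n):
    r, out = 0.0, []
    for _ in range(n):
        r = phi * r + sigma * math.sqrt(1 - phi ** 2) * random.gauss(0, 1)
        out.append(r)
    return out

tendency = 2.0e-5   # unperturbed physics tendency (arbitrary units)
rs = ar1_series(10000)
perturbed = [(1 + r) * tendency for r in rs]
mean_r = sum(rs) / len(rs)
print(f"mean r = {mean_r:+.4f} (near zero, so no mean drift by design)")
```

Scaling the noise by sqrt(1 - phi^2) keeps the stationary standard deviation of r equal to sigma regardless of the chosen autocorrelation.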

  17. Learning connective-based word representations for implicit discourse relation identification

    DEFF Research Database (Denmark)

    Braud, Chloé Elodie; Denis, Pascal

    2016-01-01

    We introduce a simple semi-supervised approach to improve implicit discourse relation identification. This approach harnesses large amounts of automatically extracted discourse connectives along with their arguments to construct new distributional word representations. Specifically, we represent words in the space of discourse connectives as a way to directly encode their rhetorical function. Experiments on the Penn Discourse Treebank demonstrate the effectiveness of these task-tailored representations in predicting implicit discourse relations. Our results indeed show that, despite their simplicity, these connective-based representations outperform various off-the-shelf word embeddings, and achieve state-of-the-art performance on this problem.

  18. Framework of cloud parameterization including ice for 3-D mesoscale models

    Energy Technology Data Exchange (ETDEWEB)

    Levkov, L; Jacob, D; Eppel, D; Grassl, H

    1989-01-01

    A parameterization scheme for the simulation of ice in clouds is incorporated into the hydrostatic version of the GKSS three-dimensional mesoscale model. Numerical simulations of precipitation are performed over the North Sea, the Hawaiian trade wind area and the region of the intertropical convergence zone. Not only some major features of convective structures in all three areas but also cloud-aerosol interactions have successfully been simulated. (orig.) With 19 figs., 2 tabs.

  19. WRF model sensitivity to choice of parameterization: a study of the `York Flood 1999'

    Science.gov (United States)

    Remesan, Renji; Bellerby, Tim; Holman, Ian; Frostick, Lynne

    2015-10-01

    Numerical weather modelling has gained considerable attention in the field of hydrology, especially in un-gauged catchments and in conjunction with distributed models. As a consequence, the accuracy with which these models represent precipitation, sub-grid-scale processes and exceptional events has become of considerable concern to the hydrological community. This paper presents sensitivity analyses for the Weather Research and Forecasting (WRF) model with respect to the choice of physical parameterization schemes (both cumulus parameterization schemes (CPSs) and microphysics parameterization schemes (MPSs)) used to represent the `1999 York Flood' event, which occurred over North Yorkshire, UK, 1st-14th March 1999. The study assessed four CPSs (Kain-Fritsch (KF2), Betts-Miller-Janjic (BMJ), Grell-Devenyi ensemble (GD) and the old Kain-Fritsch (KF1)) and four MPSs (Kessler, Lin et al., WRF single-moment 3-class (WSM3) and WRF single-moment 5-class (WSM5)) with respect to their influence on modelled rainfall. The study suggests that the BMJ scheme may be a better cumulus parameterization choice for the study region, giving a consistently better performance than the other three CPSs, though there are suggestions of underestimation. WSM3 was identified as the best MPS, and a combined WSM3/BMJ model setup produced realistic estimates of precipitation quantities for this exceptional flood event. This study analysed spatial variability in WRF performance through categorical indices, including POD, FBI, FAR and CSI, during York Flood 1999 under various model settings. Moreover, the WRF model was good at predicting high-intensity rare events over the Yorkshire region, suggesting it has potential for operational use.
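The categorical indices mentioned here (POD, FBI, FAR and CSI) all derive from a 2x2 contingency table of forecast versus observed rain occurrence. A minimal sketch with hypothetical counts:

```python
# Standard categorical verification scores from a forecast/observation
# contingency table: hits, misses, and false alarms.
def categorical_scores(hits, misses, false_alarms):
    pod = hits / (hits + misses)                    # probability of detection
    far = false_alarms / (hits + false_alarms)      # false alarm ratio
    fbi = (hits + false_alarms) / (hits + misses)   # frequency bias index
    csi = hits / (hits + misses + false_alarms)     # critical success index
    return pod, far, fbi, csi

# Hypothetical counts for one WRF configuration over one grid domain:
pod, far, fbi, csi = categorical_scores(hits=40, misses=10, false_alarms=20)
print(pod, far, fbi, csi)  # 0.8, 0.333..., 1.2, 0.571...
```

FBI above 1 indicates over-forecasting of rain occurrence; CSI penalises both misses and false alarms, which is why it is often preferred as a single summary score.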

  20. Subgrid Parameterization of the Soil Moisture Storage Capacity for a Distributed Rainfall-Runoff Model

    Directory of Open Access Journals (Sweden)

    Weijian Guo

    2015-05-01

    Spatial variability plays an important role in nonlinear hydrologic processes. Due to the limitation of computational efficiency and data resolution, subgrid variability is usually assumed to be uniform for most grid-based rainfall-runoff models, which leads to the scale-dependence of model performances. In this paper, the scale effect on the Grid-Xinanjiang model was examined. The bias of the estimation of precipitation, runoff, evapotranspiration and soil moisture at the different grid scales, along with the scale-dependence of the effective parameters, highlights the importance of well representing the subgrid variability. This paper presents a subgrid parameterization method to incorporate the subgrid variability of the soil storage capacity, which is a key variable that controls runoff generation and partitioning in the Grid-Xinanjiang model. In light of the similar spatial pattern and physical basis, the soil storage capacity is correlated with the topographic index, whose spatial distribution can more readily be measured. A beta distribution is introduced to represent the spatial distribution of the soil storage capacity within the grid. The results derived from the Yanduhe Basin show that the proposed subgrid parameterization method can effectively correct the watershed soil storage capacity curve. Compared to the original Grid-Xinanjiang model, the model performances are quite consistent at the different grid scales when the subgrid variability is incorporated. This subgrid parameterization method reduces the recalibration necessity when the Digital Elevation Model (DEM) resolution is changed. Moreover, it improves the potential for the application of the distributed model in the ungauged basin.
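The beta-distribution step can be sketched as follows: if the soil storage capacity within a grid cell (scaled to [0, 1]) is beta-distributed, the saturated-area fraction at a given storage level is the beta CDF evaluated at that level. The shape parameters below are illustrative, not values calibrated for the Yanduhe Basin.

```python
# Sketch of a beta-distributed subgrid soil storage capacity: the
# saturated-area fraction is the beta CDF, here computed by trapezoidal
# integration of the pdf (standard library only).
import math

def beta_pdf(x, a, b):
    coeff = math.gamma(a + b) / (math.gamma(a) * math.gamma(b))
    return coeff * x ** (a - 1) * (1 - x) ** (b - 1)

def saturated_fraction(w, a, b, n=2000):
    """Beta CDF at w: fraction of the cell whose (scaled) storage
    capacity is already filled at storage level w."""
    if w <= 0:
        return 0.0
    xs = [w * i / n for i in range(n + 1)]
    ys = [beta_pdf(max(x, 1e-12), a, b) for x in xs]
    return sum((ys[i] + ys[i + 1]) / 2 for i in range(n)) * (w / n)

# Example: moderately right-skewed capacity distribution within the cell.
for w in (0.25, 0.5, 0.75, 1.0):
    print(w, round(saturated_fraction(w, a=2.0, b=3.0), 3))
```

The saturated fraction rises monotonically from 0 to 1 as the cell fills, which is exactly the watershed storage capacity curve behaviour the parameterization is meant to correct.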

  1. A universal approach to electrically connecting nanowire arrays using nanoparticles—application to a novel gas sensor architecture

    Science.gov (United States)

    Parthangal, Prahalad M.; Cavicchi, Richard E.; Zachariah, Michael R.

    2006-08-01

    We report on a novel, in situ approach toward connecting and electrically contacting vertically aligned nanowire arrays using conductive nanoparticles. The utility of the approach is demonstrated by development of a gas sensing device employing this nano-architecture. Well-aligned, single-crystalline zinc oxide nanowires were grown through a direct thermal evaporation process at 550 °C on gold catalyst layers. Electrical contact to the top of the nanowire array was established by creating a contiguous nanoparticle film through electrostatic attachment of conductive gold nanoparticles exclusively onto the tips of nanowires. A gas sensing device was constructed using such an arrangement and the nanowire assembly was found to be sensitive to both reducing (methanol) and oxidizing (nitrous oxides) gases. This assembly approach is amenable to any nanowire array for which a top contact electrode is needed.

  2. A universal approach to electrically connecting nanowire arrays using nanoparticles-application to a novel gas sensor architecture

    International Nuclear Information System (INIS)

    Parthangal, Prahalad M; Cavicchi, Richard E; Zachariah, Michael R

    2006-01-01

    We report on a novel, in situ approach toward connecting and electrically contacting vertically aligned nanowire arrays using conductive nanoparticles. The utility of the approach is demonstrated by development of a gas sensing device employing this nano-architecture. Well-aligned, single-crystalline zinc oxide nanowires were grown through a direct thermal evaporation process at 550 deg. C on gold catalyst layers. Electrical contact to the top of the nanowire array was established by creating a contiguous nanoparticle film through electrostatic attachment of conductive gold nanoparticles exclusively onto the tips of nanowires. A gas sensing device was constructed using such an arrangement and the nanowire assembly was found to be sensitive to both reducing (methanol) and oxidizing (nitrous oxides) gases. This assembly approach is amenable to any nanowire array for which a top contact electrode is needed

  3. Merging Approaches to Explore Connectivity in the Anemonefish, Amphiprion bicinctus, along the Saudi Arabian Coast of the Red Sea

    KAUST Repository

    Nanninga, Gerrit B.

    2013-09-01

    The field of marine population connectivity is receiving growing attention from ecologists worldwide. The degree to which metapopulations are connected via larval dispersal has vital ramifications for demographic and evolutionary dynamics and largely determines the way we manage threatened coastal ecosystems. Here we addressed different questions relating to connectivity by integrating direct and indirect genetic approaches over different spatial and ecological scales in a coral reef fish in the Red Sea. We developed 35 novel microsatellite loci for our study organism the two-band anemonefish Amphiprion bicinctus (Rüppell 1830), which served as the basis of the following approaches. First, we collected nearly one thousand samples of A. bicinctus from 19 locations across 1500 km along the Saudi Arabian coast to infer population genetic structure. Genetic variability along the northern and central coast was weak, but showed a significant break at approximately 20°N. Implementing a model of isolation by environment with chlorophyll-a concentrations and geographic distance as predictors we were able to explain over 90% of the genetic variability in the data (R2 = 0.92). For the second approach we sampled 311 (c. 99%) putative parents and 172 juveniles at an isolated reef, Quita al Girsh (QG), to estimate self-recruitment using genetic parentage analysis. Additionally we collected 176 juveniles at surrounding locations to estimate larval dispersal from QG and ran a biophysical dispersal model of the system with real-time climatological forcing. In concordance with model predictions, we found a complete lack (c. 0.5%) of self-recruitment over two sampling periods within our study system, thus presenting the first empirical evidence for a largely open reef fish population. Lastly, to conceptualize different hypotheses regarding the underlying processes and mechanisms of self-recruitment versus long-distance dispersal in marine organisms with pelagic larval stages, I
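The isolation-by-environment model described here (genetic differentiation regressed on geographic distance and chlorophyll-a) amounts to ordinary least squares with two predictors. A self-contained sketch on fabricated pairwise data follows; the study's actual data, coefficients, and R² are not reproduced.

```python
# Sketch of an isolation-by-environment regression: pairwise genetic
# differentiation (Fst-like) as a linear function of geographic distance
# and chlorophyll-a difference, solved via the normal equations.
# All data points below are fabricated for illustration.

def solve3(A, b):
    # Gaussian elimination with partial pivoting for a 3x3 system.
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for col in range(3):
        piv = max(range(col, 3), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, 3):
            f = M[r][col] / M[col][col]
            for c in range(col, 4):
                M[r][c] -= f * M[col][c]
    x = [0.0] * 3
    for r in (2, 1, 0):
        x[r] = (M[r][3] - sum(M[r][c] * x[c] for c in range(r + 1, 3))) / M[r][r]
    return x

def ols(dist, chla, fst):
    # Design matrix columns: intercept, distance, chl-a difference.
    X = [[1.0, d, c] for d, c in zip(dist, chla)]
    XtX = [[sum(row[i] * row[j] for row in X) for j in range(3)]
           for i in range(3)]
    Xty = [sum(row[i] * y for row, y in zip(X, fst)) for i in range(3)]
    return solve3(XtX, Xty)

# Fabricated pairwise data (distance in km, chl-a difference, Fst-like),
# generated from a known linear relation so recovery can be checked.
dist = [100, 400, 800, 1200, 1500, 300, 900, 600]
chla = [0.1, 0.3, 0.8, 1.2, 1.5, 0.2, 1.0, 0.5]
fst = [0.002 + 1e-5 * d + 0.01 * c for d, c in zip(dist, chla)]
beta = ols(dist, chla, fst)
print([round(b, 6) for b in beta])
```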

  4. A new albedo parameterization for use in climate models over the Antarctic ice sheet

    NARCIS (Netherlands)

    Kuipers Munneke, P.|info:eu-repo/dai/nl/304831891; van den Broeke, M.R.|info:eu-repo/dai/nl/073765643; Lenaerts, J.T.M.|info:eu-repo/dai/nl/314850163; Flanner, M.G.; Gardner, A.S.; van de Berg, W.J.|info:eu-repo/dai/nl/304831611

    2011-01-01

    A parameterization for broadband snow surface albedo, based on snow grain size evolution, cloud optical thickness, and solar zenith angle, is implemented into a regional climate model for Antarctica and validated against field observations of albedo for the period 1995–2004. Over the Antarctic

  5. Improving Convection and Cloud Parameterization Using ARM Observations and NCAR Community Atmosphere Model CAM5

    Energy Technology Data Exchange (ETDEWEB)

    Zhang, Guang J. [Univ. of California, San Diego, CA (United States)

    2016-11-07

    The fundamental scientific objectives of our research are to use ARM observations and the NCAR CAM5 to understand the large-scale control on convection, and to develop improved convection and cloud parameterizations for use in GCMs.

  6. Emotional Connections in Higher Education Marketing

    Science.gov (United States)

    Durkin, Mark; McKenna, Seamas; Cummins, Darryl

    2012-01-01

    Purpose: Through examination of a case study this paper aims to describe a brand re-positioning exercise and explore how an emotionally driven approach to branding can help create meaningful connections with potential undergraduate students and can positively influence choice. Design/methodology/approach: The paper's approach is a case study…

  7. A stratiform cloud parameterization for general circulation models

    International Nuclear Information System (INIS)

    Ghan, S.J.; Leung, L.R.; Chuang, C.C.; Penner, J.E.; McCaa, J.

    1994-01-01

    The crude treatment of clouds in general circulation models (GCMs) is widely recognized as a major limitation in applying these models to predictions of global climate change. The purpose of this project is to develop in GCMs a stratiform cloud parameterization that expresses clouds in terms of bulk microphysical properties and their subgrid variability. Various cloud variables and their interactions are summarized. Precipitating cloud species are distinguished from non-precipitating species, and the liquid phase is distinguished from the ice phase. The size of the non-precipitating cloud particles (which influences both the cloud radiative properties and the conversion of non-precipitating cloud species to precipitating species) is determined by predicting both the mass and number concentrations of each species

  8. Parameterizing Urban Canopy Layer transport in a Lagrangian Particle Dispersion Model

    Science.gov (United States)

    Stöckl, Stefan; Rotach, Mathias W.

    2016-04-01

    The percentage of people living in urban areas is rising worldwide, crossed 50% in 2007 and is even higher in developed countries. High population density and numerous sources of air pollution in close proximity can lead to health issues. Therefore it is important to understand the nature of urban pollutant dispersion. In the last decades this field has experienced considerable progress, however the influence of large roughness elements is complex and has not yet been completely described. Hence, this work studied urban particle dispersion close to source and ground. It used an existing, steady state, three-dimensional Lagrangian particle dispersion model, which includes Roughness Sublayer parameterizations of turbulence and flow. The model is valid for convective and neutral to stable conditions and uses the kernel method for concentration calculation. As most Lagrangian models, its lower boundary is the zero-plane displacement, which means that roughly the lower two-thirds of the mean building height are not included in the model. This missing layer roughly coincides with the Urban Canopy Layer. An earlier work "traps" particles hitting the lower model boundary for a recirculation period, which is calculated under the assumption of a vortex in skimming flow, before "releasing" them again. The authors hypothesize that improving the lower boundary condition by including Urban Canopy Layer transport could improve model predictions. This was tested herein by not only trapping the particles, but also advecting them with a mean, parameterized flow in the Urban Canopy Layer. Now the model calculates the trapping period based on either recirculation due to vortex motion in skimming flow regimes or vertical velocity if no vortex forms, depending on incidence angle of the wind on a randomly chosen street canyon. The influence of this modification, as well as the model's sensitivity to parameterization constants, was investigated. To reach this goal, the model was
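The trap-and-advect modification can be caricatured in a few lines. All geometric and flow parameters below are illustrative, and the recirculation time scale is a deliberately crude stand-in; the actual model derives the trapping period from the flow state and the geometry of a randomly chosen street canyon.

```python
# Schematic sketch of the modified lower boundary condition: a particle
# hitting the zero-plane displacement is trapped for a recirculation
# period and advected horizontally by a mean canopy-layer wind.
import random

random.seed(1)

H = 20.0        # mean building height (m), illustrative
U_canopy = 0.8  # mean wind speed in the urban canopy layer (m/s), illustrative

def trapping_period(canyon_width, wind_speed):
    # Recirculation time scale for a vortex in skimming flow, taken here
    # (crudely) as the vortex perimeter over the driving wind speed.
    perimeter = 2 * (canyon_width + H)
    return perimeter / max(wind_speed, 0.1)

def boundary_hit(x, wind_speed):
    """Return the particle's new x after one trap-and-advect event."""
    w = random.uniform(10.0, 30.0)   # randomly chosen canyon width (m)
    tau = trapping_period(w, wind_speed)
    return x + U_canopy * tau        # advect during the trapping period

x = 0.0
for _ in range(5):
    x = boundary_hit(x, wind_speed=2.0)
print(f"displacement after 5 boundary hits: {x:.1f} m")
```

In the unmodified scheme the particle would be released at the same x after the trapping period; the added advection term is what carries canopy-layer transport into the dispersion estimate.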

  9. Magic neutrino mass matrix and the Bjorken-Harrison-Scott parameterization

    International Nuclear Information System (INIS)

    Lam, C.S.

    2006-01-01

    Observed neutrino mixing can be described by a tribimaximal MNS matrix. The resulting neutrino mass matrix in the basis of a diagonal charged lepton mass matrix is both 2-3 symmetric and magic. By a magic matrix, I mean one whose row sums and column sums are all identical. I study what happens if 2-3 symmetry is broken but the magic symmetry is kept intact. In that case, the mixing matrix is parameterized by a single complex parameter U_e3, in a form discussed recently by Bjorken, Harrison, and Scott
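The magic property is easy to verify numerically: building the mass matrix M = U diag(m) U^T from a tribimaximal mixing matrix U gives identical row and column sums, all equal to m2, because (1,1,1) is proportional to the second column of U and is therefore an eigenvector. The mass eigenvalues below are illustrative.

```python
# Numerical check of the 'magic' property: with tribimaximal mixing, the
# mass matrix M = U diag(m) U^T has all row and column sums equal to m2.
from math import sqrt

U = [[sqrt(2 / 3),  1 / sqrt(3),  0.0],
     [-1 / sqrt(6), 1 / sqrt(3), -1 / sqrt(2)],
     [-1 / sqrt(6), 1 / sqrt(3),  1 / sqrt(2)]]
m = [0.01, 0.02, 0.05]   # illustrative mass eigenvalues (eV)

# M = U diag(m) U^T (U is real orthogonal here, so U^T = U^-1).
M = [[sum(U[i][k] * m[k] * U[j][k] for k in range(3)) for j in range(3)]
     for i in range(3)]

row_sums = [sum(row) for row in M]
col_sums = [sum(M[i][j] for i in range(3)) for j in range(3)]
print(row_sums, col_sums)  # all approximately m2 = 0.02
```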

  10. Improving Mixed-phase Cloud Parameterization in Climate Model with the ACRF Measurements

    Energy Technology Data Exchange (ETDEWEB)

    Wang, Zhien [Univ. of Wyoming, Laramie, WY (United States)

    2016-12-13

    Mixed-phase cloud microphysical and dynamical processes are still poorly understood, and their representation in GCMs is a major source of uncertainties in overall cloud feedback in GCMs. Thus improving mixed-phase cloud parameterizations in climate models is critical to reducing the climate forecast uncertainties. This study aims at providing improved knowledge of mixed-phase cloud properties from the long-term ACRF observations and improving mixed-phase clouds simulations in the NCAR Community Atmosphere Model version 5 (CAM5). The key accomplishments are: 1) An improved retrieval algorithm was developed to provide liquid droplet concentration for drizzling or mixed-phase stratiform clouds. 2) A new ice concentration retrieval algorithm for stratiform mixed-phase clouds was developed. 3) A strong seasonal aerosol impact on ice generation in Arctic mixed-phase clouds was identified, which is mainly attributed to the high dust occurrence during the spring season. 4) A suite of multi-sensor algorithms was applied to long-term ARM observations at the Barrow site to provide a complete dataset (LWC and effective radius profile for liquid phase, and IWC, Dge profiles and ice concentration for ice phase) to characterize Arctic stratiform mixed-phase clouds. This multi-year stratiform mixed-phase cloud dataset provides necessary information to study related processes, evaluate model stratiform mixed-phase cloud simulations, and improve model stratiform mixed-phase cloud parameterization. 5) A new in situ data analysis method was developed to quantify liquid mass partition in convective mixed-phase clouds. For the first time, we reliably compared liquid mass partitions in stratiform and convective mixed-phase clouds. Due to the different dynamics in stratiform and convective mixed-phase clouds, the temperature dependencies of liquid mass partitions are significantly different due to much higher ice concentrations in convective mixed-phase clouds.
6) Systematic evaluations

  11. Framework to parameterize and validate APEX to support deployment of the nutrient tracking tool

    Science.gov (United States)

    Guidelines have been developed to parameterize and validate the Agricultural Policy Environmental eXtender (APEX) to support the Nutrient Tracking Tool (NTT). This follow-up paper presents 1) a case study to illustrate how the developed guidelines are applied in a headwater watershed located in cent...

  12. A generative modeling approach to connectivity-Electrical conduction in vascular networks

    DEFF Research Database (Denmark)

    Hald, Bjørn Olav

    2016-01-01

    The physiology of biological structures is inherently dynamic and emerges from the interaction and assembly of large collections of small entities. The extent of coupled entities complicates modeling and increases computational load. Here, microvascular networks are used to present a novel...... to synchronize vessel tone across the vast distances within a network. We hypothesize that electrical conduction capacity is delimited by the size of vascular structures and connectivity of the network. Generation and simulation of series of dynamical models of electrical spread within vascular networks...... of different size and composition showed that (1) Conduction is enhanced in models harboring long and thin endothelial cells that couple preferentially along the longitudinal axis. (2) Conduction across a branch point depends on endothelial connectivity between branches. (3) Low connectivity sub...

  13. Evaluation and Improvement of Cloud and Convective Parameterizations from Analyses of ARM Observations and Models

    Energy Technology Data Exchange (ETDEWEB)

    Del Genio, Anthony D. [NASA Goddard Inst. for Space Studies (GISS), New York, NY (United States)

    2016-03-11

    Over this period the PI and his group performed a broad range of data analysis, model evaluation, and model improvement studies using ARM data. These included cloud regimes in the TWP and their evolution over the MJO; M-PACE IOP SCM-CRM intercomparisons; simulations of convective updraft strength and depth during TWP-ICE; evaluation of convective entrainment parameterizations using TWP-ICE simulations; evaluation of GISS GCM cloud behavior vs. long-term SGP cloud statistics; classification of aerosol semi-direct effects on cloud cover; depolarization lidar constraints on cloud phase; preferred states of the winter Arctic atmosphere, surface, and sub-surface; sensitivity of convection to tropospheric humidity; constraints on the parameterization of mesoscale organization from TWP-ICE WRF simulations; updraft and downdraft properties in TWP-ICE simulated convection; insights from long-term ARM records at Manus and Nauru.

  14. Parameterization models for solar radiation and solar technology applications

    Energy Technology Data Exchange (ETDEWEB)

    Khalil, Samy A. [National Research Institute of Astronomy and Geophysics, Solar and Space Department, Marsed Street, Helwan, 11421 Cairo (Egypt)

    2008-08-15

    Solar radiation is very important for the evaluation and wide use of solar renewable energy systems. The development of calibration procedures for broadband solar radiation photometric instrumentation and the improvement of broadband solar radiation measurement accuracy are described. An improved diffuse sky reference and photometric calibration and characterization software for outdoor pyranometer calibrations are outlined. Parameterizations for direct beam, total hemispherical and diffuse sky radiation and solar radiation technology are briefly reviewed. The uncertainties for various broadband solar radiations of solar energy and atmospheric effects are discussed. The varying responsivities of solar radiation with meteorological, statistical and climatological parameters and possible atmospheric conditions were examined. (author)

  15. Parameterization models for solar radiation and solar technology applications

    International Nuclear Information System (INIS)

    Khalil, Samy A.

    2008-01-01

    Solar radiation is very important for the evaluation and wide use of solar renewable energy systems. The development of calibration procedures for broadband solar radiation photometric instrumentation and the improvement of broadband solar radiation measurement accuracy are described. An improved diffuse sky reference and photometric calibration and characterization software for outdoor pyranometer calibrations are outlined. Parameterizations for direct beam, total hemispherical and diffuse sky radiation and solar radiation technology are briefly reviewed. The uncertainties for various broadband solar radiations of solar energy and atmospheric effects are discussed. The varying responsivities of solar radiation with meteorological, statistical and climatological parameters and possible atmospheric conditions were examined

  16. Classification of parameter-dependent quantum integrable models, their parameterization, exact solution and other properties

    International Nuclear Information System (INIS)

    Owusu, Haile K; Yuzbashyan, Emil A

    2011-01-01

    We study general quantum integrable Hamiltonians linear in a coupling constant and represented by finite N x N real symmetric matrices. The restriction on the coupling dependence leads to a natural notion of nontrivial integrals of motion and classification of integrable families into types according to the number of such integrals. A type M family in our definition is formed by N-M nontrivial mutually commuting operators linear in the coupling. Working from this definition alone, we parameterize type M operators, i.e. resolve the commutation relations, and obtain an exact solution for their eigenvalues and eigenvectors. We show that our parameterization covers all type 1, 2 and 3 integrable models and discuss the extent to which it is complete for other types. We also present robust numerical observation on the number of energy-level crossings in type M integrable systems and analyze the taxonomy of types in the 1D Hubbard model. (paper)

  17. Basic gerbe over non simply connected compact groups

    OpenAIRE

    Gawedzki, Krzysztof; Reis, Nuno

    2003-01-01

    We present an explicit construction of the basic bundle gerbes with connection over all connected compact simple Lie groups. These are geometric objects that appear naturally in the Lagrangian approach to the WZW conformal field theories. Our work extends the recent construction of E. Meinrenken restricted to the case of simply connected groups.

  18. The Development of a Parameterized Scatter Removal Algorithm for Nuclear Materials Identification System Imaging

    Energy Technology Data Exchange (ETDEWEB)

    Grogan, Brandon Robert [Univ. of Tennessee, Knoxville, TN (United States)

    2010-03-01

    This dissertation presents a novel method for removing scattering effects from Nuclear Materials Identification System (NMIS) imaging. The NMIS uses fast neutron radiography to generate images of the internal structure of objects non-intrusively. If the correct attenuation through the object is measured, the positions and macroscopic cross-sections of features inside the object can be determined. The cross sections can then be used to identify the materials and a 3D map of the interior of the object can be reconstructed. Unfortunately, the measured attenuation values are always too low because scattered neutrons contribute to the unattenuated neutron signal. Previous efforts to remove the scatter from NMIS imaging have focused on minimizing the fraction of scattered neutrons which are misidentified as directly transmitted by electronically collimating and time tagging the source neutrons. The parameterized scatter removal algorithm (PSRA) approaches the problem from an entirely new direction by using Monte Carlo simulations to estimate the point scatter functions (PScFs) produced by neutrons scattering in the object. PScFs have been used to remove scattering successfully in other applications, but only with simple 2D detector models. This work represents the first time PScFs have ever been applied to an imaging detector geometry as complicated as the NMIS. By fitting the PScFs using a Gaussian function, they can be parameterized and the proper scatter for a given problem can be removed without the need for rerunning the simulations each time. In order to model the PScFs, an entirely new method for simulating NMIS measurements was developed for this work. The development of the new models and the codes required to simulate them are presented in detail. The PSRA was used on several simulated and experimental measurements and chi-squared goodness of fit tests were used to compare the corrected values to the ideal values that would be expected with no scattering. 
Using

  19. THE DEVELOPMENT OF A PARAMETERIZED SCATTER REMOVAL ALGORITHM FOR NUCLEAR MATERIALS IDENTIFICATION SYSTEM IMAGING

    Energy Technology Data Exchange (ETDEWEB)

    Grogan, Brandon R [ORNL]

    2010-05-01

    This report presents a novel method for removing scattering effects from Nuclear Materials Identification System (NMIS) imaging. The NMIS uses fast neutron radiography to generate images of the internal structure of objects nonintrusively. If the correct attenuation through the object is measured, the positions and macroscopic cross sections of features inside the object can be determined. The cross sections can then be used to identify the materials, and a 3D map of the interior of the object can be reconstructed. Unfortunately, the measured attenuation values are always too low because scattered neutrons contribute to the unattenuated neutron signal. Previous efforts to remove the scatter from NMIS imaging have focused on minimizing the fraction of scattered neutrons that are misidentified as directly transmitted by electronically collimating and time tagging the source neutrons. The parameterized scatter removal algorithm (PSRA) approaches the problem from an entirely new direction by using Monte Carlo simulations to estimate the point scatter functions (PScFs) produced by neutrons scattering in the object. PScFs have been used to remove scattering successfully in other applications, but only with simple 2D detector models. This work represents the first time PScFs have ever been applied to an imaging detector geometry as complicated as the NMIS. By fitting the PScFs using a Gaussian function, they can be parameterized, and the proper scatter for a given problem can be removed without the need for rerunning the simulations each time. In order to model the PScFs, an entirely new method for simulating NMIS measurements was developed for this work. The development of the new models and the codes required to simulate them are presented in detail. The PSRA was used on several simulated and experimental measurements, and chi-squared goodness of fit tests were used to compare the corrected values to the ideal values that would be expected with no scattering. 
Using the
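
    The subtraction step at the heart of the PSRA can be illustrated with a toy one-dimensional sketch (the Gaussian shape follows the abstract; the function names, values, and the idea of a single fixed Gaussian per configuration are illustrative assumptions, not taken from the dissertation):

```python
import math

def gaussian_pscf(x, amplitude, sigma):
    # Hypothetical Gaussian point scatter function centered on the beam axis.
    return amplitude * math.exp(-x * x / (2.0 * sigma * sigma))

def remove_scatter(measured, positions, amplitude, sigma):
    # Subtract the parameterized scatter contribution at each detector position,
    # recovering an estimate of the directly transmitted signal.
    return [m - gaussian_pscf(x, amplitude, sigma) for m, x in zip(measured, positions)]

positions = [-2.0, -1.0, 0.0, 1.0, 2.0]
true_signal = [1.0, 0.8, 0.2, 0.8, 1.0]                  # idealized transmission profile
scatter = [gaussian_pscf(x, 0.1, 1.5) for x in positions]  # simulated scatter pedestal
measured = [t + s for t, s in zip(true_signal, scatter)]

corrected = remove_scatter(measured, positions, 0.1, 1.5)
```

    Once the (amplitude, sigma) pair has been fitted for a geometry, the correction is a cheap evaluation rather than a new Monte Carlo run, which is the point of the parameterization.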

  20. The use of the k-ε turbulence model within the Rossby Centre regional ocean climate model: parameterization development and results

    Energy Technology Data Exchange (ETDEWEB)

    Markus Meier, H.E. [Swedish Meteorological and Hydrological Inst., Norrkoeping (Sweden). Rossby Centre

    2000-09-01

    As mixing plays a dominant role in the physics of an estuary like the Baltic Sea (seasonal heat storage, mixing in channels, deep water mixing), different mixing parameterizations for use in 3D Baltic Sea models are discussed and compared. For this purpose two different OGCMs of the Baltic Sea are utilized. Within the Swedish regional climate modeling program, SWECLIM, a 3D coupled ice-ocean model for the Baltic Sea has been coupled with an improved version of the two-equation k-ε turbulence model with a corrected dissipation term, flux boundary conditions to include the effect of a turbulence-enhanced layer due to breaking surface gravity waves, and a parameterization for breaking internal waves. Results of multi-year simulations are compared with observations. The seasonal thermocline is simulated satisfactorily and erosion of the halocline is avoided. Unsolved problems are discussed. To replace the controversial equation for dissipation, the performance of a hierarchy of k-models has been tested and compared with the k-ε model. In addition, it is shown that the results of the mixing parameterization depend very much on the choice of the ocean model. Finally, the impact of two mixing parameterizations on Baltic Sea climate is investigated. In this case the sensitivity of mean SST, vertical temperature and salinity profiles, ice season, and the seasonal cycle of heat fluxes is quite large.
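
    As a minimal illustration of what a two-equation closure ultimately supplies to the host ocean model, the eddy viscosity follows diagnostically from the prognostic k and ε fields (standard textbook constant; the modified scheme described above differs in its boundary conditions and dissipation treatment):

```python
C_MU = 0.09  # standard k-epsilon model constant

def eddy_viscosity(k, eps):
    # Turbulent (eddy) viscosity nu_t = C_mu * k^2 / eps, the quantity the
    # two-equation closure hands to the ocean model's vertical mixing terms.
    return C_MU * k * k / eps

# Illustrative near-thermocline values: k in m^2/s^2, eps in m^2/s^3.
nu_t = eddy_viscosity(1e-4, 1e-7)  # m^2/s
```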

  1. Assessment of two physical parameterization schemes for desert dust emissions in an atmospheric chemistry general circulation model

    Science.gov (United States)

    Astitha, M.; Abdel Kader, M.; Pozzer, A.; Lelieveld, J.

    2012-04-01

    Atmospheric particulate matter, and more specifically desert dust, has been the topic of numerous research studies in the past due to its wide range of impacts on the environment and climate and the uncertainty of characterizing and quantifying these impacts on a global scale. In this work we present two physical parameterizations of desert dust production that have been incorporated in the atmospheric chemistry general circulation model EMAC (ECHAM5/MESSy2.41 Atmospheric Chemistry). The scope of this work is to assess the impact of the two physical parameterizations on the global distribution of desert dust and to highlight the advantages and disadvantages of each technique. The dust concentration and deposition have been evaluated using the AEROCOM dust dataset for the year 2000, and data from the MODIS and MISR satellites as well as sun-photometer data from the AERONET network were used to compare the modelled aerosol optical depth with observations. The implementation of the two parameterizations and the simulations using relatively high spatial resolution (T106, ~1.1 deg) have highlighted the large spatial heterogeneity of the dust emission sources as well as the importance of the input parameters (soil size and texture, vegetation, surface wind speed). Also, sensitivity simulations with the nudging option using reanalysis data from ECMWF, and without nudging, have shown remarkable differences for some areas. Both parameterizations have revealed the difficulty of simulating all arid regions with the same assumptions and mechanisms. Depending on the arid region, each emission scheme performs more or less satisfactorily, which leads to the necessity of treating each desert differently. Even though this is a quite difficult task to accomplish in a global model, some recommendations are given along with ideas for future improvements.
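
    The abstract does not specify which emission schemes were implemented, but a common ingredient of physical dust parameterizations is a saltation flux that switches on above a threshold friction velocity. A Marticorena-Bergametti-style sketch (constants and the functional form are the commonly cited ones, assumed here for illustration):

```python
def horizontal_saltation_flux(u_star, u_star_t, rho_air=1.227, g=9.81, c=2.61):
    # Horizontal saltation flux (kg m^-1 s^-1): zero below the threshold
    # friction velocity u_star_t, otherwise proportional to u*^3 with
    # threshold correction factors, after Marticorena & Bergametti (1995).
    if u_star <= u_star_t:
        return 0.0
    r = u_star_t / u_star
    return c * (rho_air / g) * u_star ** 3 * (1.0 + r) * (1.0 - r * r)
```

    The strong (cubic) wind dependence and the hard threshold are what make simulated emissions so sensitive to surface wind speed and soil inputs, as noted above.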

  2. Electrochemical-mechanical coupled modeling and parameterization of swelling and ionic transport in lithium-ion batteries

    Science.gov (United States)

    Sauerteig, Daniel; Hanselmann, Nina; Arzberger, Arno; Reinshagen, Holger; Ivanov, Svetlozar; Bund, Andreas

    2018-02-01

    The intercalation- and aging-induced volume changes of lithium-ion battery electrodes lead to significant mechanical pressure or volume changes at the cell and module level. As the correlation between the electrochemical and mechanical performance of lithium-ion batteries at the nano and macro scales requires a comprehensive and multidisciplinary approach, physical modeling accounting for chemical and mechanical phenomena during operation is very useful for battery design. Since the introduced fully coupled physical model requires proper parameterization, this work also focuses on identifying appropriate mathematical representations of compressibility as well as of the ionic transport in the porous electrodes and the separator. The ionic transport is characterized by electrochemical impedance spectroscopy (EIS) using symmetric pouch cells comprising a LiNi1/3Mn1/3Co1/3O2 (NMC) cathode, graphite anode, and polyethylene separator. The EIS measurements are carried out at various mechanical loads. The observed decrease of the ionic conductivity reveals a significant transport limitation at high pressures. The experimentally obtained data are applied as input to an electrochemical-mechanical model of a prismatic 10 Ah cell. Our computational approach accounts for intercalation-induced electrode expansion, stress generation caused by mechanical boundaries, compression of the electrodes and the separator, outer expansion of the cell, and finally the influence of the ionic transport within the electrolyte.
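
    One plausible way to represent the pressure-dependent ionic transport described above is a Bruggeman-type effective conductivity combined with a compression-reduced porosity. The following is an assumption-laden stand-in, not the paper's actual parameterization:

```python
def effective_conductivity(kappa_bulk, porosity, bruggeman_exponent=1.5):
    # Bruggeman relation: effective ionic conductivity of an electrolyte-filled
    # porous layer, kappa_eff = kappa * eps^1.5 (tortuosity folded into the exponent).
    return kappa_bulk * porosity ** bruggeman_exponent

def compressed_porosity(eps0, strain):
    # Illustrative uniaxial compression: the solid volume is conserved, so the
    # pore fraction after compressing the thickness by `strain` is
    # eps = (eps0 - strain) / (1 - strain).
    return (eps0 - strain) / (1.0 - strain)

# Compressing a 40%-porous separator by 10% thins the pore network and
# lowers the effective conductivity, qualitatively matching the EIS trend.
eps = compressed_porosity(0.40, 0.10)
kappa_eff = effective_conductivity(1.0, eps)
```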

  3. A distribution-oriented approach to support landscape connectivity for ecologically distinct bird species.

    Science.gov (United States)

    Herrera, José M; Alagador, Diogo; Salgueiro, Pedro; Mira, António

    2018-01-01

    Managing landscape connectivity is a widely recognized overarching strategy for conserving biodiversity in human-impacted landscapes. However, planning the conservation and management of landscape connectivity for multiple, ecologically distinct species is still challenging. Here we provide a spatially explicit framework that identifies and prioritizes connectivity conservation and restoration actions for species with distinct habitat affinities. Specifically, our study system comprised three groups of common bird species (forest specialists, farmland specialists, and generalists) populating a highly heterogeneous agricultural countryside in the southwestern Iberian Peninsula. We first performed a comprehensive analysis of the environmental variables underlying the distributional patterns of each bird species to reveal generalities in their guild-specific responses to landscape structure. Then, we identified sites that could be considered pivotal in maintaining current levels of landscape connectivity for the three bird guilds simultaneously, as well as the number and location of sites that need to be restored to maximize connectivity levels. Interestingly, we found that a small number of sites defined the shortest connectivity paths for the three bird guilds simultaneously and were therefore considered key for conservation. Moreover, an even smaller number of sites were identified as critical for maximizing landscape connectivity for the regional bird assemblage as a whole. Our spatially explicit framework can provide valuable decision-making support to conservation practitioners aiming to identify key connectivity and restoration sites, a particularly urgent task in rapidly changing landscapes such as agroecosystems.
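
    The notion of sites lying on every guild's shortest connectivity path can be sketched with a plain Dijkstra search over hypothetical guild-specific resistance graphs (patch names and movement costs are invented for illustration; the study's actual framework is distribution-based):

```python
import heapq

def shortest_path(graph, start, goal):
    # Dijkstra's algorithm; graph maps node -> {neighbor: movement cost}.
    dist, prev, seen = {start: 0.0}, {}, set()
    heap = [(0.0, start)]
    while heap:
        d, u = heapq.heappop(heap)
        if u in seen:
            continue
        seen.add(u)
        if u == goal:
            break
        for v, w in graph[u].items():
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v], prev[v] = nd, u
                heapq.heappush(heap, (nd, v))
    path, node = [], goal
    while node != start:
        path.append(node)
        node = prev[node]
    path.append(start)
    return path[::-1]

# Hypothetical resistance costs between habitat patches A..D for two guilds.
forest = {"A": {"B": 1, "C": 5}, "B": {"A": 1, "C": 1},
          "C": {"B": 1, "A": 5, "D": 1}, "D": {"C": 1}}
farmland = {"A": {"B": 2, "C": 9}, "B": {"A": 2, "C": 2},
            "C": {"B": 2, "A": 9, "D": 2}, "D": {"C": 2}}

# Patches on every guild's shortest path are candidate conservation priorities.
shared = set(shortest_path(forest, "A", "D")) & set(shortest_path(farmland, "A", "D"))
```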

  4. Technical report series on global modeling and data assimilation. Volume 3: An efficient thermal infrared radiation parameterization for use in general circulation models

    Science.gov (United States)

    Suarez, Max J. (Editor); Chou, Ming-Dah

    1994-01-01

    A detailed description of a parameterization for thermal infrared radiative transfer designed specifically for use in global climate models is presented. The parameterization includes the effects of the main absorbers of terrestrial radiation: water vapor, carbon dioxide, and ozone. While being computationally efficient, the scheme computes the clear-sky fluxes and cooling rates very accurately from the Earth's surface to 0.01 mb. This combination of accuracy and speed makes the parameterization suitable for both tropospheric and middle-atmospheric modeling applications. Since no transmittances are precomputed, the atmospheric layers and the vertical distribution of the absorbers may be freely specified. The scheme can also account for any vertical distribution of fractional cloudiness with arbitrary optical thickness. These features make the parameterization very flexible and extremely well suited for use in climate modeling studies. In addition, the numerics and the FORTRAN implementation have been carefully designed to conserve both memory and computer time. This code should be particularly attractive to those contemplating long-term climate simulations, wishing to model the middle atmosphere, or planning to use a large number of levels in the vertical.

  5. Detecting altered connectivity patterns in HIV associated neurocognitive impairment using mutual connectivity analysis

    Science.gov (United States)

    Abidin, Anas Zainul; D'Souza, Adora M.; Nagarajan, Mahesh B.; Wismüller, Axel

    2016-03-01

    The use of functional Magnetic Resonance Imaging (fMRI) has provided interesting insights into our understanding of the brain. In clinical setups these scans have been used to detect and study changes in brain network properties in various neurological disorders. A large percentage of subjects infected with HIV present cognitive deficits, known as HIV-associated neurocognitive disorder (HAND). In this study we propose to use our novel technique, named Mutual Connectivity Analysis (MCA), to detect differences in the brain networks of subjects with and without HIV infection. Resting-state functional MRI scans acquired from 10 subjects (5 HIV+ and 5 HIV-) were subjected to standard preprocessing routines. Subsequently, the average time series for each brain region of the Automated Anatomical Labeling (AAL) atlas was extracted and used with the MCA framework to obtain a graph characterizing the interactions between regions. The network graphs obtained for different subjects were then compared using Network-Based Statistics (NBS), an approach that detects differences between graph edges while controlling the family-wise error rate when mass univariate testing is performed. Applying this approach to the graphs obtained yields a single network encompassing 42 nodes and 65 edges which is significantly different between the two subject groups. Specifically, connections to the regions in and around the basal ganglia are significantly decreased, and some nodes corresponding to the posterior cingulate cortex are also affected. These results are in line with our current understanding of the pathophysiological mechanisms of HAND and with other HIV-related fMRI connectivity studies. Hence, we illustrate the applicability of our novel approach with network-based statistics in a clinical case-control study to detect differences in connectivity patterns.
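
    The MCA framework scores how well each region's time series predicts every other region's. As a simplified, linear stand-in for that idea (the actual method uses nonlinear prediction), each ordered pair can be scored by one-step-ahead regression:

```python
def cross_prediction_r2(x, y):
    # Fit y[t] ~ a*x[t-1] + b by least squares and return R^2: how well series
    # x predicts series y one step ahead. A linear surrogate for the nonlinear
    # predictor used in the MCA framework.
    xs, ys = x[:-1], y[1:]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((u - mx) ** 2 for u in xs)
    sxy = sum((u - mx) * (v - my) for u, v in zip(xs, ys))
    a = sxy / sxx
    b = my - a * mx
    sse = sum((v - (a * u + b)) ** 2 for u, v in zip(xs, ys))
    sst = sum((v - my) ** 2 for v in ys)
    return 1.0 - sse / sst

def affinity_matrix(series):
    # Directed affinity matrix over all region pairs (diagonal fixed at 1);
    # thresholding such a matrix yields the graph handed to NBS.
    n = len(series)
    return [[1.0 if i == j else cross_prediction_r2(series[i], series[j])
             for j in range(n)] for i in range(n)]
```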

  6. Monitoring Effective Connectivity in the Preterm Brain: A Graph Approach to Study Maturation

    Directory of Open Access Journals (Sweden)

    M. Lavanga

    2017-01-01

    In recent years, functional connectivity in developmental science has received increasing attention. Although it has been reported that anatomical connectivity in the preterm brain develops dramatically during the last months of pregnancy, little is known about how functional and effective connectivity change with maturation. The present study investigated how effective connectivity in premature infants evolves. To assess it, we used EEG measurements and graph-theory methodologies. We recorded data from 25 preterm babies who underwent long EEG monitoring at least twice during their stay in the NICU. The recordings took place from 27 weeks postmenstrual age (PMA) until 42 weeks PMA. Results showed that the EEG connectivity, assessed using graph-theory indices, moved from a small-world network to a random one, since the clustering coefficient increases and the path length decreases. This shift can be attributed to the development of the thalamocortical connections and long-range cortical connections. Based on the network indices, we developed different age-prediction models. The best result showed that it is possible to predict the age of the infant with a root mean-squared error (RMSE) equal to 2.11 weeks. These results are similar to the ones reported in the literature for age prediction in preterm babies.
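
    The two graph indices named above can be computed directly from an adjacency structure. A small stdlib-only sketch for unweighted graphs (the study's networks come from EEG connectivity estimates, which this toy omits):

```python
from collections import deque

def clustering_coefficient(adj):
    # Mean local clustering: the fraction of each node's neighbour pairs that
    # are themselves linked, averaged over all nodes.
    total = 0.0
    for node, nbrs in adj.items():
        k = len(nbrs)
        if k < 2:
            continue
        links = sum(1 for i, u in enumerate(nbrs) for v in nbrs[i + 1:] if v in adj[u])
        total += 2.0 * links / (k * (k - 1))
    return total / len(adj)

def characteristic_path_length(adj):
    # Mean shortest-path length over all reachable node pairs (unweighted BFS).
    total, pairs = 0, 0
    for src in adj:
        dist = {src: 0}
        q = deque([src])
        while q:
            u = q.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    q.append(v)
        total += sum(d for n, d in dist.items() if n != src)
        pairs += len(dist) - 1
    return total / pairs

triangle = {"a": ["b", "c"], "b": ["a", "c"], "c": ["a", "b"]}
```

    Tracking these two indices over recordings at increasing PMA is what lets the study characterize the shift between small-world and random topologies.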

  7. Parameterizing radiative transfer to convert MAX-DOAS dSCDs into near-surface box-averaged mixing ratios

    Directory of Open Access Journals (Sweden)

    R. Sinreich

    2013-06-01

    We present a novel parameterization method to convert multi-axis differential optical absorption spectroscopy (MAX-DOAS) differential slant column densities (dSCDs) into near-surface box-averaged volume mixing ratios. The approach is applicable inside the planetary boundary layer under conditions with significant aerosol load, and builds on the increased sensitivity of MAX-DOAS near the instrument altitude. It parameterizes radiative transfer model calculations and significantly reduces the computational effort, while retrieving ~1 degree of freedom. The biggest benefit of this method is that the retrieval of an aerosol profile, which usually is necessary for deriving a trace gas concentration from MAX-DOAS dSCDs, is not needed. The method is applied to NO2 MAX-DOAS dSCDs recorded during the Mexico City Metropolitan Area 2006 (MCMA-2006) measurement campaign. The retrieved volume mixing ratios at two elevation angles (1° and 3°) are compared to volume mixing ratios measured by two long-path DOAS (LP-DOAS) instruments located at the same site. Measurements are found to agree well during times when vertical mixing is expected to be strong. However, inhomogeneities in the air mass above Mexico City can be detected by exploiting the different horizontal and vertical dimensions probed by the MAX-DOAS and LP-DOAS instruments. In particular, a vertical gradient in NO2 close to the ground can be observed in the afternoon, and is attributed to reduced mixing coupled with near-surface emission inside street canyons. The existence of a vertical gradient in the lower 250 m during parts of the day shows the general challenge of sampling the boundary layer in a representative way, and emphasizes the need for vertically resolved measurements.
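
    The conversion the parameterization ultimately performs can be sketched arithmetically: a dSCD divided by a differential air mass factor gives the column inside the box, which an assumed box height and air number density turn into a mixing ratio. The numbers below are illustrative; parameterizing the dAMF itself is the hard part and is not reproduced here:

```python
def box_mixing_ratio(dscd, damf, box_height_m, number_density=2.46e19):
    # Convert a differential slant column density (molec/cm^2) into a
    # box-averaged volume mixing ratio. number_density is air at ~25 C
    # and 1 atm, in molec/cm^3 (an assumption for this sketch).
    column = dscd / damf                             # vertical column in the box, molec/cm^2
    concentration = column / (box_height_m * 100.0)  # molec/cm^3
    return concentration / number_density            # dimensionless VMR

# e.g. an NO2 dSCD of 1e16 molec/cm^2, dAMF of 2, and a 1 km box:
vmr_ppb = box_mixing_ratio(1e16, 2.0, 1000.0) * 1e9
```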

  8. MINIMALLY INVASIVE SINGLE FLAP APPROACH WITH CONNECTIVE TISSUE WALL FOR PERIODONTAL REGENERATION

    Directory of Open Access Journals (Sweden)

    Kamen Kotsilkov

    2017-09-01

    INTRODUCTION: The destructive periodontal diseases are among the most prevalent in the human population. In some cases, bony defects are formed during disease progression, thus sustaining deep periodontal pockets. The reconstruction of these defects is usually done with the classical techniques of bone substitute placement and guided tissue regeneration. The clinical and histological data from recent years, however, demonstrate the relatively low regenerative potential of these techniques. Contemporary approaches for periodontal regeneration rely on minimally invasive surgical protocols, aimed at complete tissue preservation in order to achieve and maintain primary closure, and at stimulating the natural regenerative potential of the periodontal tissues. AIM: This presentation demonstrates the application of a new, minimally invasive, single-flap surgical technique for periodontal regeneration in a clinical case of periodontitis with a residual deep intrabony defect. MATERIALS AND METHODS: A 37-year-old patient presented with chronic generalised periodontitis. The initial therapy led to good control of the periodontal infection, with a single residual deep periodontal pocket medially at tooth 11 due to a deep intrabony defect. A single-flap approach with enamel matrix derivative application and a connective tissue wall technique was performed, and proper primary closure was obtained. RESULT: One month after surgery, an initial mineralisation process in the defect was detected. At the third month, complete clinical healing was observed. The radiographic control showed finished bone mineralisation and recreation of the periodontal space. CONCLUSION: Within the limitations of the presented case, the minimally invasive surgical approach led to complete clinical healing and new bone formation, which could be evidence of periodontal regeneration.

  9. New and extended parameterization of the thermodynamic model AIOMFAC: calculation of activity coefficients for organic-inorganic mixtures containing carboxyl, hydroxyl, carbonyl, ether, ester, alkenyl, alkyl, and aromatic functional groups

    Directory of Open Access Journals (Sweden)

    A. Zuend

    2011-09-01

    We present a new and considerably extended parameterization of the thermodynamic activity coefficient model AIOMFAC (Aerosol Inorganic-Organic Mixtures Functional groups Activity Coefficients) at room temperature. AIOMFAC combines a Pitzer-like electrolyte solution model with a UNIFAC-based group-contribution approach and explicitly accounts for interactions between organic functional groups and inorganic ions. Such interactions constitute the salt effect, may cause liquid-liquid phase separation, and affect the gas-particle partitioning of aerosols. The previous AIOMFAC version was parameterized for the alkyl and hydroxyl functional groups of alcohols and polyols. With the goal of describing a wide variety of organic compounds found in atmospheric aerosols, we extend here the parameterization of AIOMFAC to include the functional groups carboxyl, hydroxyl, ketone, aldehyde, ether, ester, alkenyl, alkyl, aromatic carbon-alcohol, and aromatic hydrocarbon. Thermodynamic equilibrium data of organic-inorganic systems from the literature are critically assessed and complemented with new measurements to establish a comprehensive database. The database is used to determine simultaneously the AIOMFAC parameters describing interactions of organic functional groups with the ions H+, Li+, Na+, K+, NH4+, Mg2+, Ca2+, Cl−, Br−, NO3−, HSO4−, and SO42−. Detailed descriptions of different types of thermodynamic data, such as vapor-liquid, solid-liquid, and liquid-liquid equilibria, and their use for the model parameterization are provided. Issues regarding deficiencies of the database, types and uncertainties of experimental data, and limitations of the model are discussed. The challenging parameter optimization problem is solved with a novel combination of powerful global minimization
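
    The Pitzer-like long-range part of such models is typically built on a Debye-Hückel expression. The following is the generic extended Debye-Hückel form for a single ion (textbook constants for water at 25 °C; these are not AIOMFAC's actual parameter values, and the model's mid-range and UNIFAC short-range contributions are omitted):

```python
import math

def debye_hueckel_log10_gamma(z, ionic_strength, A=0.509, B=0.328, a=4.0):
    # Extended Debye-Hueckel: log10 of the long-range (electrostatic)
    # activity coefficient of an ion with charge z in water at 25 C.
    # ionic_strength in mol/kg; a is an ion size parameter in Angstrom.
    sqrt_i = math.sqrt(ionic_strength)
    return -A * z * z * sqrt_i / (1.0 + B * a * sqrt_i)
```

    The negative, charge-squared dependence is why multivalent ions such as Mg2+ and SO42− dominate the electrostatic non-ideality at a given ionic strength.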

  10. The importance of accurate meteorological input fields and accurate planetary boundary layer parameterizations, tested against ETEX-1

    International Nuclear Information System (INIS)

    Brandt, J.; Ebel, A.; Elbern, H.; Jakobs, H.; Memmesheimer, M.; Mikkelsen, T.; Thykier-Nielsen, S.; Zlatev, Z.

    1997-01-01

    Atmospheric transport of air pollutants is, in principle, a well understood process. If information about the state of the atmosphere were given in all details (infinitely accurate information about wind speed, etc.) and infinitely fast computers were available, then the advection equation could in principle be solved exactly. This is, however, not the case: discretization of the equations and input data introduces uncertainties and errors in the results. Therefore many different issues have to be carefully studied in order to diminish these uncertainties and to develop an accurate transport model. Among these are, e.g., the numerical treatment of the transport equation, the accuracy of the mean meteorological input fields, and parameterizations of sub-grid scale phenomena (e.g. parameterizations of the 2nd- and higher-order turbulence terms in order to reach closure in the perturbation equation). A tracer model for studying the transport and dispersion of air pollution caused by a single but strong source is under development. The model simulations from the first ETEX release illustrate the differences caused by using various analyzed fields directly in the tracer model or using a meteorological driver. Different parameterizations of the mixing height and the vertical exchange are also compared. (author)
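
    The numerical treatment of the transport (advection) equation mentioned above can be illustrated with the simplest stable scheme, first-order upwind differencing. This is a deliberately minimal sketch; operational tracer models use higher-order, less diffusive schemes:

```python
def upwind_step(c, u, dx, dt):
    # One explicit first-order upwind step for dc/dt + u dc/dx = 0 (u > 0),
    # with a zero-inflow left boundary. Stable when the CFL number u*dt/dx <= 1.
    cfl = u * dt / dx
    assert 0.0 <= cfl <= 1.0, "CFL condition violated"
    return [c[i] - cfl * (c[i] - (c[i - 1] if i > 0 else 0.0))
            for i in range(len(c))]

# A unit puff advects exactly one cell per step when CFL = 1:
c = [0.0, 1.0, 0.0, 0.0]
c = upwind_step(c, u=1.0, dx=1.0, dt=1.0)
```

    At CFL below 1 the same scheme smears the puff over neighbouring cells, a concrete example of the discretization errors the abstract refers to.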

  11. Autonomous economic operation of grid connected DC microgrid

    DEFF Research Database (Denmark)

    Nutkani, Inam Ullah; Wang, Peng; Loh, Poh Chiang

    2014-01-01

    This paper presents an autonomous power sharing scheme for economic operation of a grid-connected DC microgrid. The autonomous economic operation approach has already been tested for standalone AC microgrids to reduce the overall generation cost, and has proven simpler and easier to realize than the centralized management approach. In this paper, the same concept is extended to a grid-connected DC microgrid. The proposed economic droop scheme takes into consideration the power generation cost of the Distributed Generators (DGs) and the utility grid tariff, and adaptively tunes their respective droop curves … secondary control. The performance of the proposed scheme has been verified for an example grid-connected DC microgrid.
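
    The idea of cost-adaptive droop can be sketched in one line: the droop slope is scaled by each DG's generation cost, so cheaper units naturally supply more power at a given bus voltage. The linear form and constants below are illustrative, not the paper's exact scheme:

```python
def economic_droop_voltage(v_nom, p_out_kw, cost_per_kwh, k_base=0.01):
    # Cost-adaptive P-V droop: slope = k_base * cost. A cheap DG has a shallow
    # slope (holds its voltage up as output rises, so it picks up more load);
    # an expensive DG droops steeply and backs off.
    return v_nom - k_base * cost_per_kwh * p_out_kw

# At the same 10 kW output, the cheaper unit sits at a higher terminal voltage,
# so in a shared DC bus it ends up carrying more of the demand.
v_cheap = economic_droop_voltage(400.0, 10.0, 0.05)
v_costly = economic_droop_voltage(400.0, 10.0, 0.20)
```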

  12. The Sensitivity of WRF Daily Summertime Simulations over West Africa to Alternative Parameterizations. Part 1: African Wave Circulation

    Science.gov (United States)

    Noble, Erik; Druyan, Leonard M.; Fulakeza, Matthew

    2014-01-01

    The performance of the NCAR Weather Research and Forecasting Model (WRF) as a West African regional atmospheric model is evaluated. The study tests the sensitivity of WRF-simulated vorticity maxima associated with African easterly waves to 64 combinations of alternative parameterizations in a series of September simulations. In all, 104 simulations of 12-day duration during 11 consecutive years are examined. The 64 combinations pair WRF parameterizations of cumulus convection, radiation transfer, surface hydrology, and PBL physics. Simulated daily and mean circulation results are validated against NASA's Modern-Era Retrospective Analysis for Research and Applications (MERRA) and the NCEP/Department of Energy Global Reanalysis 2. Precipitation is considered in the second part of this two-part paper. A wide range of 700-hPa vorticity validation scores demonstrates the influence of the alternative parameterizations. The best WRF performers achieve correlations against reanalysis of 0.40-0.60 and realistic amplitudes of spatiotemporal variability for the 2006 focus year, while a parallel benchmark simulation by the NASA Regional Model-3 (RM3) achieves higher correlations but less realistic spatiotemporal variability. The largest favorable impact on WRF vorticity validation is achieved by selecting the Grell-Devenyi cumulus convection scheme, which results in higher correlations against reanalysis than simulations using the Kain-Fritsch convection scheme. Other parameterizations have a less obvious impact, although WRF configurations incorporating one particular surface model and PBL scheme consistently performed poorly. A comparison of reanalysis circulation against two NASA radiosonde stations confirms that both reanalyses represent observations well enough to validate the WRF results. Validation statistics for optimized WRF configurations simulating the parallel period during 10 additional years are less favorable than for 2006.

  13. Systematic Parameterization, Storage, and Representation of Volumetric DICOM Data.

    Science.gov (United States)

    Fischer, Felix; Selver, M Alper; Gezer, Sinem; Dicle, Oğuz; Hillen, Walter

    Tomographic medical imaging systems produce hundreds to thousands of slices, enabling three-dimensional (3D) analysis. Radiologists process these images through various tools and techniques in order to generate 3D renderings for various applications, such as surgical planning, medical education, and volumetric measurements. To save and store these visualizations, current systems use snapshots or video exporting, which prevents further optimizations and requires the storage of significant additional data. The Grayscale Softcopy Presentation State extension of the Digital Imaging and Communications in Medicine (DICOM) standard resolves this issue for two-dimensional (2D) data by introducing an extensive set of parameters, namely 2D Presentation States (2DPR), that describe how an image should be displayed. 2DPR allows storing these parameters instead of parameter-applied images, which would cause unnecessary duplication of the image data. Since there is currently no corresponding extension for 3D data, in this study a DICOM-compliant object called 3D Presentation States (3DPR) is proposed for the parameterization and storage of 3D medical volumes. To accomplish this, the 3D medical visualization process is divided into four tasks, namely pre-processing, segmentation, post-processing, and rendering. The important parameters of each task are determined. Special focus is given to the compression of segmented data, the parameterization of the rendering process, and the DICOM-compliant implementation of the 3DPR object. The use of 3DPR was tested in a radiology department on three clinical cases, which required multiple segmentations and visualizations during the workflow of the radiologists. The results show that 3DPR can effectively simplify the workload of physicians by directly regenerating 3D renderings without repeating intermediate tasks, increase efficiency by preserving all user interactions, and provide efficient storage as well as transfer of visualized data.
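
    The storage idea behind 3DPR, keeping the parameters of each visualization task rather than the rendered pixels, can be mimicked with a small serializable object (the field names and JSON encoding are illustrative; the actual proposal is a DICOM-compliant object, not JSON):

```python
import json
from dataclasses import dataclass, asdict, field

@dataclass
class PresentationState3D:
    # Hypothetical mirror of the proposed 3DPR object: one parameter group
    # per task of the 3D visualization pipeline named in the abstract.
    preprocessing: dict = field(default_factory=dict)   # e.g. window/level, filters
    segmentation: dict = field(default_factory=dict)    # e.g. label map reference
    postprocessing: dict = field(default_factory=dict)  # e.g. smoothing settings
    rendering: dict = field(default_factory=dict)       # e.g. camera, transfer function

    def serialize(self):
        return json.dumps(asdict(self))

state = PresentationState3D(
    preprocessing={"window_center": 40, "window_width": 400},
    rendering={"camera_azimuth_deg": 30, "opacity_transfer": [[0, 0.0], [255, 0.8]]},
)
# Round trip: the full rendering can be regenerated from the parameters alone.
restored = PresentationState3D(**json.loads(state.serialize()))
```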

  14. Dual Youla parameterization

    DEFF Research Database (Denmark)

    Niemann, Hans Henrik

    2003-01-01

    A different aspect of using the parameterisation of all systems stabilised by a given controller, i.e. the dual Youla parameterisation, is considered. The relation between system change and the dual Youla parameter is derived in explicit form. A number of standard uncertain model descriptions are considered and the relation with the dual Youla parameter is given. Some applications of the dual Youla parameterisation are considered in connection with the design of controllers and model/performance validation.
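
    The explicit relation mentioned in the abstract is not reproduced here, but the standard coprime-factor form of the dual Youla parameterisation (textbook notation, assumed rather than quoted from the paper) reads:

```latex
% Nominal plant and controller in right coprime factors:
%   G = N M^{-1}, \qquad K = U V^{-1}.
% The family of all plants stabilised by the fixed controller K is
G(S) = (N + V S)\,(M + U S)^{-1}, \qquad S \in \mathcal{RH}_\infty ,
% where S = 0 recovers the nominal plant G; the dual Youla parameter S
% thus encodes the system change relative to the nominal model.
```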

  15. GIS-based approach for quantifying landscape connectivity of Javan Hawk-Eagle habitat

    Science.gov (United States)

    Nurfatimah, C.; Syartinilia; Mulyani, Y. A.

    2018-05-01

    The Javan Hawk-Eagle (Nisaetus bartelsi; JHE) is a law-protected endemic raptor which currently faces a decrease in the number and size of its habitat patches, a trend that will lead to patch isolation and species extinction. This study assessed the degree of connectivity between remnant habitat patches in the central part of Java by utilizing the Conefor Sensinode software as an additional tool for ArcGIS. The connectivity index was determined by three fractions: intra, flux, and connector. Using these connectivity indices, 4 patches were identified as core habitat, 9 patches as stepping-stone habitat, and 6 patches as isolated habitat. These patches were then validated with a land cover map derived from Landsat 8 imagery of August 2014. Core habitat was 36% covered by natural forest, while stepping-stone habitat had 55% natural forest cover and isolated habitat 59%. Patches were classified as isolated because of zero connectivity (PCcon = 0) and patch sizes too small to support a viable JHE population. Yet the condition of the natural forest and the surrounding landscape matrix in the isolated patches actually supports the species' habitat needs. Thus, it is very important to conduct the right conservation management based on the condition of each patch.

  16. Parameterized representation of macroscopic cross section in the PWR fuel element considering burn-up cycles

    International Nuclear Information System (INIS)

    Belo, Thiago F.; Fiel, Joao Claudio B.

    2015-01-01

    Nuclear reactor core analysis involves neutronic modeling, and the calculations require problem-dependent nuclear data generated with few neutron energy groups, such as the neutron cross sections. The methods used to obtain these problem-dependent cross sections in reactor calculations generally use nuclear computer codes that require a large processing time and computational memory, making the process computationally very expensive. Analysis of the macroscopic cross section as a function of nuclear parameters has shown a very distinct behavior that cannot be represented by simple linear interpolation; indeed, a polynomial representation is more adequate for the data parameterization. To provide the cross sections rapidly and without depending on complex systems calculations, this work developed a set of parameterized cross sections, based on Tchebychev polynomials, by fitting the cross sections as functions of nuclear parameters, which include fuel temperature, moderator temperature and density, soluble boron concentration, uranium enrichment, and burn-up. This study evaluates the problem-dependent fission, scattering, total, nu-fission, capture, transport, and absorption cross sections for a typical PWR fuel element, considering burn-up cycles. The analysis was carried out with the SCALE 6.1 code package. The results of comparisons with direct calculations using the SCALE code system, as well as tests using project parameters such as the temperature coefficient of reactivity and the fast fission factor, show excellent agreement. The differences between the cross-section parameterization methodology and the direct calculations based on the SCALE code system are less than 0.03 percent. (author)

  17. Parameterized representation of macroscopic cross section in the PWR fuel element considering burn-up cycles

    Energy Technology Data Exchange (ETDEWEB)

    Belo, Thiago F.; Fiel, Joao Claudio B., E-mail: thiagofbelo@hotmail.com [Instituto Militar de Engenharia (IME), Rio de Janeiro, RJ (Brazil)

    2015-07-01

Nuclear reactor core analysis involves neutronic modeling, and the calculations require problem-dependent nuclear data generated with few neutron energy groups, such as the neutron cross sections. The methods used to obtain these problem-dependent cross sections in reactor calculations generally use nuclear computer codes that require a large amount of processing time and computational memory, making the process computationally very expensive. Analysis of the macroscopic cross section as a function of nuclear parameters has shown a very distinct behavior that cannot be represented simply by linear interpolation; a polynomial representation is more adequate for the data parameterization. To provide the cross sections rapidly and without depending on complex system calculations, this work developed a set of parameterized cross sections, based on Tchebychev polynomials, by fitting the cross sections as functions of nuclear parameters including fuel temperature, moderator temperature and density, soluble boron concentration, uranium enrichment, and burn-up. The study evaluates the problem-dependent fission, scattering, total, nu-fission, capture, transport, and absorption cross sections for a typical PWR fuel element, considering the burn-up cycle. The analysis was carried out with the SCALE 6.1 code package. Comparisons with direct calculations with the SCALE code system, as well as tests using project parameters such as the temperature coefficient of reactivity and the fast fission factor, show excellent agreement: the differences between the cross-section parameterization methodology and the direct SCALE calculations are less than 0.03 percent. (author)

  18. Seizure-Onset Mapping Based on Time-Variant Multivariate Functional Connectivity Analysis of High-Dimensional Intracranial EEG: A Kalman Filter Approach.

    Science.gov (United States)

    Lie, Octavian V; van Mierlo, Pieter

    2017-01-01

The visual interpretation of intracranial EEG (iEEG) is the standard method used in complex epilepsy surgery cases to map the regions of seizure onset targeted for resection. Still, visual iEEG analysis is labor-intensive and biased due to interpreter dependency. Multivariate parametric functional connectivity measures using adaptive autoregressive (AR) modeling of the iEEG signals based on the Kalman filter algorithm have been used successfully to localize electrographic seizure onsets. Due to their high computational cost, however, these methods have so far been applied only to a limited number of iEEG time-series. Here, two Kalman filter implementations, a well-known multivariate adaptive AR model (Arnold et al. 1998) and a simplified, computationally efficient derivation of it, were assessed for their potential application to connectivity analysis of high-dimensional (up to 192 channels) iEEG data. When used on simulated seizures together with a multivariate connectivity estimator, the partial directed coherence, the two AR models were compared for their ability to reconstitute the designed seizure signal connections from noisy data. Next, focal seizures from iEEG recordings (73-113 channels) in three patients rendered seizure-free after surgery were mapped with the outdegree, a graph-theory index of outward directed connectivity. Simulation results indicated high levels of mapping accuracy for the two models in the presence of low-to-moderate noise cross-correlation. Accordingly, both AR models correctly mapped the real seizure onset to the resection volume. This study supports the possibility of conducting fully data-driven multivariate connectivity estimations on high-dimensional iEEG datasets using the Kalman filter approach.
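As a sketch of the core idea (not the authors' implementation), a Kalman filter can track time-varying AR coefficients by treating them as a random-walk state; the AR order, noise levels, and drifting coefficients below are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic signal whose AR(2) coefficients drift over time, loosely mimicking
# a change in signal dynamics (all values are illustrative).
n = 2000
a_true = np.column_stack([np.linspace(1.2, 0.4, n), np.linspace(-0.5, -0.3, n)])
y = np.zeros(n)
for t in range(2, n):
    y[t] = a_true[t, 0] * y[t - 1] + a_true[t, 1] * y[t - 2] + rng.normal(0, 0.1)

# Kalman filter with a random-walk model for the AR coefficients:
#   a_t = a_{t-1} + w_t,   y_t = [y_{t-1}, y_{t-2}] . a_t + v_t
a_hat = np.zeros(2)            # coefficient estimate
P = np.eye(2)                  # estimate covariance
q, r = 1e-4, 0.1 ** 2          # process / observation noise (tuning constants)
est = np.zeros((n, 2))
for t in range(2, n):
    P = P + q * np.eye(2)                     # predict (random walk)
    h = np.array([y[t - 1], y[t - 2]])        # observation vector
    s = h @ P @ h + r                         # innovation variance
    k = P @ h / s                             # Kalman gain
    a_hat = a_hat + k * (y[t] - h @ a_hat)    # update
    P = P - np.outer(k, h @ P)
    est[t] = a_hat

err = np.abs(est[n // 2:] - a_true[n // 2:]).mean()
print(f"mean |coefficient error| over second half: {err:.3f}")
```

In the multivariate setting used for connectivity analysis, the state collects the AR matrices of all channels, and estimators such as partial directed coherence are then computed from the tracked coefficients.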

  19. The causal structure of spacetime is a parameterized Randers geometry

    Energy Technology Data Exchange (ETDEWEB)

    Skakala, Jozef; Visser, Matt, E-mail: jozef.skakala@msor.vuw.ac.nz, E-mail: matt.visser@msor.vuw.ac.nz [School of Mathematics, Statistics and Operations Research, Victoria University of Wellington, PO Box 600, Wellington (New Zealand)

    2011-03-21

There is a well-established isomorphism between stationary four-dimensional spacetimes and three-dimensional purely spatial Randers geometries, these Randers geometries being a particular case of the more general class of three-dimensional Finsler geometries. We point out that in stably causal spacetimes, by using the (time-dependent) ADM decomposition, this result can be extended to general non-stationary spacetimes: the causal structure (conformal structure) of the full spacetime is completely encoded in a parameterized (t-dependent) class of Randers spaces, which can then be used to define a Fermat principle, and also to reconstruct the null cones and causal structure.

  20. The causal structure of spacetime is a parameterized Randers geometry

    International Nuclear Information System (INIS)

    Skakala, Jozef; Visser, Matt

    2011-01-01

There is a well-established isomorphism between stationary four-dimensional spacetimes and three-dimensional purely spatial Randers geometries, these Randers geometries being a particular case of the more general class of three-dimensional Finsler geometries. We point out that in stably causal spacetimes, by using the (time-dependent) ADM decomposition, this result can be extended to general non-stationary spacetimes: the causal structure (conformal structure) of the full spacetime is completely encoded in a parameterized (t-dependent) class of Randers spaces, which can then be used to define a Fermat principle, and also to reconstruct the null cones and causal structure.

  1. Parameterization of a complex landscape for a sediment routing model of the Le Sueur River, southern Minnesota

    Science.gov (United States)

    Belmont, P.; Viparelli, E.; Parker, G.; Lauer, W.; Jennings, C.; Gran, K.; Wilcock, P.; Melesse, A.

    2008-12-01

Modeling sediment fluxes and pathways in complex landscapes is limited by our inability to accurately measure and integrate heterogeneous, spatially distributed sources into a single coherent, predictive geomorphic transport law. In this study, we partition the complex landscape of the Le Sueur River watershed into five distributed primary source types: bluffs (including strath terrace caps), ravines, streambanks, tributaries, and flat, agriculture-dominated uplands. The sediment contribution of each source is quantified independently and parameterized for use in a sand and mud routing model. Rigorous modeling of the evolution of this landscape and sediment flux from each source type requires consideration of substrate characteristics, heterogeneity, and spatial connectivity. The subsurface architecture of the Le Sueur drainage basin is defined by a layer cake sequence of fine-grained tills, interbedded with fluvioglacial sands. Nearly instantaneous baselevel fall of 65 m occurred at 11.5 ka, as a result of the catastrophic draining of glacial Lake Agassiz through the Minnesota River, to which the Le Sueur is a tributary. The major knickpoint that was generated from that event has propagated 40 km into the Le Sueur network, initiating an incised river valley with tall, retreating bluffs and actively incising ravines. Loading estimates constrained by river gaging records that bound the knick zone indicate that bluffs connected to the river are retreating at an average rate of less than 2 cm per year and ravines are incising at an average rate of less than 0.8 mm per year, consistent with the Holocene average incision rate on the main stem of the river of less than 0.6 mm per year. Ongoing work with cosmogenic nuclide sediment tracers, ground-based LiDAR, historic aerial photos, and field mapping will be combined to represent the diversity of erosional environments and processes in a single coherent routing model.

  2. Halo nuclei studied by relativistic mean-field approach

    International Nuclear Information System (INIS)

    Gmuca, S.

    1997-01-01

    Density distributions of light neutron-rich nuclei are studied by using the relativistic mean-field approach. The effective interaction which parameterizes the recent Dirac-Brueckner-Hartree-Fock calculations of nuclear matter is used. The results are discussed and compared with the experimental observations with special reference to the neutron halo in the drip-line nuclei. (author)

  3. Restoration of lost connectivity of partitioned wireless sensor networks

    Directory of Open Access Journals (Sweden)

    Virender Ranga

    2016-05-01

Full Text Available The lost connectivity due to the failure of large numbers of nodes plays a major role in degrading system performance, generating unnecessary overhead or sometimes totally collapsing the active network. There are many issues and challenges in restoring lost connectivity in an unattended scenario, e.g. how many recovery nodes will be sufficient and at which locations these recovery nodes should be placed. Only a few centralized and distributed approaches have been proposed so far. The centralized approaches are good for a scenario where information about the disjoint network, i.e. the number of disjoint segments and their locations, is known in advance. However, for a scenario where such information is unknown due to an unattended harsh environment, a distributed approach is a better solution to restore the partitioned network. In this paper, we propose and implement a semi-distributed approach called Relay node Placement using Fermat Point (RPFP). The proposed approach is capable of restoring lost connectivity with a small number of recovery relay nodes, and it works for any number of disjoint segments. The simulation results show the effectiveness of our approach compared to existing benchmark approaches.
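A minimal sketch of the Fermat-point idea behind RPFP (not the paper's full algorithm): the point minimizing the total distance to three segment locations can be found with Weiszfeld's iteration, and a relay placed there; the coordinates are hypothetical:

```python
import math

def fermat_point(points, iters=200):
    """Geometric median (Fermat point) of anchor points via Weiszfeld's
    iteratively re-weighted averaging. For three disjoint-segment
    representatives, this is a natural location for the first relay node."""
    x = sum(p[0] for p in points) / len(points)   # start at the centroid
    y = sum(p[1] for p in points) / len(points)
    for _ in range(iters):
        wsum = wx = wy = 0.0
        for px, py in points:
            d = math.hypot(x - px, y - py)
            if d < 1e-12:          # iterate landed on an anchor point
                return (px, py)
            w = 1.0 / d            # inverse-distance weights
            wsum += w
            wx += w * px
            wy += w * py
        x, y = wx / wsum, wy / wsum
    return (x, y)

# Three hypothetical segment locations; the Fermat point minimizes the total
# relay-chain distance to all three.
segments = [(0.0, 0.0), (10.0, 0.0), (5.0, 8.66)]   # near-equilateral triangle
fx, fy = fermat_point(segments)
print(f"relay location: ({fx:.2f}, {fy:.2f})")
```

For a near-equilateral triangle the Fermat point coincides with the centroid; with more than three segments, the same geometric-median iteration still applies.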

  4. On the sensitivity of mesoscale models to surface-layer parameterization constants

    Science.gov (United States)

    Garratt, J. R.; Pielke, R. A.

    1989-09-01

The Colorado State University standard mesoscale model is used to evaluate the sensitivity of one-dimensional (1D) and two-dimensional (2D) fields to differences in surface-layer parameterization “constants”. Such differences reflect the range in the published values of the von Karman constant, Monin-Obukhov stability functions and the temperature roughness length at the surface. The sensitivity of 1D boundary-layer structure, and of 2D sea-breeze intensity, is generally less than that found in published comparisons of turbulence closure schemes.

  5. Connected Gaming: An Inclusive Perspective for Serious Gaming

    Directory of Open Access Journals (Sweden)

    Yasmin Kafai

    2017-09-01

Full Text Available Serious games should focus on connected gaming: combining the instructionist approach of having students play educational games for learning with the constructionist approach of having students make their own games for learning. Constructionist activities have always been part of the larger gaming ecology but have traditionally received far less attention than their instructionist counterparts. Future developments in serious gaming ought to promote this more inclusive perspective to better realize the full potential of gaming as a means for learning and for connecting children to technology and to each other. This potential for more meaningful connectivity can also address the persistent access and diversity issues long facing gaming cultures.

  6. Using polarimetric radar observations and probabilistic inference to develop the Bayesian Observationally-constrained Statistical-physical Scheme (BOSS), a novel microphysical parameterization framework

    Science.gov (United States)

    van Lier-Walqui, M.; Morrison, H.; Kumjian, M. R.; Prat, O. P.

    2016-12-01

Microphysical parameterization schemes have reached an impressive level of sophistication: numerous prognostic hydrometeor categories, and either size-resolved (bin) particle size distributions, or multiple prognostic moments of the size distribution. Yet, uncertainty in the model representation of microphysical processes, and in the effects of microphysics on numerical simulations of weather, has not decreased commensurately with the advanced sophistication of these schemes. We posit that this may be caused by unconstrained assumptions of these schemes, such as ad-hoc parameter value choices and structural uncertainties (e.g. choice of a particular form for the size distribution). We present work on development and observational constraint of a novel microphysical parameterization approach, the Bayesian Observationally-constrained Statistical-physical Scheme (BOSS), which seeks to address these sources of uncertainty. Our framework avoids unnecessary a priori assumptions, and instead relies on observations to provide probabilistic constraint of the scheme structure and sensitivities to environmental and microphysical conditions. We harness the rich microphysical information content of polarimetric radar observations to develop and constrain BOSS within a Bayesian inference framework using a Markov Chain Monte Carlo sampler (see Kumjian et al., this meeting for details on development of an associated polarimetric forward operator). Our work shows how knowledge of microphysical processes is provided by polarimetric radar observations of diverse weather conditions, and which processes remain highly uncertain, even after considering observations.
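The Bayesian constraint step can be sketched with a toy random-walk Metropolis sampler; the linear "process model", observations, and noise level below are stand-ins, not BOSS itself:

```python
import math
import random

random.seed(1)

# Toy analogue of observationally constraining a process-rate parameter:
# "observations" come from a linear process y = k_true * x + Gaussian noise,
# and we sample the posterior of k with a random-walk Metropolis sampler.
k_true, sigma = 2.0, 0.5
xs = [float(i) for i in range(1, 11)]
ys = [k_true * x + random.gauss(0.0, sigma) for x in xs]

def log_likelihood(k):
    # Gaussian likelihood of the observations given parameter k
    return sum(-0.5 * ((y - k * x) / sigma) ** 2 for x, y in zip(xs, ys))

k, logp = 1.0, log_likelihood(1.0)     # deliberately poor starting guess
samples = []
for step in range(20000):
    k_prop = k + random.gauss(0.0, 0.05)               # random-walk proposal
    logp_prop = log_likelihood(k_prop)
    if math.log(random.random()) < logp_prop - logp:   # accept/reject
        k, logp = k_prop, logp_prop
    if step >= 5000:                                   # discard burn-in
        samples.append(k)

k_mean = sum(samples) / len(samples)
print(f"posterior mean of k: {k_mean:.3f}")
```

In BOSS the "observations" are polarimetric radar quantities mapped through a forward operator, and the sampled parameters describe process rates and scheme structure rather than a single slope.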

  7. Methylphenidate Modulates Functional Network Connectivity to Enhance Attention

    OpenAIRE

    Rosenberg, Monica D.; Zhang, Sheng; Hsu, Wei-Ting; Scheinost, Dustin; Finn, Emily S.; Shen, Xilin; Constable, R. Todd; Li, Chiang-Shan R.; Chun, Marvin M.

    2016-01-01

    Recent work has demonstrated that human whole-brain functional connectivity patterns measured with fMRI contain information about cognitive abilities, including sustained attention. To derive behavioral predictions from connectivity patterns, our group developed a connectome-based predictive modeling (CPM) approach (Finn et al., 2015; Rosenberg et al., 2016). Previously using CPM, we defined a high-attention network, comprising connections positively correlated with performance on a sustained...
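A stripped-down sketch of the CPM recipe described above, on synthetic data and with an in-sample fit rather than the cross-validated procedure of Finn et al. (edge counts, subject counts, and the behavioral model are invented):

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical data: connectivity strength for 200 edges in 60 subjects, plus
# a behavioral score driven by the first 5 edges (all values synthetic).
n_subj, n_edges = 60, 200
edges = rng.normal(size=(n_subj, n_edges))
behavior = edges[:, :5].sum(axis=1) + rng.normal(0, 0.5, n_subj)

# 1) Correlate every edge with behavior across subjects.
r = np.array([np.corrcoef(edges[:, e], behavior)[0, 1] for e in range(n_edges)])

# 2) Select a "high-attention"-style network: edges positively correlated
#    with behavior above a threshold.
network = r > 0.25

# 3) Summarize each subject by summed strength in the selected network and
#    fit a one-parameter linear model (CPM proper uses leave-one-out
#    cross-validation to predict held-out subjects).
summary = edges[:, network].sum(axis=1)
slope, intercept = np.polyfit(summary, behavior, 1)
predicted = slope * summary + intercept
fit_r = np.corrcoef(predicted, behavior)[0, 1]
print(f"selected {network.sum()} edges, in-sample r = {fit_r:.2f}")
```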

  8. Continuation of connecting orbits in 3d-ODEs' (i) point-to-cycle connections.

    NARCIS (Netherlands)

    Doedel, E.J.; Kooi, B.W.; van Voorn, G.A.K.; Kuznetzov, Y.A.

    2008-01-01

We propose new methods for the numerical continuation of point-to-cycle connecting orbits in three-dimensional autonomous ODEs using projection boundary conditions. In our approach, the projection boundary conditions near the cycle are formulated using an eigenfunction of the associated adjoint

  9. Representing connectivity: quantifying effective habitat availability based on area and connectivity for conservation status assessment and recovery.

    Science.gov (United States)

    Neel, Maile; Tumas, Hayley R; Marsden, Brittany W

    2014-01-01

We apply a comprehensive suite of graph theoretic metrics to illustrate how landscape connectivity can be effectively incorporated into conservation status assessments and into setting conservation objectives. These metrics allow conservation practitioners to evaluate and quantify connectivity in terms of representation, resiliency, and redundancy, and the approach can be applied in spite of incomplete knowledge of species-specific biology and dispersal processes. We demonstrate the utility of the graph metrics by evaluating changes in distribution and connectivity that would result from implementing two conservation plans for three endangered plant species (Erigeron parishii, Acanthoscyphus parishii var. goodmaniana, and Eriogonum ovalifolium var. vineum) relative to connectivity under current conditions. Although the distributions of the species differ from one another in terms of extent and specific location of occupied patches within the study landscape, the spatial scales of potential connectivity in the existing networks were strikingly similar for Erigeron and Eriogonum, but differed for Acanthoscyphus. Specifically, patches of the first two species were more regularly distributed, whereas subsets of patches of Acanthoscyphus were clustered into more isolated components. Reserves based on US Fish and Wildlife Service critical habitat designation would not greatly reduce connectivity; they include 83-91% of the extant occurrences and >92% of the areal extent of each species, and effective connectivity remains within 10% of that of the whole network for all species. A Forest Service habitat management strategy excluded up to 40% of the occupied habitat of each species, resulting in both range reductions and loss of occurrences from the central portions of each species' distribution. Overall effective network connectivity was reduced to 62-74% of the full networks.
The distance at which each CHMS network first became fully connected was reduced relative to the full
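The distance at which a patch network first becomes fully connected equals the largest edge of a Euclidean minimum spanning tree over the patches; a minimal sketch with hypothetical patch coordinates:

```python
import math

def full_connection_distance(points):
    """Smallest inter-patch distance threshold at which the patch network
    becomes a single connected component: the largest edge of a Euclidean
    minimum spanning tree (computed here with Prim's algorithm)."""
    n = len(points)
    in_tree = [False] * n
    best = [math.inf] * n      # cheapest connection of each node to the tree
    best[0] = 0.0
    largest_edge = 0.0
    for _ in range(n):
        u = min((i for i in range(n) if not in_tree[i]), key=lambda i: best[i])
        in_tree[u] = True
        largest_edge = max(largest_edge, best[u])
        for v in range(n):
            if not in_tree[v]:
                d = math.dist(points[u], points[v])
                best[v] = min(best[v], d)
    return largest_edge

# Hypothetical patch centroids: two tight clusters separated by a 5-unit gap,
# so the network first becomes fully connected at threshold 5.
patches = [(0, 0), (1, 0), (0, 1), (6, 0), (7, 0), (6, 1)]
print(full_connection_distance(patches))   # 5.0
```

Removing patches from a reserve design and recomputing this threshold quantifies how the plan changes the dispersal distance required for full connectivity.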

  10. Estimating time-dependent connectivity in marine systems

    Science.gov (United States)

    Defne, Zafer; Ganju, Neil K.; Aretxabaleta, Alfredo

    2016-01-01

    Hydrodynamic connectivity describes the sources and destinations of water parcels within a domain over a given time. When combined with biological models, it can be a powerful concept to explain the patterns of constituent dispersal within marine ecosystems. However, providing connectivity metrics for a given domain is a three-dimensional problem: two dimensions in space to define the sources and destinations and a time dimension to evaluate connectivity at varying temporal scales. If the time scale of interest is not predefined, then a general approach is required to describe connectivity over different time scales. For this purpose, we have introduced the concept of a “retention clock” that highlights the change in connectivity through time. Using the example of connectivity between protected areas within Barnegat Bay, New Jersey, we show that a retention clock matrix is an informative tool for multitemporal analysis of connectivity.
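The retention-clock idea can be sketched as a source-to-destination fraction tracked across time steps; the three-particle trajectories below are made up:

```python
# Minimal sketch of a "retention clock"-style summary: for each elapsed time,
# the fraction of particles released in a source region that are found in each
# destination region. Trajectories here are invented region labels.
trajectories = {
    "p1": ["A", "A", "B", "B", "B"],   # region occupied at t = 0..4
    "p2": ["A", "A", "A", "B", "C"],
    "p3": ["A", "B", "C", "C", "C"],
}
regions = ["A", "B", "C"]
n_steps = 5

# connectivity[t][r] = fraction of particles (all released in A at t=0)
# located in region r after t time steps.
connectivity = []
for t in range(n_steps):
    row = {r: 0.0 for r in regions}
    for path in trajectories.values():
        row[path[t]] += 1.0 / len(trajectories)
    connectivity.append(row)

for t, row in enumerate(connectivity):
    print(t, {r: round(f, 2) for r, f in row.items()})
```

Scanning down the time axis of such a matrix, rather than fixing one time scale in advance, is what the retention clock visualizes.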

  11. Computational Approach for Securing Radiology-Diagnostic Data in Connected Health Network using High-Performance GPU-Accelerated AES.

    Science.gov (United States)

    Adeshina, A M; Hashim, R

    2017-03-01

Diagnostic radiology is a core and integral part of modern medicine, paving ways for the primary care physicians in disease diagnoses, treatments and therapy managements. Obviously, all recent standard healthcare procedures have immensely benefitted from the contemporary information technology revolutions, apparently revolutionizing the approaches to acquiring, storing and sharing of diagnostic data for efficient and timely diagnosis of diseases. Connected health network was introduced as an alternative to the ageing traditional concept in healthcare systems, improving hospital-physician connectivity and clinical collaborations. Undoubtedly, the modern medicinal approach has drastically improved healthcare, but at the expense of high computational cost and possible breach of diagnosis privacy. Consequently, a number of cryptographic techniques have recently been applied to clinical applications, but the challenge of successfully encrypting both image and textual data persists. Furthermore, reducing the encryption-decryption processing time of medical datasets, at a considerably lower computational cost and without jeopardizing the required security strength of the encryption algorithm, still remains an outstanding issue. This study proposes a secured radiology-diagnostic data framework for connected health networks using high-performance GPU-accelerated Advanced Encryption Standard. The study was evaluated with radiology image datasets consisting of brain MR and CT datasets obtained from the department of Surgery, University of North Carolina, USA, and the Swedish National Infrastructure for Computing. Sample patients' notes from the University of North Carolina, School of medicine at Chapel Hill were also used to evaluate the framework for its strength in encrypting-decrypting textual data in the form of medical reports. Significantly, the framework is not only able to accurately encrypt and decrypt medical image datasets, but it also

  12. Hierarchical multivariate covariance analysis of metabolic connectivity.

    Science.gov (United States)

    Carbonell, Felix; Charil, Arnaud; Zijdenbos, Alex P; Evans, Alan C; Bedell, Barry J

    2014-12-01

    Conventional brain connectivity analysis is typically based on the assessment of interregional correlations. Given that correlation coefficients are derived from both covariance and variance, group differences in covariance may be obscured by differences in the variance terms. To facilitate a comprehensive assessment of connectivity, we propose a unified statistical framework that interrogates the individual terms of the correlation coefficient. We have evaluated the utility of this method for metabolic connectivity analysis using [18F]2-fluoro-2-deoxyglucose (FDG) positron emission tomography (PET) data from the Alzheimer's Disease Neuroimaging Initiative (ADNI) study. As an illustrative example of the utility of this approach, we examined metabolic connectivity in angular gyrus and precuneus seed regions of mild cognitive impairment (MCI) subjects with low and high β-amyloid burdens. This new multivariate method allowed us to identify alterations in the metabolic connectome, which would not have been detected using classic seed-based correlation analysis. Ultimately, this novel approach should be extensible to brain network analysis and broadly applicable to other imaging modalities, such as functional magnetic resonance imaging (MRI).
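The decomposition motivating this framework (a correlation coefficient is covariance normalized by the two standard deviations) can be demonstrated directly; the group variances and covariance below are synthetic:

```python
import numpy as np

rng = np.random.default_rng(7)

# Two "groups" with the same covariance between regions but different
# variances: the correlation coefficient differs even though covariance
# (the term of interest here) does not. All values are synthetic.
def simulate(var_x, var_y, cov_xy, n=100000):
    cov = np.array([[var_x, cov_xy], [cov_xy, var_y]])
    x, y = rng.multivariate_normal([0, 0], cov, size=n).T
    return x, y

for label, var in [("group 1", 1.0), ("group 2", 4.0)]:
    x, y = simulate(var, var, 0.5)
    c = np.cov(x, y)[0, 1]
    r = np.corrcoef(x, y)[0, 1]
    print(f"{label}: covariance = {c:.2f}, correlation = {r:.2f}")
```

Both groups share covariance 0.5, yet the correlation drops from about 0.5 to about 0.125 as the variances grow, which is exactly the kind of masking a covariance-level test avoids.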

  13. A theory-based parameterization for heterogeneous ice nucleation and implications for the simulation of ice processes in atmospheric models

    Science.gov (United States)

    Savre, J.; Ekman, A. M. L.

    2015-05-01

    A new parameterization for heterogeneous ice nucleation constrained by laboratory data and based on classical nucleation theory is introduced. Key features of the parameterization include the following: a consistent and modular modeling framework for treating condensation/immersion and deposition freezing, the possibility to consider various potential ice nucleating particle types (e.g., dust, black carbon, and bacteria), and the possibility to account for an aerosol size distribution. The ice nucleating ability of each aerosol type is described using a contact angle (θ) probability density function (PDF). A new modeling strategy is described to allow the θ PDF to evolve in time so that the most efficient ice nuclei (associated with the lowest θ values) are progressively removed as they nucleate ice. A computationally efficient quasi Monte Carlo method is used to integrate the computed ice nucleation rates over both size and contact angle distributions. The parameterization is employed in a parcel model, forced by an ensemble of Lagrangian trajectories extracted from a three-dimensional simulation of a springtime low-level Arctic mixed-phase cloud, in order to evaluate the accuracy and convergence of the method using different settings. The same model setup is then employed to examine the importance of various parameters for the simulated ice production. Modeling the time evolution of the θ PDF is found to be particularly crucial; assuming a time-independent θ PDF significantly overestimates the ice nucleation rates. It is stressed that the capacity of black carbon (BC) to form ice in the condensation/immersion freezing mode is highly uncertain, in particular at temperatures warmer than -20°C. In its current version, the parameterization most likely overestimates ice initiation by BC.
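A toy version of the central computation, averaging a classical-nucleation-theory freezing probability over a contact-angle PDF with a quasi Monte Carlo rule; the rate prefactor, barrier constant, Gaussian theta PDF, and surface-area values are illustrative stand-ins, not the paper's laboratory-constrained values:

```python
import math

def form_factor(theta):
    """Geometric reduction of the homogeneous energy barrier, f(theta)."""
    c = math.cos(theta)
    return (2.0 + c) * (1.0 - c) ** 2 / 4.0

def rate(theta, j0=1e6, b=30.0):
    """Illustrative heterogeneous nucleation rate per unit particle surface."""
    return j0 * math.exp(-b * form_factor(theta))

def theta_pdf(theta, mu=math.radians(70), sd=math.radians(15)):
    """Assumed Gaussian contact-angle PDF (unnormalized on (0, pi))."""
    return math.exp(-0.5 * ((theta - mu) / sd) ** 2)

def frozen_fraction(area_time, n=4096):
    """P(frozen) averaged over the theta PDF; golden-ratio QMC on (0, pi)."""
    phi = (math.sqrt(5.0) - 1.0) / 2.0
    total = norm = 0.0
    for i in range(n):
        theta = ((i + 1) * phi % 1.0) * math.pi    # low-discrepancy abscissa
        w = theta_pdf(theta)
        total += w * (1.0 - math.exp(-rate(theta) * area_time))
        norm += w
    return total / norm

for at in (1e-10, 1e-8, 1e-6):
    print(f"area*time = {at:.0e} -> frozen fraction {frozen_fraction(at):.4f}")
```

In the scheme itself this average also runs over the aerosol size distribution, and the theta PDF is depleted in time as the most efficient (low-theta) ice nuclei are consumed.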

  14. Impact mitigation using kinematic constraints and the full space parameterization method

    Energy Technology Data Exchange (ETDEWEB)

    Morgansen, K.A.; Pin, F.G.

    1996-02-01

    A new method for mitigating unexpected impact of a redundant manipulator with an object in its environment is presented. Kinematic constraints are utilized with the recently developed method known as Full Space Parameterization (FSP). System performance criterion and constraints are changed at impact to return the end effector to the point of impact and halt the arm. Since large joint accelerations could occur as the manipulator is halted, joint acceleration bounds are imposed to simulate physical actuator limitations. Simulation results are presented for the case of a simple redundant planar manipulator.

  15. A whole-brain computational modeling approach to explain the alterations in resting-state functional connectivity during progression of Alzheimer's disease

    Directory of Open Access Journals (Sweden)

    Murat Demirtaş

    2017-01-01

Full Text Available Alzheimer's disease (AD) is the most common dementia, with dramatic consequences. Research in structural and functional neuroimaging has shown altered brain connectivity in AD. In this study, we investigated the whole-brain resting state functional connectivity (FC) of subjects with preclinical Alzheimer's disease (PAD), mild cognitive impairment due to AD (MCI), and mild dementia due to Alzheimer's disease (AD), the impact of APOE4 carriership, as well as its relation to variations in core AD CSF biomarkers. Whole-brain synchronization decreased monotonically over the course of disease progression. Furthermore, in AD patients we found widespread significant decreases in functional connectivity (FC) strengths, particularly in the brain regions with high global connectivity. We employed a whole-brain computational modeling approach to study the mechanisms underlying these alterations. To characterize the causal interactions between brain regions, we estimated the effective connectivity (EC) in the model. We found that the significant EC differences in AD were primarily located in the left temporal lobe. Then, we systematically manipulated the underlying dynamics of the model to investigate simulated changes in FC based on the healthy control subjects. Furthermore, we found distinct patterns involving CSF biomarkers of amyloid-beta (Aβ1−42), total tau (t-tau), and phosphorylated tau (p-tau). CSF Aβ1−42 was associated with the contrast between healthy control subjects and clinical groups. Nevertheless, tau CSF biomarkers were associated with the variability in whole-brain synchronization and sensory integration regions. These associations were robust across clinical groups, unlike the associations that were found for CSF Aβ1−42. APOE4 carriership showed no significant correlations with the connectivity measures.

  16. Towards Quantitative Optical Cross Sections in Entomological Laser Radar - Potential of Temporal and Spherical Parameterizations for Identifying Atmospheric Fauna.

    Directory of Open Access Journals (Sweden)

    Mikkel Brydegaard

    Full Text Available In recent years, the field of remote sensing of birds and insects in the atmosphere (the aerial fauna has advanced considerably, and modern electro-optic methods now allow the assessment of the abundance and fluxes of pests and beneficials on a landscape scale. These techniques have the potential to significantly increase our understanding of, and ability to quantify and manage, the ecological environment. This paper presents a concept whereby laser radar observations of atmospheric fauna can be parameterized and table values for absolute cross sections can be catalogued to allow for the study of focal species such as disease vectors and pests. Wing-beat oscillations are parameterized with a discrete set of harmonics and the spherical scatter function is parameterized by a reduced set of symmetrical spherical harmonics. A first order spherical model for insect scatter is presented and supported experimentally, showing angular dependence of wing beat harmonic content. The presented method promises to give insights into the flight heading directions of species in the atmosphere and has the potential to shed light onto the km-range spread of pests and disease vectors.
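The harmonic parameterization of wing-beat oscillations can be sketched with a synthetic signal: recover the fundamental and the amplitudes of a discrete set of harmonics from the spectrum (the 120 Hz fundamental and amplitude values are invented, not measured):

```python
import numpy as np

# Toy version of the temporal parameterization: a wing-beat signal is modeled
# as a fundamental plus a discrete set of harmonics, and the harmonic content
# is recovered from the spectrum.
fs, n = 2000.0, 1000                      # sample rate (Hz), sample count
t = np.arange(n) / fs
signal = (1.0 * np.sin(2 * np.pi * 120 * t)     # fundamental
          + 0.5 * np.sin(2 * np.pi * 240 * t)   # 2nd harmonic
          + 0.2 * np.sin(2 * np.pi * 360 * t))  # 3rd harmonic

spectrum = np.abs(np.fft.rfft(signal)) / (n / 2)   # amplitude spectrum
freqs = np.fft.rfftfreq(n, d=1 / fs)

f0 = freqs[np.argmax(spectrum[1:]) + 1]            # skip the DC bin
harmonics = [spectrum[np.argmin(np.abs(freqs - k * f0))] for k in (1, 2, 3)]
print(f"fundamental: {f0:.0f} Hz, harmonic amplitudes: "
      + ", ".join(f"{a:.2f}" for a in harmonics))
```

In a lidar return the envelope and scattering cross section modulate this signal, but the catalogued quantity is essentially this reduced set of harmonic coefficients per species.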

  17. A New WRF-Chem Treatment for Studying Regional Scale Impacts of Cloud-Aerosol Interactions in Parameterized Cumuli

    Energy Technology Data Exchange (ETDEWEB)

    Berg, Larry K.; Shrivastava, ManishKumar B.; Easter, Richard C.; Fast, Jerome D.; Chapman, Elaine G.; Liu, Ying

    2015-01-01

A new treatment of cloud-aerosol interactions within parameterized shallow and deep convection has been implemented in WRF-Chem that can be used to better understand the aerosol lifecycle over regional to synoptic scales. The modifications to the model to represent cloud-aerosol interactions include treatment of the cloud droplet number mixing ratio; key cloud microphysical and macrophysical parameters (including the updraft fractional area, updraft and downdraft mass fluxes, and entrainment) averaged over the population of shallow clouds, or a single deep convective cloud; and vertical transport, activation/resuspension, aqueous chemistry, and wet removal of aerosol and trace gases in warm clouds. These changes have been implemented in both the WRF-Chem chemistry packages as well as the Kain-Fritsch cumulus parameterization that has been modified to better represent shallow convective clouds. Preliminary testing of the modified WRF-Chem has been completed using observations from the Cumulus Humilis Aerosol Processing Study (CHAPS) as well as a high-resolution simulation that does not include parameterized convection. The simulation results are used to investigate the impact of cloud-aerosol interactions on the regional scale transport of black carbon (BC), organic aerosol (OA), and sulfate aerosol. Based on the simulations presented here, changes in the column integrated BC can be as large as -50% when cloud-aerosol interactions are considered (due largely to wet removal), or as large as +35% for sulfate in non-precipitating conditions due to the sulfate production in the parameterized clouds. The modifications to WRF-Chem version 3.2.1 are found to account for changes in the cloud drop number concentration (CDNC) and changes in the chemical composition of cloud-drop residuals in a way that is consistent with observations collected during CHAPS. 
Efforts are currently underway to port the changes described here to WRF-Chem version 3.5, and it is anticipated that they

  18. Bioavailability of radiocaesium in soil: parameterization using soil characteristics

    Energy Technology Data Exchange (ETDEWEB)

    Syssoeva, A.A.; Konopleva, I.V. [Russian Institute of Agricultural Radiology and Agroecology, Obninsk (Russian Federation)

    2004-07-01

It has been shown that radiocaesium availability to plants is strongly influenced by soil properties. For the best evaluation of TFs it is necessary to use mechanistic models that predict radionuclide uptake by plants based on consideration of sorption-desorption and fixation-remobilization of the radionuclide in the soil, as well as the root uptake processes controlled by the plant. The aim of the research was to characterise typical Russian soils on the basis of radiocaesium availability. A parameter of radiocaesium availability in soils (A) has been developed, which is based on the radiocaesium exchangeability; CF, the concentration factor, which is the ratio of radiocaesium in the plant to that in soil solution; and K{sub Dex}, the exchangeable solid-liquid distribution coefficient of radiocaesium. The approach was tested for a wide range of Russian soils using radiocaesium uptake data from a barley pot trial and the parameters of radiocaesium bioavailability. Soils were collected from the arable horizons in different soil-climatic zones of Russia and artificially contaminated with {sup 137}Cs. The classification of soils in terms of radiocaesium availability corresponds quite well to the observed linear relationship between the {sup 137}Cs TF for barley and A. K{sub Dex} is related to the soil radiocaesium interception potential (RIP), which was found to be positively and strongly related to clay and physical clay (<0.01 mm) content. The {sup 137}Cs exchangeability was found to be closely related to the soil vermiculite content, which was estimated by the method of Cs{sup +} fixation. It is shown that radiocaesium availability to plants in the soils under study can be parameterized through mineralogical soil characteristics: clay content (%) and soil vermiculite content. (author)

  19. Soft Computing Methods for Disulfide Connectivity Prediction.

    Science.gov (United States)

    Márquez-Chamorro, Alfonso E; Aguilar-Ruiz, Jesús S

    2015-01-01

    The problem of protein structure prediction (PSP) is one of the main challenges in structural bioinformatics. To tackle this problem, PSP can be divided into several subproblems. One of these subproblems is the prediction of disulfide bonds. The disulfide connectivity prediction problem consists in identifying which nonadjacent cysteines would be cross-linked from all possible candidates. Determining the disulfide bond connectivity between the cysteines of a protein is desirable as a previous step of the 3D PSP, as the protein conformational search space is highly reduced. The most representative soft computing approaches for the disulfide bonds connectivity prediction problem of the last decade are summarized in this paper. Certain aspects, such as the different methodologies based on soft computing approaches (artificial neural network or support vector machine) or features of the algorithms, are used for the classification of these methods.

  20. Measurements of brain microstructure and connectivity with diffusion MRI

    Directory of Open Access Journals (Sweden)

    Ching-Po Lin

    2011-12-01

    By probing the direction-dependent diffusivity of water molecules, diffusion MRI has shown its capability to reflect microstructural tissue status and to estimate neural orientation and pathways in the living brain. This approach has supplied novel insights into in-vivo human brain connections. By detecting connection patterns, the anatomical architecture and structural integrity between cortical regions or subcortical nuclei in the living human brain can be readily identified. It thus opens a new window on brain connectivity studies and disease processes. In recent years, there has been growing interest in exploring the connectivity patterns of the human brain. Specifically, the use of noninvasive neuroimaging data and graph theoretical analysis has provided important insights into the anatomical connections and topological pattern of human brain structural networks in vivo. Here, we review the progress of this important technique and the recent methodological and application studies utilizing graph theoretical approaches on brain structural networks with structural MRI and diffusion MRI.

  1. Connected Vehicle Pilot Deployment Program phase 1 : comprehensive deployment plan : New York City : volume 1 : technical application : part I : technical and management approach.

    Science.gov (United States)

    2016-08-01

    This document describes the Deployment Plan for the New York City Department of Transportation (NYC) Connected Vehicle Pilot Deployment (CVPD) Project. This plan describes the approach to complete Phase 2 Design/Build/Test, and Phase 3 Operate and Ma...

  2. The multifacet graphically contracted function method. II. A general procedure for the parameterization of orthogonal matrices and its application to arc factors

    Science.gov (United States)

    Shepard, Ron; Gidofalvi, Gergely; Brozell, Scott R.

    2014-08-01

    Practical algorithms are presented for the parameterization of orthogonal matrices Q ∈ R^{m×n} in terms of the minimal number of essential parameters {φ}. Both square (n = m) and rectangular (n < m) matrices are treated, with applications such as the representation of the arc factors in the multifacet graphically contracted function method and the representation of orbital coefficients in SCF and DFT methods. The parameterizations are represented formally using products of elementary Householder reflector matrices. Standard mathematical libraries, such as LAPACK, may be used to perform the basic low-level factorization, reduction, and other algebraic operations. Some care must be taken with the choice of phase factors in order to ensure stability and continuity. The transformation of gradient arrays between the Q and {φ} parameterizations is also considered. Operation counts for all factorizations and transformations are determined. Numerical results are presented which demonstrate the robustness, stability, and accuracy of these algorithms.
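The Householder-reflector construction described above can be illustrated outside any quantum-chemistry context. The NumPy sketch below is not the authors' code; the function names and sign convention are illustrative. It reduces an m×n matrix with n elementary reflectors, stores the reflector vectors (which play the role of the essential parameters), and rebuilds the orthonormal factor Q from them alone:

```python
import numpy as np

def householder_factor(A):
    """Reduce A (m x n, m >= n) with n Householder reflectors.
    The reflector vectors are the stored (essential) parameters."""
    R = A.astype(float).copy()
    m, n = R.shape
    vs = []
    for k in range(n):
        x = R[k:, k].copy()
        # choose the sign of alpha opposite to x[0] to avoid cancellation
        alpha = -np.copysign(np.linalg.norm(x), x[0] if x[0] != 0 else 1.0)
        v = x
        v[0] -= alpha
        v /= np.linalg.norm(v)
        vs.append(v)
        R[k:, :] -= 2.0 * np.outer(v, v @ R[k:, :])   # apply H = I - 2 v v^T
    return vs, np.triu(R[:n, :])

def rebuild_Q(vs, m, n):
    """Reassemble the m x n orthonormal factor from the reflectors alone."""
    Q = np.eye(m)[:, :n]
    for k in reversed(range(n)):
        v = vs[k]
        Q[k:, :] -= 2.0 * np.outer(v, v @ Q[k:, :])
    return Q

rng = np.random.default_rng(0)
A = rng.standard_normal((6, 3))
vs, R = householder_factor(A)
Q = rebuild_Q(vs, 6, 3)
assert np.allclose(Q.T @ Q, np.eye(3), atol=1e-12)   # orthonormal columns
assert np.allclose(Q @ R, A)                          # A = QR round trip
```

Each reflector vector has unit norm, so the free components number m-k-1 for step k; summed over steps this recovers the familiar mn - n(n+1)/2 count of essential parameters for an m×n orthonormal matrix.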

  3. Development and testing of an aerosol-stratus cloud parameterization scheme for middle and high latitudes

    Energy Technology Data Exchange (ETDEWEB)

    Olsson, P.Q.; Meyers, M.P.; Kreidenweis, S.; Cotton, W.R. [Colorado State Univ., Fort Collins, CO (United States)

    1996-04-01

    The aim of this new project is to develop an aerosol/cloud microphysics parameterization of mixed-phase stratus and boundary layer clouds. Our approach is to create, test, and implement a bulk-microphysics/aerosol model using data from Atmospheric Radiation Measurement (ARM) Cloud and Radiation Testbed (CART) sites and large-eddy simulation (LES) explicit bin-resolving aerosol/microphysics models. The primary objectives of this work are twofold. First, we need the prediction of number concentrations of activated aerosol which are transferred to the droplet spectrum, so that the aerosol population directly affects the cloud formation and microphysics. Second, we plan to couple the aerosol model to the gas and aqueous-chemistry module that will drive the aerosol formation and growth. We begin by exploring the feasibility of performing cloud-resolving simulations of Arctic stratus clouds over the North Slope CART site. These simulations using Colorado State University's regional atmospheric modeling system (RAMS) will be useful in designing the structure of the cloud-resolving model and in interpreting data acquired at the North Slope site.

  4. Physically sound parameterization of incomplete ionization in aluminum-doped silicon

    Directory of Open Access Journals (Sweden)

    Heiko Steinkemper

    2016-12-01

    Incomplete ionization is an important issue when modeling silicon devices featuring aluminum-doped p+ (Al-p+) regions. Aluminum has a rather deep state in the band gap compared to boron or phosphorus, causing strong incomplete ionization. In this paper, we considerably improve our recent parameterization [Steinkemper et al., J. Appl. Phys. 117, 074504 (2015)]. On the one hand, we found a fundamental criterion to further reduce the number of free parameters in our fitting procedure. On the other hand, we address a mistake in the original publication of the incomplete ionization formalism in Altermatt et al., J. Appl. Phys. 100, 113715 (2006).

  5. Effect of electrocardiogram interference on cortico-cortical connectivity analysis and a possible solution.

    Science.gov (United States)

    Govindan, R B; Kota, Srinivas; Al-Shargabi, Tareq; Massaro, An N; Chang, Taeun; du Plessis, Adre

    2016-09-01

    Electroencephalogram (EEG) signals are often contaminated by electrocardiogram (ECG) interference, which affects the quantitative characterization of EEG. We propose null-coherence, a frequency-based approach, to attenuate the ECG interference in EEG using simultaneously recorded ECG as a reference signal. After validating the proposed approach using numerically simulated data, we apply it to EEG recorded from six newborns receiving therapeutic hypothermia for neonatal encephalopathy. We compare our approach with independent component analysis (ICA), a previously proposed approach to attenuate ECG artifacts in the EEG signal. The power spectrum and cortico-cortical connectivity of the ECG-attenuated EEG were compared against those of the raw EEG. The null-coherence approach attenuated the ECG contamination without leaving any residue of the ECG in the EEG. We show that the null-coherence approach performs better than ICA in attenuating the ECG contamination without enhancing cortico-cortical connectivity. Our analysis suggests that using ICA to remove ECG contamination from the EEG suffers from redistribution problems, whereas the null-coherence approach does not. We show that both the null-coherence and ICA approaches attenuate the ECG contamination. However, the EEG obtained after ICA cleaning displayed higher cortico-cortical connectivity compared with that obtained using the null-coherence approach. This suggests that null-coherence is superior to ICA in attenuating the ECG interference in EEG for cortico-cortical connectivity analysis. Copyright © 2016 Elsevier B.V. All rights reserved.

  6. Linear Approach for Synchronous State Stability in Fully Connected PLL Networks

    Directory of Open Access Journals (Sweden)

    José R. C. Piqueira

    2008-01-01

    Synchronization is an essential feature for the use of digital systems in telecommunication networks, integrated circuits, and manufacturing automation. Formerly, master-slave (MS) architectures, with precise master clock generators sending signals to phase-locked loops (PLLs) working as slave oscillators, were considered the best solution. Nowadays, the development of wireless networks with dynamical connectivity and the increase of the size and operation frequency of integrated circuits suggest that the distribution of clock signals could be more efficient if distributed solutions with fully connected oscillators are used. Here, fully connected networks with second-order PLLs as nodes are considered. In previous work, it was studied how the synchronous state frequency for this type of network depends on the node parameters and delays, and an expression for the long-term frequency was derived (Piqueira, 2006). Here, by taking the first term of the Taylor series expansion of the dynamical system description, it is shown that for a generic network with N nodes, the synchronous state is locally asymptotically stable.
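The linearization argument above (keep only the first Taylor term and examine the eigenvalues of the resulting linear system) can be sketched generically. The snippet below uses a toy fully connected phase-coupling model, not the paper's second-order PLL equations, purely to illustrate the local stability test around a synchronous state:

```python
import numpy as np

def jacobian(f, x0, eps=1e-6):
    """Numerically linearize f around x0 (first term of the Taylor series)."""
    n = len(x0)
    J = np.zeros((n, n))
    fx = f(x0)
    for i in range(n):
        dx = np.zeros(n)
        dx[i] = eps
        J[:, i] = (f(x0 + dx) - fx) / eps
    return J

# Toy fully connected phase model (illustrative only): each node is
# pulled toward the phases of all the others with gain K.
K = 2.0
def f(phi):
    return np.array([K * np.mean(np.sin(phi - phi[i])) for i in range(len(phi))])

phi_sync = np.zeros(4)               # synchronous state: all phases equal
eigs = np.linalg.eigvals(jacobian(f, phi_sync))
# locally asymptotically stable (up to the neutral common-phase mode)
# if every remaining eigenvalue has negative real part
assert max(e.real for e in eigs) <= 1e-6
```

For this toy model the spectrum at the synchronous state is one zero eigenvalue (the common phase shift) and N-1 eigenvalues equal to -K, so perturbations of the phase differences decay.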

  7. Sensitivity of U.S. summer precipitation to model resolution and convective parameterizations across gray zone resolutions

    Science.gov (United States)

    Gao, Yang; Leung, L. Ruby; Zhao, Chun; Hagos, Samson

    2017-03-01

    Simulating summer precipitation is a significant challenge for climate models that rely on cumulus parameterizations to represent moist convection processes. Motivated by recent advances in computing that support very high-resolution modeling, this study aims to systematically evaluate the effects of model resolution and convective parameterizations across the gray zone resolutions. Simulations using the Weather Research and Forecasting model were conducted at grid spacings of 36 km, 12 km, and 4 km for two summers over the conterminous U.S. The convection-permitting simulations at 4 km grid spacing are most skillful in reproducing the observed precipitation spatial distributions and diurnal variability. Notable differences are found between simulations with the traditional Kain-Fritsch (KF) and the scale-aware Grell-Freitas (GF) convection schemes, with the latter more skillful in capturing the nocturnal timing in the Great Plains and North American monsoon regions. The GF scheme also simulates a smoother transition from convective to large-scale precipitation as resolution increases, resulting in reduced sensitivity to model resolution compared to the KF scheme. Nonhydrostatic dynamics has a positive impact on precipitation over complex terrain even at 12 km and 36 km grid spacings. With nudging of the winds toward observations, we show that the conspicuous warm biases in the Southern Great Plains are related to precipitation biases induced by large-scale circulation biases, which are insensitive to model resolution. Overall, notable improvements in simulating summer rainfall and its diurnal variability through convection-permitting modeling and scale-aware parameterizations suggest promising avenues for improving climate simulations of water cycle processes.

  8. Effects of microphysics parameterization on simulations of summer heavy precipitation in the Yangtze-Huaihe Region, China

    Science.gov (United States)

    Kan, Yu; Chen, Bo; Shen, Tao; Liu, Chaoshun; Qiao, Fengxue

    2017-09-01

    It has been a longstanding problem for current weather/climate models to accurately predict summer heavy precipitation over the Yangtze-Huaihe Region (YHR), which is the key flood-prone area in China with an intensive population and developed economy. Large uncertainty has been identified with model deficiencies in representing precipitation processes such as microphysics and cumulus parameterizations. This study focuses on examining the effects of microphysics parameterization on the simulation of different types of heavy precipitation over the YHR, taking into account two different cumulus schemes. All regional persistent heavy precipitation events over the YHR during 2008-2012 are classified into three types according to their weather patterns: type I associated with a stationary front, type II directly associated with a typhoon or with its spiral rain band, and type III associated with strong convection along the edge of the Subtropical High. Sixteen groups of experiments are conducted for three selected cases of different types and a local short-time rainstorm in Shanghai, using the WRF model with eight microphysics and two cumulus schemes. Results show that microphysics parameterization has large but different impacts on the location and intensity of regional heavy precipitation centers. The Ferrier (microphysics) - BMJ (cumulus) scheme and the Thompson (microphysics) - KF (cumulus) scheme most realistically simulate the rain bands with the center location and intensity for types I and II, respectively. For type III, the Lin microphysics scheme shows advantages in regional persistent cases over the YHR, while the WSM5 microphysics scheme is better in the local short-term case, both with the BMJ cumulus scheme.

  9. Real-time image parameterization in high energy gamma-ray astronomy using transputers

    International Nuclear Information System (INIS)

    Punch, M.; Fegan, D.J.

    1991-01-01

    Recently, significant advances in Very-High-Energy gamma-ray astronomy have been made by parameterization of the Cherenkov images arising from gamma-ray initiated showers in the Earth's atmosphere. A prototype system to evaluate the use of Transputers as parallel-processing elements for real-time analysis of data from a Cherenkov imaging camera is described in this paper. The operation of and benefits resulting from such a system are described, and the viability of an application of the prototype system is discussed

  10. The Histogram-Area Connection

    Science.gov (United States)

    Gratzer, William; Carpenter, James E.

    2008-01-01

    This article demonstrates an alternative approach to the construction of histograms--one based on the notion of using area to represent relative density in intervals of unequal length. The resulting histograms illustrate the connection between the area of the rectangles associated with particular outcomes and the relative frequency (probability)…
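The area-based construction described above is easy to reproduce numerically. In the sketch below (the data and bin edges are invented for illustration), bar heights over unequal-width intervals are computed as relative density, so that each bar's area, not its height, equals the relative frequency:

```python
import numpy as np

# Unequal-width bins: the bar *area*, not the height, represents
# the relative frequency of outcomes in that interval.
data = np.array([1, 2, 2, 3, 5, 6, 7, 8, 9, 9, 9, 10], dtype=float)
bins = np.array([0.0, 4.0, 6.0, 10.0])           # widths 4, 2, 4

counts, _ = np.histogram(data, bins=bins)
rel_freq = counts / counts.sum()                  # relative frequency per bin
heights = rel_freq / np.diff(bins)                # density = rel. freq / width

# area of each bar recovers its relative frequency, and total area is 1
assert np.allclose(heights * np.diff(bins), rel_freq)
assert np.isclose((heights * np.diff(bins)).sum(), 1.0)
```

This is the same normalization NumPy applies with np.histogram(data, bins=bins, density=True): comparing bar heights alone would overstate wide intervals, whereas areas stay proportional to probability.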

  11. Differential approach to planning of training loads in person with connective tissue dysplasia symptoms

    Directory of Open Access Journals (Sweden)

    Олег Борисович Неханевич

    2015-05-01

    Introduction. When addressing questions of clearance for, and planning of, training and competitive loads, persons with signs of connective tissue dysplasia are of special interest. Aim. To improve the medical support of the training process for athletes with signs of connective tissue dysplasia. Materials and methods. 188 athletes were examined, including 59 with signs of connective tissue dysplasia, who formed the study group. Signs of systemic involvement of connective tissue were determined using anthropometry and somatoscopy. Echocardiographic examination was conducted for all athletes at rest, during bicycle ergometry, and in the recovery period. Results. Underweight, acromacria, joint hypermobility, and flat feet were frequently observed together with signs of systemic involvement of connective tissue. During bicycle ergometry, deterioration of myocardial relaxation during diastole was established in study-group athletes while performing loads of average power, which led to a drop in ejection fraction at submaximal levels of exertion. Conclusions. The presence of connective tissue dysplasia in athletes, with its differing prognoses, requires sports physicians to carry out an in-depth analysis and differential diagnosis of its clinical forms in order to prevent complications during training and competitive loads. Early signs of cardiac strain during physical activity in athletes with signs of connective tissue dysplasia were symptoms of impaired myocardial relaxation in indicators of diastolic heart function. Ejection fraction at rest remained at normal levels

  12. Simulation of heavy precipitation episode over eastern Peninsular Malaysia using MM5: sensitivity to cumulus parameterization schemes

    Science.gov (United States)

    Salimun, Ester; Tangang, Fredolin; Juneng, Liew

    2010-06-01

    A comparative study has been conducted to investigate the skill of four convection parameterization schemes, namely the Anthes-Kuo (AK), the Betts-Miller (BM), the Kain-Fritsch (KF), and the Grell (GR) schemes, in the numerical simulation of an extreme precipitation episode over eastern Peninsular Malaysia using the Pennsylvania State University-National Center for Atmospheric Research (PSU-NCAR) Fifth Generation Mesoscale Model (MM5). The event is a commonly occurring westward-propagating tropical depression weather system during boreal winter, resulting from an interaction between a cold surge and the quasi-stationary Borneo vortex. The model setup and other physical parameterizations are identical in all experiments, and hence any difference in the simulation performance could be associated with the cumulus parameterization scheme used. From the predicted rainfall and structure of the storm, it is clear that the BM scheme has an edge over the other schemes. The rainfall intensity and spatial distribution were reasonably well simulated compared to observations. The BM scheme was also better in resolving the horizontal and vertical structures of the storm. Most of the rainfall simulated by the BM simulation was of the convective type. The failure of the other schemes (AK, GR and KF) in simulating the event may be attributed to the trigger function, closure assumption, and precipitation scheme. On the other hand, the appropriateness of the BM scheme for this episode may not be generalized for other episodes or convective environments.

  13. Timescale stretch parameterization of Type Ia supernova B-band light curves

    International Nuclear Information System (INIS)

    Goldhaber, G.; Groom, D.E.; Kim, A.; Aldering, G.; Astier, P.; Conley, A.; Deustua, S.E.; Ellis, R.; Fabbro, S.; Fruchter, A.S.; Goobar, A.; Hook, I.; Irwin, M.; Kim, M.; Knop, R.A.; Lidman, C.; McMahon, R.; Nugent, P.E.; Pain, R.; Panagia, N.; Pennypacker, C.R.; Perlmutter, S.; Ruiz-Lapuente, P.; Schaefer, B.; Walton, N.A.; York, T.

    2001-01-01

    R-band intensity measurements along the light curves of Type Ia supernovae discovered by the Supernova Cosmology Project (SCP) are fitted in brightness to templates allowing as a free parameter the time-axis width factor w ≡ s(1+z). The data points are then individually aligned along the time axis, normalized and K-corrected back to the rest frame, after which the nearly 1300 normalized intensity measurements are found to lie on a well-determined common rest-frame B-band curve which we call the ''composite curve.'' The same procedure is applied to 18 low-redshift Calan/Tololo SNe with z < 0.11; these nearly 300 B-band photometry points are found to lie on the composite curve equally well. The SCP search technique produces several measurements before maximum light for each supernova. We demonstrate that the linear stretch factor, s, which parameterizes the light-curve timescale, appears independent of z, and applies equally well to the declining and rising parts of the light curve. In fact, the B-band template that best fits this composite curve fits the individual supernova photometry data when stretched by a factor s with χ²/DoF ∼ 1, thus as well as any parameterization can, given the current data sets. The measurement of the date of explosion, however, is model dependent and not tightly constrained by the current data. We also demonstrate the 1 + z light-curve time-axis broadening expected from cosmological expansion. This argues strongly against alternative explanations, such as tired light, for the redshift of distant objects
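The width-factor relation w = s(1+z) described above amounts to a simple rescaling of the light-curve time axis. A minimal illustration (the epochs and numbers are invented, and this is not the SCP fitting code) recovers the intrinsic stretch s from the fitted observer-frame width and maps observed epochs onto the rest-frame template axis:

```python
import numpy as np

# The observer-frame light-curve width combines cosmological time
# dilation (1+z) with the intrinsic stretch s:  w = s * (1 + z)
def rest_frame_epochs(t_obs, t_max, z, w):
    """Map observer-frame epochs onto the unit-stretch rest-frame axis."""
    s = w / (1.0 + z)                       # intrinsic stretch factor
    t_rest = (t_obs - t_max) / (1.0 + z) / s
    return t_rest, s

t_obs = np.array([-10.0, 0.0, 20.0])        # days, observer frame (invented)
t_rest, s = rest_frame_epochs(t_obs, t_max=0.0, z=0.5, w=1.65)
assert np.isclose(s, 1.1)                   # w/(1+z) = 1.65/1.5
assert np.allclose(t_rest, t_obs / 1.65)    # total compression by s(1+z)
```

A broad, slowly declining supernova (s > 1) and cosmological dilation both widen the observed light curve; only the (1+z) part is fixed by the redshift, which is why the fitted s can be tested for independence of z.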

  14. Assessing uncertainty and sensitivity of model parameterizations and parameters in WRF affecting simulated surface fluxes and land-atmosphere coupling over the Amazon region

    Science.gov (United States)

    Qian, Y.; Wang, C.; Huang, M.; Berg, L. K.; Duan, Q.; Feng, Z.; Shrivastava, M. B.; Shin, H. H.; Hong, S. Y.

    2016-12-01

    This study aims to quantify the relative importance and uncertainties of different physical processes and parameters in affecting simulated surface fluxes and land-atmosphere coupling strength over the Amazon region. We used two-legged coupling metrics, which include both terrestrial (soil moisture to surface fluxes) and atmospheric (surface fluxes to atmospheric state or precipitation) legs, to diagnose the land-atmosphere interaction and coupling strength. Observations made using the Department of Energy's Atmospheric Radiation Measurement (ARM) Mobile Facility during the GoAmazon field campaign, together with satellite and reanalysis data, are used to evaluate model performance. To quantify the uncertainty in physical parameterizations, we performed a 120-member ensemble of simulations with the WRF model using a stratified experimental design including 6 cloud microphysics, 3 convection, 6 PBL and surface layer, and 3 land surface schemes. A multiple-way analysis of variance approach is used to quantitatively analyze the inter- and intra-group (scheme) means and variances. To quantify parameter sensitivity, we conducted an additional 256 WRF simulations in which an efficient sampling algorithm is used to explore the multiple-dimensional parameter space. Three uncertainty quantification approaches are applied for sensitivity analysis (SA) of multiple variables of interest to 20 selected parameters in the YSU PBL and MM5 surface layer schemes. Results show consistent parameter sensitivity across different SA methods. We found that 5 out of 20 parameters contribute more than 90% of the total variance, and first-order effects dominate compared to the interaction effects. Results of this uncertainty quantification study serve as guidance for better understanding the roles of different physical processes in land-atmosphere interactions, quantifying model uncertainties from various sources such as physical processes, parameters and structural errors, and providing insights for
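The first-order (main-effect) contributions referred to above can be estimated with a simple conditional-mean variance decomposition, S_i = Var(E[y|x_i]) / Var(y). The sketch below is a generic binning estimator on an invented additive toy model, not the study's SA machinery:

```python
import numpy as np

def first_order_index(x, y, nbins=20):
    """Estimate S_i = Var(E[y|x_i]) / Var(y) by binning the parameter
    and averaging the output within each bin."""
    edges = np.quantile(x, np.linspace(0.0, 1.0, nbins + 1))
    idx = np.clip(np.digitize(x, edges[1:-1]), 0, nbins - 1)
    cond_means = np.array([y[idx == b].mean() for b in range(nbins)])
    weights = np.array([(idx == b).mean() for b in range(nbins)])
    grand = np.sum(weights * cond_means)
    return np.sum(weights * (cond_means - grand) ** 2) / y.var()

rng = np.random.default_rng(1)
x1 = rng.uniform(-1.0, 1.0, 100_000)
x2 = rng.uniform(-1.0, 1.0, 100_000)
y = 3.0 * x1 + 1.0 * x2          # additive model: variance split ~9:1
s1 = first_order_index(x1, y)    # expect ~0.9
s2 = first_order_index(x2, y)    # expect ~0.1
assert s1 > 0.85 and s2 < 0.15
assert s1 + s2 > 0.95            # additive model: no interaction effects
```

For a purely additive model the first-order indices sum to nearly one; a shortfall in that sum is the usual signature of the interaction effects that the study found to be small.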

  15. Approaches in highly parameterized inversion—PEST++ Version 3, a Parameter ESTimation and uncertainty analysis software suite optimized for large environmental models

    Science.gov (United States)

    Welter, David E.; White, Jeremy T.; Hunt, Randall J.; Doherty, John E.

    2015-09-18

    The PEST++ Version 1 object-oriented parameter estimation code is here extended to Version 3 to incorporate additional algorithms and tools to further improve support for large and complex environmental modeling problems. PEST++ Version 3 includes the Gauss-Marquardt-Levenberg (GML) algorithm for nonlinear parameter estimation, Tikhonov regularization, integrated linear-based uncertainty quantification, options of integrated TCP/IP based parallel run management or external independent run management by use of a Version 2 update of the GENIE Version 1 software code, and utilities for global sensitivity analyses. The Version 3 code design is consistent with PEST++ Version 1 and continues to be designed to lower the barriers of entry for users as well as developers while providing efficient and optimized algorithms capable of accommodating large, highly parameterized inverse problems. As such, this effort continues the original focus of (1) implementing the most popular and powerful features of the PEST software suite in a fashion that is easy for novice or experienced modelers to use and (2) developing a software framework that is easy to extend.

  16. Deep Appearance Models: A Deep Boltzmann Machine Approach for Face Modeling

    OpenAIRE

    Duong, Chi Nhan; Luu, Khoa; Quach, Kha Gia; Bui, Tien D.

    2016-01-01

    The "interpretation through synthesis" approach to analyzing face images, particularly the Active Appearance Models (AAMs) method, has become one of the most successful face modeling approaches over the last two decades. AAM models have the ability to represent face images through synthesis using a controllable parameterized Principal Component Analysis (PCA) model. However, the accuracy and robustness of the synthesized faces of AAM are highly dependent on the training sets and inherently on the genera...

  17. Creating and parameterizing patient-specific deep brain stimulation pathway-activation models using the hyperdirect pathway as an example.

    Science.gov (United States)

    Gunalan, Kabilar; Chaturvedi, Ashutosh; Howell, Bryan; Duchin, Yuval; Lempka, Scott F; Patriat, Remi; Sapiro, Guillermo; Harel, Noam; McIntyre, Cameron C

    2017-01-01

    Deep brain stimulation (DBS) is an established clinical therapy and computational models have played an important role in advancing the technology. Patient-specific DBS models are now common tools in both academic and industrial research, as well as clinical software systems. However, the exact methodology for creating patient-specific DBS models can vary substantially and important technical details are often missing from published reports. Provide a detailed description of the assembly workflow and parameterization of a patient-specific DBS pathway-activation model (PAM) and predict the response of the hyperdirect pathway to clinical stimulation. Integration of multiple software tools (e.g. COMSOL, MATLAB, FSL, NEURON, Python) enables the creation and visualization of a DBS PAM. An example DBS PAM was developed using 7T magnetic resonance imaging data from a single unilaterally implanted patient with Parkinson's disease (PD). This detailed description implements our best computational practices and most elaborate parameterization steps, as defined from over a decade of technical evolution. Pathway recruitment curves and strength-duration relationships highlight the non-linear response of axons to changes in the DBS parameter settings. Parameterization of patient-specific DBS models can be highly detailed and constrained, thereby providing confidence in the simulation predictions, but at the expense of time demanding technical implementation steps. DBS PAMs represent new tools for investigating possible correlations between brain pathway activation patterns and clinical symptom modulation.

  18. Creating and parameterizing patient-specific deep brain stimulation pathway-activation models using the hyperdirect pathway as an example.

    Directory of Open Access Journals (Sweden)

    Kabilar Gunalan

    Deep brain stimulation (DBS) is an established clinical therapy and computational models have played an important role in advancing the technology. Patient-specific DBS models are now common tools in both academic and industrial research, as well as clinical software systems. However, the exact methodology for creating patient-specific DBS models can vary substantially and important technical details are often missing from published reports. Provide a detailed description of the assembly workflow and parameterization of a patient-specific DBS pathway-activation model (PAM) and predict the response of the hyperdirect pathway to clinical stimulation. Integration of multiple software tools (e.g. COMSOL, MATLAB, FSL, NEURON, Python) enables the creation and visualization of a DBS PAM. An example DBS PAM was developed using 7T magnetic resonance imaging data from a single unilaterally implanted patient with Parkinson's disease (PD). This detailed description implements our best computational practices and most elaborate parameterization steps, as defined from over a decade of technical evolution. Pathway recruitment curves and strength-duration relationships highlight the non-linear response of axons to changes in the DBS parameter settings. Parameterization of patient-specific DBS models can be highly detailed and constrained, thereby providing confidence in the simulation predictions, but at the expense of time demanding technical implementation steps. DBS PAMs represent new tools for investigating possible correlations between brain pathway activation patterns and clinical symptom modulation.

  19. Remodeling Functional Connectivity in Multiple Sclerosis: A Challenging Therapeutic Approach.

    Science.gov (United States)

    Stampanoni Bassi, Mario; Gilio, Luana; Buttari, Fabio; Maffei, Pierpaolo; Marfia, Girolama A; Restivo, Domenico A; Centonze, Diego; Iezzi, Ennio

    2017-01-01

    Neurons in the central nervous system are organized in functional units interconnected to form complex networks. Acute and chronic brain damage disrupts brain connectivity, producing neurological signs and/or symptoms. In several neurological diseases, particularly in Multiple Sclerosis (MS), structural imaging studies cannot always demonstrate a clear association between lesion site and clinical disability, originating the "clinico-radiological paradox." The discrepancy between structural damage and disability can be explained by a complex network perspective. Both brain network architecture and synaptic plasticity may play important roles in modulating brain network efficiency after brain damage. In particular, long-term potentiation (LTP) may occur in surviving neurons to compensate for network disconnection. In MS, inflammatory cytokines dramatically interfere with synaptic transmission and plasticity. Importantly, in addition to acute and chronic structural damage, inflammation could contribute to reduced brain network efficiency in MS, leading to worse clinical recovery after a relapse and worse disease progression. This evidence suggests that removing inflammation should represent the main therapeutic target in MS; moreover, as synaptic plasticity is particularly altered by inflammation, specific strategies aimed at promoting LTP mechanisms could be effective for enhancing clinical recovery. Modulation of plasticity with different non-invasive brain stimulation (NIBS) techniques has been used to promote recovery of MS symptoms. Better knowledge of the features inducing brain disconnection in MS is crucial to design specific strategies to promote recovery and to use NIBS with an increasingly tailored approach.

  20. Remodeling Functional Connectivity in Multiple Sclerosis: A Challenging Therapeutic Approach

    Directory of Open Access Journals (Sweden)

    Mario Stampanoni Bassi

    2017-12-01

    Neurons in the central nervous system are organized in functional units interconnected to form complex networks. Acute and chronic brain damage disrupts brain connectivity, producing neurological signs and/or symptoms. In several neurological diseases, particularly in Multiple Sclerosis (MS), structural imaging studies cannot always demonstrate a clear association between lesion site and clinical disability, originating the “clinico-radiological paradox.” The discrepancy between structural damage and disability can be explained by a complex network perspective. Both brain network architecture and synaptic plasticity may play important roles in modulating brain network efficiency after brain damage. In particular, long-term potentiation (LTP) may occur in surviving neurons to compensate for network disconnection. In MS, inflammatory cytokines dramatically interfere with synaptic transmission and plasticity. Importantly, in addition to acute and chronic structural damage, inflammation could contribute to reduced brain network efficiency in MS, leading to worse clinical recovery after a relapse and worse disease progression. This evidence suggests that removing inflammation should represent the main therapeutic target in MS; moreover, as synaptic plasticity is particularly altered by inflammation, specific strategies aimed at promoting LTP mechanisms could be effective for enhancing clinical recovery. Modulation of plasticity with different non-invasive brain stimulation (NIBS) techniques has been used to promote recovery of MS symptoms. Better knowledge of the features inducing brain disconnection in MS is crucial to design specific strategies to promote recovery and to use NIBS with an increasingly tailored approach.

  1. Learning Control Over Emotion Networks Through Connectivity-Based Neurofeedback.

    Science.gov (United States)

    Koush, Yury; Meskaldji, Djalel-E; Pichon, Swann; Rey, Gwladys; Rieger, Sebastian W; Linden, David E J; Van De Ville, Dimitri; Vuilleumier, Patrik; Scharnowski, Frank

    2017-02-01

    Most mental functions are associated with dynamic interactions within functional brain networks. Thus, training individuals to alter functional brain networks might provide novel and powerful means to improve cognitive performance and emotions. Using a novel connectivity-neurofeedback approach based on functional magnetic resonance imaging (fMRI), we show for the first time that participants can learn to change functional brain networks. Specifically, we taught participants control over a key component of the emotion regulation network: they learned to increase top-down connectivity from the dorsomedial prefrontal cortex, which is involved in cognitive control, onto the amygdala, which is involved in emotion processing. After training, participants successfully self-regulated the top-down connectivity between these brain areas even without neurofeedback, and this was associated with concomitant increases in the participants' subjective valence ratings of emotional stimuli. Connectivity-based neurofeedback goes beyond previous neurofeedback approaches, which were limited to training localized activity within a brain region. It makes it possible to change interconnected functional brain networks directly, noninvasively, and nonpharmacologically, thereby producing specific behavioral changes. Our results demonstrate that connectivity-based neurofeedback training of emotion regulation networks enhances emotion regulation capabilities. This approach can potentially lead to powerful therapeutic emotion regulation protocols for neuropsychiatric disorders.

  2. Polynomial Chaos–Based Bayesian Inference of K-Profile Parameterization in a General Circulation Model of the Tropical Pacific

    KAUST Repository

    Sraj, Ihab; Zedler, Sarah E.; Knio, Omar; Jackson, Charles S.; Hoteit, Ibrahim

    2016-01-01

    The authors present a polynomial chaos (PC)-based Bayesian inference method for quantifying the uncertainties of the K-profile parameterization (KPP) within the MIT general circulation model (MITgcm) of the tropical Pacific. The inference

  3. Understanding ecohydrological connectivity in savannas: A system dynamics modeling approach

    Science.gov (United States)

    Ecohydrological connectivity is a system-level property that results from the linkages in the networks of water transport through ecosystems, by which feedback effects and other emergent system behaviors may be generated. We created a systems dynamic model that represents primary ecohydrological net...

  4. Wireless device connection problems and design solutions

    Science.gov (United States)

    Song, Ji-Won; Norman, Donald; Nam, Tek-Jin; Qin, Shengfeng

    2016-09-01

    Users, especially non-expert users, commonly experience problems when connecting multiple interoperable devices. While studies of multiple-device connection have mostly concentrated on spontaneous device-association techniques with a focus on security aspects, research on user interaction for device connection is still limited. More research into understanding people is needed for designers to devise usable techniques. This research applies the Research-through-Design method and studies non-expert users' interactions in establishing wireless connections between devices. The "Learning from Examples" concept is adopted to develop a study focus line by learning from expert users' interaction with devices. This focus line is then used to guide researchers in exploring the non-expert users' difficulties at each stage of the focus line. Finally, the Research-through-Design approach is used to understand the users' difficulties, gain insights into design problems and suggest usable solutions. When connecting a device, the user is required to manage not only the device's functionality but also the interaction between devices. An important insight, learned from failures, is that the existing design approach of improving single-device interaction issues, such as improvements to graphical user interfaces or computer guidance, cannot help users handle problems between multiple devices. This study finally proposes a desirable user-device interaction in which images of two devices function together with a system image to provide the user with feedback on the status of the connection, allowing them to infer any required actions.

  5. Enhanced fuzzy-connective-based hierarchical aggregation network using particle swarm optimization

    Science.gov (United States)

    Wang, Fang-Fang; Su, Chao-Ton

    2014-11-01

    The fuzzy-connective-based aggregation network is similar to the human decision-making process. It is capable of aggregating and propagating degrees of satisfaction of a set of criteria in a hierarchical manner. Its interpreting ability and transparency make it especially desirable. To enhance its effectiveness and further applicability, a learning approach is successfully developed based on particle swarm optimization to determine the weights and parameters of the connectives in the network. By experimenting on eight datasets with different characteristics and conducting further statistical tests, it has been found to outperform the gradient- and genetic algorithm-based learning approaches proposed in the literature; furthermore, it is capable of generating more accurate estimates. The present approach retains the original benefits of fuzzy-connective-based aggregation networks and is widely applicable. The characteristics of the learning approaches are also discussed and summarized, providing better understanding of the similarities and differences among these three approaches.
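    The particle-swarm learning step described above can be sketched on a toy single-node aggregator. The weighted-average connective, the swarm parameters, and the data below are illustrative assumptions for the sketch, not the authors' actual network:

```python
import random

def aggregate(x1, x2, w):
    # Hypothetical single-connective aggregator: a weighted average of two
    # criterion satisfaction degrees; w in [0, 1] is the only free parameter.
    return w * x1 + (1 - w) * x2

def pso_fit(samples, n_particles=20, iters=100, seed=0):
    """Fit w by particle swarm optimization on (x1, x2, target) samples."""
    rng = random.Random(seed)
    pos = [rng.random() for _ in range(n_particles)]
    vel = [0.0] * n_particles

    def err(w):
        return sum((aggregate(x1, x2, w) - t) ** 2 for x1, x2, t in samples)

    pbest = list(pos)
    pbest_err = [err(w) for w in pos]
    g = min(range(n_particles), key=lambda i: pbest_err[i])
    gbest, gbest_err = pbest[g], pbest_err[g]
    for _ in range(iters):
        for i in range(n_particles):
            r1, r2 = rng.random(), rng.random()
            # Inertia + cognitive + social terms (standard PSO update).
            vel[i] = (0.7 * vel[i]
                      + 1.5 * r1 * (pbest[i] - pos[i])
                      + 1.5 * r2 * (gbest - pos[i]))
            pos[i] = min(1.0, max(0.0, pos[i] + vel[i]))
            e = err(pos[i])
            if e < pbest_err[i]:
                pbest[i], pbest_err[i] = pos[i], e
                if e < gbest_err:
                    gbest, gbest_err = pos[i], e
    return gbest

# Toy samples generated with a true weight of 0.3.
data = [(0.2, 0.8, 0.62), (0.9, 0.1, 0.34), (0.5, 0.5, 0.5)]
w = pso_fit(data)
```

    The same loop extends to a hierarchical network by letting each particle encode the full vector of connective weights and parameters rather than a single scalar.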

  6. Modeling canopy-induced turbulence in the Earth system: a unified parameterization of turbulent exchange within plant canopies and the roughness sublayer (CLM-ml v0)

    Science.gov (United States)

    Bonan, Gordon B.; Patton, Edward G.; Harman, Ian N.; Oleson, Keith W.; Finnigan, John J.; Lu, Yaqiong; Burakowski, Elizabeth A.

    2018-04-01

    Land surface models used in climate models neglect the roughness sublayer and parameterize within-canopy turbulence in an ad hoc manner. We implemented a roughness sublayer turbulence parameterization in a multilayer canopy model (CLM-ml v0) to test if this theory provides a tractable parameterization extending from the ground through the canopy and the roughness sublayer. We compared the canopy model with the Community Land Model (CLM4.5) at seven forest, two grassland, and three cropland AmeriFlux sites over a range of canopy heights, leaf area indexes, and climates. CLM4.5 has pronounced biases during summer months at forest sites in midday latent heat flux, sensible heat flux, gross primary production, nighttime friction velocity, and the radiative temperature diurnal range. The new canopy model reduces these biases by introducing new physics. Advances in modeling stomatal conductance and canopy physiology beyond what is in CLM4.5 substantially improve model performance at the forest sites. The signature of the roughness sublayer is most evident in nighttime friction velocity and the diurnal cycle of radiative temperature, but is also seen in sensible heat flux. Within-canopy temperature profiles are markedly different compared with profiles obtained using Monin-Obukhov similarity theory, and the roughness sublayer produces cooler daytime and warmer nighttime temperatures. The herbaceous sites also show model improvements, but the improvements are related less systematically to the roughness sublayer parameterization in these canopies. The multilayer canopy with the roughness sublayer turbulence improves simulations compared with CLM4.5 while also advancing the theoretical basis for surface flux parameterizations.

  7. Parameterization and sensitivity analyses of a radiative transfer model for remote sensing plant canopies

    Science.gov (United States)

    Hall, Carlton Raden

    A major objective of remote sensing is determination of biochemical and biophysical characteristics of plant canopies utilizing high spectral resolution sensors. Canopy reflectance signatures are dependent on absorption and scattering processes of the leaf, canopy properties, and the ground beneath the canopy. This research investigates, through field and laboratory data collection and computer model parameterization and simulations, the relationships between leaf optical properties, canopy biophysical features, and the nadir-viewed above-canopy reflectance signature. Emphasis is placed on parameterization and application of an existing irradiance radiative transfer model developed for aquatic systems. Data and model analyses provide knowledge on the relative importance of leaves and canopy biophysical features in estimating the diffuse absorption a(lambda,m-1), diffuse backscatter b(lambda,m-1), beam attenuation alpha(lambda,m-1), and beam to diffuse conversion c(lambda,m-1) coefficients of the two-flow irradiance model. Data sets include field and laboratory measurements from three plant species, live oak (Quercus virginiana), Brazilian pepper (Schinus terebinthifolius) and grapefruit (Citrus paradisi), sampled on Cape Canaveral Air Force Station and Kennedy Space Center, Florida, in March and April of 1997. Features measured were depth h (m), projected foliage coverage PFC, leaf area index LAI, and zenith leaf angle. Optical measurements, collected with a Spectron SE 590 high sensitivity narrow bandwidth spectrograph, included above-canopy reflectance, internal canopy transmittance and reflectance, and bottom reflectance. Leaf samples were returned to the laboratory, where optical, physical, and chemical measurements of leaf thickness, leaf area, leaf moisture and pigment content were made. A new term, the leaf volume correction index LVCI, was developed and demonstrated in support of model coefficient parameterization. The LVCI is based on angle adjusted leaf

  8. Can we trace biotic dispersals back in time? Introducing backward flow connectivity

    Directory of Open Access Journals (Sweden)

    Alessandro Ferrarini

    2014-06-01

    Full Text Available Connectivity in ecology deals with the problem of how species dispersal will happen given the actual landscape and species presence/absence over that landscape. Hence it can be considered a forward (ahead in time) scientific problem. I observe here that a backward theory of connectivity could be of deep interest as well: given the actual species presence/absence on the landscape, where, with the highest probability, is such a species coming from? In other words, can we trace biotic dispersals back in time? Recently I introduced a modelling and theoretical approach to ecological connectivity that is an alternative to circuit theory and is able to fix the weak point of the "from-to" connectivity approach. The proposed approach holds also for mountain and hilly landscapes. In addition, it doesn't assume any intention of a species to go from source points to sink ones, because the expected path for the species is determined locally (pixel by pixel) by landscape features. In this paper, I introduce a new theoretical and modelling approach called "backward flow connectivity". While flow connectivity predicts future species dispersal by minimizing at each step the potential energy due to fictional gravity over a frictional landscape, backward flow connectivity does exactly the opposite, i.e. maximizes potential energy at each step, sending the species back to higher levels of potential energy due to fictional gravity on the frictional landscape. Using backward flow connectivity, one has at hand a new tool to reverse the timeline of species dispersal, and hence to trace biotic dispersals backward. With few modifications, the applications of backward flow connectivity can be countless, for instance tracing back in time not only plants and animals but also ancient human migrations and viral paths.
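    The forward/backward contrast can be illustrated with a toy tracer that follows the steepest descent (forward flow) or steepest ascent (backward flow) of an effective potential-energy surface. The 8-connected neighbourhood and the grid values are illustrative assumptions, not the author's actual model:

```python
def trace_flow(grid, start, backward=False):
    """Follow steepest descent (forward flow) or steepest ascent
    (backward flow) over a grid of effective potential energies."""
    rows, cols = len(grid), len(grid[0])
    r, c = start
    path = [start]
    while True:
        # 8-connected neighbourhood, as in raster flow routing.
        nbrs = [(r + dr, c + dc)
                for dr in (-1, 0, 1) for dc in (-1, 0, 1)
                if (dr, dc) != (0, 0)
                and 0 <= r + dr < rows and 0 <= c + dc < cols]
        pick = max if backward else min
        nr, nc = pick(nbrs, key=lambda p: grid[p[0]][p[1]])
        # Stop once no neighbour improves on the current cell.
        if backward and grid[nr][nc] <= grid[r][c]:
            break
        if not backward and grid[nr][nc] >= grid[r][c]:
            break
        r, c = nr, nc
        path.append((r, c))
    return path

# Effective potential energy (elevation weighted by friction) on a 3x3 grid.
potential = [[9, 8, 7],
             [6, 5, 4],
             [3, 2, 1]]
forward = trace_flow(potential, (0, 0))                  # downhill dispersal
backward = trace_flow(potential, (2, 2), backward=True)  # retrace to source
```

    On this monotonic surface the backward trace is exactly the forward trace reversed, which is the sense in which backward flow connectivity reverts the timeline of dispersal.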

  9. Parameterizing road construction in route-based road weather models: can ground-penetrating radar provide any answers?

    International Nuclear Information System (INIS)

    Hammond, D S; Chapman, L; Thornes, J E

    2011-01-01

    A ground-penetrating radar (GPR) survey of a 32 km mixed urban and rural study route is undertaken to assess the usefulness of GPR as a tool for parameterizing road construction in a route-based road weather forecast model. It is shown that GPR can easily identify even the smallest of bridges along the route, which previous thermal mapping surveys have identified as thermal singularities with implications for winter road maintenance. Using individual GPR traces measured at each forecast point along the route, an inflexion point detection algorithm attempts to identify the depth of the uppermost subsurface layers at each forecast point for use in a road weather model instead of existing ordinal road-type classifications. This approach has the potential to allow high resolution modelling of road construction and bridge decks on a scale previously not possible within a road weather model, but initial results reveal that significant future research will be required to unlock the full potential that this technology can bring to the road weather industry. (technical design note)
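    A minimal form of such an inflexion-point detector looks for sign changes in the discrete second derivative of a 1D trace. The synthetic trace below stands in for a smoothed GPR profile and is purely illustrative, not the authors' algorithm:

```python
def inflexion_indices(trace):
    """Return indices where the discrete second derivative of a 1D
    profile changes sign, i.e. candidate subsurface layer boundaries."""
    # Second derivative at interior samples 1 .. len-2.
    d2 = [trace[i - 1] - 2 * trace[i] + trace[i + 1]
          for i in range(1, len(trace) - 1)]
    # A sign change between adjacent curvature values marks an inflexion.
    return [i + 2 for i in range(len(d2) - 1) if d2[i] * d2[i + 1] < 0]

# Synthetic trace: convex run-up followed by a concave roll-off,
# so the curvature flips near sample 3.
pts = inflexion_indices([0.0, 1.0, 3.0, 6.0, 8.0, 9.0, 9.5])
```

    A real GPR trace would first be smoothed, and each detected index would be converted to depth using the wave velocity of the layer material.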

  10. STRUCTURAL CONNECTIVITY VIA THE TENSOR-BASED MORPHOMETRY.

    Science.gov (United States)

    Kim, Seung-Goo; Chung, Moo K; Hanson, Jamie L; Avants, Brian B; Gee, James C; Davidson, Richard J; Pollak, Seth D

    2011-01-01

    Tensor-based morphometry (TBM) has been widely used to characterize tissue volume differences between populations at the voxel level. We present a novel computational framework for investigating white matter connectivity using TBM. Unlike other diffusion tensor imaging (DTI) based white matter connectivity studies, we do not use DTI but only T1-weighted magnetic resonance imaging (MRI). To construct brain network graphs, we have developed a new data-driven approach called the ε-neighbor method that does not need any predetermined parcellation. The proposed pipeline is applied to detecting the topological alteration of white matter connectivity in maltreated children.

  11. Reliability and parameterization of Romberg Test in people who have suffered a stroke

    OpenAIRE

    Perez Cruzado, David; Gonzalez Sanchez, Manuel; Cuesta-Vargas, Antonio

    2014-01-01

    AIM: To analyze the reliability, and describe the parameterization with inertial sensors, of the Romberg test in people who have had a stroke. METHODS: The Romberg test was performed for 20 seconds in four different settings, depending on the supporting leg and the position of the eyes (opened eyes / dominant leg; closed eyes / dominant leg; opened eyes / non-dominant leg; closed eyes / non-dominant leg), in people who had suffered a stroke over a year earlier. Two inertial sensors (sampli...

  12. Impact of Vegetation Cover Fraction Parameterization schemes on Land Surface Temperature Simulation in the Tibetan Plateau

    Science.gov (United States)

    Lv, M.; Li, C.; Lu, H.; Yang, K.; Chen, Y.

    2017-12-01

    The parameterization of vegetation cover fraction (VCF) is an important component of land surface models. This paper investigates the impacts of three VCF parameterization schemes on land surface temperature (LST) simulation by the Common Land Model (CoLM) in the Tibetan Plateau (TP). The first scheme is a simple land cover (LC) based method; the second is based on remote sensing observations (hereafter named RNVCF), in which multi-year climatological VCFs are derived from Moderate-resolution Imaging Spectroradiometer (MODIS) NDVI (Normalized Difference Vegetation Index); the third derives VCF from the LAI simulated by the LSM and a clump index at every model time step (hereafter named SMVCF). LST and soil temperature simulated by CoLM with the three VCF parameterization schemes were evaluated against satellite LST observations and in situ soil temperature observations, respectively, over the period 2010 to 2013. The comparison against MODIS Aqua LST indicates that (1) CTL produces large biases in all four seasons in the early afternoon (about 13:30, local solar time), with the mean bias in spring reaching 12.14 K; (2) RNVCF and SMVCF reduce the mean bias significantly, by about 6.5 K in spring. Surface soil temperature observed at 5 cm depth from three soil moisture and temperature monitoring networks is also employed to assess the skill of the three VCF schemes. The three networks, crossing the TP from west to east, have different climate and vegetation conditions. In the Ngari network, located in the western TP with an arid climate, there are no obvious differences among the three schemes. In the Naqu network, located in the central TP with a semi-arid climate, CTL shows a severe overestimate (12.1 K), but this overestimation can be reduced by 79% by RNVCF and by 87% by SMVCF. In the third, humid network (Maqu, in the eastern TP), CoLM performs similarly to Naqu.
However, at both the Naqu and Maqu networks
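    A common remote-sensing relation for deriving VCF from NDVI, in the spirit of the RNVCF scheme above, is the linear-mixture formula of Gutman and Ignatov. The bare-soil and full-canopy endpoint values below are illustrative, and this is not necessarily the exact formulation used in CoLM:

```python
def vcf_from_ndvi(ndvi, ndvi_soil=0.05, ndvi_veg=0.86):
    """Linear-mixture estimate of vegetation cover fraction from NDVI
    (Gutman & Ignatov style); the endpoint NDVI values for bare soil
    and full canopy are illustrative assumptions."""
    frac = (ndvi - ndvi_soil) / (ndvi_veg - ndvi_soil)
    # Clamp to the physically meaningful range [0, 1].
    return min(1.0, max(0.0, frac))

f_bare = vcf_from_ndvi(0.05)   # bare-soil endpoint
f_full = vcf_from_ndvi(0.86)   # full-canopy endpoint
f_mid = vcf_from_ndvi(0.455)   # halfway between the endpoints
```

    In practice the endpoints are chosen per biome or per scene, which is one reason VCF schemes built on the same NDVI record can still diverge.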

  13. The cloud-phase feedback in the Super-parameterized Community Earth System Model

    Science.gov (United States)

    Burt, M. A.; Randall, D. A.

    2016-12-01

    Recent comparisons of observations and climate model simulations by I. Tan and colleagues have suggested that the Wegener-Bergeron-Findeisen (WBF) process tends to be too active in climate models, making too much cloud ice, and resulting in an exaggerated negative cloud-phase feedback on climate change. We explore the WBF process and its effect on shortwave cloud forcing in present-day and future climate simulations with the Community Earth System Model, and its super-parameterized counterpart. Results show that SP-CESM has much less cloud ice and a weaker cloud-phase feedback than CESM.

  14. Investigation of The Effects of Connective Tissue Mobilisation on ...

    African Journals Online (AJOL)

    Background: Connective Tissue Massage (CTM) or Manipulation is a bodywork technique which lies at the interface between alternative approaches. The autonomic balancing responses to CTM can be useful in the treatment of anxiety. Aim: This study was planned to investigate the effects of connective tissue mobilization ...

  15. Assessing Impact, DIF, and DFF in Accommodated Item Scores: A Comparison of Multilevel Measurement Model Parameterizations

    Science.gov (United States)

    Beretvas, S. Natasha; Cawthon, Stephanie W.; Lockhart, L. Leland; Kaye, Alyssa D.

    2012-01-01

    This pedagogical article is intended to explain the similarities and differences between the parameterizations of two multilevel measurement model (MMM) frameworks. The conventional two-level MMM that includes item indicators and models item scores (Level 1) clustered within examinees (Level 2) and the two-level cross-classified MMM (in which item…

  16. Connectivity as a multiple: In, with and as "nature".

    Science.gov (United States)

    Hodgetts, Timothy

    2018-03-01

    Connectivity is a central concept in contemporary geographies of nature, but the concept is often understood and utilised in plural ways. This is problematic because of the separation, rather than the confusion, of these different approaches. While the various understandings of connectivity are rarely considered as working together, the connections between them have significant implications. This paper thus proposes re-thinking connectivity as a "multiple". It develops a taxonomy of existing connectivity concepts from the fields of biogeography and landscape ecology, conservation biology, socio-economic systems theory, political ecology and more-than-human geography. It then considers how these various understandings might be re-thought not as separate concerns, but (following Annemarie Mol) as "more than one, but less than many". The implications of using the connectivity multiple as an analytic for understanding conservation practices are demonstrated through considering the creation of wildlife corridors in conservation practice. The multiple does not just serve to highlight the practical and theoretical linkages between ecological theories, social inequities and affectual relationships in more-than-human worlds. It is also suggestive of a normative approach to environmental management that does not give temporal priority to biological theories, but considers these as always already situated in these social, often unequal, always more-than-human ecologies.

  17. Deforestation and rainfall recycling in Brazil: Is decreased forest cover connectivity associated with decreased rainfall connectivity?

    Science.gov (United States)

    Adera, S.; Larsen, L.; Levy, M. C.; Thompson, S. E.

    2017-12-01

    In the Brazilian rainforest-savanna transition zone, deforestation has the potential to significantly affect rainfall by disrupting rainfall recycling, the process by which regional evapotranspiration contributes to regional rainfall. Understanding rainfall recycling in this region is important not only for sustaining Amazon and Cerrado ecosystems, but also for cattle ranching, agriculture, hydropower generation, and drinking water management. Simulations in previous studies suggest complex, scale-dependent interactions between forest cover connectivity and rainfall. For example, the size and distribution of deforested patches has been found to affect rainfall quantity and spatial distribution. Here we take an empirical approach, using the spatial connectivity of rainfall as an indicator of rainfall recycling, to ask: as forest cover connectivity decreased from 1981 - 2015, how did the spatial connectivity of rainfall change in the Brazilian rainforest-savanna transition zone? We use satellite forest cover and rainfall data covering this period of intensive forest cover loss in the region (forest cover from the Hansen Global Forest Change dataset; rainfall from the Climate Hazards Infrared Precipitation with Stations dataset). Rainfall spatial connectivity is quantified using transfer entropy, a metric from information theory, and summarized using network statistics. Networks of connectivity are quantified for paired deforested and non-deforested regions before deforestation (1981-1995) and during/after deforestation (2001-2015). Analyses reveal a decline in spatial connectivity networks of rainfall following deforestation.
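    Transfer entropy for two discrete time series can be estimated directly from joint occurrence counts. This minimal sketch (history length 1, binary wet/dry series) illustrates the metric's directional asymmetry; it is a toy stand-in, not the study's implementation:

```python
from collections import Counter
from math import log2

def transfer_entropy(x, y):
    """Transfer entropy x -> y in bits for two equal-length discrete
    series, with history length 1, estimated from joint counts."""
    triples = list(zip(y[1:], y[:-1], x[:-1]))  # (y_t+1, y_t, x_t)
    n = len(triples)
    c_xyz = Counter(triples)
    c_yx = Counter((yt, xt) for _, yt, xt in triples)
    c_nn = Counter((yn, yt) for yn, yt, _ in triples)
    c_y = Counter(yt for _, yt, _ in triples)
    te = 0.0
    for (yn, yt, xt), c in c_xyz.items():
        p_cond_full = c / c_yx[(yt, xt)]        # p(y_t+1 | y_t, x_t)
        p_cond_self = c_nn[(yn, yt)] / c_y[yt]  # p(y_t+1 | y_t)
        te += (c / n) * log2(p_cond_full / p_cond_self)
    return te

# y is a one-step lagged copy of x, so information flows x -> y only.
x = [0, 1, 0, 1, 1, 0, 0, 1, 0, 1, 1, 0, 1, 0, 0, 1]
y = [0] + x[:-1]
te_xy = transfer_entropy(x, y)
te_yx = transfer_entropy(y, x)
```

    Applied pixel-by-pixel to binarized rainfall series, pairwise estimates like these are the directed edge weights from which the study's connectivity networks can be assembled.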

  18. Detecting Brain State Changes via Fiber-Centered Functional Connectivity Analysis

    Science.gov (United States)

    Li, Xiang; Lim, Chulwoo; Li, Kaiming; Guo, Lei; Liu, Tianming

    2013-01-01

    Diffusion tensor imaging (DTI) and functional magnetic resonance imaging (fMRI) have been widely used to study structural and functional brain connectivity in recent years. A common assumption in many previous functional brain connectivity studies is temporal stationarity. However, accumulating literature evidence has suggested that functional brain connectivity undergoes dynamic changes on different time scales. In this paper, a novel and intuitive approach is proposed to model and detect dynamic changes of functional brain states based on multimodal fMRI/DTI data. The basic idea is that functional connectivity patterns of all fiber-connected cortical voxels are concatenated into a descriptive functional feature vector to represent the brain's state, and the temporal change points of brain states are determined by detecting abrupt changes in the functional vector patterns via a sliding-window approach. Our extensive experimental results have shown that meaningful brain state change points can be detected in task-based fMRI/DTI, resting state fMRI/DTI, and natural stimulus fMRI/DTI data sets. In particular, the detected change points of functional brain states in task-based fMRI corresponded well to the external stimulus paradigm administered to the participating subjects, thus partially validating the proposed brain state change detection approach. The work in this paper provides a novel perspective on the dynamic behaviors of functional brain connectivity and offers a starting point for future elucidation of the complex patterns of functional brain interactions and dynamics. PMID:22941508
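    The sliding-window idea can be sketched as follows: compare the mean feature vector over the window before each time point with the window after it, and flag points where the distance exceeds a threshold. The window length, distance metric, threshold, and synthetic data are illustrative assumptions, not the paper's exact method:

```python
def change_points(series, win=3, thresh=1.0):
    """Flag time points t where the mean feature vector over the `win`
    samples before t differs from the `win` samples after t by more
    than `thresh` (Euclidean distance)."""
    def mean_vec(rows):
        return [sum(col) / len(rows) for col in zip(*rows)]
    points = []
    for t in range(win, len(series) - win):
        before = mean_vec(series[t - win:t])
        after = mean_vec(series[t:t + win])
        dist = sum((u - v) ** 2 for u, v in zip(before, after)) ** 0.5
        if dist > thresh:
            points.append(t)
    return points

# Two-dimensional "functional state" vectors with an abrupt shift at t = 6.
states = [[0.0, 0.0]] * 6 + [[2.0, 2.0]] * 6
detected = change_points(states)
```

    Note that the detector flags a small neighbourhood around the true change point, because adjacent windows straddle it; in practice one would keep only the local distance maximum.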

  19. A comparison study of convective and microphysical parameterization schemes associated with lightning occurrence in southeastern Brazil using the WRF model

    Science.gov (United States)

    Zepka, G. D.; Pinto, O.

    2010-12-01

    The intent of this study is to identify the combination of convective and microphysical WRF parameterizations that best adjusts to lightning occurrence over southeastern Brazil. Twelve thunderstorm days were simulated with the WRF model using three different convective parameterizations (Kain-Fritsch, Betts-Miller-Janjic and Grell-Devenyi ensemble) and two different microphysical schemes (Purdue-Lin and WSM6). In order to test the combinations of parameterizations at the time of lightning occurrence, a comparison was made between the WRF grid point values of surface-based Convective Available Potential Energy (CAPE), Lifted Index (LI), K-Index (KI) and equivalent potential temperature (theta-e), and the lightning locations near those grid points. Histograms were built to show the ratio of the occurrence of different values of these variables for WRF grid points associated with lightning to all WRF grid points. The first conclusion from this analysis was that the choice of microphysics did not change the results as appreciably as the choice of convective scheme. The Betts-Miller-Janjic parameterization generally has the worst skill in relating higher magnitudes of all four variables to lightning occurrence. The differences between the Kain-Fritsch and Grell-Devenyi ensemble schemes were not large. This can be attributed to the similar main assumptions used by these schemes, which consider entrainment/detrainment processes along the cloud boundaries. After that, we examined three case studies using the combinations of convective and microphysical options without the Betts-Miller-Janjic scheme. Unlike traditional verification procedures, fields of surface-based CAPE from the WRF 10 km domain were compared to the Eta model, satellite images and lightning data. In general, the most reliable convective scheme was Kain-Fritsch, since it provided a distribution of CAPE fields more consistent with satellite images and lightning data.
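    Of the stability indices compared above, the K-Index has a simple closed form, transcribed here directly (temperatures and dewpoints in °C at the standard pressure levels); the sounding values in the example are illustrative:

```python
def k_index(t850, t500, td850, t700, td700):
    """George's K-Index in deg C:
    KI = (T850 - T500) + Td850 - (T700 - Td700).
    Values above roughly 30 indicate high thunderstorm potential."""
    return (t850 - t500) + td850 - (t700 - td700)

# Illustrative sounding: warm, moist 850 hPa air under a cold 500 hPa layer.
ki = k_index(20.0, -12.0, 14.0, 8.0, 4.0)  # -> 42.0
```

    The index rewards a steep 850-500 hPa lapse rate, low-level moisture, and a small 700 hPa dewpoint depression, which is why high values cluster around lightning-producing grid points.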

  20. New parameterization of external and induced fields in geomagnetic field modeling, and a candidate model for IGRF 2005

    DEFF Research Database (Denmark)

    Olsen, Nils; Sabaka, T.J.; Lowes, F.

    2005-01-01

    When deriving spherical harmonic models of the Earth's magnetic field, low-degree external field contributions are traditionally considered by assuming that their expansion coefficient q(1)(0) varies linearly with the D-st index, while induced contributions are considered by assuming a constant ratio Q(1) of induced to external coefficients. A value of Q(1) = 0.27 was found from Magsat data and has been used by several authors when deriving recent field models from Orsted and CHAMP data. We describe a new approach that considers external and induced fields based on a separation of D-st = E-st + I-st into external (E-st) and induced (I-st) parts using a 1D model of mantle conductivity. The temporal behavior of q(1)(0) and of the corresponding induced coefficient are parameterized by E-st and I-st, respectively. In addition, we account for baseline instabilities of D-st by estimating a value of q(1

  1. New representation of water activity based on a single solute specific constant to parameterize the hygroscopic growth of aerosols in atmospheric models

    Directory of Open Access Journals (Sweden)

    S. Metzger

    2012-06-01

    Full Text Available Water activity is a key factor in aerosol thermodynamics and hygroscopic growth. We introduce a new representation of water activity (aw), which is empirically related to the solute molality (μs) through a single solute specific constant, νi. Our approach is widely applicable, considers the Kelvin effect and covers ideal solutions at high relative humidity (RH), including cloud condensation nuclei (CCN) activation. It also encompasses concentrated solutions with high ionic strength at low RH, such as at the relative humidity of deliquescence (RHD). The constant νi can thus be used to parameterize the aerosol hygroscopic growth over a wide range of particle sizes, from nanometer nucleation mode to micrometer coarse mode particles. In contrast to other aw-representations, our νi factor corrects the solute molality both linearly and in exponent form x · a^x. We present four representations of our basic aw-parameterization at different levels of complexity for different aw-ranges, e.g. up to 0.95, 0.98 or 1. νi is constant over the selected aw-range, and in its most comprehensive form, the parameterization describes the entire aw range (0–1). In this work we focus on single solute solutions. νi can be pre-determined with a root-finding method from our water activity representation using an aw−μs data pair, e.g. at solute saturation using RHD and solubility measurements. Our aw and supersaturation (Köhler theory) results compare well with the thermodynamic reference model E-AIM for the key compounds NaCl and (NH4)2SO4 relevant for CCN modeling and calibration studies. Envisaged applications include regional and global atmospheric chemistry and

  2. Sensitivity analysis of a parameterization of the stomatal component of the DO3SE model for Quercus ilex to estimate ozone fluxes

    International Nuclear Information System (INIS)

    Alonso, Rocio; Elvira, Susana; Sanz, Maria J.; Gerosa, Giacomo; Emberson, Lisa D.; Bermejo, Victoria; Gimeno, Benjamin S.

    2008-01-01

    A sensitivity analysis of a proposed parameterization of the stomatal conductance (gs) module of the European ozone deposition model (DO3SE) for Quercus ilex was performed. The performance of the model was tested against measured gs in the field at three sites in Spain. The best fit of the model was found for those sites, or during those periods, facing no or mild stress conditions, but a worse performance was found under severe drought or temperature stress, mostly occurring at continental sites. The best performance was obtained when both fphen and fSWP were included. A local parameterization accounting for the lower temperatures recorded in winter and the higher water shortage at the continental sites resulted in a better performance of the model. The overall results indicate that two different parameterizations of the model are needed, one for marine-influenced sites and another one for continental sites. - No redundancy between phenological and water-related modifying functions was found when estimating stomatal behavior of Holm oak

  3. Quantifying Riverscape Connectivity with Graph Theory

    Science.gov (United States)

    Carbonneau, P.; Milledge, D.; Sinha, R.; Tandon, S. K.

    2013-12-01

    connectivity structure of the Gangetic riverscape with fluvial remote sensing. Our study reach extends from the heavily dammed headwaters of the Bhagirathi, Mandakini and Alaknanda rivers which form the source of the Ganga to Allahabad ~900 km downstream on the main stem. We use Landsat-8 imagery as the baseline dataset. Channel width along the Ganga (i.e. Ganges) is often several kilometres. Therefore, the pan-sharpened 15m pixels of Landsat-8 are in fact capable of resolving inner channel features for over 80% of the channel length thus allowing a riverscape approach to be adopted. We examine the following connectivity metrics: size distribution of connected components, betweenness centrality and the integrated index of connectivity. A geographic perspective is added by mapping local (25 km-scale) values for these metrics in order to examine spatial patterns of connectivity. This approach allows us to map impacts of dam construction and has the potential to inform policy decisions in the area as well as open up new avenues of investigation.
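    The first of the metrics listed above, the size distribution of connected components, can be computed with a plain breadth-first search over the link list. The toy "dam" example below is illustrative, not the study's data:

```python
from collections import deque, defaultdict

def component_sizes(edges):
    """Size distribution of the connected components of an undirected
    graph given as (u, v) pairs, e.g. links between channel reaches."""
    adj = defaultdict(set)
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    seen, sizes = set(), []
    for node in adj:
        if node in seen:
            continue
        # Breadth-first search flood-fills one component.
        queue, size = deque([node]), 0
        seen.add(node)
        while queue:
            cur = queue.popleft()
            size += 1
            for nxt in adj[cur]:
                if nxt not in seen:
                    seen.add(nxt)
                    queue.append(nxt)
        sizes.append(size)
    return sorted(sizes, reverse=True)

# A dam between reaches 3 and 4 splits one 7-reach network in two.
links = [(1, 2), (2, 3), (4, 5), (5, 6), (6, 7)]
sizes = component_sizes(links)
```

    Tracking how this size distribution fragments as barriers are added is a direct, quantitative way to map the connectivity impact of dam construction.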

  4. Resting state cortico-cerebellar functional connectivity networks: A comparison of anatomical and self-organizing map approaches

    Directory of Open Access Journals (Sweden)

    Jessica A Bernard

    2012-08-01

    Full Text Available The cerebellum plays a role in a wide variety of complex behaviors. In order to better understand the role of the cerebellum in human behavior, it is important to know how this structure interacts with cortical and other subcortical regions of the brain. To date, several studies have investigated the cerebellum using resting-state functional connectivity magnetic resonance imaging (fcMRI; Buckner et al., 2011; Krienen & Buckner, 2009; O’Reilly et al., 2009). However, none of this work has taken an anatomically-driven approach. Furthermore, though detailed maps of cerebral cortex and cerebellum networks have been proposed using different network solutions based on the cerebral cortex (Buckner et al., 2011), it remains unknown whether or not an anatomical lobular breakdown best encompasses the networks of the cerebellum. Here, we used fcMRI to create an anatomically-driven cerebellar connectivity atlas. Timecourses were extracted from the lobules of the right hemisphere and vermis. We found distinct networks for the individual lobules with a clear division into motor and non-motor regions. We also used a self-organizing map algorithm to parcellate the cerebellum. This allowed us to investigate redundancy and independence of the anatomically identified cerebellar networks. We found that while anatomical boundaries in the anterior cerebellum provide functional subdivisions of a larger motor grouping defined using our self-organizing map algorithm, in the posterior cerebellum, the lobules were made up of sub-regions associated with distinct functional networks. Together, our results indicate that the lobular boundaries of the human cerebellum are not indicative of functional boundaries, though anatomical divisions can be useful, as is the case of the anterior cerebellum. Additionally, driving the analyses from the cerebellum is key to determining the complete picture of functional connectivity within the structure.
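    A minimal 1-D self-organizing map of the kind the abstract describes can be sketched as follows. The synthetic timecourses, unit count, and learning schedule are illustrative assumptions, not the study's data or settings:

```python
import numpy as np

# Synthetic "voxel timecourses" from two signal families — a crude
# stand-in for distinct functional networks (illustrative only).
rng = np.random.default_rng(0)
t = np.linspace(0, 2 * np.pi, 50)
X = np.vstack(
    [np.sin(t) + 0.1 * rng.standard_normal(50) for _ in range(10)]
    + [np.cos(3 * t) + 0.1 * rng.standard_normal(50) for _ in range(10)]
)

def train_som(X, n_units=4, epochs=100, lr0=0.5, sigma0=1.5, seed=1):
    """Train a 1-D self-organizing map; returns the unit weight vectors."""
    rng = np.random.default_rng(seed)
    W = X[rng.choice(len(X), n_units, replace=False)].copy()
    units = np.arange(n_units)
    for e in range(epochs):
        lr = lr0 * (1.0 - e / epochs)                 # decaying learning rate
        sigma = max(sigma0 * (1.0 - e / epochs), 0.5)  # shrinking neighborhood
        for i in rng.permutation(len(X)):
            # best-matching unit, then pull it and its neighbors toward X[i]
            bmu = int(np.argmin(((W - X[i]) ** 2).sum(axis=1)))
            h = np.exp(-((units - bmu) ** 2) / (2.0 * sigma ** 2))
            W += lr * h[:, None] * (X[i] - W)
    return W

W = train_som(X)
# Parcellation: each timecourse is assigned to its best-matching unit.
labels = [int(np.argmin(((W - x) ** 2).sum(axis=1))) for x in X]
```

    After training, timecourses from the two families map to disjoint sets of units, which is the sense in which the map's parcels can be compared against anatomical lobules.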

  5. A mass-flux cumulus parameterization scheme for large-scale models: description and test with observations

    Energy Technology Data Exchange (ETDEWEB)

    Wu, Tongwen [China Meteorological Administration (CMA), National Climate Center (Beijing Climate Center), Beijing (China)

    2012-02-15

    A simple mass-flux cumulus parameterization scheme suitable for large-scale atmospheric models is presented. The scheme is based on a bulk-cloud approach and has the following properties: (1) Deep convection is launched at the level of maximum moist static energy above the top of the boundary layer. It is triggered if there is positive convective available potential energy (CAPE) and relative humidity of the air at the lifting level of convection cloud is greater than 75%; (2) Convective updrafts for mass, dry static energy, moisture, cloud liquid water and momentum are parameterized by a one-dimensional entrainment/detrainment bulk-cloud model. The lateral entrainment of the environmental air into the unstable ascending parcel before it rises to the lifting condensation level is considered. The entrainment/detrainment amount for the updraft cloud parcel is separately determined according to the increase/decrease of updraft parcel mass with altitude, and the mass change for the adiabatic ascent cloud parcel with altitude is derived from a total energy conservation equation of the whole adiabatic system, which involves the updraft cloud parcel and the environment; (3) The convective downdraft is assumed to be saturated and to originate from the level of minimum environmental saturated equivalent potential temperature within the updraft cloud; (4) The mass flux at the base of convective cloud is determined by a closure scheme suggested by Zhang (J Geophys Res 107(D14)), in which the increase/decrease of CAPE due to changes of the thermodynamic states in the free troposphere resulting from convection approximately balances the decrease/increase resulting from large-scale processes. Evaluation of the proposed convection scheme is performed by using a single column model (SCM) forced by the Atmospheric Radiation Measurement Program's (ARM) summer 1995 and 1997 Intensive Observing Period (IOP) observations, and field observations from the Global Atmospheric Research
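    The trigger condition in property (1) can be illustrated with a toy calculation. The profiles, lapse rates, and the simplified CAPE integral (positive parcel buoyancy only, virtual-temperature and moisture effects ignored) are assumptions for this sketch, not the scheme's actual formulation:

```python
import numpy as np

G = 9.81  # gravitational acceleration, m s^-2

def cape(z, t_parcel, t_env):
    """Simplified CAPE: vertical trapezoid integral of positive parcel
    buoyancy g*(Tp - Te)/Te. z in m, temperatures in K."""
    buoy = G * (np.asarray(t_parcel) - np.asarray(t_env)) / np.asarray(t_env)
    pos = np.clip(buoy, 0.0, None)          # only positively buoyant layers
    dz = np.diff(z)
    return float(0.5 * ((pos[1:] + pos[:-1]) * dz).sum())

def deep_convection_triggered(z, t_parcel, t_env, rh_lift, rh_min=0.75):
    """Trigger as described: positive CAPE and relative humidity at the
    lifting level above 75%."""
    return cape(z, t_parcel, t_env) > 0.0 and rh_lift > rh_min

# Illustrative profiles: environment with a 6.5 K/km lapse rate, a
# slightly warmer parcel, so CAPE is positive throughout the column.
z = np.linspace(0.0, 10000.0, 101)
t_env = 300.0 - 0.0065 * z
t_parcel = 301.0 - 0.0060 * z
```

    With these profiles, convection fires when the lifting-level relative humidity is 0.80 but is suppressed at 0.70 even though CAPE is positive, which is exactly the role of the humidity gate in the trigger.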

  6. Modeling canopy-induced turbulence in the Earth system: a unified parameterization of turbulent exchange within plant canopies and the roughness sublayer (CLM-ml v0)

    Directory of Open Access Journals (Sweden)

    G. B. Bonan

    2018-04-01

    Full Text Available Land surface models used in climate models neglect the roughness sublayer and parameterize within-canopy turbulence in an ad hoc manner. We implemented a roughness sublayer turbulence parameterization in a multilayer canopy model (CLM-ml v0 to test if this theory provides a tractable parameterization extending from the ground through the canopy and the roughness sublayer. We compared the canopy model with the Community Land Model (CLM4.5 at seven forest, two grassland, and three cropland AmeriFlux sites over a range of canopy heights, leaf area indexes, and climates. CLM4.5 has pronounced biases during summer months at forest sites in midday latent heat flux, sensible heat flux, gross primary production, nighttime friction velocity, and the radiative temperature diurnal range. The new canopy model reduces these biases by introducing new physics. Advances in modeling stomatal conductance and canopy physiology beyond what is in CLM4.5 substantially improve model performance at the forest sites. The signature of the roughness sublayer is most evident in nighttime friction velocity and the diurnal cycle of radiative temperature, but is also seen in sensible heat flux. Within-canopy temperature profiles are markedly different compared with profiles obtained using Monin–Obukhov similarity theory, and the roughness sublayer produces cooler daytime and warmer nighttime temperatures. The herbaceous sites also show model improvements, but the improvements are related less systematically to the roughness sublayer parameterization in these canopies. The multilayer canopy with the roughness sublayer turbulence improves simulations compared with CLM4.5 while also advancing the theoretical basis for surface flux parameterizations.
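    For context, the Monin–Obukhov profile that the roughness-sublayer parameterization corrects reduces, in neutral conditions, to the familiar log law above the canopy. A minimal sketch follows; the canopy height, displacement height, and roughness length are illustrative forest-like assumptions, and the roughness-sublayer correction itself is not reproduced:

```python
import math

VON_KARMAN = 0.4  # von Karman constant

def wind_neutral(z, ustar, d, z0):
    """Neutral-stability log wind profile above a canopy:
    u(z) = (u*/k) ln((z - d)/z0). Valid only above the roughness
    sublayer; inside the canopy this form does not apply."""
    if z <= d + z0:
        return 0.0
    return (ustar / VON_KARMAN) * math.log((z - d) / z0)

# Illustrative values: 20 m canopy, d ~ 0.67*h, z0 ~ 0.1*h
h, ustar = 20.0, 0.5
d, z0 = 0.67 * h, 0.1 * h
u40 = wind_neutral(40.0, ustar, d, z0)
u60 = wind_neutral(60.0, ustar, d, z0)
```

    The profile vanishes at z = d + z0 and increases logarithmically above it; the abstract's point is that this similarity form is biased within and just above real canopies, which is what the multilayer scheme addresses.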

  7. Connecting Architecture and Implementation

    Science.gov (United States)

    Buchgeher, Georg; Weinreich, Rainer

    Software architectures are still typically defined and described independently from implementation. To avoid architectural erosion and drift, architectural representation needs to be continuously updated and synchronized with system implementation. Existing approaches for architecture representation like informal architecture documentation, UML diagrams, and Architecture Description Languages (ADLs) provide only limited support for connecting architecture descriptions and implementations. Architecture management tools like Lattix, SonarJ, and Sotoarc and UML-tools tackle this problem by extracting architecture information directly from code. This approach works for low-level architectural abstractions like classes and interfaces in object-oriented systems but fails to support architectural abstractions not found in programming languages. In this paper we present an approach for linking and continuously synchronizing a formalized architecture representation to an implementation. The approach is a synthesis of functionality provided by code-centric architecture management and UML tools and higher-level architecture analysis approaches like ADLs.

  8. Parameterization of Surface Roughness Based on ICESat/GLAS Full Waveforms : A Case Study on the Tibetan Plateau

    NARCIS (Netherlands)

    Shi, J.; Menenti, M.; Lindenbergh, R.

    2013-01-01

    Glaciers in the Tibetan mountains are expected to be sensitive to turbulent sensible and latent heat fluxes. One of the most significant factors of the energy exchange between the atmospheric boundary layer and the glacier is the roughness of the glacier surface. However, methods to parameterize

  9. Structural connectivity via the tensor-based morphometry

    OpenAIRE

    Kim, S.; Chung, M.; Hanson, J.; Avants, B.; Gee, J.; Davidson, R.; Pollak, S.

    2011-01-01

    The tensor-based morphometry (TBM) has been widely used in characterizing tissue volume difference between populations at voxel level. We present a novel computational framework for investigating the white matter connectivity using TBM. Unlike other diffusion tensor imaging (DTI) based white matter connectivity studies, we do not use DTI but only T1-weighted magnetic resonance imaging (MRI). To construct brain network graphs, we have developed a new data-driven approach called the ε-neighbor ...

  10. Prediction of heavy rainfall over Chennai Metropolitan City, Tamil Nadu, India: Impact of microphysical parameterization schemes

    Science.gov (United States)

    Singh, K. S.; Bonthu, Subbareddy; Purvaja, R.; Robin, R. S.; Kannan, B. A. M.; Ramesh, R.

    2018-04-01

    This study attempts to investigate the real-time prediction of a heavy rainfall event over the Chennai Metropolitan City, Tamil Nadu, India that occurred on 01 December 2015 using Advanced Research Weather Research and Forecasting (WRF-ARW) model. The study evaluates the impact of six microphysical (Lin, WSM6, Goddard, Thompson, Morrison and WDM6) parameterization schemes of the model on prediction of heavy rainfall event. In addition, model sensitivity has also been evaluated with six Planetary Boundary Layer (PBL) and two Land Surface Model (LSM) schemes. Model forecast was carried out using nested domain and the impact of model horizontal grid resolution was assessed at 9 km, 6 km and 3 km. Analysis of the synoptic features using National Center for Environmental Prediction Global Forecast System (NCEP-GFS) analysis data revealed strong upper-level divergence and high moisture content at lower level were favorable for the occurrence of heavy rainfall event over the northeast coast of Tamil Nadu. The study signified that forecasted rainfall was more sensitive to the microphysics and PBL schemes compared to the LSM schemes. The model provided better forecast of the heavy rainfall event using the logical combination of Goddard microphysics, YSU PBL and Noah LSM schemes, and it was mostly attributed to timely initiation and development of the convective system. The forecast with different horizontal resolutions using cumulus parameterization indicated that the rainfall prediction was not well represented at 9 km and 6 km. The forecast with 3 km horizontal resolution provided better prediction in terms of timely initiation and development of the event. The study highlights that forecasts of heavy rainfall events using a high-resolution mesoscale model with suitable physical parameterization schemes are useful for disaster management and planning to minimize the potential loss of life and property.

  11. Cross-section parameterization of the pebble bed modular reactor using the dimension-wise expansion model

    International Nuclear Information System (INIS)

    Zivanovic, Rastko; Bokov, Pavel M.

    2010-01-01

    This paper discusses the use of the dimension-wise expansion model for cross-section parameterization. The components of the model were approximated with tensor products of orthogonal polynomials. As we demonstrate, the model for a specific cross-section can be built in a systematic way directly from data without any a priori knowledge of its structure. The methodology is able to construct a finite basis of orthogonal polynomials that is required to approximate a cross-section with pre-specified accuracy. The methodology includes a global sensitivity analysis that indicates irrelevant state parameters which can be excluded from the model without compromising the accuracy of the approximation and without repetition of the fitting process. To fit the dimension-wise expansion model, Randomised Quasi-Monte-Carlo Integration and Sparse Grid Integration methods were used. To test the parameterization methods with different integrations embedded we have used the OECD PBMR 400 MW benchmark problem. It has been shown in this paper that the Sparse Grid Integration achieves pre-specified accuracy with a significantly (up to 1-2 orders of magnitude) smaller number of samples compared to Randomised Quasi-Monte-Carlo Integration.
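    The fitting step described above — approximating a cross-section with tensor products of orthogonal polynomials built directly from data — can be sketched with NumPy's Legendre utilities. The two state parameters and the toy cross-section surface below are hypothetical stand-ins, and plain least squares replaces the paper's sparse-grid integration:

```python
import numpy as np
from numpy.polynomial.legendre import legvander2d, legval2d

# Two hypothetical state parameters scaled to [-1, 1] (e.g. fuel
# temperature and moderator density); the surface is a toy stand-in
# for a homogenized cross-section, chosen to lie in the degree-2 span.
rng = np.random.default_rng(0)
x = rng.uniform(-1.0, 1.0, 400)
y = rng.uniform(-1.0, 1.0, 400)
xs = 1.0 + 0.3 * x + 0.2 * y - 0.1 * x * y + 0.05 * x ** 2

deg = (2, 2)                    # tensor product of degree-2 Legendre bases
V = legvander2d(x, y, deg)      # design matrix: one column per basis term
coef, *_ = np.linalg.lstsq(V, xs, rcond=None)
C = coef.reshape(deg[0] + 1, deg[1] + 1)
fit = legval2d(x, y, C)         # evaluate the fitted expansion
```

    Because the toy surface lies exactly in the span of the 3x3 tensor basis, the fit reproduces it to machine precision; in practice the degree per dimension would be grown until a pre-specified accuracy is met, and a sensitivity analysis would prune dimensions whose coefficients are negligible.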

  12. Genome-Enabled Modeling of Biogeochemical Processes Predicts Metabolic Dependencies that Connect the Relative Fitness of Microbial Functional Guilds

    Science.gov (United States)

    Brodie, E.; King, E.; Molins, S.; Karaoz, U.; Steefel, C. I.; Banfield, J. F.; Beller, H. R.; Anantharaman, K.; Ligocki, T. J.; Trebotich, D.

    2015-12-01

    Pore-scale processes mediated by microorganisms underlie a range of critical ecosystem services, regulating carbon stability, nutrient flux, and the purification of water. Advances in cultivation-independent approaches now provide us with the ability to reconstruct thousands of genomes from microbial populations from which functional roles may be assigned. With this capability to reveal microbial metabolic potential, the next step is to put these microbes back where they belong to interact with their natural environment, i.e. the pore scale. At this scale, microorganisms communicate, cooperate and compete across their fitness landscapes with communities emerging that feedback on the physical and chemical properties of their environment, ultimately altering the fitness landscape and selecting for new microbial communities with new properties and so on. We have developed a trait-based model of microbial activity that simulates coupled functional guilds that are parameterized with unique combinations of traits that govern fitness under dynamic conditions. Using a reactive transport framework, we simulate the thermodynamics of coupled electron donor-acceptor reactions to predict energy available for cellular maintenance, respiration, biomass development, and enzyme production. From metagenomics, we directly estimate some trait values related to growth and identify the linkage of key traits associated with respiration and fermentation, macromolecule depolymerizing enzymes, and other key functions such as nitrogen fixation. Our simulations were carried out to explore abiotic controls on community emergence such as seasonally fluctuating water table regimes across floodplain organic matter hotspots. Simulations and metagenomic/metatranscriptomic observations highlighted the many dependencies connecting the relative fitness of functional guilds and the importance of chemolithoautotrophic lifestyles. 
Using an X-Ray microCT-derived soil microaggregate physical model combined

  13. Mutual Connectivity Analysis (MCA) Using Generalized Radial Basis Function Neural Networks for Nonlinear Functional Connectivity Network Recovery in Resting-State Functional MRI.

    Science.gov (United States)

    DSouza, Adora M; Abidin, Anas Zainul; Nagarajan, Mahesh B; Wismüller, Axel

    2016-03-29

    We investigate the applicability of a computational framework, called mutual connectivity analysis (MCA), for directed functional connectivity analysis in both synthetic and resting-state functional MRI data. This framework comprises first evaluating non-linear cross-predictability between every pair of time series prior to recovering the underlying network structure using community detection algorithms. We obtain the non-linear cross-prediction score between time series using Generalized Radial Basis Functions (GRBF) neural networks. These cross-prediction scores characterize the underlying functionally connected networks within the resting brain, which can be extracted using non-metric clustering approaches, such as the Louvain method. We first test our approach on synthetic models with known directional influence and network structure. Our method is able to capture the directional relationships between time series (with an area under the ROC curve = 0.92 ± 0.037) as well as the underlying network structure (Rand index = 0.87 ± 0.063) with high accuracy. Furthermore, we test this method for network recovery on resting-state fMRI data, where results are compared to the motor cortex network recovered from a motor stimulation sequence, resulting in a strong agreement between the two (Dice coefficient = 0.45). We conclude that our MCA approach is effective in analyzing non-linear directed functional connectivity and in revealing underlying functional network structure in complex systems.
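    The pairwise cross-predictability step can be sketched with ordinary least squares standing in for the GRBF networks (the community-detection stage over the resulting score matrix is omitted here). Series names, coupling strengths, and the lag are illustrative assumptions:

```python
import numpy as np

def cross_pred_score(x, y, lag=1):
    """R^2 of predicting y[t] from x[t-lag] by ordinary least squares —
    a linear stand-in for GRBF cross-predictability."""
    X = np.column_stack([x[:-lag], np.ones(len(x) - lag)])
    target = y[lag:]
    beta, *_ = np.linalg.lstsq(X, target, rcond=None)
    resid = target - X @ beta
    return 1.0 - resid.var() / target.var()

rng = np.random.default_rng(0)
n = 500
a = rng.standard_normal(n)
b = np.zeros(n)
b[1:] = 0.8 * a[:-1] + 0.2 * rng.standard_normal(n - 1)  # b driven by a
c = rng.standard_normal(n)                               # c independent

series = {"a": a, "b": b, "c": c}
scores = {(i, j): cross_pred_score(series[i], series[j])
          for i in series for j in series if i != j}
```

    The score matrix is asymmetric — a predicts b far better than b predicts a, and c predicts nothing — which is the directional information a subsequent clustering step (e.g. Louvain) would organize into networks.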

  14. A novel explicit approach to model bromide and pesticide transport in connected soil structures

    Directory of Open Access Journals (Sweden)

    J. Klaus

    2011-07-01

    Full Text Available The present study tests whether an explicit treatment of worm burrows and tile drains as connected structures is feasible for simulating water flow, bromide and pesticide transport in structured heterogeneous soils at hillslope scale. The essence is to represent worm burrows as morphologically connected paths of low flow resistance in a hillslope model. A recent Monte Carlo study (Klaus and Zehe, 2010, Hydrological Processes, 24, p. 1595–1609 revealed that this approach allowed successful reproduction of tile drain event discharge recorded during an irrigation experiment at a tile drained field site. However, several "hillslope architectures" that were all consistent with the available extensive data base allowed a good reproduction of tile drain flow response. Our second objective was thus to find out whether this "equifinality" in spatial model setups may be reduced when including bromide tracer data in the model falsification process. We thus simulated transport of bromide for the 13 spatial model setups that performed best with respect to reproduce tile drain event discharge, without any further calibration. All model setups allowed a very good prediction of the temporal dynamics of cumulated bromide leaching into the tile drain, while only four of them matched the accumulated water balance and accumulated bromide loss into the tile drain. The number of behavioural model architectures could thus be reduced to four. One of those setups was used for simulating transport of Isoproturon, using different parameter combinations to characterise adsorption according to the Footprint data base. Simulations could, however, only reproduce the observed leaching behaviour, when we allowed for retardation coefficients that were very close to one.
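    The retardation coefficients discussed above can be connected to the standard linear-equilibrium sorption formula R = 1 + (rho_b / theta) * Kd; the soil values below are illustrative, not the study's calibrated parameters:

```python
def retardation(bulk_density, water_content, kd):
    """Linear-equilibrium retardation factor R = 1 + (rho_b / theta) * Kd.
    bulk_density in kg/L, volumetric water content dimensionless,
    Kd (sorption distribution coefficient) in L/kg."""
    return 1.0 + (bulk_density / water_content) * kd

# A conservative tracer such as bromide has Kd ~ 0, hence R = 1:
r_bromide = retardation(1.5, 0.35, 0.0)
# A very weakly sorbing solute (hypothetical small Kd) gives R near 1,
# consistent with the abstract's finding for Isoproturon in macropores:
r_pesticide = retardation(1.5, 0.35, 0.01)
```

    Retardation factors close to one mean the pesticide travels almost as fast as the bromide tracer, which is plausible in worm burrows where contact with the sorbing soil matrix is limited.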

  15. A novel explicit approach to model bromide and pesticide transport in connected soil structures

    Science.gov (United States)

    Klaus, J.; Zehe, E.

    2011-07-01

    The present study tests whether an explicit treatment of worm burrows and tile drains as connected structures is feasible for simulating water flow, bromide and pesticide transport in structured heterogeneous soils at hillslope scale. The essence is to represent worm burrows as morphologically connected paths of low flow resistance in a hillslope model. A recent Monte Carlo study (Klaus and Zehe, 2010, Hydrological Processes, 24, p. 1595-1609) revealed that this approach allowed successful reproduction of tile drain event discharge recorded during an irrigation experiment at a tile drained field site. However, several "hillslope architectures" that were all consistent with the available extensive data base allowed a good reproduction of tile drain flow response. Our second objective was thus to find out whether this "equifinality" in spatial model setups may be reduced when including bromide tracer data in the model falsification process. We thus simulated transport of bromide for the 13 spatial model setups that performed best with respect to reproduce tile drain event discharge, without any further calibration. All model setups allowed a very good prediction of the temporal dynamics of cumulated bromide leaching into the tile drain, while only four of them matched the accumulated water balance and accumulated bromide loss into the tile drain. The number of behavioural model architectures could thus be reduced to four. One of those setups was used for simulating transport of Isoproturon, using different parameter combinations to characterise adsorption according to the Footprint data base. Simulations could, however, only reproduce the observed leaching behaviour, when we allowed for retardation coefficients that were very close to one.

  16. Empirical validation of directed functional connectivity.

    Science.gov (United States)

    Mill, Ravi D; Bagic, Anto; Bostan, Andreea; Schneider, Walter; Cole, Michael W

    2017-02-01

    Mapping directions of influence in the human brain connectome represents the next phase in understanding its functional architecture. However, a host of methodological uncertainties have impeded the application of directed connectivity methods, which have primarily been validated via "ground truth" connectivity patterns embedded in simulated functional MRI (fMRI) and magneto-/electro-encephalography (MEG/EEG) datasets. Such simulations rely on many generative assumptions, and we hence utilized a different strategy involving empirical data in which a ground truth directed connectivity pattern could be anticipated with confidence. Specifically, we exploited the established "sensory reactivation" effect in episodic memory, in which retrieval of sensory information reactivates regions involved in perceiving that sensory modality. Subjects performed a paired associate task in separate fMRI and MEG sessions, in which a ground truth reversal in directed connectivity between auditory and visual sensory regions was instantiated across task conditions. This directed connectivity reversal was successfully recovered across different algorithms, including Granger causality and Bayes network (IMAGES) approaches, and across fMRI ("raw" and deconvolved) and source-modeled MEG. These results extend simulation studies of directed connectivity, and offer practical guidelines for the use of such methods in clarifying causal mechanisms of neural processing. Copyright © 2016 Elsevier Inc. All rights reserved.
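    A minimal Granger-causality check of the kind validated in the study can be sketched as a nested-model F-test: does adding lagged x to an autoregressive model of y reduce the residual sum of squares? The simulated coupling and lag order are assumptions for the sketch:

```python
import numpy as np

def lagmat(s, lag):
    """Columns s[t-1], ..., s[t-lag], row-aligned with target s[lag:]."""
    n = len(s)
    return np.column_stack([s[lag - k: n - k] for k in range(1, lag + 1)])

def rss(X, y):
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    r = y - X @ beta
    return float(r @ r)

def granger_f(x, y, lag=2):
    """F-statistic for 'x Granger-causes y'."""
    target = y[lag:]
    ones = np.ones((len(target), 1))
    Xr = np.hstack([ones, lagmat(y, lag)])   # restricted: y's own history
    Xf = np.hstack([Xr, lagmat(x, lag)])     # full: plus lagged x
    rss_r, rss_f = rss(Xr, target), rss(Xf, target)
    dof = len(target) - Xf.shape[1]
    return ((rss_r - rss_f) / lag) / (rss_f / dof)

# Simulated ground truth: y is driven by lagged x, not vice versa.
rng = np.random.default_rng(0)
n = 500
x = rng.standard_normal(n)
y = np.zeros(n)
for t in range(1, n):
    y[t] = 0.5 * y[t - 1] + 0.8 * x[t - 1] + 0.1 * rng.standard_normal()
```

    The test recovers the planted direction: granger_f(x, y) is large while granger_f(y, x) stays near the null value, mirroring the directed-connectivity reversal the paper recovers empirically.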

  17. [A population-targeted approach to connect prevention, care and welfare: visualising the trend].

    Science.gov (United States)

    Lemmens, L C; Drewes, H W; Lette, M; Baan, C A

    2017-01-01

    To map initiatives in the Netherlands using a population-targeted approach to link prevention, care and welfare. Descriptive investigation, based on conversations and structured interviews. We searched for initiatives in which providers in the areas of prevention, care and welfare together with health insurers and/or local authorities attempted to provide the 'triple aim': improving the health of the population and the quality of care, and managing costs. We found potential initiatives on the basis of interviews with key figures, project databases and congress programmes. We looked for additional information on websites and via contact persons to gather additional information to determine whether the initiative met the inclusion criteria. An initiative should link prevention, care and welfare with a minimum of three players actively pursuing a population-targeted goal through multiple interventions for a non-disease specific and district-transcending population. We described the goal, organisational structure, parties involved, activities and funding on the basis of interviews conducted in the period August-December 2015 with the managers of the initiatives included. We found 19 initiatives which met the criteria where there was experimentation with organisational forms, levels of participation, interventions and funding. It was noticeable that the interventions mostly concerned medical care. There was a lack of insight into the 'triple aim', mostly because data exchange between parties is generally difficult. There is an increasing number of initiatives that follow a population-targeted approach. Although the different parties strive to connect the three domains, they are still searching for an optimal collaboration, organisational form, data exchange and financing.

  18. The Listening Train: A Collaborative, Connective Aesthetics ...

    African Journals Online (AJOL)

    The Listening Train: A Collaborative, Connective Aesthetics Approach to Transgressive Social Learning. Southern African Journal of Environmental Education.

  19. Fedosov’s formal symplectic groupoids and contravariant connections

    Science.gov (United States)

    Karabegov, Alexander V.

    2006-10-01

    Using Fedosov's approach we give a geometric construction of a formal symplectic groupoid over any Poisson manifold endowed with a torsion-free Poisson contravariant connection. In the case of Kähler-Poisson manifolds this construction provides, in particular, the formal symplectic groupoids with separation of variables. We show that the dual of a semisimple Lie algebra does not admit torsion-free Poisson contravariant connections.

  20. Parameterization of ion channeling half-angles and minimum yields

    Energy Technology Data Exchange (ETDEWEB)

    Doyle, Barney L.

    2016-03-15

    A MS Excel program has been written that calculates ion channeling half-angles and minimum yields in cubic bcc, fcc and diamond lattice crystals. All of the tables and graphs in the three Ion Beam Analysis Handbooks that previously had to be manually looked up and read from were programed into Excel in handy lookup tables, or parameterized, for the case of the graphs, using rather simple exponential functions with different power functions of the arguments. The program then offers an extremely convenient way to calculate axial and planar half-angles, minimum yields, effects on half-angles and minimum yields of amorphous overlayers. The program can calculate these half-angles and minimum yields for 〈u v w〉 axes and [h k l] planes up to (5 5 5). The program is open source and available at (http://www.sandia.gov/pcnsc/departments/iba/ibatable.html).
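    The half-angles the program parameterizes are variants of the Lindhard characteristic angle psi_1 = sqrt(2 Z1 Z2 e^2 / (E d)). A sketch in Python follows; the Si <110> row spacing and beam parameters are illustrative, and the program's own fitted corrections and minimum-yield formulas are not reproduced:

```python
import math

E2_EV_ANGSTROM = 14.4  # e^2 in eV*Angstrom (Gaussian units)

def lindhard_psi1_deg(z1, z2, energy_ev, d_angstrom):
    """Lindhard characteristic axial channeling angle
    psi_1 = sqrt(2 * Z1 * Z2 * e^2 / (E * d)), returned in degrees.
    z1/z2: atomic numbers of ion and lattice atom; energy_ev: ion
    kinetic energy; d_angstrom: atomic spacing along the row."""
    psi1_rad = math.sqrt(2.0 * z1 * z2 * E2_EV_ANGSTROM
                         / (energy_ev * d_angstrom))
    return math.degrees(psi1_rad)

# Illustrative case: 2 MeV He (Z1 = 2) along a Si <110> row
# (Z2 = 14, d ~ 3.84 Angstrom).
psi1 = lindhard_psi1_deg(2, 14, 2.0e6, 3.84)
```

    For this case psi_1 comes out a bit below one degree, the right order for half-angles read off the Ion Beam Analysis Handbook graphs; the Excel program's exponential/power-law parameterizations refine such estimates.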

  1. Unsupervised classification of major depression using functional connectivity MRI.

    Science.gov (United States)

    Zeng, Ling-Li; Shen, Hui; Liu, Li; Hu, Dewen

    2014-04-01

    The current diagnosis of psychiatric disorders including major depressive disorder based largely on self-reported symptoms and clinical signs may be prone to patients' behaviors and psychiatrists' bias. This study aims at developing an unsupervised machine learning approach for the accurate identification of major depression based on single resting-state functional magnetic resonance imaging scans in the absence of clinical information. Twenty-four medication-naive patients with major depression and 29 demographically similar healthy individuals underwent resting-state functional magnetic resonance imaging. We first clustered the voxels within the perigenual cingulate cortex into two subregions, a subgenual region and a pregenual region, according to their distinct resting-state functional connectivity patterns and showed that a maximum margin clustering-based unsupervised machine learning approach extracted sufficient information from the subgenual cingulate functional connectivity map to differentiate depressed patients from healthy controls with a group-level clustering consistency of 92.5% and an individual-level classification consistency of 92.5%. It was also revealed that the subgenual cingulate functional connectivity network with the highest discriminative power primarily included the ventrolateral and ventromedial prefrontal cortex, superior temporal gyri and limbic areas, indicating that these connections may play critical roles in the pathophysiology of major depression. The current study suggests that subgenual cingulate functional connectivity network signatures may provide promising objective biomarkers for the diagnosis of major depression and that maximum margin clustering-based unsupervised machine learning approaches may have the potential to inform clinical practice and aid in research on psychiatric disorders. Copyright © 2013 Wiley Periodicals, Inc.
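    As a sketch of the unsupervised route, plain 2-means clustering can stand in for the maximum margin clustering used in the study; the synthetic "connectivity feature" vectors below are illustrative, not real fcMRI data:

```python
import numpy as np

def two_means(X, iters=50):
    """Plain 2-means clustering — a simple stand-in for maximum margin
    clustering. Farthest-point initialization keeps it deterministic."""
    c0 = X[0]
    c1 = X[np.argmax(((X - c0) ** 2).sum(axis=1))]
    C = np.vstack([c0, c1]).astype(float)
    labels = np.zeros(len(X), dtype=int)
    for _ in range(iters):
        # assign each sample to its nearest centroid, then update centroids
        d = ((X[:, None, :] - C[None, :, :]) ** 2).sum(axis=2)
        labels = d.argmin(axis=1)
        for j in (0, 1):
            if np.any(labels == j):
                C[j] = X[labels == j].mean(axis=0)
    return labels

# Two well-separated synthetic groups standing in for patients and
# controls (10-dimensional connectivity features, illustrative only).
rng = np.random.default_rng(0)
X = np.vstack([rng.standard_normal((20, 10)),
               rng.standard_normal((20, 10)) + 4.0])
labels = two_means(X)
```

    With well-separated groups the clustering recovers the two-class structure with no labels — the same logic by which the study separates depressed patients from controls using only subgenual cingulate connectivity maps.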

  2. Sensitivity analysis of a parameterization of the stomatal component of the DO3SE model for Quercus ilex to estimate ozone fluxes

    Energy Technology Data Exchange (ETDEWEB)

    Alonso, Rocio [Ecotoxicology of Air Pollution, CIEMAT, Avenida Complutense 22, 28040 Madrid (Spain)], E-mail: rocio.alonso@ciemat.es; Elvira, Susana [Ecotoxicology of Air Pollution, CIEMAT, Avenida Complutense 22, 28040 Madrid (Spain)], E-mail: susana.elvira@ciemat.es; Sanz, Maria J. [Fundacion CEAM, Charles Darwin 14, 46980 Paterna, Valencia (Spain)], E-mail: mjose@ceam.es; Gerosa, Giacomo [Department of Mathematics and Physics, Universita Cattolica del Sacro Cuore, via Musei 41, 25121 Brescia (Italy)], E-mail: giacomo.gerosa@unicatt.it; Emberson, Lisa D. [Stockholm Environment Institute, University of York, York YO 10 5DD (United Kingdom)], E-mail: lde1@york.ac.uk; Bermejo, Victoria [Ecotoxicology of Air Pollution, CIEMAT, Avenida Complutense 22, 28040 Madrid (Spain)], E-mail: victoria.bermejo@ciemat.es; Gimeno, Benjamin S. [Ecotoxicology of Air Pollution, CIEMAT, Avenida Complutense 22, 28040 Madrid (Spain)], E-mail: benjamin.gimeno@ciemat.es

    2008-10-15

    A sensitivity analysis of a proposed parameterization of the stomatal conductance (g_s) module of the European ozone deposition model (DO3SE) for Quercus ilex was performed. The performance of the model was tested against measured g_s in the field at three sites in Spain. The best fit of the model was found for those sites, or during those periods, facing no or mild stress conditions, but a worse performance was found under severe drought or temperature stress, mostly occurring at continental sites. The best performance was obtained when both f_phen and f_SWP were included. A local parameterization accounting for the lower temperatures recorded in winter and the higher water shortage at the continental sites resulted in a better performance of the model. The overall results indicate that two different parameterizations of the model are needed, one for marine-influenced sites and another one for continental sites. - No redundancy between phenological and water-related modifying functions was found when estimating stomatal behavior of Holm oak.

  3. Theoretical Tools for Relativistic Gravimetry, Gradiometry and Chronometric Geodesy and Application to a Parameterized Post-Newtonian Metric

    Directory of Open Access Journals (Sweden)

    Pacôme Delva

    2017-03-01

    Full Text Available An extensive review of past work on relativistic gravimetry, gradiometry and chronometric geodesy is given. Then, general theoretical tools are presented and applied for the case of a stationary parameterized post-Newtonian metric. The special case of a stationary clock on the surface of the Earth is studied.
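    The surface-clock case can be illustrated at leading order, below the parameterized post-Newtonian corrections the paper treats. The fractional frequency shift of a clock at rest on the rotating Earth, relative to a distant static observer, is approximately -(U + v^2/2)/c^2 with U = GM/r; constants are standard values and the setup is a sketch, not the paper's full metric treatment:

```python
import math

C = 299_792_458.0          # speed of light, m/s
GM_EARTH = 3.986004418e14  # Earth's gravitational parameter, m^3/s^2
R_EQUATOR = 6.378137e6     # equatorial radius, m
V_EQUATOR = 465.1          # rotational speed at the equator, m/s

def fractional_shift(gm, r, v):
    """Leading-order fractional frequency shift of a surface clock
    relative to a distant static observer: -(U + v^2/2)/c^2.
    Gravitational potential dominates; the second-order Doppler term
    v^2/(2c^2) is about three orders of magnitude smaller."""
    return -(gm / r + 0.5 * v ** 2) / C ** 2

shift = fractional_shift(GM_EARTH, R_EQUATOR, V_EQUATOR)
```

    The result is of order -7e-10, i.e. a surface clock loses roughly 60 microseconds per day relative to a distant one — the scale at which chronometric geodesy turns clock comparisons into potential (and hence height) measurements.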

  4. An IoT Based Predictive Connected Car Maintenance Approach

    OpenAIRE

    Rohit Dhall; Vijender Kumar Solanki

    2017-01-01

    Internet of Things (IoT) is fast emerging and becoming an almost basic necessity in general life. The concept of using technology in our daily life is not new, but with the advancements in technology, the impact of technology on the daily activities of a person can be seen in almost all aspects of life. Today, all aspects of our daily life, be it the health of a person, his location, movement, etc. can be monitored and analyzed using information captured from various connected devices. This pape...

  5. Performance assessment on high strength steel endplate connections after fire

    NARCIS (Netherlands)

    Qiang, X.; Wu, N.; Jiang, X.; Bijlaard, F.S.K.; Kolstein, M.H.

    2017-01-01

    Purpose – This study aims to reveal more information and understanding on performance and failure mechanisms of high strength steel endplate connections after fire. Design/methodology/approach – An experimental and numerical study on seven endplate connections after

  6. A spectroscopic approach toward depression diagnosis: local metabolism meets functional connectivity.

    Science.gov (United States)

    Demenescu, Liliana Ramona; Colic, Lejla; Li, Meng; Safron, Adam; Biswal, B; Metzger, Coraline Danielle; Li, Shijia; Walter, Martin

    2017-03-01

Abnormal anterior insula (AI) response and functional connectivity (FC) are associated with depression. In addition to clinical features, such as severity, AI FC and its metabolism further predicted therapeutic response. Abnormal FC between anterior cingulate and AI covaried with reduced glutamate level within cingulate cortex. Recently, deficient glial glutamate conversion was found in AI in major depressive disorder (MDD). We therefore postulate a local glutamatergic mechanism in the insula cortex of depressive patients, which is correlated with symptom severity and itself influences AI's network connectivity in MDD. Twenty-five MDD patients and 25 healthy controls (HC) matched on age and sex underwent resting state functional magnetic resonance imaging and magnetic resonance spectroscopy scans. To determine the role of the local glutamate-glutamine complex (Glx) ratio on whole-brain AI FC, we conducted regression analysis with the Glx relative to creatine (Cr) ratio as factor of interest and age, sex, and voxel tissue composition as nuisance factors. We found that in MDD, but not in HC, the AI Glx/Cr ratio correlated positively with AI FC to the right supramarginal gyrus and negatively with AI FC toward the left occipital cortex (p family-wise error corrected). AI Glx/Cr level was negatively correlated with HAMD score (p disintegration of insula toward low-level and supramodal integration areas in MDD. While causality cannot directly be inferred from such correlation, our finding helps to define a multilevel network of response-predicting regions based on local metabolism and connectivity strength.

  7. Facebook and the engineering of connectivity: a multi-layered approach to social media platforms

    NARCIS (Netherlands)

    van Dijck, J.

    2013-01-01

    This article aims to explain how Web 2.0 platforms in general, and Facebook in particular, engineers online connections. Connectivity has become the material and metaphorical wiring of our culture, a culture in which technologies shape and are shaped not only by economic and legal frames, but also

  8. Structural and Functional Brain Connectivity of People with Obesity and Prediction of Body Mass Index Using Connectivity.

    Directory of Open Access Journals (Sweden)

    Bo-yong Park

    Full Text Available Obesity is a medical condition affecting billions of people. Various neuroimaging methods including magnetic resonance imaging (MRI have been used to obtain information about obesity. We adopted a multi-modal approach combining diffusion tensor imaging (DTI and resting state functional MRI (rs-fMRI to incorporate complementary information and thus better investigate the brains of non-healthy weight subjects. The objective of this study was to explore multi-modal neuroimaging and use it to predict a practical clinical score, body mass index (BMI. Connectivity analysis was applied to DTI and rs-fMRI. Significant regions and associated imaging features were identified based on group-wise differences between healthy weight and non-healthy weight subjects. Six DTI-driven connections and 10 rs-fMRI-driven connectivities were identified. DTI-driven connections better reflected group-wise differences than did rs-fMRI-driven connectivity. We predicted BMI values using multi-modal imaging features in a partial least-square regression framework (percent error 15.0%. Our study identified brain regions and imaging features that can adequately explain BMI. We identified potentially good imaging biomarker candidates for obesity-related diseases.
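The prediction step described above, partial least-squares regression from imaging features to BMI, can be sketched generically. The following is a plain PLS1 (NIPALS) implementation run on synthetic data, not the authors' pipeline; the feature matrix merely stands in for the DTI- and rs-fMRI-derived connectivity features.

```python
import numpy as np

def pls1_fit(X, y, n_components):
    """Fit PLS1 regression (NIPALS) and return coefficients plus centering terms."""
    x_mean, y_mean = X.mean(axis=0), y.mean()
    Xk, yk = X - x_mean, y - y_mean
    W, P, q = [], [], []
    for _ in range(n_components):
        w = Xk.T @ yk
        w /= np.linalg.norm(w)           # weight vector
        t = Xk @ w                       # latent scores
        tt = t @ t
        p = Xk.T @ t / tt                # X loadings
        c = (yk @ t) / tt                # y loading
        Xk = Xk - np.outer(t, p)         # deflate X and y
        yk = yk - c * t
        W.append(w); P.append(p); q.append(c)
    W, P, q = np.array(W).T, np.array(P).T, np.array(q)
    B = W @ np.linalg.solve(P.T @ W, q)  # regression coefficients on centered data
    return B, x_mean, y_mean

def pls1_predict(X, B, x_mean, y_mean):
    return (X - x_mean) @ B + y_mean

# Synthetic stand-in: 16 hypothetical imaging features vs. a BMI-like response
rng = np.random.default_rng(0)
X = rng.normal(size=(60, 16))
beta = rng.normal(size=16)
y = X @ beta + 0.1 * rng.normal(size=60)
B, xm, ym = pls1_fit(X, y, n_components=8)
y_hat = pls1_predict(X, B, xm, ym)
```

In practice one would choose the number of latent components by cross-validation; here it is fixed for illustration.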

  9. Connectivity inference from neural recording data: Challenges, mathematical bases and research directions.

    Science.gov (United States)

    Magrans de Abril, Ildefons; Yoshimoto, Junichiro; Doya, Kenji

    2018-06-01

This article presents a review of computational methods for connectivity inference from neural activity data derived from multi-electrode recordings or fluorescence imaging. We first identify biophysical and technical challenges in connectivity inference along the data processing pipeline. We then review connectivity inference methods based on two major mathematical foundations, namely, descriptive model-free approaches and generative model-based approaches. We investigate representative studies in both categories and clarify which challenges have been addressed by which method. We further identify critical open issues and possible research directions. Copyright © 2018 The Author(s). Published by Elsevier Ltd. All rights reserved.

  10. Designing connected marine reserves in the face of global warming.

    Science.gov (United States)

    Álvarez-Romero, Jorge G; Munguía-Vega, Adrián; Beger, Maria; Del Mar Mancha-Cisneros, Maria; Suárez-Castillo, Alvin N; Gurney, Georgina G; Pressey, Robert L; Gerber, Leah R; Morzaria-Luna, Hem Nalini; Reyes-Bonilla, Héctor; Adams, Vanessa M; Kolb, Melanie; Graham, Erin M; VanDerWal, Jeremy; Castillo-López, Alejandro; Hinojosa-Arango, Gustavo; Petatán-Ramírez, David; Moreno-Baez, Marcia; Godínez-Reyes, Carlos R; Torre, Jorge

    2018-02-01

    Marine reserves are widely used to protect species important for conservation and fisheries and to help maintain ecological processes that sustain their populations, including recruitment and dispersal. Achieving these goals requires well-connected networks of marine reserves that maximize larval connectivity, thus allowing exchanges between populations and recolonization after local disturbances. However, global warming can disrupt connectivity by shortening potential dispersal pathways through changes in larval physiology. These changes can compromise the performance of marine reserve networks, thus requiring adjusting their design to account for ocean warming. To date, empirical approaches to marine prioritization have not considered larval connectivity as affected by global warming. Here, we develop a framework for designing marine reserve networks that integrates graph theory and changes in larval connectivity due to potential reductions in planktonic larval duration (PLD) associated with ocean warming, given current socioeconomic constraints. Using the Gulf of California as case study, we assess the benefits and costs of adjusting networks to account for connectivity, with and without ocean warming. We compare reserve networks designed to achieve representation of species and ecosystems with networks designed to also maximize connectivity under current and future ocean-warming scenarios. Our results indicate that current larval connectivity could be reduced significantly under ocean warming because of shortened PLDs. Given the potential changes in connectivity, we show that our graph-theoretical approach based on centrality (eigenvector and distance-weighted fragmentation) of habitat patches can help design better-connected marine reserve networks for the future with equivalent costs. 
We found that maintaining dispersal connectivity incidentally through representation-only reserve design is unlikely, particularly in regions with strong asymmetric patterns of
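The graph-theoretical centrality step mentioned above can be illustrated with eigenvector centrality computed by power iteration on a larval-connectivity matrix; the 4-patch matrix below is a toy example, not Gulf of California data.

```python
import numpy as np

def eigenvector_centrality(C, n_iter=200, tol=1e-10):
    """Eigenvector centrality of a non-negative connectivity matrix by power iteration.

    C[i, j] is the (possibly asymmetric) larval-exchange strength from patch i to patch j;
    patches receiving flow from other central patches score high.
    """
    n = C.shape[0]
    x = np.ones(n) / n
    for _ in range(n_iter):
        x_new = C.T @ x
        x_new /= np.linalg.norm(x_new)
        if np.linalg.norm(x_new - x) < tol:
            break
        x = x_new
    return x

# Toy 4-patch network: patch 1 receives the strongest larval inflow
C = np.array([[0.0, 1.0, 0.1, 0.1],
              [0.5, 0.0, 0.1, 0.1],
              [0.4, 1.0, 0.0, 0.1],
              [0.3, 1.0, 0.2, 0.0]])
scores = eigenvector_centrality(C)
```

Under ocean warming, shorter PLDs would thin the entries of C, and recomputing centrality on the warmed matrix is what lets the network design be adjusted.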

  11. Parameterized combinatorial geometry modeling in Moritz

    International Nuclear Information System (INIS)

    Van Riper, K.A.

    2005-01-01

We describe the use of named variables as surface and solid body coefficients in the Moritz geometry editing program. Variables can also be used as material numbers, cell densities, and transformation values. A variable is defined as a constant or an arithmetic combination of constants and other variables. A variable reference, such as in a surface coefficient, can be a single variable or an expression containing variables and constants. Moritz can read and write geometry models in MCNP and ITS ACCEPT format; support for other codes will be added. The geometry can be saved either with the variables in place, for modifying the models in Moritz, or with the variables evaluated for use in the transport codes. A program window shows a list of variables and provides fields for editing them. Surface coefficients and other values that use a variable reference are shown in a distinctive style on object property dialogs; associated buttons show fields for editing the reference. We discuss our use of variables in defining geometry models for shielding studies in PET clinics. When a model is parameterized through the use of variables, changes such as room dimensions, shielding layer widths, and cell compositions can be achieved quickly by changing a few numbers, without requiring knowledge of the input syntax for the transport code or the tedious and error-prone work of recalculating many surface or solid body coefficients. (author)
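A minimal sketch of such a variable mechanism, assuming variables are arithmetic expressions over constants and other variables; the names and syntax here are illustrative, not Moritz's actual input format.

```python
def resolve(variables):
    """Evaluate a table of named arithmetic expressions until all are numeric.

    Expressions may reference other variables; unresolved names are retried
    until every value is a number, so editing one constant propagates everywhere.
    """
    values = {}
    pending = dict(variables)
    while pending:
        progressed = False
        for name, expr in list(pending.items()):
            try:
                values[name] = eval(expr, {"__builtins__": {}}, values)
                del pending[name]
                progressed = True
            except NameError:
                continue  # depends on a not-yet-resolved variable
        if not progressed:
            raise ValueError(f"circular or undefined reference in {set(pending)}")
    return values

# Hypothetical parameterized shielding-room geometry (cm)
variables = {
    "room_w": "400.0",
    "wall_t": "15.0",
    "inner_w": "room_w - 2 * wall_t",
    "half_w": "inner_w / 2",
}
```

Changing `room_w` alone and re-running `resolve` updates every derived coefficient, which is the workflow benefit the abstract describes.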

  12. A new simple parameterization of daily clear-sky global solar radiation including horizon effects

    International Nuclear Information System (INIS)

    Lopez, Gabriel; Javier Batlles, F.; Tovar-Pescador, Joaquin

    2007-01-01

    Estimation of clear-sky global solar radiation is usually an important previous stage for calculating global solar radiation under all sky conditions. This is, for instance, a common procedure to derive incoming solar radiation from remote sensing or by using digital elevation models. In this work, we present a new model to calculate daily values of clear-sky global solar irradiation. The main goal is the simple parameterization in terms of atmospheric temperature and relative humidity, Angstroem's turbidity coefficient, ground albedo and site elevation, including a factor to take into account horizon obstructions. This allows us to obtain estimates even though a free horizon is not present as is the case of mountainous locations. Comparisons of calculated daily values with measured data show that this model is able to provide a good level of accurate estimates using either daily or mean monthly values of the input parameters. This new model has also been shown to improve daily estimates against those obtained using the clear-sky model from the European Solar Radiation Atlas and other accurate parameterized daily irradiation models. The introduction of Angstroem's turbidity coefficient and ground albedo should allow us to use the increasing worldwide aerosol information available and to consider those sites affected by snow covers in an easy and fast way. In addition, the proposed model is intended to be a useful tool to select clear-sky conditions

  13. Microphysical Parameterizations for NWP: It's All About the Sizes and Production Pathways of Hydrometeors

    Science.gov (United States)

    Michelson, Sara A.; Bao, Jian-Wen; Grell, Evelyn D.

    2017-04-01

Bulk microphysical parameterization schemes are widely used in numerical weather prediction (NWP) models to simulate clouds and precipitation. These schemes are based on assumed number distribution functions for individual hydrometeor species, which are integrable over diameters from zero to infinity. Typically, hydrometeor mass and number mixing ratios are predicted in these schemes. Some schemes also predict a third parameter of the hydrometeor distribution. In this study, four commonly used microphysics schemes of varying complexity that are available in the Weather Research and Forecasting Model (WRF) are investigated and compared using numerical simulations of an idealized 2-D squall line and microphysics budget analysis. Diagnoses of the parameterized pathways for hydrometeor production reveal that differences in the assumed hydrometeor size distributions between the schemes lead to differences in the simulations through the net effect of various microphysical processes on the interaction between latent heating/evaporative cooling and flow dynamics as the squall line develops. Results from this study also highlight the possibility that the advantage of double-moment formulations can be overshadowed by the uncertainties in the spectral definition of individual hydrometeor categories and spectrum-dependent microphysical processes. It is concluded that the major differences between the schemes investigated here lie in the assumed hydrometeor size distributions and the pathways for their production.

  14. Altered functional connectivity of the default mode network in Williams syndrome: a multimodal approach.

    Science.gov (United States)

    Sampaio, Adriana; Moreira, Pedro Silva; Osório, Ana; Magalhães, Ricardo; Vasconcelos, Cristiana; Férnandez, Montse; Carracedo, Angel; Alegria, Joana; Gonçalves, Óscar F; Soares, José Miguel

    2016-07-01

Resting state brain networks are implicated in a variety of relevant brain functions. Importantly, abnormal patterns of functional connectivity (FC) have been reported in several neurodevelopmental disorders. In particular, the Default Mode Network (DMN) has been found to be associated with social cognition. We hypothesize that the DMN may be altered in Williams syndrome (WS), a neurodevelopmental genetic disorder characterized by a unique cognitive and behavioral phenotype. In this study, we assessed the architecture of the DMN using fMRI in WS patients and typically developing matched controls (sex and age) in terms of FC and volumetry of the DMN. Moreover, we complemented the analysis with a functional connectome approach. After excluding participants due to movement artifacts (n = 3), seven participants with WS and their respective matched controls were included in the analyses. A decreased FC between the DMN regions was observed in the WS group when compared with the typically developing group. Specifically, we found a decreased FC in a posterior hub of the DMN including the precuneus, calcarine and the posterior cingulate of the left hemisphere. The functional connectome approach showed focal and globally increased FC in the WS group. The reduced FC of the posterior hub of the DMN in the WS group is consistent with immaturity of the brain FC patterns and may be associated with the singularity of their visual spatial phenotype. © 2016 John Wiley & Sons Ltd.

  15. Stellar Atmospheric Parameterization Based on Deep Learning

    Science.gov (United States)

    Pan, Ru-yang; Li, Xiang-ru

    2017-07-01

Deep learning is a typical learning method widely studied in the fields of machine learning, pattern recognition, and artificial intelligence. This work investigates the problem of stellar atmospheric parameterization by constructing a deep neural network with five layers, with node numbers of 3821-500-100-50-1. The proposed scheme is verified on both the real spectra measured by the Sloan Digital Sky Survey (SDSS) and the theoretical spectra computed with Kurucz's New Opacity Distribution Function (NEWODF) model, to make an automatic estimation of three physical parameters: the effective temperature (Teff), surface gravitational acceleration (lg g), and metallicity ([Fe/H]). The results show that the stacked-autoencoder deep neural network achieves good estimation accuracy. On the SDSS spectra, the mean absolute errors (MAEs) are 79.95 for Teff/K, 0.0058 for lg(Teff/K), 0.1706 for lg(g/(cm·s⁻²)), and 0.1294 dex for [Fe/H]; on the theoretical spectra, the MAEs are 15.34 for Teff/K, 0.0011 for lg(Teff/K), 0.0214 for lg(g/(cm·s⁻²)), and 0.0121 dex for [Fe/H].
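The quoted 3821-500-100-50-1 architecture can be sketched as a plain feed-forward pass. The weights below are random rather than the trained SDSS model, and the sigmoid hidden activations are an assumption typical of stacked-autoencoder networks, not a detail given in the abstract.

```python
import numpy as np

LAYER_SIZES = [3821, 500, 100, 50, 1]   # node counts quoted in the abstract

def init_network(sizes, rng):
    """Random weights/biases for each layer, scaled by fan-in."""
    return [(rng.normal(scale=1.0 / np.sqrt(m), size=(m, n)), np.zeros(n))
            for m, n in zip(sizes[:-1], sizes[1:])]

def forward(x, layers):
    """Forward pass: sigmoid hidden layers, linear output (one stellar parameter)."""
    for W, b in layers[:-1]:
        x = 1.0 / (1.0 + np.exp(-(x @ W + b)))
    W, b = layers[-1]
    return x @ W + b

rng = np.random.default_rng(0)
net = init_network(LAYER_SIZES, rng)
spectrum = rng.normal(size=(1, 3821))   # one stand-in spectrum (3821 flux points)
estimate = forward(spectrum, net)       # would be Teff (or lg g, [Fe/H]) after training
```

In the stacked-autoencoder approach each hidden layer would first be pre-trained to reconstruct its input before supervised fine-tuning; only the inference pass is shown here.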

  16. Boundary layer parameterizations and long-range transport

    International Nuclear Information System (INIS)

    Irwin, J.S.

    1992-01-01

A joint work group between the American Meteorological Society (AMS) and the EPA is pursuing the construction of an air quality model that incorporates boundary layer parameterizations of dispersion and transport. This model could replace the currently accepted model, the Industrial Source Complex (ISC) model. The ISC model is a Gaussian-plume multiple point-source model that provides for consideration of fugitive emissions, aerodynamic wake effects, gravitational settling and dry deposition. A work group of several Federal and State agencies is pursuing the construction of an air quality modeling system for use in assessing and tracking visibility impairment resulting from long-range transport of pollutants. The modeling system is designed to use the hourly vertical profiles of wind, temperature and moisture resulting from a mesoscale meteorological processor that employs four-dimensional data assimilation (FDDA). FDDA involves adding forcing functions to the governing model equations to gradually ''nudge'' the model state toward the observations (12-hourly upper-air observations of wind, temperature and moisture, and 3-hourly surface observations of wind and moisture). In this way it is possible to generate data sets whose accuracy, in terms of transport, precipitation, and dynamic consistency, is superior to both direct interpolation of synoptic-scale analyses of observations and purely predictive-mode model results. (AB) (19 refs.)

  17. Integrated approach for power quality requirements at the point of connection

    NARCIS (Netherlands)

    Cobben, J.F.G.; Bhattacharyya, S.; Myrzik, J.M.A.; Kling, W.L.

    2007-01-01

    Given the nature of electricity, every party connected to the power system influences voltage quality, which means that every party also should meet requirements. In this field, a sound coordination among technical standards (system-related, installation-related and product-related) is of paramount

  18. BOLD signal and functional connectivity associated with loving kindness meditation

    Science.gov (United States)

    Garrison, Kathleen A; Scheinost, Dustin; Constable, R Todd; Brewer, Judson A

    2014-01-01

    Loving kindness is a form of meditation involving directed well-wishing, typically supported by the silent repetition of phrases such as “may all beings be happy,” to foster a feeling of selfless love. Here we used functional magnetic resonance imaging to assess the neural substrate of loving kindness meditation in experienced meditators and novices. We first assessed group differences in blood oxygen level-dependent (BOLD) signal during loving kindness meditation. We next used a relatively novel approach, the intrinsic connectivity distribution of functional connectivity, to identify regions that differ in intrinsic connectivity between groups, and then used a data-driven approach to seed-based connectivity analysis to identify which connections differ between groups. Our findings suggest group differences in brain regions involved in self-related processing and mind wandering, emotional processing, inner speech, and memory. Meditators showed overall reduced BOLD signal and intrinsic connectivity during loving kindness as compared to novices, more specifically in the posterior cingulate cortex/precuneus (PCC/PCu), a finding that is consistent with our prior work and other recent neuroimaging studies of meditation. Furthermore, meditators showed greater functional connectivity during loving kindness between the PCC/PCu and the left inferior frontal gyrus, whereas novices showed greater functional connectivity during loving kindness between the PCC/PCu and other cortical midline regions of the default mode network, the bilateral posterior insula lobe, and the bilateral parahippocampus/hippocampus. These novel findings suggest that loving kindness meditation involves a present-centered, selfless focus for meditators as compared to novices. PMID:24944863
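Seed-based connectivity analysis of the kind used above reduces to correlating a seed region's time series with every other voxel's time series. A minimal numpy sketch on synthetic data follows; the "seed" and voxel signals here are simulated, not fMRI data.

```python
import numpy as np

def seed_connectivity(data, seed_ts):
    """Pearson correlation between a seed time series and each voxel's time series.

    data: (n_voxels, n_timepoints); seed_ts: (n_timepoints,)
    """
    d = data - data.mean(axis=1, keepdims=True)
    s = seed_ts - seed_ts.mean()
    num = d @ s
    den = np.sqrt((d ** 2).sum(axis=1) * (s ** 2).sum())
    return num / den

rng = np.random.default_rng(1)
t = np.linspace(0, 60, 240)                              # illustrative time axis
seed = np.sin(0.5 * t) + 0.3 * rng.normal(size=t.size)   # e.g. a PCC/PCu seed
voxels = np.vstack([
    np.sin(0.5 * t) + 0.3 * rng.normal(size=t.size),     # voxel coupled to the seed
    rng.normal(size=t.size),                             # uncoupled voxel
])
r = seed_connectivity(voxels, seed)
```

The resulting correlation map is then compared between groups (meditators vs. novices) voxel-wise, after appropriate statistical thresholding.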

  19. Parameter Estimation and Sensitivity Analysis of an Urban Surface Energy Balance Parameterization at a Tropical Suburban Site

    Science.gov (United States)

    Harshan, S.; Roth, M.; Velasco, E.

    2014-12-01

Forecasting of urban weather and climate is of great importance as our cities become more populated, and considering the combined effects of global warming and local land-use changes, which make urban inhabitants more vulnerable to, e.g., heat waves and flash floods. In meso/global-scale models, urban parameterization schemes are used to represent urban effects. However, these schemes require a large set of input parameters related to urban morphological and thermal properties. Obtaining all these parameters through direct measurements is usually not feasible. A number of studies have reported on parameter estimation and sensitivity analysis to adjust and determine the most influential parameters for land surface schemes in non-urban areas. Similar work for urban areas is scarce; in particular, studies on urban parameterization schemes in tropical cities have so far not been reported. In order to address the above issues, the town energy balance (TEB) urban parameterization scheme (part of the SURFEX land surface modeling system) was subjected to a sensitivity and optimization/parameter estimation experiment at a suburban site in tropical Singapore. The sensitivity analysis was carried out as a screening test to identify the most sensitive or influential parameters. Thereafter, an optimization/parameter estimation experiment was performed to calibrate the input parameters. The sensitivity experiment was based on the improved Sobol's global variance decomposition method. The analysis showed that parameters related to roads, roofs and soil moisture have a significant influence on the performance of the model. The optimization/parameter estimation experiment was performed using the AMALGAM (a multi-algorithm genetically adaptive multi-objective method) evolutionary algorithm. The experiment showed a remarkable improvement compared to the simulations using the default parameter set.
The calibrated parameters from this optimization experiment can be used for further model
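The Sobol' screening step above can be illustrated with a first-order index estimator in the standard pick-and-freeze Monte Carlo form. The two-parameter toy model below stands in for TEB, which has far more inputs; this is a generic estimator, not the study's code.

```python
import numpy as np

def sobol_first_order(model, n_params, n_samples, rng):
    """First-order Sobol' indices via the Saltelli/Sobol' pick-and-freeze estimator."""
    A = rng.uniform(size=(n_samples, n_params))
    B = rng.uniform(size=(n_samples, n_params))
    fA, fB = model(A), model(B)
    var_y = np.var(np.concatenate([fA, fB]))
    S = np.empty(n_params)
    for i in range(n_params):
        ABi = A.copy()
        ABi[:, i] = B[:, i]          # replace only column i with samples from B
        # V_i estimate: mean of f(B) * (f(AB_i) - f(A))
        S[i] = np.mean(fB * (model(ABi) - fA)) / var_y
    return S

# Toy "model": output depends linearly on two inputs with unequal weight,
# so the analytic first-order indices are 0.2 and 0.8
model = lambda X: X[:, 0] + 2.0 * X[:, 1]
rng = np.random.default_rng(42)
S = sobol_first_order(model, n_params=2, n_samples=20000, rng=rng)
```

Parameters with small first-order (and total) indices can then be fixed at defaults, shrinking the space the AMALGAM-style calibration has to search.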

  20. Sensitivity test of parameterizations of subgrid-scale orographic form drag in the NCAR CESM1

    Science.gov (United States)

    Liang, Yishuang; Wang, Lanning; Zhang, Guang Jun; Wu, Qizhong

    2017-05-01

    Turbulent drag caused by subgrid orographic form drag has significant effects on the atmosphere. It is represented through parameterization in large-scale numerical prediction models. An indirect parameterization scheme, the Turbulent Mountain Stress scheme (TMS), is currently used in the National Center for Atmospheric Research Community Earth System Model v1.0.4. In this study we test a direct scheme referred to as BBW04 (Beljaars et al. in Q J R Meteorol Soc 130:1327-1347, 10.1256/qj.03.73), which has been used in several short-term weather forecast models and earth system models. Results indicate that both the indirect and direct schemes increase surface wind stress and improve the model's performance in simulating low-level wind speed over complex orography compared to the simulation without subgrid orographic effect. It is shown that the TMS scheme produces a more intense wind speed adjustment, leading to lower wind speed near the surface. The low-level wind speed by the BBW04 scheme agrees better with the ERA-Interim reanalysis and is more sensitive to complex orography as a direct method. Further, the TMS scheme increases the 2-m temperature and planetary boundary layer height over large areas of tropical and subtropical Northern Hemisphere land.

  1. 3D surface parameterization using manifold learning for medial shape representation

    Science.gov (United States)

    Ward, Aaron D.; Hamarneh, Ghassan

    2007-03-01

    The choice of 3D shape representation for anatomical structures determines the effectiveness with which segmentation, visualization, deformation, and shape statistics are performed. Medial axis-based shape representations have attracted considerable attention due to their inherent ability to encode information about the natural geometry of parts of the anatomy. In this paper, we propose a novel approach, based on nonlinear manifold learning, to the parameterization of medial sheets and object surfaces based on the results of skeletonization. For each single-sheet figure in an anatomical structure, we skeletonize the figure, and classify its surface points according to whether they lie on the upper or lower surface, based on their relationship to the skeleton points. We then perform nonlinear dimensionality reduction on the skeleton, upper, and lower surface points, to find the intrinsic 2D coordinate system of each. We then center a planar mesh over each of the low-dimensional representations of the points, and map the meshes back to 3D using the mappings obtained by manifold learning. Correspondence between mesh vertices, established in their intrinsic 2D coordinate spaces, is used in order to compute the thickness vectors emanating from the medial sheet. We show results of our algorithm on real brain and musculoskeletal structures extracted from MRI, as well as an artificial multi-sheet example. The main advantages to this method are its relative simplicity and noniterative nature, and its ability to correctly compute nonintersecting thickness vectors for a medial sheet regardless of both the amount of coincident bending and thickness in the object, and of the incidence of local concavities and convexities in the object's surface.
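The dimensionality-reduction step can be illustrated with PCA as a linear stand-in for the nonlinear manifold learning the authors use; for a nearly flat medial sheet both recover an intrinsic 2D coordinate system. The synthetic sheet below is illustrative, not MRI-derived data.

```python
import numpy as np

def intrinsic_2d_coords(points):
    """Project a 3D point cloud onto its two principal axes.

    A linear stand-in for nonlinear manifold learning: for a gently bent
    medial sheet the first two principal components approximate its
    intrinsic 2D coordinate system, over which a planar mesh can be centered.
    """
    centered = points - points.mean(axis=0)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return centered @ vt[:2].T          # (n_points, 2)

# Synthetic, gently bent "medial sheet"
rng = np.random.default_rng(3)
u, v = rng.uniform(-1, 1, size=(2, 500))
sheet = np.column_stack([u, v, 0.1 * u ** 2])   # mostly flat, slight bend
coords = intrinsic_2d_coords(sheet)
```

For strongly bent sheets a nonlinear method (e.g. Isomap-style geodesic embedding) is needed, which is precisely why the paper uses manifold learning rather than a linear projection.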

  2. Image registration using stationary velocity fields parameterized by norm-minimizing Wendland kernel

    DEFF Research Database (Denmark)

    Pai, Akshay Sadananda Uppinakudru; Sommer, Stefan Horst; Sørensen, Lauge

by the regularization term. In a variational formulation, this term is traditionally expressed as a squared norm which is a scalar inner product of the interpolating kernels parameterizing the velocity fields. The minimization of this term using the standard spline interpolation kernels (linear or cubic) is only approximative because of the lack of a compatible norm. In this paper, we propose to replace such interpolants with a norm-minimizing interpolant - the Wendland kernel, which has the same computational simplicity as B-Splines. An application on the Alzheimer's disease neuroimaging initiative showed that Wendland SVF based measures separate (Alzheimer's disease vs. normal controls) better than both B-Spline SVFs (p

  3. Remote Sensing Image Enhancement Based on Non-subsampled Shearlet Transform and Parameterized Logarithmic Image Processing Model

    Directory of Open Access Journals (Sweden)

    TAO Feixiang

    2015-08-01

Full Text Available Aiming at parts of remote sensing images with dark brightness and low contrast, a remote sensing image enhancement method based on the non-subsampled Shearlet transform and a parameterized logarithmic image processing model is proposed in this paper to improve the visual effects and interpretability of remote sensing images. Firstly, a remote sensing image is decomposed into a low-frequency component and high-frequency components by the non-subsampled Shearlet transform. Then the low-frequency component is enhanced according to the PLIP (parameterized logarithmic image processing) model, which can improve the contrast of the image, while an improved fuzzy enhancement method is used to enhance the high-frequency components in order to highlight the information of edges and details. A large number of experimental results show that, compared with five image enhancement methods such as the bidirectional histogram equalization method, the method based on the stationary wavelet transform and the method based on the non-subsampled contourlet transform, the proposed method has advantages in both subjective visual effects and objective quantitative evaluation indexes such as contrast and definition, and can more effectively improve the contrast of remote sensing images and enhance edges and texture details with better visual effects.
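The PLIP model referred to above generalizes the classic LIP (logarithmic image processing) operations; a sketch of those base operations follows, with μ standing in for the model parameter (classically the gray-level range M). This is a generic LIP sketch under that assumption, not the paper's parameterization.

```python
import numpy as np

MU = 256.0   # model parameter; classic LIP uses the gray-level range M

def plip_add(a, b, mu=MU):
    """Logarithmic addition: brightness combines sub-linearly, staying below mu."""
    return a + b - a * b / mu

def plip_scalar_mul(lam, a, mu=MU):
    """Scalar multiplication: lam > 1 brightens, lam < 1 darkens, range preserved."""
    return mu - mu * (1.0 - a / mu) ** lam

# Enhancing a dark low-frequency band: values are lifted but stay inside [0, mu)
band = np.array([10.0, 60.0, 120.0])
enhanced = plip_scalar_mul(1.8, band)
```

The range-preserving property is what makes logarithmic processing attractive for dark, low-contrast imagery: unlike plain gain, it cannot saturate pixels past the maximum gray level.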

  4. Modeling investor optimism with fuzzy connectives

    NARCIS (Netherlands)

    Lovric, M.; Almeida, R.J.; Kaymak, U.; Spronk, J.; Carvalho, J.P.; Dubois, D.; Kaymak, U.; Sousa, J.M.C.

    2009-01-01

    Optimism or pessimism of investors is one of the important characteristics that determine the investment behavior in financial markets. In this paper, we propose a model of investor optimism based on a fuzzy connective. The advantage of the proposed approach is that the influence of different levels

  5. Connective tissue graft vs. emdogain: A new approach to compare the outcomes.

    Science.gov (United States)

    Sayar, Ferena; Akhundi, Nasrin; Gholami, Sanaz

    2013-01-01

The aim of this clinical trial was to clinically evaluate the use of enamel matrix protein derivative combined with the coronally positioned flap to treat gingival recession, compared to the subepithelial connective tissue graft, using a new method to measure the denuded root surface area. Thirteen patients, each with two or more similar bilateral Miller class I or II gingival recessions (40 recessions), were randomly assigned to the test (enamel matrix protein derivative + coronally positioned flap) or control group (subepithelial connective tissue graft). Recession depth, width, probing depth, keratinized gingiva, and plaque index were recorded at baseline and at one, three, and six months after treatment. A stent was used to measure the denuded root surface area at each examination session. Results were analyzed using the Kolmogorov-Smirnov, Wilcoxon, Friedman, and paired-sample t tests. The average percentages of root coverage for the control and test groups were 63.3% and 55%, respectively. Both groups showed a significant increase in keratinized gingiva (P 0.05). The results of the Friedman test were significant for clinical indices (P < 0.05), except for probing depth in the control group (P = 0.166). Enamel matrix protein derivative showed the same results as the subepithelial connective tissue graft, with a relatively easy procedure to perform and low patient morbidity.

  6. A universal parameterization of chaos in various beam-wave interactions

    International Nuclear Information System (INIS)

    Lee, Jae Koo; Lee, Hae June; Hur, Min Sup; Bae, InDeog; Yang, Yi

    1998-01-01

The comprehensive parameter space of self-oscillation and its period-doubling route to chaos are shown for a bounded collisionless beam-plasma system. In this parameterization, it is helpful to use a potentially universal parameter in close analogy with free-electron-laser chaos. A common parameter, which is related to the velocity slippage and the ratio of bounce to oscillation frequencies, is shown to have similar significance for different physical systems. This single parameter replaces the dependences on many input parameters, and is thus suitable as a simplifying and diagnostic measure of nonlinear dynamical and chaotic phenomena for various systems of particle-wave interactions. The results of independent kinetic simulations verify those of nonlinear fluid simulations. Other standard routes to chaos via intermittent or quasiperiodic oscillations are also shown for the undriven plasma systems. Some correlation of linear characteristics to nonlinear phenomena was noted. (author)

  7. 2D supergravity and its connection to integrable models

    International Nuclear Information System (INIS)

    Arnaudov, L.N.; Prodanov, E.M.; Rashkov, R.C.

    1993-05-01

In this recent work, two different approaches for obtaining the covariant W_2-action of 2-d quantum supergravity are considered. The first one is based on Hamiltonian reduction of a flat Osp(2/1) connection in holomorphic polarization. Adding extra degrees of freedom with the help of a gauging procedure, the W_2-action and the superconformal identities are obtained. It is shown that the super Virasoro transformations preserve the form of the Lax connection and therefore are symmetries of the sKdV equations. In the second approach, starting with Chern-Simons theory and using non-canonical polarization, the zero-curvature condition entails the same results. (author). 7 refs

  8. Parameterized Post-Newtonian Expansion of Scalar-Vector-Tensor Theory of Gravity

    International Nuclear Information System (INIS)

    Arianto; Zen, Freddy P.; Gunara, Bobby E.; Hartanto, Andreas

    2010-01-01

We investigate the weak-field, post-Newtonian expansion of the solution of the field equations in scalar-vector-tensor theory of gravity, restricting ourselves to the first post-Newtonian order. The parameterized post-Newtonian (PPN) parameters are determined by expanding the modified field equations in the metric perturbation. We then compare the solution to the PPN formalism in the first PN approximation proposed by Will and Nordtvedt and read off the coefficients (the PPN parameters) of the post-Newtonian potentials of the theory. We find that the values of γ_PPN and β_PPN are the same as in General Relativity, while the coupling functions β₁, β₂, and β₃ arise from preferred-frame effects.
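For reference, the standard textbook form of the first-PN metric in which the parameters γ and β appear (generic PPN form, not the specific solution of this record's scalar-vector-tensor theory) is:

```latex
g_{00} = -1 + 2U - 2\beta U^{2} + \dots, \qquad
g_{ij} = \left(1 + 2\gamma U\right)\delta_{ij} + \dots
```

where U is the Newtonian potential; General Relativity corresponds to γ = β = 1, consistent with the record's finding for γ_PPN and β_PPN.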

  9. A systematic study on the influence of nuclear surface tension and temperature upon the parameterization of the fusion dynamics

    Energy Technology Data Exchange (ETDEWEB)

    Gharaei, R.; Hadikhani, A. [Hakim Sabzevari University, Department of Physics, Sciences Faculty, Sabzevar (Iran, Islamic Republic of)

    2017-07-15

For the first time, the influence of the surface energy coefficient γ and temperature T on the parameterization of the fusion barriers is systematically analyzed within the framework of the proximity formalism, namely the proximity 1977, proximity 1988 and proximity 2010 models. A total of 114 fusion reactions with the condition 39 ≤ Z₁Z₂ ≤ 1520 for the charge product of their participant nuclei have been studied. We present γ-dependent and T-dependent pocket formulas which reproduce the theoretical and empirical data of the fusion barrier height and position for the considered reactions with good accuracy. It is shown that the quality of the γ-dependent formula improves with increasing strength of the surface energy coefficient. Moreover, the obtained results confirm that including thermal effects improves the agreement between the parameterized and empirical data of the barrier characteristics. (orig.)

  10. Determination of the potential scattering parameter and parameterization of neutron cross-sections in the low-energy region

    International Nuclear Information System (INIS)

Novoselov, G.M.; Litvinskij, L.L.

    2001-01-01

Different cross-section parameterization methods in the low-energy region are considered. It is shown that the potential scattering parameter value derived from analysis of experimental cross-section data depends essentially on the method used to take account of the nearest resonances. A formula describing this dependence is obtained. The results are verified by numerical model calculations. (author)

  11. Quantifying Individual Brain Connectivity with Functional Principal Component Analysis for Networks.

    Science.gov (United States)

    Petersen, Alexander; Zhao, Jianyang; Carmichael, Owen; Müller, Hans-Georg

    2016-09-01

In typical functional connectivity studies, connections between voxels or regions in the brain are represented as edges in a network. Networks for different subjects are constructed at a given graph density and are summarized by some network measure such as path length. Examining these summary measures for many density values yields samples of connectivity curves, one for each individual. This has led to the adoption of basic tools of functional data analysis, most commonly to compare control and disease groups through the average curves in each group. Such group differences, however, neglect the variability in the sample of connectivity curves. In this article, the use of functional principal component analysis (FPCA) is demonstrated to enrich functional connectivity studies by providing increased power and flexibility for statistical inference. Specifically, individual connectivity curves are related to individual characteristics such as age and measures of cognitive function, thus providing a tool to relate brain connectivity with these variables at the individual level. This individual level analysis opens a new perspective that goes beyond previous group level comparisons. Using a large data set of resting-state functional magnetic resonance imaging scans, relationships between connectivity and two measures of cognitive function (episodic memory and executive function) were investigated. The group-based approach was implemented by dichotomizing the continuous cognitive variable and testing for group differences, resulting in no statistically significant findings. To demonstrate the new approach, FPCA was implemented, followed by linear regression models with cognitive scores as responses, identifying significant associations of connectivity in the right middle temporal region with both cognitive scores.
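The FPCA-then-regression pipeline described in this record can be sketched on synthetic data. All names and the toy data-generating model below are illustrative assumptions, not the authors' code:

```python
import numpy as np

# Toy sketch of FPCA on "connectivity curves" (a network summary
# measure as a function of graph density), loosely following the
# record's pipeline: compute curves, extract FPC scores, regress a
# cognitive variable on the scores. Data are synthetic.

rng = np.random.default_rng(0)
n_subj, n_dens = 100, 50
dens = np.linspace(0.05, 0.5, n_dens)         # graph density grid

# Synthetic curves: common mean shape + subject-specific random effect
base = np.log1p(10 * dens)                     # mean connectivity curve
scores_true = rng.normal(size=n_subj)          # latent subject effect
curves = base + np.outer(scores_true, dens) \
         + 0.05 * rng.normal(size=(n_subj, n_dens))

# FPCA reduces to PCA on the centered curves (common discrete grid)
centered = curves - curves.mean(axis=0)
U, s, Vt = np.linalg.svd(centered, full_matrices=False)
fpc_scores = U * s                             # subject-level FPC scores
explained = s**2 / np.sum(s**2)                # variance explained

# Relate the first FPC score to a cognitive variable by linear regression
cognition = 2.0 * scores_true + rng.normal(scale=0.5, size=n_subj)
X = np.column_stack([np.ones(n_subj), fpc_scores[:, 0]])
beta, *_ = np.linalg.lstsq(X, cognition, rcond=None)
print("variance explained by FPC1:", round(explained[0], 3))
print("slope of cognition on FPC1 score:", round(beta[1], 3))
```

Because the toy curves are rank-one plus noise, the first component captures nearly all the between-subject variability, and its score recovers the latent effect that drives the cognitive variable.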

  12. Fault tolerant control based on active fault diagnosis

    DEFF Research Database (Denmark)

    Niemann, Hans Henrik

    2005-01-01

    An active fault diagnosis (AFD) method will be considered in this paper in connection with a Fault Tolerant Control (FTC) architecture based on the YJBK parameterization of all stabilizing controllers. The architecture consists of a fault diagnosis (FD) part and a controller reconfiguration (CR......) part. The FTC architecture can be applied for additive faults, parametric faults, and for system structural changes. Only parametric faults will be considered in this paper. The main focus in this paper is on the use of the new approach of active fault diagnosis in connection with FTC. The active fault...... diagnosis approach is based on including an auxiliary input in the system. A fault signature matrix is introduced in connection with AFD, given as the transfer function from the auxiliary input to the residual output. This can be considered as a generalization of the passive fault diagnosis case, where...
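The core active-fault-diagnosis idea in this record (an auxiliary input whose transfer to the residual is nonzero only under a fault) can be sketched with a toy first-order plant. This is a minimal illustration of the concept, not the paper's YJBK machinery:

```python
import numpy as np

# Toy active fault diagnosis: inject an auxiliary probe signal and
# watch the residual between the plant and its nominal model. With no
# fault the residual is identically zero; a parametric fault makes the
# transfer from the auxiliary input to the residual nonzero.

def residual_energy(a_true, a_nominal=0.8, b=1.0, n=200):
    """Accumulated squared residual for plant y+ = a*y + b*u under a
    sinusoidal auxiliary input, against the nominal model."""
    aux = np.sin(0.3 * np.arange(n))            # auxiliary excitation
    y_plant = y_model = 0.0
    energy = 0.0
    for k in range(n):
        y_plant = a_true * y_plant + b * aux[k]      # real system
        y_model = a_nominal * y_model + b * aux[k]   # nominal model
        energy += (y_plant - y_model) ** 2           # residual output
    return energy

print("no fault :", residual_energy(0.8))   # parameter matches model
print("fault    :", residual_energy(0.6))   # parametric fault in 'a'
```

The fault-free residual is exactly zero because plant and model see the same input; any parametric deviation shows up as residual energy driven by the probe.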

  13. Assessment of the turbulence parameterization schemes for the Martian mesoscale simulations

    Science.gov (United States)

    Temel, Orkun; Karatekin, Ozgur; Van Beeck, Jeroen

    2016-07-01

Turbulent transport within the Martian atmospheric boundary layer (ABL) is one of the most important physical processes in the Martian atmosphere, due to the very thin structure of the Martian atmosphere and super-adiabatic conditions during the diurnal cycle [1]. Realistic modeling of turbulent fluxes within the Martian ABL has a crucial effect on many physical phenomena, including dust devils [2], methane dispersion [3] and nocturnal jets [4]. Moreover, the surface heat and mass fluxes, which are related to mass transport within the sub-surface of Mars, are computed by the turbulence parameterization schemes. Therefore, in addition to its applications within the Martian boundary layer, the parameterization of turbulence has an important bearing on biological research on Mars, including investigation of the water cycle and sub-surface modeling. In terms of the turbulence modeling approaches employed for the Martian ABL, "planetary boundary layer (PBL) schemes" have been applied not only for global circulation modeling but also for mesoscale simulations [5]. The PBL schemes used for Mars are variants of schemes developed for the Earth, based either on empirical determination of turbulent fluxes [6] or on solving a one-dimensional turbulent kinetic energy equation [7]. Although Large Eddy Simulation techniques have also been applied with regional models for Mars, these advanced models still use features of the traditional PBL schemes for sub-grid modeling [8]. Therefore, assessment of these PBL schemes is vital for a better understanding of the atmospheric processes of Mars. In this framework, the present study is devoted to the validation of different turbulence modeling approaches for the Martian ABL against Viking Lander [9] and MSL [10] datasets. The GCM/Mesoscale code being used is the PlanetWRF, the extended version
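The heart of a TKE-based PBL scheme mentioned in this record is a single prognostic equation for turbulent kinetic energy. A minimal sketch, with illustrative constants not tuned to Mars (or Earth):

```python
# Minimal sketch of the core of a TKE-based PBL scheme: one ordinary
# differential equation for turbulent kinetic energy k,
#   dk/dt = (shear/buoyancy) production - dissipation,
# with dissipation modeled as eps = c_eps * k**1.5 / l for a mixing
# length l. Constants and forcing are illustrative only.

def integrate_tke(production=1e-3, l=100.0, c_eps=0.7, dt=1.0, n=20000):
    k = 1e-4                                  # initial TKE [m^2/s^2]
    for _ in range(n):
        eps = c_eps * k**1.5 / l              # dissipation rate
        k += dt * (production - eps)          # forward-Euler step
        k = max(k, 0.0)                       # TKE stays non-negative
    return k

# Analytic equilibrium: production = c_eps * k_eq**1.5 / l
k_eq = (1e-3 * 100.0 / 0.7) ** (2.0 / 3.0)
print(integrate_tke(), k_eq)
```

With constant production the integration relaxes to the equilibrium where production balances dissipation, which is the balance real schemes perturb with time-varying surface fluxes.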

  14. Metabolic connectivity mapping reveals effective connectivity in the resting human brain.

    Science.gov (United States)

    Riedl, Valentin; Utz, Lukas; Castrillón, Gabriel; Grimmer, Timo; Rauschecker, Josef P; Ploner, Markus; Friston, Karl J; Drzezga, Alexander; Sorg, Christian

    2016-01-12

    Directionality of signaling among brain regions provides essential information about human cognition and disease states. Assessing such effective connectivity (EC) across brain states using functional magnetic resonance imaging (fMRI) alone has proven difficult, however. We propose a novel measure of EC, termed metabolic connectivity mapping (MCM), that integrates undirected functional connectivity (FC) with local energy metabolism from fMRI and positron emission tomography (PET) data acquired simultaneously. This method is based on the concept that most energy required for neuronal communication is consumed postsynaptically, i.e., at the target neurons. We investigated MCM and possible changes in EC within the physiological range using "eyes open" versus "eyes closed" conditions in healthy subjects. Independent of condition, MCM reliably detected stable and bidirectional communication between early and higher visual regions. Moreover, we found stable top-down signaling from a frontoparietal network including frontal eye fields. In contrast, we found additional top-down signaling from all major clusters of the salience network to early visual cortex only in the eyes open condition. MCM revealed consistent bidirectional and unidirectional signaling across the entire cortex, along with prominent changes in network interactions across two simple brain states. We propose MCM as a novel approach for inferring EC from neuronal energy metabolism that is ideally suited to study signaling hierarchies in the brain and their defects in brain disorders.
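The central idea of MCM can be caricatured on synthetic data: because neuronal energy is consumed mostly postsynaptically, in the target region of a connection the local metabolism should covary spatially with that region's functional connectivity to the source. This is a heavily simplified toy of the concept, not the paper's fMRI/PET pipeline:

```python
import numpy as np

# Toy MCM-style direction inference on synthetic voxel data.
# Ground truth of the toy: A signals to B, so B's metabolism tracks
# its FC with A, while A's metabolism is unrelated to its FC with B.

rng = np.random.default_rng(1)
n_vox = 500
fc_A = rng.uniform(0, 1, n_vox)   # FC of region-A voxels with region B
fc_B = rng.uniform(0, 1, n_vox)   # FC of region-B voxels with region A

metab_B = 2.0 * fc_B + rng.normal(scale=0.2, size=n_vox)   # target
metab_A = rng.normal(loc=1.0, scale=0.2, size=n_vox)       # source

# Direction is assigned toward the region where metabolism covaries
# with incoming connectivity strength.
r_A = np.corrcoef(fc_A, metab_A)[0, 1]
r_B = np.corrcoef(fc_B, metab_B)[0, 1]
direction = "A -> B" if r_B > r_A else "B -> A"
print(f"corr in A: {r_A:.2f}  corr in B: {r_B:.2f}  inferred: {direction}")
```

The spatial correlation is high only in the receiving region, so the arrow is recovered from undirected FC plus local metabolism alone.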

  15. A multi-scale qualitative approach to assess the impact of urbanization on natural habitats and their connectivity

    Energy Technology Data Exchange (ETDEWEB)

Scolozzi, Rocco, E-mail: rocco.scolozzi@fmach.it [Sustainable Agro-ecosystems and Bioresources Department, IASMA Research and Innovation Centre, Fondazione Edmund Mach, Via E. Mach 1, 38010 San Michele all'Adige (Italy); Geneletti, Davide, E-mail: geneletti@ing.unitn.it [Department of Civil and Environmental Engineering, University of Trento, Trento (Italy)

    2012-09-15

Habitat loss and fragmentation are often concurrent with land conversion and urbanization. Simple application of GIS-based landscape pattern indicators may not be sufficient to support meaningful biodiversity impact assessment. A review of the literature reveals that habitat definition and habitat fragmentation are frequently inadequately considered in environmental assessment, notwithstanding the increasing number of tools and approaches reported in the landscape ecology literature. This paper presents an approach for assessing impacts on habitats at a local scale, where availability of species data is often limited, developed for an alpine valley in northern Italy. The methodology is multi-scale and species-oriented, and provides both qualitative and quantitative definitions of impact significance. A qualitative decision model is used to assess ecological values in order to support land-use decisions at the local level. Building on recent studies in the same region, the methodology integrates various approaches, such as landscape graphs, object-oriented rule-based habitat assessment and expert knowledge. The results provide insights into future habitat loss and fragmentation caused by land-use changes, and aim at supporting decision-making in planning and suggesting possible ecological compensation. - Highlights: ► Many environmental assessments inadequately consider habitat loss and fragmentation. ► A species-based perspective for defining habitat quality and connectivity is advocated. ► Species-based tools are difficult to apply when data availability is limited. ► We propose a species-oriented, multi-scale qualitative approach. ► Advantages include being species-oriented and providing value-based information.

  16. A multi-scale qualitative approach to assess the impact of urbanization on natural habitats and their connectivity

    International Nuclear Information System (INIS)

    Scolozzi, Rocco; Geneletti, Davide

    2012-01-01

Habitat loss and fragmentation are often concurrent with land conversion and urbanization. Simple application of GIS-based landscape pattern indicators may not be sufficient to support meaningful biodiversity impact assessment. A review of the literature reveals that habitat definition and habitat fragmentation are frequently inadequately considered in environmental assessment, notwithstanding the increasing number of tools and approaches reported in the landscape ecology literature. This paper presents an approach for assessing impacts on habitats at a local scale, where availability of species data is often limited, developed for an alpine valley in northern Italy. The methodology is multi-scale and species-oriented, and provides both qualitative and quantitative definitions of impact significance. A qualitative decision model is used to assess ecological values in order to support land-use decisions at the local level. Building on recent studies in the same region, the methodology integrates various approaches, such as landscape graphs, object-oriented rule-based habitat assessment and expert knowledge. The results provide insights into future habitat loss and fragmentation caused by land-use changes, and aim at supporting decision-making in planning and suggesting possible ecological compensation. - Highlights: ► Many environmental assessments inadequately consider habitat loss and fragmentation. ► A species-based perspective for defining habitat quality and connectivity is advocated. ► Species-based tools are difficult to apply when data availability is limited. ► We propose a species-oriented, multi-scale qualitative approach. ► Advantages include being species-oriented and providing value-based information.

  17. Parameterization of atmosphere-surface exchange of CO2 over sea ice

    DEFF Research Database (Denmark)

    Sørensen, L. L.; Jensen, B.; Glud, Ronnie N.

    2014-01-01

are discussed. We found the flux to be small during the late winter, with fluxes in both directions. Not surprisingly, we find that the resistance across the surface controls the fluxes, and detailed knowledge of the brine volume and carbon chemistry within the brines, as well as knowledge of snow cover and carbon...... chemistry in the ice, are essential to estimate the partial pressure of CO2 (pCO2) and the CO2 flux. Further investigations of surface structure and snow cover and driving parameters such as heat flux, radiation, ice temperature and brine processes are required to adequately parameterize the surface resistance....

  18. Cyborg and Autism: Exploring New Social Articulations via Posthuman Connections

    Science.gov (United States)

    Reddington, Sarah; Price, Deborah

    2016-01-01

    This paper explores the connections a young man with autism spectrum (AS) made using cyborg imagery having attended school in Nova Scotia, Canada. Cyborg is applied as a conceptual approach to explore the young man's connections to human and nonhuman elements. We also make use of rhizomes as a methodological framework to support the exploration of…

  19. Inequalities and Duality in Gene Coexpression Networks of HIV-1 Infection Revealed by the Combination of the Double-Connectivity Approach and the Gini's Method

    Directory of Open Access Journals (Sweden)

    Chuang Ma

    2011-01-01

Full Text Available Symbiosis (Sym) and pathogenesis (Pat) constitute a duality problem of microbial infection, including HIV/AIDS. Statistical analysis of inequalities and duality in gene coexpression networks (GCNs) of HIV-1 infection may yield novel insights into AIDS. In this study, we focused on analysis of GCNs of uninfected subjects and of HIV-1-infected patients at three different stages of viral infection, based on data deposited in the GEO database of NCBI. The inequalities and duality in these GCNs were analyzed by combining the double-connectivity (DC) approach and Gini's method. DC analysis reveals significant differences between positive and negative connectivity in HIV-1 stage-specific GCNs. The inequality measures of negative connectivity and edge weight change more significantly than those of positive connectivity and edge weight in GCNs from the HIV-1-uninfected to the AIDS stages. With the permutation test method, we identified a set of genes with significant changes in the inequality and duality measures of edge weight. Functional analysis shows that these genes are highly enriched for the immune system, which plays an essential role in the Sym-Pat duality (SPD) of microbial infections. Understanding the SPD problems of HIV-1 infection may provide novel intervention strategies for AIDS.
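The Gini inequality measure that this record applies to network edge weights can be sketched directly; the formula below is the standard mean-difference estimator, and the edge weights are invented for illustration, not the study's data:

```python
import numpy as np

# Gini coefficient of a set of network edge weights: 0 means all edges
# carry equal weight, values near (n-1)/n mean one edge dominates.

def gini(x):
    """Gini coefficient of a 1-D array of non-negative values."""
    x = np.sort(np.asarray(x, dtype=float))
    n = x.size
    if x.sum() == 0:
        return 0.0
    # G = 2 * sum_i i*x_(i) / (n * sum x) - (n + 1) / n, with i = 1..n
    # and x_(i) the sorted values (standard closed form).
    i = np.arange(1, n + 1)
    return 2.0 * np.sum(i * x) / (n * x.sum()) - (n + 1.0) / n

print(gini([1, 1, 1, 1]))     # perfectly equal edge weights -> 0.0
print(gini([0, 0, 0, 10]))    # one edge carries everything -> 0.75
```

Comparing this statistic between positive-edge and negative-edge weight sets, per infection stage, is the kind of inequality contrast the record describes.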

  20. Multiscale connectivity and graph theory highlight critical areas for conservation under climate change.

    Science.gov (United States)

Dilts, Thomas E.; Weisberg, Peter J.; Leitner, Philip; Matocq, Marjorie D.; Inman, Richard D.; Nussear, Kenneth E.; Esque, Todd C.

    2016-06-01

Conservation planning and biodiversity management require information on landscape connectivity across a range of spatial scales, from individual home ranges to large regions. Reduction in landscape connectivity due to changes in land use or development is expected to act synergistically with alterations to habitat mosaic configuration arising from climate change. We illustrate a multiscale connectivity framework to aid habitat conservation prioritization in the context of changing land use and climate. Our approach, which builds upon the strengths of multiple landscape connectivity methods, including graph theory, circuit theory, and least-cost path analysis, is here applied to the conservation planning requirements of the Mohave ground squirrel. The distribution of this threatened Californian species, as for numerous other desert species, overlaps with the proposed placement of several utility-scale renewable energy developments in the American southwest. Our approach uses information derived at three spatial scales to forecast potential changes in habitat connectivity under various scenarios of energy development and climate change. By disentangling the potential effects of habitat loss and fragmentation across multiple scales, we identify priority conservation areas for both core habitat and critical corridor or stepping-stone habitats. This approach is a first step toward applying graph theory to analyze habitat connectivity for species with continuously distributed habitat and should be applicable across a broad range of taxa.
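Of the three connectivity methods this record combines, least-cost path analysis is the simplest to sketch: Dijkstra's algorithm over a raster of movement resistances. The grid below is invented for illustration, not the study's habitat data:

```python
import heapq

# Toy least-cost-path analysis on a resistance grid. Cell values are
# movement resistances; the cost of a route is the summed resistance
# of the cells it enters (4-neighbour moves, start cell free).

def least_cost(grid, start, goal):
    """Dijkstra over a 2-D resistance grid; returns the minimum
    accumulated resistance from start to goal."""
    rows, cols = len(grid), len(grid[0])
    best = {start: 0.0}
    pq = [(0.0, start)]
    while pq:
        cost, (r, c) = heapq.heappop(pq)
        if (r, c) == goal:
            return cost
        if cost > best.get((r, c), float("inf")):
            continue                         # stale queue entry
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols:
                nxt = cost + grid[nr][nc]
                if nxt < best.get((nr, nc), float("inf")):
                    best[(nr, nc)] = nxt
                    heapq.heappush(pq, (nxt, (nr, nc)))
    return float("inf")                      # goal unreachable

resistance = [
    [1, 1, 9, 1],
    [9, 1, 9, 1],
    [9, 1, 1, 1],
    [9, 9, 9, 1],
]
print(least_cost(resistance, (0, 0), (3, 3)))   # -> 6.0
```

The cheapest corridor threads the low-resistance cells rather than the geometrically shortest route, which is exactly the behaviour used to delineate corridors and stepping stones.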