WorldWideScience

Sample records for policy gradient based

  1. Policy Gradient Adaptive Dynamic Programming for Data-Based Optimal Control.

    Science.gov (United States)

    Luo, Biao; Liu, Derong; Wu, Huai-Ning; Wang, Ding; Lewis, Frank L

    2017-10-01

    The model-free optimal control problem of general discrete-time nonlinear systems is considered in this paper, and a data-based policy gradient adaptive dynamic programming (PGADP) algorithm is developed to design an adaptive optimal controller. By using offline and online data rather than a mathematical system model, the PGADP algorithm improves the control policy with a gradient descent scheme. The convergence of the PGADP algorithm is proved by demonstrating that the constructed Q-function sequence converges to the optimal Q-function. Based on the PGADP algorithm, the adaptive control method is developed with an actor-critic structure and the method of weighted residuals. Its convergence properties are analyzed, where the approximate Q-function converges to its optimum. Computer simulation results demonstrate the effectiveness of the PGADP-based adaptive control method.
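
    As a minimal illustration of the gradient-descent policy-improvement step that PGADP builds on, the following numpy sketch improves a linear state-feedback policy against a fixed quadratic Q-function approximation. The Q-function, data, and learning rate are illustrative assumptions; the paper's actor-critic structure and weighted-residuals implementation are not reproduced.

```python
import numpy as np

# Hypothetical quadratic Q-function approximation Q(x, u) = [x u] H [x u]^T,
# e.g. identified from input/state data; H is an assumption, not taken from the paper.
H = np.array([[2.0, 0.3],
              [0.3, 1.0]])

def Q(x, u):
    z = np.array([x, u])
    return z @ H @ z

def dQ_du(x, u):
    # analytic derivative of the quadratic form with respect to the control input u
    return 2.0 * (H[1, 0] * x + H[1, 1] * u)

rng = np.random.default_rng(0)
states = rng.normal(size=200)            # recorded states standing in for offline data

theta, alpha = 0.5, 0.05                 # linear policy u = theta * x, learning rate
q_before = np.mean([Q(x, theta * x) for x in states])

for _ in range(100):                     # gradient-descent policy improvement
    grad = np.mean([dQ_du(x, theta * x) * x for x in states])   # chain rule dQ/dtheta
    theta -= alpha * grad

q_after = np.mean([Q(x, theta * x) for x in states])
print(f"gain: {theta:.3f}  mean Q before: {q_before:.3f}  after: {q_after:.3f}")
```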

  2. Assessing public health policy approaches to level-up the gradient in health inequalities: the Gradient Evaluation Framework.

    Science.gov (United States)

    Davies, J K; Sherriff, N S

    2014-03-01

    This paper seeks to introduce and analyse the development of the Gradient Evaluation Framework (GEF) to facilitate evaluation of policy actions for their current or future use in terms of their 'gradient friendliness'. In particular, this means their potential to level-up the gradient in health inequalities by addressing the social determinants of health and thereby reducing decision-makers' chances of error when developing such policy actions. A qualitative developmental study to produce a policy-based evaluation framework. The scientific basis of GEF was developed using a comprehensive consensus-building process. This process followed an initial narrative review, based on realist review principles, which highlighted the need for production of a dedicated evaluation framework. The consensus-building process included expert workshops, a pretesting phase, and external peer review, together with support from the Gradient project Scientific Advisory Group and all Gradient project partners, including its Project Steering Committee. GEF is presented as a flexible policy tool resulting from a consensus-building process involving experts from 13 European countries. The theoretical foundations which underpin GEF are discussed, together with a range of practical challenges. The importance of systematic evaluation at each stage of the policy development and implementation cycle is highlighted, as well as the socio-political context in which policy actions are located. GEF offers potentially a major contribution to the public health field in the form of a practical, policy-relevant and common frame of reference for the evaluation of public health interventions that aim to level-up the social gradient in health inequalities. Further research, including the need for practical field testing of GEF and the exploration of alternative presentational formats, is recommended. Copyright © 2013 The Royal Society for Public Health. Published by Elsevier Ltd. All rights reserved.

  3. Sound beam manipulation based on temperature gradients

    Energy Technology Data Exchange (ETDEWEB)

    Qian, Feng [Key Laboratory of Modern Acoustics, Institute of Acoustics and School of Physics, Collaborative Innovation Center of Advanced Microstructures, Nanjing University, Nanjing 210093 (China); School of Physics & Electronic Engineering, Changshu Institute of Technology, Changshu 215500 (China); Quan, Li; Liu, Xiaozhou, E-mail: xzliu@nju.edu.cn; Gong, Xiufen [Key Laboratory of Modern Acoustics, Institute of Acoustics and School of Physics, Collaborative Innovation Center of Advanced Microstructures, Nanjing University, Nanjing 210093 (China)

    2015-10-28

    Previous research with temperature gradients has shown the feasibility of controlling airborne sound propagation. Here, we present a temperature-gradient-based airborne sound manipulation scheme: a cylindrical acoustic omnidirectional absorber (AOA). The proposed AOA has high absorption performance and can almost completely absorb the incident wave. Geometric acoustics is used to obtain the refractive index distributions for different radii, which are then utilized to deduce the desired temperature gradients. Since resonant units are not applied in the scheme, its working bandwidth is expected to be broad. The scheme is temperature-tuned and easy to realize, which is of potential interest to fields such as noise control and acoustic cloaking.

  4. Gradient based filtering of digital elevation models

    DEFF Research Database (Denmark)

    Knudsen, Thomas; Andersen, Rune Carbuhn

    We present a filtering method for digital terrain models (DTMs). The method is based on mathematical morphological filtering within gradient (slope) defined domains. The intention with the filtering procedure is to improve the cartographic quality of height contours generated from a DTM based...

  5. Policy Gradient SMDP for Resource Allocation and Routing in Integrated Services Networks

    Science.gov (United States)

    Vien, Ngo Anh; Viet, Nguyen Hoang; Lee, Seunggwan; Chung, Taechoong

    In this paper, we solve the call admission control (CAC) and routing problem in an integrated network that handles several classes of calls of different values and with different resource requirements. The problem of maximizing the average reward (or cost) of admitted calls per unit time is naturally formulated as a semi-Markov Decision Process (SMDP) problem, but is too complex to allow for an exact solution. Thus, in this paper, a policy gradient algorithm, together with a decomposition approach, is proposed to find the dynamic (state-dependent) optimal CAC and routing policy within a parameterized policy space. To implement this gradient algorithm, we approximate the gradient of the average reward. We then present a simulation-based algorithm (called the GSMDP algorithm) to estimate the approximate gradient of the average reward, using only a single sample path of the underlying Markov chain for the SMDP of the CAC and routing problem. The algorithm enhances performance in terms of convergence speed, rejection probability, robustness to changing arrival statistics, and overall received average revenue. Experimental simulations compare our method's performance with existing methods and show its robustness.
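
    The central ingredient is the simulation-based gradient estimate obtained from a single sample path. The sketch below shows a generic likelihood-ratio (GPOMDP-style) estimator for a softmax admit/reject policy on a toy one-dimensional state; the reward model, policy parameterization, and step sizes are assumptions and do not reproduce the paper's SMDP decomposition.

```python
import numpy as np

rng = np.random.default_rng(1)

def softmax_policy(theta, state):
    """Probability vector over [reject, admit] for a scalar state feature (assumed)."""
    logits = np.array([0.0, theta * state])
    p = np.exp(logits - logits.max())
    return p / p.sum()

def score(theta, state, action):
    """d/dtheta of log pi(action | state) for the softmax policy above."""
    p = softmax_policy(theta, state)
    return (action - p[1]) * state                  # action: 1 = admit, 0 = reject

def estimate_gradient(theta, steps=20_000, beta=0.9):
    """GPOMDP-style average-reward gradient estimate from a single sample path."""
    z = grad_est = avg_reward = 0.0
    state = rng.uniform(0.0, 1.0)                   # e.g. normalized free capacity
    for t in range(1, steps + 1):
        action = int(rng.random() < softmax_policy(theta, state)[1])
        reward = action * (state - 0.1)             # toy revenue minus resource cost
        z = beta * z + score(theta, state, action)  # eligibility trace
        grad_est += (reward * z - grad_est) / t     # running averages
        avg_reward += (reward - avg_reward) / t
        state = rng.uniform(0.0, 1.0)               # i.i.d. stand-in for the network state
    return grad_est, avg_reward

theta = 0.0
for _ in range(10):                                 # gradient ascent on the average reward
    g, rho = estimate_gradient(theta)
    theta += 0.5 * g
print(f"theta = {theta:.2f}, estimated average reward = {rho:.3f}")
```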

  6. Degraded character recognition based on gradient pattern

    Science.gov (United States)

    Babu, D. R. Ramesh; Ravishankar, M.; Kumar, Manish; Wadera, Kevin; Raj, Aakash

    2010-02-01

    Degraded character recognition is a challenging problem in the field of Optical Character Recognition (OCR). The performance of an OCR system depends upon the print quality of the input documents. Many OCRs have been designed that correctly identify finely printed documents, but very little reported work addresses the recognition of degraded documents. The efficiency of an OCR system decreases if the input image is degraded. In this paper, a novel approach based on gradient patterns for recognizing degraded printed characters is proposed. The approach makes use of the gradient pattern of an individual character for recognition. Experiments were conducted on character images that were either digitally written or degraded characters extracted from historical documents, and the results are found to be satisfactory.

  7. Batch Policy Gradient Methods for Improving Neural Conversation Models

    OpenAIRE

    Kandasamy, Kirthevasan; Bachrach, Yoram; Tomioka, Ryota; Tarlow, Daniel; Carter, David

    2017-01-01

    We study reinforcement learning of chatbots with recurrent neural network architectures when the rewards are noisy and expensive to obtain. For instance, a chatbot used in automated customer service support can be scored by quality assurance agents, but this process can be expensive, time consuming and noisy. Previous reinforcement learning work for natural language processing uses on-policy updates and/or is designed for on-line learning settings. We demonstrate empirically that such strateg...

  8. Gradient waveform pre-emphasis based on the gradient system transfer function.

    Science.gov (United States)

    Stich, Manuel; Wech, Tobias; Slawig, Anne; Ringler, Ralf; Dewdney, Andrew; Greiser, Andreas; Ruyters, Gudrun; Bley, Thorsten A; Köstler, Herbert

    2018-02-25

    The gradient system transfer function (GSTF) has been used to describe the distorted k-space trajectory for image reconstruction. The purpose of this work was to use the GSTF to determine the pre-emphasis for an undistorted gradient output and intended k-space trajectory. The GSTF of the MR system was determined using only standard MR hardware without special equipment such as field probes or a field camera. The GSTF was used for trajectory prediction in image reconstruction and for a gradient waveform pre-emphasis. As test sequences, a gradient-echo sequence with phase-encoding gradient modulation and a gradient-echo sequence with a spiral read-out trajectory were implemented and subsequently applied on a structural phantom and in vivo head measurements. Image artifacts were successfully suppressed by applying the GSTF-based pre-emphasis. Equivalent results are achieved with images acquired using GSTF-based post-correction of the trajectory as a part of image reconstruction. In contrast, the pre-emphasis approach allows reconstruction using the initially intended trajectory. The artifact suppression shown for two sequences demonstrates that the GSTF can serve for a novel pre-emphasis. A pre-emphasis based on the GSTF information can be applied to any arbitrary sequence type. © 2018 International Society for Magnetic Resonance in Medicine.
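
    The pre-emphasis step itself amounts to dividing the spectrum of the intended gradient waveform by the measured GSTF, so that the distorted gradient chain outputs the intended waveform. A hedged numpy sketch with a made-up first-order low-pass GSTF (the actual GSTF is measured on the scanner, and practical implementations may regularize the division differently):

```python
import numpy as np

dt = 10e-6                                    # 10 us gradient raster time (assumed)
t = np.arange(0, 5e-3, dt)
g_nominal = np.clip(t / 1e-3, 0.0, 1.0)       # nominal ramp-and-plateau waveform (a.u.)

f = np.fft.rfftfreq(t.size, dt)
gstf = 1.0 / (1.0 + 1j * f / 3e3)             # hypothetical first-order low-pass GSTF

# Pre-emphasis: divide the nominal waveform by the GSTF (regularized) in frequency space
eps = 1e-4
G = np.fft.rfft(g_nominal)
g_pre = np.fft.irfft(G * np.conj(gstf) / (np.abs(gstf) ** 2 + eps), n=t.size)

# Simulated gradient chain applied to the pre-emphasized input reproduces the nominal shape
g_out = np.fft.irfft(np.fft.rfft(g_pre) * gstf, n=t.size)
print("max deviation from the intended waveform:", np.abs(g_out - g_nominal).max())
```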

  9. Scattering-angle based filtering of the waveform inversion gradients

    KAUST Repository

    Alkhalifah, Tariq Ali

    2014-01-01

    Full waveform inversion (FWI) requires a hierarchical approach to maneuver the complex non-linearity associated with the problem of velocity update. In anisotropic media, the non-linearity becomes far more complex with the potential trade-off between the parameters of the multiparameter description of the model. A gradient filter helps us in accessing the parts of the gradient that are suitable to combat the potential non-linearity and parameter trade-off. The filter is based on representing the gradient in the time-lag normalized domain, in which the low scattering-angle components of the gradient update are initially muted out in the FWI implementation, in what we may refer to as a scattering angle continuation process. The result is a low-wavenumber update dominated by the transmission part of the update gradient. In this case, even 10 Hz data can produce vertically near-zero wavenumber updates suitable for a background correction of the model. Relaxing the filtering at a later stage in the FWI implementation allows smaller scattering angles to contribute higher-resolution information to the model. The benefits of the extended-domain-based filtering of the gradient lie not only in its ability to provide low-wavenumber gradients guided by the scattering angle, but also in its potential to provide gradients free of unphysical energy that may correspond to unrealistic scattering angles.

  10. Scattering-angle based filtering of the waveform inversion gradients

    KAUST Repository

    Alkhalifah, Tariq Ali

    2014-11-22

    Full waveform inversion (FWI) requires a hierarchical approach to maneuver the complex non-linearity associated with the problem of velocity update. In anisotropic media, the non-linearity becomes far more complex with the potential trade-off between the parameters of the multiparameter description of the model. A gradient filter helps us in accessing the parts of the gradient that are suitable to combat the potential non-linearity and parameter trade-off. The filter is based on representing the gradient in the time-lag normalized domain, in which the low scattering-angle components of the gradient update are initially muted out in the FWI implementation, in what we may refer to as a scattering angle continuation process. The result is a low-wavenumber update dominated by the transmission part of the update gradient. In this case, even 10 Hz data can produce vertically near-zero wavenumber updates suitable for a background correction of the model. Relaxing the filtering at a later stage in the FWI implementation allows smaller scattering angles to contribute higher-resolution information to the model. The benefits of the extended-domain-based filtering of the gradient lie not only in its ability to provide low-wavenumber gradients guided by the scattering angle, but also in its potential to provide gradients free of unphysical energy that may correspond to unrealistic scattering angles.

  11. Scattering angle base filtering of the inversion gradients

    KAUST Repository

    Alkhalifah, Tariq Ali

    2014-01-01

    Full waveform inversion (FWI) requires a hierarchical approach based on the availability of low frequencies to maneuver the complex nonlinearity associated with the problem of velocity inversion. I develop a model gradient filter to help us access the parts of the gradient more suitable to combat this potential nonlinearity. The filter is based on representing the gradient in the time-lag normalized domain, in which low scattering angles of the gradient update are initially muted. The result is long-wavelength updates controlled by the ray component of the wavefield. In this case, even 10 Hz data can produce near-zero wavenumber updates suitable for a background correction of the model. Allowing smaller scattering angles to contribute provides higher-resolution information to the model.

  12. A density gradient theory based method for surface tension calculations

    DEFF Research Database (Denmark)

    Liang, Xiaodong; Michelsen, Michael Locht; Kontogeorgis, Georgios

    2016-01-01

    The density gradient theory has become a widely used framework for calculating surface tension, within which the same equation of state is used for the interface and bulk phases, because it is a theoretically sound, consistent and computationally affordable approach. Based on the observation that the optimal density path from the geometric-mean density gradient theory passes the saddle point of the tangent plane distance to the bulk phases, we propose to estimate surface tension with an approximate density path profile that goes through this saddle point. The linear density gradient theory, which assumes linearly distributed densities between the two bulk phases, has also been investigated. Numerical problems do not occur with these density path profiles. These two approximation methods, together with the full density gradient theory, have been used to calculate the surface tension of various...

  13. Evidence-based policy: implications for nursing and policy involvement.

    Science.gov (United States)

    Hewison, Alistair

    2008-11-01

    Evidence-based policy making is espoused as a central feature of government in the United Kingdom. However, an expectation that this will improve the quality of policy produced and provide a path to increased involvement of nurses in the policy process is misplaced. The purpose of this article is to demonstrate that the emphasis on evidence-based policy is problematic and cannot be regarded as a "new model" of policy making. Also, it could deflect attention from more practical approaches to policy involvement on the part of nurses. Policy development activities, acquisition of skills in policy analysis, and other forms of involvement are needed if nurses are to move along the continuum from policy literacy, through policy acumen, to policy competence. This involves taking a critical stance on the notion of evidence-based policy.

  14. Pixel-based OPC optimization based on conjugate gradients.

    Science.gov (United States)

    Ma, Xu; Arce, Gonzalo R

    2011-01-31

    Optical proximity correction (OPC) methods are resolution enhancement techniques (RET) used extensively in the semiconductor industry to improve the resolution and pattern fidelity of optical lithography. In pixel-based OPC (PBOPC), the mask is divided into small pixels, each of which is modified during the optimization process. Two critical issues in PBOPC are the required computational complexity of the optimization process, and the manufacturability of the optimized mask. Most current OPC optimization methods apply the steepest descent (SD) algorithm to improve image fidelity augmented by regularization penalties to reduce the complexity of the mask. Although simple to implement, the SD algorithm converges slowly. The existing regularization penalties, however, fall short in meeting the mask rule check (MRC) requirements often used in semiconductor manufacturing. This paper focuses on developing OPC optimization algorithms based on the conjugate gradient (CG) method which exhibits much faster convergence than the SD algorithm. The imaging formation process is represented by the Fourier series expansion model which approximates the partially coherent system as a sum of coherent systems. In order to obtain more desirable manufacturability properties of the mask pattern, a MRC penalty is proposed to enlarge the linear size of the sub-resolution assistant features (SRAFs), as well as the distances between the SRAFs and the main body of the mask. Finally, a projection method is developed to further reduce the complexity of the optimized mask pattern.
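
    The speedup over steepest descent comes from conjugate-gradient search directions. The following is a generic nonlinear conjugate gradient (Fletcher-Reeves) sketch on a stand-in least-squares cost; the paper's partially coherent imaging model, MRC penalty, and projection step are not included.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(80, 50))      # stand-in linear forward model
b = rng.normal(size=80)            # stand-in target ("desired image")

def cost(m):
    # stand-in smooth cost; in PBOPC this is image fidelity plus regularization/MRC penalties
    return 0.5 * np.sum((A @ m - b) ** 2)

def grad(m):
    return A.T @ (A @ m - b)

m = np.zeros(50)
g = grad(m)
d = -g                                           # first search direction: steepest descent
for k in range(200):
    if np.linalg.norm(g) < 1e-8:
        break
    if g @ d >= 0:                               # safeguard: restart with steepest descent
        d = -g
    step = 1.0
    while cost(m + step * d) > cost(m) + 1e-4 * step * (g @ d):
        step *= 0.5                              # backtracking (Armijo) line search
    m = m + step * d
    g_new = grad(m)
    beta = (g_new @ g_new) / (g @ g)             # Fletcher-Reeves coefficient
    d = -g_new + beta * d
    g = g_new

print("iterations:", k + 1, " final gradient norm:", np.linalg.norm(grad(m)))
```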

  15. Regularized image denoising based on spectral gradient optimization

    International Nuclear Information System (INIS)

    Lukić, Tibor; Lindblad, Joakim; Sladoje, Nataša

    2011-01-01

    Image restoration methods, such as denoising, deblurring, inpainting, etc, are often based on the minimization of an appropriately defined energy function. We consider energy functions for image denoising which combine a quadratic data-fidelity term and a regularization term, where the properties of the latter are determined by the potential function used. Many potential functions have been suggested for different purposes in the literature. We compare the denoising performance achieved by ten different potential functions. Several methods for efficient minimization of regularized energy functions exist. Most are only applicable to particular choices of potential functions, however. To enable a comparison of all the observed potential functions, we propose to minimize the objective function using a spectral gradient approach; spectral gradient methods put very weak restrictions on the potential function used. We present and evaluate the performance of one spectral conjugate gradient and one cyclic spectral gradient algorithm, and conclude from experiments that both are well suited for the task. We compare the performance with three total variation-based state-of-the-art methods for image denoising. From the empirical evaluation, we conclude that denoising using the Huber potential (for images degraded by higher levels of noise; signal-to-noise ratio below 10 dB) and the Geman and McClure potential (for less noisy images), in combination with the spectral conjugate gradient minimization algorithm, shows the overall best performance.
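
    A hedged sketch of the minimization strategy: a Huber-regularized denoising energy minimized with a Barzilai-Borwein (spectral) gradient step. The image, noise level, and parameter values are assumptions, and the cyclic and conjugate spectral variants evaluated in the paper are not reproduced.

```python
import numpy as np

def huber_grad(t, delta=0.2):
    """Derivative of the Huber potential."""
    return np.where(np.abs(t) <= delta, t, delta * np.sign(t))

def grad_x(u): return np.roll(u, -1, axis=1) - u        # forward differences (periodic)
def grad_y(u): return np.roll(u, -1, axis=0) - u
def div(px, py):                                        # adjoint pairing with grad_x/grad_y
    return (px - np.roll(px, 1, axis=1)) + (py - np.roll(py, 1, axis=0))

def energy_grad(u, f, lam):
    # gradient of 0.5*||u - f||^2 + lam * sum Huber(grad u)
    return (u - f) - lam * div(huber_grad(grad_x(u)), huber_grad(grad_y(u)))

rng = np.random.default_rng(0)
clean = np.zeros((64, 64)); clean[16:48, 16:48] = 1.0
noisy = clean + 0.2 * rng.normal(size=clean.shape)

u, lam, step = noisy.copy(), 1.0, 0.2
for _ in range(100):
    g = energy_grad(u, noisy, lam)
    u_new = u - step * g
    s = (u_new - u).ravel()
    y = (energy_grad(u_new, noisy, lam) - g).ravel()
    step = (s @ s) / (s @ y + 1e-12)                    # Barzilai-Borwein (spectral) step
    u = u_new

print("RMSE noisy:", np.sqrt(np.mean((noisy - clean) ** 2)),
      " denoised:", np.sqrt(np.mean((u - clean) ** 2)))
```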

  16. Gradient-based adaptation of general gaussian kernels.

    Science.gov (United States)

    Glasmachers, Tobias; Igel, Christian

    2005-10-01

    Gradient-based optimization of Gaussian kernel functions is considered. The gradient for the adaptation of scaling and rotation of the input space is computed to achieve invariance against linear transformations. This is done by using the exponential map as a parameterization of the kernel parameter manifold. By restricting the optimization to a constant-trace subspace, the kernel size can be controlled. This is, for example, useful to prevent overfitting when minimizing radius-margin generalization performance measures. The concepts are demonstrated by training hard-margin support vector machines on toy data.

  17. Evidence-based policy

    DEFF Research Database (Denmark)

    Vohnsen, Nina Holm

    2013-01-01

    -makers and the research community (e.g. Boden & Epstein 2006; House of Commons 2006; Cartwright et al 2009; Rod 2010; Vohnsen 2011). This article intends to draw out some general pitfalls in the curious meeting of science and politics by focusing on a particular attempt to make evidence-based legislation in Denmark (for...

  18. A study of gradient strengthening based on a finite-deformation gradient crystal-plasticity model

    Science.gov (United States)

    Pouriayevali, Habib; Xu, Bai-Xiang

    2017-11-01

    A comprehensive study of a finite-deformation gradient crystal-plasticity model which has been derived based on Gurtin's framework (Int J Plast 24:702-725, 2008) is carried out here. This systematic investigation of the different roles of the governing components of the model demonstrates the strength of this framework in the prediction of a wide range of hardening behaviors as well as rate-dependent and scale-variation responses in a single crystal. The model is represented in the reference configuration for the purpose of numerical implementation and then implemented in the FEM software ABAQUS via a user-defined subroutine (UEL). Furthermore, a function of the accumulation rates of dislocations is employed and viewed as a measure of the formation of short-range interactions. Our simulation results reveal that the dissipative gradient strengthening can be identified as a source of isotropic-hardening behavior, which may represent the effect of irrecoverable work introduced by Gurtin and Ohno (J Mech Phys Solids 59:320-343, 2011). The variation of size dependency at different magnitudes of a rate-sensitivity parameter is also discussed. Moreover, the effect of a distinctive feature of the model, which accounts for the distortion of the crystal lattice in the reference configuration, is reported in this study for the first time. In addition, plastic flows in predefined slip systems and the expansion of the accumulation of GNDs are distinctly observed at varying scales and under different loading conditions.

  19. Multimodal image registration based on binary gradient angle descriptor.

    Science.gov (United States)

    Jiang, Dongsheng; Shi, Yonghong; Yao, Demin; Fan, Yifeng; Wang, Manning; Song, Zhijian

    2017-12-01

    Multimodal image registration plays an important role in image-guided interventions/therapy and atlas building, and it is still a challenging task due to the complex intensity variations in different modalities. The paper addresses the problem and proposes a simple, compact, fast and generally applicable modality-independent binary gradient angle descriptor (BGA) based on the rationale of gradient orientation alignment. The BGA can be easily calculated at each voxel by coding the quadrant in which a local gradient vector falls, and it has an extremely low computational complexity, requiring only three convolutions, two multiplication operations and two comparison operations. Meanwhile, the binarized encoding of the gradient orientation makes the BGA more resistant to image degradations compared with conventional gradient orientation methods. The BGA can extract similar feature descriptors for different modalities and enable the use of simple similarity measures, which makes it applicable within a wide range of optimization frameworks. The results for pairwise multimodal and monomodal registrations between various images (T1, T2, PD, T1c, Flair) consistently show that the BGA significantly outperforms localized mutual information. The experimental results also confirm that the BGA can be a reliable alternative to the sum of absolute difference in monomodal image registration. The BGA can also achieve an accuracy of [Formula: see text], similar to that of the SSC, for the deformable registration of inhale and exhale CT scans. Specifically, for the highly challenging deformable registration of preoperative MRI and 3D intraoperative ultrasound images, the BGA achieves a similar registration accuracy of [Formula: see text] compared with state-of-the-art approaches, with a computation time of 18.3 s per case. The BGA improves the registration performance in terms of both accuracy and time efficiency. With further acceleration, the framework has the potential for
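
    Under a plain reading of the descriptor, each voxel stores a quadrant code of its local gradient direction, after which a simple agreement measure between the coded images can drive registration. A 2D numpy sketch of that idea follows (the published BGA is 3D, is computed with three convolutions, and is normally paired with a deformable optimizer rather than the exhaustive shift search used here):

```python
import numpy as np

def bga_2d(img):
    """2-bit quadrant code of the local gradient direction (2D sketch)."""
    gy, gx = np.gradient(img.astype(float))
    return (gx > 0).astype(np.uint8) | ((gy > 0).astype(np.uint8) << 1)

def agreement(code_a, code_b):
    """Fraction of pixels whose quadrant codes agree (simple BGA-style measure)."""
    return np.mean(code_a == code_b)

rng = np.random.default_rng(0)
fixed = rng.normal(size=(128, 128)).cumsum(axis=0).cumsum(axis=1)   # smooth test image
moving = np.roll(fixed, shift=3, axis=1) + 0.1 * rng.normal(size=fixed.shape)

# Exhaustive 1-D search over horizontal shifts as a stand-in for a real optimizer
scores = {s: agreement(bga_2d(fixed), bga_2d(np.roll(moving, -s, axis=1)))
          for s in range(-8, 9)}
print("best shift:", max(scores, key=scores.get))
```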

  20. Gradient-based methods for production optimization of oil reservoirs

    Energy Technology Data Exchange (ETDEWEB)

    Suwartadi, Eka

    2012-07-01

    Production optimization for water flooding in the secondary phase of oil recovery is the main topic of this thesis. The emphasis has been on numerical optimization algorithms, tested on case examples using simple hypothetical oil reservoirs. Gradient-based optimization, which utilizes adjoint-based gradient computation, is used to solve the optimization problems. The first contribution of this thesis is to address output-constraint problems. These kinds of constraints are natural in production optimization. Limiting total water production and water cut at producer wells are examples of such constraints. To maintain the feasibility of an optimization solution, a Lagrangian barrier method is proposed to handle the output constraints. This method incorporates the output constraints into the objective function, thus avoiding additional computations for the constraint gradients (Jacobian) which may be detrimental to the efficiency of the adjoint method. The second contribution is the study of the use of second-order adjoint-gradient information for production optimization. In order to speed up the convergence rate of the optimization, one usually uses quasi-Newton approaches such as the BFGS and SR1 methods. These methods compute an approximation of the inverse of the Hessian matrix given the first-order gradient from the adjoint method. The methods may not give significant speedup if the Hessian is ill-conditioned. We have developed and implemented the Hessian matrix computation using the adjoint method. Due to the high computational cost of the Newton method itself, we instead compute the Hessian-times-vector product, which is used in a conjugate gradient algorithm. Finally, the last contribution of this thesis is on surrogate optimization for water flooding in the presence of the output constraints. Two kinds of model order reduction techniques are applied to build surrogate models. These are proper orthogonal decomposition (POD) and the discrete empirical interpolation method (DEIM).
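
    The second-order idea rests on the fact that a Newton-type method never needs the full Hessian, only Hessian-times-vector products fed to an inner conjugate-gradient solve. A generic truncated-Newton sketch in numpy follows, with the Hessian-vector product approximated by differencing gradients of a stand-in objective (the thesis computes both quantities with the adjoint method on a reservoir simulator):

```python
import numpy as np

def objective(x):
    # Stand-in smooth objective (Rosenbrock); in the thesis this is the waterflooding objective.
    return np.sum(100.0 * (x[1:] - x[:-1] ** 2) ** 2 + (1.0 - x[:-1]) ** 2)

def gradient(x, h=1e-6):
    g = np.zeros_like(x)
    for i in range(x.size):                       # finite differences standing in for the adjoint
        e = np.zeros_like(x); e[i] = h
        g[i] = (objective(x + e) - objective(x - e)) / (2 * h)
    return g

def hess_vec(x, v, h=1e-5):
    """Hessian-times-vector product obtained by differencing gradients."""
    return (gradient(x + h * v) - gradient(x - h * v)) / (2 * h)

def newton_cg_direction(x, cg_iters=20, tol=1e-10):
    """Approximately solve H p = -g with conjugate gradients (truncated Newton)."""
    g = gradient(x)
    p, r = np.zeros_like(x), -g.copy()
    d = r.copy()
    for _ in range(cg_iters):
        Hd = hess_vec(x, d)
        dHd = d @ Hd
        if dHd <= 0:                              # negative curvature: stop with what we have
            return p if p.any() else -g
        alpha = (r @ r) / dHd
        p = p + alpha * d
        r_new = r - alpha * Hd
        if r_new @ r_new < tol:
            break
        d = r_new + ((r_new @ r_new) / (r @ r)) * d
        r = r_new
    return p

x = np.zeros(8)
for _ in range(50):
    p = newton_cg_direction(x)
    t = 1.0
    while objective(x + t * p) > objective(x) and t > 1e-8:   # crude backtracking safeguard
        t *= 0.5
    x = x + t * p

print("final gradient norm:", np.linalg.norm(gradient(x)))
```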

  1. Gradient-based model calibration with proxy-model assistance

    Science.gov (United States)

    Burrows, Wesley; Doherty, John

    2016-02-01

    Use of a proxy model in gradient-based calibration and uncertainty analysis of a complex groundwater model with large run times and problematic numerical behaviour is described. The methodology is general, and can be used with models of all types. The proxy model is based on a series of analytical functions that link all model outputs used in the calibration process to all parameters requiring estimation. In enforcing history-matching constraints during the calibration and post-calibration uncertainty analysis processes, the proxy model is run for the purposes of populating the Jacobian matrix, while the original model is run when testing parameter upgrades; the latter process is readily parallelized. Use of a proxy model in this fashion dramatically reduces the computational burden of complex model calibration and uncertainty analysis. At the same time, the effect of model numerical misbehaviour on the calculation of local gradients is mitigated, thus allowing access to the benefits of gradient-based analysis where lack of integrity in finite-difference derivatives calculation would otherwise have impeded such access. Construction of a proxy model, and its subsequent use in calibration of a complex model, and in analysing the uncertainties of predictions made by that model, is implemented in the PEST suite.

  2. Gradient descent for robust kernel-based regression

    Science.gov (United States)

    Guo, Zheng-Chu; Hu, Ting; Shi, Lei

    2018-06-01

    In this paper, we study the gradient descent algorithm generated by a robust loss function over a reproducing kernel Hilbert space (RKHS). The loss function is defined by a windowing function G and a scale parameter σ, which can include a wide range of commonly used robust losses for regression. There is still a gap between the theoretical analysis and the optimization process of empirical risk minimization based on such losses: the estimator needs to be globally optimal in the theoretical analysis while the optimization method cannot ensure the global optimality of its solutions. In this paper, we aim to fill this gap by developing a novel theoretical analysis of the performance of estimators generated by the gradient descent algorithm. We demonstrate that with an appropriately chosen scale parameter σ, the gradient update with early stopping rules can approximate the regression function. Our error analysis leads to convergence in the standard L2 norm and the strong RKHS norm, both of which are optimal in the minimax sense. We show that the scale parameter σ plays an important role in providing robustness as well as fast convergence. Numerical experiments implemented on synthetic examples and a real data set also support our theoretical results.
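
    A minimal numpy sketch of the analyzed algorithm: gradient descent on a kernel expansion under a robust, Gaussian-windowed (Welsch-type) loss, stopped after a fixed number of iterations. The windowing function, scale parameter, step size, and data are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def rbf_kernel(a, b, width=0.2):
    return np.exp(-((a[:, None] - b[None, :]) ** 2) / (2 * width ** 2))

# Synthetic 1-D regression data with a few gross outliers
n = 100
x = rng.uniform(-1, 1, n)
y = np.sin(np.pi * x) + 0.1 * rng.normal(size=n)
y[rng.choice(n, 5, replace=False)] += 3.0

K = rbf_kernel(x, x)
sigma, eta, T = 1.5, 2.0, 500            # scale parameter, step size, early-stopping time

# Robust Welsch-type loss: ell(r) = (sigma^2/2)*(1 - exp(-r^2/sigma^2)), ell'(r) = r*exp(-r^2/sigma^2)
alpha = np.zeros(n)                      # f(.) = sum_j alpha_j k(., x_j)
for t in range(T):
    r = K @ alpha - y                    # residuals of the current iterate
    alpha -= (eta / n) * r * np.exp(-(r ** 2) / sigma ** 2)   # functional gradient step

x_grid = np.linspace(-1, 1, 200)
f_grid = rbf_kernel(x_grid, x) @ alpha
print("max deviation from sin(pi x):", np.abs(f_grid - np.sin(np.pi * x_grid)).max())
```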

  3. Large Airborne Full Tensor Gradient Data Inversion Based on a Non-Monotone Gradient Method

    Science.gov (United States)

    Sun, Yong; Meng, Zhaohai; Li, Fengting

    2018-03-01

    Following the development of gravity gradiometer instrument technology, full tensor gravity (FTG) data can be acquired on airborne and marine platforms. Large-scale geophysical data can be obtained using these methods, placing such data sets in the "big data" category. Therefore, a fast and effective inversion method is developed to solve the large-scale FTG data inversion problem. Many algorithms are available to accelerate FTG data inversion, such as the conjugate gradient method. However, the conventional conjugate gradient method takes a long time to complete data processing. Thus, a fast and effective iterative algorithm is necessary to improve the utilization of FTG data. Generally, inversion processing is formulated by incorporating regularizing constraints, followed by the introduction of a non-monotone gradient-descent method to accelerate the convergence rate of FTG data inversion. Compared with the conventional gradient method, the steepest-descent gradient algorithm, and the conjugate gradient algorithm, the non-monotone iterative gradient-descent algorithm has clear advantages. Simulated and field FTG data were used to show the application value of this new fast inversion method.
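
    The acceleration comes from a non-monotone line search: the sufficient-decrease test compares against the maximum of the last few objective values rather than the latest one, which tolerates occasional increases and admits larger steps. A generic numpy sketch of that rule (Grippo-style), applied to a stand-in regularized least-squares inversion rather than the authors' FTG forward operator:

```python
import numpy as np

def nonmonotone_gradient_descent(f, grad, x0, memory=5, max_iter=200,
                                 c=1e-4, shrink=0.5):
    """Gradient descent with a Grippo-style non-monotone Armijo line search."""
    x = x0.copy()
    history = [f(x)]                          # recent objective values
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) < 1e-8:
            break
        t = 1.0
        f_ref = max(history[-memory:])        # compare against the recent maximum
        while f(x - t * g) > f_ref - c * t * (g @ g):
            t *= shrink
        x = x - t * g
        history.append(f(x))
    return x

# Stand-in regularized linear inverse problem (the FTG kernel is much larger)
rng = np.random.default_rng(0)
G = rng.normal(size=(200, 80))
m_true = np.zeros(80); m_true[20:30] = 1.0
d = G @ m_true + 0.01 * rng.normal(size=200)
lam = 1e-2

f = lambda m: 0.5 * np.sum((G @ m - d) ** 2) + 0.5 * lam * np.sum(m ** 2)
grad = lambda m: G.T @ (G @ m - d) + lam * m

m_est = nonmonotone_gradient_descent(f, grad, np.zeros(80))
print("model misfit:", np.linalg.norm(m_est - m_true))
```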

  4. Environment based innovation: policy questions

    Directory of Open Access Journals (Sweden)

    Mario Rui Silva

    2009-12-01

    Full Text Available Natural resources and physical cultural resources, referred to in this paper as “Environmental Resources”, can be important assets for regional competitiveness and innovation. In recent years, these types of assets have been increasingly taken into consideration in the design and implementation of regional development strategies, as a consequence of their potential role as a source of differentiation and of new competitive advantages. However, in contrast to environmental policies, which usually focus on the protection of the environment, innovation policies and their instruments are largely shaped by, and geared towards, knowledge-based innovation. In this paper, we discuss the role played by environmental resources in the context of regional innovation policies. We begin by discussing the relationship between environmental resources and regional development, and by emphasizing some contrasting views with regard to the function of environmental resources in regional development. Then, we address the relationship between regional competitive advantages and innovation strategies. The specific issues and problems that arise whenever the aim is to attain competitive advantages through the valorisation of environmental resources constitute the core of section III. In that section, we highlight the specific characteristics of environmental resources and we discuss the applicability of the “natural resource curse” argument to the dynamics based on the valorisation of environmental resources. The reasons that justify public intervention as well as the difficulties concerning the adequate level of intervention (local/regional/national) are also examined. The paper ends with some conclusions and policy implications.

  5. Time-domain full waveform inversion using the gradient preconditioning based on transmitted waves energy

    KAUST Repository

    Zhang, Xiao-bo; Tan, Jun; Song, Peng; Li, Jin-shan; Xia, Dong-ming; Liu, Zhao-lun

    2017-01-01

    The gradient preconditioning approach based on seismic wave energy can effectively avoid the huge storage consumption in the gradient preconditioning algorithms based on Hessian matrices in time-domain full waveform inversion (FWI), but the accuracy

  6. Vandenberg Air Force Base Pressure Gradient Wind Study

    Science.gov (United States)

    Shafer, Jaclyn A.

    2013-01-01

    Warning category winds can adversely impact day-to-day space lift operations at Vandenberg Air Force Base (VAFB) in California. NASA's Launch Services Program and other programs at VAFB use wind forecasts issued by the 30 Operational Support Squadron Weather Flight (30 OSSWF) to determine if they need to limit activities or protect property such as a launch vehicle. The 30 OSSWF tasked the AMU to develop an automated Excel graphical user interface that includes pressure gradient thresholds between specific observing stations under different synoptic regimes to aid forecasters when issuing wind warnings. This required the AMU to determine if relationships between the variables existed.

  7. Gradient Evolution-based Support Vector Machine Algorithm for Classification

    Science.gov (United States)

    Zulvia, Ferani E.; Kuo, R. J.

    2018-03-01

    This paper proposes a classification algorithm based on support vector machine (SVM) and gradient evolution (GE) algorithms. The SVM algorithm has been widely used in classification. However, its result is significantly influenced by its parameters. Therefore, this paper aims to propose an improvement of the SVM algorithm which can find the best SVM parameters automatically. The proposed algorithm employs a GE algorithm to automatically determine the SVM parameters. The GE algorithm serves as a global optimizer, finding the best parameters to be used by the SVM algorithm. The proposed GE-SVM algorithm is verified using several benchmark datasets and compared with other metaheuristic-based SVM algorithms. The experimental results show that the proposed GE-SVM algorithm obtains better results than the other algorithms tested in this paper.

  8. Segmentation of DTI based on tensorial morphological gradient

    Science.gov (United States)

    Rittner, Leticia; de Alencar Lotufo, Roberto

    2009-02-01

    This paper presents a segmentation technique for diffusion tensor imaging (DTI). This technique is based on a tensorial morphological gradient (TMG), defined as the maximum dissimilarity over the neighborhood. Once this gradient is computed, the tensorial segmentation problem becomes a scalar one, which can be solved by conventional techniques, such as the watershed transform and thresholding. Similarity functions, namely the dot product, the tensorial dot product, the J-divergence and the Frobenius norm, were compared in order to understand their differences regarding the measurement of tensor dissimilarities. The study showed that the dot product and the tensorial dot product turned out to be inappropriate for computation of the TMG, while the Frobenius norm and the J-divergence were both capable of measuring tensor dissimilarities, despite the distortion of the Frobenius norm, since it is not an affine-invariant measure. In order to validate the TMG as a solution for DTI segmentation, its computation was performed using distinct similarity measures and structuring elements. TMG results were also compared to fractional anisotropy. Finally, synthetic and real DTI were used in the method validation. Experiments showed that the TMG enables the segmentation of DTI by the watershed transform or by a simple choice of a threshold. The strength of the proposed segmentation method is its simplicity and robustness, consequences of the TMG computation. It enables the use not only of well-known algorithms and tools from mathematical morphology, but also of any other segmentation method to segment DTI, since the TMG computation transforms tensorial images into scalar ones.
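
    A small numpy sketch of the TMG idea: at each pixel, take the largest Frobenius-norm dissimilarity between tensors inside a structuring element, which converts the tensor field into a scalar gradient image that ordinary thresholding or watershed can segment. The synthetic 2D field, 3x3 window, and pairwise-maximum reading of the definition are assumptions.

```python
import numpy as np

def tmg_frobenius(tensor_field, radius=1):
    """Tensorial morphological gradient: max pairwise Frobenius distance in a window."""
    h, w = tensor_field.shape[:2]
    out = np.zeros((h, w))
    for i in range(h):
        for j in range(w):
            i0, i1 = max(i - radius, 0), min(i + radius + 1, h)
            j0, j1 = max(j - radius, 0), min(j + radius + 1, w)
            patch = tensor_field[i0:i1, j0:j1].reshape(-1, *tensor_field.shape[2:])
            diff = patch[:, None] - patch[None, :]          # all pairs in the neighborhood
            out[i, j] = np.sqrt((diff ** 2).sum(axis=(-2, -1))).max()
    return out

# Synthetic field: two regions with differently oriented anisotropic 2x2 tensors
A = np.array([[1.0, 0.0], [0.0, 0.1]])
B = np.array([[0.1, 0.0], [0.0, 1.0]])
field = np.empty((32, 32, 2, 2))
field[:, :16] = A
field[:, 16:] = B

edges = tmg_frobenius(field)
print("gradient peaks along the region boundary:",
      edges[:, 15:17].mean() > edges[:, :14].mean())
```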

  9. Gradient-based stochastic estimation of the density matrix

    Science.gov (United States)

    Wang, Zhentao; Chern, Gia-Wei; Batista, Cristian D.; Barros, Kipton

    2018-03-01

    Fast estimation of the single-particle density matrix is key to many applications in quantum chemistry and condensed matter physics. The best numerical methods leverage the fact that the density matrix elements f(H)_ij decay rapidly with distance r_ij between orbitals. This decay is usually exponential. However, for the special case of metals at zero temperature, algebraic decay of the density matrix appears and poses a significant numerical challenge. We introduce a gradient-based probing method to estimate all local density matrix elements at a computational cost that scales linearly with system size. For zero-temperature metals, the stochastic error scales like S^{-(d+2)/2d}, where d is the dimension and S is a prefactor to the computational cost. The convergence becomes exponential if the system is at finite temperature or is insulating.

  10. Democratic population decisions result in robust policy-gradient learning: a parametric study with GPU simulations.

    Directory of Open Access Journals (Sweden)

    Paul Richmond

    2011-05-01

    Full Text Available High performance computing on the Graphics Processing Unit (GPU) is an emerging field driven by the promise of high computational power at a low cost. However, GPU programming is a non-trivial task and, moreover, architectural limitations raise the question of whether investing effort in this direction may be worthwhile. In this work, we use GPU programming to simulate a two-layer network of Integrate-and-Fire neurons with varying degrees of recurrent connectivity and investigate its ability to learn a simplified navigation task using a policy-gradient learning rule stemming from Reinforcement Learning. The purpose of this paper is twofold. First, we want to support the use of GPUs in the field of Computational Neuroscience. Second, using GPU computing power, we investigate the conditions under which the said architecture and learning rule demonstrate best performance. Our work indicates that networks featuring strong Mexican-Hat-shaped recurrent connections in the top layer, where decision making is governed by the formation of a stable activity bump in the neural population (a “non-democratic” mechanism), achieve mediocre learning results at best. In the absence of recurrent connections, where all neurons “vote” independently (“democratically”) for a decision via population vector readout, the task is generally learned better and more robustly. Our study would have been extremely difficult on a desktop computer without the use of GPU programming. We present the routines developed for this purpose and show that a speed improvement of 5x up to 42x is provided versus optimised Python code. The higher speed is achieved when we exploit the parallelism of the GPU in the search of learning parameters. This suggests that efficient GPU programming can significantly reduce the time needed for simulating networks of spiking neurons, particularly when multiple parameter configurations are investigated.

  11. Stability of boundary layer flow based on energy gradient theory

    Science.gov (United States)

    Dou, Hua-Shu; Xu, Wenqian; Khoo, Boo Cheong

    2018-05-01

    The flow of the laminar boundary layer on a flat plate is studied with the simulation of Navier-Stokes equations. The mechanisms of flow instability at external edge of the boundary layer and near the wall are analyzed using the energy gradient theory. The simulation results show that there is an overshoot on the velocity profile at the external edge of the boundary layer. At this overshoot, the energy gradient function is very large which results in instability according to the energy gradient theory. It is found that the transverse gradient of the total mechanical energy is responsible for the instability at the external edge of the boundary layer, which induces the entrainment of external flow into the boundary layer. Within the boundary layer, there is a maximum of the energy gradient function near the wall, which leads to intensive flow instability near the wall and contributes to the generation of turbulence.

  12. Estimating Soil Hydraulic Parameters using Gradient Based Approach

    Science.gov (United States)

    Rai, P. K.; Tripathi, S.

    2017-12-01

    The conventional way of estimating parameters of a differential equation is to minimize the error between the observations and their estimates. The estimates are produced from the forward solution (numerical or analytical) of the differential equation assuming a set of parameters. Parameter estimation using the conventional approach requires high computational cost, setting up of initial and boundary conditions, and formation of difference equations if the forward solution is obtained numerically. Gaussian process based approaches like Gaussian Process Ordinary Differential Equation (GPODE) and Adaptive Gradient Matching (AGM) have been developed to estimate the parameters of Ordinary Differential Equations without explicitly solving them. Claims have been made that these approaches can straightforwardly be extended to Partial Differential Equations; however, this has never been demonstrated. This study extends the AGM approach to PDEs and applies it to estimating the parameters of the Richards equation. Unlike the conventional approach, the AGM approach does not require setting up initial and boundary conditions explicitly, which is often difficult in real-world applications of the Richards equation. The developed methodology was applied to synthetic soil moisture data. It was seen that the proposed methodology can estimate the soil hydraulic parameters correctly and can be a potential alternative to the conventional method.

  13. Nano-resonator frequency response based on strain gradient theory

    International Nuclear Information System (INIS)

    Miandoab, Ehsan Maani; Yousefi-Koma, Aghil; Pishkenari, Hossein Nejat; Fathi, Mohammad

    2014-01-01

    This paper aims to explore the dynamic behaviour of a nano-resonator under ac and dc excitation using strain gradient theory. To achieve this goal, the partial differential equation of nano-beam vibration is first converted to an ordinary differential equation by the Galerkin projection method and the lumped model is derived. Lumped parameters of the nano-resonator, such as linear and nonlinear springs and damper coefficients, are compared with those of classical theory and it is demonstrated that beams with smaller thickness display greater deviation from classical parameters. Stable and unstable equilibrium points based on classic and non-classical theories are also compared. The results show that, regarding the applied dc voltage, the dynamic behaviours expected by classical and non-classical theories are significantly different, such that one theory predicts the un-deformed shape as the stable condition, while the other theory predicts that the beam will experience bi-stability. To obtain the frequency response of the nano-resonator, a general equation including cubic and quadratic nonlinearities in addition to parametric electrostatic excitation terms is derived, and the analytical solution is determined using a second-order multiple scales method. Based on frequency response analysis, the softening and hardening effects given by two theories are investigated and compared, and it is observed that neglecting the size effect can lead to two completely different predictions in the dynamic behaviour of the resonators. The findings of this article can be helpful in the design and characterization of the size-dependent dynamic behaviour of resonators on small scales. (paper)

  14. Electrohydromechanical analysis based on conductivity gradient in microchannel

    International Nuclear Information System (INIS)

    Jiang Hongyuan; Ren Yukun; Ao Hongrui; Ramos, Antonio

    2008-01-01

    Fluid manipulation is very important in any lab-on-a-chip system. This paper analyses phenomena which use an alternating current (AC) electric field to deflect and manipulate coflowing streams of two different electrolytes (with a conductivity gradient) within a microfluidic channel. The basic theory of electrohydrodynamics and simulation of the analytical model are used to explain the phenomena. The velocities induced for different voltages and conductivity gradients are computed. The results show that when the AC electrical signal is applied on the electrodes, the fluid with higher conductivity occupies a larger region of the channel and the interface of the two fluids is deflected. This provides a basic reference for further study of the control of different fluids with conductivity gradients in a microfluidic channel.

  15. The Physics of Compressive Sensing and the Gradient-Based Recovery Algorithms

    OpenAIRE

    Dai, Qi; Sha, Wei

    2009-01-01

    The physics of compressive sensing (CS) and the gradient-based recovery algorithms are presented. First, the different forms of CS are summarized. Second, the physical meanings of coherence and measurement are given. Third, the gradient-based recovery algorithms and their geometric explanations are provided. Finally, we conclude the report and give some suggestions for future work.

  16. Directory Enabled Policy Based Networking; TOPICAL

    International Nuclear Information System (INIS)

    KELIIAA, CURTIS M.

    2001-01-01

    This report presents a discussion of directory-enabled policy-based networking with an emphasis on its role as the foundation for securely scalable enterprise networks. A directory service provides the object-oriented logical environment for interactive cyber-policy implementation. Cyber-policy implementation includes security, network management, operational process and quality of service policies. The leading network-technology vendors have invested in these technologies for secure universal connectivity that traverses Internet, extranet and intranet boundaries. Industry standards are established that provide the fundamental guidelines for directory deployment scalable to global networks. The integration of policy-based networking with directory-service technologies provides for intelligent management of the enterprise network environment as an end-to-end system of related clients, services and resources. This architecture allows logical policies to protect data, manage security and provision critical network services, permitting a proactive defense-in-depth cyber-security posture. Enterprise networking imposes the consideration of supporting multiple computing platforms, sites and business-operation models. An industry-standards based approach combined with principled systems engineering in the deployment of these technologies allows these issues to be successfully addressed. This discussion is focused on a directory-based policy architecture for the heterogeneous enterprise network-computing environment and does not propose specific vendor solutions. This document is written to present practical design methodology and provide an understanding of the risks, complexities and, most importantly, the benefits of directory-enabled policy-based networking.

  17. Design of LED projector based on gradient-index lens

    Science.gov (United States)

    Qian, Liyong; Zhu, Xiangbing; Cui, Haitian; Wang, Yuanhang

    2018-01-01

    In this study, a new type of projector light path is designed to eliminate the deficits of existing projection systems, such as complex structure and low collection efficiency. Using a three-color LED array as the lighting source, by means of the special optical properties of a gradient-index lens, the complex structure of the traditional projector is simplified. Traditional components, such as the color wheel, relay lens, and mirror, become unnecessary. In this way, traditional problems, such as low utilization of light energy and loss of light energy, are solved. With the help of Zemax software, the projection lens is optimized. The optimized projection lens, LED, gradient-index lens, and digital micromirror device are imported into Tracepro. The ray tracing results show that both the utilization of light energy and the uniformity are improved significantly.

  18. Moving force identification based on modified preconditioned conjugate gradient method

    Science.gov (United States)

    Chen, Zhen; Chan, Tommy H. T.; Nguyen, Andy

    2018-06-01

    This paper develops a modified preconditioned conjugate gradient (M-PCG) method for moving force identification (MFI) by improving the conjugate gradient (CG) and preconditioned conjugate gradient (PCG) methods with a modified Gram-Schmidt algorithm. The method aims to obtain more accurate and more efficient identification results from the responses of bridge deck caused by vehicles passing by, which are known to be sensitive to ill-posed problems that exist in the inverse problem. A simply supported beam model with biaxial time-varying forces is used to generate numerical simulations with various analysis scenarios to assess the effectiveness of the method. Evaluation results show that regularization matrix L and number of iterations j are very important influence factors to identification accuracy and noise immunity of M-PCG. Compared with the conventional counterpart SVD embedded in the time domain method (TDM) and the standard form of CG, the M-PCG with proper regularization matrix has many advantages such as better adaptability and more robust to ill-posed problems. More importantly, it is shown that the average optimal numbers of iterations of M-PCG can be reduced by more than 70% compared with PCG and this apparently makes M-PCG a preferred choice for field MFI applications.

  19. ADAPTIVE ANT COLONY OPTIMIZATION BASED GRADIENT FOR EDGE DETECTION

    Directory of Open Access Journals (Sweden)

    Febri Liantoni

    2014-08-01

    Full Text Available Ant Colony Optimization (ACO) is a nature-inspired optimization algorithm motivated by the foraging behavior of ants. Due to its favorable advantages, ACO has been widely used to solve several NP-hard problems, including edge detection. Since ACO initially distributes ants at random, it may cause an imbalanced ant distribution which later affects the path discovery process. In this paper, an adaptive ACO is proposed to optimize edge detection by adaptively distributing ants according to gradient analysis. Ants are adaptively distributed according to the gradient ratio of each image region. A region with a larger gradient ratio receives a larger number of ants. Experiments are conducted using images from various datasets. Precision and recall are used to quantitatively evaluate the performance of the proposed algorithm. The precision and recall of the adaptive ACO reach 76.98% and 96.8%, whereas the highest precision and recall for the standard ACO are 69.74% and 74.85%. Experimental results show that the adaptive ACO outperforms the standard ACO, which randomly distributes ants.
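
    A hedged sketch of the initialization step described here: ants are allocated to image blocks in proportion to each block's share of the total gradient magnitude instead of uniformly at random. The block size, ant count, and gradient operator are assumptions.

```python
import numpy as np

def distribute_ants(image, n_ants=512, block=16):
    """Place more ants in blocks with a larger share of the total gradient magnitude."""
    gy, gx = np.gradient(image.astype(float))
    mag = np.hypot(gx, gy)
    h, w = image.shape
    positions = []
    for i in range(0, h, block):
        for j in range(0, w, block):
            region = mag[i:i + block, j:j + block]
            share = region.sum() / (mag.sum() + 1e-12)      # gradient ratio of this block
            k = int(round(share * n_ants))
            rows = np.random.randint(i, min(i + block, h), size=k)
            cols = np.random.randint(j, min(j + block, w), size=k)
            positions.extend(zip(rows, cols))
    return np.array(positions)

# Toy image: a bright square on a dark background; the edges attract more ants
img = np.zeros((64, 64)); img[20:44, 20:44] = 1.0
ants = distribute_ants(img)
print("ants placed:", len(ants))
```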

  20. Approximated Function Based Spectral Gradient Algorithm for Sparse Signal Recovery

    Directory of Open Access Journals (Sweden)

    Weifeng Wang

    2014-02-01

    Full Text Available Numerical algorithms for the l0-norm regularized non-smooth non-convex minimization problems have recently become a topic of great interest within signal processing, compressive sensing, statistics, and machine learning. Nevertheless, the l0-norm makes the problem combinatorial and generally computationally intractable. In this paper, we construct a new surrogate function to approximate l0-norm regularization, and subsequently make the discrete optimization problem continuous and smooth. Then we use the well-known spectral gradient algorithm to solve the resulting smooth optimization problem. Experiments are provided which illustrate that this method is very promising.

  1. Weighted graph based ordering techniques for preconditioned conjugate gradient methods

    Science.gov (United States)

    Clift, Simon S.; Tang, Wei-Pai

    1994-01-01

    We describe the basis of a matrix ordering heuristic for improving the incomplete factorization used in preconditioned conjugate gradient techniques applied to anisotropic PDEs. Several new matrix ordering techniques, derived from well-known algorithms in combinatorial graph theory, which attempt to implement this heuristic, are described. These ordering techniques are tested against a number of matrices arising from linear anisotropic PDEs, and compared with other matrix ordering techniques. A variation of RCM is shown to generally improve the quality of incomplete factorization preconditioners.

  2. Algorithm for image retrieval based on edge gradient orientation statistical code.

    Science.gov (United States)

    Zeng, Jiexian; Zhao, Yonggang; Li, Weiye; Fu, Xiang

    2014-01-01

    The image edge gradient direction not only contains important shape information but also has the advantage of being simple and of low complexity. Considering that edge gradient direction histograms and the edge direction autocorrelogram are not rotation invariant, we put forward an image retrieval algorithm based on an edge gradient orientation statistical code (hereinafter referred to as EGOSC), which applies the statistical treatment of the eight-neighborhood edge-direction chain code to the statistics of the edge gradient direction. Firstly, we construct the n-direction vector and impose a maximal-summation restriction on the EGOSC to ensure that the algorithm is effectively rotation invariant. Then, we use the Euclidean distance of the edge gradient direction entropy to measure shape similarity, so that this method is not sensitive to scaling, color, and illumination changes. The experimental results and the algorithm analysis demonstrate that the algorithm can be used for content-based image retrieval and achieves good retrieval results.

  3. Cohort-based income gradients in obesity among U.S. adults.

    Science.gov (United States)

    Heo, Jongho; Beck, Audrey N; Lin, Shih-Fan; Marcelli, Enrico; Lindsay, Suzanne; Karl Finch, Brian

    2018-03-01

    No studies have focused on socioeconomic disparities in obesity within and between cohorts. Our objectives were to examine income gradients in obesity between birth-cohorts (inter-cohort variations) and within each birth-cohort (intra-cohort variations) by gender and race/ethnicity. Our sample includes 56,820 white and black adults from pooled, cross-sectional National Health and Nutrition Examination Surveys (1971-2012). We fit a series of logistic hierarchical Age-Period-Cohort models to control for the effects of age and period, simultaneously. Predicted probabilities of obesity by poverty-to-income ratio were estimated and graphed for 5-year cohort groups from 1901-1990. We also stratified this relationship for four gender and racial/ethnic subgroups. Obesity disparities due to income were weaker for post-World War I and II generations, specifically the mid-1920s and the mid-1940s to 1950s cohorts, than for other cohorts. In contrast, we found greater income gradients in obesity among cohorts from the 1930s to mid-1940s and mid-1960s to 1970s. Moreover, obesity disparities due to income across cohorts vary markedly by gender and race/ethnicity. White women with higher income consistently exhibited a lower likelihood of obesity than those with lower income since early 1900s cohorts; whereas, black men with higher income exhibited higher risks of obesity than those with lower income in most cohorts. Our findings suggest that strategies that address race and/or gender inequalities in obesity should be cognizant of significant historical factors that may be unique to cohorts. Period-based approaches that ignore life-course experiences captured in significant cohort-based experiences may limit the utility of policies and interventions. © 2017 Wiley Periodicals, Inc.

  4. Time-domain full waveform inversion using the gradient preconditioning based on transmitted waves energy

    KAUST Repository

    Zhang, Xiao-bo

    2017-06-01

    The gradient preconditioning approach based on seismic wave energy can effectively avoid the huge storage consumption of gradient preconditioning algorithms based on Hessian matrices in time-domain full waveform inversion (FWI), but its accuracy is affected by the energy of reflected waves when strong reflectors are present in the velocity model. To address this problem, we propose a gradient preconditioning method that scales the gradient based on the energy of the “approximated transmitted wavefield” simulated by the nonreflecting acoustic wave equation. The method does not require computing or storing the Hessian matrix or its inverse. Furthermore, it can effectively eliminate the effects of geometric spreading and non-uniform illumination on the gradient. The results of model experiments confirm that time-domain FWI using gradient preconditioning based on transmitted-wave energy achieves higher inversion precision for high-velocity bodies and the deep strata below them than preconditioning based on total seismic wave energy.
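
    A minimal sketch of the scaling idea, assuming NumPy and synthetic arrays in place of a real modelled wavefield: the gradient is divided by the accumulated (illumination) energy of forward-modelled snapshots, with a small stabilisation term; this is a generic energy preconditioner, not the authors' exact implementation.

        import numpy as np

        def precondition_gradient(gradient, wavefield_snapshots, eps=1e-3):
            """Scale an FWI gradient by accumulated wavefield energy (illumination).

            gradient            : (nz, nx) raw gradient
            wavefield_snapshots : (nt, nz, nx) forward-modelled wavefield, e.g. from a
                                  nonreflecting (transmission-only) simulation
            """
            energy = np.sum(wavefield_snapshots ** 2, axis=0)   # per-cell energy
            energy /= energy.max()                              # normalise
            return gradient / (energy + eps)                    # damp well-lit cells

        # Toy usage with synthetic arrays.
        rng = np.random.default_rng(1)
        g = rng.standard_normal((50, 80))
        u = rng.standard_normal((100, 50, 80))
        print(precondition_gradient(g, u).shape)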

  5. An inverse problem strategy based on forward model evaluations: Gradient-based optimization without adjoint solves

    Energy Technology Data Exchange (ETDEWEB)

    Aguilo Valentin, Miguel Alejandro [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2016-07-01

    This study presents a new nonlinear programming formulation for the solution of inverse problems. First, a general inverse problem formulation based on the compliance error functional is presented. The proposed error functional enables the computation of the Lagrange multipliers, and thus the first order derivative information, at the expense of just one model evaluation. Therefore, the calculation of the Lagrange multipliers does not require the solution of the computationally intensive adjoint problem. This leads to significant speedups for large-scale, gradient-based inverse problems.

  6. Policy administration in tag-based authorization

    NARCIS (Netherlands)

    Etalle, Sandro; Hinrichs, Timothy L.; Lee, Adam J.; Trivellato, Daniel; Zannone, Nicola

    2013-01-01

    Tag-Based Authorization (TBA) is a hybrid access control model that combines the ease of use of extensional access control models with the expressivity of logic-based formalisms. The main limitation of TBA is that it lacks support for policy administration. More precisely, it does not allow

  7. A gradient-based method for segmenting FDG-PET images: methodology and validation

    International Nuclear Information System (INIS)

    Geets, Xavier; Lee, John A.; Gregoire, Vincent; Bol, Anne; Lonneux, Max

    2007-01-01

    A new gradient-based method for segmenting FDG-PET images is described and validated. The proposed method relies on the watershed transform and hierarchical cluster analysis. To allow a better estimation of the gradient intensity, iteratively reconstructed images were first denoised and deblurred with an edge-preserving filter and a constrained iterative deconvolution algorithm. Validation was first performed on computer-generated 3D phantoms containing spheres, then on a real cylindrical Lucite phantom containing spheres of different volumes ranging from 2.1 to 92.9 ml. Moreover, laryngeal tumours from seven patients were segmented on PET images acquired before laryngectomy by the gradient-based method and by the thresholding method based on the source-to-background ratio developed by Daisne (Radiother Oncol 2003;69:247-50). For the spheres, the calculated volumes and radii were compared with the known values; for laryngeal tumours, the volumes were compared with the macroscopic specimens. Volume mismatches were also analysed. On the computer-generated phantoms, the deconvolution algorithm decreased the misestimation of volumes and radii. For the Lucite phantom, the gradient-based method led to a slight underestimation of sphere volumes (by 10-20%), corresponding to negligible radius differences (0.5-1.1 mm); for laryngeal tumours, the volumes segmented by the gradient-based method agreed with those delineated on the macroscopic specimens, whereas the threshold-based method overestimated the true volume by 68% (p = 0.014). Lastly, the macroscopic laryngeal specimens were not fully encompassed by either the threshold-based or the gradient-based volumes. The gradient-based segmentation method applied to denoised and deblurred images proved to be more accurate than the source-to-background ratio method. (orig.)
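
    A greatly simplified sketch of a gradient-plus-watershed segmentation, assuming scikit-image and a synthetic blob in place of PET data; a Gaussian filter stands in for the edge-preserving denoising and deconvolution steps, and the marker quantiles are arbitrary illustrative choices.

        import numpy as np
        from skimage.filters import sobel, gaussian
        from skimage.segmentation import watershed

        def gradient_watershed_segmentation(img, hot_quantile=0.95, bg_quantile=0.50):
            """Split a (denoised) uptake image into 'lesion' and 'background' regions
            by flooding the gradient-magnitude image from two sets of markers."""
            smoothed = gaussian(img, sigma=1.0)   # stand-in for edge-preserving denoising
            grad = sobel(smoothed)                # gradient magnitude
            markers = np.zeros(img.shape, dtype=int)
            markers[smoothed < np.quantile(smoothed, bg_quantile)] = 1   # background seed
            markers[smoothed > np.quantile(smoothed, hot_quantile)] = 2  # high-uptake seed
            labels = watershed(grad, markers)
            return labels == 2

        # Toy usage: a bright blob on a noisy background.
        yy, xx = np.mgrid[0:64, 0:64]
        phantom = np.exp(-((yy - 32) ** 2 + (xx - 32) ** 2) / 50.0)
        phantom += 0.05 * np.random.default_rng(2).standard_normal(phantom.shape)
        mask = gradient_watershed_segmentation(phantom)
        print(mask.sum(), "pixels segmented")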

  8. Policy-Based Management Natural Language Parser

    Science.gov (United States)

    James, Mark

    2009-01-01

    The Policy-Based Management Natural Language Parser (PBEM) is a rules-based approach to enterprise management that can be used to automate certain management tasks. This parser simplifies the management of a given endeavor by establishing policies to deal with situations that are likely to occur. Policies are operating rules that can be referred to as a means of maintaining order, security, consistency, or other ways of successfully furthering a goal or mission. PBEM provides a way of managing the configuration of network elements, applications, and processes via a set of high-level rules or business policies rather than managing individual elements, thus switching the control to a higher level. This software allows unique management rules (or commands) to be specified and applied to a cross-section of the Global Information Grid (GIG). This software embodies a parser that is capable of recognizing and understanding conversational English. Because all possible dialect variants cannot be anticipated, a unique capability was developed that parses based on conversational intent rather than on the exact way the words are used. This software can increase productivity by enabling a user to converse with the system in conversational English to define network policies. PBEM can be used in both manned and unmanned science-gathering programs. Because policy statements can be domain-independent, this software can be applied equally to a wide variety of applications.

  9. Time dependent policy-based access control

    DEFF Research Database (Denmark)

    Vasilikos, Panagiotis; Nielson, Flemming; Nielson, Hanne Riis

    2017-01-01

    Access control policies are essential to determine who is allowed to access data in a system without compromising the data's security. However, applications inside a distributed environment may require those policies to be dependent on the actual content of the data, the flow of information, while also on other attributes of the environment such as the time. In this paper, we use systems of Timed Automata to model distributed systems and we present a logic in which one can express time-dependent policies for access control. We show how a fragment of our logic can be reduced to a logic that current model checkers for Timed Automata such as UPPAAL can handle, and we present a translator that performs this reduction. We then use our translator and UPPAAL to enforce time-dependent policy-based access control on an example application from the aerospace industry.

  10. Microsphere-based gradient implants for osteochondral regeneration: a long-term study in sheep

    Science.gov (United States)

    Mohan, Neethu; Gupta, Vineet; Sridharan, Banu Priya; Mellott, Adam J; Easley, Jeremiah T; Palmer, Ross H; Galbraith, Richard A; Key, Vincent H; Berkland, Cory J; Detamore, Michael S

    2015-01-01

    Background: The microfracture technique for cartilage repair has limited ability to regenerate hyaline cartilage. Aim: The current study made a direct comparison between microfracture and an osteochondral approach with microsphere-based gradient plugs. Materials & methods: The PLGA-based scaffolds had opposing gradients of chondroitin sulfate and β-tricalcium phosphate. A 1-year repair study in sheep was conducted. Results: The repair tissues in the microfracture group were mostly fibrous and had scattered fissures with degenerative changes. Cartilage regenerated with the gradient plugs had equal or superior mechanical properties, with lacunated cells and a stable matrix as in hyaline cartilage. Conclusion: This first report of gradient scaffolds in a long-term, large animal, osteochondral defect demonstrated potential for equal or better cartilage repair than microfracture. PMID:26418471

  11. Practical mathematical optimization basic optimization theory and gradient-based algorithms

    CERN Document Server

    Snyman, Jan A

    2018-01-01

    This textbook presents a wide range of tools for a course in mathematical optimization for upper undergraduate and graduate students in mathematics, engineering, computer science, and other applied sciences. Basic optimization principles are presented with emphasis on gradient-based numerical optimization strategies and algorithms for solving both smooth and noisy discontinuous optimization problems. Attention is also paid to the difficulties of expense of function evaluations and the existence of multiple minima that often unnecessarily inhibit the use of gradient-based methods. This second edition addresses further advancements of gradient-only optimization strategies to handle discontinuities in objective functions. New chapters discuss the construction of surrogate models as well as new gradient-only solution strategies and numerical optimization using Python. A special Python module is electronically available (via springerlink) that makes the new algorithms featured in the text easily accessible and dir...

  12. On condition based maintenance policy

    Directory of Open Access Journals (Sweden)

    Jong-Ho Shin

    2015-04-01

    Full Text Available In the case of a high-value asset, the Operation and Maintenance (O&M) phase incurs higher costs and more effort than the installation (construction) phase, because the asset has a long usage life and any accident during this period causes catastrophic damage to an industry. Recently, with the advent of emerging Information and Communication Technologies (ICTs), we can obtain visibility of asset status information during its usage period. This raises new challenging issues for improving the efficiency of asset operations. One issue is to implement the Condition-Based Maintenance (CBM) approach, which makes a diagnosis of the asset status based on wired or wirelessly monitored data, predicts asset abnormalities, and executes suitable maintenance actions such as repair and replacement before serious problems happen. In this study, we have addressed several aspects of the CBM approach: definition, related international standards, procedure, and techniques, with the introduction of some relevant case studies that we have carried out.

  13. Gradient-based reliability maps for ACM-based segmentation of hippocampus.

    Science.gov (United States)

    Zarpalas, Dimitrios; Gkontra, Polyxeni; Daras, Petros; Maglaveras, Nicos

    2014-04-01

    Automatic segmentation of deep brain structures, such as the hippocampus (HC), in MR images has attracted considerable scientific attention due to the widespread use of MRI and to the principal role of some structures in various mental disorders. In this literature, there exists a substantial amount of work relying on deformable models incorporating prior knowledge about structures' anatomy and shape information. However, shape priors capture global shape characteristics and thus fail to model boundaries of varying properties; HC boundaries present rich, poor, and missing gradient regions. On top of that, shape prior knowledge is blended with image information in the evolution process, through global weighting of the two terms, again neglecting the spatially varying boundary properties, causing segmentation faults. An innovative method is hereby presented that aims to achieve highly accurate HC segmentation in MR images, based on the modeling of boundary properties at each anatomical location and the inclusion of appropriate image information for each of those, within an active contour model framework. Hence, blending of image information and prior knowledge is based on a local weighting map, which mixes gradient information, regional and whole brain statistical information with a multi-atlas-based spatial distribution map of the structure's labels. Experimental results on three different datasets demonstrate the efficacy and accuracy of the proposed method.

  14. Model-reduced gradient-based history matching

    NARCIS (Netherlands)

    Kaleta, M.P.

    2011-01-01

    Since the world's energy demand increases every year, the oil & gas industry makes a continuous effort to improve fossil fuel recovery. Physics-based petroleum reservoir modeling and closed-loop model-based reservoir management concept can play an important role here. In this concept measured data

  15. Bidirectional composition on lie groups for gradient-based image alignment.

    Science.gov (United States)

    Mégret, Rémi; Authesserre, Jean-Baptiste; Berthoumieu, Yannick

    2010-09-01

    In this paper, a new formulation based on bidirectional composition on Lie groups (BCL) for parametric gradient-based image alignment is presented. Contrary to conventional approaches, the BCL method takes advantage of the gradients of both the template and the current image without combining them a priori. Based on this bidirectional formulation, two methods are proposed and their relationship with state-of-the-art gradient-based approaches is fully discussed. The first one, i.e., the BCL method, relies on the compositional framework to provide the minimization of the compensated error with respect to an augmented parameter vector. The second one, the projected BCL (PBCL), corresponds to a close approximation of the BCL approach. A comparative study is carried out dealing with computational complexity, convergence rate and frequency of convergence. Numerical experiments using a conventional benchmark show the performance improvement, especially for asymmetric levels of noise, which is also discussed from a theoretical point of view.

  16. A gradient based algorithm to solve inverse plane bimodular problems of identification

    Science.gov (United States)

    Ran, Chunjiang; Yang, Haitian; Zhang, Guoqing

    2018-02-01

    This paper presents a gradient-based algorithm to solve inverse plane bimodular problems of identifying constitutive parameters, including tensile/compressive moduli and tensile/compressive Poisson's ratios. For the forward bimodular problem, an FE tangent stiffness matrix is derived, facilitating the implementation of gradient-based algorithms; for the inverse bimodular problem of identification, a two-level sensitivity-analysis-based strategy is proposed. Numerical verification in terms of accuracy and efficiency is provided, and the impacts of the initial guess, the number of measurement points, regional inhomogeneity, and noisy data on the identification are taken into account.

  17. Design and Performance of Property Gradient Ternary Nitride Coating Based on Process Control.

    Science.gov (United States)

    Yan, Pei; Chen, Kaijie; Wang, Yubin; Zhou, Han; Peng, Zeyu; Jiao, Li; Wang, Xibin

    2018-05-09

    Surface coating is an effective approach to improve cutting tool performance, and multiple or gradient coating structures have become a common development strategy. However, composition mutations at the interfaces decrease the performance of multi-layered coatings. The key mitigation technique has been to reduce the interface effect at the boundaries. This study proposes a structure design method for property-component gradient coatings based on process control. The method produces coatings with high internal cohesion and high external hardness, which could reduce the composition and performance mutations at the interface. A ZrTiN property gradient ternary nitride coating was deposited on cemented carbide by multi-arc ion plating with separated Ti and Zr targets. The mechanical properties, friction behaviors, and cutting performances were systematically investigated, compared with a single-layer coating. The results indicated that the gradient coating had better friction and wear performance with lower wear rate and higher resistance to peeling off during sliding friction. The gradient coating had better wear and damage resistance in cutting processes, with lower machined surface roughness Ra. Gradient-structured coatings could effectively inhibit micro crack initiation and growth under alternating force and temperature load. This method could be extended to similar ternary nitride coatings.

  18. Gradient-based optimization in nonlinear structural dynamics

    DEFF Research Database (Denmark)

    Dou, Suguang

    The intrinsic nonlinearity of mechanical structures can give rise to rich nonlinear dynamics. Recently, nonlinear dynamics of micro-mechanical structures have contributed to developing new Micro-Electro-Mechanical Systems (MEMS), for example, the atomic force microscope, passive frequency dividers, frequency stabilization, and disk resonator gyroscopes. For advanced design of these structures, it is of considerable value to extend current optimization in linear structural dynamics into nonlinear structural dynamics. In this thesis, we present a framework for modelling, analysis, characterization, and optimization of nonlinear structural dynamics. In the modelling, nonlinear finite elements are used. In the analysis, nonlinear frequency response and nonlinear normal modes are calculated based on a harmonic balance method with higher-order harmonics. In the characterization, nonlinear modal coupling...

  19. Case-Based Policy and Goal Recognition

    Science.gov (United States)

    2015-09-30

    Policy and Goal Recognizer (PaGR), a case-based system for multiagent keyhole recognition. PaGR is a knowledge recognition component within a decision... However, unlike our agent in the BVR domain, these recognition agents have access to perfect information. Single-agent keyhole plan recognition can be... listed below: 1. Facing Target; 2. Closing on Target; 3. Target Range; 4. Within a Target's Weapon Range; 5. Has Target within Weapon Range; 6. Is in Danger

  20. Block-Based Gradient Descent for Local Backlight Dimming and Flicker Reduction

    DEFF Research Database (Denmark)

    Burini, Nino; Mantel, Claire; Nadernejad, Ehsan

    2014-01-01

    Local backlight dimming is a technology for LED-backlit LCD screens that provides a two-fold improvement: it reduces power consumption and increases visual quality. This paper presents a fast version of an iterative backlight dimming algorithm based on gradient descent search. The speed is increased...

  1. An algorithm for gradient-based dynamic optimization of UV flash processes

    DEFF Research Database (Denmark)

    Ritschel, Tobias Kasper Skovborg; Capolei, Andrea; Gaspar, Jozsef

    2017-01-01

    This paper presents a novel single-shooting algorithm for gradient-based solution of optimal control problems with vapor-liquid equilibrium constraints. Such optimal control problems are important in several engineering applications, for instance in control of distillation columns and in certain two-phase flow problems... software as well as the performance of different compilers in a Linux operating system. These tests indicate that real-time nonlinear model predictive control of UV flash processes is computationally feasible.

  2. Gradient for the acoustic VTI full waveform inversion based on the instantaneous traveltime sensitivity kernels

    KAUST Repository

    Djebbi, Ramzi

    2015-08-19

    The instantaneous traveltime is able to reduce the non-linearity of full waveform inversion (FWI) that originates from the wrapping of the phase. However, the adjoint state method in this case requires a total of 5 modeling calculations to compute the gradient. Also, considering the larger modeling cost for anisotropic wavefield extrapolation and the necessity to use a line-search algorithm to estimate a step length that depends on the parameters scale, we propose to calculate the gradient based on the instantaneous traveltime sensitivity kernels. We, specifically, use the sensitivity kernels computed using dynamic ray-tracing to build the gradient. The resulting update is computed using a matrix decomposition and accordingly the computational cost is reduced. We consider a simple example where an anomaly is embedded into a constant background medium and we compute the update for the VTI wave equation parameterized using vh, η and ε.

  3. Gradient for the acoustic VTI full waveform inversion based on the instantaneous traveltime sensitivity kernels

    KAUST Repository

    Djebbi, Ramzi; Alkhalifah, Tariq Ali

    2015-01-01

    The instantaneous traveltime is able to reduce the non-linearity of full waveform inversion (FWI) that originates from the wrapping of the phase. However, the adjoint state method in this case requires a total of 5 modeling calculations to compute the gradient. Also, considering the larger modeling cost for anisotropic wavefield extrapolation and the necessity to use a line-search algorithm to estimate a step length that depends on the parameters scale, we propose to calculate the gradient based on the instantaneous traveltime sensitivity kernels. We, specifically, use the sensitivity kernels computed using dynamic ray-tracing to build the gradient. The resulting update is computed using a matrix decomposition and accordingly the computational cost is reduced. We consider a simple example where an anomaly is embedded into a constant background medium and we compute the update for the VTI wave equation parameterized using vh, η and ε.

  4. Accelerated gradient methods for total-variation-based CT image reconstruction

    Energy Technology Data Exchange (ETDEWEB)

    Joergensen, Jakob H.; Hansen, Per Christian [Technical Univ. of Denmark, Lyngby (Denmark). Dept. of Informatics and Mathematical Modeling; Jensen, Tobias L.; Jensen, Soeren H. [Aalborg Univ. (Denmark). Dept. of Electronic Systems; Sidky, Emil Y.; Pan, Xiaochuan [Chicago Univ., Chicago, IL (United States). Dept. of Radiology

    2011-07-01

    Total-variation (TV)-based CT image reconstruction has been shown experimentally to be capable of producing accurate reconstructions from sparse-view data. In particular, TV-based reconstruction is well suited for images with piecewise nearly constant regions. Computationally, however, TV-based reconstruction is demanding, especially for 3D imaging, and the reconstruction from clinical data sets is far from real-time. This is undesirable from a clinical perspective, and thus there is an incentive to accelerate the solution of the underlying optimization problem. The TV reconstruction can in principle be found by any optimization method, but in practice the large scale of the systems arising in CT image reconstruction precludes the use of memory-intensive methods such as Newton's method. The simple gradient method has much lower memory requirements, but exhibits prohibitively slow convergence. In the present work we address the question of how to reduce the number of gradient method iterations needed to achieve a high-accuracy TV reconstruction. We consider the use of two accelerated gradient-based methods, GPBB and UPN, to solve the 3D-TV minimization problem in CT image reconstruction. The former incorporates several heuristics from the optimization literature such as Barzilai-Borwein (BB) step size selection and nonmonotone line search. The latter uses a cleverly chosen sequence of auxiliary points to achieve a better convergence rate. The methods are memory efficient and equipped with a stopping criterion to ensure that the TV reconstruction has indeed been found. An implementation of the methods (in C with interface to Matlab) is available for download from http://www2.imm.dtu.dk/~pch/TVReg/. We compare the proposed methods with the standard gradient method, applied to a 3D test problem with synthetic few-view data. We find experimentally that for realistic parameters the proposed methods significantly outperform the standard gradient method. (orig.)
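
    The GPBB and UPN codes are available at the URL above; the sketch below is only a toy 1D analogue, assuming NumPy, of a Barzilai-Borwein gradient iteration applied to a least-squares term plus a smoothed total-variation penalty; the smoothing, step-size safeguard and test signal are illustrative choices, not the authors' implementation.

        import numpy as np

        def tv_grad(x, lam, eps=1e-6):
            # Gradient of lam * sum sqrt((x_{i+1}-x_i)^2 + eps)  (smoothed 1D TV).
            d = np.diff(x)
            w = d / np.sqrt(d * d + eps)
            g = np.zeros_like(x)
            g[:-1] -= w
            g[1:] += w
            return lam * g

        def bb_gradient_tv(A, b, lam, n_iter=200):
            """Barzilai-Borwein gradient iterations for 0.5*||Ax-b||^2 + smoothed TV."""
            x = np.zeros(A.shape[1])
            g = A.T @ (A @ x - b) + tv_grad(x, lam)
            alpha = 1e-2
            for _ in range(n_iter):
                x_new = x - alpha * g
                g_new = A.T @ (A @ x_new - b) + tv_grad(x_new, lam)
                s, y = x_new - x, g_new - g
                alpha = (s @ s) / max(s @ y, 1e-12)     # BB1 step size
                x, g = x_new, g_new
            return x

        # Toy usage: recover a piecewise-constant signal from noisy random projections.
        rng = np.random.default_rng(3)
        x_true = np.concatenate([np.zeros(30), np.ones(40), 0.3 * np.ones(30)])
        A = rng.standard_normal((80, x_true.size)) / np.sqrt(80)
        b = A @ x_true + 0.01 * rng.standard_normal(80)
        x_rec = bb_gradient_tv(A, b, lam=0.05)
        print(np.linalg.norm(x_rec - x_true) / np.linalg.norm(x_true))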

  5. Evidence-based policy versus morality policy: the case of syringe access programs.

    Science.gov (United States)

    de Saxe Zerden, Lisa; O'Quinn, Erin; Davis, Corey

    2015-01-01

    Evidence-based practice (EBP) combines proven interventions with clinical experience, ethics, and client preferences to inform treatment and services. Although EBP is integrated into most aspects of social work and public health, at times EBP is at odds with social policy. In this article the authors explore the paradox of evidence-based policy using syringe access programs (SAP) as a case example, and review methods of bridging the gap between the emphasis on EBP and lack of evidence informing SAP policy. Analysis includes the overuse of morality policy and examines historical and current theories why this paradox exists. Action steps are highlighted for creating effective policy and opportunities for public health change. Strategies on reframing the problem and shifting target population focus to garner support for evidence-based policy change are included. This interdisciplinary understanding of the way in which these factors converge is a critical first step in moving beyond morality-based policy toward evidence-based policy.

  6. A blind deconvolution method based on L1/L2 regularization prior in the gradient space

    Science.gov (United States)

    Cai, Ying; Shi, Yu; Hua, Xia

    2018-02-01

    In image restoration, the restored result can differ greatly from the real image because of noise. To solve this ill-posed problem, a blind deconvolution method based on an L1/L2 regularization prior in the gradient domain is proposed. The method first adds a function to the prior knowledge, namely the ratio of the L1 norm to the L2 norm, and takes this function as the penalty term in the high-frequency domain of the image. The function is then iteratively updated, and the iterative shrinkage-thresholding algorithm is applied to solve for the high-frequency image. We consider that the information in the gradient domain is better suited for estimating the blur kernel, so the blur kernel is estimated in the gradient domain. This problem can be implemented quickly in the frequency domain using the Fast Fourier Transform. In addition, to improve the effectiveness of the algorithm, a multi-scale iterative optimization scheme is added. The proposed blind deconvolution method based on L1/L2 regularization priors in the gradient space can obtain a unique and stable solution in the image restoration process, which not only preserves the edges and details of the image but also ensures the accuracy of the results.
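
    A small sketch, assuming NumPy and SciPy, of the sparsity measure the prior is built on: the ratio of the L1 norm to the L2 norm of the image gradients, which typically grows as an image is blurred; this only illustrates the prior, not the full iterative-shrinkage deconvolution described above.

        import numpy as np
        from scipy.ndimage import gaussian_filter

        def l1_over_l2_gradient_prior(img, eps=1e-12):
            """Sparsity measure used as a prior: ratio of the L1 to the L2 norm of the
            image gradients (smaller values indicate sharper, sparser gradients)."""
            gy, gx = np.gradient(img.astype(float))
            g = np.concatenate([gx.ravel(), gy.ravel()])
            return np.abs(g).sum() / (np.linalg.norm(g) + eps)

        # A blurred image typically scores higher (denser gradients) than a sharp one.
        rng = np.random.default_rng(4)
        sharp = (rng.random((64, 64)) > 0.99).astype(float)      # a few sharp spikes
        blurred = gaussian_filter(sharp, sigma=2.0)
        print(l1_over_l2_gradient_prior(sharp), l1_over_l2_gradient_prior(blurred))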

  7. Detection of ferromagnetic target based on mobile magnetic gradient tensor system

    Energy Technology Data Exchange (ETDEWEB)

    Gang, Y.I.N., E-mail: gang.gang88@163.com; Yingtang, Zhang; Zhining, Li; Hongbo, Fan; Guoquan, Ren

    2016-03-15

    Attitude change of a mobile magnetic gradient tensor system critically affects the precision of gradient measurements, thereby increasing ambiguity in target detection. This paper presents a rotational-invariant-based method for locating and identifying ferromagnetic targets. Firstly, the unit magnetic moment vector was derived based on the geometrical invariant that the intermediate eigenvector of the magnetic gradient tensor is perpendicular to the magnetic moment vector and the source–sensor displacement vector. Secondly, the unit source–sensor displacement vector was derived based on the characteristic that the angle between the magnetic moment vector and the source–sensor displacement is a rotational invariant. By introducing a displacement vector between two measurement points, the magnetic moment vector and the source–sensor displacement vector were theoretically derived. To address the measurement noise present in realistic detection applications, linear equations were formulated using invariants corresponding to several distinct measurement points, and least-squares solutions of the magnetic moment vector and the source–sensor displacement vector were obtained. Results of simulation and a principle-verification experiment showed the correctness of the analytical method, along with the practicability of the least-squares method. - Highlights: • Ferromagnetic target detection method is proposed based on rotational invariants • Intermediate eigenvector is perpendicular to magnetic moment and displacement vector • Angle between magnetic moment and displacement vector is a rotational invariant • Magnetic moment and displacement vector are derived based on invariants of two points.

  8. Constructing regional advantage: platform policies based on related variety and differentiated knowledge bases.

    NARCIS (Netherlands)

    Asheim, B.T.; Boschma, R.A.; Cooke, P.

    2011-01-01

    Constructing regional advantage: platform policies based on related variety and differentiated knowledge bases, Regional Studies. This paper presents a regional innovation policy model based on the idea of constructing regional advantage. This policy model brings together concepts like related

  9. The Urban-Rural Gradient In Asthma: A Population-Based Study in Northern Europe

    Directory of Open Access Journals (Sweden)

    Signe Timm

    2015-12-01

    Full Text Available The early life environment appears to have a persistent impact on asthma risk. We hypothesize that environmental factors related to rural life mediate lower asthma prevalence in rural populations, and aimed to investigate an urban-rural gradient, assessed by place of upbringing, for asthma. The population-based Respiratory Health In Northern Europe (RHINE) study includes subjects from Denmark, Norway, Sweden, Iceland and Estonia born 1945–1973. The present analysis encompasses questionnaire data on 11,123 RHINE subjects. Six categories of place of upbringing were defined: farm with livestock, farm without livestock, village in rural area, small town, city suburb and inner city. The association of place of upbringing with asthma onset was analysed with Cox regression adjusted for relevant confounders. Subjects growing up on livestock farms had less asthma (8%) than subjects growing up in inner cities (11%) (hazard ratio 0.72, 95% CI 0.57–0.91), and a significant urban-rural gradient was observed across six urbanisation levels (p = 0.02). An urban-rural gradient was only evident among women, smokers and for late-onset asthma. Analyses of wheeze and place of upbringing revealed similar results. In conclusion, this study suggests a protective effect of livestock farm upbringing on asthma development and an urban-rural gradient in a Northern European population.

  10. Implementing evidence-based policy in a network setting: road safety policy in the Netherlands.

    Science.gov (United States)

    Bax, Charlotte; de Jong, Martin; Koppenjan, Joop

    2010-01-01

    In the early 1990s, in order to improve road safety in The Netherlands, the Institute for Road Safety Research (SWOV) developed an evidence-based "Sustainable Safety" concept. Based on this concept, Dutch road safety policy was seen as successful and as a best practice in Europe. In The Netherlands, the policy context has now changed from a sectoral policy setting towards a fragmented network in which safety is a facet of other transport-related policies. In this contribution, it is argued that the implementation strategy underlying Sustainable Safety should be aligned with the changed context. In order to explore the adjustments needed, two perspectives on policy implementation are discussed: (1) national evidence-based policies with sectoral implementation; and (2) decentralized negotiation on transport policy in which road safety is but one aspect. We argue that the latter approach best matches the characteristics of the newly evolved policy context, and conclude with recommendations for reformulating the implementation strategy.

  11. Integrating policy-based management and SLA performance monitoring

    Science.gov (United States)

    Liu, Tzong-Jye; Lin, Chin-Yi; Chang, Shu-Hsin; Yen, Meng-Tzu

    2001-10-01

    Policy-based management systems provide configuration capabilities that allow system administrators to focus on the requirements of customers. The service level agreement (SLA) performance monitoring mechanism helps system administrators verify the correctness of policies. However, it is difficult for a device to process policies directly because policies are a management-level concept. This paper proposes a mechanism to decompose a policy into rules that can be efficiently processed by a device. Thus, the device may process the rules and collect performance statistics efficiently, and the policy-based management system may collect this performance statistics information and report service-level agreement performance monitoring information to the system administrator. The proposed policy-based management system achieves both the policy configuration and service-level agreement performance monitoring requirements. A policy consists of a condition part and an action part. The condition part is a Boolean expression of a source host IP group, a destination host IP group, etc. The action part contains the parameters of services. We say that an address group is compact if it consists only of a range of IP addresses that can be denoted by a pair of an IP address and a corresponding IP mask. If the condition part of a policy consists only of compact address groups, we say that the policy is a rule. Since a device can efficiently process a compact address and a system administrator prefers to define a range of IP addresses, the policy-based management system has to translate policies into rules and fill the gaps between policies and rules. The proposed policy-based management system builds the relationships between VPNs and policies, and between policies and rules. Since the system administrator wants to monitor the system performance information of VPNs and policies, the proposed policy-based management system downloads the relationships among VPNs, policies and rules to the
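
    A minimal sketch of the policy-to-rule decomposition described above, using Python's standard ipaddress module: an arbitrary source-address range in a policy condition is split into "compact" address groups, each expressible as an IP address plus mask; the rule fields and action payload are invented for illustration.

        import ipaddress

        def policy_to_rules(src_start, src_end, action):
            """Decompose a policy whose condition covers an arbitrary source IP range
            into rules whose address groups are 'compact' (one network = IP + mask)."""
            start = ipaddress.ip_address(src_start)
            end = ipaddress.ip_address(src_end)
            return [{"src": str(net), "action": action}
                    for net in ipaddress.summarize_address_range(start, end)]

        # Toy usage: one policy becomes several device-processable rules.
        rules = policy_to_rules("10.0.0.5", "10.0.3.20", {"dscp": "EF"})
        for r in rules:
            print(r)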

  12. Topographic gradient based site characterization in India complemented by strong ground-motion spectral attributes

    KAUST Repository

    Nath, Sankar Kumar; Thingbaijam, Kiran Kumar; Adhikari, M. D.; Nayak, Avinash; Devaraj, N.; Ghosh, Soumalya K.; Mahajan, Arun K.

    2013-01-01

    We appraise the topographic-gradient approach for site classification that employs correlations between 30 m column-averaged shear-wave velocity and topographic gradients. Assessments based on site classifications reported from cities across India indicate that the approach is reasonably viable at a regional level. Additionally, we experiment with three techniques for site classification based on strong ground-motion recordings, namely the Horizontal-to-Vertical Spectral Ratio (HVSR), Response Spectra Shape (RSS), and Horizontal-to-Vertical Response Spectral Ratio (HVRSR), at the strong motion stations located across the Himalayas and northeast India. Statistical tests on the results indicate that these three techniques broadly differentiate soil and rock sites, while RSS and HVRSR yield better signatures. The results also support the implemented site classification in the light of strong ground-motion spectral attributes observed in different parts of the globe. © 2013 Elsevier Ltd.

  13. Topographic gradient based site characterization in India complemented by strong ground-motion spectral attributes

    KAUST Repository

    Nath, Sankar Kumar

    2013-12-01

    We appraise the topographic-gradient approach for site classification that employs correlations between 30 m column-averaged shear-wave velocity and topographic gradients. Assessments based on site classifications reported from cities across India indicate that the approach is reasonably viable at a regional level. Additionally, we experiment with three techniques for site classification based on strong ground-motion recordings, namely the Horizontal-to-Vertical Spectral Ratio (HVSR), Response Spectra Shape (RSS), and Horizontal-to-Vertical Response Spectral Ratio (HVRSR), at the strong motion stations located across the Himalayas and northeast India. Statistical tests on the results indicate that these three techniques broadly differentiate soil and rock sites, while RSS and HVRSR yield better signatures. The results also support the implemented site classification in the light of strong ground-motion spectral attributes observed in different parts of the globe. © 2013 Elsevier Ltd.

  14. MATLAB Simulation of Gradient-Based Neural Network for Online Matrix Inversion

    Science.gov (United States)

    Zhang, Yunong; Chen, Ke; Ma, Weimu; Li, Xiao-Dong

    This paper investigates the simulation of a gradient-based recurrent neural network for online solution of the matrix-inverse problem. Several important techniques are employed as follows to simulate such a neural system. 1) Kronecker product of matrices is introduced to transform a matrix-differential-equation (MDE) to a vector-differential-equation (VDE); i.e., finally, a standard ordinary-differential-equation (ODE) is obtained. 2) MATLAB routine "ode45" is introduced to solve the transformed initial-value ODE problem. 3) In addition to various implementation errors, different kinds of activation functions are simulated to show the characteristics of such a neural network. Simulation results substantiate the theoretical analysis and efficacy of the gradient-based neural network for online constant matrix inversion.
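
    A rough Python analogue of the simulation described above, assuming NumPy and SciPy: the gradient dynamics dX/dt = -gamma * A^T (A X - I), whose equilibrium is the inverse of A, are integrated with SciPy's RK45 solver (playing the role of MATLAB's ode45); reshaping the state vector stands in for the Kronecker-product vectorisation, and gamma, the time horizon and the test matrix are illustrative choices.

        import numpy as np
        from scipy.integrate import solve_ivp

        def gnn_matrix_inverse(A, gamma=50.0, t_end=2.0):
            """Integrate the gradient-based neural dynamics dX/dt = -gamma*A^T(AX - I),
            whose equilibrium is X = inv(A), with an RK45 solver."""
            n = A.shape[0]
            I = np.eye(n)

            def rhs(t, x_vec):
                X = x_vec.reshape(n, n)          # reshape = Kronecker vectorisation
                dX = -gamma * A.T @ (A @ X - I)
                return dX.ravel()

            sol = solve_ivp(rhs, (0.0, t_end), np.zeros(n * n), method="RK45",
                            rtol=1e-8, atol=1e-10)
            return sol.y[:, -1].reshape(n, n)

        # Toy usage.
        rng = np.random.default_rng(5)
        A = rng.standard_normal((4, 4)) + 4 * np.eye(4)   # well-conditioned test matrix
        X = gnn_matrix_inverse(A)
        print(np.max(np.abs(X @ A - np.eye(4))))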

  15. The Adjoint Method for Gradient-based Dynamic Optimization of UV Flash Processes

    DEFF Research Database (Denmark)

    Ritschel, Tobias Kasper Skovborg; Capolei, Andrea; Jørgensen, John Bagterp

    2017-01-01

    This paper presents a novel single-shooting algorithm for gradient-based solution of optimal control problems with vapor-liquid equilibrium constraints. Dynamic optimization of UV flash processes is relevant in nonlinear model predictive control of distillation columns, certain two-phase flow pro......-component flash process which demonstrate the importance of the optimization solver, the compiler, and the linear algebra software for the efficiency of dynamic optimization of UV flash processes....

  16. Optimization of offshore wind turbine support structures using analytical gradient-based method

    OpenAIRE

    Chew, Kok Hon; Tai, Kang; Ng, E.Y.K.; Muskulus, Michael

    2015-01-01

    Design optimization of the offshore wind turbine support structure is an expensive task; due to the highly-constrained, non-convex and non-linear nature of the design problem. This report presents an analytical gradient-based method to solve this problem in an efficient and effective way. The design sensitivities of the objective and constraint functions are evaluated analytically while the optimization of the structure is performed, subject to sizing, eigenfrequency, extreme load an...

  17. Asynchronous Gossip-Based Gradient-Free Method for Multiagent Optimization

    OpenAIRE

    Deming Yuan

    2014-01-01

    This paper considers the constrained multiagent optimization problem. The objective function of the problem is a sum of convex functions, each of which is known by a specific agent only. For solving this problem, we propose an asynchronous distributed method that is based on gradient-free oracles and gossip algorithm. In contrast to the existing work, we do not require that agents be capable of computing the subgradients of their objective functions and coordinating their...

  18. Stochastic quasi-gradient based optimization algorithms for dynamic reliability applications

    International Nuclear Information System (INIS)

    Bourgeois, F.; Labeau, P.E.

    2001-01-01

    On one hand, PSA results are increasingly used in decision making, system management and optimization of system design. On the other hand, when severe accidental transients are considered, dynamic reliability appears appropriate to account for the complex interaction between the transitions between hardware configurations, the operator behavior and the dynamic evolution of the system. This paper presents an exploratory work in which the estimation of the system unreliability in a dynamic context is coupled with an optimization algorithm to determine the 'best' safety policy. Because some reliability parameters are likely to be distributed, the cost function to be minimized turns out to be a random variable. Stochastic programming techniques are therefore envisioned to determine an optimal strategy. Monte Carlo simulation is used at all stages of the computations, from the estimation of the system unreliability to that of the stochastic quasi-gradient. The optimization algorithm is illustrated on an HNO3 supply system

  19. Ionospheric forecasting model using fuzzy logic-based gradient descent method

    Directory of Open Access Journals (Sweden)

    D. Venkata Ratnam

    2017-09-01

    Full Text Available Space weather phenomena cause satellite-to-ground or satellite-to-aircraft transmission outages over the VHF to L-band frequency range, particularly in the low latitude region. The Global Positioning System (GPS) is primarily susceptible to this form of space weather. Faulty GPS signals are attributed to ionospheric error, which is a function of Total Electron Content (TEC). Importantly, precise forecasts of space weather conditions and appropriate hazard-observant cautions required for ionospheric space weather observations are limited. In this paper, a fuzzy logic-based gradient descent method has been proposed to forecast ionospheric TEC values. In this technique, membership functions have been tuned based on the gradient descent estimated values. The proposed algorithm has been tested with the TEC data of two geomagnetic storms at the low latitude station of KL University, Guntur, India (16.44°N, 80.62°E). It has been found that the gradient descent method performs well and the predicted TEC values are close to the original TEC measurements.
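
    The record does not give the membership-function details, so the sketch below is only a generic stand-in, assuming NumPy and synthetic diurnal TEC-like data: a two-rule Sugeno-style fuzzy model whose Gaussian membership parameters and consequents are tuned by (finite-difference) gradient descent.

        import numpy as np

        def fuzzy_predict(x, params):
            """Zero-order Sugeno fuzzy model with two Gaussian membership functions."""
            c1, s1, w1, c2, s2, w2 = params
            m1 = np.exp(-0.5 * ((x - c1) / s1) ** 2)
            m2 = np.exp(-0.5 * ((x - c2) / s2) ** 2)
            return (m1 * w1 + m2 * w2) / (m1 + m2 + 1e-12)

        def train_gradient_descent(x, y, params, lr=0.01, n_iter=5000, h=1e-5):
            params = np.asarray(params, dtype=float)
            for _ in range(n_iter):
                grad = np.zeros_like(params)
                base = np.mean((fuzzy_predict(x, params) - y) ** 2)
                for i in range(params.size):          # finite-difference gradient
                    p = params.copy()
                    p[i] += h
                    grad[i] = (np.mean((fuzzy_predict(x, p) - y) ** 2) - base) / h
                params -= lr * grad
            return params

        # Toy usage: fit a diurnal-looking TEC curve (hour of day vs TEC units, synthetic).
        hours = np.linspace(0, 24, 97)
        tec = 20 + 25 * np.exp(-0.5 * ((hours - 14) / 3.5) ** 2)
        p0 = [6.0, 4.0, 15.0, 16.0, 4.0, 30.0]
        p_fit = train_gradient_descent(hours, tec, p0)
        print(np.sqrt(np.mean((fuzzy_predict(hours, p_fit) - tec) ** 2)))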

  20. A systematic approach to robust preconditioning for gradient-based inverse scattering algorithms

    International Nuclear Information System (INIS)

    Nordebo, Sven; Fhager, Andreas; Persson, Mikael; Gustafsson, Mats

    2008-01-01

    This paper presents a systematic approach to robust preconditioning for gradient-based nonlinear inverse scattering algorithms. In particular, one- and two-dimensional inverse problems are considered where the permittivity and conductivity profiles are unknown and the input data consist of the scattered field over a certain bandwidth. A time-domain least-squares formulation is employed and the inversion algorithm is based on a conjugate gradient or quasi-Newton algorithm together with an FDTD electromagnetic solver. A Fisher information analysis is used to estimate the Hessian of the error functional. A robust preconditioner is then obtained by incorporating a parameter scaling such that the scaled Fisher information has a unit diagonal. By improving the conditioning of the Hessian, the convergence rate of the conjugate gradient or quasi-Newton methods is improved. The preconditioner is robust in the sense that the scaling, i.e. the diagonal Fisher information, is virtually invariant to the numerical resolution and the discretization model that is employed. Numerical examples of image reconstruction are included to illustrate the efficiency of the proposed technique
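
    A toy sketch of the scaling idea, assuming NumPy: dividing the gradient by the diagonal of the (estimated) Fisher information is equivalent to taking a plain gradient step in parameters rescaled so that the scaled Fisher information has a unit diagonal; the quadratic test objective is invented for illustration and is unrelated to the FDTD solver above.

        import numpy as np

        def fisher_scaled_step(grad, fisher_diag, step=1.0, floor=1e-12):
            """Precondition a gradient step by the diagonal Fisher information
            (grad -> grad / diag(F)), i.e. unit-diagonal scaling of the parameters."""
            d = np.maximum(fisher_diag, floor)
            return -step * grad / d

        # Toy usage: badly scaled quadratic 0.5 * sum(f_i * p_i^2), where diag(F) = f.
        f = np.array([1e4, 1.0, 1e-2])
        p = np.array([1.0, 1.0, 1.0])
        for _ in range(5):
            grad = f * p                       # exact gradient of the quadratic
            p += fisher_scaled_step(grad, f)   # converges in one step per coordinate
        print(p)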

  1. Research on n-γ discrimination method based on spectrum gradient analysis of signals

    International Nuclear Information System (INIS)

    Luo Xiaoliang; Liu Guofu; Yang Jun; Wang Yueke

    2013-01-01

    Having observed distinct differences between the spectrum gradients of the neutron and γ-ray output signals from liquid scintillator detectors, this paper presents an n-γ discrimination method called spectrum gradient analysis (SGA), based on frequency-domain features of the pulse signals. The basic principle and feasibility of the SGA method are discussed, and the validity of the n-γ discrimination results of SGA is verified by an associated-particle neutron flight experiment. The discrimination performance of SGA was evaluated at sampling rates ranging from 5 GS/s down to 250 MS/s. The results show that the SGA method exhibits insensitivity to noise, strong anti-interference ability, stable discrimination performance and a lower amount of calculation in contrast with time-domain n-γ discrimination methods. (authors)
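
    A loose illustration of a frequency-domain pulse feature, assuming NumPy and synthetic two-exponential pulses rather than measured detector signals: the slope (gradient) of the normalised amplitude spectrum in the low-frequency band differs when the slow decay component differs; this is not the authors' SGA implementation.

        import numpy as np

        def spectrum_gradient_feature(pulse, dt=1e-9):
            """Average low-frequency slope of the normalised amplitude spectrum of a pulse.
            Neutron and gamma pulses from liquid scintillators differ in their slow decay
            component, which shows up as a different low-frequency spectral slope."""
            spec = np.abs(np.fft.rfft(pulse))
            spec /= spec.max() + 1e-12
            grad = np.gradient(spec, np.fft.rfftfreq(len(pulse), dt))
            return grad[:len(grad) // 8].mean()

        # Toy usage: two synthetic pulses with different fast/slow decay mixtures.
        t = np.arange(0, 400e-9, 1e-9)
        gamma_like = np.exp(-t / 7e-9) + 0.05 * np.exp(-t / 100e-9)
        neutron_like = np.exp(-t / 7e-9) + 0.30 * np.exp(-t / 100e-9)
        print(spectrum_gradient_feature(gamma_like), spectrum_gradient_feature(neutron_like))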

  2. A study of microindentation hardness tests by mechanism-based strain gradient plasticity

    International Nuclear Information System (INIS)

    Huang, Y.; Xue, Z.; Gao, H.; Nix, W. D.; Xia, Z. C.

    2000-01-01

    We recently proposed a theory of mechanism-based strain gradient (MSG) plasticity to account for the size dependence of plastic deformation at micron- and submicron-length scales. The MSG plasticity theory connects micron-scale plasticity to dislocation theories via a multiscale, hierarchical framework linking Taylor's dislocation hardening model to strain gradient plasticity. Here we show that the theory of MSG plasticity, when used to study micro-indentation, indeed reproduces the linear dependence observed in experiments, thus providing an important self-consistent check of the theory. The effects of pileup, sink-in, and the radius of indenter tip have been taken into account in the indentation model. In accomplishing this objective, we have generalized the MSG plasticity theory to include the elastic deformation in the hierarchical framework. (c) 2000 Materials Research Society

  3. Conjugate gradient based projection - A new explicit methodology for frictional contact

    Science.gov (United States)

    Tamma, Kumar K.; Li, Maocheng; Sha, Desong

    1993-01-01

    With special attention towards the applicability to parallel computation or vectorization, a new and effective explicit approach for linear complementary formulations involving a conjugate gradient based projection methodology is proposed in this study for contact problems with Coulomb friction. The overall objectives are focussed towards providing an explicit methodology of computation for the complete contact problem with friction. In this regard, the primary idea for solving the linear complementary formulations stems from an established search direction which is projected to a feasible region determined by the non-negative constraint condition; this direction is then applied to the Fletcher-Reeves conjugate gradient method resulting in a powerful explicit methodology which possesses high accuracy, excellent convergence characteristics, fast computational speed and is relatively simple to implement for contact problems involving Coulomb friction.

  4. Enhancing Evidence-Based Public Health Policy: Developing and Using Policy Narratives.

    Science.gov (United States)

    Troy, Lisa M; Kietzman, Kathryn G

    2016-06-01

    Academic researchers and clinicians have a critical role in shaping public policies to improve the health of an aging America. Policy narratives that pair personal stories with research statistics are a powerful tool to share knowledge generated in academic and clinical settings with policymakers. Effective policy narratives rely on a trustworthy and competent narrator and a compelling story that highlights the personal impact of policies under consideration and academic research that bolsters the story. Awareness of the cultural differences in the motivations, expectations, and institutional constraints of academic researchers and clinicians as information producers and U.S. Congress and federal agencies as information users is critical to the development of policy narratives that impact policy decisions. The current article describes the development and use of policy narratives to bridge cultures and enhance evidence-based public health policies that better meet the needs of older adults. [Journal of Gerontological Nursing, 42(6), 11-17.]. Copyright 2016, SLACK Incorporated.

  5. Extrusion-based 3D printing of poly(propylene fumarate) scaffolds with hydroxyapatite gradients

    Science.gov (United States)

    Trachtenberg, Jordan E.; Placone, Jesse K.; Smith, Brandon T.; Fisher, John P.; Mikos, Antonios G.

    2017-01-01

    The primary focus of this work is to present the current challenges of printing scaffolds with concentration gradients of nanoparticles with an aim to improve the processing of these scaffolds. Furthermore, we address how print fidelity is related to material composition and emphasize the importance of considering this relationship when developing complex scaffolds for bone implants. The ability to create complex tissues is becoming increasingly relevant in the tissue engineering community. For bone tissue engineering applications, this work demonstrates the ability to use extrusion-based printing techniques to control the spatial deposition of hydroxyapatite (HA) nanoparticles in a 3D composite scaffold. In doing so, we combined the benefits of synthetic, degradable polymers, such as poly(propylene fumarate) (PPF), with osteoconductive HA nanoparticles that provide robust compressive mechanical properties. Furthermore, the final 3D printed scaffolds consisted of well-defined layers with interconnected pores, two critical features for a successful bone implant. To demonstrate a controlled gradient of HA, thermogravimetric analysis was carried out to quantify HA on a per-layer basis. Moreover, we non-destructively evaluated the tendency of HA particles to aggregate within PPF using micro-computed tomography (µCT). This work provides insight for proper fabrication and characterization of composite scaffolds containing particle gradients and has broad applicability for future efforts in fabricating complex scaffolds for tissue engineering applications. PMID:28125380

  6. Characterization of the Diamond-like Carbon Based Functionally Gradient Film

    Institute of Scientific and Technical Information of China (English)

    2002-01-01

    Diamond-like carbon coatings have been used as solid lubricating coatings in vacuum technology for their good physical and chemical properties. In this paper, the hybrid technique of unbalanced magnetron sputtering and plasma immersion ion implantation (PIII) was adopted to fabricate a diamond-like carbon-based functionally gradient film, N/TiN/Ti(N,C)/DLC, on a 304 stainless steel substrate. The film was characterized using Raman spectroscopy and glancing X-ray diffraction (GXRD), and the topography and surface roughness of the film were observed using AFM. The mechanical properties of the film were evaluated by nano-indentation. The results showed that the surface roughness of the film was approximately 0.732 nm. The hardness, elastic modulus, fracture toughness and interfacial fracture toughness of the N/TiN/Ti(N,C)/DLC functionally gradient film were about 19.84 GPa, 190.03 GPa, 3.75 MPa·m^(1/2) and 5.68 MPa·m^(1/2), respectively. Compared with a DLC monolayer and a C/TiC/DLC multilayer, this DLC gradient film has better qualities as a solid lubricating coating.

  7. Iris Location Algorithm Based on the CANNY Operator and Gradient Hough Transform

    Science.gov (United States)

    Zhong, L. H.; Meng, K.; Wang, Y.; Dai, Z. Q.; Li, S.

    2017-12-01

    In an iris recognition system, the accuracy of locating the inner and outer edges of the iris directly affects the performance of the recognition system, so iris localization is an important research topic. Our iris data contain eyelids, eyelashes, light spots and other noise, and the gray-level variation of the images is not pronounced, so general iris localization methods fail on these images. A method for iris localization based on the Canny operator and the gradient Hough transform is therefore proposed. Firstly, the images are pre-processed; then, the gradient information of the images is calculated and the inner and outer edges of the iris are coarsely positioned using the Canny operator; finally, the gradient Hough transform is used to precisely localize the inner and outer edges of the iris. The experimental results show that our algorithm can locate the inner and outer edges of the iris well, has strong anti-interference ability, greatly reduces the localization time, and has higher accuracy and stability.
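
    A rough sketch of the coarse-to-fine idea, assuming OpenCV (cv2) and a synthetic eye-like test image: a Canny edge map gives the coarse stage, and OpenCV's gradient-based Hough circle transform (cv2.HOUGH_GRADIENT) localises circular boundaries; all thresholds and radius bounds are illustrative, not the paper's settings.

        import cv2
        import numpy as np

        def locate_iris(gray):
            """Coarse-to-fine circle localisation: a Canny edge map for the coarse stage,
            then OpenCV's gradient-based Hough circle transform for precise localisation."""
            blurred = cv2.GaussianBlur(gray, (7, 7), 1.5)
            edges = cv2.Canny(blurred, 40, 120)            # coarse edge map (pupil/limbus)
            circles = cv2.HoughCircles(blurred, cv2.HOUGH_GRADIENT, dp=1.5,
                                       minDist=30, param1=120, param2=40,
                                       minRadius=20, maxRadius=150)
            return edges, None if circles is None else np.round(circles[0]).astype(int)

        # Toy usage on a synthetic eye-like image (dark discs on a brighter background).
        img = np.full((240, 320), 180, np.uint8)
        cv2.circle(img, (160, 120), 60, 90, -1)    # "iris"
        cv2.circle(img, (160, 120), 25, 20, -1)    # "pupil"
        edges, circles = locate_iris(img)
        print(circles)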

  8. Retinal Blood Vessel Segmentation in Fundus Images Using Gradient Based Adaptive Thresholding and Region Growing

    Directory of Open Access Journals (Sweden)

    Deni Sutaji

    2016-07-01

    Full Text Available Abstract: Blood vessel segmentation in retinal fundus images is of substantial importance in medicine, because it can be used to detect diseases such as diabetic retinopathy, hypertension, and cardiovascular disease. A physician needs about two hours to delineate the retinal blood vessels, so a method that can speed up screening is needed. Previous work was able to segment vessels with sensitivity to variations in vessel width, but over-segmentation still occurred in pathological areas. Therefore, this study aims to develop a blood vessel segmentation method for retinal fundus images that reduces over-segmentation in pathological areas using Gradient Based Adaptive Thresholding and Region Growing. The proposed method consists of three stages: segmentation of the main vessels, detection of pathological areas, and segmentation of thin vessels. The main vessel segmentation stage uses high-pass filtering and top-hat reconstruction on the contrast-enhanced green channel of the image, so that the difference between vessels and background becomes clearer. The pathology detection stage uses the Gradient Based Adaptive Thresholding method. The thin vessel segmentation stage uses Region Growing based on the label information of the main vessels and of the pathological areas. The segmentation results for the main and thin vessels are then combined to form the system output, a binary image of the blood vessels. Based on the experiments, the method segments retinal blood vessels well on the DRIVE fundus images, with an average accuracy of 95.25% and an Area Under Curve (AUC) of the Relative Operating Characteristic (ROC) curve of 74.28%. Keywords: retinal fundus image, gradient based adaptive thresholding, pathology, retinal blood vessels, region growing
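
    A loose sketch of a gradient-based adaptive threshold, assuming NumPy/SciPy and a random array in place of the fundus green channel: a pixel is kept when its gradient magnitude exceeds the local mean by a multiple of the local spread; the paper's top-hat reconstruction and region-growing stages are not reproduced here.

        import numpy as np
        from scipy.ndimage import uniform_filter, sobel

        def gradient_adaptive_threshold(green, window=25, k=1.0):
            """Flag pixels whose gradient magnitude exceeds the local mean gradient
            by a margin proportional to the local gradient spread."""
            g = np.hypot(sobel(green.astype(float), axis=0),
                         sobel(green.astype(float), axis=1))
            local_mean = uniform_filter(g, window)
            local_sq = uniform_filter(g * g, window)
            local_std = np.sqrt(np.maximum(local_sq - local_mean ** 2, 0.0))
            return g > local_mean + k * local_std

        # Toy usage on a random "green channel"; real use would start from a fundus image.
        rng = np.random.default_rng(6)
        green = rng.random((128, 128))
        mask = gradient_adaptive_threshold(green)
        print(mask.mean())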

  9. A Scalable Policy and SNMP Based Network Management Framework

    Institute of Scientific and Technical Information of China (English)

    LIU Su-ping; DING Yong-sheng

    2009-01-01

    Traditional SNMP-based network management cannot deal with the task of managing large-scale distributed networks, while policy-based management is one of the effective solutions for network and distributed systems management. However, cross-vendor hardware compatibility is one of the limitations of policy-based management. Devices in current networks mostly support SNMP rather than the Common Open Policy Service (COPS) protocol. By analyzing traditional network management and policy-based network management, a scalable network management framework is proposed. It combines the Internet Engineering Task Force (IETF) framework for policy-based management with SNMP-based network management. By interpreting and translating policy decisions into SNMP messages, policies can be executed on traditional SNMP-based devices.

  10. Structuring AHP-based maintenance policy selection

    NARCIS (Netherlands)

    Goossens, Adriaan; Basten, Robertus Johannes Ida; Hummel, J. Marjan; van der Wegen, Leonardus L.M.

    2015-01-01

    We aim to structure the maintenance policy selection process for ships, using the Analytic Hierarchy Process (AHP). Maintenance is an important contributor to reach the intended life-time of capital technical assets, and it is gaining increasing interest and relevance. A maintenance policy is a

  11. An optical flow algorithm based on gradient constancy assumption for PIV image processing

    International Nuclear Information System (INIS)

    Zhong, Qianglong; Yang, Hua; Yin, Zhouping

    2017-01-01

    Particle image velocimetry (PIV) has matured as a flow measurement technique. It enables the description of the instantaneous velocity field of the flow by analyzing the particle motion obtained from digitally recorded images. The correlation-based PIV evaluation technique is widely used because of its good accuracy and robustness. Although very successful, the correlation PIV technique has some weaknesses which can be avoided by optical flow based PIV algorithms. At present, most of the optical flow methods applied to PIV are based on the brightness constancy assumption. However, some aspects of flow imaging technology and the intrinsic properties of the fluids make the brightness constancy assumption less appropriate in real PIV cases. In this paper, an implementation of a 2D optical flow algorithm (GCOF) based on the gradient constancy assumption is introduced. The proposed GCOF assumes the edges of the illuminated PIV particles are constant during motion. It comprises two terms: a combined local-global gradient data term and a first-order divergence and vorticity smoothness term. The approach can provide accurate dense motion fields. It is tested on synthetic images and on two experimental flows. The comparison of GCOF with other optical flow algorithms indicates that the proposed method is more accurate, especially under illumination variation. The comparison of GCOF with the correlation PIV technique shows that GCOF has advantages in preserving small divergence and vorticity structures of the motion field and produces fewer outliers. As a consequence, GCOF acquires a more accurate and better topological description of the turbulent flow. (paper)

  12. MR-based field-of-view extension in MR/PET: B0 homogenization using gradient enhancement (HUGE).

    Science.gov (United States)

    Blumhagen, Jan O; Ladebeck, Ralf; Fenchel, Matthias; Scheffler, Klaus

    2013-10-01

    In whole-body MR/PET, the human attenuation correction can be based on the MR data. However, an MR-based field-of-view (FoV) is limited due to physical restrictions such as B0 inhomogeneities and gradient nonlinearities. Therefore, for large patients, the MR image and the attenuation map might be truncated and the attenuation correction might be biased. The aim of this work is to explore extending the MR FoV through B0 homogenization using gradient enhancement in which an optimal readout gradient field is determined to locally compensate B0 inhomogeneities and gradient nonlinearities. A spin-echo-based sequence was developed that computes an optimal gradient for certain regions of interest, for example, the patient's arms. A significant distortion reduction was achieved outside the normal MR-based FoV. This FoV extension was achieved without any hardware modifications. In-plane distortions in a transaxially extended FoV of up to 600 mm were analyzed in phantom studies. In vivo measurements of the patient's arms lying outside the normal specified FoV were compared with and without the use of B0 homogenization using gradient enhancement. In summary, we designed a sequence that provides data for reducing the image distortions due to B0 inhomogeneities and gradient nonlinearities and used the data to extend the MR FoV. Copyright © 2011 Wiley Periodicals, Inc.

  13. Penalty Algorithm Based on Conjugate Gradient Method for Solving Portfolio Management Problem

    Directory of Open Access Journals (Sweden)

    Wang YaLin

    2009-01-01

    Full Text Available A new approach was proposed to reformulate the bi-objective optimization model of portfolio management into an unconstrained minimization problem, where the objective function is a piecewise quadratic polynomial. We presented some properties of such an objective function. Then, a class of penalty algorithms based on the well-known conjugate gradient methods was developed to find the solution of the portfolio management problem. By applying the proposed algorithm to real problems from the stock market in China, it was shown that the algorithm is promising.
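
    As a hedged illustration of the general idea (a penalized, unconstrained portfolio objective minimized with a conjugate gradient routine), the sketch below uses a generic mean-variance objective with quadratic penalties and SciPy's CG minimizer; mu, Sigma, lam and rho are assumed inputs, and this is not the paper's piecewise-quadratic formulation.

      import numpy as np
      from scipy.optimize import minimize

      def solve_portfolio(mu, Sigma, lam=1.0, rho=100.0):
          n = len(mu)

          def objective(w):
              risk = w @ Sigma @ w                         # variance term
              ret = mu @ w                                 # expected return term
              budget = (np.sum(w) - 1.0) ** 2              # quadratic penalty for the budget constraint
              shorting = np.sum(np.minimum(w, 0.0) ** 2)   # penalty for negative weights
              return lam * risk - ret + rho * (budget + shorting)

          w0 = np.full(n, 1.0 / n)                         # equal-weight starting point
          res = minimize(objective, w0, method='CG')       # conjugate gradient minimization
          return res.x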

  14. Analyze the optimal solutions of optimization problems by means of fractional gradient based system using VIM

    Directory of Open Access Journals (Sweden)

    Firat Evirgen

    2016-04-01

    Full Text Available In this paper, a class of Nonlinear Programming problem is modeled with a gradient based system of fractional order differential equations in Caputo's sense. To see the overlap between the equilibrium point of the fractional order dynamic system and the optimal solution of the NLP problem over a longer timespan, the Multistage Variational Iteration Method is applied. The comparisons among the multistage variational iteration method, the variational iteration method and the fourth order Runge-Kutta method in fractional and integer order show that the fractional order model and techniques can be seen as an effective and reliable tool for finding optimal solutions of Nonlinear Programming problems.

  15. Reconstruction for limited-projection fluorescence molecular tomography based on projected restarted conjugate gradient normal residual.

    Science.gov (United States)

    Cao, Xu; Zhang, Bin; Liu, Fei; Wang, Xin; Bai, Jing

    2011-12-01

    Limited-projection fluorescence molecular tomography (FMT) can greatly reduce the acquisition time, which is suitable for resolving fast biology processes in vivo but suffers from severe ill-posedness because of the reconstruction using only limited projections. To overcome the severe ill-posedness, we report a reconstruction method based on the projected restarted conjugate gradient normal residual. The reconstruction results of two phantom experiments demonstrate that the proposed method is feasible for limited-projection FMT. © 2011 Optical Society of America
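
    A rough sketch of a projected, restarted CGNR-style loop for a nonnegative linear inverse problem A x = b, assuming a dense NumPy system matrix; the restart count, inner iteration count and the projection onto x >= 0 are illustrative assumptions, and this is not the authors' reconstruction code.

      import numpy as np
      from scipy.sparse.linalg import cg, LinearOperator

      def projected_restarted_cgnr(A, b, n_restarts=20, inner_iters=10):
          m, n = A.shape
          # Normal-equations operator A^T A applied without forming it explicitly
          AtA = LinearOperator((n, n), matvec=lambda v: A.T @ (A @ v))
          rhs = A.T @ b
          x = np.zeros(n)
          for _ in range(n_restarts):
              x, _ = cg(AtA, rhs, x0=x, maxiter=inner_iters)  # a few CG steps on A^T A x = A^T b
              x = np.maximum(x, 0.0)                          # project onto the nonnegative orthant
          return x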

  16. Stochastic parallel gradient descent based adaptive optics used for a high contrast imaging coronagraph

    International Nuclear Information System (INIS)

    Dong Bing; Ren Deqing; Zhang Xi

    2011-01-01

    An adaptive optics (AO) system based on a stochastic parallel gradient descent (SPGD) algorithm is proposed to reduce the speckle noise in the optical system of a stellar coronagraph in order to further improve the contrast. The principle of the SPGD algorithm is described briefly and a metric suitable for point-source imaging optimization is given. The feasibility and good performance of the SPGD algorithm are demonstrated by an experimental system featuring a 140-actuator deformable mirror and a Hartmann-Shack wavefront sensor. The SPGD-based AO is then applied to a liquid crystal array (LCA) based coronagraph to improve the contrast. The LCA can modulate the incoming light to generate a pupil apodization mask of any pattern. A circular stepped pattern is used in our preliminary experiment and the image contrast improves from 10^-3 to 10^-4.5 at an angular distance of 2λ/D after correction by the SPGD-based AO.
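
    The SPGD update itself is simple enough to sketch. The snippet below assumes a callable measure_metric(u) that returns the image-quality metric for a given actuator command vector; the gain, perturbation amplitude and iteration count are placeholders rather than the values used in the experiment.

      import numpy as np

      def spgd_optimize(measure_metric, n_actuators=140, n_iters=500,
                        delta=0.02, gain=0.5):
          u = np.zeros(n_actuators)                        # actuator command vector
          for _ in range(n_iters):
              perturb = delta * np.random.choice([-1.0, 1.0], size=n_actuators)
              j_plus = measure_metric(u + perturb)         # metric with +delta perturbation
              j_minus = measure_metric(u - perturb)        # metric with -delta perturbation
              u += gain * (j_plus - j_minus) * perturb     # stochastic parallel gradient step (maximizes the metric)
          return u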

  17. Dose gradient analyses in linac-based intracranial stereotactic radiosurgery using paddick's gradient index. Consideration of the optimal method for plan evaluation

    International Nuclear Information System (INIS)

    Ohtakara, Kazuhiro; Hayashi, Shinya; Hoshi, Hiroaki

    2011-01-01

    The objective of our study was to describe the dose gradient characteristics of Linac-based stereotactic radiosurgery using Paddick's gradient index (GI) and to elucidate the factors influencing the GI value. Seventy-three plans for brain metastases using the dynamic conformal arcs were reviewed. The GI values were calculated at the 80% and 90% isodose surfaces (IDSs) and at the different target coverage IDSs (D99, D95, D90, and D85). The GI values significantly decreased as the target coverage of the reference IDS increased (the percentage of the IDS decreased). There was a significant inverse correlation between the GI values and target volume. The plans generated with the addition of a 1-mm leaf margin had worse GI values at both the D99 and D95 relative to those without a leaf margin. The number and arrangement of arcs also affected the GI value. The GI values are highly sensitive to the IDS selection variability for dose prescription or evaluation, the target volume, and the planning method. To objectively compare the quality of dose gradient between rival plans, it would be preferable to employ the GI defined at the reference IDS indicating the specific target coverage (e.g., D95), irrespective of the intended marginal dose. The modified GI (mGI), defined in this study, substituting the denominator of the original GI with the target volume, would be useful to compensate for the falsely superior GI value in cases of target over-coverage with the reference IDS and to objectively evaluate the dose gradient outside the target boundary. (author)
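
    For reference, Paddick's gradient index in its usual form, together with the modified index described in the abstract (denominator replaced by the target volume); PIV denotes the prescription isodose volume, PIV_50% the volume enclosed by half the prescription isodose, and TV the target volume.

      \[
        \mathrm{GI} = \frac{\mathrm{PIV}_{50\%}}{\mathrm{PIV}},
        \qquad
        \mathrm{mGI} = \frac{\mathrm{PIV}_{50\%}}{\mathrm{TV}}
      \]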

  18. Efficient L1 regularization-based reconstruction for fluorescent molecular tomography using restarted nonlinear conjugate gradient.

    Science.gov (United States)

    Shi, Junwei; Zhang, Bin; Liu, Fei; Luo, Jianwen; Bai, Jing

    2013-09-15

    For the ill-posed fluorescent molecular tomography (FMT) inverse problem, the L1 regularization can protect the high-frequency information like edges while effectively reduce the image noise. However, the state-of-the-art L1 regularization-based algorithms for FMT reconstruction are expensive in memory, especially for large-scale problems. An efficient L1 regularization-based reconstruction algorithm based on nonlinear conjugate gradient with restarted strategy is proposed to increase the computational speed with low memory consumption. The reconstruction results from phantom experiments demonstrate that the proposed algorithm can obtain high spatial resolution and high signal-to-noise ratio, as well as high localization accuracy for fluorescence targets.
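
    The generic form of the ℓ1-regularized reconstruction problem referred to above can be written as below, with A the system matrix, b the boundary measurements, x the unknown fluorescence distribution and λ the regularization weight (symbols introduced here only for illustration):

      \[
        \hat{x} = \arg\min_{x}\; \tfrac{1}{2}\,\lVert A x - b \rVert_2^2 + \lambda \lVert x \rVert_1
      \]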

  19. Stable Computation of the Vertical Gradient of Potential Field Data Based on Incorporating the Smoothing Filters

    Science.gov (United States)

    Baniamerian, Jamaledin; Liu, Shuang; Abbas, Mahmoud Ahmed

    2018-04-01

    The vertical gradient is an essential tool in interpretation algorithms. It is also the primary enhancement technique to improve the resolution of measured gravity and magnetic field data, since it has higher sensitivity to changes in physical properties (density or susceptibility) of the subsurface structures than the measured field. If the field derivatives are not directly measured with gradiometers, they can be calculated from the collected gravity or magnetic data using numerical methods such as those based on the fast Fourier transform technique. The gradients behave similarly to high-pass filters and enhance the short-wavelength anomalies, which may be associated with either small, shallow sources or high-frequency noise in the data, and their numerical computation is susceptible to amplification of noise. This behaviour can adversely affect the stability of the derivatives in the presence of even a small level of noise and consequently limit their application in interpretation methods. Adding a smoothing term to the conventional formulation for calculating the vertical gradient in the Fourier domain can improve the stability of numerical differentiation of the field. In this paper, we propose a strategy in which the overall efficiency of the classical algorithm in the Fourier domain is improved by incorporating two different smoothing filters. For the smoothing term, a simple qualitative procedure based on the upward continuation of the field to a higher altitude is introduced to estimate the related parameters, which are called the regularization parameter and cut-off wavenumber in the corresponding filters. The efficiency of these new approaches is validated by computing the first- and second-order derivatives of noise-corrupted synthetic data sets and then comparing the results with the true ones. The filtered and unfiltered vertical gradients are incorporated into the extended Euler deconvolution to estimate the depth and structural index of a magnetic
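
    As background, the classical wavenumber-domain computation of the n-th vertical derivative of a potential field f, and one common smoothed variant obtained by combining it with upward continuation to a height h, can be written as below; the specific regularized filters and parameter-selection procedure of the paper may differ from this textbook form.

      \[
        \mathcal{F}\!\left[\frac{\partial^{n} f}{\partial z^{n}}\right]
          = |\mathbf{k}|^{\,n}\,\mathcal{F}[f],
        \qquad
        \mathcal{F}\!\left[\frac{\partial^{n} f}{\partial z^{n}}\right]_{\text{smoothed}}
          = |\mathbf{k}|^{\,n}\,e^{-|\mathbf{k}|\,h}\,\mathcal{F}[f]
      \]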

  20. Genetically based differentiation in growth of multiple non-native plant species along a steep environmental gradient.

    Science.gov (United States)

    Haider, Sylvia; Kueffer, Christoph; Edwards, Peter J; Alexander, Jake M

    2012-09-01

    A non-native plant species spreading along an environmental gradient may need to adjust its growth to the prevailing conditions that it encounters by a combination of phenotypic plasticity and genetic adaptation. There have been several studies of how non-native species respond to changing environmental conditions along latitudinal gradients, but much less is known about elevational gradients. We conducted a climate chamber experiment to investigate plastic and genetically based growth responses of 13 herbaceous non-native plants along an elevational gradient from 100 to 2,000 m a.s.l. in Tenerife. Conditions in the field ranged from high anthropogenic disturbance but generally favourable temperatures for plant growth in the lower half of the gradient, to low disturbance but much cooler conditions in the upper half. We collected seed from low, mid and high elevations and grew them in climate chambers under the characteristic temperatures at these three elevations. Growth of all species was reduced under lower temperatures along both halves of the gradient. We found consistent genetically based differences in growth over the upper elevational gradient, with plants from high-elevation sites growing more slowly than those from mid-elevation ones, while the pattern in the lower part of the gradient was more mixed. Our data suggest that many non-native plants might respond to climate along elevational gradients by genetically based changes in key traits, especially at higher elevations where low temperatures probably impose a stronger selection pressure. At lower elevations, where anthropogenic influences are greater, higher gene flow and frequent disturbance might favour genotypes with broad ecological amplitudes. Thus the importance of evolutionary processes for invasion success is likely to be context-dependent.

  1. Variational Level Set Method for Two-Stage Image Segmentation Based on Morphological Gradients

    Directory of Open Access Journals (Sweden)

    Zemin Ren

    2014-01-01

    Full Text Available We use a variational level set method and transition region extraction techniques to achieve the image segmentation task. The proposed scheme consists of two steps. We first develop a novel algorithm to extract the transition region based on the morphological gradient. After this, we integrate the transition region into a variational level set framework and develop a novel geometric active contour model, which includes an external energy based on the transition region and a fractional order edge indicator function. The external energy is used to drive the zero level set toward the desired image features, such as object boundaries. Due to this external energy, the proposed model allows for more flexible initialization. The fractional order edge indicator function is incorporated into the length regularization term to diminish the influence of noise. Moreover, an internal energy is added into the proposed model to penalize the deviation of the level set function from a signed distance function. The resulting evolution of the level set function is the gradient flow that minimizes the overall energy functional. The proposed model has been applied to both synthetic and real images with promising results.

  2. Game Algorithm for Resource Allocation Based on Intelligent Gradient in HetNet

    Directory of Open Access Journals (Sweden)

    Fang Ye

    2017-02-01

    Full Text Available In order to improve system performance such as throughput, the heterogeneous network (HetNet) has become an effective solution in Long Term Evolution-Advanced (LTE-A). However, co-channel interference leads to degradation of the HetNet throughput, because femtocells are always arranged to share the spectrum with the macro base station. In this paper, in view of the serious cross-layer interference in a double-layer HetNet, the Stackelberg game model is adopted to analyze the resource allocation methods of the network. Unlike traditional system models that focus only on improving macro base station performance, we take into account the overall system performance and build a revenue function with convexity. The system utility functions are defined as the average throughput and do not adopt a frequency spectrum trading method, so as to avoid excessive signaling overhead. Because the continuous Nash equilibrium of the built game model has a wide value range, a gradient iterative algorithm is introduced to reduce the computational complexity. For the solution of the Nash equilibrium, a gradient iterative algorithm is proposed that is able to intelligently choose adjustment factors. The Nash equilibrium can be solved quickly; meanwhile, the step of presetting adjustment factors according to network parameters, required in the traditional linear iterative model, is avoided. Simulation results show that the proposed algorithm enhances the overall performance of the system.

  3. Identification and prognostic value of anterior gradient protein 2 expression in breast cancer based on tissue microarray.

    Science.gov (United States)

    Guo, Jilong; Gong, Guohua; Zhang, Bin

    2017-07-01

    Breast cancer has attracted substantial attention as one of the major cancers causing death in women. It is crucial to find potential biomarkers of prognostic value in breast cancer. In this study, the expression pattern of anterior gradient protein 2 in breast cancer was identified based on the main molecular subgroups. Through analysis of 69 samples from the Gene Expression Omnibus database, we found that anterior gradient protein 2 expression was significantly higher in non-triple-negative breast cancer tissues compared with normal tissues and triple-negative breast cancer tissues (p gradient protein 2 expression pattern. Furthermore, we performed immunohistochemical analysis. The quantification results revealed that anterior gradient protein 2 is highly expressed in non-triple-negative breast cancer (grade 3 excluded) and grade 1 + 2 (triple-negative breast cancer excluded) tumours compared with normal tissues. Anterior gradient protein 2 was significantly highly expressed in non-triple-negative breast cancer (grade 3 excluded) and non-triple-negative breast cancer tissues compared with triple-negative breast cancer tissues (p gradient protein 2 was significantly highly expressed in grade 1 + 2 (triple-negative breast cancer excluded) and grade 1 + 2 tissues compared with grade 3 tissues (p gradient protein 2 expression was significantly associated with histologic type, histological grade, oestrogen status and progesterone status. Univariate analysis of clinicopathological variables showed that anterior gradient protein 2 expression, tumour size and lymph node status were significantly correlated with overall survival in patients with grade 1 and 2 tumours. Cox multivariate analysis revealed anterior gradient protein 2 as a putative independent indicator of unfavourable outcomes (p = 0.031). All these data clearly showed that anterior gradient protein 2 is highly expressed in breast cancer and can be regarded as a putative biomarker for

  4. Illicit Drugs, Policing and the Evidence-Based Policy Paradigm

    Science.gov (United States)

    Ritter, Alison; Lancaster, Kari

    2013-01-01

    The mantra of evidence-based policy (EBP) suggests that endeavours to implement evidence-based policing will produce better outcomes. However there is dissonance between the rhetoric of EBP and the actuality of policing policy. This disjuncture is critically analysed using the case study of illicit drugs policing. The dissonance may be ameliorated…

  5. Thickness filters for gradient based multi-material and thickness optimization of laminated composite structures

    DEFF Research Database (Denmark)

    Sørensen, Rene; Lund, Erik

    2015-01-01

    This paper presents a new gradient based method for performing discrete material and thickness optimization of laminated composite structures. The novelty in the new method lies in the application of so-called casting constraints, or thickness filters in this context, to control the thickness variation throughout the laminate. The filters replace the layerwise density variables with a single continuous through-the-thickness design variable. Consequently, the filters eliminate the need for having explicit constraints for preventing intermediate void through the thickness of the laminate. Therefore, the filters reduce both the number of constraints and design variables in the optimization problem. Based upon a continuous approximation of a unit step function, the thickness filters are capable of projecting discrete 0/1 values to the underlying layerwise or “physical” density variables which

  6. Full Waveform Inversion Using an Energy-Based Objective Function with Efficient Calculation of the Gradient

    KAUST Repository

    Choi, Yun Seok

    2017-05-26

    Full waveform inversion (FWI) using an energy-based objective function has the potential to provide long wavelength model information even without low frequency in the data. However, without the back-propagation method (adjoint-state method), its implementation is impractical for the model size of general seismic survey. We derive the gradient of the energy-based objective function using the back-propagation method to make its FWI feasible. We also raise the energy signal to the power of a small positive number to properly handle the energy signal imbalance as a function of offset. Examples demonstrate that the proposed FWI algorithm provides a convergent long wavelength structure model even without low-frequency information, which can be used as a good starting model for the subsequent conventional FWI.

  7. Filtering Airborne LIDAR Data by AN Improved Morphological Method Based on Multi-Gradient Analysis

    Science.gov (United States)

    Li, Y.

    2013-05-01

    The technology of airborne Light Detection And Ranging (LIDAR) is capable of acquiring dense and accurate 3D geospatial data. Although much related work has been done in recent years, LIDAR data filtering is still a challenging task, especially for areas with high relief or hybrid geographic features. In order to address bare-ground extraction from LIDAR point clouds of complex landscapes, this paper proposes a novel morphological filtering algorithm based on multi-gradient analysis that accounts for the characteristics of the LIDAR data distribution. Firstly, the point clouds are organized by an index mesh. Then, the multi-gradient of each point is calculated using the morphological method. Objects are then removed gradually by iteratively applying an improved opening operation, constrained by the multi-gradient, to selected points. Fifteen sample data sets provided by ISPRS Working Group III/3 are employed to test the proposed filtering algorithm. These sample data include environments that may lead to filtering difficulty. Experimental results show that the proposed filtering algorithm adapts well to various scenes, including urban and rural areas. Omission error, commission error and total error can be simultaneously controlled within a relatively small interval. The algorithm can efficiently remove object points while preserving ground points to a great degree.
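
    As a toy illustration of the morphological building block that such ground filters rely on (a grey-scale opening on a gridded minimum-height surface), the sketch below uses SciPy; the multi-gradient constraint and iterative point selection of the proposed algorithm are not reproduced, and the window size and residual threshold are assumptions.

      import numpy as np
      from scipy import ndimage

      def ground_candidates(height_grid, window=11, height_threshold=1.0):
          opened = ndimage.grey_opening(height_grid, size=window)  # remove small raised objects
          residual = height_grid - opened                          # objects stand out as positive residuals
          return residual < height_threshold                       # True where a cell looks like ground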

  8. A Sea-Sky Line Detection Method for Unmanned Surface Vehicles Based on Gradient Saliency.

    Science.gov (United States)

    Wang, Bo; Su, Yumin; Wan, Lei

    2016-04-15

    Special features in real marine environments such as cloud clutter, sea glint and weather conditions always result in various kinds of interference in optical images, which make it very difficult for unmanned surface vehicles (USVs) to detect the sea-sky line (SSL) accurately. To solve this problem a saliency-based SSL detection method is proposed. Through the computation of gradient saliency the line features of SSL are enhanced effectively, while other interference factors are relatively suppressed, and line support regions are obtained by a region growing method on gradient orientation. The SSL identification is achieved according to region contrast, line segment length and orientation features, and optimal state estimation of SSL detection is implemented by introducing a cubature Kalman filter (CKF). In the end, the proposed method is tested on a benchmark dataset from the "XL" USV in a real marine environment, and the experimental results demonstrate that the proposed method is significantly superior to other state-of-the-art methods in terms of accuracy rate and real-time performance, and its accuracy and stability are effectively improved by the CKF.

  9. Image defog algorithm based on open close filter and gradient domain recursive bilateral filter

    Science.gov (United States)

    Liu, Daqian; Liu, Wanjun; Zhao, Qingguo; Fei, Bowen

    2017-11-01

    To solve the problems of fuzzy details, color distortion and low brightness in images obtained by the dark channel prior defog algorithm, an image defog algorithm based on an open-close filter and a gradient domain recursive bilateral filter, referred to as OCRBF, was put forward. The OCRBF algorithm first makes use of a weighted quadtree to obtain a more accurate global atmospheric value, then applies a multi-structuring-element morphological open-close filter to the minimum channel map to obtain a rough scattering map via the dark channel prior, uses a variogram to correct the transmittance map, and applies a gradient domain recursive bilateral filter for smoothing; finally, it recovers the image through the image degradation model and adjusts the contrast to obtain a bright, clear, fog-free image. A large number of experimental results show that the proposed defog method removes fog well and recovers the color and definition of foggy images containing close-range content, strong perspective and bright areas; compared with other image defog algorithms, it obtains clearer and more natural fog-free images with more visible details. Moreover, the time complexity of the SIDA algorithm is linearly related to the number of image pixels.
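
    The dark channel prior mentioned above has a standard form that is easy to sketch; the snippet below computes the dark channel and a rough transmission map with common default parameters (patch size, omega), and deliberately omits the paper's open-close filtering and gradient-domain recursive bilateral refinement.

      import numpy as np
      from scipy.ndimage import minimum_filter

      def dark_channel(image, patch=15):
          min_channel = image.min(axis=2)                   # per-pixel minimum over the RGB channels
          return minimum_filter(min_channel, size=patch)    # local minimum over a square patch

      def rough_transmission(image, atmospheric_light, patch=15, omega=0.95):
          normalized = image / atmospheric_light            # normalise by the estimated airlight
          return 1.0 - omega * dark_channel(normalized, patch)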

  10. THE IMAGE REGISTRATION OF FOURIER-MELLIN BASED ON THE COMBINATION OF PROJECTION AND GRADIENT PREPROCESSING

    Directory of Open Access Journals (Sweden)

    D. Gao

    2017-09-01

    Full Text Available Image registration is one of the most important applications in the field of image processing. The Fourier-Mellin transform method, which has the advantages of high precision and good robustness to changes in illumination, partial occlusion, noise and so on, is widely used. However, this method cannot obtain a unique cross-power-spectrum pulse function for non-parallel image pairs, and for some image pairs no such pulse can be obtained at all. In this paper, an image registration method based on the Fourier-Mellin transform with projection and gradient preprocessing is proposed. According to the projection conformational equation, the method calculates the image projection transformation matrix to correct the tilted image; then, gradient preprocessing and the Fourier-Mellin transformation are performed on the corrected image to obtain the registration parameters. The experimental results show that the method makes Fourier-Mellin registration applicable not only to parallel image pairs but also to non-parallel image pairs. What's more, better registration results can be obtained
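
    A hedged sketch of the phase-correlation step underlying Fourier-Mellin registration is given below: it recovers a pure translation from the peak of the normalised cross-power spectrum. The log-polar resampling needed for rotation and scale, and the projection/gradient preprocessing proposed in the paper, are omitted.

      import numpy as np

      def phase_correlation_shift(img_a, img_b, eps=1e-9):
          fa = np.fft.fft2(img_a)
          fb = np.fft.fft2(img_b)
          cross_power = fa * np.conj(fb)
          cross_power /= (np.abs(cross_power) + eps)   # normalised cross-power spectrum
          pulse = np.abs(np.fft.ifft2(cross_power))    # sharp peak at the relative shift
          dy, dx = np.unravel_index(np.argmax(pulse), pulse.shape)
          return dy, dx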

  11. Fast gradient-based methods for Bayesian reconstruction of transmission and emission PET images

    International Nuclear Information System (INIS)

    Mumcuglu, E.U.; Leahy, R.; Zhou, Z.; Cherry, S.R.

    1994-01-01

    The authors describe conjugate gradient algorithms for reconstruction of transmission and emission PET images. The reconstructions are based on a Bayesian formulation, where the data are modeled as a collection of independent Poisson random variables and the image is modeled using a Markov random field. A conjugate gradient algorithm is used to compute a maximum a posteriori (MAP) estimate of the image by maximizing over the posterior density. To ensure nonnegativity of the solution, a penalty function is used to convert the problem to one of unconstrained optimization. Preconditioners are used to enhance convergence rates. These methods generally achieve effective convergence in 15--25 iterations. Reconstructions are presented of an 18 FDG whole body scan from data collected using a Siemens/CTI ECAT931 whole body system. These results indicate significant improvements in emission image quality using the Bayesian approach, in comparison to filtered backprojection, particularly when reprojections of the MAP transmission image are used in place of the standard attenuation correction factors

  12. A Sea-Sky Line Detection Method for Unmanned Surface Vehicles Based on Gradient Saliency

    Directory of Open Access Journals (Sweden)

    Bo Wang

    2016-04-01

    Full Text Available Special features in real marine environments such as cloud clutter, sea glint and weather conditions always result in various kinds of interference in optical images, which make it very difficult for unmanned surface vehicles (USVs) to detect the sea-sky line (SSL) accurately. To solve this problem a saliency-based SSL detection method is proposed. Through the computation of gradient saliency the line features of SSL are enhanced effectively, while other interference factors are relatively suppressed, and line support regions are obtained by a region growing method on gradient orientation. The SSL identification is achieved according to region contrast, line segment length and orientation features, and optimal state estimation of SSL detection is implemented by introducing a cubature Kalman filter (CKF). In the end, the proposed method is tested on a benchmark dataset from the “XL” USV in a real marine environment, and the experimental results demonstrate that the proposed method is significantly superior to other state-of-the-art methods in terms of accuracy rate and real-time performance, and its accuracy and stability are effectively improved by the CKF.

  13. Evaluation of stress gradient by x-ray stress measurement based on change in angle phi

    International Nuclear Information System (INIS)

    Sasaki, Toshihiko; Kuramoto, Makoto; Yoshioka, Yasuo.

    1985-01-01

    A new principle of X-ray stress evaluation for a sample with a steep stress gradient has been proposed. The feature of this method is that the stress is determined by using the so-called phi-method, based on the change of the phi-angle, and thus has no effect on the penetration depth of the X-rays. The procedure is as follows: firstly, an average stress within the penetration depth of the X-rays is determined by changing only the phi-angle under a fixed psi-angle, and then the distribution of the average stress vs. the penetration depth of the X-rays is obtained by repeating the same procedure at different psi-angles. The following conclusions were drawn from residual stress measurements on a carbon steel of type S 55 C polished with emery paper. This method is practical enough to use for a plane stress problem, and the assumption of a linear stress gradient adopted in the authors' previous investigations is valid. In the case of a triaxial stress analysis, this method is effective for the solution of the three shearing stresses; however, the three normal stresses cannot be solved perfectly except at particular psi-angles. (author)

  14. Method and means for a spatial and temporal probe for laser-generated plumes based on density gradients

    Science.gov (United States)

    Yeung, E.S.; Chen, G.

    1990-05-01

    A method and means are disclosed for a spatial and temporal probe for laser-generated plumes based on density gradients. The method includes generation of a plume of vaporized material from a surface by an energy source. The probe laser beam is positioned so that the plume passes through it. Movement of the probe laser beam caused by refraction from the density gradient of the plume is monitored. Spatial and temporal information, correlated to one another, is then derived. 15 figs.

  15. Policy and Validity Prospects for Performance-Based Assessment.

    Science.gov (United States)

    Baker, Eva L.; And Others

    1994-01-01

    This article describes performance-based assessment as expounded by its proponents, comments on these conceptions, reviews evidence regarding the technical quality of performance-based assessment, and considers its validity under various policy options. (JDD)

  16. Base-stock policies with reservations

    NARCIS (Netherlands)

    van Foreest, Nicky D.; Teunter, Ruud H.; Syntetos, Aris A.

    2018-01-01

    All intensively studied and widely applied inventory control policies satisfy demand in accordance with the First-Come-First-Served (FCFS) rule, whether this demand is in backorder or not. Interestingly, this rule is sub-optimal when the fill-rate is constrained or when the backorder cost structure

  17. Strain gradient plasticity-based modeling of hydrogen environment assisted cracking

    DEFF Research Database (Denmark)

    Martínez Pañeda, Emilio; Niordson, Christian Frithiof; P. Gangloff, Richard

    2016-01-01

    Finite element analysis of stress about a blunt crack tip, emphasizing finite strain and phenomenological and mechanism-based strain gradient plasticity (SGP) formulations, is integrated with electrochemical assessment of occluded-crack tip hydrogen (H) solubility and two H-decohesion models to predict hydrogen environment assisted crack growth properties. SGP elevates crack tip geometrically necessary dislocation density and flow stress, with enhancement declining with increasing alloy strength. Elevated hydrostatic stress promotes high-trapped H concentration for crack tip damage; it is imperative to account for SGP in H cracking models. Predictions of the threshold stress intensity factor and H-diffusion limited Stage II crack growth rate agree with experimental data for a high strength austenitic Ni-Cu superalloy (Monel® K-500) and two modern ultra-high strength martensitic steels (Aer

  18. Sparse reconstruction for quantitative bioluminescence tomography based on the incomplete variables truncated conjugate gradient method.

    Science.gov (United States)

    He, Xiaowei; Liang, Jimin; Wang, Xiaorui; Yu, Jingjing; Qu, Xiaochao; Wang, Xiaodong; Hou, Yanbin; Chen, Duofang; Liu, Fang; Tian, Jie

    2010-11-22

    In this paper, we present an incomplete variables truncated conjugate gradient (IVTCG) method for bioluminescence tomography (BLT). Considering the sparse characteristic of the light source and insufficient surface measurement in the BLT scenarios, we combine a sparseness-inducing (ℓ1 norm) regularization term with a quadratic error term in the IVTCG-based framework for solving the inverse problem. By limiting the number of variables updated at each iteration and using a variable splitting strategy to find the search direction more efficiently, it obtains fast and stable source reconstruction, even without a priori information on the permissible source region and multispectral measurements. Numerical experiments on a mouse atlas validate the effectiveness of the method. In vivo mouse experimental results further indicate its potential for a practical BLT system.

  19. Accelerated gradient-based free form deformable registration for online adaptive radiotherapy

    International Nuclear Information System (INIS)

    Yu, Gang; Yang, Guanyu; Shu, Huazhong; Li, Baosheng; Liang, Yueqiang; Yin, Yong; Li, Dengwang

    2015-01-01

    The registration of planning fan-beam computed tomography (FBCT) and daily cone-beam CT (CBCT) is a crucial step in adaptive radiation therapy. The current intensity-based registration algorithms, such as Demons, may fail when they are used to register FBCT and CBCT, because the CT numbers in CBCT cannot exactly correspond to the electron densities. In this paper, we investigated the effects of CBCT intensity inaccuracy on the registration accuracy and developed an accurate gradient-based free form deformation algorithm (GFFD). GFFD distinguishes itself from other free form deformable registration algorithms by (a) measuring the similarity using the 3D gradient vector fields to avoid the effect of inconsistent intensities between the two modalities; (b) accommodating image sampling anisotropy using the local polynomial approximation-intersection of confidence intervals (LPA-ICI) algorithm to ensure a smooth and continuous displacement field; and (c) introducing a ‘bi-directional’ force along with an adaptive force strength adjustment to accelerate the convergence process. It is expected that such a strategy can decrease the effect of the inconsistent intensities between the two modalities, thus improving the registration accuracy and robustness. Moreover, for clinical application, the algorithm was implemented by graphics processing units (GPU) through OpenCL framework. The registration time of the GFFD algorithm for each set of CT data ranges from 8 to 13 s. The applications of on-line adaptive image-guided radiation therapy, including auto-propagation of contours, aperture-optimization and dose volume histogram (DVH) in the course of radiation therapy were also studied by in-house-developed software. (paper)

  20. A Robust Gradient Based Method for Building Extraction from LiDAR and Photogrammetric Imagery

    Directory of Open Access Journals (Sweden)

    Fasahat Ullah Siddiqui

    2016-07-01

    Full Text Available Existing automatic building extraction methods are not effective in extracting buildings which are small in size and have transparent roofs. The application of a large area threshold prohibits detection of small buildings, and the use of ground points in generating the building mask prevents detection of transparent buildings. In addition, the existing methods use numerous parameters to extract buildings in complex environments, e.g., hilly areas and high vegetation. However, the empirical tuning of a large number of parameters reduces the robustness of building extraction methods. This paper proposes a novel Gradient-based Building Extraction (GBE) method to address these limitations. The proposed method transforms the Light Detection And Ranging (LiDAR) height information into an intensity image without interpolation of point heights and then analyses the gradient information in the image. Generally, building roof planes have a constant height change along the slope of a roof plane whereas trees have a random height change. With such an analysis, buildings of a greater range of sizes with a transparent or opaque roof can be extracted. In addition, a local colour matching approach is introduced as a post-processing stage to eliminate trees. This stage of our proposed method does not require any manual setting and all parameters are set automatically from the data. The other post-processing stages, including variance, point density and shadow elimination, are also applied to verify the extracted buildings, where comparatively fewer empirically set parameters are used. The performance of the proposed GBE method is evaluated on two benchmark data sets by using the object and pixel based metrics (completeness, correctness and quality). Our experimental results show the effectiveness of the proposed method in eliminating trees, extracting buildings of all sizes, and extracting buildings with and without transparent roofs. When compared with current state

  1. ℓ0 Gradient Minimization Based Image Reconstruction for Limited-Angle Computed Tomography.

    Directory of Open Access Journals (Sweden)

    Wei Yu

    Full Text Available In medical and industrial applications of computed tomography (CT) imaging, limited by the scanning environment and the risk of excessive X-ray radiation exposure imposed on the patients, reconstructing high quality CT images from limited projection data has become a hot topic. X-ray imaging over a limited scanning angular range is an effective imaging modality to reduce the radiation dose to the patients. As the projection data available in this modality are incomplete, limited-angle CT image reconstruction is actually an ill-posed inverse problem. To solve the problem, images reconstructed by the conventional filtered back projection (FBP) algorithm frequently exhibit conspicuous streak artifacts and gradually changed artifacts near edges. Image reconstruction based on total variation minimization (TVM) can significantly reduce streak artifacts in few-view CT, but it suffers from the gradually changed artifacts near edges in limited-angle CT. To suppress this kind of artifacts, we develop an image reconstruction algorithm based on ℓ0 gradient minimization for limited-angle CT in this paper. The ℓ0-norm of the image gradient is taken as the regularization function in the framework of the developed reconstruction model. We transformed the optimization problem into a few optimization sub-problems and then solved these sub-problems in the manner of alternating iteration. Numerical experiments are performed to validate the efficiency and the feasibility of the developed algorithm. From the statistical analysis of the performance evaluations, peak signal-to-noise ratio (PSNR) and normalized root mean square distance (NRMSD), it shows that there are significant statistical differences between different algorithms from different scanning angular ranges (p<0.0001). From the experimental results, it also indicates that the developed algorithm outperforms classical reconstruction algorithms in suppressing the streak artifacts and the gradually changed

  2. Drawing for Traffic Marking Using Bidirectional Gradient-Based Detection with MMS LIDAR Intensity

    Science.gov (United States)

    Takahashi, G.; Takeda, H.; Nakamura, K.

    2016-06-01

    Recently, the development of autonomous cars has been accelerating through the integration of highly advanced artificial intelligence, which increases demand for digital maps with high accuracy. In particular, traffic markings are required to be precisely digitized since automatic driving utilizes them for position detection. To draw traffic markings, we benefit from Mobile Mapping Systems (MMS) equipped with high-density Laser imaging Detection and Ranging (LiDAR) scanners, which produce large amounts of data efficiently with XYZ coordinates along with reflectance intensity. Digitizing these data, on the other hand, has conventionally depended on human operation, which thus suffers from human errors, subjectivity errors, and low reproducibility. We have tackled this problem by means of automatic extraction of traffic markings, which partially succeeded in drawing several traffic markings (G. Takahashi et al., 2014). The key idea of that method was extracting lines using the Hough transform, strategically focused on changes in local reflection intensity along scan lines. However, it failed to extract traffic markings properly in densely marked areas, especially when local changing points are close to each other. In this paper, we propose a bidirectional gradient-based detection method in which local changing points are labelled as a plus or minus group. Given that each label corresponds to the boundary between traffic markings and background, we can identify traffic markings explicitly, meaning traffic lines are differentiated correctly by the proposed method. As such, our automated method, a highly accurate and non-human-operator-dependent method using a bidirectional gradient-based algorithm, can successfully extract traffic lines composed of complex shapes such as a crosswalk, resulting in minimized cost and highly accurate results.

  3. DRAWING FOR TRAFFIC MARKING USING BIDIRECTIONAL GRADIENT-BASED DETECTION WITH MMS LIDAR INTENSITY

    Directory of Open Access Journals (Sweden)

    G. Takahashi

    2016-06-01

    Full Text Available Recently, the development of autonomous cars has been accelerating through the integration of highly advanced artificial intelligence, which increases demand for digital maps with high accuracy. In particular, traffic markings are required to be precisely digitized since automatic driving utilizes them for position detection. To draw traffic markings, we benefit from Mobile Mapping Systems (MMS) equipped with high-density Laser imaging Detection and Ranging (LiDAR) scanners, which produce large amounts of data efficiently with XYZ coordinates along with reflectance intensity. Digitizing these data, on the other hand, has conventionally depended on human operation, which thus suffers from human errors, subjectivity errors, and low reproducibility. We have tackled this problem by means of automatic extraction of traffic markings, which partially succeeded in drawing several traffic markings (G. Takahashi et al., 2014). The key idea of that method was extracting lines using the Hough transform strategically focused on changes in local reflection intensity along scan lines. However, it failed to extract traffic markings properly in densely marked areas, especially when local changing points are close to each other. In this paper, we propose a bidirectional gradient-based detection method in which local changing points are labelled as a plus or minus group. Given that each label corresponds to the boundary between traffic markings and background, we can identify traffic markings explicitly, meaning traffic lines are differentiated correctly by the proposed method. As such, our automated method, a highly accurate and non-human-operator-dependent method using a bidirectional gradient-based algorithm, can successfully extract traffic lines composed of complex shapes such as a crosswalk, resulting in minimized cost and highly accurate results.

  4. Policy-Based Negotiation Engine for Cross-Domain Interoperability

    Science.gov (United States)

    Vatan, Farrokh; Chow, Edward T.

    2012-01-01

    A successful policy negotiation scheme for Policy-Based Management (PBM) has been implemented. Policy negotiation is the process of determining the "best" communication policy that all of the parties involved can agree on. Specifically, the problem is how to reconcile the various (and possibly conflicting) communication protocols used by different divisions. The solution must use protocols available to all parties involved, and should attempt to do so in the best way possible. Which protocols are commonly available, and what the definition of "best" is will be dependent on the parties involved and their individual communications priorities.

  5. Content-addressable memory based enforcement of configurable policies

    Science.gov (United States)

    Berg, Michael J

    2014-05-06

    A monitoring device for monitoring transactions on a bus includes content-addressable memory ("CAM") and a response policy unit. The CAM includes an input coupled to receive a bus transaction tag based on bus traffic on the bus. The CAM stores data tags associated with rules of a security policy to compare the bus transaction tag to the data tags. The CAM generates an output signal indicating whether one or more matches occurred. The response policy unit is coupled to the CAM to receive the output signal from the CAM and to execute a policy action in response to the output signal.

  6. Strengthening Research Capacity and Evidence-Based Policy ...

    International Development Research Centre (IDRC) Digital Library (Canada)

    ... wider Central Asian region lack capacity to conduct empirical analysis and create policies based on research evidence. To address government priorities, the region needs quality research driven by local demands and analytical skills that can inform effective development responses through policy. This 39-month project, ...

  7. A review of Agent Based Modeling for agricultural policy evaluation

    NARCIS (Netherlands)

    Kremmydas, Dimitris; Athanasiadis, I.N.; Rozakis, Stelios

    2018-01-01

    Farm level scale policy analysis is receiving increased attention due to a changing agricultural policy orientation. Agent based models (ABM) are farm level models that have appeared in the end of 1990's, having several differences from traditional farm level models, like the consideration of

  8. Adaptive dynamic programming for discrete-time linear quadratic regulation based on multirate generalised policy iteration

    Science.gov (United States)

    Chun, Tae Yoon; Lee, Jae Young; Park, Jin Bae; Choi, Yoon Ho

    2018-06-01

    In this paper, we propose two multirate generalised policy iteration (GPI) algorithms applied to discrete-time linear quadratic regulation problems. The proposed algorithms are extensions of the existing GPI algorithm that consists of the approximate policy evaluation and policy improvement steps. The two proposed schemes, named heuristic dynamic programming (HDP) and dual HDP (DHP), based on multirate GPI, use multi-step estimation (M-step Bellman equation) at the approximate policy evaluation step for estimating the value function and its gradient called costate, respectively. Then, we show that these two methods with the same update horizon can be considered equivalent in the iteration domain. Furthermore, monotonically increasing and decreasing convergences, so called value iteration (VI)-mode and policy iteration (PI)-mode convergences, are proved to hold for the proposed multirate GPIs. Further, general convergence properties in terms of eigenvalues are also studied. The data-driven online implementation methods for the proposed HDP and DHP are demonstrated and finally, we present the results of numerical simulations performed to verify the effectiveness of the proposed methods.
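
    To make the policy evaluation / policy improvement steps concrete, the sketch below runs a minimal, model-based policy iteration for discrete-time LQR; the paper's multirate, data-driven GPI generalizes this loop, and the known matrices A, B, Q, R and a stabilizing initial gain K0 are assumptions of the sketch rather than part of the proposed algorithm.

      import numpy as np

      def lqr_policy_iteration(A, B, Q, R, K0, n_iters=50):
          K = K0                                              # initial stabilizing feedback u = -K x
          for _ in range(n_iters):
              Acl = A - B @ K
              Qcl = Q + K.T @ R @ K
              # Policy evaluation: solve P = Qcl + Acl' P Acl by fixed-point iteration
              P = Qcl.copy()
              for _ in range(500):
                  P = Qcl + Acl.T @ P @ Acl
              # Policy improvement: greedy gain with respect to the evaluated P
              K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
          return K, P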

  9. Modified Convolutional Neural Network Based on Dropout and the Stochastic Gradient Descent Optimizer

    Directory of Open Access Journals (Sweden)

    Jing Yang

    2018-03-01

    Full Text Available This study proposes a modified convolutional neural network (CNN) algorithm that is based on dropout and the stochastic gradient descent (SGD) optimizer (MCNN-DS), after analyzing the problems of CNNs in extracting convolution features, to improve the feature recognition rate and reduce the time-cost of CNNs. The MCNN-DS has a quadratic CNN structure and adopts the rectified linear unit as the activation function to avoid the gradient problem and accelerate convergence. To address the overfitting problem, the algorithm uses an SGD optimizer, which is implemented by inserting a dropout layer into the fully connected and output layers, to minimize cross entropy. This study used the datasets MNIST, HCL2000, and EnglishHand as the benchmark data, analyzed the performance of the SGD optimizer under different learning parameters, and found that the proposed algorithm exhibited good recognition performance when the learning rate was set to [0.05, 0.07]. The performances of WCNN, MLP-CNN, SVM-ELM, and MCNN-DS were compared. Statistical results showed the following: (1) for the benchmark MNIST, the MCNN-DS exhibited a high recognition rate of 99.97%, and the time-cost of the proposed algorithm was merely 21.95% of that of MLP-CNN, and 10.02% of that of SVM-ELM; (2) compared with SVM-ELM, the average improvement in the recognition rate of MCNN-DS was 2.35% for the benchmark HCL2000, and the time-cost of MCNN-DS was only 15.41%; (3) for the EnglishHand test set, the lowest recognition rate of the algorithm was 84.93%, the highest recognition rate was 95.29%, and the average recognition rate was 89.77%.
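
    The following is not the MCNN-DS network, just a minimal PyTorch illustration of the two ingredients named above: a dropout layer ahead of the fully connected/output stage and an SGD optimizer minimizing cross entropy, with the learning rate chosen inside the reported [0.05, 0.07] range. The layer sizes and the dummy batch are assumptions for illustration.

      import torch
      import torch.nn as nn

      class SmallCNN(nn.Module):
          def __init__(self, n_classes=10):
              super().__init__()
              self.features = nn.Sequential(
                  nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
                  nn.MaxPool2d(2),
                  nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
                  nn.MaxPool2d(2),
              )
              self.classifier = nn.Sequential(
                  nn.Flatten(),
                  nn.Dropout(p=0.5),                 # dropout before the fully connected/output layer
                  nn.Linear(32 * 7 * 7, n_classes),  # assumes 28x28 inputs such as MNIST
              )

          def forward(self, x):
              return self.classifier(self.features(x))

      model = SmallCNN()
      optimizer = torch.optim.SGD(model.parameters(), lr=0.06, momentum=0.9)
      criterion = nn.CrossEntropyLoss()

      # One illustrative training step on a dummy batch
      images, labels = torch.randn(8, 1, 28, 28), torch.randint(0, 10, (8,))
      optimizer.zero_grad()
      loss = criterion(model(images), labels)
      loss.backward()
      optimizer.step()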

  10. Fuzzy logic-based assessment for mapping potential infiltration areas in low-gradient watersheds.

    Science.gov (United States)

    Quiroz Londoño, Orlando Mauricio; Romanelli, Asunción; Lima, María Lourdes; Massone, Héctor Enrique; Martínez, Daniel Emilio

    2016-07-01

    This paper gives an account of the design of a logic-based approach for identifying potential infiltration areas in low-gradient watersheds based on remote sensing data. This methodological framework is applied in a sector of the Pampa Plain, Argentina, which has a high level of agricultural activity and large demands for groundwater supplies. Potential infiltration sites are assessed as a function of two primary topics: hydrologic and soil conditions. The model shows the state of each evaluated subwatershed with respect to its potential contribution to infiltration, mainly based on easily measurable and commonly used parameters: drainage density, geomorphologic units, soil media, land cover, slope and aspect (slope orientation). Mapped outputs from the logic model displayed 42% very low-low, 16% moderate and 41% high-very high contribution to potential infiltration in the whole watershed. Subwatersheds in the upper and lower section were identified as areas with high to very high potential infiltration according to the following media features: low drainage density (drainage plain and, dunes and beaches. Copyright © 2016 Elsevier Ltd. All rights reserved.

  11. Missing value imputation in DNA microarrays based on conjugate gradient method.

    Science.gov (United States)

    Dorri, Fatemeh; Azmi, Paeiz; Dorri, Faezeh

    2012-02-01

    Analysis of gene expression profiles needs a complete matrix of gene array values; consequently, imputation methods have been suggested. In this paper, an algorithm that is based on conjugate gradient (CG) method is proposed to estimate missing values. k-nearest neighbors of the missed entry are first selected based on absolute values of their Pearson correlation coefficient. Then a subset of genes among the k-nearest neighbors is labeled as the best similar ones. CG algorithm with this subset as its input is then used to estimate the missing values. Our proposed CG based algorithm (CGimpute) is evaluated on different data sets. The results are compared with sequential local least squares (SLLSimpute), Bayesian principle component analysis (BPCAimpute), local least squares imputation (LLSimpute), iterated local least squares imputation (ILLSimpute) and adaptive k-nearest neighbors imputation (KNNKimpute) methods. The average of normalized root mean squares error (NRMSE) and relative NRMSE in different data sets with various missing rates shows CGimpute outperforms other methods. Copyright © 2011 Elsevier Ltd. All rights reserved.
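
    A simplified sketch of the neighbour-selection plus CG-regression idea (not the CGimpute code): the k genes most correlated with the target gene over its observed entries are selected, regression weights are obtained by a conjugate gradient solve of the normal equations, and the missing entry is predicted from them. The array layout (genes x samples, NaN marking missing values) and the choice of k are assumptions.

      import numpy as np
      from scipy.sparse.linalg import cg

      def impute_entry(expr, gene, sample, k=10):
          observed = ~np.isnan(expr[gene])
          observed[sample] = False                  # exclude the entry being imputed
          candidates = [g for g in range(expr.shape[0])
                        if g != gene and not np.isnan(expr[g, sample])
                        and not np.any(np.isnan(expr[g, observed]))]
          corr = [abs(np.corrcoef(expr[gene, observed], expr[g, observed])[0, 1])
                  for g in candidates]
          neighbors = [candidates[i] for i in np.argsort(corr)[-k:]]
          X = expr[neighbors][:, observed].T        # observed columns of the neighbour genes
          y = expr[gene, observed]
          w, _ = cg(X.T @ X, X.T @ y)               # CG solve of the normal equations
          return float(expr[neighbors, sample] @ w) # predicted value for the missing entry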

  12. Strengthening Science-based Environmental Policy Development in ...

    International Development Research Centre (IDRC) Digital Library (Canada)

    Strengthening Science-based Environmental Policy Development in Burma's Democratic ... IDRC is providing funding to Simon Fraser University to support a network of ... The project will also encourage and assist in the creation of a business ...

  13. Moving Zimbabwe Forward : an Evidence Based Policy Dialogue ...

    International Development Research Centre (IDRC) Digital Library (Canada)

    Moving Zimbabwe Forward : an Evidence Based Policy Dialogue ... levels of poverty, unemployment, inflation and poor service provision in the areas of education, ... International Water Resources Association, in close collaboration with IDRC, ...

  14. A Gradient-Based Multistart Algorithm for Multimodal Aerodynamic Shape Optimization Problems Based on Free-Form Deformation

    Science.gov (United States)

    Streuber, Gregg Mitchell

    Environmental and economic factors motivate the pursuit of more fuel-efficient aircraft designs. Aerodynamic shape optimization is a powerful tool in this effort, but is hampered by the presence of multimodality in many design spaces. Gradient-based multistart optimization uses a sampling algorithm and multiple parallel optimizations to reliably apply fast gradient-based optimization to moderately multimodal problems. Ensuring that the sampled geometries remain physically realizable requires manually developing specialized linear constraints for each class of problem. Utilizing free-form deformation geometry control allows these linear constraints to be written in a geometry-independent fashion, greatly easing the process of applying the algorithm to new problems. This algorithm was used to assess the presence of multimodality when optimizing a wing in subsonic and transonic flows, under inviscid and viscous conditions, and a blended wing-body under transonic, viscous conditions. Multimodality was present in every wing case, while the blended wing-body was found to be generally unimodal.

  15. GROUP POLICY BASED AUTHENTICATION ON INCOMING CALLS FOR ANDROID SMARTPHONES

    OpenAIRE

    Sunita M. Kumbhar, Prof. Z.M Shaikh

    2016-01-01

    The numbers of Smartphone users increasing day by day. Hence, there is need to propose advanced Group Policy based Authentication for incoming calls for Android phone. Android platform provides a variety of functions that support the programming of face recognition, as in image processing. Group policy based authentication scheme increases the security which restricts the access of incoming call form un-authorized user. To solve problems, related to face recognition should be applied in the p...

  16. Comparative analysis of gradient-field-based orientation estimation methods and regularized singular-value decomposition for fringe pattern processing.

    Science.gov (United States)

    Sun, Qi; Fu, Shujun

    2017-09-20

    Fringe orientation is an important feature of fringe patterns and has a wide range of applications such as guiding fringe pattern filtering, phase unwrapping, and abstraction. Estimating fringe orientation is a basic task for subsequent processing of fringe patterns. However, various noise, singular and obscure points, and orientation data degeneration lead to inaccurate calculations of fringe orientation. Thus, to deepen the understanding of orientation estimation and to better guide orientation estimation in fringe pattern processing, some advanced gradient-field-based orientation estimation methods are compared and analyzed. At the same time, following the ideas of smoothing regularization and computing of bigger gradient fields, a regularized singular-value decomposition (RSVD) technique is proposed for fringe orientation estimation. To compare the performance of these gradient-field-based methods, quantitative results and visual effect maps of orientation estimation are given on simulated and real fringe patterns that demonstrate that the RSVD produces the best estimation results at a cost of relatively less time.
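
    For readers unfamiliar with gradient-field orientation estimation, the following sketch shows a standard baseline of the kind compared in the paper: a Gaussian-smoothed structure tensor built from image gradients. It is not the proposed regularized SVD (RSVD) method; function names and parameter values are illustrative.

```python
# Hedged sketch of a standard gradient-field orientation estimator (structure tensor
# with Gaussian smoothing), a baseline of the kind compared in the paper.
import numpy as np
from scipy.ndimage import gaussian_filter, sobel

def gradient_orientation(img, sigma=5.0):
    gx = sobel(img.astype(float), axis=1)
    gy = sobel(img.astype(float), axis=0)
    # smoothed second-moment (structure tensor) entries
    jxx = gaussian_filter(gx * gx, sigma)
    jxy = gaussian_filter(gx * gy, sigma)
    jyy = gaussian_filter(gy * gy, sigma)
    # orientation of the dominant eigenvector (local gradient direction), in [0, pi);
    # the fringe direction is this angle rotated by pi/2
    theta = 0.5 * np.arctan2(2.0 * jxy, jxx - jyy)
    return np.mod(theta, np.pi)
```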

  17. Breast tissue classification in digital tomosynthesis images based on global gradient minimization and texture features

    Science.gov (United States)

    Qin, Xulei; Lu, Guolan; Sechopoulos, Ioannis; Fei, Baowei

    2014-03-01

    Digital breast tomosynthesis (DBT) is a pseudo-three-dimensional x-ray imaging modality proposed to decrease the effect of tissue superposition present in mammography, potentially resulting in an increase in clinical performance for the detection and diagnosis of breast cancer. Tissue classification in DBT images can be useful in risk assessment, computer-aided detection and radiation dosimetry, among other aspects. However, classifying breast tissue in DBT is a challenging problem because DBT images include complicated structures, image noise, and out-of-plane artifacts due to limited angular tomographic sampling. In this project, we propose an automatic method to classify fatty and glandular tissue in DBT images. First, the DBT images are pre-processed to enhance the tissue structures and to decrease image noise and artifacts. Second, a global smooth filter based on L0 gradient minimization is applied to eliminate detailed structures and enhance large-scale ones. Third, the similar structure regions are extracted and labeled by fuzzy C-means (FCM) classification. At the same time, the texture features are also calculated. Finally, each region is classified into different tissue types based on both intensity and texture features. The proposed method is validated using five patient DBT images using manual segmentation as the gold standard. The Dice scores and the confusion matrix are utilized to evaluate the classified results. The evaluation results demonstrated the feasibility of the proposed method for classifying breast glandular and fat tissue on DBT images.

  18. An online supervised learning method based on gradient descent for spiking neurons.

    Science.gov (United States)

    Xu, Yan; Yang, Jing; Zhong, Shuiming

    2017-09-01

    The purpose of supervised learning with temporal encoding for spiking neurons is to make the neurons emit a specific spike train encoded by precise firing times of spikes. Gradient-descent-based (GDB) learning methods are widely used and verified in current research. Although the existing GDB multi-spike learning (or spike sequence learning) methods have good performance, they work in an offline manner and still have some limitations. This paper proposes an online GDB spike sequence learning method for spiking neurons that is based on the online adjustment mechanism of real biological neuron synapses. The method constructs an error function and calculates the adjustment of synaptic weights as soon as the neurons emit a spike during their running process. In this paper, we analyze and synthesize desired and actual output spikes to select appropriate input spikes in the calculation of the weight adjustment. The experimental results show that our method clearly improves learning performance compared with the offline learning manner and has a certain advantage in learning accuracy compared with other learning methods. This stronger learning ability means that the method has a large pattern storage capacity. Copyright © 2017 Elsevier Ltd. All rights reserved.

  19. Nonlinear behavior of capacitive micro-beams based on strain gradient theory

    International Nuclear Information System (INIS)

    Fathalilou, Mohammad; Sadeghi, Morteza; Rezazadeh, Ghader

    2014-01-01

    This paper studies the size-dependent behavior of materials in MEMS structures. This behavior becomes noticeable for a structure when the characteristic size, such as thickness or diameter, is close to its internal length-scale parameter, and is insignificant when the ratio of the characteristic size to the length-scale parameter is high, which is the case for silicon-based micro-beams. However, in some types of micro-beams, such as gold- or nickel-based ones, the size-dependent effect cannot be overlooked. In such cases, ignoring this behavior in modeling will lead to incorrect results. Some previous researchers have applied classic beam theory in their models and imposed a considerable hypothetical value of residual stress to match their theoretical results with the experimental ones. The equilibrium positions or fixed points of the gold and nickel micro-beams are obtained, and it is shown that for a given DC voltage there is a considerable difference between the fixed points obtained using classic beam theory, modified couple stress theory, and modified strain gradient theory. In addition, it is shown that the static and dynamic pull-in voltages calculated using higher-order theories are much closer to the experimental results and are several times higher than those obtained by classic beam theory.

  20. 3D DC Resistivity Inversion with Topography Based on Regularized Conjugate Gradient Method

    Directory of Open Access Journals (Sweden)

    Jian-ke Qiang

    2013-01-01

    Full Text Available During the past decades, there has been strong interest in 3D DC resistivity inversion and imaging with complex topography. In this paper, we implement 3D DC resistivity inversion based on the regularized conjugate gradient method with FEM. The Fréchet derivative is assembled with the electric potential in order to speed up the inversion process based on the reciprocity theorem. In this study, we also analyze the sensitivity of the electric potential on the earth’s surface to the conductivity in each cell underground and introduce an optimized weighting function to produce a new sensitivity matrix. The synthetic model study shows that this optimized weighting function helps to improve the resolution of deep anomalies. By incorporating topography into the inversion, the artificial anomalies actually caused by topography can be eliminated. As a result, this algorithm can potentially be applied to process DC resistivity data collected in mountainous areas. Our synthetic model study also shows that the convergence and computation speed are very stable and fast.
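
    A hedged sketch of the core numerical step, one regularized conjugate gradient model update of the Gauss-Newton type, is given below. The FEM forward operator, the Fréchet derivative J, the data residual r and the roughness operator L are assumed to be supplied by the rest of the inversion code; they are placeholders here, not the paper's actual implementation.

```python
# Hedged sketch of a regularized CG model update: solve (J^T J + lam L^T L) dm = J^T r
# matrix-free, where J is the Fréchet derivative, r the data residual and L a
# roughness (regularization) operator. All inputs are assumed to come from the
# FEM forward modelling code.
import numpy as np
from scipy.sparse.linalg import LinearOperator, cg

def rcg_update(J, r, L, lam=1e-2):
    n = J.shape[1]
    def matvec(v):
        return J.T @ (J @ v) + lam * (L.T @ (L @ v))
    A = LinearOperator((n, n), matvec=matvec)
    dm, info = cg(A, J.T @ r, maxiter=200)
    return dm
```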

  1. 3D printing for the design and fabrication of polymer-based gradient scaffolds.

    Science.gov (United States)

    Bracaglia, Laura G; Smith, Brandon T; Watson, Emma; Arumugasaamy, Navein; Mikos, Antonios G; Fisher, John P

    2017-07-01

    To accurately mimic the native tissue environment, tissue engineered scaffolds often need to have a highly controlled and varied display of three-dimensional (3D) architecture and geometrical cues. Additive manufacturing in tissue engineering has made possible the development of complex scaffolds that mimic the native tissue architectures. As such, architectural details that were previously unattainable or irreproducible can now be incorporated in an ordered and organized approach, further advancing the structural and chemical cues delivered to cells interacting with the scaffold. This control over the environment has given engineers the ability to unlock cellular machinery that is highly dependent upon the intricate heterogeneous environment of native tissue. Recent research into the incorporation of physical and chemical gradients within scaffolds indicates that integrating these features improves the function of a tissue engineered construct. This review covers recent advances on techniques to incorporate gradients into polymer scaffolds through additive manufacturing and evaluate the success of these techniques. As covered here, to best replicate different tissue types, one must be cognizant of the vastly different types of manufacturing techniques available to create these gradient scaffolds. We review the various types of additive manufacturing techniques that can be leveraged to fabricate scaffolds with heterogeneous properties and discuss methods to successfully characterize them. Additive manufacturing techniques have given tissue engineers the ability to precisely recapitulate the native architecture present within tissue. In addition, these techniques can be leveraged to create scaffolds with both physical and chemical gradients. This work offers insight into several techniques that can be used to generate graded scaffolds, depending on the desired gradient. Furthermore, it outlines methods to determine if the designed gradient was achieved. This review

  2. Determination of accelerated factors in gradient descent iterations based on Taylor's series

    Directory of Open Access Journals (Sweden)

    Petrović Milena

    2017-01-01

    Full Text Available In this paper, the efficiency of accelerated gradient descent methods with respect to the way the acceleration factor is determined is considered. Based on previous research, we assert that using Taylor's series of the posed gradient descent iteration in the calculation of the acceleration parameter gives better final results than some other choices. We give a comparative analysis of the efficiency of several methods with different approaches to obtaining the acceleration parameter. Based on the results of numerical experiments, we draw a conclusion about the most effective way of defining the acceleration parameter in accelerated gradient descent schemes.
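
    One simple way to obtain such an acceleration parameter from a Taylor expansion is sketched below: the scalar curvature along the previous step is estimated from the second-order term and used to scale the next gradient step. This is only an illustration of the general idea; the specific acceleration factor analysed in the paper may be defined differently.

```python
# Hedged sketch of an accelerated gradient descent iteration whose acceleration
# parameter is a scalar curvature estimated from a second-order Taylor model
# along the previous step. Illustrative only; not the paper's exact update.
import numpy as np

def accelerated_gd(f, grad, x0, t=1.0, iters=100):
    x = np.asarray(x0, dtype=float)
    gamma = 1.0                                   # initial acceleration parameter
    for _ in range(iters):
        g = grad(x)
        step = -(t / gamma) * g
        x_new = x + step
        # f(x+s) ≈ f(x) + g·s + 0.5 γ ||s||²  =>  γ ≈ 2 (f(x+s) - f(x) - g·s) / ||s||²
        denom = float(step @ step)
        if denom > 0:
            gamma_new = 2.0 * (f(x_new) - f(x) - float(g @ step)) / denom
            if gamma_new > 1e-12:                 # keep γ positive
                gamma = gamma_new
        x = x_new
    return x
```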

  3. Ecosystem Based Management in Transition: From Ocean Policy to Application

    Science.gov (United States)

    Saumweber, W. J.; Goldman, E.

    2016-12-01

    Ecosystem-based management (EBM) has been proposed as a means to improve resource management and stewardship for more than two decades. Over this history, its exact goals and approaches have evolved in concert with advances in science and policy, including a greater understanding of ecosystem function, valuation, and thresholds for change, along with direct reference to EBM principles in statute, regulation, and other Executive Actions. Most recently, and explicitly, the Administration's National Ocean Policy (NOP) called for the development of a Federal EBM framework that would outline principles and guidelines for implementing EBM under existing authorities. This cross-agency framework has yet to be developed, but, the NOP, and related Administration initiatives, have resulted in the practical application of EBM principles in several issue-specific policy initiatives ranging from fisheries and marine protected area management to coastal adaptation and water resource infrastructure investment. In each case, the application of EBM principles uses apparently unique policy mechanisms (e.g. marine planning, ecosystem services assessment, adaptive management, dynamic ocean management, etc.). Despite differences in terminology and policy context, each of these policy initiatives is linked at its core to concepts of integrated and adaptive management that consider broad ecosystem function and services. This practical history of EBM implementation speaks to both the challenges and opportunities in broad incorporation of EBM across diverse policy initiatives and frameworks. We suggest that the continued growth of EBM as a practical policy concept will require a move away from broad frameworks, and towards the identification of specific resource management issues and accompanying policy levers with which to address those issues. In order to promote this progression, Federal policy should recognize and articulate the diverse set of policy mechanisms encompassed under the

  4. Fabrication of Ni-Al/diamond composite based on layered and gradient structures of SHS system

    Directory of Open Access Journals (Sweden)

    Lu Jiafeng

    2017-01-01

    Full Text Available In this paper, layered and gradient structures of the Ni-Al SHS system were adopted to manufacture Ni-Al/diamond composites. The effects of the layered and the diamond mesh gradient structures of Ni-Al/diamond on the SHS process and the microstructure of the composites were investigated. It is found that as the number of layers increases, the combustion wave velocity decreases. The combustion wave velocity for the diamond mesh size gradient structure of Ni-Al SHS is faster than that for the layered structure. Good bonding can be formed between the diamond and the matrix in layered and gradient structure Ni-Al/diamond composites due to the melting of the Ni-Cr brazing alloy.

  5. Router Agent Technology for Policy-Based Network Management

    Science.gov (United States)

    Chow, Edward T.; Sudhir, Gurusham; Chang, Hsin-Ping; James, Mark; Liu, Yih-Chiao J.; Chiang, Winston

    2011-01-01

    This innovation can be run as a standalone network application on any computer in a networked environment. This design can be configured to control one or more routers (one instance per router), and can also be configured to listen to a policy server over the network to receive new policies based on the policy-based network management technology. The Router Agent Technology transforms the received policies into suitable Access Control List syntax for the routers it is configured to control. It commits the newly generated access control lists to the routers and provides feedback regarding any errors that were faced. The innovation also automatically generates a time-stamped log file regarding all updates to the router it is configured to control. This technology, once installed on a local network computer and started, is autonomous because it has the capability to keep listening to new policies from the policy server, transforming those policies to router-compliant access lists, and committing those access lists to a specified interface on the specified router on the network with any error feedback regarding the commitment process. The stand-alone application is named RouterAgent and is currently realized as a fully functional (version 1) implementation for the Windows operating system and for CISCO routers.
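
    A minimal sketch of the policy-to-ACL translation idea is shown below. The PolicyRule schema, the ACL number and the output format are hypothetical illustrations, not the actual Router Agent policy format or its full error-feedback and logging behaviour.

```python
# Hedged sketch: turn abstract permit/deny policy rules received from a policy
# server into Cisco-style access-list lines. The rule schema and ACL number are
# hypothetical assumptions for illustration only.
from dataclasses import dataclass

@dataclass
class PolicyRule:
    action: str        # "permit" or "deny"
    protocol: str      # e.g. "ip", "tcp"
    src: str           # source network, e.g. "10.0.0.0 0.0.0.255"
    dst: str           # destination network or "any"

def rules_to_acl(rules, acl_id=101):
    lines = [f"access-list {acl_id} {r.action} {r.protocol} {r.src} {r.dst}"
             for r in rules]
    lines.append(f"access-list {acl_id} deny ip any any")  # default deny made explicit
    return "\n".join(lines)

print(rules_to_acl([PolicyRule("permit", "tcp", "10.0.0.0 0.0.0.255", "any")]))
```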

  6. Explanation of L→H mode transition based on gradient stabilization of edge thermal fluctuations

    International Nuclear Information System (INIS)

    Stacey, W.M.

    1996-01-01

    A linear analysis of thermal fluctuations, using a fluid model which treats the large radial gradient related phenomena in the plasma edge, leads to a constraint on the temperature and density gradients for stabilization of edge temperature fluctuations. A temperature gradient, or conductive edge heat flux, threshold is identified. It is proposed that the L→H transition takes place when the conductive heat flux to the edge produces a sufficiently large edge temperature gradient to stabilize the edge thermal fluctuations. The consequences following from this mechanism for the L→H transition are in accord with observed phenomena associated with the L→H transition and with the observed parameter dependences of the power threshold. First, a constraint is established on the edge temperature and density gradients that are sufficient for the stability of edge temperature fluctuations. A slab approximation for the thin plasma edge and a fluid model constructed to account for the large radial gradients present in the plasma edge are used. Equilibrium solutions are characterized by the value of the density and of its inverse gradient scale length L_n^{-1} ≡ -n^{-1} dn/dr, etc. Temperature fluctuations expanded about the equilibrium value are then used in the energy balance equation summed over plasma ions, electrons and impurities to obtain, after linearization, an expression for the growth rate ω of edge localized thermal fluctuations. Thermal stability of the equilibrium solution requires ω ≤ 0, which establishes a constraint that must be satisfied by L_n^{-1} and L_T^{-1}. The limiting value of the constraint (ω = 0) leads to an expression for the minimum value of L_n^{-1} that is sufficient for thermal stability, for a given value of L_T^{-1}. It is found that there is a minimum value of the temperature gradient, (L_T^{-1})_min, that is necessary for a stable solution to exist for any value of L_n^{-1}.

  7. A morphological perceptron with gradient-based learning for Brazilian stock market forecasting.

    Science.gov (United States)

    Araújo, Ricardo de A

    2012-04-01

    Several linear and non-linear techniques have been proposed to solve the stock market forecasting problem. However, a limitation arises from all these techniques and is known as the random walk dilemma (RWD). In this scenario, forecasts generated by arbitrary models have a characteristic one-step-ahead delay with respect to the time series values, so that there is a time phase distortion in stock market phenomena reconstruction. In this paper, we propose a suitable model inspired by concepts in mathematical morphology (MM) and lattice theory (LT). This model is generically called the increasing morphological perceptron (IMP). Also, we present a gradient steepest descent method to design the proposed IMP based on ideas from the back-propagation (BP) algorithm and using a systematic approach to overcome the problem of non-differentiability of morphological operations. Into the learning process we have included a procedure to overcome the RWD, which is an automatic correction step that is geared toward eliminating time phase distortions that occur in stock market phenomena. Furthermore, an experimental analysis is conducted with the IMP using four complex non-linear problems of time series forecasting from the Brazilian stock market. Additionally, two natural phenomena time series are used to assess the forecasting performance of the proposed IMP on other, non-financial time series. At the end, the obtained results are discussed and compared to results found using models recently proposed in the literature. Copyright © 2011 Elsevier Ltd. All rights reserved.
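
    A much-simplified sketch of an increasing morphological node trained with a (sub)gradient rule is given below: the output mixes a dilation (max) and an erosion (min) of shifted inputs, and only the active max/min units receive weight updates. This is an illustration of the general mechanism, not the paper's full IMP with its RWD correction step; all names and constants are assumptions.

```python
# Hedged sketch of a single increasing morphological node trained by (sub)gradient
# descent. Output: lam * max_j(x_j + a_j) + (1 - lam) * min_j(x_j + b_j).
import numpy as np

def imp_train(X, y, lr=0.01, epochs=200, seed=0):
    rng = np.random.default_rng(seed)
    n = X.shape[1]
    a = rng.normal(scale=0.1, size=n)
    b = rng.normal(scale=0.1, size=n)
    lam = 0.5
    for _ in range(epochs):
        for x, target in zip(X, y):
            dil, ero = np.max(x + a), np.min(x + b)
            out = lam * dil + (1.0 - lam) * ero
            err = out - target
            j, k = np.argmax(x + a), np.argmin(x + b)   # active (non-smooth) units
            a[j] -= lr * err * lam                      # subgradient w.r.t. a_j
            b[k] -= lr * err * (1.0 - lam)              # subgradient w.r.t. b_k
            lam -= lr * err * (dil - ero)               # gradient w.r.t. the mixing weight
            lam = float(np.clip(lam, 0.0, 1.0))
    return a, b, lam
```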

  8. Detail-enhanced multimodality medical image fusion based on gradient minimization smoothing filter and shearing filter.

    Science.gov (United States)

    Liu, Xingbin; Mei, Wenbo; Du, Huiqian

    2018-02-13

    In this paper, a detail-enhanced multimodality medical image fusion algorithm is proposed using a proposed multi-scale joint decomposition framework (MJDF) and a shearing filter (SF). The MJDF, constructed with a gradient minimization smoothing filter (GMSF) and a Gaussian low-pass filter (GLF), is used to decompose source images into low-pass layers, edge layers, and detail layers at multiple scales. In order to highlight the detail information in the fused image, the edge layer and the detail layer at each scale are combined by weighting into a detail-enhanced layer. As the directional filter is effective in capturing salient information, the SF is applied to the detail-enhanced layer to extract geometrical features and obtain directional coefficients. A visual saliency map-based fusion rule is designed for fusing the low-pass layers, and the sum of standard deviations is used as the activity level measurement for fusing the directional coefficients. The final fusion result is obtained by synthesizing the fused low-pass layers and directional coefficients. Experimental results show that the proposed method, with its shift-invariance, directional selectivity, and detail-enhanced property, is efficient in preserving and enhancing the detail information of multimodality medical images. Graphical abstract: The detailed implementation of the proposed medical image fusion algorithm.

  9. Carotid artery B-mode ultrasound image segmentation based on morphology, geometry and gradient direction

    Science.gov (United States)

    Sunarya, I. Made Gede; Yuniarno, Eko Mulyanto; Purnomo, Mauridhi Hery; Sardjono, Tri Arief; Sunu, Ismoyo; Purnama, I. Ketut Eddy

    2017-06-01

    The carotid artery (CA) is one of the vital organs in the human body. CA features that can be used are position, size and volume. The position feature can be used to determine the preliminary initialization of the tracking. Examination of the CA features can use ultrasound. Ultrasound imaging is operator-dependent, so there can be differences in the resulting images obtained by two or more different operators. This can affect the process of locating the CA. To reduce the level of subjectivity among operators, the position of the CA can be determined automatically. In this study, the proposed method segments the CA in B-mode ultrasound images based on morphology, geometry and gradient direction. This study consists of three steps: data collection, preprocessing and artery segmentation. The data used in this study were taken directly by the researchers and taken from the Brno university's signal processing lab database. Each data set contains 100 carotid artery B-mode ultrasound images. The artery is modeled using an ellipse with center c, major axis a and minor axis b. The proposed method achieves a high accuracy on each data set: 97% (data set 1), 73% (data set 2), 87% (data set 3). These segmentation results will then be used in the process of tracking the CA.

  10. Thai Finger-Spelling Recognition Using a Cascaded Classifier Based on Histogram of Orientation Gradient Features

    Directory of Open Access Journals (Sweden)

    Kittasil Silanon

    2017-01-01

    Full Text Available Hand posture recognition is an essential module in applications such as human-computer interaction (HCI), games, and sign language systems, in which performance and robustness are the primary requirements. In this paper, we propose an automatic classification method to recognize 21 hand postures that represent letters in Thai finger-spelling, based on the Histogram of Orientation Gradient (HOG) feature (which is applied with more focus on the information within a certain region of the image rather than on each single pixel) and the Adaptive Boost (i.e., AdaBoost) learning technique to select the best weak classifiers and to construct a strong classifier that consists of several weak classifiers cascaded in a detection architecture. We collected 21 static hand posture images from 10 subjects for testing and training in Thai letter finger-spelling. The parameters for the training process were adjusted in three experiments, false positive rates (FPR), true positive rates (TPR), and number of training stages (N), to achieve the most suitable training model for each hand posture. All cascaded classifiers are loaded into the system simultaneously to classify different hand postures. A correlation coefficient is computed to distinguish hand postures that are similar. The system achieves approximately 78% accuracy on average over all classifier experiments.
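
    The recognition pipeline can be sketched with off-the-shelf tools: HOG features extracted from a grayscale hand image and fed to an AdaBoost classifier. scikit-image and scikit-learn here stand in for the cascaded detector architecture actually trained in the paper, and all parameter values are illustrative.

```python
# Hedged sketch of HOG features + AdaBoost classification for hand postures.
import numpy as np
from skimage.feature import hog
from skimage.transform import resize
from sklearn.ensemble import AdaBoostClassifier

def hog_features(gray_img, size=(64, 64)):
    img = resize(gray_img, size, anti_aliasing=True)          # normalize image size
    return hog(img, orientations=9, pixels_per_cell=(8, 8), cells_per_block=(2, 2))

def train_posture_classifier(images, labels):
    X = np.array([hog_features(im) for im in images])
    clf = AdaBoostClassifier(n_estimators=200)                 # boosted weak classifiers
    clf.fit(X, labels)
    return clf
```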

  11. Cooperative and Adaptive Network Coding for Gradient Based Routing in Wireless Sensor Networks with Multiple Sinks

    Directory of Open Access Journals (Sweden)

    M. E. Migabo

    2017-01-01

    Full Text Available Despite its low computational cost, the Gradient Based Routing (GBR) broadcast of interest messages in Wireless Sensor Networks (WSNs) causes significant packet duplication and unnecessary packet transmissions. This results in energy wastage, traffic load imbalance, high network traffic, and low throughput. Thanks to the emergence of fast and powerful processors, the development of efficient network coding strategies is expected to enable efficient packet aggregation and reduce packet retransmissions. For multiple-sink WSNs, the challenge consists of efficiently selecting a suitable network coding scheme. This article proposes a Cooperative and Adaptive Network Coding for GBR (CoAdNC-GBR) technique which considers the network density, as dynamically defined by the average number of neighbouring nodes, to efficiently aggregate interest messages. The aggregation is performed by means of linear combinations with random coefficients of a finite Galois field of variable size GF(2^S) at each node, and the decoding is performed by means of Gaussian elimination. The obtained results reveal that, by exploiting the cooperation of the multiple sinks, CoAdNC-GBR not only improves the transmission reliability of links and lowers the number of transmissions and the propagation latency, but also enhances the energy efficiency of the network when compared to the GBR-network coding (GBR-NC) techniques.
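
    The coding/decoding primitive the scheme builds on can be sketched as random linear combinations of packets over a small Galois field, decoded by Gaussian elimination. For simplicity the sketch below works over GF(2) rather than the variable-size GF(2^S) used in CoAdNC-GBR, and the function names are assumptions.

```python
# Hedged sketch of random linear network coding over GF(2) with Gaussian elimination.
import numpy as np

def encode(packets, n_coded, rng):
    """packets: (k, L) bit matrix -> (coeffs, coded) with n_coded random combinations."""
    k = packets.shape[0]
    coeffs = rng.integers(0, 2, size=(n_coded, k))
    coded = coeffs @ packets % 2
    return coeffs, coded

def decode(coeffs, coded):
    """Recover the k original packets by Gaussian elimination over GF(2)."""
    A = np.concatenate([coeffs, coded], axis=1) % 2
    k = coeffs.shape[1]
    row = 0
    for col in range(k):
        pivot = next((r for r in range(row, A.shape[0]) if A[r, col]), None)
        if pivot is None:
            raise ValueError("not enough independent combinations")
        A[[row, pivot]] = A[[pivot, row]]          # bring the pivot row up
        for r in range(A.shape[0]):
            if r != row and A[r, col]:
                A[r] ^= A[row]                     # eliminate this column elsewhere
        row += 1
    return A[:k, k:]
```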

  12. CREATING OF BARCODES FOR FACIAL IMAGES BASED ON INTENSITY GRADIENTS

    Directory of Open Access Journals (Sweden)

    G. A. Kukharev

    2014-05-01

    Full Text Available The paper provides an analysis of existing approaches to barcode generation and a description of the system structure for generating barcodes from facial images. A method for generating standard linear barcodes from facial images is proposed. This method is based on differences of intensity gradients, which represent the images in the form of initial features. These features are then averaged into a limited number of intervals, the results are quantized into decimal digits from 0 to 9, and a table conversion into the standard barcode is performed. Testing was conducted on the Face94 database and a database of composite faces of different ages. It showed that the proposed method ensures the stability of the generated barcodes with respect to changes of scale, pose and mirroring of facial images, as well as changes of facial expressions and shadows on faces from local lighting. The proposed solutions are computationally low-cost and do not require any specialized image processing software for generating facial barcodes in real-time systems.
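
    A hedged sketch of the barcode-generation idea follows: column-wise gradient energy of the face image is averaged into a fixed number of intervals and quantized to the digits 0-9, which a standard linear barcode encoder can then render. The digit count and the choice of encoder are illustrative assumptions, not the paper's exact settings.

```python
# Hedged sketch: gradient-energy profile of a grayscale face image -> digit string.
import numpy as np

def face_to_digits(gray, n_digits=12):
    gx = np.abs(np.diff(gray.astype(float), axis=1))   # horizontal intensity gradients
    profile = gx.sum(axis=0)                            # gradient energy per column
    bins = np.array_split(profile, n_digits)            # average into n_digits intervals
    means = np.array([b.mean() for b in bins])
    lo, hi = means.min(), means.max()
    scaled = (means - lo) / (hi - lo + 1e-12)
    return (scaled * 9).round().astype(int)              # quantize to the digits 0..9

# The resulting digit string can then be rendered with any standard linear barcode
# encoder (e.g. an EAN-13 library), which is an assumed choice for this sketch.
```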

  13. Optimal path-finding through mental exploration based on neural energy field gradients.

    Science.gov (United States)

    Wang, Yihong; Wang, Rubin; Zhu, Yating

    2017-02-01

    Rodents can accomplish self-locating and path-finding tasks by forming a cognitive map in the hippocampus representing the environment. In the classical model of the cognitive map, the system (artificial animal) needs large amounts of physical exploration to learn the spatial environment and solve path-finding problems, which costs too much time and energy. Although Hopfield's mental exploration model makes up for the deficiency mentioned above, the resulting path is still not efficient enough. Moreover, his model mainly focused on the artificial neural network, and its clear physiological meaning was not addressed. In this work, based on the concept of mental exploration, neural energy coding theory is applied to a novel calculation model to solve the path-finding problem. An energy field is constructed on the basis of the firing power of place cell clusters, and the energy field gradient can be used in mental exploration to solve path-finding problems. The study shows that the new mental exploration model can efficiently find the optimal path and also presents the learning process with biophysical meaning. We also analyzed the parameters of the model which affect the path efficiency. This new idea verifies the importance of place cells and synapses in spatial memory and shows that energy coding is effective for studying cognitive activities. This may provide a theoretical basis for the neural dynamics mechanism of spatial memory.

  14. Gradient-Based Optimization of Wind Farms with Different Turbine Heights: Preprint

    Energy Technology Data Exchange (ETDEWEB)

    Stanley, Andrew P. J.; Thomas, Jared; Ning, Andrew; Annoni, Jennifer; Dykes, Katherine; Fleming, Paul

    2017-05-08

    Turbine wakes reduce power production in a wind farm. Current wind farms are generally built with turbines that are all the same height, but if wind farms included turbines with different tower heights, the cost of energy (COE) may be reduced. We used gradient-based optimization to demonstrate a method to optimize wind farms with varied hub heights. Our study includes a modified version of the FLORIS wake model that accommodates three-dimensional wakes integrated with a tower structural model. Our purpose was to design a process to minimize the COE of a wind farm through layout optimization and varying turbine hub heights. Results indicate that when a farm is optimized for layout and height with two separate height groups, COE can be lowered by as much as 5%-9%, compared to a similar layout and height optimization where all the towers are the same. The COE has the best improvement in farms with high turbine density and a low wind shear exponent.

  15. Gradient-Based Optimization of Wind Farms with Different Turbine Heights

    Energy Technology Data Exchange (ETDEWEB)

    Stanley, Andrew P. J.; Thomas, Jared; Ning, Andrew; Annoni, Jennifer; Dykes, Katherine; Fleming, Paul

    2017-01-09

    Turbine wakes reduce power production in a wind farm. Current wind farms are generally built with turbines that are all the same height, but if wind farms included turbines with different tower heights, the cost of energy (COE) may be reduced. We used gradient-based optimization to demonstrate a method to optimize wind farms with varied hub heights. Our study includes a modified version of the FLORIS wake model that accommodates three-dimensional wakes integrated with a tower structural model. Our purpose was to design a process to minimize the COE of a wind farm through layout optimization and varying turbine hub heights. Results indicate that when a farm is optimized for layout and height with two separate height groups, COE can be lowered by as much as 5%-9%, compared to a similar layout and height optimization where all the towers are the same. The COE has the best improvement in farms with high turbine density and a low wind shear exponent.

  16. Urban Link Travel Time Prediction Based on a Gradient Boosting Method Considering Spatiotemporal Correlations

    Directory of Open Access Journals (Sweden)

    Faming Zhang

    2016-11-01

    Full Text Available The prediction of travel times is challenging because of the sparseness of real-time traffic data and the intrinsic uncertainty of travel on congested urban road networks. We propose a new gradient-boosted regression tree method to accurately predict travel times. This model accounts for spatiotemporal correlations extracted from historical and real-time traffic data for adjacent and target links. This method can deliver high prediction accuracy by combining simple regression trees that individually have poor performance. It corrects the error found in existing models for improved prediction accuracy. Our spatiotemporal gradient-boosted regression tree model was verified in experiments. The training data were obtained from big data reflecting historic traffic conditions collected by probe vehicles in Wuhan from January to May 2014. Real-time data were extracted from 11 weeks of GPS records collected in Wuhan from 5 May 2014 to 20 July 2014. Based on these data, we predicted link travel time for the period from 21 July 2014 to 25 July 2014. Experiments showed that our proposed spatiotemporal gradient-boosted regression tree model obtained better results than gradient boosting, random forest, or autoregressive integrated moving average approaches. Furthermore, these results indicate the advantages of our model for urban link travel time prediction.
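
    A minimal sketch of such a predictor, assuming lagged travel times of the target link and one adjacent link as the spatiotemporal features, is given below using scikit-learn's gradient boosting implementation. The feature design and hyperparameters are illustrative; the paper's model is richer.

```python
# Hedged sketch of a gradient-boosted regression tree for link travel time using
# lagged travel times of the target link and an adjacent (upstream) link.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

def build_features(times_target, times_upstream, lags=3):
    X, y = [], []
    for t in range(lags, len(times_target)):
        X.append(np.r_[times_target[t - lags:t], times_upstream[t - lags:t]])
        y.append(times_target[t])
    return np.array(X), np.array(y)

# synthetic stand-in data (minutes); real inputs would come from probe-vehicle GPS records
X, y = build_features(np.random.rand(500) * 60, np.random.rand(500) * 60)
model = GradientBoostingRegressor(n_estimators=300, learning_rate=0.05, max_depth=3)
model.fit(X[:400], y[:400])
print(model.predict(X[400:405]))
```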

  17. Age replacement policy based on imperfect repair with random probability

    International Nuclear Information System (INIS)

    Lim, J.H.; Qu, Jian; Zuo, Ming J.

    2016-01-01

    In most of the literature on age replacement policies, failures before the planned replacement age can be either minimally repaired or perfectly repaired, depending on the type of failure, the cost of repair and so on. In this paper, we propose an age replacement policy based on imperfect repair with random probability. The proposed policy incorporates the case in which such an intermittent failure can be either minimally repaired or perfectly repaired with random probabilities. Mathematical formulas for the expected cost rate per unit time are derived for both the infinite-horizon case and the one-replacement-cycle case. For each case, we show that the optimal replacement age exists and is finite. - Highlights: • We propose a new age replacement policy with random probability of perfect repair. • We develop the expected cost per unit time. • We discuss the optimal replacement age minimizing the expected cost rate.
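
    For context, the classical infinite-horizon age replacement cost rate, without the imperfect-repair extension proposed here, has the familiar form below; the paper's formulas generalize this by letting failures before age T be minimally or perfectly repaired with random probabilities.

```latex
% Classical age-replacement cost rate (background only, not the paper's extended model).
% F is the lifetime distribution, \bar{F} = 1 - F, and c_f > c_p are the corrective
% and preventive replacement costs.
C(T) \;=\; \frac{c_f\,F(T) + c_p\,\bar{F}(T)}{\int_0^{T}\bar{F}(t)\,dt}
```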

  18. Worship Discourse and White Race-based Policy Attitudes

    Science.gov (United States)

    Brown, R. Khari; Kaiser, Angela; Jackson, James S.

    2014-01-01

    The current study relies upon the 2004 National Politics Study to examine the association between exposure to race-based messages within places of worship and White race-based policy attitudes. The present study challenges the notion that, for White Americans, religiosity inevitably leads to racial prejudice. Rather, we argue, as others have, that religion exists on a continuum that spans from reinforcing to challenging the status quo of social inequality. Our findings suggest that, to the extent that Whites discuss race, along with the potential need for public policy solutions to address racial inequality, within worship spaces, worship attendance contributes to support for public policies aimed at reducing racial inequality. On the other hand, apolitical and non-structural racial discussions within worship settings do seemingly little to move many Whites to challenge dominant idealistic perceptions of race that eschew public policy interventions as solutions to racial inequality. PMID:25324579

  19. Arbitrary magnetic field gradient waveform correction using an impulse response based pre-equalization technique.

    Science.gov (United States)

    Goora, Frédéric G; Colpitts, Bruce G; Balcom, Bruce J

    2014-01-01

    The time-varying magnetic fields used in magnetic resonance applications result in the induction of eddy currents on conductive structures in the vicinity of both the sample under investigation and the gradient coils. These eddy currents typically result in undesired degradations of image quality for MRI applications. Their ubiquitous nature has resulted in the development of various approaches to characterize and minimize their impact on image quality. This paper outlines a method that utilizes the magnetic field gradient waveform monitor method to directly measure the temporal evolution of the magnetic field gradient from a step-like input function and extracts the system impulse response. With the basic assumption that the gradient system is sufficiently linear and time invariant to permit system theory analysis, the impulse response is used to determine a pre-equalized (optimized) input waveform that provides a desired gradient response at the output of the system. An algorithm has been developed that calculates a pre-equalized waveform that may be accurately reproduced by the amplifier (is physically realizable) and accounts for system limitations including system bandwidth, amplifier slew rate capabilities, and noise inherent in the initial measurement. Significant improvements in magnetic field gradient waveform fidelity after pre-equalization have been realized and are summarized. Copyright © 2013 Elsevier Inc. All rights reserved.
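
    The central signal-processing step can be sketched as a regularized frequency-domain deconvolution: estimate the discrete impulse response from the measured step response, then divide the desired waveform's spectrum by the system response. The sketch below omits the bandwidth, slew-rate and noise constraints that the actual algorithm enforces; names and the regularization constant are assumptions.

```python
# Hedged sketch of impulse-response based pre-equalization by regularized
# frequency-domain deconvolution. Amplifier constraints are not modelled here.
import numpy as np

def pre_equalize(desired, step_response, eps=1e-3):
    # discrete impulse response from the measured step response: h[n] = s[n] - s[n-1]
    h = np.diff(step_response, prepend=0.0)
    n = len(desired)
    H = np.fft.rfft(h, n)
    D = np.fft.rfft(desired, n)
    # regularized inverse filter: avoids amplifying bands where |H| is small
    Hinv = np.conj(H) / (np.abs(H) ** 2 + eps * np.max(np.abs(H)) ** 2)
    return np.fft.irfft(D * Hinv, n)
```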

  20. Polymerase chain reaction-based denaturing gradient gel electrophoresis in the evaluation of oral microbiota.

    Science.gov (United States)

    Li, Y; Saxena, D; Barnes, V M; Trivedi, H M; Ge, Y; Xu, T

    2006-10-01

    Clinical evaluation of oral microbial reduction after a standard prophylactic treatment has traditionally been based on bacterial cultivation methods. However, not all microbes in saliva or dental plaque can be cultivated. Polymerase chain reaction-based denaturing gradient gel electrophoresis (PCR-DGGE) is a cultivation-independent molecular fingerprinting technique that allows the assessment of the predominant bacterial species present in the oral cavity. This study sought to evaluate the oral microbial changes that occurred after a standard prophylactic treatment with a conventional oral care product using PCR-DGGE. Twelve healthy adults participated in the study. Pooled plaque samples were collected at baseline, 24 h after prophylaxis (T1), and 4 days after toothbrushing with fluoride toothpaste (T4). The total microbial genomic DNA of the plaque was isolated. PCR was performed with a set of universal bacterial 16S rDNA primers. The PCR-amplified 16S rDNA fragments were separated by DGGE. The effects of the treatment and of dental brushing were assessed by comparing the PCR-DGGE fingerprinting profiles. The mean numbers of detected PCR amplicons were 22.3 +/- 6.1 for the baseline group, 13.0 +/- 3.1 for the T1 group, and 13.5 +/- 4.3 for the T4 group; the differences among the three groups were statistically significant (P < 0.01). The study also found a significant difference in the mean similarities of microbial profiles between the baseline and the treatment groups (P < 0.001). PCR-based DGGE has been shown to be an excellent means of rapidly and accurately assessing oral microbial changes in this clinical study.

  1. Microsphere-Based Scaffolds Carrying Opposing Gradients of Chondroitin Sulfate and Tricalcium Phosphate

    Directory of Open Access Journals (Sweden)

    Vineet eGupta

    2015-07-01

    Full Text Available Extracellular matrix (ECM) components such as chondroitin sulfate (CS) and tricalcium phosphate (TCP) serve as raw materials and thus spatial patterning of these raw materials may be leveraged to mimic the smooth transition of physical, chemical and mechanical properties at the bone-cartilage interface. We hypothesized that encapsulation of opposing gradients of these raw materials in high molecular weight poly(D,L-lactic-co-glycolic acid) (PLGA) microsphere-based scaffolds would enhance differentiation of rat bone marrow stromal cells (rBMSCs). The raw material encapsulation altered the microstructure of the microspheres and also influenced the cellular morphology that depended on the type of material encapsulated. Moreover, the mechanical properties of the raw material encapsulating microsphere-based scaffolds initially relied on the composition of the scaffolds and later on were primarily governed by the degradation of the polymer phase and newly synthesized extracellular matrix by the seeded cells. Furthermore, raw materials had a mitogenic effect on the seeded cells and led to increased glycosaminoglycan (GAG), collagen, and calcium content. Interestingly, the initial effects of raw material encapsulation on a per-cell basis might have been overshadowed by medium-regulated environment that appeared to favor osteogenesis. However, it is to be noted that in vivo, differentiation of the cells would be governed by the surrounding native environment. Thus, the results of this study demonstrated the potential of the raw materials in facilitating neo-tissue synthesis in microsphere-based scaffolds and perhaps in combination with bioactive signals, these raw materials may be able to achieve intricate cell differentiation profiles required for regenerating the osteochondral interface.

  2. Phase gradient algorithm based on co-axis two-step phase-shifting interferometry and its application

    Science.gov (United States)

    Wang, Yawei; Zhu, Qiong; Xu, Yuanyuan; Xin, Zhiduo; Liu, Jingye

    2017-12-01

    A phase gradient method based on co-axis two-step phase-shifting interferometry is used to reveal the detailed information of a specimen. In this method, the phase gradient distribution can only be obtained by calculating both the first-order derivative and the radial Hilbert transformation of the intensity difference between two phase-shifted interferograms. The feasibility and accuracy of this method were fully verified by the simulation results for a polystyrene sphere and a red blood cell. The empirical results demonstrated that the phase gradient is sensitive to changes in the refractive index and morphology. Because phase retrieval and tedious phase unwrapping are not required, the calculation speed is faster. In addition, co-axis interferometry has high spatial resolution.

  3. Application of an online ion-chromatography-based instrument for gradient flux measurements of speciated nitrogen and sulfur

    Science.gov (United States)

    Rumsey, Ian C.; Walker, John T.

    2016-06-01

    The dry component of total nitrogen and sulfur atmospheric deposition remains uncertain. The lack of measurements of sufficient chemical speciation and temporal extent makes it difficult to develop accurate mass budgets, and sufficient process-level detail is not available to improve current air-surface exchange models. Over the past decade, significant advances have been made in the development of continuous air sampling measurement techniques, resulting in instruments of sufficient sensitivity and temporal resolution to directly quantify air-surface exchange of nitrogen and sulfur compounds. However, their applicability is generally restricted to only one or a few of the compounds within the deposition budget. Here, the performance of the Monitor for AeRosols and GAses in ambient air (MARGA 2S), a commercially available online ion-chromatography-based analyzer, is characterized for the first time as applied for air-surface exchange measurements of HNO3, NH3, NH4+, NO3-, SO2 and SO42-. Analytical accuracy and precision are assessed under field conditions. Chemical concentration gradient precision is determined at the same sampling site. Flux uncertainty measured by the aerodynamic gradient method is determined for a representative 3-week period in fall 2012 over a grass field. Analytical precision and chemical concentration gradient precision were found to compare favorably with previous studies. During the 3-week period, percentages of hourly chemical concentration gradients greater than the corresponding chemical concentration gradient detection limit were 86, 42, 82, 73, 74 and 69 % for NH3, NH4+, HNO3, NO3-, SO2 and SO42-, respectively. As expected, percentages were lowest for aerosol species, owing to their relatively low deposition velocities and correspondingly smaller gradients relative to gas phase species. Relative hourly median flux uncertainties were 31, 121, 42, 43, 67 and 56 % for NH3, NH4+, HNO3, NO3-, SO2 and SO42-, respectively. Flux
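
    For reference, the aerodynamic gradient method referred to above converts a measured concentration difference between two heights into a flux using the friction velocity and an integrated stability correction. The sketch below uses common textbook forms of the stability functions; the study's own stability treatment and error propagation are more complete.

```python
# Hedged sketch of the aerodynamic gradient method:
# F = -k u* (c2 - c1) / [ln(z2/z1) - psi_h(z2/L) + psi_h(z1/L)]
import numpy as np

VON_KARMAN = 0.4

def psi_h(zeta):
    """Integrated stability correction for scalars (simple textbook forms)."""
    if zeta >= 0:                                   # stable: psi_h = -5 * zeta
        return -5.0 * zeta
    x2 = np.sqrt(1.0 - 16.0 * zeta)                 # unstable branch
    return 2.0 * np.log(0.5 * (1.0 + x2))

def gradient_flux(c1, c2, z1, z2, u_star, L):
    """Flux (positive upward) from concentrations c1 at height z1 and c2 at z2 > z1."""
    denom = np.log(z2 / z1) - psi_h(z2 / L) + psi_h(z1 / L)
    return -VON_KARMAN * u_star * (c2 - c1) / denom
```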

  4. Energy harvesting from vibration of Timoshenko nanobeam under base excitation considering flexoelectric and elastic strain gradient effects

    Science.gov (United States)

    Managheb, S. A. M.; Ziaei-Rad, S.; Tikani, R.

    2018-05-01

    The coupling between polarization and strain gradients is called flexoelectricity. This phenomenon exists in all dielectrics with any symmetry. In this paper, energy harvesting from a Timoshenko beam is studied by considering the flexoelectric and strain gradient effects. General governing equations and related boundary conditions are derived using Hamilton's principle. The flexoelectric effects are defined by gradients of normal and shear strains, which leads to a more general model. The developed model also covers the classical Timoshenko beam theory by ignoring the flexoelectric effect. Based on the developed model, the effect of flexoelectricity on dielectric beams and energy harvesting from a cantilever beam under harmonic base excitation are investigated. A parametric study was conducted to evaluate the effects of flexoelectric coefficients, strain gradient constants, base acceleration and the attached tip mass on the energy harvested from a cantilever Timoshenko beam. Results show that flexoelectricity has a significant effect on the energy harvester performance, especially at submicron and nano scales. In addition, this effect makes the beam behave more softly and also changes the harvester's first resonance frequency. The present study provides guidance for flexoelectric nano-beam analysis and a method to evaluate the performance of energy harvesters in nano-dielectric devices.

  5. Base stock policies with degraded service to larger orders

    DEFF Research Database (Denmark)

    Du, Bisheng; Larsen, Christian

    We study an inventory system controlled by a base stock policy assuming a compound renewal demand process. We extend the base stock policy by incorporating rules for degrading the service of larger orders. Two specific rules are considered, denoted Postpone(q,t) and Split(q), respectively. The aim of using these rules is to achieve a given order fill rate of the regular orders (those of size less than or equal to the parameter q) having less inventory. We develop mathematical expressions for the performance measures order fill rate (of the regular orders) and average on-hand inventory level. Based...

  6. Quantification of pH gradients and implications in insulator-based dielectrophoresis of biomolecules.

    Science.gov (United States)

    Gencoglu, Aytug; Camacho-Alanis, Fernanda; Nguyen, Vi Thanh; Nakano, Asuka; Ros, Alexandra; Minerick, Adrienne R

    2011-09-01

    Direct current (DC) insulator-based dielectrophoretic (iDEP) microdevices have the potential to replace traditional alternating current dielectrophoretic devices for many cellular and biomolecular separation applications. The use of large DC fields suggests that electrode reactions and ion transport mechanisms can become important and impact ion distributions in the nanoliters of fluid in iDEP microchannels. This work tracked natural pH gradient formation in a 100 μm wide, 1 cm-long microchannel under applicable iDEP protein manipulation conditions. Using fluorescence microscopy with the pH-sensitive dye FITC Isomer I and the pH-insensitive dye TRITC as a reference, pH was observed to drop drastically in the microchannels within 1 min in a 3000 V/cm electric field; pH drops were observed in the range of 6-10 min within a 100 V/cm electric field and varied based on the buffer conductivity. To address concerns of dye transport impacting intensity data, electrokinetic mobilities of FITC were carefully examined and found to be (i) toward the anode and (ii) 1 to 2 orders of magnitude smaller than H⁺ transport which is responsible for pH drops from the anode toward the cathode. COMSOL simulations of ion transport showed qualitative agreement with experimental results. The results indicate that pH changes are severe enough and rapid enough to influence the net charge of a protein or cause aggregation during iDEP experiments. The results also elucidate reasonable time periods over which the phosphate buffering capacity can counter increases in H⁺ and OH⁻ for unperturbed iDEP manipulations. Copyright © 2011 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  7. Improved incorporation of strain gradient elasticity in the flexoelectricity based energy harvesting from nanobeams

    Science.gov (United States)

    Zhou, Yarong; Yang, Xu; Pan, Dongmei; Wang, Binglei

    2018-04-01

    Flexoelectricity, the coupling of strain gradient and polarization, exists in all the dielectric materials and numerous models have been proposed to study this mechanism. However, the contribution of strain gradient elasticity has typically been underestimated. In this work, inspired by the one-length scale parameter model developed by Deng et al. [19], we incorporate three length-scale parameters to carefully capture the contribution of the purely mechanical strain gradients on flexoelectricity. This three-parameter model is more flexible and could be applied to investigate the flexoelectricity in a wide range of complicated deformations. Accordingly, we carry out our analysis by studying a dielectric nanobeam under different boundary conditions. We show that the strain gradient elasticity and flexoelectricity have apparent size effects and significant influence on the electromechanical response. In particular, the strain gradient effects could significantly reduce the energy efficiency, indicating their importance and necessity. This work may be helpful in understanding the mechanism of flexoelectricity at the nanoscale and sheds light on the flexoelectricity energy harvesting.

  8. Evaluation of extreme ionospheric total electron content gradient associated with plasma bubbles for GNSS Ground-Based Augmentation System

    Science.gov (United States)

    Saito, S.; Yoshihara, T.

    2017-08-01

    Associated with plasma bubbles, extreme spatial gradients in ionospheric total electron content (TEC) were observed on 8 April 2008 at Ishigaki (24.3°N, 124.2°E, +19.6° magnetic latitude), Japan. The largest gradient was 3.38 TECU km-1 (total electron content unit, 1 TECU = 1016 el m-2), which is equivalent to an ionospheric delay gradient of 540 mm km-1 at the GPS L1 frequency (1.57542 GHz). This value is confirmed by using multiple estimating methods. The observed value exceeds the maximum ionospheric gradient that has ever been observed (412 mm km-1 or 2.59 TECU km-1) to be associated with a severe magnetic storm. It also exceeds the assumed maximum value (500 mm km-1 or 3.08 TECU km-1) which was used to validate the draft international standard for Global Navigation Satellite System (GNSS) Ground-Based Augmentation Systems (GBAS) to support Category II/III approaches and landings. The steepest part of this extreme gradient had a scale size of 5.3 km, and the front-normal velocities were estimated to be 71 m s-1 with a wavefront-normal direction of east-northeastward. The total width of the transition region from outside to inside the plasma bubble was estimated to be 35.3 km. The gradient of relatively small spatial scale size may fall between an aircraft and a GBAS ground subsystem and may be undetectable by both aircraft and ground.
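
    The unit conversion used above (TECU per km to mm of L1 delay per km) is easy to verify: the first-order ionospheric delay is 40.3 TEC / f^2 metres, so 1 TECU corresponds to roughly 0.16 m at L1.

```python
# Quick check of the delay-gradient conversion: first-order ionospheric delay is
# 40.3 * TEC / f^2 metres, so 1 TECU (1e16 el/m^2) is about 0.162 m at GPS L1.
F_L1 = 1.57542e9            # Hz
TECU = 1.0e16               # electrons per m^2

mm_per_tecu = 40.3 * TECU / F_L1**2 * 1000.0
print(mm_per_tecu)          # ~162 mm of L1 delay per TECU
print(mm_per_tecu * 3.38)   # ~550 mm/km, consistent with the ~540 mm/km quoted above
```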

  9. A condition-based maintenance policy for stochastically deteriorating systems

    International Nuclear Information System (INIS)

    Grall, A.; Berenguer, C.; Dieulle, L.

    2002-01-01

    We focus on the analytical modeling of a condition-based inspection/replacement policy for a stochastically and continuously deteriorating single-unit system. We consider both the replacement threshold and the inspection schedule as decision variables for this maintenance problem and we propose to implement the maintenance policy using a multi-level control-limit rule. In order to assess the performance of the proposed maintenance policy and to minimize the long run expected maintenance cost per unit time, a mathematical model for the maintained system cost is derived, supported by the existence of a stationary law for the maintained system state. Numerical experiments illustrate the performance of the proposed policy and confirm that the maintenance cost rate on an infinite horizon can be minimized by a joint optimization of the maintenance structure thresholds, or equivalently by a joint optimization of a system replacement threshold and the aperiodic inspection schedule
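
    The logic of such a policy can be illustrated with a small Monte Carlo sketch: a gamma-process deterioration level is observed at inspections, with preventive replacement above a threshold and corrective replacement on failure. The paper derives the cost rate analytically and optimizes an aperiodic inspection schedule; the periodic-inspection simulation below, with all its parameter values, is only an assumed illustration.

```python
# Hedged Monte-Carlo sketch of a condition-based inspection/replacement policy on a
# gamma-process deteriorating unit. The real policy uses an aperiodic schedule and
# an analytical cost model; this is a simplified periodic-inspection stand-in.
import numpy as np

def cost_rate(insp_interval, prev_threshold, fail_level=10.0, shape=0.5, scale=1.0,
              c_insp=5.0, c_prev=50.0, c_corr=200.0, horizon=1e5, seed=0):
    rng = np.random.default_rng(seed)
    t, x, cost = 0.0, 0.0, 0.0
    while t < horizon:
        t += insp_interval
        x += rng.gamma(shape * insp_interval, scale)   # deterioration since last inspection
        cost += c_insp
        if x >= fail_level:                            # failure found at inspection
            cost += c_corr
            x = 0.0
        elif x >= prev_threshold:                      # preventive replacement
            cost += c_prev
            x = 0.0
    return cost / t

print(cost_rate(insp_interval=2.0, prev_threshold=7.0))
```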

  10. School Improvement Policy--Site-Based Management

    Directory of Open Access Journals (Sweden)

    C. Kenneth Tanner

    1998-03-01

    Full Text Available Have administrative functions of principals changed in schools practicing site-based management (SBM) with shared governance? To deal with this issue we employed the Delphi technique and a panel of 24 experts from 14 states. The experts, which included educational specialists, researchers, writers, and elementary school principals, agreed that the implementation of SBM dramatically influences the roles of the principal in management/administration and leadership. Data revealed that the elementary principal's leadership role requires specialized skills to support shared governance, making it necessary to form professional development programs that adapt to innovations evolving from the implementation of SBM.

  11. Estimating Surface Downward Shortwave Radiation over China Based on the Gradient Boosting Decision Tree Method

    Directory of Open Access Journals (Sweden)

    Lu Yang

    2018-01-01

    Full Text Available Downward shortwave radiation (DSR) is an essential parameter in the terrestrial radiation budget and a necessary input for models of land-surface processes. Although several radiation products using satellite observations have been released, coarse spatial resolution and low accuracy limited their application. It is important to develop robust and accurate retrieval methods with higher spatial resolution. Machine learning methods may be powerful candidates for estimating the DSR from remotely sensed data because of their ability to perform adaptive, nonlinear data fitting. In this study, the gradient boosting regression tree (GBRT) was employed to retrieve DSR measurements with the ground observation data in China collected from the China Meteorological Administration (CMA) Meteorological Information Center and the satellite observations from the Advanced Very High Resolution Radiometer (AVHRR) at a spatial resolution of 5 km. The validation results of the DSR estimates based on the GBRT method in China at a daily time scale for clear sky conditions show an R2 value of 0.82 and a root mean square error (RMSE) value of 27.71 W·m−2 (38.38%). These values are 0.64 and 42.97 W·m−2 (34.57%), respectively, for cloudy sky conditions. The monthly DSR estimates were also evaluated using ground measurements. The monthly DSR estimates have an overall R2 value of 0.92 and an RMSE of 15.40 W·m−2 (12.93%). Comparison of the DSR estimates with the reanalyzed and retrieved DSR measurements from satellite observations showed that the estimated DSR is reasonably accurate but has a higher spatial resolution. Moreover, the proposed GBRT method has good scalability and is easy to apply to other parameter inversion problems by changing the parameters and training data.

  12. Full waveform inversion based on the optimized gradient and its spectral implementation

    KAUST Repository

    Wu, Zedong; Alkhalifah, Tariq Ali

    2014-01-01

    for the convergence are available, the high number of iterations required to approach a solution renders FWI as very expensive (especially in 3D). A spectral implementation in which the wavefields are extrapolated and gradients are calculated in the wavenumber domain

  13. Identification of small-scale discontinuities based on dip-oriented gradient energy entropy coherence estimation

    Science.gov (United States)

    Peng, Da; Yin, Cheng

    2017-09-01

    Locating small-scale discontinuities is one of the most challenging geophysical tasks; these subtle geological features are significant since they are often associated with subsurface petroleum traps. Subtle faults, fractures, unconformities, reef textures, channel boundaries, thin-bed boundaries and other structural and stratigraphic discontinuities have subtle geological edges which may provide lateral variation in seismic expression. Among the different geophysical techniques available, 3D seismic discontinuity attributes are particularly useful for highlighting discontinuities in the seismic data. Traditional seismic discontinuity attributes are sensitive to noise and are not very appropriate for detecting small-scale discontinuities. Thus, we present a dip-oriented gradient energy entropy (DOGEE) coherence estimation method to detect subtle faults and structural features. The DOGEE coherence estimation method uses the gradient structure tensor (GST) algorithm to obtain local dip information and construct a gradient correlation matrix to calculate gradient energy entropy. The proposed DOGEE coherence estimation method is robust to noise, and also improves the clarity of fault edges. It is effective for small-scale discontinuity characterisation and interpretation.

  14. Pilling evaluation of patterned fabrics based on a gradient field method

    Czech Academy of Sciences Publication Activity Database

    Techniková, L.; Tunák, M.; Janáček, Jiří

    2016-01-01

    Roč. 41, č. 1 (2016), s. 97-101 ISSN 0971-0426 Institutional support: RVO:67985823 Keywords : 3D surface reconstruction * fabric pilling * gradient field method * patterned fabric * pills detection Subject RIV: JS - Reliability ; Quality Management, Testing Impact factor: 0.430, year: 2016

  15. A fast nonlinear conjugate gradient based method for 3D frictional contact problems

    NARCIS (Netherlands)

    Zhao, J.; Vollebregt, E.A.H.; Oosterlee, C.W.

    2014-01-01

    This paper presents a fast numerical solver for a nonlinear constrained optimization problem, arising from a 3D frictional contact problem. It incorporates an active set strategy with a nonlinear conjugate gradient method. One novelty is to consider the tractions of each slip element in a polar

  16. A fast nonlinear conjugate gradient based method for 3D concentrated frictional contact problems

    NARCIS (Netherlands)

    J. Zhao (Jing); E.A.H. Vollebregt (Edwin); C.W. Oosterlee (Cornelis)

    2015-01-01

    This paper presents a fast numerical solver for a nonlinear constrained optimization problem, arising from 3D concentrated frictional shift and rolling contact problems with dry Coulomb friction. The solver combines an active set strategy with a nonlinear conjugate gradient method. One

  17. Full waveform inversion based on the optimized gradient and its spectral implementation

    KAUST Repository

    Wu, Zedong

    2014-01-01

    Full waveform inversion (FWI), despite its potential, suffers from difficulty converging to the desired solution due to the high nonlinearity of the objective function at conventional seismic frequencies. Even if frequencies necessary for the convergence are available, the high number of iterations required to approach a solution renders FWI very expensive (especially in 3D). A spectral implementation in which the wavefields are extrapolated and gradients are calculated in the wavenumber domain allows for a cleaner, more efficient implementation (no finite difference dispersion errors). In addition, we use not only an up- and down-going wavefield decomposition of the gradient to access the smooth background update, but also a right and left propagation decomposition to allow us to do that for large dips. To ensure that the extracted smooth component of the gradient has the right descent direction, we solve an optimization problem to search for the smoothest component that provides a negative (descent) gradient. Application to the Marmousi model shows that this approach works well with a linearly increasing initial velocity model and data with frequencies above 2 Hz.
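
    The spectral implementation mentioned above computes spatial derivatives in the wavenumber domain rather than by finite differences; a self-contained toy illustration of that single operation (not the authors' FWI code) is:

```python
# Sketch: dispersion-free spatial derivative computed in the wavenumber
# domain, the basic operation behind a spectral wavefield extrapolation.
# Grid size and the test field are assumptions for illustration only.
import numpy as np

def spectral_derivative(f, dx):
    k = 2.0 * np.pi * np.fft.fftfreq(f.size, d=dx)   # angular wavenumbers
    return np.real(np.fft.ifft(1j * k * np.fft.fft(f)))

x = np.linspace(0.0, 2.0 * np.pi, 256, endpoint=False)
f = np.sin(3.0 * x)
df = spectral_derivative(f, x[1] - x[0])
print(np.max(np.abs(df - 3.0 * np.cos(3.0 * x))))   # ~1e-13: no FD dispersion
```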

  18. Hidden policy ciphertext-policy attribute-based encryption with keyword search against keyword guessing attack

    Institute of Scientific and Technical Information of China (English)

    Shuo QIU; Jiqiang LIU; Yanfeng SHI; Rui ZHANG

    2017-01-01

    Attribute-based encryption with keyword search (ABKS) enables data owners to grant their search capabilities to other users by enforcing an access control policy over the outsourced encrypted data. However, existing ABKS schemes cannot guarantee the privacy of the access structures, which may contain some sensitive private information. Furthermore, resulting from the exposure of the access structures, ABKS schemes are susceptible to an off-line keyword guessing attack if the keyword space has a polynomial size. To solve these problems, we propose a novel primitive named hidden policy ciphertext-policy attribute-based encryption with keyword search (HP-CPABKS). With our primitive, the data user is unable to search on encrypted data and learn any information about the access structure if his/her attribute credentials cannot satisfy the access control policy specified by the data owner. We present a rigorous selective security analysis of the proposed HP-CPABKS scheme, which simultaneously keeps the indistinguishability of the keywords and the access structures. Finally, the performance evaluation verifies that our proposed scheme is efficient and practical.

  19. Photometric correction for an optical CCD-based system based on the sparsity of an eight-neighborhood gray gradient.

    Science.gov (United States)

    Zhang, Yuzhong; Zhang, Yan

    2016-07-01

    In an optical measurement and analysis system based on a CCD, due to the existence of optical vignetting and natural vignetting, photometric distortion, in which the intensity falls off away from the image center, severely affects the subsequent processing and measuring precision. To deal with this problem, an easy and straightforward method for photometric distortion correction is presented in this paper. This method introduces a simple polynomial fitting model of the photometric distortion function and employs a particle swarm optimization algorithm to obtain the model parameters by minimizing an eight-neighborhood gray gradient. Compared with conventional calibration methods, this method can obtain the profile information of the photometric distortion from only a single common image captured by the optical CCD-based system, with no need for a uniform-luminance area source as a standard reference or for prior knowledge of the relevant optical and geometric parameters. To illustrate the applicability of this method, numerical simulations and photometric distortions with different lens parameters are evaluated using this method in this paper. Moreover, the application example of temperature field correction for casting billets also demonstrates the effectiveness of this method. The experimental results show that the proposed method achieves a maximum absolute error for vignetting estimation of 0.0765 and a relative error for vignetting estimation from different background images of 3.86%.
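
    To make the objective concrete, the sketch below evaluates an eight-neighborhood gray gradient for a polynomially vignetting-corrected image; the radial polynomial form, its coefficients and the toy image are assumptions, and the particle swarm search over the coefficients is not reproduced.

```python
# Sketch: the eight-neighborhood gray gradient used as an objective for
# vignetting correction. The polynomial model order, its coefficients and the
# toy image are assumptions; the paper minimizes this objective with particle
# swarm optimization, which is not reproduced here.
import numpy as np

def eight_neighborhood_gradient(img):
    total = np.zeros_like(img, dtype=float)
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            if dy == 0 and dx == 0:
                continue
            shifted = np.roll(np.roll(img, dy, axis=0), dx, axis=1)
            total += np.abs(img - shifted)
    return total.sum()

def corrected(img, coeffs):
    # Radial polynomial vignetting model (assumed form), divided out of the image.
    h, w = img.shape
    yy, xx = np.mgrid[0:h, 0:w]
    r2 = ((yy - h / 2) ** 2 + (xx - w / 2) ** 2) / ((h / 2) ** 2 + (w / 2) ** 2)
    vignette = 1.0 + coeffs[0] * r2 + coeffs[1] * r2 ** 2 + coeffs[2] * r2 ** 3
    return img / np.clip(vignette, 1e-6, None)

img = np.random.default_rng(1).uniform(50, 200, size=(64, 64))
print(eight_neighborhood_gradient(corrected(img, [-0.3, 0.05, 0.0])))
```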

  20. A dual justification for science-based policy-making

    DEFF Research Database (Denmark)

    Pedersen, David Budtz

    2014-01-01

    Science-based policy-making has grown ever more important in recent years, in parallel with the dramatic increase in the complexity and uncertainty of the ways in which science and technology interact with society and economy at the national, regional and global level. Installing a proper framewo...

  1. Evidence-based ICT Policy for Development and Innovation | CRDI ...

    International Development Research Centre (IDRC) Digital Library (Canada)

    Evidence-based ICT Policy for Development and Innovation. The cost of access to information and communication technologies (ICTs) in Africa remains the major impediment to the participation of Africans in the networked society. While Africa is the region with the fastest growing number of mobile phone subscribers in the ...

  2. Investigation of Size-Dependency in Free-Vibration of Micro-Resonators Based on the Strain Gradient Theory

    Directory of Open Access Journals (Sweden)

    R. Vatankhah

    Full Text Available This paper investigates the vibration behavior of micro-resonators based on the strain gradient theory, a non-classical continuum theory capable of capturing the size effect appearing in micro-scale structures. The micro-resonator is modeled as a clamped-clamped micro-beam with an attached mass subjected to an axial force. The governing equations of motion and both classical and non-classical sets of boundary conditions are developed based on the strain gradient theory. The normalized natural frequency of the micro-resonator is evaluated and the influences of various parameters are assessed. In addition, the current results are compared to those of the classical and modified couple stress continuum theories.

  3. PROSPECTS OF THE REGIONAL INTEGRATION POLICY BASED ON CLUSTER FORMATION

    Directory of Open Access Journals (Sweden)

    Elena Tsepilova

    2018-01-01

    Full Text Available The purpose of this article is to develop the theoretical foundations of regional integration policy and to determine its prospects on the basis of cluster formation. The authors use research methods such as systematization, comparative and complex analysis, synthesis, and the statistical method. Within the framework of the research, the concept of regional integration policy is specified, and its integration core – the cluster – is identified. The authors work out an algorithm of regional clustering intended to ensure economic growth and tax revenue. Measures are proposed to optimize the organizational mechanism of interaction between the participants of a territorial cluster and the authorities, so as to ensure the effective functioning of clusters, including taxation clusters. Based on the results of studying the existing methods for assessing the effectiveness of cluster policy, the authors propose their own approach to evaluating the consequences of implementing regional integration policy, for which a list of quantitative and qualitative indicators is defined. The article systematizes the experience and results of the cluster policies of certain European countries, which makes it possible to determine the prospects and synergetic effects of developing clusters as an integration foundation of regional policy in the Russian Federation. The authors analyse the activity of cluster formations using the example of the Rostov region – a leader in creating conditions for cluster policy development in the Southern Federal District – where 11 clusters and cluster initiatives are developing. As a result, the authors propose measures to support the existing clusters and to create new ones.

  4. A Novel Ship Detection Method Based on Gradient and Integral Feature for Single-Polarization Synthetic Aperture Radar Imagery

    Directory of Open Access Journals (Sweden)

    Hao Shi

    2018-02-01

    Full Text Available With the rapid development of remote sensing technologies, SAR satellites like China’s Gaofen-3 satellite have more imaging modes and higher resolution. With the availability of high-resolution SAR images, automatic ship target detection has become an important topic in maritime research. In this paper, a novel ship detection method based on gradient and integral features is proposed. This method is mainly composed of three steps. First, in the preprocessing step, a filter is employed to smooth the clutters, and the smoothing effect can be adaptively adjusted according to the statistical information of the sub-window. Thus, it can retain details while achieving noise suppression. Second, in the candidate area extraction, a sea-land segmentation method based on gradient enhancement is presented. The integral image method is employed to accelerate computation. Finally, in the ship target identification step, a feature extraction strategy based on Haar-like gradient information and a Radon transform is proposed. This strategy decreases the number of templates found in traditional Haar-like methods. Experiments were performed using Gaofen-3 single-polarization SAR images, and the results showed that the proposed method has high detection accuracy and rapid computational efficiency. In addition, this method has the potential for on-board processing.
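
    The integral-image acceleration mentioned in the abstract reduces any rectangular sum to four lookups; a minimal sketch (toy data, not the paper's implementation) is:

```python
# Sketch: integral image and constant-time rectangular sums, the acceleration
# used for the gradient/Haar-like features described above. Toy data only.
import numpy as np

def integral_image(img):
    # ii[i, j] = sum of img[:i, :j]; the extra leading row/column of zeros
    # makes the rectangle-sum formula branch-free.
    ii = np.zeros((img.shape[0] + 1, img.shape[1] + 1), dtype=np.float64)
    ii[1:, 1:] = np.cumsum(np.cumsum(img, axis=0), axis=1)
    return ii

def rect_sum(ii, r0, c0, r1, c1):
    # Sum over img[r0:r1, c0:c1] using four lookups.
    return ii[r1, c1] - ii[r0, c1] - ii[r1, c0] + ii[r0, c0]

img = np.arange(25, dtype=float).reshape(5, 5)
ii = integral_image(img)
assert rect_sum(ii, 1, 1, 4, 4) == img[1:4, 1:4].sum()
```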

  5. A pH-Sensing Optode for Mapping Spatiotemporal Gradients in 3D Paper-Based Cell Cultures.

    Science.gov (United States)

    Kenney, Rachael M; Boyce, Matthew W; Whitman, Nathan A; Kromhout, Brenden P; Lockett, Matthew R

    2018-02-06

    Paper-based cultures are an emerging platform for preparing 3D tissue-like structures. Chemical gradients can be imposed upon these cultures, generating microenvironments similar to those found in poorly vascularized tumors. There is increasing evidence that the tumor microenvironment is responsible for promoting drug resistance and increased invasiveness. Acidosis, or the acidification of the extracellular space, is particularly important in promoting these aggressive cancer phenotypes. To better understand how cells respond to acidosis there is a need for 3D culture platforms that not only model relevant disease states but also contain sensors capable of quantifying small molecules in the extracellular environment. In this work, we describe pH-sensing optodes that are capable of generating high spatial and temporal resolution maps of pH gradients in paper-based cultures. This sensor was fabricated by suspending microparticles containing pH-sensitive (fluorescein) and pH-insensitive (diphenylanthracene) dyes in a polyurethane hydrogel, which was then coated onto a transparent film. The pH-sensing films have a fast response time, are reversible, stable in long-term culture environments, have minimal photobleaching, and are not cytotoxic. These films have a pKa of 7.61 ± 0.04 and are sensitive in the pH range corresponding to normal and tumorigenic tissues. With these optodes, we measured the spatiotemporal evolution of pH gradients in paper-based tumor models.

  6. Exploration Opportunity Search of Near-earth Objects Based on Analytical Gradients

    Science.gov (United States)

    Ren, Yuan; Cui, Ping-Yuan; Luan, En-Jie

    2008-07-01

    The problem of searching for exploration opportunities for near-earth minor objects is investigated. For rendezvous missions, the analytical gradients of the performance index with respect to the free parameters are derived using the calculus of variations and the theory of the state-transition matrix. After randomly generating some initial guesses in the search space, the performance index is optimized, guided by the analytical gradients, leading to local minimum points that represent potential launch opportunities. This method not only keeps the global-search property of the traditional method, but also avoids its blindness, thereby greatly increasing the computing speed.

  7. Strong source heat transfer simulations based on a GalerKin/Gradient - least - squares method

    International Nuclear Information System (INIS)

    Franca, L.P.; Carmo, E.G.D. do.

    1989-05-01

    Heat conduction problems with temperature-dependent strong sources are modeled by an equation with a Laplacian term, a linear term and a given source distribution term. When the linear temperature-dependent source term is much larger than the Laplacian term, we have a singular perturbation problem. In this case, boundary layers are formed to satisfy the Dirichlet boundary conditions. Although this is an elliptic equation, the standard Galerkin method solution is contaminated by spurious oscillations in the neighborhood of the boundary layers. Herein we employ a Galerkin/Gradient-least-squares method which eliminates all pathological phenomena of the Galerkin method. The method is constructed by adding to the Galerkin method a mesh-dependent term obtained from the least-squares form of the gradient of the Euler-Lagrange equation. Error estimates and numerical simulations in one and multiple dimensions are given that attest to the good stability and accuracy properties of the method.

  8. Gradient-index phononic crystal lens-based enhancement of elastic wave energy harvesting

    Science.gov (United States)

    Tol, S.; Degertekin, F. L.; Erturk, A.

    2016-08-01

    We explore the enhancement of structure-borne elastic wave energy harvesting, both numerically and experimentally, by exploiting a Gradient-Index Phononic Crystal Lens (GRIN-PCL) structure. The proposed GRIN-PCL is formed by an array of blind holes with different diameters on an aluminum plate, where the blind hole distribution is tailored to obtain a hyperbolic secant gradient profile of refractive index guided by finite-element simulations of the lowest asymmetric mode Lamb wave band diagrams. Under plane wave excitation from a line source, the experimentally measured wave field validates the numerical simulation of wave focusing within the GRIN-PCL domain. A piezoelectric energy harvester disk located at the first focus of the GRIN-PCL yields an order of magnitude larger power output as compared to the baseline case of energy harvesting without the GRIN-PCL on the uniform plate counterpart.
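
    The hyperbolic secant index profile that the blind-hole distribution is tailored to approximate can be written down directly; the sketch below evaluates n(y) = n0·sech(αy) with illustrative values of n0, α and the aperture half-width, which are assumptions rather than the fabricated lens parameters.

```python
# Sketch: hyperbolic secant refractive-index profile of a GRIN lens,
# n(y) = n0 * sech(alpha * y), evaluated across the lens aperture.
# n0, alpha and the aperture half-width are illustrative assumptions.
import numpy as np

n0, alpha, half_width = 1.0, 0.05, 40.0   # alpha in 1/mm, half_width in mm (assumed)
y = np.linspace(-half_width, half_width, 9)
n = n0 / np.cosh(alpha * y)               # sech profile
for yi, ni in zip(y, n):
    print(f"y = {yi:6.1f} mm  ->  n = {ni:.3f}")
```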

  9. INVESTIGATION OF FISCAL AND BUDGETARY POLICIES BASED ON ECONOMIC THEORIES

    Directory of Open Access Journals (Sweden)

    EMILIA CAMPEANU

    2011-04-01

    Full Text Available Empirical analysis of fiscal and budgetary policies cannot be achieved without first knowing how they are viewed in economic theories. This approach is important to indicate the position and implications of fiscal and budgetary policy tools in economic theory, considering their major differences. Therefore, the paper aims to investigate fiscal and budgetary policies based on economic theories such as neoclassical, Keynesian and neo-Keynesian theory in order to indicate their divergent points. Once these approaches are known at the level of economic theory, it is easier to establish the appropriate measures, taking into consideration the framing of a country's economy in a certain pattern. This work was supported by the European Social Fund through Sectoral Operational Programme Human Resources Development 2007-2013, project number POSDRU/89/1.5/S/59184 „Performance and excellence in postdoctoral research in Romanian economics science domain" (contract no. 0501/01.11.2010).

  10. Conjugate gradient method for phase retrieval based on the Wirtinger derivative.

    Science.gov (United States)

    Wei, Zhun; Chen, Wen; Qiu, Cheng-Wei; Chen, Xudong

    2017-05-01

    A conjugate gradient Wirtinger flow (CG-WF) algorithm for phase retrieval is proposed in this paper. It is shown that, compared with recently reported Wirtinger flow and its modified methods, the proposed CG-WF algorithm is able to dramatically accelerate the convergence rate while keeping the dominant computational cost of each iteration unchanged. We numerically illustrate the effectiveness of our method in recovering 1D Gaussian signals and 2D natural color images under both Gaussian and coded diffraction pattern models.
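
    A hedged sketch of the Wirtinger gradient underlying such methods is shown below, using plain gradient descent on a small Gaussian-measurement problem started near the true signal; the conjugate-gradient acceleration and spectral initialization of the actual CG-WF algorithm are not reproduced, and the step size is a heuristic assumption.

```python
# Sketch: the Wirtinger gradient of the intensity-only loss used in phase
# retrieval, with a few plain gradient-descent steps. The paper accelerates
# these steps with conjugate gradients and uses spectral initialization;
# here we simply start near the true signal to illustrate local descent.
import numpy as np

rng = np.random.default_rng(0)
n, m = 32, 256
A = (rng.normal(size=(m, n)) + 1j * rng.normal(size=(m, n))) / np.sqrt(2)
z_true = rng.normal(size=n) + 1j * rng.normal(size=n)
y = np.abs(A @ z_true) ** 2                      # intensity-only measurements

def wirtinger_grad(z):
    Az = A @ z
    return A.conj().T @ ((np.abs(Az) ** 2 - y) * Az) / m

z = z_true + 0.1 * (rng.normal(size=n) + 1j * rng.normal(size=n))
step = 0.1 / np.mean(y)                          # heuristic step size (assumption)
for _ in range(500):
    z -= step * wirtinger_grad(z)

# Phase-aligned relative error (the global phase is unrecoverable); it should
# fall well below the initial 0.1 perturbation if the iterates converge locally.
phase = np.vdot(z, z_true) / abs(np.vdot(z, z_true))
print(np.linalg.norm(z * phase - z_true) / np.linalg.norm(z_true))
```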

  11. Study of flow instability in a centrifugal fan based on energy gradient theory

    International Nuclear Information System (INIS)

    Xiao, Meina; Dou, Hua-Shu; Ma, Xiaoyang; Xiao, Qing; Chen, Yongning; He, Haijiang; Ye, Xinxue

    2016-01-01

    Flow instability in a centrifugal fan was studied using energy gradient theory. Numerical simulation was performed for the three-dimensional turbulent flow field in a centrifugal fan. The flow is governed by the three-dimensional incompressible Navier-Stokes equations coupled with the RNG k-ε turbulence model. The finite volume method was used to discretize the governing equations, and the semi-implicit method for pressure-linked equations (SIMPLE) algorithm is employed to iterate the system of equations. The interior flow field in the centrifugal fan and the distribution of the energy gradient function K are obtained at different flow rates. According to the energy gradient method, the area with a larger value of K is where the flow loses stability more easily. The results show that instability is more likely to be generated in the regions of the impeller outlet and the volute tongue. The air flow near the hub is more stable than that near the shroud. That is due to the influences of variations of the velocity and the inlet angle along the axial direction. With the decrease of the flow rate, the instability zone in a blade channel moves from the outlet to the impeller inlet, and the unstable regions in different channels develop in the direction opposite to the rotation of the impeller.

  12. Dexamethasone levels and base to apex concentration gradients in scala tympani perilymph following intracochlear delivery in the guinea pig

    Science.gov (United States)

    Hahn, Hartmut; Salt, Alec N.; Biegner, Thorsten; Kammerer, Bernd; Delabar, Ursular; Hartsock, Jared; Plontke, Stefan K.

    2012-01-01

    Hypothesis: To determine whether intracochlearly applied dexamethasone will lead to better control of drug levels, higher peak concentrations and lower base-to-apex concentration gradients in scala tympani (ST) of the guinea pig than after intratympanic (round window, RW) application. Background: Local application of drugs to the RW results in substantial variation of intracochlear drug levels and significant base-to-apex concentration gradients in ST. Methods: Two μL of dexamethasone-phosphate (10 mg/mL) were injected into ST either through the RW membrane, which was covered with 1% sodium hyaluronate gel, or through a cochleostomy with a fluid-tight seal of the micropipette. Perilymph was sequentially sampled from the apex at a single time point for each animal, at 20, 80, or 200 min after the injection ended. Results were mathematically interpreted by means of an established computer model and compared with prior experiments performed by our group with the same experimental techniques but using intratympanic applications. Results: Single intracochlear injections over 20 min resulted in approximately ten times higher peak concentrations (on average) than 2-3 hours of intratympanic application to the round window niche. Intracochlear drug levels were less variable and could be measured for at least up to 220 min. Concentration gradients along scala tympani were less pronounced. The remaining variability in intracochlear drug levels was attributable to perilymph and drug leak from the injection site. Conclusion: With significantly higher, less variable drug levels and smaller base-to-apex concentration gradients, intracochlear applications have advantages over intratympanic injections. For further development of this technique, it is of importance to control leaks of perilymph and drug from the injection site and to evaluate its clinical feasibility and associated risks. PMID:22588238

  13. $L_{0}$ Gradient Projection.

    Science.gov (United States)

    Ono, Shunsuke

    2017-04-01

    Minimizing the L0 gradient, the number of non-zero gradients of an image, together with a quadratic data-fidelity to an input image has been recognized as a powerful edge-preserving filtering method. However, L0 gradient minimization has an inherent difficulty: a user-given parameter controlling the degree of flatness does not have a physical meaning, since the parameter merely balances the relative importance of the L0 gradient term against the quadratic data-fidelity term. As a result, setting the parameter is a troublesome task in L0 gradient minimization. To circumvent this difficulty, we propose a new edge-preserving filtering method with a novel use of the L0 gradient. Our method is formulated as the minimization of the quadratic data-fidelity subject to the hard constraint that the L0 gradient is less than a user-given parameter α. This strategy is much more intuitive than L0 gradient minimization because the parameter α has a clear meaning: the L0 gradient value of the output image itself, so that one can directly impose a desired degree of flatness by α. We also provide an efficient algorithm based on the so-called alternating direction method of multipliers for computing an approximate solution of the nonconvex problem, where we decompose it into two subproblems and derive closed-form solutions to them. The advantages of our method are demonstrated through extensive experiments.
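
    The quantity bounded by the hard constraint is simply the number of non-zero image gradients; a small sketch of computing that L0 gradient value (forward differences, toy image) is:

```python
# Sketch: computing the L0 gradient of an image, i.e. the number of pixels
# whose (horizontal, vertical) gradient is non-zero. This is the quantity the
# hard constraint in the method above bounds by the user parameter alpha.
import numpy as np

def l0_gradient(img, tol=0.0):
    dx = np.diff(img, axis=1, append=img[:, -1:])   # forward differences
    dy = np.diff(img, axis=0, append=img[-1:, :])
    return int(np.count_nonzero(np.hypot(dx, dy) > tol))

flat = np.ones((8, 8))
flat[:, 4:] = 2.0                                   # one vertical edge
print(l0_gradient(flat))                            # only the edge pixels count
```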

  14. Time-domain full waveform inversion using the gradient preconditioning based on seismic wave energy: Application to the South China Sea

    KAUST Repository

    Mengxuan, Zhong

    2017-06-01

    The gradient preconditioning algorithms based on Hessian matrices in time-domain full waveform inversion (FWI) are widely used now, but they consume a lot of memory and do not fit the FWI of large models or actual seismic data well. To avoid the huge storage consumption, the gradient preconditioning approach based on seismic wave energy has been proposed; it simulates the "approximated wave field" with the acoustic wave equation and uses the energy of the simulated wavefield to precondition the gradient. The method does not require computing and storing the Hessian matrix or its inverse and can effectively eliminate the effect caused by geometric diffusion and uneven illumination on the gradient. The result of experiments in this article with field data from the South China Sea confirms that the time-domain FWI using the gradient preconditioning based on seismic wave energy (GPWE) can achieve higher inversion accuracy for the deep high-velocity model and its underlying strata.
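
    Schematically, the preconditioning amounts to a pointwise division of the raw gradient by the accumulated wavefield energy; the sketch below shows that operation on synthetic arrays, with the stabilization term as an assumption (the wave-equation modelling itself is not reproduced).

```python
# Sketch of the idea: divide the FWI gradient pointwise by the accumulated
# source-wavefield energy to compensate for geometric spreading and uneven
# illumination. Arrays are synthetic; eps is an assumed stabilization term.
import numpy as np

def precondition_gradient(grad, wavefield_snapshots, eps=1e-3):
    # Accumulated energy: sum of squared wavefield amplitudes over time.
    energy = np.sum(wavefield_snapshots ** 2, axis=0)
    return grad / (energy + eps * energy.max())

rng = np.random.default_rng(0)
snapshots = rng.normal(size=(50, 64, 64))   # (time, z, x) synthetic wavefield
grad = rng.normal(size=(64, 64))            # synthetic raw gradient
print(precondition_gradient(grad, snapshots).shape)
```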

  15. Development of a New Gradient Based Strain-Criterion for Prediction of Bendability in Quality Assurance and FEA

    Science.gov (United States)

    Denninger, Ralf; Liewald, Mathias; Sindel, Manfred

    2011-08-01

    Numerical simulation systems are increasingly used in the process development of car bodies. Nowadays, the hemming process is also optimised in FEA. Thus, analysing process robustness calls for a failure criterion for the specific bending and hemming load condition. For that purpose, the experimental determination of bendability under various pre-load conditions that occur in real production, e.g. during deep drawing in the press shop, is the subject of this contribution. Using these experimental results, a new approach for a strain-gradient based failure criterion for bending operations is presented to optimise bendability prediction. The bending-strain-gradient approach can be used both in production-related quality assurance departments and for simulative process design or process validation in vehicle manufacturing planning.

  16. Sustainable development based energy policy making frameworks, a critical review

    International Nuclear Information System (INIS)

    Meyar-Naimi, H.; Vaez-Zadeh, S.

    2012-01-01

    This paper, in the first step, presents an overview of the origination and formulation of the sustainable development (SD) concept and the related policy making frameworks. The frameworks include Pressure–State–Response (PSR), Driving Force–State–Response (DSR), Driving Force–Pressure–State–Impact–Response (DPSIR), Driving Force–Pressure–State–Effect–Action (DPSEA) and Driving Force–Pressure–State–Exposure–Effect–Action (DPSEEA). In this regard, 40 case studies using the reviewed frameworks reported during 1994–2011 are surveyed. Then, their application area and application intensity are investigated. It is concluded that PSR has the highest application intensity, while DPSEA and DPSEEA have the lowest. Moreover, using the Analytical Hierarchy Process (AHP) with a set of criteria, it is shown that PSR and DPSIR have the highest and lowest priorities. Finally, the shortcomings of the frameworks' applications are discussed. The paper is helpful in selecting appropriate policy making frameworks and presents some hints for future research in the area toward developing more comprehensive models, especially for sustainable electric energy policy making. - Highlights: ► The origination and formulation of the sustainable development (SD) concept is reviewed. ► SD based frameworks (PSR, DSR, DPSIR, DPSEA and DPSEEA) are also reviewed. ► Then, the frameworks' application area and intensity in recent years are investigated. ► Finally, the SD concept and the SD based frameworks are criticized. ► It will be helpful for developing more comprehensive energy policy making models.

  17. Automatic differentiation for gradient-based optimization of radiatively heated microelectronics manufacturing equipment

    Energy Technology Data Exchange (ETDEWEB)

    Moen, C.D.; Spence, P.A.; Meza, J.C.; Plantenga, T.D.

    1996-12-31

    Automatic differentiation is applied to the optimal design of microelectronic manufacturing equipment. The performance of nonlinear, least-squares optimization methods is compared between numerical and analytical gradient approaches. The optimization calculations are performed by running large finite-element codes in an object-oriented optimization environment. The Adifor automatic differentiation tool is used to generate analytic derivatives for the finite-element codes. The performance results support previous observations that automatic differentiation becomes beneficial as the number of optimization parameters increases. The increase in speed, relative to numerical differences, has a limited value and results are reported for two different analysis codes.
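
    As a toy stand-in for the analytic-versus-numerical comparison described above, the sketch below differentiates a made-up least-squares objective with JAX's automatic differentiation and checks it against forward finite differences; the objective and parameters are hypothetical, and the ADIFOR/Fortran specifics are not reproduced.

```python
# Sketch: automatic differentiation vs. finite differences for a least-squares
# objective, illustrating the kind of analytic gradients ADIFOR provides for
# the finite-element codes. The objective here is a made-up stand-in.
import jax
import jax.numpy as jnp

def objective(params, x, target):
    # Hypothetical model: a quadratic profile in position.
    pred = params[0] + params[1] * x + params[2] * x ** 2
    return jnp.sum((pred - target) ** 2)

x = jnp.linspace(0.0, 1.0, 20)
target = 1.0 + 2.0 * x                                # synthetic data
params = jnp.array([0.5, 0.5, 0.5])

analytic = jax.grad(objective)(params, x, target)     # automatic differentiation

eps = 1e-3                                            # forward finite differences
numeric = jnp.array([
    (objective(params.at[i].add(eps), x, target) - objective(params, x, target)) / eps
    for i in range(3)
])
print(analytic, numeric)   # approximate agreement (float32 finite differences)
```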

  18. Gradient-based estimation of Manning's friction coefficient from noisy data

    KAUST Repository

    Calo, Victor M.

    2013-01-01

    We study the numerical recovery of Manning's roughness coefficient for the diffusive wave approximation of the shallow water equation. We describe a conjugate gradient method for the numerical inversion. Numerical results for one-dimensional models are presented to illustrate the feasibility of the approach. Also we provide a proof of the differentiability of the weak form with respect to the coefficient as well as the continuity and boundedness of the linearized operator under reasonable assumptions using the maximal parabolic regularity theory. © 2012 Elsevier B.V. All rights reserved.

  19. Accelerated gradient methods for total-variation-based CT image reconstruction

    DEFF Research Database (Denmark)

    Jørgensen, Jakob Heide; Jensen, Tobias Lindstrøm; Hansen, Per Christian

    2011-01-01

    Total-variation (TV)-based reconstruction can in principle be found by any optimization method, but in practice the large scale of the systems arising in CT image reconstruction precludes the use of memory-demanding methods such as Newton's method. The simple gradient method has much lower memory requirements, but exhibits slow convergence. The accelerated gradient methods considered here incorporate several heuristics from the optimization literature, such as Barzilai-Borwein (BB) step size selection and nonmonotone line search; the latter uses a cleverly chosen sequence of auxiliary points to achieve a better convergence rate. The methods are memory efficient and equipped with a stopping criterion.
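
    A compact illustration of the Barzilai-Borwein step-size heuristic named above, applied to a toy strongly convex quadratic (the TV-regularized CT objective and the nonmonotone line search are not reproduced):

```python
# Sketch: gradient descent with Barzilai-Borwein (BB) step sizes on a toy
# strongly convex quadratic, illustrating the heuristic named above.
import numpy as np

rng = np.random.default_rng(0)
M = rng.normal(size=(30, 20))
A = M.T @ M + np.eye(20)          # SPD Hessian of the quadratic
b = rng.normal(size=20)

def grad(x):
    # Gradient of 0.5 * x^T A x - b^T x.
    return A @ x - b

x = np.zeros(20)
g = grad(x)
step = 1e-3                       # conservative first step
for _ in range(100):
    x_new = x - step * g
    g_new = grad(x_new)
    s, yv = x_new - x, g_new - g
    step = float(s @ s / (s @ yv))   # BB1 step size
    x, g = x_new, g_new

print(np.linalg.norm(grad(x)))    # gradient norm after the BB iterations (small)
```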

  20. Identification techniques for phenomenological models of hysteresis based on the conjugate gradient method

    International Nuclear Information System (INIS)

    Andrei, Petru; Oniciuc, Liviu; Stancu, Alexandru; Stoleriu, Laurentiu

    2007-01-01

    An identification technique for the parameters of phenomenological models of hysteresis is presented. The basic idea of our technique is to set up a system of equations for the parameters of the model as a function of known quantities on the major or minor hysteresis loops (e.g. coercive force, susceptibilities at various points, remanence), or other magnetization curves. This system of equations can be either over or underspecified and is solved by using the conjugate gradient method. Numerical results related to the identification of parameters in the Energetic, Jiles-Atherton, and Preisach models are presented

  1. Gradient-based estimation of Manning's friction coefficient from noisy data

    KAUST Repository

    Calo, Victor M.; Collier, Nathan; Gehre, Matthias; Jin, Bangti; Radwan, Hany G.; Santillana, Mauricio

    2013-01-01

    We study the numerical recovery of Manning's roughness coefficient for the diffusive wave approximation of the shallow water equation. We describe a conjugate gradient method for the numerical inversion. Numerical results for one-dimensional models are presented to illustrate the feasibility of the approach. Also we provide a proof of the differentiability of the weak form with respect to the coefficient as well as the continuity and boundedness of the linearized operator under reasonable assumptions using the maximal parabolic regularity theory. © 2012 Elsevier B.V. All rights reserved.

  2. A physics-based model for maintenance of the pH gradient in the gastric mucus layer.

    Science.gov (United States)

    Lewis, Owen L; Keener, James P; Fogelson, Aaron L

    2017-12-01

    It is generally accepted that the gastric mucus layer provides a protective barrier between the lumen and the mucosa, shielding the mucosa from acid and digestive enzymes and preventing autodigestion of the stomach epithelium. However, the precise mechanisms that contribute to this protective function are still up for debate. In particular, it is not clear what physical processes are responsible for transporting hydrogen protons, secreted within the gastric pits, across the mucus layer to the lumen without acidifying the environment adjacent to the epithelium. One hypothesis is that hydrogen may be bound to the mucin polymers themselves as they are convected away from the mucosal surface and eventually degraded in the stomach lumen. It is also not clear what mechanisms prevent hydrogen from diffusing back toward the mucosal surface, thereby lowering the local pH. In this work we investigate a physics-based model of ion transport within the mucosal layer based on a Nernst-Planck-like equation. Analysis of this model shows that the mechanism of transporting protons bound to the mucus gel is capable of reproducing the trans-mucus pH gradients reported in the literature. Furthermore, when coupled with ion exchange at the epithelial surface, our analysis shows that bicarbonate secretion alone is capable of neutralizing the epithelial pH, even in the face of enormous diffusive gradients of hydrogen. Maintenance of the pH gradient is found to be robust to a wide array of perturbations in both physiological and phenomenological model parameters, suggesting a robust physiological control mechanism. NEW & NOTEWORTHY This work combines modeling techniques based on physical principles, as well as novel numerical simulations to test the plausibility of one hypothesized mechanism for proton transport across the gastric mucus layer. Results show that this mechanism is able to maintain the extreme pH gradient seen in in vivo experiments and suggests a highly robust regulation

  3. Assessing of energy policies based on Turkish agriculture:

    International Nuclear Information System (INIS)

    Sayin, Cengiz; Nisa Mencet, M.; Ozkan, Burhan

    2005-01-01

    In this study, the current energy status of Turkey and the effects of national energy policies on Turkish agricultural support policies are discussed for both current and future requirements. Turkey is an energy-importing country producing 30 mtoe (million tons of oil equivalent) of energy but consuming 80 mtoe. The energy import ratio of Turkey is 65-70%, and the majority of this import is based on petroleum and natural gas. Furthermore, while world energy demand increases by 1.8% annually, Turkey's energy demand increases by about 8%. Although energy consumption in agriculture is much lower than in the other sectors in Turkey, energy use as both input and output of the agricultural sector is a very important issue due to its large agricultural potential and rural area. Total agricultural land area is 27.8 million hectares, and about 66.5% of this area is devoted to cereal production. On the other hand, Turkey has over 4 million agricultural farm holdings, of which 70-75% are engaged in cereal production. Machinery expenses, mainly diesel, constitute 30-50% of total variable expenses in cereal production costs. It is observed that energy policies pursued in agriculture have been directly affected by diesel prices in Turkey. Therefore, support policy tools for using diesel and electricity in agriculture are being pursued by the Turkish government.

  4. High gradient magnetic separation of upconverting lanthanide nanophosphors based on their intrinsic paramagnetism

    Energy Technology Data Exchange (ETDEWEB)

    Arppe, Riikka, E-mail: riikka.arppe@utu.fi; Salovaara, Oskari; Mattsson, Leena; Lahtinen, Satu; Valta, Timo; Riuttamaeki, Terhi; Soukka, Tero [University of Turku, Department of Biotechnology (Finland)

    2013-09-15

    Photon upconverting nanophosphors (UCNPs) have the unique luminescent property of converting low-energy infrared light into visible emission which can be widely utilized in nanoreporter and imaging applications. For the use as reporters in these applications, the UCNPs must undergo a series of surface modification and bioconjugation reactions. Efficient purification methods are required to remove the excess reagents and biomolecules from the nanophosphor solution after each step to yield highly responsive reporters for sensitive bioanalytical assays. However, as the particle size of the UCNPs approaches the size of biomolecules, the handling of these reporters becomes cumbersome with traditional purification methods such as centrifugation. Here we introduce a novel approach for purification of bioconjugated 32-nm NaYF4:Yb3+,Er3+ nanophosphors from excess unbound biomolecules utilizing a high gradient magnetic separation (HGMS) system constructed from permanent super magnets which produce magnetic gradients in a magnetizable steel wool matrix amplifying the magnetic field. The non-magnetic biomolecules flowed straight through the magnetized HGMS column while the UCNPs were eluted only after the magnetic field was removed. In the UCNPs the luminescent centers, i.e., lanthanide-ion dopants are responsible for the strong upconversion luminescence, but in addition they are also paramagnetic. In this study we have shown that the presence of these weakly paramagnetic luminescent lanthanides actually also enables the use of HGMS to capture the UCNPs without incorporating additional optically inactive magnetic core into them.

  5. Nonlocal strain gradient based wave dispersion behavior of smart rotating magneto-electro-elastic nanoplates

    Science.gov (United States)

    Ebrahimi, Farzad; Dabbagh, Ali

    2017-02-01

    The main objective of the present research is an exact investigation of the wave propagation response of smart rotating magneto-electro-elastic (MEE) graded nanoscale plates. The effective material properties of the functionally graded (FG) nanoplate are calculated using power-law formulations. Both the softening and the stiffness-hardening behavior of nanostructures are covered by employing nonlocal strain gradient theory (NSGT). To increase the accuracy of the presented model in predicting shear deformation effects, a refined higher-order plate theory is introduced. To cover the most demanding circumstances, the maximum load generated by the plate's rotation is considered. Furthermore, utilizing a developed form of Hamilton's principle containing magneto-electric effects, the nonlocal governing equations of MEE-FG rotating nanoplates are derived. An analytical solution is obtained for the governing equations, and the validity of the solution method is proven by comparing results from the present method with those of former attempts. Finally, outcomes are plotted in a series of figures to show the influences of various parameters such as wave number, nonlocality, length scale parameter, magnetic potential, electric voltage, gradient index and angular velocity on the wave frequency, phase velocity and escape frequency of the examined nanoplate.

  6. High gradient magnetic separation of upconverting lanthanide nanophosphors based on their intrinsic paramagnetism

    International Nuclear Information System (INIS)

    Arppe, Riikka; Salovaara, Oskari; Mattsson, Leena; Lahtinen, Satu; Valta, Timo; Riuttamäki, Terhi; Soukka, Tero

    2013-01-01

    Photon upconverting nanophosphors (UCNPs) have the unique luminescent property of converting low-energy infrared light into visible emission which can be widely utilized in nanoreporter and imaging applications. For the use as reporters in these applications, the UCNPs must undergo a series of surface modification and bioconjugation reactions. Efficient purification methods are required to remove the excess reagents and biomolecules from the nanophosphor solution after each step to yield highly responsive reporters for sensitive bioanalytical assays. However, as the particle size of the UCNPs approaches the size of biomolecules, the handling of these reporters becomes cumbersome with traditional purification methods such as centrifugation. Here we introduce a novel approach for purification of bioconjugated 32-nm NaYF4:Yb3+,Er3+ nanophosphors from excess unbound biomolecules utilizing a high gradient magnetic separation (HGMS) system constructed from permanent super magnets which produce magnetic gradients in a magnetizable steel wool matrix amplifying the magnetic field. The non-magnetic biomolecules flowed straight through the magnetized HGMS column while the UCNPs were eluted only after the magnetic field was removed. In the UCNPs the luminescent centers, i.e., lanthanide-ion dopants are responsible for the strong upconversion luminescence, but in addition they are also paramagnetic. In this study we have shown that the presence of these weakly paramagnetic luminescent lanthanides actually also enables the use of HGMS to capture the UCNPs without incorporating additional optically inactive magnetic core into them

  7. Conjugate gradient and cross-correlation based least-square reverse time migration and its application

    Science.gov (United States)

    Sun, Xiao-Dong; Ge, Zhong-Hui; Li, Zhen-Chun

    2017-09-01

    Although conventional reverse time migration can be perfectly applied to structural imaging, it lacks the capability of enabling detailed delineation of a lithological reservoir due to irregular illumination. To obtain reliable reflectivity of the subsurface it is necessary to solve the imaging problem using inversion. Least-squares reverse time migration (LSRTM) (also known as linearized reflectivity inversion) aims to obtain relatively high-resolution, amplitude-preserving imaging by including the inverse of the Hessian matrix. In practice, the conjugate gradient algorithm has proven to be an efficient iterative method for LSRTM. The velocity gradient can be derived from a cross-correlation between observed data and simulated data, making LSRTM independent of the wavelet signature and thus more robust in practice. Tests on synthetic and marine data show that LSRTM has good potential for use in reservoir description and four-dimensional (4D) seismic imaging compared to traditional RTM and Fourier finite difference (FFD) migration. This paper investigates the first-order approximation of LSRTM, which is also known as the linear Born approximation. However, for more complex geological structures a higher-order approximation should be considered to improve imaging quality.

  8. Electron thermal energy transport research based on dynamical relationship between heat flux and temperature gradient

    International Nuclear Information System (INIS)

    Notake, Takashi; Inagaki, Shigeru; Tamura, Naoki

    2008-01-01

    In nuclear fusion plasmas, both thermal energy and particle transport governed by turbulent flow are anomalously enhanced beyond neoclassical levels. Clarifying the relationship between the turbulent flow and these anomalous transports has therefore been a most worthwhile task. Experimental results show that the turbulent flow induces various phenomena in transport processes, such as non-linearity, transition, hysteresis, multiple branches and non-locality. We approach these complicated problems by analyzing not the conventional power balance but these phenomena directly. They are recognized as dynamical trajectories in flux-gradient space and provide a clue to comprehending the physical mechanism of the arcane anomalous transport. In particular, elucidating the mechanism of electron thermal energy transport is critical in fusion plasma research because burning plasmas will be sustained by alpha-particle heating. In the Large Helical Device, the dynamical relationships between electron thermal energy fluxes and electron temperature gradients are investigated by using modulated electron cyclotron resonance heating and modern electron cyclotron emission diagnostic systems. Some trajectories, such as hysteresis loops or line segments with steep slopes representing non-linear properties, are observed in the experiment. (author)

  9. Far-Infrared Based Pedestrian Detection for Driver-Assistance Systems Based on Candidate Filters, Gradient-Based Feature and Multi-Frame Approval Matching.

    Science.gov (United States)

    Wang, Guohua; Liu, Qiong

    2015-12-21

    Far-infrared pedestrian detection approaches for advanced driver-assistance systems based on high-dimensional features fail to simultaneously achieve robust and real-time detection. We propose a robust and real-time pedestrian detection system characterized by novel candidate filters, novel pedestrian features and multi-frame approval matching in a coarse-to-fine fashion. Firstly, we design two filters based on the pedestrians' head and the road to select the candidates after applying a pedestrian segmentation algorithm to reduce false alarms. Secondly, we propose a novel feature encapsulating both the relationship of oriented gradient distribution and the code of oriented gradient to deal with the enormous variance in pedestrians' size and appearance. Thirdly, we introduce a multi-frame approval matching approach utilizing the spatiotemporal continuity of pedestrians to increase the detection rate. Large-scale experiments indicate that the system works in real time and the accuracy has improved about 9% compared with approaches based on high-dimensional features only.

  10. Far-Infrared Based Pedestrian Detection for Driver-Assistance Systems Based on Candidate Filters, Gradient-Based Feature and Multi-Frame Approval Matching

    Directory of Open Access Journals (Sweden)

    Guohua Wang

    2015-12-01

    Full Text Available Far-infrared pedestrian detection approaches for advanced driver-assistance systems based on high-dimensional features fail to simultaneously achieve robust and real-time detection. We propose a robust and real-time pedestrian detection system characterized by novel candidate filters, novel pedestrian features and multi-frame approval matching in a coarse-to-fine fashion. Firstly, we design two filters based on the pedestrians’ head and the road to select the candidates after applying a pedestrian segmentation algorithm to reduce false alarms. Secondly, we propose a novel feature encapsulating both the relationship of oriented gradient distribution and the code of oriented gradient to deal with the enormous variance in pedestrians’ size and appearance. Thirdly, we introduce a multi-frame approval matching approach utilizing the spatiotemporal continuity of pedestrians to increase the detection rate. Large-scale experiments indicate that the system works in real time and the accuracy has improved about 9% compared with approaches based on high-dimensional features only.

  11. An acoustic Maxwell’s fish-eye lens based on gradient-index metamaterials

    International Nuclear Information System (INIS)

    Yuan Bao-guo; Tian Ye; Cheng Ying; Liu Xiao-jun

    2016-01-01

    We have proposed a two-dimensional acoustic Maxwell’s fish-eye lens by using gradient-index metamaterials with space-coiling units. By adjusting the structural parameters of the units, the refractive index can be gradually varied, which plays a key role in designing the acoustic fish-eye lens. As predicted by ray trajectories on a virtual sphere, the proposed lens has the capability to focus the acoustic wave irradiated from a point source at the surface of the lens onto the diametrically opposite side of the lens. Broadband and low-loss performance is further demonstrated for the lens. The proposed acoustic fish-eye lens is expected to have potential applications in directional acoustic couplers or coherent ultrasonic imaging. (paper)
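
    For reference, the classical Maxwell fish-eye index profile that such gradient-index units approximate in discrete steps is n(r) = n0/(1 + (r/R)^2); the values of n0 and R in the sketch below are illustrative assumptions.

```python
# Sketch: the classical Maxwell fish-eye refractive-index profile,
# n(r) = n0 / (1 + (r/R)^2), which the space-coiling units approximate in
# discrete steps. n0 (index at the centre) and R (lens radius) are assumed.
import numpy as np

n0, R = 2.0, 1.0
r = np.linspace(0.0, R, 6)
n = n0 / (1.0 + (r / R) ** 2)     # decreases from n0 at r=0 to n0/2 at r=R
for ri, ni in zip(r, n):
    print(f"r/R = {ri / R:.1f}  ->  n = {ni:.2f}")
```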

  12. Nonlocal strain gradient theory calibration using molecular dynamics simulation based on small scale vibration of nanotubes

    Energy Technology Data Exchange (ETDEWEB)

    Mehralian, Fahimeh [Mechanical Engineering Department, Shahrekord University, Shahrekord (Iran, Islamic Republic of); Tadi Beni, Yaghoub, E-mail: tadi@eng.sku.ac.ir [Faculty of Engineering, Shahrekord University, Shahrekord (Iran, Islamic Republic of); Karimi Zeverdejani, Mehran [Mechanical Engineering Department, Shahrekord University, Shahrekord (Iran, Islamic Republic of)

    2017-06-01

    Featuring two small length-scale parameters, nonlocal strain gradient theory is utilized to investigate the free vibration of nanotubes. A new size-dependent shell model formulation is developed by using the first-order shear deformation theory. The governing equations and boundary conditions are obtained using Hamilton's principle and solved for the simply supported boundary condition. As the main purpose of this study, since the values of the two small length-scale parameters are still unknown, they are calibrated by means of molecular dynamics (MD) simulations. Then, the influences of different parameters such as the nonlocal parameter, scale factor, length and thickness on the vibration characteristics of nanotubes are studied. It is also shown that an increase in thickness and a decrease in length intensify the effect of the nonlocal parameter and scale factor.

  13. Real-time positioning technology in horizontal directional drilling based on magnetic gradient tensor measurement

    Science.gov (United States)

    Deng, Guoqing; Yao, Aiguo

    2017-04-01

    Horizontal directional drilling (HDD) technology has been widely used in civil engineering. The dynamic position of the drill bit during construction is one of the significant factors determining the accuracy of the HDD trajectory. A new method is proposed here for detecting the position of the drill bit by measuring the magnetic gradient tensor of a ground solenoid magnetic beacon. Compared with traditional HDD positioning technologies, this new model is much easier to apply, with lower requirements for construction sites and higher positioning efficiency. A direct current (DC) solenoid acting as a magnetic dipole is placed on the ground near the drill bit, and a sensor array containing four micro-electromechanical systems (MEMS) tri-axial magnetometers, one MEMS tri-axial accelerometer and one MEMS tri-axial gyroscope is set up to measure the magnetic gradient tensor of the magnetic dipole. The corresponding HDD positioning model has been established, and simulation experiments have been carried out to verify the feasibility and reliability of the proposed method. The experiments show that this method has good positioning accuracy in the horizontal and vertical directions and completely avoids the impact of the environmental magnetic field. The posture of the magnetic beacon affects the remote positioning precision within the valid positioning range, and the positioning accuracy is higher with a longer baseline, given the limited space in drilling tools. The results prove that the relative error can be kept within 2% by adjusting the position of the magnetic beacon, the number of layers of the enameled coil, the sensitivity of the magnetometers and the baseline distance. It can be concluded that this new method can be applied to HDD positioning with better effect and a wider application range than traditional methods.
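
    A hedged sketch of the quantity the sensor array measures follows: the field of an ideal point dipole standing in for the solenoid beacon, with its gradient tensor estimated by central finite differences. The dipole moment and positions are assumed values, and the actual positioning inversion is not reproduced.

```python
# Sketch: magnetic field of a point dipole (idealizing the solenoid beacon)
# and its gradient tensor estimated by central finite differences, i.e. the
# quantity the magnetometer array measures. Moment and positions are assumed.
import numpy as np

MU0 = 4.0e-7 * np.pi

def dipole_field(r, m):
    rn = np.linalg.norm(r)
    return MU0 / (4.0 * np.pi) * (3.0 * r * np.dot(m, r) / rn**5 - m / rn**3)

def gradient_tensor(r, m, h=1e-4):
    G = np.zeros((3, 3))
    for j in range(3):
        dr = np.zeros(3)
        dr[j] = h
        G[:, j] = (dipole_field(r + dr, m) - dipole_field(r - dr, m)) / (2.0 * h)
    return G

m = np.array([0.0, 0.0, 50.0])        # A*m^2, assumed beacon moment
r = np.array([2.0, 1.0, -3.0])        # sensor position relative to beacon, metres
G = gradient_tensor(r, m)
print(np.trace(G))                    # ~0: the tensor is traceless in a source-free region
```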

  14. To what extent are Canadian second language policies evidence-based? Reflections on the intersections of research and policy.

    Science.gov (United States)

    Cummins, Jim

    2014-01-01

    The paper addresses the intersections between research findings and Canadian educational policies, focusing on four major areas: (a) core and immersion programs for the teaching of French to Anglophone students, (b) policies concerning the learning of English and French by students from immigrant backgrounds, (c) heritage language teaching, and (d) the education of Deaf and hard-of-hearing students. With respect to the teaching of French, policy-makers have largely ignored the fact that most core French programs produce meager results for the vast majority of students. Only a small proportion of students (languages, preferring instead to leave uncorrected the proposition that acquisition of languages such as American Sign Language by young children (with or without cochlear implants) will impede children's language and academic development. The paper reviews the kinds of policies, programs, and practices that could be implemented (at no additional cost) if policy-makers and educators pursued evidence-based educational policies.

  15. A new 3-D ray tracing method based on LTI using successive partitioning of cell interfaces and traveltime gradients

    Science.gov (United States)

    Zhang, Dong; Zhang, Ting-Ting; Zhang, Xiao-Lei; Yang, Yan; Hu, Ying; Qin, Qian-Qing

    2013-05-01

    We present a new method of three-dimensional (3-D) seismic ray tracing, based on an improvement to the linear traveltime interpolation (LTI) ray tracing algorithm. This new technique involves two separate steps. The first involves a forward calculation based on the LTI method and the dynamic successive partitioning scheme, which is applied to calculate traveltimes on cell boundaries and assumes a wavefront that expands from the source to all grid nodes in the computational domain. We locate several dynamic successive partition points on a cell's surface, the traveltimes of which can be calculated by linear interpolation between the vertices of the cell's boundary. The second is a backward step that uses Fermat's principle and the fact that the ray path is always perpendicular to the wavefront and follows the negative traveltime gradient. In this process, the first-arriving ray path can be traced from the receiver to the source along the negative traveltime gradient, which can be calculated by reconstructing the continuous traveltime field with cubic B-spline interpolation. This new 3-D ray tracing method is compared with the LTI method and the shortest path method (SPM) through a number of numerical experiments. These comparisons show obvious improvements to computed traveltimes and ray paths, both in precision and computational efficiency.

  16. What Is the Best Way to Contour Lung Tumors on PET Scans? Multiobserver Validation of a Gradient-Based Method Using a NSCLC Digital PET Phantom

    International Nuclear Information System (INIS)

    Werner-Wasik, Maria; Nelson, Arden D.; Choi, Walter; Arai, Yoshio; Faulhaber, Peter F.; Kang, Patrick; Almeida, Fabio D.; Xiao, Ying; Ohri, Nitin; Brockway, Kristin D.; Piper, Jonathan W.; Nelson, Aaron S.

    2012-01-01

    Purpose: To evaluate the accuracy and consistency of a gradient-based positron emission tomography (PET) segmentation method, GRADIENT, compared with manual (MANUAL) and constant threshold (THRESHOLD) methods. Methods and Materials: Contouring accuracy was evaluated with sphere phantoms and clinically realistic Monte Carlo PET phantoms of the thorax. The sphere phantoms were 10–37 mm in diameter and were acquired at five institutions emulating clinical conditions. One institution also acquired a sphere phantom with multiple source-to-background ratios of 2:1, 5:1, 10:1, 20:1, and 70:1. One observer segmented (contoured) each sphere with GRADIENT and THRESHOLD from 25% to 50% at 5% increments. Subsequently, seven physicians segmented 31 lesions (7–264 mL) from 25 digital thorax phantoms using GRADIENT, THRESHOLD, and MANUAL. Results: For spheres 20 mm (p < 0.065) and <20 mm (p < 0.015). For digital thorax phantoms, GRADIENT was the most accurate (p < 0.01), with a mean absolute % error in volume of 10.99% (11.9% SD), followed by 25% THRESHOLD at 17.5% (29.4% SD), and MANUAL at 19.5% (17.2% SD). GRADIENT had the least systematic bias, with a mean % error in volume of –0.05% (16.2% SD) compared with 25% THRESHOLD at –2.1% (34.2% SD) and MANUAL at –16.3% (20.2% SD; p value <0.01). Interobserver variability was reduced using GRADIENT compared with both 25% THRESHOLD and MANUAL (p value <0.01, Levene’s test). Conclusion: GRADIENT was the most accurate and consistent technique for target volume contouring. GRADIENT was also the most robust for varying imaging conditions. GRADIENT has the potential to play an important role for tumor delineation in radiation therapy planning and response assessment.

  17. New technology based on clamping for high gradient radio frequency photogun

    Science.gov (United States)

    Alesini, David; Battisti, Antonio; Ferrario, Massimo; Foggetta, Luca; Lollo, Valerio; Ficcadenti, Luca; Pettinacci, Valerio; Custodio, Sean; Pirez, Eylene; Musumeci, Pietro; Palumbo, Luigi

    2015-09-01

    High gradient rf photoguns have been a key development to enable several applications of high quality electron beams. They allow the generation of beams with very high peak current and low transverse emittance, satisfying the tight demands for free-electron lasers, energy recovery linacs, Compton/Thomson sources and high-energy linear colliders. In the present paper we describe the design of a new rf photogun recently developed in the framework of the SPARC_LAB photoinjector activities at the laboratories of the National Institute of Nuclear Physics in Frascati (LNF-INFN, Italy). This design implements several new features from the electromagnetic point of view and, more importantly, a novel technology for its realization that does not involve any brazing process. From the electromagnetic point of view the gun presents high mode separation, low peak surface electric field at the iris and minimized pulsed heating on the coupler. For the realization, we have implemented a novel fabrication design that, by avoiding brazing, strongly reduces the cost, the realization time and the risk of failure. Details of the electromagnetic design, low power rf measurements and high power radiofrequency and beam tests performed at the University of California, Los Angeles (UCLA) are discussed in the paper.

  18. A modified conjugate gradient method based on the Tikhonov system for computerized tomography (CT).

    Science.gov (United States)

    Wang, Qi; Wang, Huaxiang

    2011-04-01

    During the past few decades, computerized tomography (CT) has been widely used for non-destructive testing (NDT) and non-destructive examination (NDE) in industry because of its non-invasiveness and visibility. Recently, CT technology has been applied to multi-phase flow measurement. Using the principle of radiation attenuation measurements along different directions through the investigated object, together with a special reconstruction algorithm, cross-sectional information of the scanned object can be worked out. It is a typical inverse problem and has always been a challenge because of its nonlinearity and ill-conditioning. The Tikhonov regularization method is widely used for similar ill-posed problems. However, the conventional Tikhonov method does not provide reconstructions of sufficient quality; the relative errors between the reconstructed images and the real distribution need to be further reduced. In this paper, a modified conjugate gradient (CG) method is applied to a Tikhonov system (MCGT method) for reconstructing CT images. The computational load is dominated by the number of independent measurements m, and a preconditioner is introduced to lower the condition number of the Tikhonov system. Both simulation and experiment results indicate that the proposed method can reduce the computational time and improve the quality of image reconstruction. Copyright © 2010 ISA. Published by Elsevier Ltd. All rights reserved.
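
The abstract does not specify the MCGT preconditioner, so the following sketch simply solves the Tikhonov normal equations with SciPy's conjugate-gradient solver and an assumed Jacobi (diagonal) preconditioner; the projection matrix, regularization parameter and phantom are placeholders.

```python
# Minimal sketch, not the paper's exact MCGT algorithm: solve the Tikhonov
# normal equations (A^T A + lambda I) x = A^T b with a preconditioned
# conjugate-gradient solver. A Jacobi (diagonal) preconditioner is assumed
# here purely for illustration; A, b and lambda are placeholders.
import numpy as np
from scipy.sparse.linalg import LinearOperator, cg

rng = np.random.default_rng(0)
m, n = 120, 400                       # m measurements, n image pixels (underdetermined)
A = rng.normal(size=(m, n))           # projection/sensitivity matrix
x_true = np.zeros(n); x_true[150:180] = 1.0
b = A @ x_true + 0.01 * rng.normal(size=m)
lam = 1e-2                            # Tikhonov regularization parameter

H = A.T @ A + lam * np.eye(n)         # regularized normal-equation matrix
rhs = A.T @ b

diag = np.diag(H)
M = LinearOperator((n, n), matvec=lambda v: v / diag)   # Jacobi preconditioner

x_rec, info = cg(H, rhs, M=M, maxiter=500)
print("CG converged:", info == 0, " residual:", np.linalg.norm(H @ x_rec - rhs))
```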

  19. Towards a Demand-Driven Agenda for Place-Based Policies in the EU

    DEFF Research Database (Denmark)

    Jensen, Camilla

    This policy study is the second report on the policy implications of the EU-funded project Policy Incentives for the Creation of Knowledge – Methods and Evidence (PICK-ME). All contributions in the project related to place-based policy and cluster building are summarized and reviewed (Work...

  20. Quaternion Gradient and Hessian

    OpenAIRE

    Xu, Dongpo; Mandic, Danilo P.

    2014-01-01

    The optimization of real scalar functions of quaternion variables, such as the mean square error or array output power, underpins many practical applications. Solutions typically require the calculation of the gradient and Hessian. However, real functions of quaternion variables are essentially nonanalytic, which is prohibitive to the development of quaternion-valued learning systems. To address this issue, we propose new definitions of quaternion gradient and Hessian, based on the novel gen...

  1. Air pollution assessment based on elemental concentration of leaves tissue and foliage dust along an urbanization gradient in Vienna

    International Nuclear Information System (INIS)

    Simon, Edina; Braun, Mihaly; Vidic, Andreas; Bogyo, David; Fabian, Istvan; Tothmeresz, Bela

    2011-01-01

    Foliage dust contains heavy metals that may have harmful effects on human health. The elemental contents of tree leaves and foliage dust are especially useful for assessing air pollution. We studied the elemental concentrations in foliage dust and leaves of Acer pseudoplatanus along an urbanization gradient in Vienna, Austria. Samples were collected from urban, suburban and rural areas. We analysed 19 elements in both kinds of samples, including aluminium, barium, calcium, copper, iron, potassium, magnesium, sodium, phosphorus, sulphur, strontium and zinc. We found that the elemental concentrations of foliage dust were significantly higher in the urban area than in the rural area for aluminium, barium, iron, lead, phosphorus and selenium. Elemental concentrations of leaves were significantly higher in the urban than in the rural area for manganese and strontium. Urbanization significantly changed the elemental concentrations of foliage dust and leaves, and the applied method can be useful for monitoring the environmental load. - Highlights: → We studied the elements in dust and leaves along an urbanization gradient, Austria. → We analysed 19 elements: Al, Ba, Ca, Cd, Cu, Fe, K, Mg, Na, P, Pb, S, Sr and Zn. → Elemental concentrations were higher in the urban area than in the rural area. → Studied areas were separated by CDA based on the elemental concentrations. → Dust and leaves can be useful for monitoring the environmental load. - Studying the elements (Al, Ba, Ca, Cu, Fe, K, Mg, Na, P, S, Sr, Zn) in dust and leaves along an urbanization gradient in Wien, Austria, we found that the elemental concentrations of foliage dust were significantly higher in the urban area than in the rural area for Al, Ba, Fe, Pb, P and Se, and concentrations of leaves were significantly higher in the urban than in the rural area for Mn and Sr.

  2. Air pollution assessment based on elemental concentration of leaves tissue and foliage dust along an urbanization gradient in Vienna

    Energy Technology Data Exchange (ETDEWEB)

    Simon, Edina, E-mail: edina.simon@gmail.com [Department of Ecology, University of Debrecen, H-4010 Debrecen, P.O. Box 71 (Hungary); Braun, Mihaly [Department of Inorganic and Analytical Chemistry, University of Debrecen, H-4010 Debrecen, P.O. Box 21 (Hungary); Vidic, Andreas [Department fuer Naturschutzbiologie, Vegetations- und Landschaftsoekologie, Universitat Wien, Althanstrasse 14, 1090 Wien (Austria); Bogyo, David [Department of Ecology, University of Debrecen, H-4010 Debrecen, P.O. Box 71 (Hungary); Fabian, Istvan [Department of Inorganic and Analytical Chemistry, University of Debrecen, H-4010 Debrecen, P.O. Box 21 (Hungary); Tothmeresz, Bela [Department of Ecology, University of Debrecen, H-4010 Debrecen, P.O. Box 71 (Hungary)

    2011-05-15

    Foliage dust contains heavy metals that may have harmful effects on human health. The elemental contents of tree leaves and foliage dust are especially useful for assessing air pollution. We studied the elemental concentrations in foliage dust and leaves of Acer pseudoplatanus along an urbanization gradient in Vienna, Austria. Samples were collected from urban, suburban and rural areas. We analysed 19 elements in both kinds of samples, including aluminium, barium, calcium, copper, iron, potassium, magnesium, sodium, phosphorus, sulphur, strontium and zinc. We found that the elemental concentrations of foliage dust were significantly higher in the urban area than in the rural area for aluminium, barium, iron, lead, phosphorus and selenium. Elemental concentrations of leaves were significantly higher in the urban than in the rural area for manganese and strontium. Urbanization significantly changed the elemental concentrations of foliage dust and leaves, and the applied method can be useful for monitoring the environmental load. - Highlights: > We studied the elements in dust and leaves along an urbanization gradient, Austria. > We analysed 19 elements: Al, Ba, Ca, Cd, Cu, Fe, K, Mg, Na, P, Pb, S, Sr and Zn. > Elemental concentrations were higher in the urban area than in the rural area. > Studied areas were separated by CDA based on the elemental concentrations. > Dust and leaves can be useful for monitoring the environmental load. - Studying the elements (Al, Ba, Ca, Cu, Fe, K, Mg, Na, P, S, Sr, Zn) in dust and leaves along an urbanization gradient in Wien, Austria, we found that the elemental concentrations of foliage dust were significantly higher in the urban area than in the rural area for Al, Ba, Fe, Pb, P and Se, and concentrations of leaves were significantly higher in the urban than in the rural area for Mn and Sr.

  3. Mixed model phase evolution for correction of magnetic field inhomogeneity effects in 3D quantitative gradient echo-based MRI

    DEFF Research Database (Denmark)

    Fatnassi, Chemseddine; Boucenna, Rachid; Zaidi, Habib

    2017-01-01

    PURPOSE: In 3D gradient echo magnetic resonance imaging (MRI), strong field gradients B0macro are visually observed at air/tissue interfaces. At low spatial resolution in particular, the respective field gradients lead to an apparent increase in intravoxel dephasing, and subsequently, to signal...... loss or inaccurate R2* estimates. If the strong field gradients are measured, their influence can be removed by postprocessing. METHODS: Conventional corrections usually assume a linear phase evolution with time. For high macroscopic gradient inhomogeneities near the edge of the brain...

  4. A temporal subtraction method for thoracic CT images based on generalized gradient vector flow

    International Nuclear Information System (INIS)

    Miyake, Noriaki; Kim, H.; Maeda, Shinya; Itai, Yoshinori; Tan, J.K.; Ishikawa, Seiji; Katsuragawa, Shigehiko

    2010-01-01

    A temporal subtraction image, which is obtained by subtracting a previous image from a current one, can be used to enhance interval changes (such as the formation of new lesions and changes in existing abnormalities) on medical images by removing most of the normal structures. If image registration is incorrect, not only the interval changes but also the normal structures appear as artifacts on the temporal subtraction image. For 2-D X-ray images, the effectiveness of temporal subtraction has been shown through many clinical evaluation experiments, and practical use is advancing. Moreover, since MDCT (multi-detector row computed tomography) can easily be introduced in the medical field, the development of temporal subtraction for thoracic CT images is expected. In our study, a temporal subtraction technique for thoracic CT images is developed. In this technique, vector fields are derived from the previous and current CT images by use of GGVF (generalized gradient vector flow). Volumes of interest (VOIs) are then set up on the previous and current CT image pairs, and shift vectors are calculated by nearest-neighbor matching of the vector fields in these VOIs. A search kernel on the previous CT image is set up from the obtained shift vector, and the previous CT voxel that best resembles the current reference voxel is detected from the voxel values and GGVF vectors within the kernel. The previous CT image is then warped to the coordinates of the reference voxels. Finally, the temporal subtraction image is made by subtracting the warped image from the current one. To verify the proposed method, the results of its application to 7 cases and its effectiveness are described. (author)
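
A heavily simplified sketch of the register-then-subtract idea follows. The GGVF vector-field computation and voxel-wise matching are omitted; plain intensity-based block matching in 2-D stands in for them, and the images are synthetic placeholders.

```python
# Toy sketch of the registration-then-subtraction idea. The paper matches GGVF
# vector fields; here plain intensity-based block matching stands in for that
# step, and everything is 2-D for brevity. Arrays `previous` and `current` are
# illustrative placeholders.
import numpy as np

def block_shift(previous, current, center, half=8, search=5):
    """Find the shift that best aligns a VOI around `center` (nearest-neighbour search)."""
    cy, cx = center
    ref = current[cy - half:cy + half, cx - half:cx + half]
    best, best_err = (0, 0), np.inf
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            cand = previous[cy + dy - half:cy + dy + half, cx + dx - half:cx + dx + half]
            err = np.sum((cand - ref) ** 2)
            if err < best_err:
                best, best_err = (dy, dx), err
    return best

def temporal_subtraction(previous, current, shift):
    """Warp the previous image by the estimated shift and subtract it from the current one."""
    dy, dx = shift
    warped = np.roll(previous, (-dy, -dx), axis=(0, 1))
    return current - warped

# Example: the "previous" image is the "current" one translated by (3, -2)
current = np.random.default_rng(1).normal(size=(64, 64))
previous = np.roll(current, (3, -2), axis=(0, 1))
shift = block_shift(previous, current, center=(32, 32))
diff = temporal_subtraction(previous, current, shift)
print("estimated shift:", shift, " residual:", np.abs(diff).max())
```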

  5. Curve Evolution in Subspaces and Exploring the Metameric Class of Histogram of Gradient Orientation based Features using Nonlinear Projection Methods

    DEFF Research Database (Denmark)

    Tatu, Aditya Jayant

    This thesis deals with two unrelated issues, restricting curve evolution to subspaces and computing image patches in the equivalence class of Histogram of Gradient orientation based features using nonlinear projection methods. Curve evolution is a well known method used in various applications like...... tracking interfaces, active contour based segmentation methods and others. It can also be used to study shape spaces, as deforming a shape can be thought of as evolving its boundary curve. During curve evolution a curve traces out a path in the infinite dimensional space of curves. Due to application...... specific requirements like shape priors or a given data model, and due to limitations of the computer, the computed curve evolution forms a path in some finite dimensional subspace of the space of curves. We give methods to restrict the curve evolution to a finite dimensional linear or implicitly defined...

  6. A comprehensive assessment of ionospheric gradients observed in Ecuador during 2013 and 2014 for ground based augmentation systems

    Science.gov (United States)

    Sánchez-Naranjo, S.; Rincón, W.; Ramos-Pollán, R.; González, F. A.; Soley, S.

    2017-04-01

    Ground Based Augmentation Systems (GBAS) provide differential corrections to approaching and landing aircraft in the vicinity of an airport. The ionosphere can introduce an error not accounted for by those differential corrections, and a threat model for the Conterminous United States (CONUS) region was developed in order to consider the highest gradients measured. This study presents the first extensive analysis of ionospheric gradients for Ecuador, from data fully covering 2013 and 2014 collected by the national Global Navigation Satellite System (GNSS) monitoring network (REGME). In this work, an automated methodology adapted for low latitudes is applied to process data from dual-frequency receiver networks, considering data from all available days in the date range of the study regardless of the geomagnetic index values. The events found above the CONUS threat model occurred during days of nominal geomagnetic indices, confirming: (1) the higher bounds required for an ionospheric threat model for Ecuador, and (2) that geomagnetic indices are not sufficient to indicate relevant ionospheric anomalies in low-latitude regions, reinforcing the necessity of continuous monitoring of the ionosphere. As an additional contribution, the events database is published online, making it available to other researchers.

  7. Enhanced sensitivity of DNA- and rRNA-based stable isotope probing by fractionation and quantitative analysis of isopycnic centrifugation gradients.

    Science.gov (United States)

    Lueders, Tillmann; Manefield, Mike; Friedrich, Michael W

    2004-01-01

    Stable isotope probing (SIP) of nucleic acids allows the detection and identification of active members of natural microbial populations that are involved in the assimilation of an isotopically labelled compound into nucleic acids. SIP is based on the separation of isotopically labelled DNA or rRNA by isopycnic density gradient centrifugation. We have developed a highly sensitive protocol for the detection of 'light' and 'heavy' nucleic acids in fractions of centrifugation gradients. It involves the fluorometric quantification of total DNA or rRNA, and the quantification of either 16S rRNA genes or 16S rRNA in gradient fractions by real-time PCR with domain-specific primers. Using this approach, we found that fully 13C-labelled DNA or rRNA of Methylobacterium extorquens was quantitatively resolved from unlabelled DNA or rRNA of Methanosarcina barkeri by cesium chloride or cesium trifluoroacetate density gradient centrifugation respectively. However, a constant low background of unspecific nucleic acids was detected in all DNA or rRNA gradient fractions, which is important for the interpretation of environmental SIP results. Consequently, quantitative analysis of gradient fractions provides a higher precision and finer resolution for retrieval of isotopically enriched nucleic acids than possible using ethidium bromide or gradient fractionation combined with fingerprinting analyses. This is a prerequisite for the fine-scale tracing of microbial populations metabolizing 13C-labelled compounds in natural ecosystems.

  8. Area-based initiatives - Engines of planning and policy innovation?

    DEFF Research Database (Denmark)

    Agger, Annika; Norvig Larsen, Jacob

    studies of local planning culture change are discussed. Main findings are that during the past two decades a general change in planning culture has developed gradually, triggered by urban regeneration full scale experimentation with place-based approaches. Second, planners as well as public administrators...... and development in planning culture turns out to be a more substantial result than the reduction of social exclusion and economic deprivation. The paper analyses all available official evaluation studies of Danish place-based urban policy initiatives from mid-1990s through 2010. In addition to this, recent...... attitude towards the involvement of local citizens and stakeholders is significantly transformed. While earlier, public participation in planning was mostly restricted to what was lawfully mandatory, the new turn in planning culture demonstrates a practice that goes much further in involving citizens...

  9. The Evolution of Policy Enactment on Gender-Based Violence in Schools

    Science.gov (United States)

    Parkes, Jenny

    2016-01-01

    This article examines how policies and strategies to address school-related gender-based violence have evolved since 2000, when gender-based violence within education was largely invisible. Through an exploration of policy enactment in three countries--Liberia, South Africa, and Brazil--it traces remarkable progress in policy, programmes, and…

  10. Econometric Model of Rice Policy Based On Presidential Instruction

    Science.gov (United States)

    Abadi Sembiring, Surya; Hutauruk, Julia

    2018-01-01

    The objective of the research is to build an econometric model of rice policy based on the Presidential Instruction. The data are a monthly time series from March 2005 to September 2009. The rice policy model is specified as a simultaneous-equation system consisting of 14 structural equations and four identity equations, estimated using the Two-Stage Least Squares (2SLS) method. The results show that: (1) an increase in the government purchasing price of dried harvested paddy increases total rice production and community rice stocks, (2) an increase in community rice stocks leads to a decrease in rice imports, (3) increases in the realized distribution of subsidized ZA fertilizers and subsidized NPK fertilizers increase total rice production and community rice stocks and reduce rice imports, (4) the price of dried harvested paddy is highly responsive to its water content in both the short run and the long run, (5) the quantity of rice imported is highly responsive to the imported rice price, in both the short run and the long run.
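
For illustration of the estimation step only, the following sketch applies 2SLS to a single synthetic structural equation (not the paper's 14-equation system); the variable names and simulated relationships are assumptions chosen to echo the abstract.

```python
# Minimal 2SLS sketch for one structural equation, not the paper's full
# 14-equation system. Variable names (paddy price, rice stock, imports) and the
# simulated data are illustrative only.
import numpy as np

def two_stage_least_squares(y, X, Z):
    """2SLS: regress the regressors on instruments Z, then y on the fitted regressors."""
    # Stage 1: fitted values of X from the instrument set Z
    X_hat = Z @ np.linalg.lstsq(Z, X, rcond=None)[0]
    # Stage 2: OLS of y on the fitted regressors
    beta = np.linalg.lstsq(X_hat, y, rcond=None)[0]
    return beta

rng = np.random.default_rng(42)
n = 55                                    # ~ monthly observations, Mar 2005 - Sep 2009
z1 = rng.normal(size=n)                   # instrument, e.g. government purchase price
z2 = rng.normal(size=n)                   # instrument, e.g. subsidized fertilizer distribution
u = rng.normal(size=n)
stock = 1.0 + 0.8 * z1 + 0.5 * z2 + u     # endogenous regressor: community rice stock
imports = 2.0 - 1.2 * stock + 0.3 * u + rng.normal(scale=0.1, size=n)

X = np.column_stack([np.ones(n), stock])          # structural equation regressors
Z = np.column_stack([np.ones(n), z1, z2])         # instruments (exogenous policy variables)
print("2SLS coefficients (const, stock):", two_stage_least_squares(imports, X, Z))
```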

  11. The institutional economics of market-based climate policy

    International Nuclear Information System (INIS)

    Woerdman, E.

    2005-01-01

    The objective of this book is to analyze the institutional barriers to implementing market-based climate policy, as well as to provide some opportunities to overcome them. The approach is that of institutional economics, with special emphasis on political transaction costs and path dependence. Instead of rejecting the neoclassical approach, this book uses it where fruitful and shows when and why it is necessary to employ a new or neo-institutionalist approach. The result is that equity is considered next to efficiency, that the evolution and possible lock-in of both formal and informal climate institutions are studied, and that attention is paid to the politics and law of economic instruments for climate policy, including some new empirical analyses. The research topics of this book include the set-up costs of a permit trading system, the risk that credit trading becomes locked-in, the potential legal problem of grandfathering in terms of actionable subsidies under WTO law or state aid under EC law, and the changing attitudes of various European officials towards restricting the use of the Kyoto Mechanisms

  12. Polyhedral shape model for terrain correction of gravity and gravity gradient data based on an adaptive mesh

    Science.gov (United States)

    Guo, Zhikui; Chen, Chao; Tao, Chunhui

    2016-04-01

    Since 2007, four China Dayang cruises (CDCs) have been carried out to investigate polymetallic sulfides on the southwest Indian ridge (SWIR) and have acquired both gravity data and bathymetry data on the corresponding survey lines (Tao et al., 2014). Sandwell et al. (2014) published a new global marine gravity model including the free-air gravity data and its first-order vertical gradient (Vzz). Gravity data and their gradients can be used to extract unknown density structure information (e.g. crustal thickness) beneath the surface of the earth, but they contain the effect of all masses below the observation point. Therefore, how to obtain an accurate gravity and gravity gradient effect of the known density structure (e.g. terrain) has been a key issue. Using the bathymetry data or the ETOPO1 (http://www.ngdc.noaa.gov/mgg/global/global.html) model at full resolution to calculate the terrain effect could take too much computation time. We expect to develop an effective method that takes less time but can still yield the desired accuracy. In this study, a constant-density polyhedral model is used to calculate the gravity field and its vertical gradient, which is based on the work of Tsoulis (2012). According to the attenuation of the gravity field with distance and the variance of the bathymetry, we present adaptive mesh refinement and coarsening strategies to merge both global topography data and multi-beam bathymetry data. The local coarsening or size of the mesh depends on user-defined accuracy and terrain variation (Davis et al., 2011). To depict the terrain better, triangular surface elements and rectangular surface elements are used in the fine and coarse mesh, respectively. This strategy can also be applied in spherical coordinates at large regional and global scales. Finally, we applied this method to calculate the Bouguer gravity anomaly (BGA), mantle Bouguer anomaly (MBA) and their vertical gradients in the SWIR. Further, we compared the result with previous results in the literature. Both synthetic model

  13. What Is the Best Way to Contour Lung Tumors on PET Scans? Multiobserver Validation of a Gradient-Based Method Using a NSCLC Digital PET Phantom

    Energy Technology Data Exchange (ETDEWEB)

    Werner-Wasik, Maria, E-mail: Maria.Werner-wasik@jeffersonhospital.org [Department of Radiation Oncology, Thomas Jefferson University Hospital, Philadelphia, PA (United States); Nelson, Arden D. [MIM Software Inc., Cleveland, OH (United States); Choi, Walter [Department of Radiation Oncology, UPMC Health Systems, Pittsburgh, PA (United States); Arai, Yoshio [Department of Radiation Oncology, Beth Israel Medical Center, New York, NY (Israel); Faulhaber, Peter F. [University Hospitals Case Medical Center, Cleveland, OH (United States); Kang, Patrick [Department of Radiology, Beth Israel Medical Center, New York, NY (Israel); Almeida, Fabio D. [Division of Nuclear Medicine, University of Arizona Health Sciences Center, Tucson, AZ (United States); Xiao, Ying; Ohri, Nitin [Department of Radiation Oncology, Thomas Jefferson University Hospital, Philadelphia, PA (United States); Brockway, Kristin D.; Piper, Jonathan W.; Nelson, Aaron S. [MIM Software Inc., Cleveland, OH (United States)

    2012-03-01

    Purpose: To evaluate the accuracy and consistency of a gradient-based positron emission tomography (PET) segmentation method, GRADIENT, compared with manual (MANUAL) and constant threshold (THRESHOLD) methods. Methods and Materials: Contouring accuracy was evaluated with sphere phantoms and clinically realistic Monte Carlo PET phantoms of the thorax. The sphere phantoms were 10-37 mm in diameter and were acquired at five institutions emulating clinical conditions. One institution also acquired a sphere phantom with multiple source-to-background ratios of 2:1, 5:1, 10:1, 20:1, and 70:1. One observer segmented (contoured) each sphere with GRADIENT and THRESHOLD from 25% to 50% at 5% increments. Subsequently, seven physicians segmented 31 lesions (7-264 mL) from 25 digital thorax phantoms using GRADIENT, THRESHOLD, and MANUAL. Results: For spheres <20 mm in diameter, GRADIENT was the most accurate with a mean absolute % error in diameter of 8.15% (10.2% SD) compared with 49.2% (51.1% SD) for 45% THRESHOLD (p < 0.005). For larger spheres, the methods were statistically equivalent. For varying source-to-background ratios, GRADIENT was the most accurate for spheres >20 mm (p < 0.065) and <20 mm (p < 0.015). For digital thorax phantoms, GRADIENT was the most accurate (p < 0.01), with a mean absolute % error in volume of 10.99% (11.9% SD), followed by 25% THRESHOLD at 17.5% (29.4% SD), and MANUAL at 19.5% (17.2% SD). GRADIENT had the least systematic bias, with a mean % error in volume of -0.05% (16.2% SD) compared with 25% THRESHOLD at -2.1% (34.2% SD) and MANUAL at -16.3% (20.2% SD; p value <0.01). Interobserver variability was reduced using GRADIENT compared with both 25% THRESHOLD and MANUAL (p value <0.01, Levene's test). Conclusion: GRADIENT was the most accurate and consistent technique for target volume contouring. GRADIENT was also the most robust for varying imaging conditions. GRADIENT has the potential to play an important role for tumor delineation in

  14. Method and apparatus for producing a carbon based foam article having a desired thermal-conductivity gradient

    Science.gov (United States)

    Klett, James W [Knoxville, TN; Cameron, Christopher Stan [Sanford, NC

    2010-03-02

    A carbon based foam article is made by heating the surface of a carbon foam block to a temperature above its graphitizing temperature, which is the temperature sufficient to graphitize the carbon foam. In one embodiment, the surface is heated with infrared pulses until heat is transferred from the surface into the core of the foam article such that the graphitizing temperature penetrates into the core to a desired depth below the surface. The graphitizing temperature is maintained for a time sufficient to substantially entirely graphitize the portion of the foam article from the surface to the desired depth below the surface. Thus, the foam article is an integral monolithic material that has a desired conductivity gradient with a relatively high thermal conductivity in the portion of the core that was graphitized and a relatively low thermal conductivity in the remaining portion of the foam article.

  15. An efficient impedance method for induced field evaluation based on a stabilized Bi-conjugate gradient algorithm

    International Nuclear Information System (INIS)

    Wang Hua; Liu Feng; Crozier, Stuart; Xia Ling

    2008-01-01

    This paper presents a stabilized Bi-conjugate gradient algorithm (BiCGstab) that can significantly improve the performance of the impedance method, which has been widely applied to model low-frequency field induction phenomena in voxel phantoms. The improved impedance method offers remarkable computational advantages in terms of convergence performance and memory consumption over the conventional, successive over-relaxation (SOR)-based algorithm. The scheme has been validated against other numerical/analytical solutions on a lossy, multilayered sphere phantom excited by an ideal coil loop. To demonstrate the computational performance and application capability of the developed algorithm, the induced fields inside a human phantom due to a low-frequency hyperthermia device is evaluated. The simulation results show the numerical accuracy and superior performance of the method.
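
The impedance method ultimately reduces to a large sparse linear system; the sketch below shows only the solver side, applying SciPy's BiCGstab routine to a generic diagonally dominant sparse system that stands in for the voxel-phantom equations.

```python
# Illustrative sketch only: the impedance method assembles a large sparse linear
# system over the voxel phantom; here a generic diagonally dominant sparse
# system stands in for it, solved with SciPy's stabilized bi-conjugate gradient
# (BiCGstab).
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import bicgstab

n = 50_000                                    # number of voxel unknowns (placeholder)
# Simple tridiagonal, diagonally dominant operator as a stand-in for the impedance network
main = 4.0 * np.ones(n)
off = -1.0 * np.ones(n - 1)
A = sp.diags([off, main, off], offsets=[-1, 0, 1], format="csr")
b = np.random.default_rng(0).normal(size=n)   # stand-in source (induced EMF) vector

x, info = bicgstab(A, b, maxiter=2000)
print("converged:", info == 0, " residual norm:", np.linalg.norm(A @ x - b))
```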

  16. An efficient impedance method for induced field evaluation based on a stabilized Bi-conjugate gradient algorithm.

    Science.gov (United States)

    Wang, Hua; Liu, Feng; Xia, Ling; Crozier, Stuart

    2008-11-21

    This paper presents a stabilized Bi-conjugate gradient algorithm (BiCGstab) that can significantly improve the performance of the impedance method, which has been widely applied to model low-frequency field induction phenomena in voxel phantoms. The improved impedance method offers remarkable computational advantages in terms of convergence performance and memory consumption over the conventional, successive over-relaxation (SOR)-based algorithm. The scheme has been validated against other numerical/analytical solutions on a lossy, multilayered sphere phantom excited by an ideal coil loop. To demonstrate the computational performance and application capability of the developed algorithm, the induced fields inside a human phantom due to a low-frequency hyperthermia device is evaluated. The simulation results show the numerical accuracy and superior performance of the method.

  17. Feedback-Based Projected-Gradient Method For Real-Time Optimization of Aggregations of Energy Resources: Preprint

    Energy Technology Data Exchange (ETDEWEB)

    Dall-Anese, Emiliano [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Bernstein, Andrey [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Simonetto, Andrea [IBM Research Center Ireland

    2017-11-27

    This paper develops an online optimization method to maximize the operational objectives of distribution-level distributed energy resources (DERs) while adjusting the aggregate power generated (or consumed) in response to services requested by grid operators. The design of the online algorithm is based on a projected-gradient method, suitably modified to accommodate appropriate measurements from the distribution network and the DERs. By virtue of this approach, the resultant algorithm can cope with inaccuracies in the representation of the AC power, it avoids pervasive metering to gather the state of noncontrollable resources, and it naturally lends itself to a distributed implementation. Optimality claims are established in terms of tracking of the solution of a well-posed time-varying optimization problem.
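
A minimal sketch of a projected-gradient update of the kind described above is given below; it is not NREL's algorithm, and the DER limits, preference weights and setpoint are assumptions made for the example.

```python
# Minimal sketch of a feedback-like projected-gradient step, not the paper's
# actual algorithm: each DER adjusts its output to track an aggregate setpoint
# while staying inside its own power limits; all numbers are illustrative.
import numpy as np

def projected_gradient_step(p, setpoint, p_min, p_max, p_pref, alpha=0.05, w=0.1):
    """One gradient step on tracking error + preference penalty, then box projection."""
    aggregate = p.sum()                        # in the paper this comes from a measurement
    grad = (aggregate - setpoint) + w * (p - p_pref)
    p_new = p - alpha * grad
    return np.clip(p_new, p_min, p_max)        # projection onto the DER capability boxes

rng = np.random.default_rng(3)
n = 20                                         # number of DERs in the aggregation
p_min, p_max = np.zeros(n), rng.uniform(2.0, 5.0, n)      # kW limits per DER
p_pref = 0.5 * p_max                           # preferred operating points
p = p_pref.copy()
setpoint = 45.0                                # aggregate power requested by the grid operator

for k in range(200):
    p = projected_gradient_step(p, setpoint, p_min, p_max, p_pref)
print("aggregate power:", round(p.sum(), 2), "target:", setpoint)
```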

  18. A diatom-based biological condition gradient (BCG) approach for assessing impairment and developing nutrient criteria for streams.

    Science.gov (United States)

    Hausmann, Sonja; Charles, Donald F; Gerritsen, Jeroen; Belton, Thomas J

    2016-08-15

    Over-enrichment leading to excess algal growth is a major problem in rivers and streams. Regulations to protect streams typically incorporate nutrient criteria, concentrations of phosphorus and nitrogen that should not be exceeded in order to protect biological communities. A major challenge has been to develop an approach for both categorizing streams based on their biological conditions and determining scientifically defensible nutrient criteria to protect the biotic integrity of streams in those categories. To address this challenge, we applied the Biological Condition Gradient (BCG) approach to stream diatom assemblages to develop a system for categorizing sites by level of impairment, and then examined the related nutrient concentrations to identify potential nutrient criteria. The six levels of the BCG represent a range of ecological conditions from natural (1) to highly disturbed (6). A group of diatom experts developed a set of rules and a model to assign sites to these levels based on their diatom assemblages. To identify potential numeric nutrient criteria, we explored the relation of assigned BCG levels to nutrient concentrations, other anthropogenic stressors, and possible confounding variables using data for stream sites in New Jersey (n=42) and in surrounding Mid-Atlantic states, USA (n=1443). In both data sets, BCG levels correlated most strongly with total phosphorus and the percentage of forest in the watershed, but were independent of pH. We applied Threshold Indicator Taxa Analysis (TITAN) to determine change-points in the diatom assemblages along the BCG gradient. In both data sets, statistically significant diatom changes occurred between BCG levels 3 and 4. Sites with BCG levels 1 to 3 were dominated by species that grow attached to surfaces, while sites with BCG scores of 4 and above were characterized by motile diatoms. The diatom change-point corresponded with a total phosphorus concentration of about 50μg/L. Copyright © 2016 Elsevier B

  19. Opportunity-based age replacement policy with minimal repair

    International Nuclear Information System (INIS)

    Jhang, J.P.; Sheu, S.H.

    1999-01-01

    This paper proposes an opportunity-based age replacement policy with minimal repair. The system has two types of failures. Type I failures (minor failures) are removed by minimal repairs, whereas type II failures are removed by replacements. Type I and type II failures are age-dependent. A system is replaced at type II failure (catastrophic failure) or at the opportunity after age T, whichever occurs first. The cost of the minimal repair of the system at age z depends on the random part C(z) and the deterministic part c(z). The opportunity arises according to a Poisson process, independent of failures of the component. The expected cost rate is obtained. The optimal T * which would minimize the cost rate is discussed. Various special cases are considered. Finally, a numerical example is given

  20. Designing and implementing science-based methane policies

    Science.gov (United States)

    George, F.

    2017-12-01

    The phenomenal growth in shale gas production across the U.S. has significantly improved the energy security and economic prospects of the country. Natural gas is a "versatile" fuel that has application in every major end-use sector of the economy, both as a fuel and a feedstock. Natural gas has also played a significant role in reducing CO2 emissions from the power sector by displacing more carbon intensive fossil fuels. However, emissions of natural gas (predominantly methane) from the wellhead to the burner tip can erode this environmental benefit. Preserving the many benefits of America's natural gas resources requires smart, science-based policies to optimize the energy delivery efficiency of the natural gas supply chain and ensure that natural gas remains a key pillar in our transition to a low-carbon economy. Southwestern Energy (SWN) is the third largest natural gas producer in the United States. Over the last several years, SWN has participated in a number of scientific studies with regulatory agencies, academia and non-governmental entities that have led to over a dozen peer-reviewed papers on methane emissions from oil and gas operations. This presentation will review how our participation in these studies has informed our internal policies and procedures, as well as our external programs, including the ONE Future coalition (ONE Future). In particular, the presentation will highlight the impact of such studies on our Leak Detection and Repair (LDAR) program, designing new methane research and on the ONE Future initiatives - all with the focus of improving the delivery efficiency of oil and gas operations. Our experience supports continued research in the detection and mitigation of methane emissions, with emphasis on longer duration characterization of methane emissions from oil and gas facilities and further development of cost-effective methane detection and mitigation techniques. We conclude from our scientific and operational experiences that a

  1. A Path-Based Gradient Projection Algorithm for the Cost-Based System Optimum Problem in Networks with Continuously Distributed Value of Time

    Directory of Open Access Journals (Sweden)

    Wen-Xiang Wu

    2014-01-01

    Full Text Available The cost-based system optimum problem in networks with continuously distributed value of time is formulated in a path-based form, which cannot be solved by the Frank-Wolfe algorithm. In light of the orders-of-magnitude improvement in the availability of computer memory in recent years, path-based algorithms have been regarded as a viable approach for traffic assignment problems with reasonably large network sizes. We develop a path-based gradient projection algorithm for solving the cost-based system optimum model, based on the Goldstein-Levitin-Polyak method, which has been successfully applied to solve standard user equilibrium and system optimum problems. The Sioux Falls test network is used to verify the effectiveness of the algorithm.
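
As a toy illustration of path-based gradient projection (in the spirit of the Goldstein-Levitin-Polyak method, not the paper's exact cost-based formulation with distributed value of time), the sketch below assigns a single origin-destination demand to three parallel paths by stepping along the negative marginal-cost direction and projecting back onto the demand simplex. All cost functions and the demand value are assumptions.

```python
# Toy sketch of a path-based projected-gradient step for traffic assignment
# (one OD pair, three parallel paths). Cost functions and demand are illustrative.
import numpy as np

def project_to_simplex(v, total):
    """Euclidean projection of v onto {f >= 0, sum(f) = total}."""
    u = np.sort(v)[::-1]
    css = np.cumsum(u) - total
    rho = np.nonzero(u - css / (np.arange(len(v)) + 1) > 0)[0][-1]
    theta = css[rho] / (rho + 1.0)
    return np.maximum(v - theta, 0.0)

def path_costs(f):
    """Marginal (system-optimal) path costs for three parallel BPR-like links."""
    t0 = np.array([10.0, 15.0, 20.0])          # free-flow travel times
    cap = np.array([400.0, 500.0, 600.0])      # capacities
    # link total cost = f * t0 * (1 + 0.15 (f/cap)^4); marginal cost = d(total)/df
    return t0 * (1.0 + 0.75 * (f / cap) ** 4)

demand = 1000.0
f = np.full(3, demand / 3.0)                   # start from an even path-flow split
for k in range(500):
    f = project_to_simplex(f - 0.5 * path_costs(f), demand)

print("path flows:", np.round(f, 1), " marginal costs:", np.round(path_costs(f), 2))
```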

  2. MIMO feed-forward design in wafer scanners using a gradient approximation-based algorithm

    NARCIS (Netherlands)

    Heertjes, M.F.; Hennekens, D.W.T.; Steinbuch, M.

    2010-01-01

    An experimental demonstration is given of a data-based multi-input multi-output (MIMO) feed-forward control design applied to the motion systems of a wafer scanner. Atop a nominal single-input single-output (SISO) feed-forward controller, a MIMO controller is designed having a finite impulse

  3. Irradiance gradients

    International Nuclear Information System (INIS)

    Ward, G.J.; Heckbert, P.S.; Technische Hogeschool Delft

    1992-04-01

    A new method for improving the accuracy of a diffuse interreflection calculation is introduced in a ray tracing context. The information from a hemispherical sampling of the luminous environment is interpreted in a new way to predict the change in irradiance as a function of position and surface orientation. The additional computation involved is modest and the benefit is substantial. An improved interpolation of irradiance resulting from the gradient calculation produces smoother, more accurate renderings. This result is achieved through better utilization of ray samples rather than additional samples or alternate sampling strategies. Thus, the technique is applicable to a variety of global illumination algorithms that use hemicubes or Monte Carlo sampling techniques

  4. Role of ideas and ideologies in evidence-based health policy.

    Science.gov (United States)

    Prinja, S

    2010-01-01

    Policy making in health is largely thought to be driven by three 'I's, namely ideas, interests and institutions. Recent years have seen a shift in approach, with increasing reliance being placed on the role of evidence in policy making. The present article ascertains the role of ideas and ideologies in shaping the evidence which is used to aid policy decisions. The article discusses different theories of the research-policy interface and the relative freedom of research-based evidence from the influence of ideas. Examples from developed and developing countries are cited to illustrate the contentions made. The article highlights the complexity of the process of evidence-based policy making in a world driven by existing political, social and cultural ideologies. Consideration of this knowledge is paramount as more efforts are being made to bridge the gap between the 'two worlds' of researchers and policy makers to produce evidence-based policy, and it is also relevant for policy analysts.

  5. Zwitterionic silica-based monolithic capillary columns for isocratic and gradient hydrophilic interaction liquid chromatography

    Czech Academy of Sciences Publication Activity Database

    Moravcová, Dana; Planeta, Josef; Kahle, Vladislav; Roth, Michal

    2012-01-01

    Roč. 1270, DEC 28 (2012), s. 178-185 ISSN 0021-9673 R&D Projects: GA MV VG20112015021; GA ČR(CZ) GAP206/11/0138; GA ČR(CZ) GAP106/12/0522 Institutional support: RVO:68081715 Keywords : HILIC * Monolithic silica column * Nucleoside separation * Nucleic acid base Subject RIV: CB - Analytical Chemistry, Separation Impact factor: 4.612, year: 2012

  6. Fugitive emission source characterization using a gradient-based optimization scheme and scalar transport adjoint

    Science.gov (United States)

    Brereton, Carol A.; Joynes, Ian M.; Campbell, Lucy J.; Johnson, Matthew R.

    2018-05-01

    Fugitive emissions are important sources of greenhouse gases and lost product in the energy sector that can be difficult to detect, but are often easily mitigated once they are known, located, and quantified. In this paper, a scalar transport adjoint-based optimization method is presented to locate and quantify unknown emission sources from downstream measurements. This emission characterization approach correctly predicted locations to within 5 m and magnitudes to within 13% of experimental release data from Project Prairie Grass. The method was further demonstrated on simulated simultaneous releases in a complex 3-D geometry based on an Alberta gas plant. Reconstructions were performed using both the complex 3-D transient wind field used to generate the simulated release data and using a sequential series of steady-state RANS wind simulations (SSWS) representing 30 s intervals of physical time. Both the detailed transient and the simplified wind field series could be used to correctly locate major sources and predict their emission rates within 10%, while predicting total emission rates from all sources within 24%. This SSWS case would be much easier to implement in a real-world application, and gives rise to the possibility of developing pre-computed databases of both wind and scalar transport adjoints to reduce computational time.
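
A greatly simplified stand-in for the reconstruction step is sketched below: the adjoint-derived sensitivities are replaced by an assumed linear source-receptor matrix, and non-negative source strengths are recovered from downstream measurements by a gradient-based bounded optimizer. Dimensions, leak locations and rates are illustrative.

```python
# Greatly simplified stand-in for the adjoint-based reconstruction: assume a
# known linear source-receptor sensitivity matrix H (in the paper this comes
# from scalar-transport adjoint simulations) and recover non-negative source
# strengths at candidate locations from downstream concentration measurements.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(7)
n_meas, n_candidates = 45, 30      # e.g. a few sensors sampled over several time windows
H = rng.uniform(0.0, 1.0, size=(n_meas, n_candidates))    # sensitivity matrix (placeholder)

q_true = np.zeros(n_candidates)
q_true[[4, 17]] = [2.0, 0.8]                # two leaks: locations and emission rates
c_meas = H @ q_true + 0.01 * rng.normal(size=n_meas)       # noisy measurements

def misfit_and_grad(q):
    r = H @ q - c_meas
    return 0.5 * r @ r, H.T @ r             # least-squares objective and its gradient

res = minimize(misfit_and_grad, x0=np.zeros(n_candidates), jac=True,
               method="L-BFGS-B", bounds=[(0.0, None)] * n_candidates)
q_est = res.x
print("recovered sources (index, rate):",
      [(i, round(q, 2)) for i, q in enumerate(q_est) if q > 0.05])
```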

  7. Databases as policy instruments. About extending networks as evidence-based policy

    Directory of Open Access Journals (Sweden)

    Stoevelaar Herman

    2007-12-01

    Full Text Available Abstract Background This article seeks to identify the role of databases in health policy. Access to information and communication technologies has changed traditional relationships between the state and professionals, creating new systems of surveillance and control. As a result, databases may have a profound effect on controlling clinical practice. Methods We conducted three case studies to reconstruct the development and use of databases as policy instruments. Each database was intended to be employed to control the use of one particular pharmaceutical in the Netherlands (growth hormone, antiretroviral drugs for HIV and Taxol, respectively). We studied the archives of the Dutch Health Insurance Board, conducted in-depth interviews with key informants and organized two focus groups, all focused on the use of databases both in policy circles and in clinical practice. Results Our results demonstrate that policy makers hardly used the databases, neither for cost control nor for quality assurance. Further analysis revealed that these databases facilitated self-regulation and quality assurance by (national bodies of) professionals, resulting in restrictive prescription behavior amongst physicians. Conclusion The databases fulfill control functions that were formerly located within the policy realm. The databases facilitate collaboration between policy makers and physicians, since they enable quality assurance by professionals. Delegating regulatory authority downwards into a network of physicians who control the use of pharmaceuticals seems to be a good alternative for centralized control on the basis of monitoring data.

  8. To What Extent Are Canadian Second Language Policies Evidence-Based? Reflections on the Intersections of Research and Policy

    Directory of Open Access Journals (Sweden)

    Jim eCummins

    2014-05-01

    Full Text Available The paper addresses the intersections between research, ideology, and Canadian educational policies, focusing on four major areas: (a) core and immersion programs for the teaching of French to Anglophone students, (b) policies concerning the learning of English and French by students from immigrant backgrounds, (c) heritage language teaching, and (d) the education of Deaf and hard-of-hearing students. With respect to the teaching of French, policy-makers have largely ignored the fact that most core French programs produce meager results for the vast majority of students. Only a small proportion of students (< 10%) attend more effective alternatives (e.g. French immersion and Intensive French programs). With respect to immigrant-background students, a large majority of teachers and administrators have not had opportunities to access the knowledge base regarding effective instruction for these students nor have they had opportunities for pre-service or in-service professional development regarding effective instructional practices. Educational policies have also treated the linguistic resources that children bring to school with, at best, benign neglect. In some cases (e.g., Ontario) school systems have been explicitly prohibited from instituting enrichment bilingual programs that would promote students’ bilingualism and biliteracy. Finally, with respect to Deaf students, policy-makers have ignored overwhelming research on the positive relationship between academic success and the development of proficiency in natural sign languages, preferring instead to perpetuate the falsehood that acquisition of languages such as American Sign Language by young children (with or without cochlear implants) will impede children’s language and academic development. The paper reviews the kinds of policies, programs, and practices that could be implemented (at no additional cost) if policy-makers and educators pursued evidence-based educational policies.

  9. Dengue transmission based on urban environmental gradients in different cities of Pakistan.

    Science.gov (United States)

    Khalid, Bushra; Ghaffar, Abdul

    2015-03-01

    This study focuses on dengue transmission in different regions of Pakistan. For this purpose, data on dengue cases for 2009-2012 from four different cities (Rawalpindi, Islamabad, Lahore, and Karachi) of the country were collected, evaluated, and compiled. To identify the reasons for, and the regions at higher risk of, dengue transmission, land use classification and analyses of climate covariates and drainage patterns were carried out. The analysis involves processing of SPOT 5 10 m and Landsat TM 30 m data sets and SRTM 90 m digital elevation models using remote sensing and GIS techniques. The results are based on the change in urbanization and population density; the analysis of temperature, rainfall, and wind speed; and the calculation of drainage patterns including stream features, flow accumulation, and drainage density of the study areas. Results suggest that low-elevation areas with calm winds and higher-than-normal minimum temperatures, rapid increases in unplanned urbanization, low flow accumulation, and high drainage density favor dengue transmission.

  10. Steganalysis Method for LSB Replacement Based on Local Gradient of Image Histogram

    Directory of Open Access Journals (Sweden)

    M. Mahdavi

    2008-10-01

    Full Text Available In this paper we present a new accurate steganalysis method for the LSB replacement steganography. The suggested method is based on the changes that occur in the histogram of an image after the embedding of data. Every pair of neighboring bins of a histogram are either inter-related or unrelated depending on whether embedding of a bit of data in the image could affect both bins or not. We show that the overall behavior of all inter-related bins, when compared with that of the unrelated ones, could give an accurate measure for the amount of the embedded data. Both analytical analysis and simulation results show the accuracy of the proposed method. The suggested method has been implemented and tested for over 2000 samples and compared with the RS Steganalysis method. Mean and variance of error were 0.0025 and 0.0037 for the suggested method, where these quantities were 0.0070 and 0.0182 for the RS Steganalysis. Using 4800 samples, we showed that the performance of the suggested method is comparable with that of the RS steganalysis for JPEG filtered images. The new approach is applicable for the detection of both random and sequential LSB embedding.

  11. Autonomous celestial navigation based on Earth ultraviolet radiance and fast gradient statistic feature extraction

    Science.gov (United States)

    Lu, Shan; Zhang, Hanmo

    2016-01-01

    To meet the requirement of autonomous orbit determination, this paper proposes a fast curve fitting method based on Earth ultraviolet features to obtain an accurate Earth vector direction, in order to achieve high-precision autonomous navigation. Firstly, combining the stable character of Earth ultraviolet radiance with atmospheric radiation transmission modeling software, the paper simulates the Earth ultraviolet radiation model at different times and chooses a proper observation band. Then a fast improved edge-extraction method combining the Sobel operator and local binary patterns (LBP) is utilized, which can both eliminate noise efficiently and extract Earth ultraviolet limb features accurately. The Earth's centroid locations in the simulated images are then estimated via least-squares fitting using part of the limb edges. Taking advantage of the estimated Earth vector direction and Earth distance, an Extended Kalman Filter (EKF) is finally applied to realize the autonomous navigation. Experimental results indicate that the proposed method can achieve sub-pixel Earth centroid location estimation and greatly enhance autonomous celestial navigation precision.
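
The geometric core of the method (limb-edge extraction followed by least-squares fitting of the Earth's disc) can be sketched as below; the LBP-based noise rejection and the EKF stage are omitted, and the synthetic ultraviolet disc image is an assumption for the example.

```python
# Sketch of the geometric core: extract limb-edge pixels with a Sobel gradient
# magnitude (the paper's LBP-based noise rejection is omitted) and fit a circle
# to part of the limb by linear least squares to estimate the Earth centroid in
# the image. The synthetic disc image is a placeholder.
import numpy as np
from scipy.ndimage import sobel

def fit_circle(xs, ys):
    """Least-squares circle fit: x^2 + y^2 = 2a x + 2b y + c."""
    A = np.column_stack([2 * xs, 2 * ys, np.ones_like(xs)])
    rhs = xs**2 + ys**2
    (a, b, c), *_ = np.linalg.lstsq(A, rhs, rcond=None)
    return a, b, np.sqrt(c + a**2 + b**2)    # center (a, b) and radius

# Synthetic "Earth disc" in the ultraviolet band: bright disc on dark background
h = w = 256
yy, xx = np.mgrid[0:h, 0:w]
cx, cy, R = 140.3, 120.7, 75.0
img = (np.hypot(xx - cx, yy - cy) <= R).astype(float)
img += 0.02 * np.random.default_rng(0).normal(size=img.shape)   # sensor noise

# Sobel gradient magnitude and limb-edge selection
gmag = np.hypot(sobel(img, axis=0), sobel(img, axis=1))
edge_y, edge_x = np.nonzero(gmag > 0.5 * gmag.max())
keep = edge_x > cx            # use only part of the limb, as in partial-limb fitting
a, b, r = fit_circle(edge_x[keep].astype(float), edge_y[keep].astype(float))
print(f"estimated centroid: ({a:.1f}, {b:.1f}), radius {r:.1f}")
```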

  12. Optimization of Simple Monetary Policy Rules on the Base of Estimated DSGE-model

    OpenAIRE

    Shulgin, A.

    2015-01-01

    Optimization of the coefficients in monetary policy rules is performed on the basis of a DSGE model with two independent monetary policy instruments estimated on Russian data. It was found that welfare-maximizing policy rules lead to inadequate results and pro-cyclical monetary policy. Optimal coefficients in the Taylor rule and the exchange rate rule allow the volatility estimated on Russian data for 2001-2012 to be decreased by about 20%. The degree of exchange rate flexibility parameter was found to be low...

  13. A Critical Assessment of Evidence-Based Policy and Practice in Social Work.

    Science.gov (United States)

    Diaz, Clive; Drewery, Sian

    2016-01-01

    In this article the authors consider how effective social work has been in terms of evidence-based policies and practice. They consider the role that "evidence" plays in policy making both in the wider context and, in particular, in relation to social work. The authors argue that there are numerous voices in the policy-making process and evidence only plays a minor role in terms of policy development and practice in social work.

  14. Focus measure method based on the modulus of the gradient of the color planes for digital microscopy

    Science.gov (United States)

    Hurtado-Pérez, Román; Toxqui-Quitl, Carina; Padilla-Vivanco, Alfonso; Aguilar-Valdez, J. Félix; Ortega-Mendoza, Gabriel

    2018-02-01

    The modulus of the gradient of the color planes (MGC) is implemented to transform multichannel information to a grayscale image. This digital technique is used in two applications: (a) focus measurements during autofocusing (AF) process and (b) extending the depth of field (EDoF) by means of multifocus image fusion. In the first case, the MGC procedure is based on an edge detection technique and is implemented in over 15 focus metrics that are typically handled in digital microscopy. The MGC approach is tested on color images of histological sections for the selection of in-focus images. An appealing attribute of all the AF metrics working in the MGC space is their monotonic behavior even up to a magnification of 100×. An advantage of the MGC method is its computational simplicity and inherent parallelism. In the second application, a multifocus image fusion algorithm based on the MGC approach has been implemented on graphics processing units (GPUs). The resulting fused images are evaluated using a nonreference image quality metric. The proposed fusion method reveals a high-quality image independently of faulty illumination during the image acquisition. Finally, the three-dimensional visualization of the in-focus image is shown.
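
A minimal sketch of the MGC idea follows: each color plane's gradient magnitude is computed and the planes are collapsed by taking the maximum modulus, which is then used as a simple focus score. The exact channel weighting and focus metric in the paper may differ; the toy blurred stack is an assumption.

```python
# Minimal sketch of the MGC idea: collapse an RGB image to a single plane via
# the modulus of the gradient of the color planes, then use a simple focus
# score on it. Illustration only; the paper's exact weighting may differ.
import numpy as np

def mgc(rgb):
    """Modulus of the gradient of the color planes: max gradient magnitude over R, G, B."""
    mags = []
    for c in range(rgb.shape[2]):
        gy, gx = np.gradient(rgb[:, :, c].astype(float))
        mags.append(np.hypot(gx, gy))
    return np.max(np.stack(mags, axis=0), axis=0)    # combine channels by maximum modulus

def focus_measure(rgb):
    """Focus score: energy of the MGC plane (higher = sharper)."""
    g = mgc(rgb)
    return float(np.mean(g ** 2))

# Toy z-stack: the same random texture progressively blurred to mimic defocus
rng = np.random.default_rng(0)
sharp = rng.uniform(size=(64, 64, 3))

def blur(img, n):                      # crude box blur repeated n times
    out = img.copy()
    for _ in range(n):
        out = 0.25 * (np.roll(out, 1, 0) + np.roll(out, -1, 0)
                      + np.roll(out, 1, 1) + np.roll(out, -1, 1))
    return out

stack = [blur(sharp, n) for n in (0, 2, 5, 10)]
print("focus scores (sharp -> blurred):", [round(focus_measure(s), 4) for s in stack])
```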

  15. BASES OF PUBLIC POLICY FORMATION DIRECTED AT ENSURING BUDGET SECURITY

    Directory of Open Access Journals (Sweden)

    S. Onishchenko

    2015-03-01

    Full Text Available In the article, the priorities and public policies that can improve the level of budget security of Ukraine are substantiated. Attention is focused on the problems of imbalance and the trend of accumulating public debt through persistent deficits. A detailed analysis of the budget deficits of the European Community is carried out to further investigate the main problems of fiscal security. It is concluded that the formation of the budget policy concept should include long-term and medium-term state priorities. The author emphasizes that budget policy on public debt must deal with the interrelated issues of debt burden and the effective use of public credit, and must promote economic growth while respecting a safe level and structure of public debt. Debt policy, as part of fiscal policy, can under certain conditions be a powerful tool to intensify investment and innovation processes in society and to promote economic and social development. The reorientation of fiscal policy towards addressing current problems through debt, and its use as a basis for investment and innovation development, requires effective public debt management designed to reduce state budget expenditures on debt servicing and repayment and to optimize the scope and structure of debt in line with economic growth. The role of debt policy in modern conditions increases and is clearly subordinate to, and consistent with, the long-term goals and priorities of fiscal policy. There is an urgent need to develop and implement effective mechanisms for investing borrowed resources and increasing the efficiency of public investment, including the improvement of organizational, financial, legal and control instruments. Strategically, budget security is guaranteed only by a competitive economy, which can be built only through the recovery and accelerated development of promising sectors of the national economy in the presence of a balanced budget policy. At present there is a tendency to implement only measures to stabilize the political and socio

  16. Bone marrow-derived cells for cardiovascular cell therapy: an optimized GMP method based on low-density gradient improves cell purity and function.

    Science.gov (United States)

    Radrizzani, Marina; Lo Cicero, Viviana; Soncin, Sabrina; Bolis, Sara; Sürder, Daniel; Torre, Tiziano; Siclari, Francesco; Moccetti, Tiziano; Vassalli, Giuseppe; Turchetto, Lucia

    2014-09-27

    Cardiovascular cell therapy represents a promising field, with several approaches currently being tested. The advanced therapy medicinal product (ATMP) for the ongoing METHOD clinical study ("Bone marrow derived cell therapy in the stable phase of chronic ischemic heart disease") consists of fresh mononuclear cells (MNC) isolated from autologous bone marrow (BM) through density gradient centrifugation on standard Ficoll-Paque. Cells are tested for safety (sterility, endotoxin), identity/potency (cell count, CD45/CD34/CD133, viability) and purity (contaminant granulocytes and platelets). BM-MNC were isolated by density gradient centrifugation on Ficoll-Paque. The following process parameters were optimized throughout the study: gradient medium density; gradient centrifugation speed and duration; washing conditions. A new manufacturing method was set up, based on gradient centrifugation on low density Ficoll-Paque, followed by 2 washing steps, of which the second one at low speed. It led to significantly higher removal of contaminant granulocytes and platelets, improving product purity; the frequencies of CD34+ cells, CD133+ cells and functional hematopoietic and mesenchymal precursors were significantly increased. The methodological optimization described here resulted in a significant improvement of ATMP quality, a crucial issue to clinical applications in cardiovascular cell therapy.

  17. Smart City Mobility Application—Gradient Boosting Trees for Mobility Prediction and Analysis Based on Crowdsourced Data

    Directory of Open Access Journals (Sweden)

    Ivana Semanjski

    2015-07-01

    Full Text Available Mobility management represents one of the most important parts of the smart city concept. The way we travel, at what time of the day, for what purposes and with what transportation modes, have a pertinent impact on the overall quality of life in cities. To manage this process, detailed and comprehensive information on individuals’ behaviour is needed as well as effective feedback/communication channels. In this article, we explore the applicability of crowdsourced data for this purpose. We apply a gradient boosting trees algorithm to model individuals’ mobility decision making processes (particularly concerning what transportation mode they are likely to use). To accomplish this we rely on data collected from three sources: a dedicated smartphone application, a geographic information systems-based web interface and weather forecast data collected over a period of six months. The applicability of the developed model is seen as a potential platform for personalized mobility management in smart cities and a communication tool between the city (to steer the users towards more sustainable behaviour by additionally weighting preferred suggestions) and users (who can give feedback on the acceptability of the provided suggestions, by accepting or rejecting them, providing an additional input to the learning process).
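
For illustration, a gradient-boosting classifier predicting transportation mode from a few trip and weather features could look like the sketch below (scikit-learn is assumed); the feature names, mode classes and synthetic data are stand-ins for the crowdsourced dataset described above.

```python
# Illustrative sketch, not the paper's model: a gradient-boosting classifier
# predicting transportation mode from a few trip/weather features. Feature
# names, classes and the synthetic data are assumptions for the example.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 2000
distance_km = rng.exponential(5.0, n)                  # trip distance
rain_mm = rng.exponential(1.0, n)                      # forecast rainfall
hour = rng.integers(0, 24, n)                          # departure hour

# Simple synthetic rule + noise standing in for crowdsourced labels
mode = np.where(distance_km < 2.0, "walk",
        np.where((distance_km < 8.0) & (rain_mm < 2.0), "bike", "car"))
flip = rng.random(n) < 0.1
mode = np.where(flip, rng.choice(["walk", "bike", "car"], n), mode)

X = np.column_stack([distance_km, rain_mm, hour])
X_tr, X_te, y_tr, y_te = train_test_split(X, mode, test_size=0.25, random_state=0)

clf = GradientBoostingClassifier(n_estimators=200, learning_rate=0.05, max_depth=3)
clf.fit(X_tr, y_tr)
print("hold-out accuracy:", round(clf.score(X_te, y_te), 3))
print("feature importances (distance, rain, hour):", np.round(clf.feature_importances_, 3))
```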

  18. Smart City Mobility Application—Gradient Boosting Trees for Mobility Prediction and Analysis Based on Crowdsourced Data

    Science.gov (United States)

    Semanjski, Ivana; Gautama, Sidharta

    2015-01-01

    Mobility management represents one of the most important parts of the smart city concept. The way we travel, at what time of the day, for what purposes and with what transportation modes, have a pertinent impact on the overall quality of life in cities. To manage this process, detailed and comprehensive information on individuals’ behaviour is needed as well as effective feedback/communication channels. In this article, we explore the applicability of crowdsourced data for this purpose. We apply a gradient boosting trees algorithm to model individuals’ mobility decision making processes (particularly concerning what transportation mode they are likely to use). To accomplish this we rely on data collected from three sources: a dedicated smartphone application, a geographic information systems-based web interface and weather forecast data collected over a period of six months. The applicability of the developed model is seen as a potential platform for personalized mobility management in smart cities and a communication tool between the city (to steer the users towards more sustainable behaviour by additionally weighting preferred suggestions) and users (who can give feedback on the acceptability of the provided suggestions, by accepting or rejecting them, providing an additional input to the learning process). PMID:26151209

  19. Design and analysis of gradient index metamaterial-based cloak with wide bandwidth and physically realizable material parameters

    Science.gov (United States)

    Bisht, Mahesh Singh; Rajput, Archana; Srivastava, Kumar Vaibhav

    2018-04-01

    A cloak based on a gradient index metamaterial (GIM) is proposed. The GIM is used to convert propagating waves into surface waves and vice versa, producing the cloaking effect. The cloak is made of a metamaterial consisting of four supercells, each possessing a linear spatial variation of permittivity and permeability. This spatial variation of material parameters allows the conversion of propagating waves into surface waves and vice versa, and hence reduces the electromagnetic signature of the object. To facilitate practical implementation of the cloak, the continuous spatial variation of permittivity and/or permeability in each supercell is discretized into seven segments, and it is shown that the discretized cloak deviates little in cloaking performance from its continuous counterpart. A crucial advantage of the proposed cloaks is that the material parameters are isotropic and lie in a physically realizable range. Furthermore, the proposed cloaks are shown to possess a bandwidth of the order of 190%, a significantly improved performance compared to the recently published literature.
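
    As a minimal numerical sketch of the discretization step described above, the snippet below replaces a continuous linear permittivity ramp across one supercell with seven piecewise-constant segments. The end values and supercell width are illustrative assumptions, not values from the paper.

```python
import numpy as np

# Hypothetical linear permittivity ramp across one supercell (illustrative values only).
eps_start, eps_end = 1.0, 4.0      # relative permittivity at the two supercell edges (assumed)
width = 10.0                        # supercell width in mm (assumed)
n_segments = 7                      # seven-segment discretization mentioned in the abstract

# Continuous profile sampled finely, and its seven-segment piecewise-constant version.
x = np.linspace(0.0, width, 700)
eps_continuous = eps_start + (eps_end - eps_start) * x / width

edges = np.linspace(0.0, width, n_segments + 1)
segment_values = eps_start + (eps_end - eps_start) * (edges[:-1] + edges[1:]) / (2 * width)
eps_discrete = segment_values[np.minimum((x / width * n_segments).astype(int), n_segments - 1)]

print("segment permittivities:", np.round(segment_values, 3))
print("max deviation from continuous ramp:", np.round(np.max(np.abs(eps_discrete - eps_continuous)), 3))
```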

  20. Smart City Mobility Application--Gradient Boosting Trees for Mobility Prediction and Analysis Based on Crowdsourced Data.

    Science.gov (United States)

    Semanjski, Ivana; Gautama, Sidharta

    2015-07-03

    Mobility management represents one of the most important parts of the smart city concept. The way we travel, at what time of the day, for what purposes and with what transportation modes, have a pertinent impact on the overall quality of life in cities. To manage this process, detailed and comprehensive information on individuals' behaviour is needed as well as effective feedback/communication channels. In this article, we explore the applicability of crowdsourced data for this purpose. We apply a gradient boosting trees algorithm to model individuals' mobility decision making processes (particularly concerning what transportation mode they are likely to use). To accomplish this we rely on data collected from three sources: a dedicated smartphone application, a geographic information systems-based web interface and weather forecast data collected over a period of six months. The applicability of the developed model is seen as a potential platform for personalized mobility management in smart cities and a communication tool between the city (to steer the users towards more sustainable behaviour by additionally weighting preferred suggestions) and users (who can give feedback on the acceptability of the provided suggestions, by accepting or rejecting them, providing an additional input to the learning process).
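
    As a rough illustration of the modelling step described in this record, the sketch below trains a gradient boosting classifier to predict a transport mode from trip features. The features, labels and generating rule are synthetic and invented for the example; the study's actual crowdsourced features and preprocessing are not reproduced here.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 2000

# Synthetic trip features: hour of day, trip distance (km), rain indicator (illustrative only).
hour = rng.integers(0, 24, n)
distance_km = rng.exponential(5.0, n)
rain = rng.integers(0, 2, n)

# Invented rule generating a transport-mode label (0 = walk, 1 = bike, 2 = car/transit).
mode = np.where(distance_km < 1.5, 0, np.where((distance_km < 6) & (rain == 0), 1, 2))

X = np.column_stack([hour, distance_km, rain])
X_train, X_test, y_train, y_test = train_test_split(X, mode, test_size=0.25, random_state=0)

clf = GradientBoostingClassifier(n_estimators=200, learning_rate=0.05, max_depth=3)
clf.fit(X_train, y_train)
print("held-out accuracy:", round(clf.score(X_test, y_test), 3))
```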

  1. A sensitivity function-based conjugate gradient method for optical tomography with the frequency-domain equation of radiative transfer

    International Nuclear Information System (INIS)

    Kim, Hyun Keol; Charette, Andre

    2007-01-01

    The Sensitivity Function-based Conjugate Gradient Method (SFCGM) is described. This method is used to solve inverse problems of function estimation, such as recovering local maps of absorption and scattering coefficients, as applied to optical tomography for biomedical imaging. A highly scattering, absorbing, non-reflecting, non-emitting medium is considered, and simultaneous reconstruction of the absorption and scattering coefficients inside the test medium is achieved with the proposed optimization technique, using the exit intensity measured at the boundary surfaces. The forward problem is solved with a discrete-ordinates finite-difference method in the framework of the frequency-domain full equation of radiative transfer. The modulation frequency is set to 600 MHz and the frequency data obtained with the source modulation are used as the input data. The inversion results demonstrate that the SFCGM can simultaneously retrieve the spatial distributions of the optical properties inside the medium with reasonable accuracy, while significantly reducing cross-talk between the two parameters. It is also observed that objects closer to the detector are better retrieved.
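
    For orientation, the sketch below shows a generic Fletcher-Reeves nonlinear conjugate gradient loop on a least-squares misfit with a toy linear forward model standing in for the radiative-transfer solve. It is not the authors' sensitivity-function formulation; the operator, data and sizes are invented.

```python
import numpy as np

# Generic Fletcher-Reeves conjugate gradient on J(m) = 0.5 * ||G m - d||^2,
# with a toy linear forward model G (illustrative stand-in for the forward solver).
rng = np.random.default_rng(1)
G = rng.normal(size=(40, 10))
m_true = rng.normal(size=10)
d = G @ m_true

def misfit_and_gradient(m):
    r = G @ m - d
    return 0.5 * float(r @ r), G.T @ r

m = np.zeros(10)
J, g = misfit_and_gradient(m)
p = -g
for _ in range(50):
    # Exact line search for a quadratic misfit: alpha = -(g.p) / ||G p||^2.
    alpha = -(g @ p) / float((G @ p) @ (G @ p))
    m = m + alpha * p
    J_new, g_new = misfit_and_gradient(m)
    beta = float(g_new @ g_new) / float(g @ g)   # Fletcher-Reeves update
    p = -g_new + beta * p
    J, g = J_new, g_new
    if np.linalg.norm(g) < 1e-10:
        break

print("final misfit:", J, " parameter error:", np.linalg.norm(m - m_true))
```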

  2. Time-domain full waveform inversion using the gradient preconditioning based on seismic wave energy: Application to the South China Sea

    KAUST Repository

    Mengxuan, Zhong; Jun, Tan; Peng, Song; Xiao-bo, Zhang; Chuang, Xie; Zhao-lun, Liu

    2017-01-01

    The gradient preconditioning algorithms based on Hessian matrices in time-domain full waveform inversion (FWI) are widely used now, but consume a lot of memory and do not fit the FWI of large models or actual seismic data well. To avoid the huge

  3. Wood formation from the base to the crown in Pinus radiata: gradients of tracheid wall thickness, wood density, radial growth rate and gene expression

    Science.gov (United States)

    Sheree Cato; Lisa McMillan; Lloyd Donaldson; Thomas Richardson; Craig Echt; Richard Gardner

    2006-01-01

    Wood formation was investigated at five heights along the bole for two unrelated trees of Pinus radiata. Both trees showed clear gradients in wood properties from the base to the crown. Cambial cells at the base of the tree were dividing 3.3-fold slower than those at the crown, while the average thickness of cell walls in wood was highest at the base....

  4. Development of evidence-based health policy documents in developing countries: a case of Iran.

    Science.gov (United States)

    Imani-Nasab, Mohammad Hasan; Seyedin, Hesam; Majdzadeh, Reza; Yazdizadeh, Bahareh; Salehi, Masoud

    2014-02-07

    Evidence-based policy documents that are well developed by senior civil servants and are timely available can reduce the barriers to evidence utilization by health policy makers. This study examined the barriers and facilitators in developing evidence-based health policy documents from the perspective of their producers in a developing country. In a qualitative study with a framework analysis approach, we conducted semi-structured interviews using purposive and snowball sampling. A qualitative analysis software (MAXQDA-10) was used to apply the codes and manage the data. This study was theory-based and the results were compared to exploratory studies about the factors influencing evidence-based health policy-making. 18 codes and three main themes of behavioral, normative, and control beliefs were identified. Factors that influence the development of evidence-based policy documents were identified by the participants: behavioral beliefs included quality of policy documents, use of resources, knowledge and innovation, being time-consuming and contextualization; normative beliefs included policy authorities, policymakers, policy administrators, and co-workers; and control beliefs included recruitment policy, performance management, empowerment, management stability, physical environment, access to evidence, policy making process, and effect of other factors. Most of the cited barriers to the development of evidence-based policy were related to control beliefs, i.e. barriers at the organizational and health system levels. This study identified the factors that influence the development of evidence-based policy documents based on the components of the theory of planned behavior. But in exploratory studies on evidence utilization by health policymakers, the identified factors were only related to control behaviors. This suggests that the theoretical approach may be preferable to the exploratory approach in identifying the barriers and facilitators of a behavior.

  5. The effects of an invasive seaweed on native communities vary along a gradient of land-based human impacts

    Directory of Open Access Journals (Sweden)

    Fabio Bulleri

    2016-03-01

    Full Text Available The difficulty in teasing apart the effects of biological invasions from those of other anthropogenic perturbations has hampered our understanding of the mechanisms underpinning the global biodiversity crisis. The recent elaboration of global-scale maps of cumulative human impacts provides a unique opportunity to assess how the impact of invaders varies among areas exposed to different anthropogenic activities. A recent meta-analysis has shown that the effects of invasive seaweeds on native biota tend to be more negative in relatively pristine than in human-impacted environments. Here, we tested this hypothesis through the experimental removal of the invasive green seaweed, Caulerpa cylindracea, from rocky reefs across the Mediterranean Sea. More specifically, we assessed which out of land-based and sea-based cumulative impact scores was a better predictor of the direction and magnitude of the effects of this seaweed on extant and recovering native assemblages. Approximately 15 months after the start of the experiment, the removal of C. cylindracea from extant assemblages enhanced the cover of canopy-forming macroalgae at relatively pristine sites. This did not, however, result in major changes in total cover or species richness of native assemblages. Preventing C. cylindracea re-invasion of cleared plots at pristine sites promoted the recovery of canopy-forming and encrusting macroalgae and hampered that of algal turfs, ultimately resulting in increased species richness. These effects weakened progressively with increasing levels of land-based human impacts and, indeed, shifted in sign at the upper end of the gradient investigated. Thus, at sites exposed to intense disturbance from land-based human activities, the removal of C. cylindracea fostered the cover of algal turfs and decreased that of encrusting algae, with no net effect on species richness. Our results suggest that competition from C. cylindracea is an important determinant of

  6. The effects of an invasive seaweed on native communities vary along a gradient of land-based human impacts.

    Science.gov (United States)

    Bulleri, Fabio; Badalamenti, Fabio; Iveša, Ljiljana; Mikac, Barbara; Musco, Luigi; Jaklin, Andrej; Rattray, Alex; Vega Fernández, Tomás; Benedetti-Cecchi, Lisandro

    2016-01-01

    The difficulty in teasing apart the effects of biological invasions from those of other anthropogenic perturbations has hampered our understanding of the mechanisms underpinning the global biodiversity crisis. The recent elaboration of global-scale maps of cumulative human impacts provides a unique opportunity to assess how the impact of invaders varies among areas exposed to different anthropogenic activities. A recent meta-analysis has shown that the effects of invasive seaweeds on native biota tend to be more negative in relatively pristine than in human-impacted environments. Here, we tested this hypothesis through the experimental removal of the invasive green seaweed, Caulerpa cylindracea, from rocky reefs across the Mediterranean Sea. More specifically, we assessed which out of land-based and sea-based cumulative impact scores was a better predictor of the direction and magnitude of the effects of this seaweed on extant and recovering native assemblages. Approximately 15 months after the start of the experiment, the removal of C. cylindracea from extant assemblages enhanced the cover of canopy-forming macroalgae at relatively pristine sites. This did not, however, result in major changes in total cover or species richness of native assemblages. Preventing C. cylindracea re-invasion of cleared plots at pristine sites promoted the recovery of canopy-forming and encrusting macroalgae and hampered that of algal turfs, ultimately resulting in increased species richness. These effects weakened progressively with increasing levels of land-based human impacts and, indeed, shifted in sign at the upper end of the gradient investigated. Thus, at sites exposed to intense disturbance from land-based human activities, the removal of C. cylindracea fostered the cover of algal turfs and decreased that of encrusting algae, with no net effect on species richness. Our results suggest that competition from C. cylindracea is an important determinant of benthic assemblage

  7. Inorganic species of arsenic in soil solution determined by microcartridges and ferrihydrite-based diffusive gradient in thin films (DGT).

    Science.gov (United States)

    Moreno-Jiménez, Eduardo; Six, Laetitia; Williams, Paul N; Smolders, Erik

    2013-01-30

    The bioavailability of soil arsenic (As) is determined by its speciation in soil solution, i.e., arsenite [As(III)] or arsenate [As(V)]. Soil bioavailability studies require suitable methods to cope with small volumes of soil solution that can be speciated directly after sampling, and thereby minimise any As speciation change during sample collection. In this study, we tested a self-made microcartridge to separate both As species and compared it to a commercially available cartridge. In addition, the diffusive gradient in thin films technique (DGT), in combination with the microcartridges, was applied to synthetic solutions and to a soil spiked with As. This combination was used to improve the assessment of available inorganic As species with ferrihydrite(FH)-DGT, in order to validate the technique for environmental analysis, mainly in soils. The self-made microcartridge was effective in separating As(III) from As(V) in solution with detection by inductively coupled plasma optical emission spectrometry (ICP-OES) in volumes of only 3 ml. The DGT study also showed that the FH-based binding gels are effective for As(III) and As(V) assessment, in solutions with As and P concentrations and ionic strength commonly found in soils. The FH-DGT was tested on flooded and unflooded As spiked soils and recoveries of As(III) and As(V) were 85-104% of the total dissolved As. This study shows that the DGT with FH-based binding gel is robust for assessing inorganic species of As in soils. Copyright © 2012 Elsevier B.V. All rights reserved.
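
    For context, the standard DGT relation converts the mass accumulated on the binding gel over the deployment time into a time-averaged solution concentration. The sketch below implements that textbook equation; the numerical inputs are purely illustrative and are not taken from this study.

```python
def dgt_concentration(mass_ng, delta_g_cm, D_cm2_s, area_cm2, time_s):
    """Standard DGT equation: time-averaged concentration (ng/cm^3 = ug/L)
    from the mass accumulated on the binding gel."""
    return mass_ng * delta_g_cm / (D_cm2_s * area_cm2 * time_s)

# Illustrative numbers only (not from the study): 50 ng As accumulated over 24 h,
# 0.094 cm diffusive layer, 5e-6 cm^2/s diffusion coefficient, 3.14 cm^2 exposure window.
c = dgt_concentration(50.0, 0.094, 5.0e-6, 3.14, 24 * 3600)
print(f"C_DGT = {c:.2f} ug/L")
```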

  8. Advancing team-based primary health care: a comparative analysis of policies in western Canada.

    Science.gov (United States)

    Suter, Esther; Mallinson, Sara; Misfeldt, Renee; Boakye, Omenaa; Nasmith, Louise; Wong, Sabrina T

    2017-07-17

    We analyzed and compared primary health care (PHC) policies in British Columbia, Alberta and Saskatchewan to understand how they inform the design and implementation of team-based primary health care service delivery. The goal was to develop policy imperatives that can advance team-based PHC in Canada. We conducted comparative case studies (n = 3). The policy analysis included: Context review: We reviewed relevant information (2007 to 2014) from databases and websites. Policy review and comparative analysis: We compared and contrasted publically available PHC policies. Key informant interviews: Key informants (n = 30) validated narratives prepared from the comparative analysis by offering contextual information on potential policy imperatives. Advisory group and roundtable: An expert advisory group guided this work and a key stakeholder roundtable event guided prioritization of policy imperatives. The concept of team-based PHC varies widely across and within the three provinces. We noted policy gaps related to team configuration, leadership, scope of practice, role clarity and financing of team-based care; few policies speak explicitly to monitoring and evaluation of team-based PHC. We prioritized four policy imperatives: (1) alignment of goals and policies at different system levels; (2) investment of resources for system change; (3) compensation models for all members of the team; and (4) accountability through collaborative practice metrics. Policies supporting team-based PHC have been slow to emerge, lacking a systematic and coordinated approach. Greater alignment with specific consideration of financing, reimbursement, implementation mechanisms and performance monitoring could accelerate systemic transformation by removing some well-known barriers to team-based care.

  9. Evidence-based policy? The use of mobile phones in hospital

    NARCIS (Netherlands)

    Ettelt, Stefanie; Nolte, Ellen; McKee, Martin; Haugen, Odd Arild; Karlberg, Ingvar; Klazinga, Niek; Ricciardi, Walter; Teperi, Juha

    2006-01-01

    BACKGROUND: Evidence-based policies have become increasingly accepted in clinical practice. However, policies on many of the non-clinical activities that take place in health care facilities may be less frequently evidence based. METHODS: We carried out a review of literature on safety of mobile

  10. On the Implications of Knowledge Bases for Regional Innovation Policies in Germany

    Directory of Open Access Journals (Sweden)

    Hassink Robert

    2014-12-01

    Full Text Available Regional innovation policies have been criticised for being too standardised, one-size-fits-all and place-neutral in character. Embedded in these debates, this paper has two aims: first, to analyse whether industries with different knowledge bases in regions in Germany have different needs for regional innovation policies, and secondly, to investigate whether knowledge bases can contribute to the fine-tuning of regional innovation policies in particular and to a modern, tailor-made, place-based regional innovation policy in general. It concludes that although needs differ due to differences in knowledge bases, those bases are useful only to a limited extent in fine-tuning regional innovation policies

  11. Development of Evidence-Based Health Policy Documents in Developing Countries: A Case of Iran

    Science.gov (United States)

    Imani-Nasab, Mohammad Hasan; Seyedin, Hesam; Majdzadeh, Reza; Yazdizadeh, Bahareh; Salehi, Masoud

    2014-01-01

    Background: Evidence-based policy documents that are well developed by senior civil servants and are timely available can reduce the barriers to evidence utilization by health policy makers. This study examined the barriers and facilitators in developing evidence-based health policy documents from the perspective of their producers in a developing country. Methods: In a qualitative study with a framework analysis approach, we conducted semi-structured interviews using purposive and snowball sampling. A qualitative analysis software (MAXQDA-10) was used to apply the codes and manage the data. This study was theory-based and the results were compared to exploratory studies about the factors influencing evidence-based health policymaking. Results: 18 codes and three main themes of behavioral, normative, and control beliefs were identified. Factors that influence the development of evidence-based policy documents were identified by the participants: behavioral beliefs included quality of policy documents, use of resources, knowledge and innovation, being time-consuming and contextualization; normative beliefs included policy authorities, policymakers, policy administrators, and co-workers; and control beliefs included recruitment policy, performance management, empowerment, management stability, physical environment, access to evidence, policy making process, and effect of other factors. Conclusion: Most of the cited barriers to the development of evidence-based policy were related to control beliefs, i.e. barriers at the organizational and health system levels. This study identified the factors that influence the development of evidence-based policy documents based on the components of the theory of planned behavior. But in exploratory studies on evidence utilization by health policymakers, the identified factors were only related to control behaviors. This suggests that the theoretical approach may be preferable to the exploratory approach in identifying the barriers

  12. Design and fabrication of integrated micro/macrostructure for 3D functional gradient systems based on additive manufacturing

    Science.gov (United States)

    Yin, Ming; Xie, Luofeng; Jiang, Weifeng; Yin, Guofu

    2018-05-01

    Functional gradient systems have important applications in many areas. Although a 2D dielectric structure that serves as a gradient index medium for controlling electromagnetic waves is well established, it may not be suitable for application in the 3D case. In this paper, we present a method to realize functional gradient systems with a 3D integrated micro/macrostructure. The homogenization of the structure is studied in detail by conducting band diagram analysis. The analysis shows that the effective medium approximation is valid even when the periodicity is comparable to the wavelength. The condition to ensure the polarization-invariant, isotropic, and frequency-independent property is investigated. The scheme for the design and fabrication of 3D systems requiring a spatial material property distribution is presented. By using the vat photopolymerization process, a large overall size of the macrostructure at the system level and precise fine features of the microstructure at the unit cell level are realized, thus demonstrating considerable scalability of the system for wave manipulation.

  13. Local CC2 response method based on the Laplace transform: Analytic energy gradients for ground and excited states

    Energy Technology Data Exchange (ETDEWEB)

    Ledermüller, Katrin; Schütz, Martin, E-mail: martin.schuetz@chemie.uni-regensburg.de [Institute of Physical and Theoretical Chemistry, University of Regensburg, Universitätsstraße 31, D-93040 Regensburg (Germany)

    2014-04-28

    A multistate local CC2 response method for the calculation of analytic energy gradients with respect to nuclear displacements is presented for ground and electronically excited states. The gradient enables the search for equilibrium geometries of extended molecular systems. Laplace transform is used to partition the eigenvalue problem in order to obtain an effective singles eigenvalue problem and adaptive, state-specific local approximations. This leads to an approximation in the energy Lagrangian, which however is shown (by comparison with the corresponding gradient method without Laplace transform) to be of no concern for geometry optimizations. The accuracy of the local approximation is tested and the efficiency of the new code is demonstrated by application calculations devoted to a photocatalytic decarboxylation process of present interest.

  14. Local CC2 response method based on the Laplace transform: analytic energy gradients for ground and excited states.

    Science.gov (United States)

    Ledermüller, Katrin; Schütz, Martin

    2014-04-28

    A multistate local CC2 response method for the calculation of analytic energy gradients with respect to nuclear displacements is presented for ground and electronically excited states. The gradient enables the search for equilibrium geometries of extended molecular systems. Laplace transform is used to partition the eigenvalue problem in order to obtain an effective singles eigenvalue problem and adaptive, state-specific local approximations. This leads to an approximation in the energy Lagrangian, which however is shown (by comparison with the corresponding gradient method without Laplace transform) to be of no concern for geometry optimizations. The accuracy of the local approximation is tested and the efficiency of the new code is demonstrated by application calculations devoted to a photocatalytic decarboxylation process of present interest.

  15. Local CC2 response method based on the Laplace transform: Analytic energy gradients for ground and excited states

    International Nuclear Information System (INIS)

    Ledermüller, Katrin; Schütz, Martin

    2014-01-01

    A multistate local CC2 response method for the calculation of analytic energy gradients with respect to nuclear displacements is presented for ground and electronically excited states. The gradient enables the search for equilibrium geometries of extended molecular systems. Laplace transform is used to partition the eigenvalue problem in order to obtain an effective singles eigenvalue problem and adaptive, state-specific local approximations. This leads to an approximation in the energy Lagrangian, which however is shown (by comparison with the corresponding gradient method without Laplace transform) to be of no concern for geometry optimizations. The accuracy of the local approximation is tested and the efficiency of the new code is demonstrated by application calculations devoted to a photocatalytic decarboxylation process of present interest
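
    The Laplace-transform partition used in this family of methods rests on the identity 1/x = ∫₀^∞ exp(-x t) dt for x > 0, which in practice is replaced by a short exponential quadrature over the orbital-energy denominators. The sketch below only verifies the underlying identity numerically; the actual quadrature points and weights are method-specific and not reproduced here.

```python
import numpy as np
from scipy.integrate import quad

# Laplace identity underlying the partition of energy denominators:
# 1/x = integral_0^inf exp(-x t) dt  for x > 0.
# In Laplace-transformed CC2/MP2 methods this integral is approximated by a
# few-point exponential quadrature; here we simply check the identity.
for x in [0.5, 1.0, 4.0, 10.0]:
    val, _ = quad(lambda t: np.exp(-x * t), 0.0, np.inf)
    print(f"x = {x:5.2f}   integral = {val:.6f}   1/x = {1.0 / x:.6f}")
```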

  16. Policy-Based mobility Management for Heterogeneous Networks

    DEFF Research Database (Denmark)

    Mihovska, Albena D.

    2007-01-01

    Next generation communications will be composed of flexible, scalable and context-aware, secure and resilient architectures and technologies that allow full mobility of the user and enable dynamic management policies that ensure end-to-end secure transmission of data and services across heterogeneous networks... access technology (RAT) association, user and flow context transfer, handover decision, and deployment priority. Index Terms— distributed RRM, centralized...

  17. Combining Correlation-Based and Reward-Based Learning in Neural Control for Policy Improvement

    DEFF Research Database (Denmark)

    Manoonpong, Poramate; Kolodziejski, Christoph; Wörgötter, Florentin

    2013-01-01

    Classical conditioning (conventionally modeled as correlation-based learning) and operant conditioning (conventionally modeled as reinforcement learning or reward-based learning) have been found in biological systems. Evidence shows that these two mechanisms strongly involve learning about … associations. Based on these biological findings, we propose a new learning model to achieve successful control policies for artificial systems. This model combines correlation-based learning using input correlation learning (ICO learning) and reward-based learning using continuous actor–critic reinforcement learning (RL), thereby working as a dual learner system. The model performance is evaluated by simulations of a cart-pole system as a dynamic motion control problem and a mobile robot system as a goal-directed behavior control problem. Results show that the model can strongly improve pole balancing control...
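
    As a minimal sketch of the correlation-based half of this scheme, the snippet below applies the input-correlation (ICO) learning rule, in which the weight of a predictive input grows in proportion to the predictive signal times the derivative of the reflex signal. The signals, pulse timings and learning rate are synthetic; the paper's combined ICO plus actor-critic controller is considerably richer.

```python
import numpy as np

# Minimal input-correlation (ICO) learning sketch with synthetic signals:
# weight change ~ (predictive input) * d(reflex signal)/dt.
dt = 0.01
t = np.arange(0.0, 50.0, dt)

def pulse(t0, width, t):
    return ((t >= t0) & (t < t0 + width)).astype(float)

# A "reflex" event repeats every 5 s; a predictive cue starts 0.5 s earlier.
reflex = sum(pulse(5.0 * k + 0.5, 0.3, t) for k in range(1, 10))
cue = sum(pulse(5.0 * k, 0.6, t) for k in range(1, 10))

mu = 0.05          # learning rate (illustrative)
w = 0.0            # weight of the predictive input
dreflex = np.gradient(reflex, dt)
for k in range(len(t)):
    w += mu * cue[k] * dreflex[k] * dt   # ICO update: correlate cue with reflex derivative

print("learned predictive weight:", round(w, 4))
```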

  18. The power and pain of market-based carbon policies

    NARCIS (Netherlands)

    Henderson, B.; Golub, A.; Pambudi, D.; Hertel, T.; Godde, C.; Herrero, M.; Cacho, O.; Gerber, P.

    2018-01-01

    The objectives of this research are to assess the greenhouse gas mitigation potential of carbon policies applied to the ruminant livestock sector [inclusive of the major ruminant species—cattle (Bos Taurus and Bos indicus), sheep (Ovis aries), and goats (Capra hircus)]—with particular emphasis on

  19. A Semantic Based Policy Management Framework for Cloud Computing Environments

    Science.gov (United States)

    Takabi, Hassan

    2013-01-01

    Cloud computing paradigm has gained tremendous momentum and generated intensive interest. Although security issues are delaying its fast adoption, cloud computing is an unstoppable force and we need to provide security mechanisms to ensure its secure adoption. In this dissertation, we mainly focus on issues related to policy management and access…

  20. A novel process for textured thick film YBa2Cu3Oy coated conductors based on a constitutional gradients principle

    International Nuclear Information System (INIS)

    Reddy, E Sudhakar; Tarka, M; Noudem, J G; Goodilin, E A; Schmitz, G J

    2005-01-01

    A new method for the processing of textured YBa2Cu3Oy (Y123) thick film stripes on metallic tapes is discussed. The process involves the texturing of Y123 grains by a localized directional solidification method by creating constitutional gradients along the width of the precursor Y2BaCuO5 (Y211) stripe during an infiltration and growth process. The differences in the solidification temperatures of different rare earth 123 compounds were utilized to generate the constitutional gradients. The sample configuration involves printed lines of light (Nd) and heavy (Yb) rare earth compounds on either side of an airbrushed Y211 stripe underneath a liquid phase (barium cuprates) layer. The higher peritectic temperature (Tp) Nd regions serve as nucleating sites for Y123 grains nucleated in the adjacent Y211 stripes and the constitutional gradients produced due to the diffusion of respective rare earth ions between the Nd and Yb regions, typically of 200 K cm-1 in the region, induce a driving force for the directional growth of the nucleated grains. The solidification is analogous to that in a typical Bridgman furnace in applied high temperature gradients. The process, being independent of growth rate parameter and texture of the underlying substrate, is suitable for the fabrication of long length thick film conductors by a wind and react process in simple box type furnaces.

  1. On computing quadrature-based bounds for the A-norm of the error in conjugate gradients

    Czech Academy of Sciences Publication Activity Database

    Meurant, G.; Tichý, Petr

    2013-01-01

    Roč. 62, č. 2 (2013), s. 163-191 ISSN 1017-1398 R&D Projects: GA AV ČR IAA100300802 Institutional research plan: CEZ:AV0Z10300504 Keywords : conjugate gradients * norm of the error * bounds for the error norm Subject RIV: BA - General Mathematics Impact factor: 1.005, year: 2013

  2. On computing quadrature-based bounds for the A-norm of the error in conjugate gradients

    Czech Academy of Sciences Publication Activity Database

    Meurant, G.; Tichý, Petr

    2013-01-01

    Roč. 62, č. 2 (2013), s. 163-191 ISSN 1017-1398 R&D Projects: GA AV ČR IAA100300802 Institutional research plan: CEZ:AV0Z10300504 Keywords : conjugate gradients * norm of the error * bounds for the error norm Subject RIV: BA - General Mathematics Impact factor: 1.005, year: 2013

  3. Bryophyte and vascular plant responses to base-richness and water level gradients in Western Carpathian Sphagnum-rich mires

    Czech Academy of Sciences Publication Activity Database

    Hájková, Petra; Hájek, Michal

    2004-01-01

    Roč. 39, č. 4 (2004), s. 335-351 ISSN 1211-9520 R&D Projects: GA ČR(CZ) GA206/02/0568 Institutional research plan: CEZ:AV0Z6005908 Keywords : fen * poor-rich gradient * water table Subject RIV: EF - Botanics Impact factor: 0.968, year: 2004

  4. Monitoring of the spatio-temporal change in the interplate coupling at northeastern Japan subduction zone based on the spatial gradients of surface velocity field

    Science.gov (United States)

    Iinuma, Takeshi

    2018-04-01

    A monitoring method to grasp the spatio-temporal change in interplate coupling in a subduction zone, based on the spatial gradients of surface displacement rate fields, is proposed. I estimated the spatio-temporal change in interplate coupling along the plate boundary in northeastern (NE) Japan by applying the proposed method to surface displacement rates based on global positioning system observations. The gradient of the surface velocities is calculated in each swath configured along the direction normal to the Japan Trench, for time windows of 0.5, 1, 2, 3 and 5 yr shifted by one week over the period 1997-2016. The gradient of the horizontal velocities is negative and has a large magnitude when the interplate coupling at the shallow part (less than approximately 50 km in depth) beneath the profile is strong, and the sign of the gradient of the vertical velocity is sensitive to the existence of coupling at the deep part (greater than approximately 50 km in depth). The trench-parallel variation of the spatial gradients of a displacement rate field clearly corresponds to the trench-parallel variation of the amplitude of the interplate coupling on the plate interface, as well as to the rupture areas of previous interplate earthquakes. Temporal changes in the trench-parallel variation of the spatial gradient of the displacement rate correspond to strengthening or weakening of the interplate coupling. The interplate coupling state can thus be monitored, to some extent, by calculating the spatial gradients of the surface displacement rate field, without performing inversion analyses that apply constraint conditions which sometimes cause over- and/or underestimation in areas of limited spatial resolution far from the observation network. The results of the calculation confirm known interplate events in the NE Japan subduction zone, such as the post-seismic slip of the 2003 M8.0 Tokachi-oki and 2005 M7.2 Miyagi
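
    A toy illustration of the basic quantity used here: the along-profile gradient of horizontal surface velocity in one trench-normal swath, computed with numpy.gradient. The profile, velocity magnitudes and decay scale are synthetic assumptions; the study works with real GPS velocities, many swaths and several time-window lengths.

```python
import numpy as np

# Synthetic trench-normal swath: distance from the trench and a landward-decaying
# horizontal velocity (mm/yr), mimicking strong coupling on the shallow interface.
distance_km = np.linspace(0.0, 300.0, 31)
v_horizontal = 40.0 * np.exp(-distance_km / 120.0)

dv_dx = np.gradient(v_horizontal, distance_km)      # (mm/yr) per km along the profile
mean_gradient = dv_dx.mean()

# Per the abstract, a strongly negative horizontal-velocity gradient along the swath
# is read as strong coupling on the shallow (< ~50 km depth) part of the interface.
print(f"mean horizontal velocity gradient: {mean_gradient:.3f} (mm/yr)/km")
```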

  5. Persistent misunderstandings about evidence-based (sorry: informed!) policy-making.

    Science.gov (United States)

    Bédard, Pierre-Olivier; Ouimet, Mathieu

    2016-01-01

    The field of research on knowledge mobilization and evidence-informed policy-making has seen enduring debates related to various fundamental assumptions such as the definition of 'evidence', the relative validity of various research methods, the actual role of evidence to inform policy-making, etc. In many cases, these discussions serve a useful purpose, but they also stem from serious disagreement on methodological and epistemological issues. This essay reviews the rationale for evidence-informed policy-making by examining some of the common claims made about the aims and practices of this perspective on public policy. Supplementing the existing justifications for evidence-based policy making, we argue in favor of a greater inclusion of research evidence in the policy process but in a structured fashion, based on methodological considerations. In this respect, we present an overview of the intricate relation between policy questions and appropriate research designs. By closely examining the relation between research questions and research designs, we claim that the usual points of disagreement are mitigated. For instance, when focusing on the variety of research designs that can answer a range of policy questions, the common critical claim about 'RCT-based policy-making' seems to lose some, if not all of its grip.

  6. Future nuclear energy policy based on the Broad Outline of Nuclear Energy Policy

    International Nuclear Information System (INIS)

    Saito, Shinzo

    2006-01-01

    The Broad Outline of Nuclear Energy Policy, covering about ten years, was determined by a Cabinet meeting of Japan. Nuclear power plant safety and regulation, nuclear waste management, nuclear power production, and nuclear power research and development were discussed. It determined that the 3 nuclear power plants currently under construction should be completed, and that about 10 more plants will be built so that nuclear power provides 30 to 40% of Japan's electricity generation after 2030. The FBR will be operated until 2050. The nuclear fuel cycle system will continue to be used. Nuclear power plant safety and nuclear waste management are so important for the nuclear industry that these subjects were discussed in detail. In order to understand and use quantum beam technology, advanced institutions, equipment, and networks among scientists, industry and the public should be planned and put to practical use. (S.Y.)

  7. Better spent. Towards an 'Evidence-based' environmental policy

    International Nuclear Information System (INIS)

    2003-11-01

    The aim of the 'Groene Rekenkamer' (Green Audit Office) is to examine and review the scientific value of white papers, bills and other proposals on different environmental subjects. Attention will also be paid to the suitability, effectiveness and efficiency of laws and regulations with respect to public health and environmental targets, and to analyzing and clarifying possible risks of the implementation of policy (cost-benefit analysis).

  8. Knickzone Extraction Tool (KET – A new ArcGIS toolset for automatic extraction of knickzones from a DEM based on multi-scale stream gradients

    Directory of Open Access Journals (Sweden)

    Zahra Tuba

    2017-04-01

    Full Text Available Extraction of knickpoints or knickzones from a Digital Elevation Model (DEM) has gained immense significance owing to the increasing implications of knickzones for landform development. However, existing methods for knickzone extraction tend to be subjective or require time-intensive data processing. This paper describes the proposed Knickzone Extraction Tool (KET), a new raster-based Python script deployed in the form of an ArcGIS toolset that automates the process of knickzone extraction and is both faster and more user-friendly. The KET is based on multi-scale analysis of slope gradients along a river course, where any locally steep segment (knickzone) can be extracted as an anomalously high local gradient. We also conducted a comparative analysis of the KET and other contemporary knickzone identification techniques. The relationship between knickzone distribution and its morphometric characteristics is also examined through a case study of a mountainous watershed in Japan.
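
    A simplified stand-in for the multi-scale gradient idea (not the actual ArcGIS/raster tool): compute the stream gradient over a short and a long window along a synthetic longitudinal profile and flag samples where the fine-scale gradient is anomalously steep relative to the broad-scale trend. The profile shape, window sizes and threshold are invented for illustration.

```python
import numpy as np

def window_gradient(elev, dist, half_width):
    """Average downstream slope (m/km) over a centred window of +/- half_width samples."""
    g = np.empty_like(elev)
    for i in range(len(elev)):
        lo, hi = max(0, i - half_width), min(len(elev) - 1, i + half_width)
        g[i] = (elev[lo] - elev[hi]) / (dist[hi] - dist[lo])
    return g

# Synthetic longitudinal river profile (km, m) with one over-steepened reach (a knickzone).
dist = np.linspace(0.0, 20.0, 401)
elev = 800.0 - 15.0 * dist
elev -= np.where((dist > 8.0) & (dist < 9.0), 60.0 * (dist - 8.0),
                 np.where(dist >= 9.0, 60.0, 0.0))

local = window_gradient(elev, dist, 5)      # fine-scale gradient (~+/-0.25 km)
regional = window_gradient(elev, dist, 60)  # broad-scale gradient (~+/-3 km)
anomaly = local - regional
knickzone = anomaly > 20.0                  # threshold in m/km, purely illustrative

print("knickzone between km", round(dist[knickzone].min(), 1),
      "and km", round(dist[knickzone].max(), 1))
```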

  9. Validation for 2D/3D registration II: The comparison of intensity- and gradient-based merit functions using a new gold standard data set

    International Nuclear Information System (INIS)

    Gendrin, Christelle; Markelj, Primoz; Pawiro, Supriyanto Ardjo; Spoerk, Jakob; Bloch, Christoph; Weber, Christoph; Figl, Michael; Bergmann, Helmar; Birkfellner, Wolfgang; Likar, Bostjan; Pernus, Franjo

    2011-01-01

    Purpose: A new gold standard data set for validation of 2D/3D registration based on a porcine cadaver head with attached fiducial markers was presented in the first part of this article. The advantage of this new phantom is the large amount of soft tissue, which simulates realistic conditions for registration. This article tests the performance of intensity- and gradient-based algorithms for 2D/3D registration using the new phantom data set. Methods: Intensity-based methods with four merit functions, namely, cross correlation, rank correlation, correlation ratio, and mutual information (MI), and two gradient-based algorithms, the backprojection gradient-based (BGB) registration method and the reconstruction gradient-based (RGB) registration method, were compared. Four volumes consisting of CBCT with two fields of view, 64 slice multidetector CT, and magnetic resonance-T1 weighted images were registered to a pair of kV x-ray images and a pair of MV images. A standardized evaluation methodology was employed. Targets were evenly spread over the volumes and 250 starting positions of the 3D volumes with initial displacements of up to 25 mm from the gold standard position were calculated. After the registration, the displacement from the gold standard was retrieved and the root mean square (RMS), mean, and standard deviation mean target registration errors (mTREs) over 250 registrations were derived. Additionally, the following merit properties were computed: Accuracy, capture range, number of minima, risk of nonconvergence, and distinctiveness of optimum for better comparison of the robustness of each merit. Results: Among the merit functions used for the intensity-based method, MI reached the best accuracy with an RMS mTRE down to 1.30 mm. Furthermore, it was the only merit function that could accurately register the CT to the kV x rays with the presence of tissue deformation. As for the gradient-based methods, BGB and RGB methods achieved subvoxel accuracy (RMS m
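
    To make the intensity-based merit functions concrete, the sketch below computes histogram-based mutual information, the merit that performed best in this comparison, between two images. The arrays are synthetic stand-ins for the DRR and the kV projection; no pose optimization or real registration is performed here.

```python
import numpy as np

def mutual_information(img_a, img_b, bins=32):
    """Joint-histogram mutual information between two equally sized images (in nats)."""
    joint, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(), bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))

# Synthetic "fixed" and "moving" images related by an intensity mapping plus noise.
rng = np.random.default_rng(2)
fixed = rng.normal(size=(128, 128))
moving_aligned = 2.0 * fixed + 0.2 * rng.normal(size=fixed.shape)
moving_shifted = np.roll(moving_aligned, 15, axis=1)   # misaligned copy

print("MI aligned   :", round(mutual_information(fixed, moving_aligned), 3))
print("MI misaligned:", round(mutual_information(fixed, moving_shifted), 3))
```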

  10. Mediated Ciphertext-Policy Attribute-Based Encryption and Its Application

    NARCIS (Netherlands)

    Ibraimi, L.; Petkovic, M.; Nikova, S.I.; Hartel, Pieter H.; Jonker, Willem; Youm, Heung Youl; Yung, Moti

    2009-01-01

    In Ciphertext-Policy Attribute-Based Encryption (CP-ABE), a user secret key is associated with a set of attributes, and the ciphertext is associated with an access policy over attributes. The user can decrypt the ciphertext if and only if the attribute set of his secret key satisfies the access

  11. Using an Online Tool to Support School-Based ICT Policy Planning in Primary Education

    Science.gov (United States)

    Vanderlinde, R.; Van Braak, J.; Tondeur, J.

    2010-01-01

    An important step towards the successful integration of information and communication technology (ICT) in schools is to facilitate their capacity to develop a school-based ICT policy resulting in an ICT policy plan. Such a plan can be defined as a school document containing strategic and operational elements concerning the integration of ICT in…

  12. Data-Based Decision Making at the Policy, Research, and Practice Levels

    NARCIS (Netherlands)

    Schildkamp, Kim; Ebbeler, J.

    2015-01-01

    Data-based decision making (DBDM) can lead to school improvement. However, schools struggle with the implementation of DBDM. In this symposium, we will discuss research and the implementation of DBDM at the national and regional policy level and the classroom level. We will discuss policy issues

  13. The process of developing policy based on global environmental risk assessment

    International Nuclear Information System (INIS)

    Fisk, D.J.

    1995-01-01

    A brief presentation is given on developing policy based on a global environmental risk assessment. The author looks at the global warming issue as if it were a formal problem in risk assessment. He uses that framework to make one or two suggestions as to how the interaction of policy and research might evolve as the climate convention progresses

  14. An intersectionality-based policy analysis framework: critical reflections on a methodology for advancing equity.

    Science.gov (United States)

    Hankivsky, Olena; Grace, Daniel; Hunting, Gemma; Giesbrecht, Melissa; Fridkin, Alycia; Rudrum, Sarah; Ferlatte, Olivier; Clark, Natalie

    2014-12-10

    In the field of health, numerous frameworks have emerged that advance understandings of the differential impacts of health policies to produce inclusive and socially just health outcomes. In this paper, we present the development of an important contribution to these efforts - an Intersectionality-Based Policy Analysis (IBPA) Framework. Developed over the course of two years in consultation with key stakeholders and drawing on best and promising practices of other equity-informed approaches, this participatory and iterative IBPA Framework provides guidance and direction for researchers, civil society, public health professionals and policy actors seeking to address the challenges of health inequities across diverse populations. Importantly, we present the application of the IBPA Framework in seven priority health-related policy case studies. The analysis of each case study is focused on explaining how IBPA: 1) provides an innovative structure for critical policy analysis; 2) captures the different dimensions of policy contexts including history, politics, everyday lived experiences, diverse knowledges and intersecting social locations; and 3) generates transformative insights, knowledge, policy solutions and actions that cannot be gleaned from other equity-focused policy frameworks. The aim of this paper is to inspire a range of policy actors to recognize the potential of IBPA to foreground the complex contexts of health and social problems, and ultimately to transform how policy analysis is undertaken.

  15. Proper orthogonal decomposition-based estimations of the flow field from particle image velocimetry wall-gradient measurements in the backward-facing step flow

    International Nuclear Information System (INIS)

    Nguyen, Thien Duy; Wells, John Craig; Mokhasi, Paritosh; Rempfer, Dietmar

    2010-01-01

    In this paper, particle image velocimetry (PIV) results from the recirculation zone of a backward-facing step flow, of which the Reynolds number is 2800 based on bulk velocity upstream of the step and step height (h = 16.5 mm), are used to demonstrate the capability of proper orthogonal decomposition (POD)-based measurement models. Three-component PIV velocity fields are decomposed by POD into a set of spatial basis functions and a set of temporal coefficients. The measurement models are built to relate the low-order POD coefficients, determined from an ensemble of 1050 PIV fields by the 'snapshot' method, to the time-resolved wall gradients, measured by a near-wall measurement technique called stereo interfacial PIV. These models are evaluated in terms of reconstruction and prediction of the low-order temporal POD coefficients of the velocity fields. In order to determine the estimation coefficients of the measurement models, linear stochastic estimation (LSE), quadratic stochastic estimation (QSE), principal component regression (PCR) and kernel ridge regression (KRR) are applied. We denote such approaches as LSE-POD, QSE-POD, PCR-POD and KRR-POD. In addition to comparing the accuracy of measurement models, we introduce multi-time POD-based estimations in which past and future information of the wall-gradient events is used separately or combined. The results show that the multi-time estimation approaches can improve the prediction process. Among these approaches, the proposed multi-time KRR-POD estimation with an optimized window of past wall-gradient information yields the best prediction. Such a multi-time KRR-POD approach offers a useful tool for real-time flow estimation of the velocity field based on wall-gradient data
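
    A compact sketch of the simplest variant described above (snapshot POD followed by LSE-POD): compute POD coefficients from a snapshot matrix via the SVD and fit a linear least-squares map from simultaneous wall-gradient "sensor" signals to the leading coefficients. All arrays are synthetic; the kernel ridge and quadratic variants, and the multi-time windows, are not shown.

```python
import numpy as np

rng = np.random.default_rng(3)
n_snapshots, n_points, n_sensors = 300, 500, 8

# Synthetic velocity snapshots (rows: snapshots) and wall-gradient "sensor" signals that
# are linearly related to the flow plus noise (stand-ins for PIV / stereo interfacial PIV).
latent = rng.normal(size=(n_snapshots, 3))
spatial = rng.normal(size=(3, n_points))
snapshots = latent @ spatial + 0.05 * rng.normal(size=(n_snapshots, n_points))
wall_gradients = latent @ rng.normal(size=(3, n_sensors)) + 0.05 * rng.normal(size=(n_snapshots, n_sensors))

# Snapshot POD via SVD of the mean-subtracted data; temporal coefficients a = U * S.
fluct = snapshots - snapshots.mean(axis=0)
U, S, Vt = np.linalg.svd(fluct, full_matrices=False)
a = U[:, :3] * S[:3]                       # leading POD coefficients

# LSE-POD: least-squares linear map from wall-gradient signals to POD coefficients.
train, test = slice(0, 200), slice(200, None)
coef, *_ = np.linalg.lstsq(wall_gradients[train], a[train], rcond=None)
a_hat = wall_gradients[test] @ coef

err = np.linalg.norm(a_hat - a[test]) / np.linalg.norm(a[test])
print("relative estimation error of leading POD coefficients:", round(err, 3))
```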

  16. Research-Based Knowledge: Researchers' Contribution to Evidence-Based Practice and Policy Making in Career Guidance

    Science.gov (United States)

    Haug, Erik Hagaseth; Plant, Peter

    2016-01-01

    To present evidence for the outcomes of career guidance is increasingly seen as pivotal for a further professionalization of policy making and service provision. This paper puts an emphasis on researchers' contribution to evidence-based practice and policy making in career guidance. We argue for a broader and more pluralistic research strategy to…

  17. A controlled community-based trial to promote smoke-free policy in rural communities.

    Science.gov (United States)

    Hahn, Ellen J; Rayens, Mary Kay; Adkins, Sarah; Begley, Kathy; York, Nancy

    2015-01-01

    Rural, tobacco-growing areas are disproportionately affected by tobacco use, secondhand smoke, and weak tobacco control policies. The purpose was to test the effects of a stage-specific, tailored policy-focused intervention on readiness for smoke-free policy, and policy outcomes in rural underserved communities. A controlled community-based trial including 37 rural counties. Data were collected annually with community advocates (n = 330) and elected officials (n = 158) in 19 intervention counties and 18 comparison counties over 5 years (average response rate = 68%). Intervention communities received policy development strategies from community advisors tailored to their stage of readiness and designed to build capacity, build demand, and translate and disseminate science. Policy outcomes were tracked over 5 years. Communities receiving the stage-specific, tailored intervention had higher overall community readiness scores and better policy outcomes than the comparison counties, controlling for county-level smoking rate, population size, and education. Nearly one-third of the intervention counties adopted smoke-free laws covering restaurants, bars, and all workplaces compared to none of the comparison counties. The stage-specific, tailored policy-focused intervention acted as a value-added resource to local smoke-free campaigns by promoting readiness for policy, as well as actual policy change in rural communities. Although actual policy change and percent covered by the policies were modest, these areas need additional resources and efforts to build capacity, build demand, and translate and disseminate science in order to accelerate smoke-free policy change and reduce the enormous toll from tobacco in these high-risk communities. © 2014 National Rural Health Association.

  18. Solution to Two-Dimensional Steady Inverse Heat Transfer Problems with Interior Heat Source Based on the Conjugate Gradient Method

    Directory of Open Access Journals (Sweden)

    Shoubin Wang

    2017-01-01

    Full Text Available The compound variable inverse problem, which comprises the boundary temperature distribution and the surface convective heat transfer coefficient of a two-dimensional steady heat transfer system with an inner heat source, is studied in this paper using the conjugate gradient method. The introduction of a complex variable to compute the gradient matrix of the objective function yields more precise inversion results. The boundary element method is applied to compute the temperatures at discrete points in the forward problem. The influence of measurement error and of the number of measurement points on the inversion results, including the zero-error case, is discussed and compared with the L-MM method. Example calculations and analysis show that the method applied in this paper retains good effectiveness and accuracy even when measurement error exists and the number of boundary measurement points is reduced. The comparison indicates that the influence of error on the inversion solution can be minimized effectively using this method.
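
    The "complex variable" gradient mentioned above appears to refer to the standard complex-step differentiation trick, in which a tiny imaginary perturbation gives derivatives free of subtractive cancellation. The sketch below demonstrates that general technique on a toy objective; it is not the paper's specific inverse-heat-transfer implementation.

```python
import numpy as np

def complex_step_grad(f, x, h=1e-20):
    """Complex-step derivative: f'(x_i) ~ Im(f(x + i*h*e_i)) / h.
    No subtractive cancellation, so h can be taken extremely small."""
    x = np.asarray(x, dtype=complex)
    g = np.empty(x.size)
    for i in range(x.size):
        xp = x.copy()
        xp[i] += 1j * h
        g[i] = f(xp).imag / h
    return g

# Toy objective standing in for the inverse-problem misfit (illustrative only).
def objective(q):
    return np.sum(np.sin(q) * q**2)

q0 = np.array([0.3, 1.2, 2.0])
print("complex-step gradient:", complex_step_grad(objective, q0))
print("analytic gradient    :", np.cos(q0) * q0**2 + 2 * q0 * np.sin(q0))
```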

  19. Numerical solution to a multi-dimensional linear inverse heat conduction problem by a splitting-based conjugate gradient method

    International Nuclear Information System (INIS)

    Dinh Nho Hao; Nguyen Trung Thanh; Sahli, Hichem

    2008-01-01

    In this paper we consider a multi-dimensional inverse heat conduction problem with time-dependent coefficients in a box, which is well known to be severely ill-posed, by a variational method. The gradient of the functional to be minimized is obtained with the aid of an adjoint problem, and the conjugate gradient method with a stopping rule is then applied to this ill-posed optimization problem. To enhance the stability and accuracy of the numerical solution we apply this scheme to the discretized inverse problem rather than to the continuous one. The difficulties with the large dimensions of the discretized problems are overcome by a splitting method which only requires the solution of easy-to-solve one-dimensional problems. The numerical results provided by our method are very good and the techniques seem to be very promising.

  20. Fast conjugate phase image reconstruction based on a Chebyshev approximation to correct for B0 field inhomogeneity and concomitant gradients.

    Science.gov (United States)

    Chen, Weitian; Sica, Christopher T; Meyer, Craig H

    2008-11-01

    Off-resonance effects can cause image blurring in spiral scanning and various forms of image degradation in other MRI methods. Off-resonance effects can be caused by both B0 inhomogeneity and concomitant gradient fields. Previously developed off-resonance correction methods focus on the correction of a single source of off-resonance. This work introduces a computationally efficient method of correcting for B0 inhomogeneity and concomitant gradients simultaneously. The method is a fast alternative to conjugate phase reconstruction, with the off-resonance phase term approximated by Chebyshev polynomials. The proposed algorithm is well suited for semiautomatic off-resonance correction, which works well even with an inaccurate or low-resolution field map. The proposed algorithm is demonstrated using phantom and in vivo data sets acquired by spiral scanning. Semiautomatic off-resonance correction alone is shown to provide a moderate amount of correction for concomitant gradient field effects, in addition to B0 imhomogeneity effects. However, better correction is provided by the proposed combined method. The best results were produced using the semiautomatic version of the proposed combined method.
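
    The core idea, replacing the per-pixel off-resonance phase factor exp(-i*omega*tau) by a low-order polynomial in the field-map value omega, can be sketched with a Chebyshev fit of the real and imaginary parts. The field-map range, time offset and polynomial degree below are illustrative assumptions; this is not the paper's reconstruction pipeline.

```python
import numpy as np
from numpy.polynomial import Chebyshev

# Approximate the off-resonance phase term exp(-i*omega*tau) by a low-order Chebyshev
# polynomial in the field-map value omega (illustrative ranges and degree).
tau = 5e-3                                                     # readout time offset (s), assumed
omega = np.linspace(-2 * np.pi * 100, 2 * np.pi * 100, 401)    # rad/s, +/-100 Hz field map range
target = np.exp(-1j * omega * tau)

deg = 8
fit_re = Chebyshev.fit(omega, target.real, deg)   # domain is mapped to [-1, 1] internally
fit_im = Chebyshev.fit(omega, target.imag, deg)
approx = fit_re(omega) + 1j * fit_im(omega)

print("max |error| of the degree-8 Chebyshev fit:", np.max(np.abs(approx - target)))
```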

  1. Dexamethasone levels and base-to-apex concentration gradients in the scala tympani perilymph after intracochlear delivery in the guinea pig.

    Science.gov (United States)

    Hahn, Hartmut; Salt, Alec N; Biegner, Thorsten; Kammerer, Bernd; Delabar, Ursular; Hartsock, Jared J; Plontke, Stefan K

    2012-06-01

    To determine whether intracochlearly applied dexamethasone will lead to better control of drug levels, higher peak concentrations, and lower base-to-apex concentration gradients in the scala tympani (ST) of the guinea pig than after intratympanic (round window [RW]) application. Local application of drugs to the RW results in substantial variation of intracochlear drug levels and significant base-to-apex concentration gradients in ST. Two microliters of dexamethasone-phosphate (10 mg/ml) were injected into ST either through the RW membrane, which was covered with 1% sodium hyaluronate gel or through a cochleostomy with a fluid tight seal of the micropipette. Perilymph was sequentially sampled from the apex at a single time point for each animal, at 20, 80, or 200 min after the injection ended. Results were mathematically interpreted by means of an established computer model and compared with previous experiments performed by our group with the same experimental techniques but using intratympanic applications. Single intracochlear injections of 20 minutes resulted in approximately 10 times higher peak concentrations (on average) than 2 to 3 hours of intratympanic application to the RW niche. Intracochlear drug levels were less variable and could be measured for over 220 minutes. Concentration gradients along the scala tympani were less pronounced. The remaining variability in intracochlear drug levels was attributable to perilymph and drug leak from the injection site. With significantly higher, less variable drug levels and smaller base-to-apex concentration gradients, intracochlear applications have advantages to intratympanic injections. For further development of this technique, it is of importance to control leaks of perilymph and drug from the injection site and to evaluate its clinical feasibility and associated risks.

  2. Increasing Use of Research Findings in Improving Evidence-Based Health Policy at the National Level

    Directory of Open Access Journals (Sweden)

    Meiwita Budiharsana

    2017-11-01

    Full Text Available In February 2016, the Minister of Health decided to increase the use of research findings in improving the quality of the national health policy and planning. The Ministry of Health has instructed the National Institute of Health Research and Development or NIHRD to play a stronger role of monitoring and evaluating all health programs, because “their opinion and research findings should be the basis for changes in national health policies and planning”. Compared to the past, the Ministry of Health has increased the research budget for evidence-based research tremendously. However, there is a gap between the information needs of program and policy-makers and the information offered by researchers. A close dialogue is needed between the users (program managers, policy makers and planners and the suppliers (researchers and evaluators to ensure that the evidence-based supplied by research is useful for programs, planning and health policy.

  3. Tobacco plain packaging: Evidence based policy or public health advocacy?

    Science.gov (United States)

    McKeganey, Neil; Russell, Christopher

    2015-06-01

    In December 2012, Australia became the first country to require all tobacco products be sold solely in standardised or 'plain' packaging, bereft of the manufacturers' trademarked branding and colours, although retaining large graphic and text health warnings. Following the publication of Sir Cyril Chantler's review of the evidence on the effects of plain tobacco packaging, the Ministers of the United Kingdom Parliament voted in March 2015 to implement similar legislation. Support for plain packaging derives from the belief that tobacco products sold in plain packs have reduced appeal and so are more likely to deter young people and non-smokers from starting tobacco use, and more likely to motivate smokers to quit and stay quit. This article considers why support for the plain packaging policy has grown among tobacco control researchers, public health advocates and government ministers, and reviews Australian survey data that speak to the possible introductory effect of plain packaging on smoking prevalence within Australia. The article concludes by emphasising the need for more detailed research to be undertaken before judging the capacity of the plain packaging policy to deliver the multitude of positive effects that have been claimed by its most ardent supporters. Copyright © 2015 Elsevier B.V. All rights reserved.

  4. Research on taxi software policy based on big data

    Directory of Open Access Journals (Sweden)

    Feng Daoming

    2017-01-01

    Full Text Available Through big data analysis, a large number of factors affecting taxi supply and demand are analysed statistically to establish an index set for taxi hailing. A mathematical model is then built to measure how well taxi resources match demand at different locations and times, and is combined with intelligent dispatch to address the hot social issue of the difficulty of hailing a taxi. Taking Shanghai as an example, three areas are studied: Central Park, Lu Xun Park and Century Park. Passenger demand and the number of vacant (empty-running) taxis are extracted from the big data of the “sky drops fast travel intelligence platform”. A supply-and-demand indicator matrix is constructed to obtain the degree of matching between supply and demand in each region. The relevant policies of each taxi company are then examined through big data: cluster analysis is used to identify the factors that play a decisive role in three respects, and principal component analysis is used to compare the advantages and disadvantages of the existing companies' programmes. Finally, reasonable taxi-software policies are developed on the basis of this research.

  5. Priority Queue Based Reactive Buffer Management Policy for Delay Tolerant Network under City Based Environments.

    Directory of Open Access Journals (Sweden)

    Qaisar Ayub

    Full Text Available Delay Tolerant Network (DTN) multi-copy routing protocols are privileged to create and transmit multiple copies of each message, which causes congestion and the dropping of some messages. This process is known as reactive drop, because messages are dropped reactively to overcome buffer overflows. Existing reactive buffer management policies apply a single metric to drop source, relay and destined messages. The choice of which message to drop is therefore questionable, because each message, whether source, relay or destined, may have consumed a different amount of network resources. Similarly, DTN includes a time-to-live (ttl) parameter that defines the lifetime of a message; when the ttl expires, the message is automatically removed from relay nodes. However, the time-to-live (ttl) does not apply to messages that have already reached their destinations. Moreover, nodes keep replicating messages until the ttl expires, even though a large number of copies may already have been dispersed. In this paper, we propose a Priority Queue Based Reactive Buffer Management Policy (PQB-R) for DTN under city-based environments. PQB-R classifies buffered messages into source, relay and destined queues, and a separate drop metric is applied to each queue. The experimental results show that the proposed PQB-R reduces the number of message transmissions and message drops and increases the delivery ratio.

  6. Priority Queue Based Reactive Buffer Management Policy for Delay Tolerant Network under City Based Environments.

    Science.gov (United States)

    Ayub, Qaisar; Ngadi, Asri; Rashid, Sulma; Habib, Hafiz Adnan

    2018-01-01

    Delay Tolerant Network (DTN) multi-copy routing protocols are privileged to create and transmit multiple copies of each message, which causes congestion and the dropping of some messages. This process is known as reactive drop, because messages are dropped reactively to overcome buffer overflows. Existing reactive buffer management policies apply a single metric to drop source, relay and destined messages. The choice of which message to drop is therefore questionable, because each message, whether source, relay or destined, may have consumed a different amount of network resources. Similarly, DTN includes a time-to-live (ttl) parameter that defines the lifetime of a message; when the ttl expires, the message is automatically removed from relay nodes. However, the time-to-live (ttl) does not apply to messages that have already reached their destinations. Moreover, nodes keep replicating messages until the ttl expires, even though a large number of copies may already have been dispersed. In this paper, we propose a Priority Queue Based Reactive Buffer Management Policy (PQB-R) for DTN under city-based environments. PQB-R classifies buffered messages into source, relay and destined queues, and a separate drop metric is applied to each queue. The experimental results show that the proposed PQB-R reduces the number of message transmissions and message drops and increases the delivery ratio.
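
    To make the queue-and-metric idea concrete, the sketch below shows one possible shape of such a reactive drop decision. The per-queue metrics (message size, hop count, remaining ttl) and the ordering of the queues are illustrative assumptions, not the metrics defined in the paper.

```python
from dataclasses import dataclass

# Minimal PQB-R-style sketch (assumed metrics): buffered messages are split into
# destined / relay / source queues and each queue applies its own drop criterion
# when the buffer overflows.
@dataclass
class Msg:
    msg_id: str
    size: int        # bytes
    hops: int        # number of times this copy has been forwarded
    ttl_left: float  # remaining lifetime (s)
    role: str        # "source", "relay" or "destined"

def pick_victim(buffer: list) -> Msg:
    queues = {r: [m for m in buffer if m.role == r]
              for r in ("destined", "relay", "source")}
    if queues["destined"]:                                  # delivered copies first
        return max(queues["destined"], key=lambda m: m.size)
    if queues["relay"]:                                     # then the most-replicated relay copy
        return max(queues["relay"], key=lambda m: m.hops)
    return min(queues["source"], key=lambda m: m.ttl_left)  # finally the oldest own message

buffer = [Msg("a", 512, 3, 100.0, "relay"), Msg("b", 256, 1, 40.0, "source")]
print(pick_victim(buffer).msg_id)   # -> "a"
```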

  7. Rationality versus reality: the challenges of evidence-based decision making for health policy makers.

    Science.gov (United States)

    McCaughey, Deirdre; Bruning, Nealia S

    2010-05-26

    Current healthcare systems have extended the evidence-based medicine (EBM) approach to health policy and delivery decisions, such as access-to-care, healthcare funding and health program continuance, through attempts to integrate valid and reliable evidence into the decision making process. These policy decisions have major impacts on society and have high personal and financial costs associated with those decisions. Decision models such as these function under a shared assumption of rational choice and utility maximization in the decision-making process. We contend that health policy decision makers are generally unable to attain the basic goals of evidence-based decision making (EBDM) and evidence-based policy making (EBPM) because humans make decisions with their naturally limited, faulty, and biased decision-making processes. A cognitive information processing framework is presented to support this argument, and subtle cognitive processing mechanisms are introduced to support the focal thesis: health policy makers' decisions are influenced by the subjective manner in which they individually process decision-relevant information rather than on the objective merits of the evidence alone. As such, subsequent health policy decisions do not necessarily achieve the goals of evidence-based policy making, such as maximizing health outcomes for society based on valid and reliable research evidence. In this era of increasing adoption of evidence-based healthcare models, the rational choice, utility maximizing assumptions in EBDM and EBPM, must be critically evaluated to ensure effective and high-quality health policy decisions. The cognitive information processing framework presented here will aid health policy decision makers by identifying how their decisions might be subtly influenced by non-rational factors. In this paper, we identify some of the biases and potential intervention points and provide some initial suggestions about how the EBDM/EBPM process can be

  8. Rationality versus reality: the challenges of evidence-based decision making for health policy makers

    Directory of Open Access Journals (Sweden)

    Bruning Nealia S

    2010-05-01

    Full Text Available Abstract Background Current healthcare systems have extended the evidence-based medicine (EBM) approach to health policy and delivery decisions, such as access-to-care, healthcare funding and health program continuance, through attempts to integrate valid and reliable evidence into the decision making process. These policy decisions have major impacts on society and have high personal and financial costs associated with those decisions. Decision models such as these function under a shared assumption of rational choice and utility maximization in the decision-making process. Discussion We contend that health policy decision makers are generally unable to attain the basic goals of evidence-based decision making (EBDM) and evidence-based policy making (EBPM) because humans make decisions with their naturally limited, faulty, and biased decision-making processes. A cognitive information processing framework is presented to support this argument, and subtle cognitive processing mechanisms are introduced to support the focal thesis: health policy makers' decisions are influenced by the subjective manner in which they individually process decision-relevant information rather than on the objective merits of the evidence alone. As such, subsequent health policy decisions do not necessarily achieve the goals of evidence-based policy making, such as maximizing health outcomes for society based on valid and reliable research evidence. Summary In this era of increasing adoption of evidence-based healthcare models, the rational choice, utility maximizing assumptions in EBDM and EBPM, must be critically evaluated to ensure effective and high-quality health policy decisions. The cognitive information processing framework presented here will aid health policy decision makers by identifying how their decisions might be subtly influenced by non-rational factors. In this paper, we identify some of the biases and potential intervention points and provide some initial

  9. Rationality versus reality: the challenges of evidence-based decision making for health policy makers

    Science.gov (United States)

    2010-01-01

    Background Current healthcare systems have extended the evidence-based medicine (EBM) approach to health policy and delivery decisions, such as access-to-care, healthcare funding and health program continuance, through attempts to integrate valid and reliable evidence into the decision making process. These policy decisions have major impacts on society and have high personal and financial costs associated with those decisions. Decision models such as these function under a shared assumption of rational choice and utility maximization in the decision-making process. Discussion We contend that health policy decision makers are generally unable to attain the basic goals of evidence-based decision making (EBDM) and evidence-based policy making (EBPM) because humans make decisions with their naturally limited, faulty, and biased decision-making processes. A cognitive information processing framework is presented to support this argument, and subtle cognitive processing mechanisms are introduced to support the focal thesis: health policy makers' decisions are influenced by the subjective manner in which they individually process decision-relevant information rather than on the objective merits of the evidence alone. As such, subsequent health policy decisions do not necessarily achieve the goals of evidence-based policy making, such as maximizing health outcomes for society based on valid and reliable research evidence. Summary In this era of increasing adoption of evidence-based healthcare models, the rational choice, utility maximizing assumptions in EBDM and EBPM, must be critically evaluated to ensure effective and high-quality health policy decisions. The cognitive information processing framework presented here will aid health policy decision makers by identifying how their decisions might be subtly influenced by non-rational factors. In this paper, we identify some of the biases and potential intervention points and provide some initial suggestions about how the

  10. Imaging disturbance zones ahead of a tunnel by elastic full-waveform inversion: Adjoint gradient based inversion vs. parameter space reduction using a level-set method

    Directory of Open Access Journals (Sweden)

    Andre Lamert

    2018-03-01

    Full Text Available We present and compare two flexible and effective methodologies to predict disturbance zones ahead of underground tunnels by using elastic full-waveform inversion. One methodology uses a linearized, iterative approach based on misfit gradients computed with the adjoint method while the other uses iterative, gradient-free unscented Kalman filtering in conjunction with a level-set representation. Whereas the former does not involve a priori assumptions on the distribution of elastic properties ahead of the tunnel, the latter introduces a massive reduction in the number of explicit model parameters to be inverted for by focusing on the geometric form of potential disturbances and their average elastic properties. Both imaging methodologies are validated through successful reconstructions of simple disturbances. As an application, we consider an elastic multiple disturbance scenario. By using identical synthetic time-domain seismograms as test data, we obtain satisfactory, albeit different, reconstruction results from the two inversion methodologies. The computational costs of both approaches are of the same order of magnitude, with the gradient-based approach showing a slight advantage. The model parameter space reduction approach compensates for this by additionally providing a posteriori estimates of model parameter uncertainty. Keywords: Tunnel seismics, Full waveform inversion, Seismic waves, Level-set method, Adjoint method, Kalman filter
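
    For readers unfamiliar with the first methodology, its core is a simple iterative model update m_{k+1} = m_k - α g_k, where the misfit gradient g_k is supplied by an adjoint simulation. The sketch below shows only that outer loop, with a toy quadratic misfit standing in for the wave-equation solves; it is purely schematic and not the authors' code.

```python
import numpy as np

# Schematic outer loop of a linearized, gradient-based model update. In real FWI
# the gradient comes from correlating forward and adjoint wavefields; here a toy
# quadratic misfit stands in for the wave-equation solves.
def misfit_and_gradient(m, d_obs):
    r = m - d_obs                      # placeholder for "synthetic minus observed data"
    return 0.5 * float(r @ r), r       # misfit value and its gradient

def invert(m0, d_obs, n_iter=25, alpha=0.5):
    m = m0.copy()
    for _ in range(n_iter):
        phi, g = misfit_and_gradient(m, d_obs)
        m -= alpha * g                 # steepest-descent step; real codes add a line search
    return m

d_obs = np.array([2.0, 1.5, 3.0])      # "observed" toy data
print(np.round(invert(np.zeros(3), d_obs), 3))
```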

  11. School-based obesity policy, social capital, and gender differences in weight control behaviors.

    Science.gov (United States)

    Zhu, Ling; Thomas, Breanca

    2013-06-01

    We examined the associations among school-based obesity policies, social capital, and adolescents' self-reported weight control behaviors, focusing on how the collective roles of community and adopted policies affect gender groups differently. We estimated state-level ecologic models using 1-way random effects seemingly unrelated regressions derived from panel data for 43 states from 1991 to 2009, which we obtained from the Centers for Disease Control and Prevention's Youth Risk Behavior Surveillance System. We used multiplicative interaction terms to assess how social capital moderates the effects of school-based obesity policies. School-based obesity policies in active communities were mixed in improving weight control behaviors. They increased both healthy and unhealthy weight control behaviors among boys but did not increase healthy weight control behaviors among girls. Social capital is an important contextual factor that conditions policy effectiveness in large contexts. Heterogeneous behavioral responses are associated with both school-based obesity policies and social capital. Building social capital and developing policy programs to balance outcomes for both gender groups may be challenging in managing childhood obesity.

  12. Realizing IoT service's policy privacy over publish/subscribe-based middleware.

    Science.gov (United States)

    Duan, Li; Zhang, Yang; Chen, Shiping; Wang, Shiyao; Cheng, Bo; Chen, Junliang

    2016-01-01

    The publish/subscribe paradigm makes IoT service collaborations more scalable and flexible, due to the space, time and control decoupling of event producers and consumers. Thus, the paradigm can be used to establish large-scale IoT service communication infrastructures such as Supervisory Control and Data Acquisition systems. However, preserving IoT service's policy privacy is difficult in this paradigm, because a classical publisher has little control of its own event after being published; and a subscriber has to accept all the events from the subscribed event type with no choice. Few existing publish/subscribe middleware have built-in mechanisms to address the above issues. In this paper, we present a novel access control framework, which is capable of preserving IoT service's policy privacy. In particular, we adopt the publish/subscribe paradigm as the IoT service communication infrastructure to facilitate the protection of IoT services policy privacy. The key idea in our policy-privacy solution is using a two-layer cooperating method to match bi-directional privacy control requirements: (a) data layer for protecting IoT events; and (b) application layer for preserving the privacy of service policy. Furthermore, the anonymous-set-based principle is adopted to realize the functionalities of the framework, including policy embedding and policy encoding as well as policy matching. Our security analysis shows that the policy privacy framework is Chosen-Plaintext Attack secure. We extend the open source Apache ActiveMQ broker by building into a policy-based authorization mechanism to enforce the privacy policy. The performance evaluation results indicate that our approach is scalable with reasonable overheads.

  13. Analytical and policy issues in energy economics: Uses of the FRS data base

    Science.gov (United States)

    1981-12-01

    The relevant literature concerning several major analytical and policy issues in energy economics is reviewed and criticized. The possible uses of the Financial Reporting System (FRS) data base for the analysis of energy policy issues are investigated. Certain features of FRS data suggest several ways in which the data base can be used by policy makers. FRS data are collected on the firm level, and different segments of the same firm operating in different markets can be separately identified. The methods of collection as well as FRS's elaborate data verification process guarantee a high degree of accuracy and consistency among firms.

  14. Moving an Evidence-Based Policy Agenda Forward: Leadership Tips From the Field.

    Science.gov (United States)

    Garrett, Teresa

    2018-05-01

    Advancing evidence-based policy change is a leadership challenge that nurses should embrace. Key tips to ensure that evidence-based policy changes are successful at the individual, community, and population levels are offered to help nurses through the change process. The public trust in the nursing profession is a leverage point that should be used to advance the use of evidence, expedite change, and improve health for students and across communities.

  15. Science Based Policies: How Can Scientist Communicate their Points Across?

    International Nuclear Information System (INIS)

    Elnakat, A. C.

    2002-01-01

    With the complexity of environmental problems faced today, both scientists and policymakers are striving to combine policy and administration with the physical and natural sciences in order to mitigate and prevent environmental degradation. Nevertheless, communicating science to policymakers has been difficult due to many barriers, even though scientists and policymakers share the blame for the miscommunication. This paper provides recommendations targeted at the scientific arena. Establishing guidelines for the cooperation of scientists and policymakers can be an unattainable goal due to the complexity and diversity of political policymaking and environmental issues. However, the recommendations provided in this paper are simple enough to be followed by a wide variety of audiences and institutions in the scientific fields. This will help to fill the gap that has prevented the enhancement of scientific policymaking strategies, which decide on critical issues such as the disposal, transportation and production of hazardous waste.

  16. Heat and water mass transfer in unsaturated swelling clay based buffer: discussion on the effect of the thermal gradient and on the diffusion of water vapour

    Energy Technology Data Exchange (ETDEWEB)

    Robinet, J.O. [Euro-Geomat-Consulting (France)]|[Institut National des Sciences Appliquees (INSA), 35 - Rennes (France); Plas, F. [Agence Nationale pour la Gestion des Dechets Radioactifs (ANDRA), 92 - Chatenay Malabry (France)

    2005-07-01

    on (i) the effect of the temperature and the gradient of temperature on the water transfer and (ii) the diffusion of water vapour, in unsaturated compacted swelling clays. This discussion is based on experimental data and modelling data. Simulations using the Soret effect and low values of the water vapour diffusion coefficient give profiles of water saturation similar to those obtained with the classical approach. (author)

  17. Correlation of acidic and basic carrier ampholyte and immobilized pH gradient two-dimensional gel electrophoresis patterns based on mass spectrometric protein identification

    DEFF Research Database (Denmark)

    Nawrocki, A; Larsen, Martin Røssel; Podtelejnikov, A V

    1998-01-01

    Separation of proteins on either carrier ampholyte-based or immobilized pH gradient-based two-dimensional (2-D) gels gives rise to electrophoretic patterns that are difficult to compare visually. In this paper we have used matrix-assisted laser desorption/ionization mass spectrometry (MALDI-MS) to determine the identities of 335 protein spots in these two 2-D gel systems, including a substantial number of basic proteins which had never been identified before. Proteins that were identified in both gel systems allowed us to cross-reference the gel patterns. Vector analysis of these cross...

  18. Using Internet-Based Videos as Pedagogical Tools in the Social Work Policy Classroom

    Directory of Open Access Journals (Sweden)

    Sarabeth Leukefeld

    2011-11-01

    Full Text Available Students often feel disconnected from their introductory social welfare policy courses. Therefore, it is important that instructors employ engaging pedagogical methods in the classroom. A review of the literature reveals that a host of methods have been utilized to attempt to interest students in policy courses, but there is no mention of using internet-based videos in the social welfare policy classroom. This article describes how to select and use appropriate internet-based videos from websites such as YouTube and SnagFilms, to effectively engage students in social welfare policy courses. Four rules are offered for choosing videos based on emotional impact, brevity, and relevance to course topics. The selected videos should elicit students’ passions and stimulate critical thinking when used in concert with instructor-generated discussion questions, writing assignments, and small group dialogue. Examples of the process of choosing videos, discussion questions, and student reactions to the use of videos are provided.

  19. A new macroscopically anisotropic pressure dependent yield function for metal matrix composite based on strain gradient plasticity for the microstructure

    DEFF Research Database (Denmark)

    Azizi, Reza; Legarth, Brian Nyvang; Niordson, Christian Frithiof

    2013-01-01

    Metal matrix composites with long aligned elastic fibers are studied using an energetic rate independent strain gradient plasticity theory with an isotropic pressure independent yield function at the microscale. The material response is homogenized to obtain a conventional macroscopic model...... is investigated numerically using a unit cell model with periodic boundary conditions containing a single fiber deformed under generalized plane strain conditions. The homogenized response can be modeled by conventional plasticity with an anisotropic yield surface and a free energy depending on plastic strain...

  20. A spherically-shaped PZT thin film ultrasonic transducer with an acoustic impedance gradient matching layer based on a micromachined periodically structured flexible substrate.

    Science.gov (United States)

    Feng, Guo-Hua; Liu, Wei-Fan

    2013-10-09

    This paper presents the microfabrication of an acoustic impedance gradient matching layer on a spherically-shaped piezoelectric ultrasonic transducer. The acoustic matching layer can be designed to achieve higher acoustic energy transmission and operating bandwidth. Also included in this paper are a theoretical analysis of the device design and a micromachining technique to produce the novel transducer. Based on a design of a lead zirconate titanate (PZT) micropillar array, the constructed gradient acoustic matching layer has much better acoustic transmission efficiency within a 20-50 MHz operation range compared to a matching layer with a conventional quarter-wavelength thickness Parylene deposition. To construct the transducer, periodic microcavities are built on a flexible copper sheet, and then the sheet forms a designed curvature with a ball shaping. After PZT slurry deposition, the constructed PZT micropillar array is released onto a curved thin PZT layer. Following Parylene conformal coating on the processed PZT micropillars, the PZT micropillars and the surrounding Parylene comprise a matching layer with gradient acoustic impedance. By using the proposed technique, the fabricated transducer achieves a center frequency of 26 MHz and a -6 dB bandwidth of approximately 65%.
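
    For comparison with the conventional single matching layer mentioned above, the quarter-wavelength thickness follows directly from the layer's sound speed and the centre frequency. The snippet below uses an assumed, typical longitudinal sound speed for Parylene; the value is illustrative and not taken from the paper.

```python
# Quarter-wavelength thickness of a conventional single matching layer,
# for comparison with the graded micropillar layer described above.
c_parylene = 2200.0          # assumed longitudinal sound speed in Parylene (m/s)
f_center = 26e6              # transducer centre frequency from the abstract (Hz)

thickness = c_parylene / (4.0 * f_center)
print(f"quarter-wavelength thickness ~ {thickness * 1e6:.1f} micrometres")
```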

  1. The Study of Geological Structures in Suli and Tulehu Geothermal Regions (Ambon, Indonesia Based on Gravity Gradient Tensor Data Simulation and Analytic Signal

    Directory of Open Access Journals (Sweden)

    Richard Lewerissa

    2017-12-01

    Full Text Available In early 2017, the geothermal system in the Suli and Tulehu areas of Ambon (Indonesia) was investigated using the gravity gradient tensor and the analytic signal. The gravity gradient tensor and analytic signal were obtained through forward modeling based on a rectangular prism, applied to the complete Bouguer anomaly data over the study area by using the Fast Fourier Transform (FFT). The analysis was conducted to enhance geological structures such as faults, which act as pathways for geothermal fluid circulation but are not visible at the surface because they are covered by sediment. The complete Bouguer anomaly ranges from 93 mGal to 105 mGal and decreases from the southwest in Suli to the northeast in Tulehu. A high gravity anomaly indicates a strong magmatic intrusion below the Suli region. The decrease in gravity anomalies over the Eriwakang mountain and most of Tulehu is associated with coral limestone, while the lower gravity anomalies located in the north to northeast of Tulehu are associated with alluvium. The residual anomaly shows that the drill well TLU-01 and the geothermal manifestations, along with the Banda and Banda-Hatuasa faults, are associated with the lowest gravity anomaly (negative zone). The gravity gradient tensor simulation and the analytic signal of Suli and Tulehu give more detailed information about the geological features; in particular, the gzz component allows an accurate description of the shapes of structures, especially the Banda fault, which is associated with a zero value. This result will be useful as a geophysical constraint on subsurface modeling based on gravity gradient inversion over the area.
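
    As a schematic of how gradient-tensor-style quantities can be obtained from gridded gravity data, the vertical derivative of a potential field corresponds to multiplication by the radial wavenumber |k| in the Fourier domain. The sketch below applies that standard relation with numpy; the grid spacing and the stand-in input grid are assumptions for illustration, not the paper's data.

```python
import numpy as np

# First vertical derivative of a gridded (Bouguer) anomaly via the standard
# wavenumber-domain relation d/dz <-> |k|. Input grid and spacing are made up.
def vertical_derivative(anomaly, dx, dy):
    ny, nx = anomaly.shape
    kx = 2.0 * np.pi * np.fft.fftfreq(nx, d=dx)
    ky = 2.0 * np.pi * np.fft.fftfreq(ny, d=dy)
    kxx, kyy = np.meshgrid(kx, ky)
    k = np.hypot(kxx, kyy)
    return np.real(np.fft.ifft2(np.fft.fft2(anomaly) * k))   # input units per metre

grid = np.random.default_rng(0).normal(size=(64, 64))        # stand-in anomaly grid (mGal)
dz = vertical_derivative(grid, dx=500.0, dy=500.0)
print(dz.shape)
```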

  2. Improving a maximum horizontal gradient algorithm to determine geological body boundaries and fault systems based on gravity data

    Science.gov (United States)

    Van Kha, Tran; Van Vuong, Hoang; Thanh, Do Duc; Hung, Duong Quoc; Anh, Le Duc

    2018-05-01

    The maximum horizontal gradient method was first proposed by Blakely and Simpson (1986) for determining the boundaries between geological bodies with different densities. The method involves the comparison of a center point with its eight nearest neighbors in four directions within each 3 × 3 calculation grid. The horizontal location and magnitude of the maximum values are found by interpolating a second-order polynomial through the trio of points provided that the magnitude of the middle point is greater than its two nearest neighbors in one direction. In theoretical models of multiple sources, however, the above condition does not allow the maximum horizontal locations to be fully located, and it could be difficult to correlate the edges of complicated sources. In this paper, the authors propose an additional condition to identify more maximum horizontal locations within the calculation grid. This additional condition will improve the method algorithm for interpreting the boundaries of magnetic and/or gravity sources. The improved algorithm was tested on gravity models and applied to gravity data for the Phu Khanh basin on the continental shelf of the East Vietnam Sea. The results show that the additional locations of the maximum horizontal gradient could be helpful for connecting the edges of complicated source bodies.
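
    A minimal sketch of the classic Blakely and Simpson test is given below: within each 3 × 3 window of the horizontal-gradient magnitude, the centre value is compared with its two neighbours along the four directions (row, column and both diagonals), and points that exceed both neighbours in enough directions are kept as maxima. The minimum number of satisfied directions is an illustrative choice, the sub-cell refinement by a second-order polynomial through each trio is omitted, and the authors' additional condition is not reproduced here.

```python
import numpy as np

# Blakely & Simpson (1986)-style scan of the horizontal-gradient magnitude H.
# The parabola-based sub-cell refinement is omitted for brevity.
def horizontal_gradient_maxima(H, min_directions=2):
    directions = [((0, -1), (0, 1)), ((-1, 0), (1, 0)),
                  ((-1, -1), (1, 1)), ((-1, 1), (1, -1))]
    peaks = []
    for i in range(1, H.shape[0] - 1):
        for j in range(1, H.shape[1] - 1):
            hits = 0
            for d1, d2 in directions:
                a = H[i + d1[0], j + d1[1]]
                b = H[i, j]
                c = H[i + d2[0], j + d2[1]]
                if b > a and b > c:        # centre exceeds both neighbours
                    hits += 1
            if hits >= min_directions:
                peaks.append((i, j))
    return peaks

H = np.random.default_rng(1).random((50, 50))   # stand-in gradient-magnitude grid
print(len(horizontal_gradient_maxima(H)), "candidate boundary points")
```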

  3. Highly transparent, stable, and superhydrophobic coatings based on gradient structure design and fast regeneration from physical damage

    International Nuclear Information System (INIS)

    Chen, Zao; Liu, Xiaojiang; Wang, Yan; Li, Jun; Guan, Zisheng

    2015-01-01

    Highlights: • A highly transparent, stable, and superhydrophobic PET film was fabricated by dip-coating. • The gradient structure is beneficial to both hydrophobicity and transparency. • The superhydrophobic PET film quickly regains its properties after physical damage by a one-step spray. • The fabrication method is suitable for various substrates and large-scale production. - Abstract: Optical transparency, mechanical flexibility, and fast regeneration are important factors in expanding the application of superhydrophobic surfaces. Herein, we fabricated highly transparent, stable, and superhydrophobic coatings through a novel gradient structure design by versatile dip-coating of silica colloid particles (SCPs) and diethoxydimethylsilane cross-linked silica nanoparticles (DDS-SNPs) on polyethylene terephthalate (PET) film and glass, followed by modification with octadecyltrichlorosilane (OTCS). When the DDS concentration reached 5 wt%, the modified SCPs/DDS-SNPs coating exhibited a water contact angle (WCA) of 153° and a sliding angle (SA) <5°. In addition, the average transmittance of this superhydrophobic coating on PET film and glass was increased by 2.7% and 1%, respectively, in the visible wavelength range. The superhydrophobic coating also showed good robustness and stability against water-drop impact, ultrasonic damage, and acid solution. Moreover, the superhydrophobic PET film can quickly regain its superhydrophobicity after physical damage through a one-step spray of a regenerative solution of dodecyltrichlorosilane (DTCS)-modified silica nanoparticles at room temperature. The demonstrated method for the preparation and regeneration of the superhydrophobic coating is applicable to different substrates and large-scale production at room temperature.

  4. Highly efficient separation of surfactant stabilized water-in-oil emulsion based on surface energy gradient and flame retardancy.

    Science.gov (United States)

    Long, Mengying; Peng, Shan; Deng, Wanshun; Miao, Xinrui; Wen, Ni; Zhou, Qiannan; Deng, Wenli

    2018-06-15

    A surface energy gradient generates an imbalanced force that drives tiny water droplets in dry air from hydrophilic bumps to superhydrophobic domains, a mechanism found on the Stenocara beetle's back. Inspired by this phenomenon, we introduced a pristine superhydrophilic filter paper on top of a lower-surface-energy superhydrophobic filter paper. ZnSn(OH)6 particles and polydimethylsiloxane were mixed to prepare the superhydrophobic coating, and the coating was spray-coated onto poly(dialkyldimethylammonium chloride)-covered filter paper to separate a Span 80-stabilized water-in-isooctane emulsion. A pristine filter paper was added on the superhydrophobic filter paper to fabricate another membrane for separation. The results revealed that, with a pristine filter paper, the membrane performed with higher efficiency and better recyclability and could separate emulsions with higher surfactant concentrations. The stabilized water droplets passed through the superamphiphilic surface and were hindered by the superhydrophobic surface, generating a surface energy gradient for better separation. In addition, the superhydrophobic membrane could be protected from fire to some degree owing to the introduced ZnSn(OH)6 particles with excellent flame retardancy. This easy and efficient approach of simply introducing a pristine superhydrophilic membrane has great potential for water-in-oil emulsion separation and oil purification. Copyright © 2018 Elsevier Inc. All rights reserved.

  5. Community-Based Wildlife Management In Tanzania: The Policy ...

    African Journals Online (AJOL)

    Community-based wildlife management (CWM) approach – known to others as community-based conservation – was first introduced in Tanzania in 1987/88. The approach intends to reconcile wildlife conservation and rural economic development. In the 1990s Tanzanians witnessed a rush by government Ministries and ...

  6. The Role Played by Agricultural Policy-based Finance in New Village Construction

    Institute of Scientific and Technical Information of China (English)

    2011-01-01

    The necessity of agricultural policy-based finance for supporting new village construction is analyzed. In the first place, the theoretical roots of agricultural policy-based finance supporting new village construction are "market failure" and "government intervention"; in the second place, the continual decline of agriculture and the "rural financial market failure" of recent years have become the objective evidence and historical mission for agricultural policy-based finance to support new village construction; in the third place, combining agricultural policy-based finance with new village construction is conducive to solving the "three agriculture" problems and facilitating the reform of new village construction. The feasibility of such support is also analyzed. Firstly, agricultural policy-based finance holds the status and position of the "primary drive" in new village construction; secondly, the nation is continuously deepening the reform of the rural financial system and of policy-based banks and strengthening the functions of the Agricultural Development Bank, which provides the policy basis for agricultural policy-based finance to support new village construction; thirdly, the 14 years of reform and development of the Agricultural Development Bank and the eleventh five-year plan lay a sound practical basis for agricultural policy-based finance to support new village construction. Based on this necessity and feasibility, six aspects are analyzed in order to bring the function of agricultural policy-based finance as the "first engine" of new village construction into full play: firstly, strengthening credit and loan support for grain, cotton and other agricultural products in the circulation domain; secondly, strengthening credit and loan support for agricultural industrialization in the processing field; thirdly, intensifying credit and loan support for comprehensive agricultural development, rural infrastructure construction, and the application and promotion of

  7. Combining rate-based and cap-and-trade emissions policies

    International Nuclear Information System (INIS)

    Fischer, Carolyn

    2003-12-01

    Rate-based emissions policies (like tradable performance standards, TPS) fix average emissions intensity, while cap-and-trade (CAT) policies fix total emissions. This paper shows that unfettered trade between rate-based and cap-and-trade programs always raises combined emissions, except when product markets are related in particular ways. Gains from trade are fully passed on to consumers in the rate-based sector, resulting in more output and greater emissions allocations. We consider several policy options to offset the expansion, including a tax, an 'exchange rate' to adjust for relative permit values, output-based allocation (OBA) for the rate-based sector, and tightening the cap. A range of combinations of tighter allocations could improve situations in both sectors with trade while holding emissions constant

  8. Value-based Q(s,S) policy for joint replenishments

    DEFF Research Database (Denmark)

    Johansen, Søren Glud; Thorstenson, Anders

    replenishment order is issued if the expected cost of ordering immediately according to the (s, S) policy is less than the expected cost of deferring the order until the next demand or until the level Q is reached. We use simulation to evaluate our policy. Applying the value-based Q(s, S) policy to a standard set of 12-item numerical examples from the literature, the long-run average cost of the best known solution is reduced by approximately 1%. Further examples are also investigated, and in some cases where the cost structure implies a high service level, the cost reduction exceeds 10% of the cost...

  9. Food Availability in School Stores in Seoul, South Korea after Implementation of Food- and Nutrient-Based Policies

    Science.gov (United States)

    Choi, Seul Ki; Frongillo, Edward A.; Blake, Christine E.; Thrasher, James F.

    2017-01-01

    Background: To improve school store food environments, the South Korean government implemented 2 policies restricting unhealthy food sales in school stores. A food-based policy enacted in 2007 restricts specific food sales (soft drinks); and a nutrient-based policy enacted in 2009 restricts energy-dense and nutrient-poor (EDNP) food sales. The…

  10. The theory-based policy evaluation method applied to the ex-post evaluation of climate change policies in the built environment in the Netherlands

    International Nuclear Information System (INIS)

    Harmelink, Mirjam; Joosen, Suzanne; Blok, Kornelis

    2005-01-01

    The challenge within ex-post policy evaluation research is to unravel the whole policy process and evaluate the effect and effectiveness of the different steps. Through this unravelling of the policy implementation process, insight is gained into where something went wrong in the process of policy design and implementation and where the keys are for improving effectiveness and efficiency. This article presents the results of an ex-post policy evaluation of the effect and effectiveness of the Energy Premium Regulation scheme and the Long Term Voluntary agreements to reduce CO2 emissions in the built environment in the Netherlands, applying the theory-based policy evaluation method. The article starts with a description of the theory-based policy evaluation method. The method begins with the formulation of a program theory, which describes the 'ideal' operation of a policy instrument, from the viewpoint of the policy makers. Thereupon the theory is checked and adapted through interviews with policy makers and executors, and the cause and effect chain is finally translated into (quantitative) indicators. The article shows that the theory-based evaluation method has benefits over other ex-post evaluation methods. These include: the whole policy implementation process is evaluated, and the focus is not just on the 'end-result' (i.e. efficiency improvement and CO2 emission reduction); through the development of indicators for each step in the implementation process, 'successes and failures' are quantified to the greatest possible extent; and by applying this approach we not only learn whether policies are successful or not, but also why they succeeded or failed and how they can be improved

  11. Rational and Efficient Preparative Isolation of Natural Products by MPLC-UV-ELSD based on HPLC to MPLC Gradient Transfer.

    Science.gov (United States)

    Challal, Soura; Queiroz, Emerson Ferreira; Debrus, Benjamin; Kloeti, Werner; Guillarme, Davy; Gupta, Mahabir Prashad; Wolfender, Jean-Luc

    2015-11-01

    In natural product research, the isolation of biomarkers or bioactive compounds from complex natural extracts represents an essential step for de novo identification and bioactivity assessment. When pure natural products have to be obtained in milligram quantities, the chromatographic steps are generally laborious and time-consuming. In this respect, an efficient method has been developed for the reversed-phase gradient transfer from high-performance liquid chromatography to medium-performance liquid chromatography for the isolation of pure natural products at the level of tens of milligrams from complex crude natural extracts. The proposed method provides a rational way to predict retention behaviour and resolution at the analytical scale prior to medium-performance liquid chromatography, and guarantees similar performances at both analytical and preparative scales. The optimisation of the high-performance liquid chromatography separation and system characterisation allows for the prediction of the gradient at the medium-performance liquid chromatography scale by using identical stationary phase chemistries. The samples were introduced into medium-performance liquid chromatography using a pressure-resistant aluminium dry load cell especially designed for this study to allow high sample loading while maintaining a maximum achievable flow rate for the separation. The method has been validated with a mixture of eight natural product standards. Ultraviolet and evaporative light scattering detections were used in parallel for comprehensive monitoring. In addition, post-chromatographic mass spectrometry detection was provided by high-throughput ultrahigh-performance liquid chromatography time-of-flight mass spectrometry analyses of all fractions. The processing of all liquid chromatography-mass spectrometry data in the form of a medium-performance liquid chromatography × ultrahigh-performance liquid chromatography time-of-flight mass spectrometry matrix enabled an
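
    To illustrate what a gradient transfer between columns of very different size involves, the usual geometric rule keeps the number of column volumes delivered during each gradient segment constant when moving from the analytical to the preparative column. The sketch below applies that generic rule with made-up column dimensions and flow rates; the paper's own transfer calculation may differ in detail.

```python
import math

# Generic geometric gradient-transfer rule: keep F*t/V constant per segment.
# All dimensions and flow rates below are illustrative assumptions.
def transferred_segment_time(t1_min, d1_mm, L1_mm, F1_ml_min, d2_mm, L2_mm, F2_ml_min):
    V1 = math.pi * (d1_mm / 2.0) ** 2 * L1_mm / 1000.0   # geometric column volume (mL)
    V2 = math.pi * (d2_mm / 2.0) ** 2 * L2_mm / 1000.0
    return t1_min * (F1_ml_min / F2_ml_min) * (V2 / V1)

# e.g. a 30 min HPLC segment on a 4.6 x 150 mm column at 1 mL/min, transferred
# to a 26 x 150 mm MPLC column run at 15 mL/min:
print(round(transferred_segment_time(30.0, 4.6, 150.0, 1.0, 26.0, 150.0, 15.0), 1), "min")
```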

  12. Combining Step Gradients and Linear Gradients in Density.

    Science.gov (United States)

    Kumar, Ashok A; Walz, Jenna A; Gonidec, Mathieu; Mace, Charles R; Whitesides, George M

    2015-06-16

    Combining aqueous multiphase systems (AMPS) and magnetic levitation (MagLev) provides a method to produce hybrid gradients in apparent density. AMPS—solutions of different polymers, salts, or surfactants that spontaneously separate into immiscible but predominantly aqueous phases—offer thermodynamically stable steps in density that can be tuned by the concentration of solutes. MagLev—the levitation of diamagnetic objects in a paramagnetic fluid within a magnetic field gradient—can be arranged to provide a near-linear gradient in effective density where the height of a levitating object above the surface of the magnet corresponds to its density; the strength of the gradient in effective density can be tuned by the choice of paramagnetic salt and its concentrations and by the strength and gradient in the magnetic field. Including paramagnetic salts (e.g., MnSO4 or MnCl2) in AMPS, and placing them in a magnetic field gradient, enables their use as media for MagLev. The potential to create large steps in density with AMPS allows separations of objects across a range of densities. The gradients produced by MagLev provide resolution over a continuous range of densities. By combining these approaches, mixtures of objects with large differences in density can be separated and analyzed simultaneously. Using MagLev to add an effective gradient in density also enables tuning the range of densities captured at an interface of an AMPS by simply changing the position of the container in the magnetic field. Further, by creating AMPS in which phases have different concentrations of paramagnetic ions, the phases can provide different resolutions in density. These results suggest that combining steps in density with gradients in density can enable new classes of separations based on density.
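
    The near-linear height-to-density relation in MagLev mentioned above is often exploited through a simple two-point calibration: two beads of known density levitated in the paramagnetic phase fix a linear map from levitation height to apparent density. The numbers in the sketch below are made-up calibration values, not data from the paper.

```python
import numpy as np

# Two-point calibration of a (near-linear) MagLev height-to-density mapping.
h_std = np.array([5.0, 25.0])       # levitation heights of two density standards (mm)
rho_std = np.array([1.10, 1.00])    # known densities of the standards (g/cm^3)
slope, intercept = np.polyfit(h_std, rho_std, 1)

def density_from_height(h_mm):
    return slope * h_mm + intercept

print(round(float(density_from_height(15.0)), 3), "g/cm^3")   # -> 1.05
```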

  13. Informing principal policy reforms in South Africa through data-based evidence

    OpenAIRE

    Gabrielle Wills

    2015-01-01

    In the past decade there has been a notable shift in South African education policy that raises the value of school leadership as a lever for learning improvements. Despite a growing discourse on school leadership, there has been a lack of empirically based evidence on principals to inform, validate or debate the efficacy of proposed policies in raising the calibre of school principals. Drawing on findings from a larger study to understand the labour market for school principals in South Africa...

  14. Combined use of headwind ramps and gradients based on LIDAR data in the alerting of low-level windshear/turbulence

    Energy Technology Data Exchange (ETDEWEB)

    Chan, P.W.; Hon, K.K. [Hong Kong Observatory, Hong Kong (China); Shin, D.K. [Korea Meteorological Administration, Seoul (Korea, Republic of)

    2011-12-15

    A sophisticated algorithm based on the detection of significant headwind changes, the so-called "windshear ramps", has been developed by the Hong Kong Observatory (HKO) for the alerting of low-level windshear using LIDAR data. The method, named LIWAS (LIDAR Windshear Alerting System), is particularly efficient in detecting airflow disturbances in the vicinity of the Hong Kong International Airport (HKIA) due to terrain disruption of the background wind. It puts emphasis on sustained headwind change from one level to another. However, for terrain-disrupted airflow, there may also be abrupt wind changes of smaller spatial scales (e.g. over a distance of a few hundred metres) embedded in the windshear ramp, which typically spans a larger spatial scale (e.g. over a couple of kilometres). As such, for the alerting of low-level windshear it may be advantageous to consider both the larger scale windshear ramps and the smaller scale wind changes, i.e. headwind gradients. This paper examines the usefulness of such an approach by applying the method to the windshear cases in springtime over four years. It turns out that the inclusion of headwind gradients helps capture 5-10% more of the significant windshear reported by the pilots. For a particular runway corridor, the combined use of the two windshear detection methods even outperforms the existing windshear alerting service at HKIA. The paper will discuss the rationale behind the headwind gradient method, a prototype of its implementation, and its combined use with the existing LIWAS alerts. It will also discuss preliminary results on the climatology of headwind changes at HKIA based on LIDAR data, as well as the use of an aircraft simulator in improving the calculation of the LIDAR-based F-factor. (orig.)
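
    The snippet below is an illustrative scan, not the operational LIWAS algorithm: a 1-D headwind profile along a runway corridor is checked for larger-scale changes over roughly two kilometres ("ramps") and smaller-scale changes over a few hundred metres ("gradients"). The 15 kt change threshold, the window lengths and the synthetic profile are assumptions made for the example.

```python
import numpy as np

# Scan a 1-D headwind profile for sustained changes over a given window length.
def flag_headwind_changes(x_m, headwind_kt, window_m, threshold_kt=15.0):
    flags = []
    for i, x0 in enumerate(x_m):
        j = int(np.searchsorted(x_m, x0 + window_m))
        if j < len(x_m) and abs(headwind_kt[j] - headwind_kt[i]) >= threshold_kt:
            flags.append((float(x0), float(x_m[j])))
    return flags

x = np.arange(0.0, 6000.0, 100.0)                          # range gates along the corridor (m)
hw = 25.0 - 25.0 / (1.0 + np.exp(-(x - 3000.0) / 100.0))   # synthetic headwind profile (kt)
ramps = flag_headwind_changes(x, hw, window_m=2000.0)      # larger-scale ramps
gradients = flag_headwind_changes(x, hw, window_m=400.0)   # smaller-scale gradients
print(len(ramps), "ramp segments,", len(gradients), "gradient segments")
```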

  15. Air pollution assessment based on elemental concentration of leaves tissue and foliage dust along an urbanization gradient in Vienna.

    Science.gov (United States)

    Simon, Edina; Braun, Mihály; Vidic, Andreas; Bogyó, Dávid; Fábián, István; Tóthmérész, Béla

    2011-05-01

    Foliage dust contains heavy metals that may have harmful effects on human health. The elemental contents of tree leaves and foliage dust are especially useful for assessing environmental air pollution. We studied the elemental concentrations in foliage dust and leaves of Acer pseudoplatanus along an urbanization gradient in Vienna, Austria. Samples were collected from urban, suburban and rural areas. We analysed 19 elements in both kinds of samples: aluminium, barium, calcium, copper, iron, potassium, magnesium, sodium, phosphorus, sulphur, strontium and zinc. We found that the elemental concentrations of foliage dust were significantly higher in the urban area than in the rural area for aluminium, barium, iron, lead, phosphorus and selenium. Elemental concentrations of leaves were significantly higher in the urban than in the rural area for manganese and strontium. Urbanization significantly changed the elemental concentrations of foliage dust and leaves, and the applied method can be useful for monitoring the environmental load. Copyright © 2011 Elsevier Ltd. All rights reserved.

  16. Policy recommendations to promote shale gas development in China based on a technical and economic evaluation

    International Nuclear Information System (INIS)

    Yuan, Jiehui; Luo, Dongkun; Xia, Liangyu; Feng, Lianyong

    2015-01-01

    Because of its resource potential and clean-burning advantages, the development of shale gas can significantly increase the supply of cleaner energy while offering the associated benefits. To foster shale gas development, many policy incentives have been introduced in China. However, the current incentives have not been sufficiently aggressive, and the shale gas industry has been slow to develop. Existing policies thus need to be further improved. To provide effective support for decision makers in China, a technical and economic evaluation is performed in this study to explore the profitability of shale gas production in pilot zones. The results show that shale gas production is subeconomic under the current technical and economic conditions. Based on this evaluation, a policy analysis is conducted to investigate the profitability improvement offered by the major policies available in China to elucidate a path toward improving incentive policies. The results indicate that policy instruments related to gas prices, financial subsidies, corporate income taxes or combinations thereof could be used as priority options to improve policy incentives. Based on these results, recommendations are presented to improve the current incentive policies aimed at accelerating shale gas development. -- Highlights: • We explore the economic feasibility of shale gas development in China. • Current incentive policies cannot render shale gas development economically viable. • These incentives must be improved to effectively promote shale gas development. • We investigate the effect of the major policies available in China to light a path. • Recommendations are proposed to continually improve the incentive policies in China
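
    As a toy illustration of the kind of discounted-cash-flow check that underlies such technical-economic evaluations, the snippet below computes a net present value for a single hypothetical well. Every number is a made-up assumption chosen for demonstration only, not data or results from the paper.

```python
# Hypothetical single-well NPV check (all inputs are illustrative assumptions).
capex = 50e6                                     # drilling + completion cost (USD)
gas_price = 0.28                                 # wellhead price (USD per cubic metre)
opex_share = 0.35                                # operating cost as a share of revenue
discount_rate = 0.10
annual_output = [30e6 * 0.8 ** year for year in range(15)]   # declining production (m^3/yr)

npv = -capex + sum(gas_price * q * (1.0 - opex_share) / (1.0 + discount_rate) ** (t + 1)
                   for t, q in enumerate(annual_output))
print(f"NPV ~ {npv / 1e6:.1f} million USD")
```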

  17. Economic transition policies in Chinese resource-based cities: An overview of government efforts

    International Nuclear Information System (INIS)

    Li, Huijuan; Long, Ruyin; Chen, Hong

    2013-01-01

    Resource-based cities in China have made momentous contributions to the development of the national economy for decades. However, with the depletion of natural resources, their sustainable development is challenging and transition is important. The Chinese government has made great efforts to help resource-based cities. The purpose of this study is to investigate transition policies and their implementation. Firstly, we reviewed previous studies and summarized the essential elements of some successful resource-based cities, which are useful experiences for Chinese resource-based cities. Secondly, we studied the development of resource-based cities over the past 10 years with a focus on economic development, industrial structure, government revenue and environmental conditions. We found that resource-based cities were less developed compared to other cities. The main reasons are the after-effects of a planned economy, an unreasonable tax system, planning mistakes and misguided resources exploitation policies. Thirdly, we analyzed several aspects of the policy responses after the introduction of transition policies, including designating 69 resource-exhausted cities, supporting cities with funds and projects, formulating transition plans and evaluating transition performance. However, there are some deficiencies in the process of policy implementation. Finally, some recommendations were provided to improve transition performance and sustainable development for resource-based cities. - Highlights: ► Analyze the development of Chinese resource-based cities from four aspects. ► Analyze the causes of less development in resource-based cities. ► Investigate policies and their responses to transformation. ► Provide recommendations to improve transformation performance and sustainable development

  18. Optimal policy for value-based decision-making.

    Science.gov (United States)

    Tajima, Satohiro; Drugowitsch, Jan; Pouget, Alexandre

    2016-08-18

    For decades now, normative theories of perceptual decisions, and their implementation as drift diffusion models, have driven and significantly improved our understanding of human and animal behaviour and the underlying neural processes. While similar processes seem to govern value-based decisions, we still lack the theoretical understanding of why this ought to be the case. Here, we show that, similar to perceptual decisions, drift diffusion models implement the optimal strategy for value-based decisions. Such optimal decisions require the models' decision boundaries to collapse over time, and to depend on the a priori knowledge about reward contingencies. Diffusion models only implement the optimal strategy under specific task assumptions, and cease to be optimal once we start relaxing these assumptions, by, for example, using non-linear utility functions. Our findings thus provide the much-needed theory for value-based decisions, explain the apparent similarity to perceptual decisions, and predict conditions under which this similarity should break down.
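
    A toy simulation of the mechanism discussed above is sketched below: a drift-diffusion process accumulates noisy evidence until it hits a decision boundary that collapses over time. The bound shape, drift and all other parameters are illustrative choices, not values from the paper.

```python
import numpy as np

# Drift-diffusion decision with an exponentially collapsing boundary (toy model).
rng = np.random.default_rng(0)

def simulate_trial(drift=0.5, noise=1.0, dt=0.001, b0=1.5, tau=1.0, t_max=5.0):
    x, t = 0.0, 0.0
    while t < t_max:
        bound = b0 * np.exp(-t / tau)          # boundary collapses over time
        if x >= bound:
            return +1, t                       # choose option 1
        if x <= -bound:
            return -1, t                       # choose option 2
        x += drift * dt + noise * np.sqrt(dt) * rng.normal()
        t += dt
    return 0, t_max                            # no decision reached

choices = [simulate_trial()[0] for _ in range(200)]
print("fraction choosing option 1:", np.mean([c == 1 for c in choices]))
```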

  19. Developing Moral Sport Policies Through Act-Utilitarianism Based on Bentham’s Hedonic Calculus

    Directory of Open Access Journals (Sweden)

    ROBERT C. SCHNEIDER

    2010-01-01

    Full Text Available Moral policy can be developed and maintained in sport organizations through an approach that incorporates act-utilitarianism (AU) based on Jeremy Bentham's hedonic calculus (HC). Sport managers' effective application of AU based on HC takes the form of a holistic approach to moral policy development and maintenance and requires an understanding of the parts and process of a strict adherence to AU based on HC. The traits of common sense, habits, and past experience are supported by the utilitarian views held by Bentham and Mill to accurately predict the happiness and unhappiness that result from actions (Beauchamp, 1982) and are also necessary to drive a holistic approach of AU based on HC that develops and maintains moral policy in sport organizations.

  20. Understanding the cost bases of Space Shuttle pricing policies for commercial and foreign customers

    Science.gov (United States)

    Stone, Barbara A.

    1984-01-01

    The principles and underlying cost bases of the 1977 and 1982 Space Shuttle Reimbursement Policies are compared and contrasted. Out-of-pocket cost recovery has been chosen as the base of the price for the 1986-1988 time period. With this cost base, it is NASA's intent to recover the total cost of consumables and the launch and flight operations costs added by commercial and foreign customers over the 1986-1988 time period. Beyond 1988, NASA intends to return to its policy of full cost recovery.

  1. Policy-based Network Management in Home Area Networks: Interim Test Results

    OpenAIRE

    Ibrahim Rana, Annie; Ó Foghlú, Mícheál

    2009-01-01

    This paper argues that Home Area Networks (HANs) are a good candidate for advanced network management automation techniques, such as Policy-Based Network Management (PBNM). What is proposed is a simple use of policy based network management to introduce some level of Quality of Service (QoS) and Security management in the HAN, whilst hiding this complexity from the home user. In this paper we have presented the interim test results of our research experiments (based on a scenario) using the H...

  2. Policy-Aware Sender Anonymity in Location-Based Services

    Science.gov (United States)

    Vyas, Avinash

    2011-01-01

    Sender anonymity in Location-based services (LBS) refers to hiding the identity of a mobile device user who sends requests to the LBS provider for services in her proximity (e.g. "find the nearest gas station etc."). The goal is to keep the requester's interest private even from attackers who (via hacking or subpoenas) gain access to the LBS…

  3. Base stock policies with degraded service to large orders

    DEFF Research Database (Denmark)

    Du, Bisheng; Larsen, Christian

    2011-01-01

    and the base stock levels of each rule are such that all customers (of both order types) are indifferent between the two rules. When comparing the difference in the average on-hand inventory levels, we can then make an assessment of the threshold value of the cost of splitting an order (which may otherwise...

  4. Policy design and performance of emissions trading markets: an adaptive agent-based analysis.

    Science.gov (United States)

    Bing, Zhang; Qinqin, Yu; Jun, Bi

    2010-08-01

Emissions trading is considered to be a cost-effective environmental economic instrument for pollution control. However, the pilot emissions trading programs in China have failed to bring remarkable success in the campaign for pollution control. The policy design of an emissions trading program is found to have a decisive impact on its performance. In this study, an artificial market for sulfur dioxide (SO2) emissions trading was constructed using an agent-based model. The performance of the Jiangsu SO2 emissions trading market under different policy design scenarios was also examined. Results show that the market efficiency of emissions trading is significantly affected by policy design and existing policies. China's coal-electricity price system is the principal factor influencing the performance of the SO2 emissions trading market. Transaction costs would also reduce market efficiency. In addition, current-level emissions discharge fees/taxes and banking mechanisms do not distinctly affect policy performance. Thus, applying emissions trading to emission control in China should consider policy design and interaction with other existing policies.
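
    The agent-based construction mentioned in this abstract can be conveyed with a toy permit market. In the sketch below, firms with linear marginal abatement costs trade permits pairwise until marginal costs (roughly) equalize, and a transaction-cost parameter can be used to show how frictions reduce the gains from trade. The firm parameters are hypothetical and not calibrated to the Jiangsu SO2 market.

        # Toy agent-based sketch of an emissions permit market. Each firm has a linear
        # marginal abatement cost (MAC) and an initial permit allocation; pairs of firms
        # trade whenever their MACs differ by more than the transaction cost, moving the
        # market toward a common permit price. Firm parameters are hypothetical.
        import random

        class Firm:
            def __init__(self, name, emissions, permits, mac_slope):
                self.name = name
                self.emissions = emissions      # uncontrolled emissions (tonnes)
                self.permits = permits          # allocated permits (tonnes)
                self.mac_slope = mac_slope      # marginal abatement cost = slope * abatement

            @property
            def abatement(self):
                return max(self.emissions - self.permits, 0)

            @property
            def marginal_cost(self):
                return self.mac_slope * self.abatement

        def trade(firms, rounds=10_000, lot=1.0, transaction_cost=0.0, rng=random):
            for _ in range(rounds):
                buyer, seller = rng.sample(firms, 2)
                if buyer.marginal_cost < seller.marginal_cost:
                    buyer, seller = seller, buyer   # the high-MAC firm is the buyer
                if buyer.marginal_cost - seller.marginal_cost > transaction_cost:
                    buyer.permits += lot
                    seller.permits -= lot
            return firms

        if __name__ == "__main__":
            firms = [Firm("A", 100, 80, 2.0), Firm("B", 100, 80, 0.5), Firm("C", 100, 80, 1.0)]
            for f in trade(firms):
                print(f"{f.name}: permits={f.permits:.0f}, abatement={f.abatement:.0f}, "
                      f"MAC={f.marginal_cost:.1f}")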

  5. STOCHASTIC GRADIENT METHODS FOR UNCONSTRAINED OPTIMIZATION

    Directory of Open Access Journals (Sweden)

    Nataša Krejić

    2014-12-01

Full Text Available This paper presents an overview of gradient-based methods for minimization of noisy functions. It is assumed that the objective function is either given with error terms of stochastic nature or given as a mathematical expectation. Such problems arise in the context of simulation-based optimization. The focus of this presentation is on the gradient-based Stochastic Approximation and Sample Average Approximation methods. The concept of stochastic gradient approximation of the true gradient can be successfully extended to deterministic problems. Methods of this kind are presented for data fitting and machine learning problems.
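
    A minimal example of the stochastic gradient idea surveyed in this record is given below: stochastic approximation with a diminishing (Robbins-Monro style) step size applied to a least-squares data-fitting problem, where each step uses an unbiased mini-batch estimate of the gradient. The synthetic data, batch size and step-size schedule are illustrative assumptions.

        # Minimal stochastic gradient (Robbins-Monro style) sketch for a least-squares
        # data-fitting problem. Synthetic data and step schedule are assumptions.
        import numpy as np

        rng = np.random.default_rng(0)

        # synthetic linear-regression data: y = X @ w_true + noise
        n, d = 5000, 3
        X = rng.normal(size=(n, d))
        w_true = np.array([1.5, -2.0, 0.5])
        y = X @ w_true + 0.1 * rng.normal(size=n)

        def stochastic_gradient(w, batch=32):
            """Unbiased mini-batch estimate of the gradient of 0.5*mean((Xw - y)^2)."""
            idx = rng.integers(0, n, size=batch)
            Xi, yi = X[idx], y[idx]
            return Xi.T @ (Xi @ w - yi) / batch

        w = np.zeros(d)
        for k in range(1, 5001):
            step = 1.0 / (10 + k)          # diminishing step size (Robbins-Monro condition)
            w -= step * stochastic_gradient(w)

        print("estimated w:", np.round(w, 3), " true w:", w_true)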

  6. IMPACTS OF GROUP-BASED SIGNAL CONTROL POLICY ON DRIVER BEHAVIOR AND INTERSECTION SAFETY

    Directory of Open Access Journals (Sweden)

    Keshuang TANG

    2008-01-01

Full Text Available Unlike the typical stage-based policy commonly applied in Japan, group-based control (often called movement-based control in the traffic control industry in Japan) refers to a control pattern in which the controller is capable of separately allocating time to each signal group, rather than to each stage, based on traffic demand. In order to investigate its applicability at signalized intersections in Japan, an intersection located in Yokkaichi City of Mie Prefecture was selected as an experimental application site by the Japan Universal Traffic Management Society (UTMS). Based on the data collected at the intersection before and after implementing the group-based control policy, this study evaluated the impacts of such a policy on driver behavior and intersection safety. To specify those impacts, a few models utilizing cycle-based data were first developed to interpret the occurrence probability and rate of red-light-running (RLR). Furthermore, analyses were performed on the yellow-entry time (Ye) of the last cleared vehicle and the post encroachment time (PET) during phase switching. The conclusions indicate that the group-based control policy, along with certain other factors, directly or indirectly influenced the RLR behavior of through and right-turn traffic. Meanwhile, it has potential safety benefits as well, indicated by the decreased Ye and increased PET values.

  7. An Object-Oriented Information Model for Policy-based Management of Distributed Applications

    NARCIS (Netherlands)

    Diaz, G.; Gay, V.C.J.; Horlait, E.; Hamza, M.H.

    2002-01-01

This paper presents an object-oriented information model to support policy-based management of distributed multimedia applications. The information base contains application-level information about the users, the applications, and their profiles. Our information model is described in detail and

  8. Web and Internet-based Capabilities (IbC) Policies - U.S. Department of Defense

    Science.gov (United States)

Policy index page for DoD Web and Internet-based Capabilities (IbC), listing service-specific guidance (Army, Navy, Air Force, Marine Corps) and general DoD policies, including DoD Internet Services and Internet-Based Capabilities, Information Collection under the Paperwork Reduction Act (PRA) (OMB Memo), and Internet domain name guidance.

  9. Macroprudential policies in an agent-based artificial economy

    OpenAIRE

    Raberto, Marco; Teglio, Andrea; Cincotti, Silvano

    2012-01-01

Basel III is a recently agreed regulatory standard for bank capital adequacy with a focus on the macroprudential dimension of banking regulation, i.e., the system-wide implications of banks’ lending and risk. An important Basel III provision is to reduce the procyclicality of present banking regulation and promote countercyclical capital buffers for banks. The Eurace agent-based macroeconomic model and simulator has recently been shown to be able to reproduce a credit-fueled boom-bust dynamics ...

  10. Improving societal acceptance of rad waste management policy decisions: an approach based on complex intelligence

    International Nuclear Information System (INIS)

    Rao, Suman

    2008-01-01

    In today's context elaborate public participation exercises are conducted around the world to elicit and incorporate societal risk perceptions into nuclear policy Decision-Making. However, on many occasions, such as in the case of rad waste management, the society remains unconvinced about these decisions. This naturally leads to the questions: are techniques for incorporating societal risk perceptions into the rad waste policy decision making processes sufficiently mature? How could societal risk perceptions and legal normative principles be better integrated in order to render the decisions more equitable and convincing to society? Based on guidance from socio-psychological research this paper postulates that a critical factor for gaining/improving societal acceptance is the quality and adequacy of criteria for option evaluation that are used in the policy decision making. After surveying three rad waste public participation cases, the paper identifies key lacunae in criteria abstraction processes as currently practiced. A new policy decision support model CIRDA: Complex Intelligent Risk Discourse Abstraction model that is based on the heuristic of Risk-Risk Analysis is proposed to overcome these lacunae. CIRDA's functionality of rad waste policy decision making is modelled as a policy decision-making Abstract Intelligent Agent and the agent program/abstraction mappings are presented. CIRDA is then applied to a live (U.K.) rad waste management case and the advantages of this method as compared to the Value Tree Method as practiced in the GB case are demonstrated. (author)

  11. Developing a nursing personnel policy to address body art using an evidence-based model.

    Science.gov (United States)

    Dorwart, Shawna D; Kuntz, Sandra W; Armstrong, Myrna L

    2010-12-01

    An increase in the prevalence of body art as a form of self-expression has motivated health care organizations to develop policies addressing nursing personnel's body art. A systematic review of literature on body art was completed and a telephone survey of 15 hospitals was conducted to query existing policy statements addressing nursing personnel's body art. The literature established no prevalence of body art among nurses or effect of nurses' body art. Of the 13 hospitals (86%) that shared their policy on body art, none provided a rationale or references to support their existing policies. A lack of published evidence identifying the effect of body art among nurses shifts the burden of determining care outcomes to the leadership of individual hospitals. Further research on patients' perception of nursing personnel with visible body art, using an evidence-based model, is recommended. Copyright 2010, SLACK Incorporated.

  12. A flexible environmental reuse/recycle policy based on economic strength.

    Science.gov (United States)

    Tsiliyannis, C A

    2007-01-01

    Environmental policies based on fixed recycling rates may lead to increased environmental impacts (e.g., landfilled wastes) during economic expansion. A rate policy is proposed, which is adjusted according to the overall strength or weakness of the economy, as reflected by overall packaging demand and consumption, production and imports-exports. During economic expansion featuring rising consumption, production or exports, the proposed flexible policy suggests a higher reuse/recycle rate. During economic slowdown a lower rate results in lower impacts. The flexible target rates are determined in terms of annual data, including consumption, imports-exports and production. Higher environmental gains can be achieved at lower cost if the flexible policy is applied to widely consumed packaging products and materials associated with low rates, or if cleaner recycling technology is adopted.

  13. Condition-based inspection/replacement policies for non-monotone deteriorating systems with environmental covariates

    Energy Technology Data Exchange (ETDEWEB)

    Zhao Xuejing [Universite de Technologie de Troyes, Institut Charles Delaunay and STMR UMR CNRS 6279, 12 rue Marie Curie, 10010 Troyes (France); School of mathematics and statistics, Lanzhou University, Lanzhou 730000 (China); Fouladirad, Mitra, E-mail: mitra.fouladirad@utt.f [Universite de Technologie de Troyes, Institut Charles Delaunay and STMR UMR CNRS 6279, 12 rue Marie Curie, 10010 Troyes (France); Berenguer, Christophe [Universite de Technologie de Troyes, Institut Charles Delaunay and STMR UMR CNRS 6279, 12 rue Marie Curie, 10010 Troyes (France); Bordes, Laurent [Universite de Pau et des Pays de l' Adour, LMA UMR CNRS 5142, 64013 PAU Cedex (France)

    2010-08-15

The aim of this paper is to discuss the problem of modelling and optimising condition-based maintenance policies for a deteriorating system in the presence of covariates. The deterioration is modelled by a non-monotone stochastic process. The covariate process is assumed to be a time-homogeneous Markov chain with finite state space. A model similar to the proportional hazards model is used to show the influence of covariates on the deterioration. In the framework of the system under consideration, an appropriate inspection/replacement policy which minimises the expected average maintenance cost is derived. The average cost under different conditions of covariates and different maintenance policies is analysed through simulation experiments to compare the policies' performances.
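
    The kind of policy analysed in this record can be approximated by a small Monte Carlo experiment. The sketch below simulates periodic inspections of a system whose degradation drift depends on a two-state covariate, triggers a preventive replacement above one threshold and a corrective replacement at the failure level, and estimates the long-run average cost per unit time for several thresholds. The Gaussian increments, covariate switching probability, thresholds and costs are assumptions and not the proportional-hazards-type model of the paper.

        # Monte Carlo sketch of a periodic-inspection / threshold-replacement policy for
        # a system whose degradation drift is modulated by a two-state covariate. All
        # numerical values are illustrative assumptions.
        import random

        def average_cost(inspection_interval=5.0, preventive_threshold=7.0,
                         failure_level=10.0, horizon=50_000.0, seed=1):
            rng = random.Random(seed)
            c_inspect, c_preventive, c_corrective = 5.0, 50.0, 200.0
            drift = {0: 0.10, 1: 0.30}     # degradation rate per unit time in each covariate state
            p_switch = 0.2                 # probability the covariate switches state per period
            t, cost = 0.0, 0.0
            level, state = 0.0, 0
            while t < horizon:
                # non-monotone degradation: Gaussian noise around a state-dependent drift
                level += drift[state] * inspection_interval + rng.gauss(0.0, 0.5)
                level = max(level, 0.0)
                if rng.random() < p_switch:
                    state = 1 - state
                t += inspection_interval
                cost += c_inspect
                if level >= failure_level:            # failure found at inspection
                    cost += c_corrective
                    level = 0.0
                elif level >= preventive_threshold:   # preventive replacement
                    cost += c_preventive
                    level = 0.0
            return cost / horizon

        if __name__ == "__main__":
            for threshold in (6.0, 7.0, 8.0, 9.0):
                print(f"preventive threshold {threshold}: "
                      f"average cost rate ~ {average_cost(preventive_threshold=threshold):.2f}")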

  14. Condition-based inspection/replacement policies for non-monotone deteriorating systems with environmental covariates

    International Nuclear Information System (INIS)

    Zhao Xuejing; Fouladirad, Mitra; Berenguer, Christophe; Bordes, Laurent

    2010-01-01

The aim of this paper is to discuss the problem of modelling and optimising condition-based maintenance policies for a deteriorating system in the presence of covariates. The deterioration is modelled by a non-monotone stochastic process. The covariate process is assumed to be a time-homogeneous Markov chain with finite state space. A model similar to the proportional hazards model is used to show the influence of covariates on the deterioration. In the framework of the system under consideration, an appropriate inspection/replacement policy which minimises the expected average maintenance cost is derived. The average cost under different conditions of covariates and different maintenance policies is analysed through simulation experiments to compare the policies' performances.

  15. Fostering integrity in postgraduate research: an evidence-based policy and support framework.

    Science.gov (United States)

    Mahmud, Saadia; Bretag, Tracey

    2014-01-01

    Postgraduate research students have a unique position in the debate on integrity in research as students and novice researchers. To assess how far policies for integrity in postgraduate research meet the needs of students as "research trainees," we reviewed online policies for integrity in postgraduate research at nine particular Australian universities against the Australian Code for Responsible Conduct of Research (the Code) and the five core elements of exemplary academic integrity policy identified by Bretag et al. (2011 ), i.e., access, approach, responsibility, detail, and support. We found inconsistency with the Code in the definition of research misconduct and a lack of adequate detail and support. Based on our analysis, previous research, and the literature, we propose a framework for policy and support for postgraduate research that encompasses a consistent and educative approach to integrity maintained across the university at all levels of scholarship and for all stakeholders.

  16. Customer based distributed generation: Economics and policy issues

    International Nuclear Information System (INIS)

    Maribu, Karl Magnus

    2005-01-01

This paper presents a model for finding distributed generation systems that maximize the economic benefits for buildings with electricity, heat and cooling loads. Important factors for profitability are identified using simulated data for a standard health care and office building in California. Under the assumed time-of-use (TOU) prices, demand charges are critical factors for profitability. Systems with lower reliability than promised can cause large losses for the developer. The outage risk is a diversifiable risk, hence demand charges should not be a barrier to distributed generation adoption in a well-functioning market. In a variety of natural gas and electricity price scenarios the optimal decision is to install distributed generation units with heat recovery and absorption cooling. The benefit-maximizing solution reduces building carbon emissions in most scenarios. Low natural gas price scenarios have the highest carbon emissions. The introduction of a carbon tax can further reduce emissions. Small photovoltaic systems become profitable at prices around 2.4 $/W and larger systems at prices around 1.8-2 $/W if they are analyzed independently from gas-fueled generators. In competition with natural gas fueled equipment, both the break-even cost and the installed capacity are reduced in both buildings. It is possible to find a profitable solution for real discount rates up to 20 percent under the base case solution for the health care building. High discount rates favor small, less capital-intensive base load generation systems with heat recovery. (Author)

  17. A simulation model for reliability-based appraisal of an energy policy: The case of Lebanon

    International Nuclear Information System (INIS)

    Hamdan, H.A.; Ghajar, R.F.; Chedid, R.B.

    2012-01-01

The Lebanese Electric Power System (LEPS) has suffered from technical and financial deficiencies for decades. It mirrors the problems encountered in many developing countries with inadequate or no power system planning, resulting in incomplete and poorly operating infrastructure, and it suffers from the effects of political instability, huge debts, unavailability of financing for desired projects and inefficiency in operation. The upgrade and development of the system necessitate the adoption of a comprehensive energy policy that introduces solutions to a diversity of problems addressing the technical, financial, administrative and governance aspects of the system. In this paper, an energy policy for Lebanon is proposed and evaluated based on the integration of energy modeling and financial modeling. The paper utilizes the Load Modification Technique (LMT) as a probabilistic tool to assess the impact of policy implementation on energy production, overall cost, technical/commercial losses and reliability. Scenarios reflecting the implementation of policy projects are assessed and their impacts are compared with business-as-usual scenarios which assume no new investment in the sector. Conclusions are drawn on the usefulness of the proposed evaluation methodology and the effectiveness of the adopted energy policy for Lebanon and other developing countries suffering from similar power system problems. - Highlights: ► Evaluation methodology based on a probabilistic simulation tool is proposed. ► A business-as-usual scenario for a given study period of the LEPS was modeled. ► Mitigation scenarios reflecting implementation of the energy policy are modeled. ► Policy simulated and compared with business-as-usual scenarios of the LEPS. ► Results reflect usefulness of proposed methodology and the adopted energy policy.

  18. Ternary gradient metal-organic frameworks.

    Science.gov (United States)

    Liu, Chong; Rosi, Nathaniel L

    2017-09-08

    Gradient MOFs contain directional gradients of either structure or functionality. We have successfully prepared two ternary gradient MOFs based on bMOF-100 analogues, namely bMOF-100/102/106 and bMOF-110/100/102, via cascade ligand exchange reactions. The cubic unit cell parameter discrepancy within an individual ternary gradient MOF crystal is as large as ∼1 nm, demonstrating the impressive compatibility and flexibility of the component MOF materials. Because of the presence of a continuum of unit cells, the pore diameters within individual crystals also change in a gradient fashion from ∼2.5 nm to ∼3.0 nm for bMOF-100/102/106, and from ∼2.2 nm to ∼2.7 nm for bMOF-110/100/102, indicating significant porosity gradients. Like previously reported binary gradient MOFs, the composition of the ternary gradient MOFs can be easily controlled by adjusting the reaction conditions. Finally, X-ray diffraction and microspectrophotometry were used to analyse fractured gradient MOF crystals by comparing unit cell parameters and absorbance spectra at different locations, thus revealing the profile of heterogeneity (i.e. gradient distribution of properties) and further confirming the formation of ternary gradient MOFs.

  19. Study on the temperature gradient evolution of large size nonlinear crystal based on the fluid-solid coupling theory

    Science.gov (United States)

    Sun, F. Z.; Zhang, P.; Liang, Y. C.; Lu, L. H.

    2014-09-01

In non-critical phase-matching (NCPM) along the Θ = 90° direction, ADP and DKDP crystals have many advantages, including a large effective nonlinear optical coefficient, a small PM angular sensitivity and no beam walk-off, which make them competitive candidates for inertial confinement fusion (ICF) facilities; reasonable temperature control of the crystals has therefore become more and more important. In this paper, fluid-solid coupling models of ADP and DKDP crystals, both of which have anisotropic thermal conductivity, were first established for the vacuum and non-vacuum states and then simulated using the fluid analysis software Fluent. The analysis shows that the crystal surface temperature distribution has a ring shape; the temperature gradients along the optical axis of the two crystals are 0.02 °C and 0.01 °C due to the air; the lowest-temperature points of both crystals lie at the centre of the surface and differ by less than 0.09 °C and 0.05 °C between the vacuum and non-vacuum environments; two designs for a heating apparatus are then proposed.
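
    A crude stand-in for the coupled Fluent model described above is a steady-state anisotropic heat-conduction solve, which already exhibits the kind of surface temperature gradients discussed in the abstract. The sketch below uses a Jacobi iteration on a 2D grid with different conductivities in the two directions; the geometry, conductivities and boundary temperatures are arbitrary illustrative values, not the crystal or oven parameters of the paper.

        # 2D steady-state heat conduction with anisotropic conductivity, solved by
        # Jacobi iteration. Grid size, conductivities and boundary temperatures are
        # illustrative assumptions.
        import numpy as np

        nx, ny = 60, 60
        kx, ky = 2.0, 1.0            # anisotropic thermal conductivities (W/mK)
        dx = dy = 1e-3               # grid spacing (m)

        T = np.full((ny, nx), 20.0)  # initial temperature field (degC)
        T[:, 0] = T[:, -1] = 25.0    # heated side walls
        T[0, :] = T[-1, :] = 20.0    # cooler top/bottom faces

        ax, ay = kx / dx**2, ky / dy**2
        for _ in range(20_000):
            T_new = T.copy()
            T_new[1:-1, 1:-1] = (ax * (T[1:-1, 2:] + T[1:-1, :-2]) +
                                 ay * (T[2:, 1:-1] + T[:-2, 1:-1])) / (2 * ax + 2 * ay)
            if np.max(np.abs(T_new - T)) < 1e-6:
                T = T_new
                break
            T = T_new

        centre_line = T[ny // 2, :]
        print(f"temperature spread along the centre line: "
              f"{centre_line.max() - centre_line.min():.3f} degC")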

  20. Size-dependent dynamic stability analysis of microbeams actuated by piezoelectric voltage based on strain gradient elasticity theory

    Energy Technology Data Exchange (ETDEWEB)

    Sahmani, Saeid; Bahrami, Mohsen [Amirkabir University of Technology, Tehran (Iran, Islamic Republic of)

    2015-01-15

In the current paper, dynamic stability analysis of microbeams subjected to piezoelectric voltage is presented in which the microbeam is integrated with piezoelectric layers on the lower and upper surfaces. Both the flutter and divergence instabilities of microbeams with clamped-clamped and clamped-free boundary conditions are predicted corresponding to various values of applied voltage. To take size effects into account, the classical Timoshenko beam theory in conjunction with strain gradient elasticity theory is utilized to develop a nonclassical beam model containing three additional internal length scale parameters. By using Hamilton's principle, the higher-order governing differential equations and associated boundary conditions are derived. Afterward, the generalized differential quadrature method is employed to discretize the size-dependent governing differential equations along with clamped-clamped and clamped-free end supports. The critical piezoelectric voltages corresponding to various values of the dimensionless length scale parameter are evaluated and compared with those predicted by the classical beam theory. It is revealed that in the case of clamped-free boundary conditions, both flutter and divergence instabilities occur. However, for the clamped-clamped microbeams, only divergence instability takes place.
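
    The generalized differential quadrature (GDQ) method mentioned in this abstract approximates derivatives at grid points as weighted sums of the nodal function values. The sketch below computes the first-derivative weighting matrix on a Chebyshev-Gauss-Lobatto grid from the Lagrange-polynomial formula and checks it on a test function; it illustrates only the discretization step, not the size-dependent beam equations of the paper.

        # Sketch of the generalized differential quadrature (GDQ) discretization:
        # first-derivative weighting coefficients on a Chebyshev-Gauss-Lobatto grid,
        # verified on a simple test function. Grid size and test function are arbitrary.
        import numpy as np

        def gdq_first_derivative_matrix(x):
            """Weighting matrix A with (du/dx)(x_i) ~ sum_j A[i, j] * u(x_j)."""
            n = len(x)
            # P[i] = product over k != i of (x_i - x_k), from the Lagrange basis polynomials
            P = np.array([np.prod([x[i] - x[k] for k in range(n) if k != i]) for i in range(n)])
            A = np.zeros((n, n))
            for i in range(n):
                for j in range(n):
                    if i != j:
                        A[i, j] = P[i] / ((x[i] - x[j]) * P[j])
                A[i, i] = -np.sum(A[i, :])      # row sums of a differentiation matrix are zero
            return A

        # Chebyshev-Gauss-Lobatto points mapped to [0, 1] (a common GDQ grid)
        n = 15
        x = 0.5 * (1.0 - np.cos(np.pi * np.arange(n) / (n - 1)))
        A = gdq_first_derivative_matrix(x)

        u = np.sin(2 * np.pi * x)
        du_exact = 2 * np.pi * np.cos(2 * np.pi * x)
        print("max GDQ error:", np.max(np.abs(A @ u - du_exact)))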

  1. Highly transparent, stable, and superhydrophobic coatings based on gradient structure design and fast regeneration from physical damage

    Science.gov (United States)

    Chen, Zao; Liu, Xiaojiang; Wang, Yan; Li, Jun; Guan, Zisheng

    2015-12-01

Optical transparency, mechanical flexibility, and fast regeneration are important factors to expand the application of superhydrophobic surfaces. Herein, we fabricated highly transparent, stable, and superhydrophobic coatings through a novel gradient structure design by versatile dip-coating of silica colloid particles (SCPs) and diethoxydimethylsilane cross-linked silica nanoparticles (DDS-SNPs) on polyethylene terephthalate (PET) film and glass, followed by modification with octadecyltrichlorosilane (OTCS). When the DDS concentration reached 5 wt%, the modified SCPs/DDS-SNPs coating exhibited a water contact angle (WCA) of 153° and a low sliding angle (SA); the transmittance of the coated PET film and glass was increased by 2.7% and 1% in the visible wavelength range, respectively. This superhydrophobic coating also showed good robustness and stability against water-drop impact, ultrasonic damage, and acid solution. Moreover, the superhydrophobic PET film can, after physical damage, quickly regain its superhydrophobicity through a one-step spray of a regenerative solution of dodecyltrichlorosilane (DTCS)-modified silica nanoparticles at room temperature. The demonstrated method for the preparation and regeneration of superhydrophobic coatings is suitable for different substrates and large-scale production at room temperature.

  2. Optimal Allocation of Thermal-Electric Decoupling Systems Based on the National Economy by an Improved Conjugate Gradient Method

    Directory of Open Access Journals (Sweden)

    Shuang Rong

    2015-12-01

Full Text Available Aiming to relieve the large amount of wind power curtailment during the heating period in the North China region, a thermal-electric decoupling (TED) approach is proposed to both relax the constraint of forced power output of combined heat and power plants and increase the electric load level during valley-load times, thereby assisting the power grid in consuming more wind power. The operating principles of the thermal-electric decoupling approach are described, the mathematical model of its profits is developed, and the constraint conditions of its operation are listed; in addition, an improved parallel conjugate gradient method is utilized to bypass the saddle-point problem and accelerate convergence. Numerical simulations reveal that an optimal allocation of TED with a rated power of 280 MW and a heat storage capacity of 185 MWh is possible. This allocation of TED could bring approximately 16.9 billion Yuan of economic profit and consume more than 80% of the surplus wind energy that would otherwise be curtailed without the participation of TED. The results of this article verify the effectiveness of the method, which could provide referential guidance for thermal-electric decoupling system allocation in practice.
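
    The optimizer family used in this record can be illustrated independently of the TED economics. The sketch below is a plain nonlinear conjugate gradient method (Fletcher-Reeves coefficient, backtracking line search, periodic restart) applied to a standard test function; it is not the paper's improved parallel variant, and the test problem is arbitrary.

        # Generic nonlinear conjugate gradient (Fletcher-Reeves with periodic restart and
        # a simple backtracking line search), shown on the Rosenbrock test function.
        import numpy as np

        def rosenbrock(x):
            return np.sum(100.0 * (x[1:] - x[:-1]**2)**2 + (1 - x[:-1])**2)

        def rosenbrock_grad(x):
            g = np.zeros_like(x)
            g[:-1] = -400.0 * x[:-1] * (x[1:] - x[:-1]**2) - 2.0 * (1 - x[:-1])
            g[1:] += 200.0 * (x[1:] - x[:-1]**2)
            return g

        def conjugate_gradient(f, grad, x0, iters=500, restart=20):
            x = x0.astype(float)
            g = grad(x)
            d = -g
            for k in range(iters):
                # backtracking line search satisfying a sufficient-decrease condition
                alpha = 1.0
                while f(x + alpha * d) > f(x) + 1e-4 * alpha * (g @ d):
                    alpha *= 0.5
                    if alpha < 1e-12:
                        break
                x_new = x + alpha * d
                g_new = grad(x_new)
                beta = (g_new @ g_new) / (g @ g)        # Fletcher-Reeves coefficient
                d = -g_new + beta * d
                if (k + 1) % restart == 0 or g_new @ d >= 0:
                    d = -g_new                          # restart if not a descent direction
                x, g = x_new, g_new
                if np.linalg.norm(g) < 1e-8:
                    break
            return x

        if __name__ == "__main__":
            x_opt = conjugate_gradient(rosenbrock, rosenbrock_grad, np.array([-1.2, 1.0, 1.0, 1.0]))
            print("minimizer ~", np.round(x_opt, 4))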

  3. From theory based policy evaluation to SMART Policy Design: Lessons learned from 20 ex-post evaluations of energy efficiency instruments

    International Nuclear Information System (INIS)

    Harmelink, Mirjam; Harmsen, Robert; Nilsson, Lars

    2007-01-01

This article presents the results of an in-depth ex-post analysis of 20 energy efficiency policy instruments applied across different sectors and countries. Within the AID-EE project, we reconstructed and analysed the implementation process of energy efficiency policy instruments with the aim of identifying key factors behind successes and failures. The analysis was performed using a uniform methodology called 'theory-based policy evaluation'. With this method the whole implementation process is assessed with the aim of identifying: (i) the main hurdles in each step of the implementation process, (ii) key success factors for different types of instruments and (iii) the key indicators that need to be monitored to enable a sound evaluation of the energy efficiency instruments. Our analysis shows that: Energy efficiency policies often lack quantitative targets and clear timeframes; Often policy instruments have multiple and/or unclear objectives; The need for monitoring information often does not have priority in the design phase; For most instruments, monitoring information is collected on a regular basis. However, this information is often insufficient to determine the impact on energy saving, cost-effectiveness and target achievement of an instrument; Monitoring and verification of actual energy savings have a relatively low priority for most of the analyzed instruments. There is no such thing as the 'best' policy instrument. However, typical circumstances in which to apply different types of instruments and generic characteristics that determine success or failure can be identified. Based on the assessments and the experience from applying theory-based policy evaluation ex-post, we suggest that this should already be used in the policy formulation and design phase of instruments. We conclude that making policy theory an integral and mandated part of the policy process would facilitate more efficient and effective energy efficiency instruments.

  4. Evidence-based policy learning: the case of the research excellence indicator

    Energy Technology Data Exchange (ETDEWEB)

    Hardeman, S.; Vertesy, D.

    2016-07-01

Excellence is arguably the single most important concept in academia today, especially when it comes to science policy making. At the same time, however, excellence leads to a great amount of discomfort, leading some to plead for an overall rejection of the concept. The discomfort with excellence reaches its heights whenever proposals are made for measuring it. Yet, especially given the period of professionalization science policy making finds itself in, these same metrics are frequently called upon to legitimate policy interventions. Excellence and its measurement, it seems therefore, is something we can neither live with nor without. This paper offers some middle ground in the debate on excellence and its measurement for science policy purposes. Using the case of the European Commission’s Research Excellence Indicator as an example, we show how the development and use of indicators offer an opportunity for learning in science policy making. Ultimately, therefore, we show how and in what ways measuring excellence can contribute to evidence-based science policy learning in practice. (Author)

  5. Improving the evidence base for energy policy: The role of systematic reviews

    International Nuclear Information System (INIS)

    Sorrell, Steve

    2007-01-01

    The concept of evidence-based policy and practice (EBPP) has gained increasing prominence in the UK over the last 10 years and now plays a dominant role in a number of policy areas, including healthcare, education, social work, criminal justice and urban regeneration. But despite this substantial, influential and growing activity, the concept remains largely unknown to policymakers and researchers within the energy field. This paper defines EBPP, identifies its key features and examines the potential role of systematic reviews of evidence in a particular area of policy. It summarises the methods through which systematic reviews are achieved; discusses their advantages and limitations; identifies the particular challenges they face in the energy policy area; and assesses whether and to what extent they can usefully be applied to contemporary energy policy questions. The concept is illustrated with reference to a proposed review of evidence for a 'rebound effect' from improved energy efficiency. The paper concludes that systematic reviews may only be appropriate for a subset of energy policy questions and that research-funding priorities may need to change if their use is to become more widespread

  6. Sources of variation in innate immunity in great tit nestlings living along a metal pollution gradient: An individual-based approach

    International Nuclear Information System (INIS)

    Vermeulen, Anke; Müller, Wendt; Matson, Kevin D.; Irene Tieleman, B.; Bervoets, Lieven; Eens, Marcel

    2015-01-01

    Excessive deposition of metals in the environment is a well-known example of pollution worldwide. Chronic exposure of organisms to metals can have a detrimental effect on reproduction, behavior, health and survival, due to the negative effects on components of the immune system. However, little is known about the effects of chronic sublethal metal exposure on immunity, especially for wildlife. In our study, we examined the constitutive innate immunity of great tit (Parus major) nestlings (N = 234) living in four populations along a metal pollution gradient. For each nestling, we determined the individual metal concentrations (lead, cadmium, arsenic) present in the red blood cells and measured four different innate immune parameters (agglutination, lysis, haptoglobin concentrations and nitric oxide concentrations) to investigate the relationship between metal exposure and immunological condition. While we found significant differences in endogenous metal concentrations among populations with the highest concentrations closest to the pollution source, we did not observe corresponding patterns in our immune measures. However, when evaluating relationships between metal concentrations and immune parameters at the individual level, we found negative effects of lead and, to a lesser extent, arsenic and cadmium on lysis. In addition, high arsenic concentrations appear to elicit inflammation, as reflected by elevated haptoglobin concentrations. Thus despite the lack of a geographic association between pollution and immunity, this type of association was present at the individual level at a very early life stage. The high variation in metal concentrations and immune measures observed within populations indicates a high level of heterogeneity along an existing pollution gradient. Interestingly, we also found substantial within nest variation, for which the sources remain unclear, and which highlights the need of an individual-based approach. - Highlights: • Innate immunity

  7. Sources of variation in innate immunity in great tit nestlings living along a metal pollution gradient: An individual-based approach

    Energy Technology Data Exchange (ETDEWEB)

    Vermeulen, Anke, E-mail: anke.vermeulen@uantwerpen.be [Department of Biology — Ethology, University of Antwerp, Universiteitsplein 1, 2610 Wilrijk (Belgium); Müller, Wendt, E-mail: wendt.mueller@uantwerpen.be [Department of Biology — Ethology, University of Antwerp, Universiteitsplein 1, 2610 Wilrijk (Belgium); Matson, Kevin D., E-mail: k.d.matson@rug.nl [Animal Ecology Group, Centre for Ecological and Evolutionary Studies, University of Groningen, P.O. Box 11103, 9700 CC Groningen (Netherlands); The Resource Ecology Group, Department of Environmental Sciences, Wageningen University, Droevendaalsesteeg 3a, 6708PB Wageningen (Netherlands); Irene Tieleman, B., E-mail: b.i.tieleman@rug.nl [Animal Ecology Group, Centre for Ecological and Evolutionary Studies, University of Groningen, P.O. Box 11103, 9700 CC Groningen (Netherlands); Bervoets, Lieven, E-mail: lieven.bervoets@uantwerpen.be [Department of Biology — SPHERE, University of Antwerp, Groenenborgerlaan 171, 2020 Antwerpen (Belgium); Eens, Marcel, E-mail: marcel.eens@uantwerpen.be [Department of Biology — Ethology, University of Antwerp, Universiteitsplein 1, 2610 Wilrijk (Belgium)

    2015-03-01

    Excessive deposition of metals in the environment is a well-known example of pollution worldwide. Chronic exposure of organisms to metals can have a detrimental effect on reproduction, behavior, health and survival, due to the negative effects on components of the immune system. However, little is known about the effects of chronic sublethal metal exposure on immunity, especially for wildlife. In our study, we examined the constitutive innate immunity of great tit (Parus major) nestlings (N = 234) living in four populations along a metal pollution gradient. For each nestling, we determined the individual metal concentrations (lead, cadmium, arsenic) present in the red blood cells and measured four different innate immune parameters (agglutination, lysis, haptoglobin concentrations and nitric oxide concentrations) to investigate the relationship between metal exposure and immunological condition. While we found significant differences in endogenous metal concentrations among populations with the highest concentrations closest to the pollution source, we did not observe corresponding patterns in our immune measures. However, when evaluating relationships between metal concentrations and immune parameters at the individual level, we found negative effects of lead and, to a lesser extent, arsenic and cadmium on lysis. In addition, high arsenic concentrations appear to elicit inflammation, as reflected by elevated haptoglobin concentrations. Thus despite the lack of a geographic association between pollution and immunity, this type of association was present at the individual level at a very early life stage. The high variation in metal concentrations and immune measures observed within populations indicates a high level of heterogeneity along an existing pollution gradient. Interestingly, we also found substantial within nest variation, for which the sources remain unclear, and which highlights the need of an individual-based approach. - Highlights: • Innate immunity

  8. Age gradients in the stellar populations of massive star forming regions based on a new stellar chronometer

    Energy Technology Data Exchange (ETDEWEB)

    Getman, Konstantin V.; Feigelson, Eric D.; Kuhn, Michael A.; Broos, Patrick S.; Townsley, Leisa K.; Luhman, Kevin L. [Department of Astronomy and Astrophysics, 525 Davey Laboratory, Pennsylvania State University, University Park, PA 16802 (United States); Naylor, Tim [School of Physics and Astronomy, University of Exeter, Stocker Road, Exeter, EX4 4QL (United Kingdom); Povich, Matthew S. [Department of Physics and Astronomy, California State Polytechnic University, 3801 West Temple Avenue, Pomona, CA 91768 (United States); Garmire, Gordon P. [Huntingdon Institute for X-ray Astronomy, LLC, 10677 Franks Road, Huntingdon, PA 16652 (United States)

    2014-06-01

A major impediment to understanding star formation in massive star-forming regions (MSFRs) is the absence of a reliable stellar chronometer to unravel their complex star formation histories. We present a new estimation of stellar ages using a new method that employs near-infrared (NIR) and X-ray photometry, Age_JX. Stellar masses are derived from X-ray luminosities using the L_X-M relation from the Taurus cloud. J-band luminosities are compared to mass-dependent pre-main-sequence (PMS) evolutionary models to estimate ages. Age_JX is sensitive to a wide range of evolutionary stages, from disk-bearing stars embedded in a cloud to widely dispersed older PMS stars. The Massive Young Star-Forming Complex Study in Infrared and X-ray (MYStIX) project characterizes 20 OB-dominated MSFRs using X-ray, mid-infrared, and NIR catalogs. The Age_JX method has been applied to 5525 out of 31,784 MYStIX Probable Complex Members. We provide a homogeneous set of median ages for over 100 subclusters in 15 MSFRs; median subcluster ages range between 0.5 Myr and 5 Myr. The important science result is the discovery of age gradients across MYStIX regions. The wide MSFR age distribution appears as spatially segregated structures with different ages. The Age_JX ages are youngest in obscured locations in molecular clouds, intermediate in revealed stellar clusters, and oldest in distributed populations. The NIR color index J - H, a surrogate measure of extinction, can serve as an approximate age predictor for young embedded clusters.

  9. Is counter-terrorism policy evidence-based? What works, what harms, and what is unknown.

    Science.gov (United States)

    Lum, Cynthia; Kennedy, Leslie W; Sherley, Alison

    2008-02-01

One of the central concerns surrounding counter-terrorism interventions today, given the attention and money spent on them, is whether such interventions are effective. To explore this issue, we conducted a general review of terrorism literature as well as a Campbell systematic review on counter-terrorism strategies. In this article, we summarize some of our findings from these works. Overall, we found an almost complete absence of evaluation research on counter-terrorism strategies and conclude that counter-terrorism policy is not evidence-based. The findings of this review emphasise the need for government leaders, policy makers, researchers, and funding agencies to include and insist on evaluations of the effectiveness of these programs in their agendas.

  10. Consumers’ evaluation of national new energy vehicle policy in China: An analysis based on a four paradigm model

    International Nuclear Information System (INIS)

    Li, Wenbo; Long, Ruyin; Chen, Hong

    2016-01-01

The Chinese government has issued numerous policies to promote the development and adoption of new energy vehicles (NEVs) to address the problem of excessive energy consumption and environmental pollution. In this study we divided these policies into seven categories: macroscopic, demonstration, subsidization, preferential tax, technical support, industry management, and infrastructure. Since consumers’ opinions affect the policy choices of government, based on questionnaire data we use a four paradigm model to analyze the consumers’ evaluation of each policy in terms of perceptions of importance and satisfaction. The results show that macroscopic policies are perceived to be of high importance and satisfaction, whereas industry management policies are perceived to be of low importance and satisfaction. The importance perceptions of preferential tax and demonstration policies are low, whereas perceptions of their satisfaction are high. Perceptions of the importance of subsidization, technical support, and infrastructure policies are high, whereas perceptions of their satisfaction are low. We find that the subsidization, technical support, and infrastructure policies need urgent improvement. Finally, we put forward several suggestions to improve the current policies and increase the consumers’ intention to adopt NEVs. - Highlights: • This study divided Chinese NEV-related policies into seven types. • This study analyzed consumers’ evaluation of NEV-related policies. • Consumers’ evaluations about NEV-related policies were diverse. • Subsidization, technical support, and infrastructure policies need improvement.
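
    The four paradigm model referred to in this abstract classifies each policy category by whether its perceived importance and satisfaction lie above or below the overall means, similar to an importance-performance grid. The sketch below performs such a quadrant classification on hypothetical scores; the numbers are not the survey results of the paper.

        # Four-quadrant (importance x satisfaction) classification sketch. Scores are
        # hypothetical placeholders on a 1-5 scale, not the paper's survey data.
        policies = {
            "macroscopic": (4.3, 4.1),
            "demonstration": (3.2, 3.9),
            "subsidization": (4.5, 2.8),
            "preferential tax": (3.1, 3.8),
            "technical support": (4.4, 2.9),
            "industry management": (3.0, 2.7),
            "infrastructure": (4.6, 2.5),
        }

        imp_mean = sum(i for i, _ in policies.values()) / len(policies)
        sat_mean = sum(s for _, s in policies.values()) / len(policies)

        def quadrant(importance, satisfaction):
            if importance >= imp_mean and satisfaction >= sat_mean:
                return "keep up the good work"
            if importance >= imp_mean:
                return "concentrate improvement here"
            if satisfaction >= sat_mean:
                return "possible overkill"
            return "low priority"

        for name, (imp, sat) in policies.items():
            print(f"{name:20s} -> {quadrant(imp, sat)}")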

  11. Exploring Dutch surgeons' views on volume-based policies: a qualitative interview study.

    Science.gov (United States)

    Mesman, Roos; Faber, Marjan J; Westert, Gert P; Berden, Bart

    2018-01-01

    Objective In many countries, the evidence for volume-outcome associations in surgery has been transferred into policy. Despite the large body of research that exists on the topic, qualitative studies aimed at surgeons' views on, and experiences with, these volume-based policies are lacking. We interviewed Dutch surgeons to gain more insight into the implications of volume-outcome policies for daily clinical practice, as input for effective surgical quality improvement. Methods Semi-structured interviews were conducted with 20 purposively selected surgeons from a stratified sample for hospital type and speciality. The interviews were recorded, transcribed verbatim and underwent inductive content analysis. Results Two overarching themes were inductively derived from the data: (1) minimum volume standards and (2) implications of volume-based policies. Although surgeons acknowledged the premise 'more is better', they were critical about the validity and underlying evidence for minimum volume standards. Patients often inquire about caseload, which is met with both understanding and discomfort. Surgeons offered many examples of controversies surrounding the process of determining thresholds as well as the ways in which health insurers use volume as a purchasing criterion. Furthermore, being held accountable for caseload may trigger undesired strategic behaviour, such as unwarranted operations. Volume-based policies also have implications for the survival of low-volume providers and affect patient travel times, although the latter is not necessarily problematic in the Dutch context. Conclusions Surgeons in this study acknowledged that more volume leads to better quality. However, validity issues, undesired strategic behaviour and the ways in which minimum volume standards are established and applied have made surgeons critical of current policy practice. These findings suggest that volume remains a controversial quality measure and causes polarization that is not

  12. Developing consensus-based policy solutions for medicines adherence for Europe: a delphi study

    Science.gov (United States)

    2012-01-01

    Background Non-adherence to prescribed medication is a pervasive problem that can incur serious effects on patients’ health outcomes and well-being, and the availability of resources in healthcare systems. This study aimed to develop practical consensus-based policy solutions to address medicines non-adherence for Europe. Methods A four-round Delphi study was conducted. The Delphi Expert Panel comprised 50 participants from 14 countries and was representative of: patient/carers organisations; healthcare providers and professionals; commissioners and policy makers; academics; and industry representatives. Participants engaged in the study remotely, anonymously and electronically. Participants were invited to respond to open questions about the causes, consequences and solutions to medicines non-adherence. Subsequent rounds refined responses, and sought ratings of the relative importance, and operational and political feasibility of each potential solution to medicines non-adherence. Feedback of individual and group responses was provided to participants after each round. Members of the Delphi Expert Panel and members of the research group participated in a consensus meeting upon completion of the Delphi study to discuss and further refine the proposed policy solutions. Results 43 separate policy solutions to medication non-adherence were agreed by the Panel. 25 policy solutions were prioritised based on composite scores for importance, and operational and political feasibility. Prioritised policy solutions focused on interventions for patients, training for healthcare professionals, and actions to support partnership between patients and healthcare professionals. Few solutions concerned actions by governments, healthcare commissioners, or interventions at the system level. Conclusions Consensus about practical actions necessary to address non-adherence to medicines has been developed for Europe. These actions are also applicable to other regions. Prioritised

  13. Investigation of orientation gradients around a hard Laves particle in a warm-rolled Fe3Al-based alloy using a 3D EBSD-FIB technique

    International Nuclear Information System (INIS)

    Konrad, J.; Zaefferer, S.; Raabe, D.

    2006-01-01

We present a study of the microstructure around a hard Laves particle in a warm-rolled intermetallic Fe3Al-based alloy. The experiments are conducted using a system for three-dimensional orientation microscopy (3D electron backscattering diffraction, EBSD). The approach is realized by a combination of a focused ion beam (FIB) unit for serial sectioning with high-resolution field emission scanning electron microscopy with EBSD. We observe the formation of steep 3D orientation gradients in the Fe3Al matrix around the rigid precipitate which entail in part particle-stimulated nucleation events in the immediate vicinity of the particle. The orientation gradients assume a characteristic pattern around the particle in the transverse plane while revealing an elongated tubular morphology in the rolling direction. However, they do not reveal a characteristic common rotation axis. Recovered areas in the matrix appear both in the transverse and rolling directions around the particle. The work demonstrates that the new 3D EBSD-FIB technique provides a new level of microstructure information that cannot be achieved by conventional 2D-EBSD analysis

  14. Prediction of the chromatographic retention of acid-base compounds in pH buffered methanol-water mobile phases in gradient mode by a simplified model.

    Science.gov (United States)

    Andrés, Axel; Rosés, Martí; Bosch, Elisabeth

    2015-03-13

    Retention of ionizable analytes under gradient elution depends on the pH of the mobile phase, the pKa of the analyte and their evolution along the programmed gradient. In previous work, a model depending on two fitting parameters was recommended because of its very favorable relationship between accuracy and required experimental work. It was developed using acetonitrile as the organic modifier and involves pKa modeling by means of equations that take into account the acidic functional group of the compound (carboxylic acid, protonated amine, etc.). In this work, the two-parameter predicting model is tested and validated using methanol as the organic modifier of the mobile phase and several compounds of higher pharmaceutical relevance and structural complexity as testing analytes. The results have been quite good overall, showing that the predicting model is applicable to a wide variety of acid-base compounds using mobile phases prepared with acetonitrile or methanol. Copyright © 2015 Elsevier B.V. All rights reserved.

  15. Informing principal policy reforms in South Africa through data-based evidence

    Directory of Open Access Journals (Sweden)

    Gabrielle Wills

    2015-12-01

Full Text Available In the past decade there has been a notable shift in South African education policy that raises the value of school leadership as a lever for learning improvements. Despite a growing discourse on school leadership, there has been a lack of empirically based evidence on principals to inform, validate or debate the efficacy of proposed policies in raising the calibre of school principals. Drawing on findings from a larger study to understand the labour market for school principals in South Africa, this paper highlights four overarching characteristics of this market with implications for informing principal policy reforms. The paper notes that improving the design and implementation of policies guiding the appointment process for principals is a matter of urgency. A substantial and increasing number of principal replacements are taking place across South African schools given a rising age profile of school principals. In a context of low levels of principal mobility and high tenure, the leadership trajectory of the average school is established for nearly a decade with each principal replacement. Evidence-based policy making has a strong role to play in getting this right.

  16. Evidence-based medicine - an appropriate tool for evidence-based health policy? A case study from Norway.

    Science.gov (United States)

    Malterud, Kirsti; Bjelland, Anne Karen; Elvbakken, Kari Tove

    2016-03-05

    Evidence-based policy (EBP), a concept modelled on the principles of evidence-based medicine (EBM), is widely used in different areas of policymaking. Systematic reviews (SRs) with meta-analyses gradually became the methods of choice for synthesizing research evidence about interventions and judgements about quality of evidence and strength of recommendations. Critics have argued that the relation between research evidence and service policies is weak, and that the notion of EBP rests on a misunderstanding of policy processes. Having explored EBM standards and knowledge requirements for health policy decision-making, we present an empirical point of departure for discussing the relationship between EBM and EBP. In a case study exploring the Norwegian Knowledge Centre for the Health Services (NOKC), an independent government unit, we first searched for information about the background and development of the NOKC to establish a research context. We then identified, selected and organized official NOKC publications as an empirical sample of typical top-of-the-line knowledge delivery adhering to EBM standards. Finally, we explored conclusions in this type of publication, specifically addressing their potential as policy decision tools. From a total sample of 151 SRs published by the NOKC in the period 2004-2013, a purposive subsample from 2012 (14 publications) advised major caution about their conclusions because of the quality or relevance of the underlying documentation. Although the case study did not include a systematic investigation of uptake and policy consequences, SRs were found to be inappropriate as universal tools for health policy decision-making. The case study demonstrates that EBM is not necessarily suited to knowledge provision for every kind of policy decision-making. Our analysis raises the question of whether the evidence-based movement, represented here by an independent government organization, undertakes too broad a range of commissions using

  17. The Development and Application of Policy-Based Tools for Institutional Green Buildings

    Science.gov (United States)

    Cupido, Anthony F.

    2010-01-01

In 2008, APPA forwarded a Web-based survey on the author's behalf to all designated representatives of APPA member institutions. The purpose of the survey was to determine if institutional policies are an important criterion for an institution's sustainable building practices and the use of Leadership in Energy and Environmental Design (LEED®)…

  18. An Exploration of the System Dynamics Field : A Model-Based Policy Analysis

    NARCIS (Netherlands)

    Rose, A.C.

    2014-01-01

    This report presents a first look study at the field of System Dynamics. The objective of the study is to perform a model-based policy analysis in order to investigate the future advancement of the System Dynamics field. The aim of this investigation is to determine what this advancement should look

  19. Sustaining School-Based Asthma Interventions through Policy and Practice Change

    Science.gov (United States)

    Carpenter, Laurie M.; Lachance, Laurie; Wilkin, Margaret; Clark, Noreen M.

    2013-01-01

    Background: Schools are an ideal setting for implementation of asthma interventions for children; however, sustaining school-based programs can be challenging. This study illustrates policy and practice changes brought about through the Childhood Asthma Linkages in Missouri (CALM) program to sustain such programs. Methods: Researchers analyzed…

  20. Evidence-based policy for environmental sustainability: a path forward for South Africa

    CSIR Research Space (South Africa)

    Funke, Nicola S

    2009-03-01

    Full Text Available This handbook summarises the background behind, format of and lessons learned from a Collaborative Workshop on Evidence-based Policy-making that was held in November 2008. The workshop was organised by CSIR and funded by the UK's Department of Food...

  1. Selecting, Evaluating and Creating Policies for Computer-Based Resources in the Behavioral Sciences and Education.

    Science.gov (United States)

    Richardson, Linda B., Comp.; And Others

    This collection includes four handouts: (1) "Selection Critria Considerations for Computer-Based Resources" (Linda B. Richardson); (2) "Software Collection Policies in Academic Libraries" (a 24-item bibliography, Jane W. Johnson); (3) "Circulation and Security of Software" (a 19-item bibliography, Sara Elizabeth Williams); and (4) "Bibliography of…

  2. Modernizing Schools in Mexico: The Rise of Teacher Assessment and School-Based Management Policies

    Science.gov (United States)

    Echávarri, Jaime; Peraza, Cecilia

    2017-01-01

    In this paper we analyze the evolution of the teacher assessment policy and the origins of school-based management initiatives in the Mexican education context from the late 1980s until the last 2012-2013 Education Reform (RE2012-2013). Mexico joined the Global Education Reform Movement during the 1990s through the National Agreement for the…

  3. Attempts to Dodge Drowning in Data : Rule- and Risk-Based Anti Money Laundering Policies Compared

    NARCIS (Netherlands)

    Unger, B.; van Waarden, F.

Both in the US and in Europe, anti-money laundering policy switched from a rule- to a risk-based reporting system in order to avoid over-reporting by the private sector. However, reporting increased in most countries, while the quality of information decreased. Governments drowned in data because

  4. Organizations' ways of employing early retirees: the role of age-based HR policies

    NARCIS (Netherlands)

    Oude Mulders, J.; Henkens, K.; Schippers, J.

    2015-01-01

    Purpose of the Study: We examine whether from an organizational perspective it is possible to distinguish different ways of employing early retirees and explore how the employment of early retirees is related to the application of 4 age-based human resource (HR) policies, namely demotion, offering

  5. Policy Change and Its Effect on Australian Community-Based Natural Resource Management Practices

    Science.gov (United States)

    Cooke, Penelope R.; Hemmings, Brian C.

    2016-01-01

    The authors of this article report on a qualitative study of Australian community-based natural resource management groups known as Landcare groups. They discuss how four Landcare groups contributed to sustainability practices and how a policy change implemented in 2003 influenced the efforts of the groups to remain active in their activities.…

  6. Organizations' Ways of Employing Early Retirees : The Role of Age-Based HR Policies

    NARCIS (Netherlands)

    Oude Mulders, Jaap; Henkens, Kène; Schippers, Joop

    PURPOSE OF THE STUDY: We examine whether from an organizational perspective it is possible to distinguish different ways of employing early retirees and explore how the employment of early retirees is related to the application of 4 age-based human resource (HR) policies, namely demotion, offering

  7. Toward Coordinated Robust Allocation of Reserve Policies for a Cell-based Power System

    DEFF Research Database (Denmark)

    Hu, Junjie; Heussen, Kai; Claessens, Bert

    2016-01-01

Conventional regulation reserves have fixed participation factors and are thus not well suited to utilize the differentiated capabilities of ancillary service providers. This study applies linear decision rule-based (LDR) control policies, which effectively adapt the present participation factor in dependence of the imbalance signal of previous time steps. The LDR policies are centrally computed using a robust optimization approach which takes into account both the covariances of historic imbalance signals and the operational flexibility of ancillary service providers. The concept is then extended to the cooperation of multiple cells. Two illustrating examples are presented to show the functioning of the proposed LDR method.
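
    The structure of a linear decision rule reserve policy can be shown in a few lines of code: each provider's activation is an affine function of the current and previous imbalance, and the coefficients sum to one (current term) and zero (lagged term) so that the total activation always covers the imbalance. In the sketch below the coefficients are picked by hand for illustration, whereas the paper computes them by robust optimization from imbalance covariances and provider flexibility limits; the imbalance trace is synthetic.

        # Illustration of a linear decision rule (LDR) reserve policy:
        # r_i(t) = alpha_i * s(t) + beta_i * s(t-1), with sum(alpha) = 1 and sum(beta) = 0.
        # Coefficients are hand-picked for illustration; the imbalance signal is synthetic.
        import numpy as np

        rng = np.random.default_rng(3)
        s = rng.normal(0.0, 10.0, size=500)          # synthetic imbalance signal (MW)

        providers = {
            # name: (alpha, beta) -- a slow unit sheds part of its share onto a fast unit
            "slow-chp":     (0.6, -0.2),
            "fast-battery": (0.4,  0.2),
        }
        assert abs(sum(a for a, _ in providers.values()) - 1.0) < 1e-12
        assert abs(sum(b for _, b in providers.values())) < 1e-12

        activations = {}
        for name, (alpha, beta) in providers.items():
            r = alpha * s[1:] + beta * s[:-1]
            activations[name] = r
            print(f"{name:13s}: peak |activation| = {np.max(np.abs(r)):6.1f} MW")

        total = sum(activations.values())
        print("max coverage error:", np.max(np.abs(total - s[1:])))  # should be ~0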

  8. Written institutional ethics policies on euthanasia: an empirical-based organizational-ethical framework.

    Science.gov (United States)

    Lemiengre, Joke; Dierckx de Casterlé, Bernadette; Schotsmans, Paul; Gastmans, Chris

    2014-05-01

    As euthanasia has become a widely debated issue in many Western countries, hospitals and nursing homes especially are increasingly being confronted with this ethically sensitive societal issue. The focus of this paper is how healthcare institutions can deal with euthanasia requests on an organizational level by means of a written institutional ethics policy. The general aim is to make a critical analysis of whether these policies can be considered as organizational-ethical instruments that support healthcare institutions in taking their institutional responsibility for dealing with euthanasia requests. By means of an interpretative analysis, we conducted a process of reinterpretation of results of former Belgian empirical studies on written institutional ethics policies on euthanasia in dialogue with the existing international literature. The study findings revealed that legal regulations, ethical and care-oriented aspects strongly affected the development, the content, and the impact of written institutional ethics policies on euthanasia. Hence, these three cornerstones (law, care and ethics) constituted the basis for the empirical-based organizational-ethical framework for written institutional ethics policies on euthanasia that is presented in this paper. However, having a euthanasia policy does not automatically lead to more legal transparency, or to a more professional and ethical care practice. The study findings suggest that the development and implementation of an ethics policy on euthanasia as an organizational-ethical instrument should be considered as a dynamic process. Administrators and ethics committees must take responsibility to actively create an ethical climate supporting care providers who have to deal with ethical dilemmas in their practice.

  9. In search of synergies between policy-based systems management and economic models for autonomic computing

    OpenAIRE

    Anthony, Richard

    2011-01-01

    Policy-based systems management (PBM) and economics-based systems management (EBM) are two of the many techniques available for implementing autonomic systems, each having specific benefits and limitations, and thus different applicability; choosing the most appropriate technique is the first of many challenges faced by the developer. This talk begins with a critical discussion of the general design goals of autonomic systems and the main issues involved with their development and deployme...

  10. Establishment of Virtual Policy Based Network Management Scheme By Load Experiments in Virtual Environment

    OpenAIRE

    Kazuya Odagiri; Shogo Shimizu; Naohiro Ishii

    2016-01-01

    In the current Internet-based systems, there are many problems using anonymity of the network communication such as personal information leak and crimes using the Internet systems. This is because the TCP/IP protocol used in Internet systems does not have the user identification information on the communication data, and it is difficult to supervise the user performing the above acts immediately. As a solution for solving the above problem, there is the approach of Policy-based Ne...

  11. A Proposed Intelligent Policy-Based Interface for a Mobile eHealth Environment

    Science.gov (United States)

    Tavasoli, Amir; Archer, Norm

    Users of mobile eHealth systems are often novices, and the learning process for them may be very time consuming. In order for systems to be attractive to potential adopters, it is important that the interface should be very convenient and easy to learn. However, the community of potential users of a mobile eHealth system may be quite varied in their requirements, so the system must be able to adapt easily to suit user preferences. One way to accomplish this is to have the interface driven by intelligent policies. These policies can be refined gradually, using inputs from potential users, through intelligent agents. This paper develops a framework for policy refinement for eHealth mobile interfaces, based on dynamic learning from user interactions.

  12. Place-based innovation in Cohesion Policy: meeting and measuring the challenges

    Directory of Open Access Journals (Sweden)

    Alys Solly

    2016-01-01

    Full Text Available This paper, prepared in conjunction with the European Union’s Open Days 2015, examines current Cohesion Policy in terms of its place-based logic, a key aspect of the new Smart Specialisation strategy platform. After discussing changing notions of urbanization and governance, which seem to be shifting Cohesion Policy towards a more performance-oriented analysis of its outcomes, the paper focuses on the question of identifying an appropriate set of indicators and measuring framework. It suggests that measurements of Cohesion Policy performance should analyse the outcomes and indicators, as well as the European and national data sources and statistics, through the lens of effectiveness and well-being.

  13. Health: Policy or Law? A Population-Based Analysis of the Supreme Court's ACA Cases.

    Science.gov (United States)

    Parmet, Wendy E

    2016-12-01

    This essay argues that it matters for the fate of health policies challenged in court whether courts consider health merely as a policy goal that must be subordinate to law, or as a legal norm warranting legal weight and consideration. Applying population-based legal analysis, this article demonstrates that courts have traditionally treated health as a legal norm. However, this norm appears to have weakened in recent years, a trend evident in the Supreme Court's first two decisions concerning the Affordable Care Act, NFIB v. Sebelius and Burwell v. Hobby Lobby. However, in its more recent Affordable Care Act decision, King v. Burwell, the health legal norm is once again evident. Whether the Court will continue to treat health as a legal norm will prove critical to the deference and weight it grants health policies in the future. Copyright © 2016 by Duke University Press.

  14. Policy in management based on corruption growth of resources agropolitan in the Gorontalo Province

    Science.gov (United States)

    Wantu, S. M.; Moonti, U.; Wantu, A.

    2018-02-01

    Gorontalo was established as a new Indonesian province and is a producer of national maize, but its agropolitan-based agricultural development remains inadequate and needs further development. It is therefore necessary to conduct a governance study on empowering farmers to increase their capacity in the agricultural sector. This study examines the need for local government policies for managing human resources in the agricultural sector, in this case the potential contribution of farmers to creating new jobs in rural areas. The study uses a qualitative approach, drawing on secondary and primary data, to examine local government management policies for empowering farmers to improve their welfare, and to explore how such policies can support effective market mechanisms for farmers.

  15. ASN guide project. Safety policy and management in INBs (base nuclear installations)

    International Nuclear Information System (INIS)

    2010-01-01

    This guide presents the recommendations of the French Nuclear Safety Authority (ASN) in the field of safety policy and management (PMS) for base nuclear installations (INBs). It gives an overview of, and comments on, some prescriptions of the so-called INB order and PMS decision. These regulatory texts define a framework of provisions that any INB operator must implement to establish his safety policy and to define and implement a system that allows safety to be maintained and the safety of his INB to be continuously improved. The following issues are addressed: the operator's safety policy; the identification of elements important for safety, of activities pertaining to safety, and of associated requirements; safety management organization and system; management of activities pertaining to safety; documentation and archiving.

  16. An equity- and sustainability-based policy response to global climate change

    International Nuclear Information System (INIS)

    Byrne, J.; Wang, Y.-D.; Kim, Jong-dall

    1998-01-01

    In the debate over policy options for the reduction of greenhouse gas emissions, two precautionary approaches, 'no regrets' and 'insurance' have been proposed. An alternative to these is put forward which adopts an equity and sustainability based approach. It will not be easy to meet the challenge which this approach demands. From wealthy countries it will require a strong commitment to a social policy at home and an economic policy abroad that aims at sharing the ability of humankind to meet the needs of the present without compromising the ability of future generations to meet their needs. In developing countries the necessary improvement in lives and livelihoods must be achieved without repeating the unsustainable environmental and social legacy of the industrial era. (UK)

  17. Management of clandestine drug laboratories: need for evidence-based environmental health policies.

    Science.gov (United States)

    Al-Obaidi, Tamara A; Fletcher, Stephanie M

    2014-01-01

    Clandestine drug laboratories (CDLs) have been emerging and increasing as a public health problem in Australia, with methamphetamine being the dominant illegally manufactured drug. However, management and remediation of contaminated properties are still limited in terms of regulation and direction, especially in relation to public and environmental health practice. Therefore, this review provides an update on the hazards and health effects associated with CDLs, with a specific look at the management of these labs from an Australian perspective. Particularly, the paper attempts to describe the policy landscape for management of CDLs, and identifies current gaps and how further research may be utilised to advance understanding and management of CDLs and inform public health policies. The paper highlights a significant lack of evidence-based policies and guidelines to guide regulatory authority including environmental health officers in Australia. Only recently, the national Clandestine Drug Laboratory Guidelines were developed to assist relevant authority and specialists manage and carry out investigations and remediation of contaminated sites. However, only three states have developed state-based guidelines, some of which are inadequate to meet environmental health requirements. The review recommends well-needed inter-sectoral collaborations and further research to provide an evidence base for the development of robust policies and standard operating procedures for safe and effective environmental health management and remediation of CDLs.

  18. Efficient Attribute-Based Secure Data Sharing with Hidden Policies and Traceability in Mobile Health Networks

    Directory of Open Access Journals (Sweden)

    Changhee Hahn

    2016-01-01

    Full Text Available Mobile health (also written as mHealth) provisions the practice of public health supported by mobile devices. mHealth systems let patients and healthcare providers collect and share sensitive information, such as electronic and personal health records (EHRs), at any time, allowing more rapid convergence to optimal treatment. Key to achieving this is securely sharing data by providing enhanced access control and reliability. Typically, such sharing follows policies that depend on patient and physician preferences defined by a set of attributes. In mHealth systems, not only the data but also the policies for sharing it may be sensitive since they directly contain sensitive information which can reveal the underlying data protected by the policy. Also, since the policies usually incur linearly increasing communication costs, mHealth is inapplicable to resource-constrained environments. Lastly, access privileges may be publicly known to users, so a malicious user could illegally share his access privileges without the risk of being traced. In this paper, we propose an efficient attribute-based secure data sharing scheme in mHealth. The proposed scheme guarantees a hidden policy, constant-sized ciphertexts, and traces, with security analyses. The computation cost to the user is reduced by delegating approximately 50% of the decryption operations to the more powerful storage systems.

  19. When to base clinical policies on observational versus randomized trial data.

    Science.gov (United States)

    Hornberger, J; Wrone, E

    1997-10-15

    Physicians must decide when the evidence is sufficient to adopt a new clinical policy. Analysis of large clinical and administrative databases is becoming an important source of evidence for changing clinical policies. Because such analysis cannot control for the effects of all potential confounding variables, physicians risk drawing the wrong conclusion about the cause-and-effect relation between a change in clinical policy and outcomes. Randomized studies offer protection against drawing a conclusion that would lead to adoption of an inferior policy. However, a randomized study may be difficult to justify because of the extra costs of collecting data for a randomized study and concerns that a study will not directly benefit the patients enrolled in the study. This article reviews the advantages and disadvantages of basing clinical policy on analysis of large databases compared with conducting a randomized study. A technique is described and illustrated for assessing the potential costs and benefits of conducting such a study. This type of analysis formed the basis for a physician-managed health care organization deciding to sponsor a randomized study among patients with end-stage renal disease as part of a quality-improvement initiative.

  20. New diffusive gradients in a thin film technique for measuring inorganic arsenic and selenium(IV) using a titanium dioxide based adsorbent

    DEFF Research Database (Denmark)

    Bennett, William W.; Teasdale, Peter R.; Panther, Jared G.

    2010-01-01

    A new diffusive gradients in a thin film (DGT) technique, using a titanium dioxide based adsorbent (Metsorb), has been developed and evaluated for the determination of dissolved inorganic arsenic and selenium. AsIII, AsV, and SeIV were found to be quantitatively accumulated by the adsorbent (uptake...... measurement of inorganic arsenic. Reproducibility of the technique in field deployments was good (relative standard deviation arsenic and 0.05 μg L-1 for SeIV. The results of this study confirmed that DGT with Metsorb was a reliable...... and robust method for the measurement of inorganic arsenic and the selective measurement of SeIV within useful limits of accuracy....

  1. Testing the limits of gradient sensing.

    Directory of Open Access Journals (Sweden)

    Vinal Lakhani

    2017-02-01

    Full Text Available The ability to detect a chemical gradient is fundamental to many cellular processes. In multicellular organisms gradient sensing plays an important role in many physiological processes such as wound healing and development. Unicellular organisms use gradient sensing to move (chemotaxis) or grow (chemotropism) towards a favorable environment. Some cells are capable of detecting extremely shallow gradients, even in the presence of significant molecular-level noise. For example, yeast have been reported to detect pheromone gradients as shallow as 0.1 nM/μm. Noise reduction mechanisms, such as time-averaging and the internalization of pheromone molecules, have been proposed to explain how yeast cells filter fluctuations and detect shallow gradients. Here, we use a Particle-Based Reaction-Diffusion model of ligand-receptor dynamics to test the effectiveness of these mechanisms and to determine the limits of gradient sensing. In particular, we develop novel simulation methods for establishing chemical gradients that not only allow us to study gradient sensing under steady-state conditions, but also take into account transient effects as the gradient forms. Based on reported measurements of reaction rates, our results indicate neither time-averaging nor receptor endocytosis significantly improves the cell's accuracy in detecting gradients over time scales associated with the initiation of polarized growth. Additionally, our results demonstrate the physical barrier of the cell membrane sharpens chemical gradients across the cell. While our studies are motivated by the mating response of yeast, we believe our results and simulation methods will find applications in many different contexts.
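
    As an illustration of the time-averaging idea discussed in the record above, the following sketch (not the record's Particle-Based Reaction-Diffusion model) simulates noisy receptor occupancy at the front and back of a cell and shows how averaging more snapshots shrinks the uncertainty of the estimated concentration difference. Receptor count, dissociation constant and concentration values are assumed for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def occupancy(conc_nM, n_receptors=500, Kd_nM=5.0):
    """Sample how many receptors are bound at a given ligand concentration.
    Binding is treated as independent Bernoulli trials at equilibrium."""
    p = conc_nM / (conc_nM + Kd_nM)
    return rng.binomial(n_receptors, p)

def estimate_gradient(front_nM, back_nM, n_samples):
    """Estimate the front-back occupancy difference, time-averaging over
    n_samples independent snapshots."""
    diffs = [occupancy(front_nM) - occupancy(back_nM) for _ in range(n_samples)]
    return np.mean(diffs), np.std(diffs) / np.sqrt(n_samples)

# Shallow gradient: 0.1 nM/um across a ~5 um cell on a 10 nM background.
front, back = 10.25, 9.75
for n in (1, 10, 100):
    mean, sem = estimate_gradient(front, back, n)
    print(f"{n:>4} snapshots: occupancy difference = {mean:6.2f} +/- {sem:5.2f}")
```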

  2. Designing a climate change policy for the international maritime transport sector: Market-based measures and technological options for global and regional policy actions

    International Nuclear Information System (INIS)

    Miola, A.; Marra, M.; Ciuffo, B.

    2011-01-01

    The international maritime transport sector has a significant abatement potential and some technical improvements that reduce GHG emissions would already be profitable without any policy in place. This paper analyses in-depth the limits and opportunities of policy options currently under consideration at the international level to stimulate the sector to reduce its GHG emissions. In particular, in order for the maritime transport sector to become more environmentally friendly, the flexible nature of international market-based measures and the European Union Emission Trading Scheme provide a definite window of opportunity without placing an unnecessarily high burden on the sector. However, the development of a regional policy, such as at European level, for the international maritime transport sector faces several obstacles: allocation of emissions, carbon leakage, permit allocation, treatment of the great variety in ship type, size and usage, and transaction cost. Global market-based policies could overcome most of these challenges. This paper provides an in-depth analysis of the policy instruments currently under discussion to reduce the sector's burden on the environment, and focuses on economic theory, legal principles, technological options, and the political framework that together make up the basis of decision-making regarding the international maritime transport sector's climate change policies. - Highlights: → Technologies for a more environmentally friendly maritime transport sector and their cost-effectiveness. → How to combine ambitious CO2 reduction goals with a sector-wide market-based policy. → Permits should be auctioned frequently and small emitters have to be excluded. → Inclusion of shipping in the EU ETS causes carbon leakage, so the policy should aim at expansion.

  3. Ultra-high gradient compact accelerator developments

    NARCIS (Netherlands)

    Brussaard, G.J.H.; Wiel, van der M.J.

    2004-01-01

    Continued development of relatively compact, although not quite 'table-top', lasers with peak powers in the range up to 100 TW has enabled laser-plasma-based acceleration experiments with amazing gradients of up to 1 TV/m. In order to usefully apply such gradients to 'controlled' acceleration,

  4. National Drought Policy: Shifting the Paradigm from Crisis to Risk-based Management

    Science.gov (United States)

    Wilhite, D. A.; Sivakumar, M. K.; Stefanski, R.

    2011-12-01

    Drought is a normal part of climate for virtually all of the world's climatic regimes. To better address the risks associated with this hazard and societal vulnerability, there must be a dramatic paradigm shift in our approach to drought management in the coming decade in the light of the increasing frequency of droughts and projections of increased severity and duration of these events in the future for many regions, especially in the developing world. Addressing this challenge will require an improved awareness of drought as a natural hazard, the establishment of integrated drought monitoring and early warning systems, a higher level of preparedness that fully incorporates risk-based management, and the adoption of national drought policies that are directed at increasing the coping capacity and resilience of populations to future drought episodes. The World Meteorological Organization (WMO), in partnership with other United Nations' agencies, the National Drought Mitigation Center at the University of Nebraska, NOAA, the U.S. Department of Agriculture, and other partners, is currently launching a program to organize a High Level Meeting on National Drought Policy (HMNDP) in March 2013 to encourage the development of national drought policies through the development of a compendium of key policy elements. The key objectives of a national drought policy are to: (1) encourage vulnerable economic sectors and population groups to adopt self-reliant measures that promote risk management; (2) promote sustainable use of the agricultural and natural resource base; and (3) facilitate early recovery from drought through actions consistent with national drought policy objectives. The key elements of a drought policy framework are policy and governance, including political will; addressing risk and improving early warnings, including vulnerability analysis, impact assessment, and communication; mitigation and preparedness, including the application of effective and

  5. The challenges of changing national malaria drug policy to artemisinin-based combinations in Kenya

    Directory of Open Access Journals (Sweden)

    Otieno Dorothy N

    2007-05-01

    Full Text Available Abstract Background: Sulphadoxine/sulphalene-pyrimethamine (SP) was adopted in Kenya as first line therapeutic for uncomplicated malaria in 1998. By the second half of 2003, there was convincing evidence that SP was failing and had to be replaced. Despite several descriptive investigations of policy change and implementation when countries moved from chloroquine to SP, the different constraints of moving to artemisinin-based combination therapy (ACT) in Africa are less well documented. Methods: A narrative description of the process of anti-malarial drug policy change, financing and implementation in Kenya is assembled from discussions with stakeholders, reports, newspaper articles, minutes of meetings and email correspondence between actors in the policy change process. The narrative has been structured to capture the timing of events, the difficulties and hurdles faced and the resolutions reached to the final implementation of a new treatment policy. Results: Following a recognition that SP was failing there was a rapid technical appraisal of available data and replacement options resulting in a decision to adopt artemether-lumefantrine (AL) as the recommended first-line therapy in Kenya, announced in April 2004. Funding requirements were approved by the Global Fund to Fight AIDS, Tuberculosis and Malaria (GFATM) and over 60 million US$ were agreed in principle in July 2004 to procure AL and implement the policy change. AL arrived in Kenya in May 2006, distribution to health facilities began in July 2006 coinciding with cascade in-service training in the revised national guidelines. Both training and drug distribution were almost complete by the end of 2006. The article examines why it took over 32 months from announcing a drug policy change to completing early implementation. Reasons included: lack of clarity on sustainable financing of an expensive therapeutic for a common disease, a delay in release of funding, a lack of comparative efficacy data

  6. Analytical method for optimization of maintenance policy based on available system failure data

    International Nuclear Information System (INIS)

    Coria, V.H.; Maximov, S.; Rivas-Dávalos, F.; Melchor, C.L.; Guardado, J.L.

    2015-01-01

    An analytical optimization method for preventive maintenance (PM) policy with minimal repair at failure, periodic maintenance, and replacement is proposed for systems with historical failure time data influenced by a current PM policy. The method includes a new imperfect PM model based on the Weibull distribution and incorporates the current maintenance interval T0 and the optimal maintenance interval T to be found. The Weibull parameters are analytically estimated using maximum likelihood estimation. Based on this model, the optimal number of PM and the optimal maintenance interval for minimizing the expected cost over an infinite time horizon are also analytically determined. A number of examples are presented involving different failure time data and current maintenance intervals to analyze how the proposed analytical optimization method for the periodic PM policy performs in response to changes in the distribution of the failure data and the current maintenance interval. - Highlights: • An analytical optimization method for preventive maintenance (PM) policy is proposed. • A new imperfect PM model is developed. • The Weibull parameters are analytically estimated using maximum likelihood. • The optimal maintenance interval and number of PM are also analytically determined. • The model is validated by several numerical examples
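
    A minimal sketch of the workflow described above: fit a Weibull distribution to historical failure times by maximum likelihood, then numerically search for the maintenance interval that minimizes an expected cost rate. The cost-rate expression below is the classical periodic-replacement-with-minimal-repair form, and the failure times and cost figures are placeholders, not the record's imperfect-PM model.

```python
import numpy as np
from scipy import stats, optimize

# Historical failure times (hypothetical data, in days).
failures = np.array([112.0, 145.0, 180.0, 205.0, 232.0, 260.0, 301.0, 340.0])

# Maximum likelihood fit of a two-parameter Weibull (location fixed at 0).
beta, _, eta = stats.weibull_min.fit(failures, floc=0)

# Periodic replacement with minimal repair: expected minimal repairs in
# [0, T] equal the cumulative hazard (T/eta)^beta.
c_replace, c_repair = 500.0, 150.0   # illustrative cost parameters

def cost_rate(T):
    return (c_replace + c_repair * (T / eta) ** beta) / T

res = optimize.minimize_scalar(cost_rate, bounds=(1.0, 1000.0), method="bounded")
print(f"beta={beta:.2f}, eta={eta:.1f}, optimal interval T*={res.x:.1f} days")
```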

  7. An extended EPQ-based problem with a discontinuous delivery policy, scrap rate, and random breakdown.

    Science.gov (United States)

    Chiu, Singa Wang; Lin, Hong-Dar; Song, Ming-Syuan; Chen, Hsin-Mei; Chiu, Yuan-Shyi P

    2015-01-01

    In real supply chain environments, the discontinuous multidelivery policy is often used when finished products need to be transported to retailers or customers outside the production units. To address this real-life production-shipment situation, this study extends recent work using an economic production quantity- (EPQ-) based inventory model with a continuous inventory issuing policy, defective items, and machine breakdown by incorporating a multiple delivery policy into the model to replace the continuous policy and investigates the effect on the optimal run time decision for this specific EPQ model. Next, we further expand the scope of the problem to combine the retailer's stock holding cost into our study. This enhanced EPQ-based model can be used to reflect the situation found in contemporary manufacturing firms in which finished products are delivered to the producer's own retail stores and stocked there for sale. A second model is developed and studied. With the help of mathematical modeling and optimization techniques, the optimal run times that minimize the expected total system costs comprising costs incurred in production units, transportation, and retail stores are derived, for both models. Numerical examples are provided to demonstrate the applicability of our research results.

  8. An Extended EPQ-Based Problem with a Discontinuous Delivery Policy, Scrap Rate, and Random Breakdown

    Directory of Open Access Journals (Sweden)

    Singa Wang Chiu

    2015-01-01

    Full Text Available In real supply chain environments, the discontinuous multidelivery policy is often used when finished products need to be transported to retailers or customers outside the production units. To address this real-life production-shipment situation, this study extends recent work using an economic production quantity- (EPQ-) based inventory model with a continuous inventory issuing policy, defective items, and machine breakdown by incorporating a multiple delivery policy into the model to replace the continuous policy and investigates the effect on the optimal run time decision for this specific EPQ model. Next, we further expand the scope of the problem to combine the retailer’s stock holding cost into our study. This enhanced EPQ-based model can be used to reflect the situation found in contemporary manufacturing firms in which finished products are delivered to the producer’s own retail stores and stocked there for sale. A second model is developed and studied. With the help of mathematical modeling and optimization techniques, the optimal run times that minimize the expected total system costs comprising costs incurred in production units, transportation, and retail stores are derived, for both models. Numerical examples are provided to demonstrate the applicability of our research results.
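
    The two records above rest on deriving an expected-cost function of the production run time and minimizing it. The sketch below shows that general approach on a much simpler stand-in model (continuous issuing, a scrap fraction, no breakdown or multidelivery terms); all parameter values are illustrative and the cost expression is not the papers' model.

```python
from scipy.optimize import minimize_scalar

# Illustrative parameters (not from the papers): production rate P, demand d,
# scrap fraction x, setup cost K per cycle, holding cost h per unit per year.
P, d, x = 20000.0, 8000.0, 0.05
K, h = 450.0, 2.0

def expected_cost_rate(t):
    """Approximate cost per unit time for a production run of length t years,
    continuous-issuing EPQ baseline with a scrap fraction x."""
    good_rate = P * (1.0 - x)
    cycle = good_rate * t / d              # cycle length set by good output
    i_max = (good_rate - d) * t            # peak on-hand inventory
    return K / cycle + h * i_max / 2.0

res = minimize_scalar(expected_cost_rate, bounds=(1e-4, 1.0), method="bounded")
print(f"optimal run time t* = {res.x:.4f} years, cost rate = {res.fun:.1f}")
```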

  9. Policy recommendations for addressing privacy challenges associated with cell-based research and interventions.

    Science.gov (United States)

    Ogbogu, Ubaka; Burningham, Sarah; Ollenberger, Adam; Calder, Kathryn; Du, Li; El Emam, Khaled; Hyde-Lay, Robyn; Isasi, Rosario; Joly, Yann; Kerr, Ian; Malin, Bradley; McDonald, Michael; Penney, Steven; Piat, Gayle; Roy, Denis-Claude; Sugarman, Jeremy; Vercauteren, Suzanne; Verhenneman, Griet; West, Lori; Caulfield, Timothy

    2014-02-03

    The increased use of human biological material for cell-based research and clinical interventions poses risks to the privacy of patients and donors, including the possibility of re-identification of individuals from anonymized cell lines and associated genetic data. These risks will increase as technologies and databases used for re-identification become affordable and more sophisticated. Policies that require ongoing linkage of cell lines to donors' clinical information for research and regulatory purposes, and existing practices that limit research participants' ability to control what is done with their genetic data, amplify the privacy concerns. To date, the privacy issues associated with cell-based research and interventions have not received much attention in the academic and policymaking contexts. This paper, arising out of a multi-disciplinary workshop, aims to rectify this by outlining the issues, proposing novel governance strategies and policy recommendations, and identifying areas where further evidence is required to make sound policy decisions. The authors of this paper take the position that existing rules and norms can be reasonably extended to address privacy risks in this context without compromising emerging developments in the research environment, and that exceptions from such rules should be justified using a case-by-case approach. In developing new policies, the broader framework of regulations governing cell-based research and related areas must be taken into account, as well as the views of impacted groups, including scientists, research participants and the general public. This paper outlines deliberations at a policy development workshop focusing on privacy challenges associated with cell-based research and interventions. The paper provides an overview of these challenges, followed by a discussion of key themes and recommendations that emerged from discussions at the workshop. The paper concludes that privacy risks associated with cell-based

  10. Deciding Who Decides Questions at the Intersection of School Finance Reform Litigation and Standards-Based Accountability Policies

    Science.gov (United States)

    Superfine, Benjamin Michael

    2009-01-01

    Courts hearing school finance reform cases have recently begun to consider several issues related to standards-based accountability policies. This convergence of school finance reform litigation and standards-based accountability policies represents a chance for the courts to reallocate decision-making authority for each type of reform across the…

  11. A Case Study of Policies and Procedures to Address Cyberbullying at a Technology-Based Middle School

    Science.gov (United States)

    Tate, Bettina Polite

    2017-01-01

    This qualitative case study explored the policies and procedures used to effectively address cyberbullying at a technology-based middle school. The purpose of the study was to gain an in-depth understanding of policies and procedures used to address cyberbullying at a technology-based middle school in the southern United States. The study sought…

  12. Non-linear extension of FFT-based methods accelerated by conjugate gradients to evaluate the mechanical behavior of composite materials

    International Nuclear Information System (INIS)

    Gelebart, Lionel; Mondon-Cancel, Romain

    2013-01-01

    FFT-based methods are used to solve the problem of a heterogeneous unit-cell submitted to periodic boundary conditions, which is of great interest in the context of numerical homogenization. Recently (in 2010), Brisard and Zeman simultaneously proposed the use of Conjugate Gradient based solvers in order to improve the convergence properties (when compared to the basic scheme, proposed initially in 1994). The purpose of the paper is to extend this idea to the case of non-linear behaviors. The proposed method is based on a Newton-Raphson algorithm and can be applied to various kinds of behaviors (time dependent or independent, with or without internal variables) through a conventional integration procedure as used in finite element codes. It must be pointed out that this approach is fundamentally different from the traditional FFT-based approaches which rely on a fixed-point algorithm (e.g. basic scheme, Eyre and Milton accelerated scheme, Augmented Lagrangian scheme, etc.). The method is compared to the basic scheme on the basis of a simple application (a linear elastic spherical inclusion within a non-linear elastic matrix): a low sensitivity to the reference material and an improved efficiency, for a soft or a stiff inclusion, are observed. At first proposed for a prescribed macroscopic strain, the method is then extended to mixed loadings. (authors)
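
    The record above layers a Newton-Raphson loop over a Conjugate Gradient solver applied matrix-free. The sketch below shows only the CG building block, with the operator passed as a callable, which is how FFT-based schemes apply the discrete Green operator without assembling a matrix; the dense test matrix here is just a stand-in for that operator.

```python
import numpy as np

def conjugate_gradient(apply_A, b, tol=1e-10, max_iter=200):
    """Matrix-free conjugate gradient for A x = b with A symmetric positive
    definite; apply_A(v) returns A @ v without forming A explicitly."""
    x = np.zeros_like(b)
    r = b - apply_A(x)
    p = r.copy()
    rs = r @ r
    for _ in range(max_iter):
        Ap = apply_A(p)
        alpha = rs / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

# Small self-test on a dense SPD matrix standing in for the linearized operator.
rng = np.random.default_rng(1)
M = rng.standard_normal((50, 50))
A = M @ M.T + 50 * np.eye(50)
b = rng.standard_normal(50)
x = conjugate_gradient(lambda v: A @ v, b)
print("residual:", np.linalg.norm(A @ x - b))
```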

  13. Dynamic UAV-based traffic monitoring under uncertainty as a stochastic arc-inventory routing policy

    Directory of Open Access Journals (Sweden)

    Joseph Y.J. Chow

    2016-10-01

    Full Text Available Given the rapid advances in unmanned aerial vehicles, or drones, and increasing need to monitor at a city level, one of the current research gaps is how to systematically deploy drones over multiple periods. We propose a real-time data-driven approach: we formulate the first deterministic arc-inventory routing problem and derive its stochastic dynamic policy. The policy is expected to be of greatest value in scenarios where uncertainty is highest and costliest, such as city monitoring during major events. The Bellman equation for an approximation of the proposed inventory routing policy is formulated as a selective vehicle routing problem. We propose an approximate dynamic programming algorithm based on Least Squares Monte Carlo simulation to find that policy. The algorithm has been modified so that the least squares dependent variable is defined to be the “expected stock out cost upon the next replenishment”. The new algorithm is tested on 30 simulated instances of real time trajectories over 5 time periods of the selective vehicle routing problem to evaluate the proposed policy and algorithm. Computational results on the selected instances show that the algorithm on average outperforms the myopic policy by 23–28%, depending on the parametric design. Further tests are conducted on classic benchmark arc routing problem instances. The 11-link instance gdb19 (Golden et al., 1983) is expanded into a sequential 15-period stochastic dynamic example and used to demonstrate why a naïve static multi-period deployment plan would not be effective in real networks.
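
    The core of the Least Squares Monte Carlo step described above is a regression of simulated downstream cost on state features. The sketch below shows that regression step only, with hypothetical features (arc inventory level and periods since the last visit) and a made-up cost signal; it is not the record's full routing algorithm.

```python
import numpy as np

rng = np.random.default_rng(2)

# Simulated states: (inventory level at a monitored arc, periods since visit).
n = 2000
inventory = rng.uniform(0.0, 1.0, n)
age = rng.integers(0, 5, n).astype(float)

# Hypothetical simulated downstream cost: grows as inventory runs low and as
# the arc goes unvisited, plus noise from the Monte Carlo rollout.
cost = 10.0 * np.maximum(0.3 - inventory, 0.0) + 1.5 * age + rng.normal(0, 0.5, n)

# Least-squares regression of cost on polynomial basis functions of the state.
X = np.column_stack([np.ones(n), inventory, inventory**2, age, inventory * age])
coef, *_ = np.linalg.lstsq(X, cost, rcond=None)

def approx_future_cost(inv, a):
    """Approximate continuation cost used when comparing candidate routes."""
    return np.array([1.0, inv, inv**2, a, inv * a]) @ coef

print(approx_future_cost(0.1, 3), approx_future_cost(0.8, 0))
```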

  14. Evidence Based Weighing Policy during the First Week to Prevent Neonatal Hypernatremic Dehydration while Breastfeeding.

    Science.gov (United States)

    Boer, Suzanne; Unal, Sevim; van Wouwe, Jacobus P; van Dommelen, Paula

    2016-01-01

    Neonatal hypernatremic dehydration is prevented by daily neonatal weight monitoring. We aim to provide evidence-based support of this universally promoted weighing policy and to establish the most crucial days of weighing. Weight measurements of 2,359 healthy newborns and of 271 newborns with clinical hypernatremic dehydration were used within the first seven days of life to simulate various weighing policies to prevent hypernatremic dehydration; the sensitivity, specificity and positive predictive value (PPV) of these policies were calculated. Various referral criteria were also evaluated. A policy of daily weighing with a cut-off value of -2.5 Standard Deviation Score (SDS) on the growth chart for weight loss had a 97.6% sensitivity, 97.6% specificity and a PPV of 2.80%. Weighing at birth and only at days two, four and seven with the same -2.5 SDS cut-off resulted in 97.3% sensitivity, 98.5% specificity and a PPV of 4.43%. A weighing policy with measurements restricted to birth and days two, four and seven applying the -2.5 SDS cut-off seems an optimal policy to detect hypernatremic dehydration. Therefore we recommend preferably weighing newborns at least on day two (i.e. ~48 h), four and seven, and referring them to clinical pediatric care if their weight-loss SDS falls below -2.5. We also suggest lactation support for the mother, full clinical assessment of the infant and weighing again the following day in all newborns whose weight-loss SDS falls below -2.0.
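
    A small sketch of how the recommended rule could be applied in practice: compute a weight-loss standard deviation score on days two, four and seven and compare it with the -2.5 and -2.0 SDS cut-offs. The per-day reference means and standard deviations in the code are placeholder values, not the study's charts.

```python
# Placeholder reference values (percent weight change): mean and SD per day of
# life. Illustrative only; a real implementation would use the study's charts.
REFERENCE = {2: (-5.0, 2.0), 4: (-4.0, 2.5), 7: (-1.0, 3.0)}

def weight_loss_sds(birth_weight_g, current_weight_g, day):
    """Standard deviation score of percent weight change relative to the
    (assumed) reference distribution for that day of life."""
    mean, sd = REFERENCE[day]
    pct_change = 100.0 * (current_weight_g - birth_weight_g) / birth_weight_g
    return (pct_change - mean) / sd

def action(sds):
    if sds < -2.5:
        return "refer to clinical pediatric care"
    if sds < -2.0:
        return "lactation support, clinical assessment, reweigh tomorrow"
    return "routine care"

# Example: 3500 g at birth, weighed on days 2, 4 and 7.
for day, weight in [(2, 3290), (4, 3220), (7, 3180)]:
    sds = weight_loss_sds(3500, weight, day)
    print(f"day {day}: SDS = {sds:+.2f} -> {action(sds)}")
```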

  15. Culture-based and denaturing gradient gel electrophoresis analysis of the bacterial community from Chungkookjang, a traditional Korean fermented soybean food.

    Science.gov (United States)

    Hong, Sung Wook; Choi, Jae Young; Chung, Kun Sub

    2012-10-01

    The bacterial community of Chungkookjang and raw rice-straw collected from various areas in South Korea was investigated using both culture-dependent and culture-independent methods. Pure cultures were isolated from Chungkookjang and raw rice-straw on tryptic soy agar plates with 72 to 121 colonies and identified by 16S rDNA gene sequence analysis, respectively. The traditional culture-based method and denaturing gradient gel electrophoresis analysis of PCR-amplified 16S rDNA confirmed that Pantoea agglomerans and B. subtilis were identified as predominant in the raw rice-straw and Chungkookjang, respectively, from Iljuk district of Gyeonggi province, P. ananatis and B. licheniformis were identified as predominant in the raw rice-straw and Chungkookjang from Wonju district of Gangwon province, and Microbacterium sp. and B. licheniformis were identified as predominant in the raw rice-straw and Chungkookjang from Sunchang district of Jeolla province. Other strains, such as Bacillus, Enterococcus, Pseudomonas, Rhodococcus, and uncultured bacteria were also present in raw rice-straw and Chungkookjang. A comprehensive analysis of these microorganisms would provide a more detailed understanding of the biologically active components of Chungkookjang and help improve its quality. Polymerase chain reaction-denaturing gradient gel electrophoresis analysis can be successfully applied to a fermented food to detect unculturable species, and more species overall, than the culture-dependent method. This technique is an effective and convenient culture-independent method for studying the bacterial community in Chungkookjang. In this study, the bacterial community of Chungkookjang collected from various areas in South Korea was investigated using both culture-dependent and culture-independent methods. © 2012 Institute of Food Technologists®

  16. Gradient decent based multi-objective cultural differential evolution for short-term hydrothermal optimal scheduling of economic emission with integrating wind power and photovoltaic power

    International Nuclear Information System (INIS)

    Zhang, Huifeng; Yue, Dong; Xie, Xiangpeng; Dou, Chunxia; Sun, Feng

    2017-01-01

    With the integration of wind power and photovoltaic power, optimal operation of the hydrothermal power system becomes a great challenge due to its non-convex, stochastic and complex-coupled constrained characteristics. This paper extends the short-term hydrothermal system optimal model into short-term hydrothermal optimal scheduling of economic emission while considering integrated intermittent energy resources (SHOSEE-IIER). For properly solving the SHOSEE-IIER problem, a gradient descent based multi-objective cultural differential evolution (GD-MOCDE) is proposed to improve the optimization efficiency of SHOSEE-IIER combined with three designed knowledge structures, which mainly enhance the search ability of differential evolution in the shortest way. To deal with the complex-coupled and stochastic constraints, a heuristic constraint-handling measure is utilized to tackle them in both a coarse- and fine-tuning way, and probabilistic constraint-handling procedures are applied to the stochastic constraints based on their probability density functions. Ultimately, these approaches are implemented on five test systems, which verify the optimization efficiency of the proposed GD-MOCDE and the constraint-handling efficiency for system load balance, water balance and stochastic constraints; the obtained results reveal that the proposed GD-MOCDE, combined with these constraint-handling approaches, can properly solve the SHOSEE-IIER problem. - Highlights: • Gradient descent method is proposed to improve the mutation operator. • Hydrothermal system is extended to hybrid energy system. • The uncertainty constraint is converted into deterministic constraint. • The results show the viability and efficiency of the proposed algorithm.
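
    The sketch below illustrates only the central idea named in the record, a differential evolution mutation refined by a gradient descent step, on a toy single-objective surface with a finite-difference gradient; the cultural knowledge structures, multi-objective handling and constraint treatment of GD-MOCDE are not reproduced, and all parameters are assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)

def objective(x):
    # Stand-in cost surface; the real problem couples fuel cost and emissions.
    return np.sum((x - 1.5) ** 2) + 0.1 * np.sum(np.sin(5 * x) ** 2)

def numerical_gradient(f, x, eps=1e-5):
    g = np.zeros_like(x)
    for i in range(x.size):
        d = np.zeros_like(x)
        d[i] = eps
        g[i] = (f(x + d) - f(x - d)) / (2 * eps)
    return g

def de_step_with_gradient(pop, F=0.6, lr=0.05):
    """One generation: DE/rand/1 mutation followed by a gradient descent
    refinement of each mutant, with greedy selection (crossover omitted)."""
    n, _ = pop.shape
    new_pop = pop.copy()
    for i in range(n):
        a, b, c = pop[rng.choice(n, size=3, replace=False)]
        mutant = a + F * (b - c)
        mutant -= lr * numerical_gradient(objective, mutant)  # gradient refinement
        if objective(mutant) < objective(pop[i]):
            new_pop[i] = mutant
    return new_pop

pop = rng.uniform(-3, 3, size=(20, 4))
for _ in range(50):
    pop = de_step_with_gradient(pop)
best = min(pop, key=objective)
print("best cost:", objective(best))
```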

  17. Organizations' Ways of Employing Early Retirees: The Role of Age-Based HR Policies.

    Science.gov (United States)

    Oude Mulders, Jaap; Henkens, Kène; Schippers, Joop

    2015-06-01

    We examine whether from an organizational perspective it is possible to distinguish different ways of employing early retirees and explore how the employment of early retirees is related to the application of 4 age-based human resource (HR) policies, namely demotion, offering training opportunities to older workers, offering early retirement, and allowing flexible working hours. We perform a latent class analysis on a sample of 998 Dutch organizations in order to categorize them based on 3 dimensions of their employment of early retirees. We then run a multinomial logistic regression to relate the employment of early retirees to the 4 age-based HR policies. We distinguish 4 types of organizations based on their way of employing early retirees: nonusers (52.6%), users for mainly standard work (20.8%), users for mainly nonstandard work (9.8%), and users for standard and nonstandard work (16.7%). We find that organizations that apply demotion, offer early retirement, and allow flexible working hours are more likely to be users for mainly standard work. Also, organizations that do not offer early retirement are less likely to employ early retirees. Age-based HR policies, especially demotion, offering early retirement, and allowing flexible working hours, are conducive to the employment of early retirees for mainly standard work. Broader implementation of these policies may provide opportunities for older workers to make a more gradual transition from work to retirement. © The Author 2013. Published by Oxford University Press on behalf of The Gerontological Society of America. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.

  18. Base Stock Policy in a Join-Type Production Line with Advanced Demand Information

    Science.gov (United States)

    Hiraiwa, Mikihiko; Tsubouchi, Satoshi; Nakade, Koichi

    Production control policies such as the base stock policy, the kanban policy and the constant work-in-process policy in a serial production line have been studied by many researchers. Production lines, however, usually have fork-type, join-type or network-type configurations. In addition, in most previous studies on production control, a finished product is required at the same time as the demand arrives at the system, whereas in practice demand information is available before the due date. In this paper a join-type (assembly) production line under base stock control with advanced demand information in discrete time is analyzed. The recursive equations for the work-in-process are derived. A heuristic algorithm for finding appropriate base stock levels for all machines in a short time is proposed, and the effect of advanced demand information is examined by simulation with the proposed algorithm. It is shown that the inventory cost can decrease with few backlogs by using the appropriate amount of demand information and setting appropriate base stock levels.
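
    As a rough illustration of a base stock policy with advance demand information, the sketch below simulates a single machine (not the record's join-type line): each observed demand triggers a release, and knowing demand a few periods ahead acts like shortening the replenishment lead time. All parameters and the lead-time treatment are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(4)

def simulate_base_stock(demands, base_stock, lead_time, adi):
    """Single-stage base-stock simulation in discrete time.  Advance demand
    information of `adi` periods lets production start that much earlier,
    which is modelled here as a shorter effective lead time."""
    eff_lead = max(lead_time - adi, 0)
    on_hand, backlog = base_stock, 0
    pipeline = [0] * (eff_lead + 1)    # releases still in production
    holding = backlog_sum = 0
    for d in demands:
        on_hand += pipeline.pop(0)     # finished units arrive
        pipeline.append(d)             # release one order per observed demand
        need = d + backlog
        served = min(on_hand, need)
        on_hand -= served
        backlog = need - served
        holding += on_hand
        backlog_sum += backlog
    return holding, backlog_sum

demands = rng.poisson(5, size=200)
for adi in (0, 1, 2):
    h, b = simulate_base_stock(demands, base_stock=12, lead_time=2, adi=adi)
    print(f"ADI={adi}: cumulative holding={h}, cumulative backlog={b}")
```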

  19. Approximate error conjugation gradient minimization methods

    Science.gov (United States)

    Kallman, Jeffrey S

    2013-05-21

    In one embodiment, a method includes selecting a subset of rays from a set of all rays to use in an error calculation for a constrained conjugate gradient minimization problem, calculating an approximate error using the subset of rays, and calculating a minimum in a conjugate gradient direction based on the approximate error. In another embodiment, a system includes a processor for executing logic, logic for selecting a subset of rays from a set of all rays to use in an error calculation for a constrained conjugate gradient minimization problem, logic for calculating an approximate error using the subset of rays, and logic for calculating a minimum in a conjugate gradient direction based on the approximate error. In other embodiments, computer program products, methods, and systems are described capable of using approximate error in constrained conjugate gradient minimization problems.
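
    A sketch of the subset-of-rays idea in the embodiment described above: at each iteration the error, gradient and step length are computed from a random subset of rays (rows of a synthetic system matrix), and the search direction follows a Fletcher-Reeves conjugate gradient update. This is an illustrative reading of the abstract, not the patented method, and the problem data are synthetic.

```python
import numpy as np

rng = np.random.default_rng(5)

# Synthetic tomography-like system: each row of A is one ray, b are measurements.
n_rays, n_vox = 4000, 200
A = rng.random((n_rays, n_vox))
x_true = rng.random(n_vox)
b = A @ x_true

def subsampled_cg(A, b, n_sub=400, iters=60):
    """Nonlinear conjugate gradient on ||Ax - b||^2 where, each iteration, the
    error, gradient and line search use only a random subset of rays."""
    x = np.zeros(A.shape[1])
    d = None
    g_prev = None
    for _ in range(iters):
        idx = rng.choice(A.shape[0], size=n_sub, replace=False)
        As, bs = A[idx], b[idx]
        g = 2 * As.T @ (As @ x - bs)             # approximate gradient
        if d is None:
            d = -g
        else:
            beta = (g @ g) / (g_prev @ g_prev)   # Fletcher-Reeves update
            d = -g + beta * d
        Ad = As @ d
        alpha = -(d @ g) / (2 * (Ad @ Ad))       # exact step on the subset
        x += alpha * d
        g_prev = g
    return x

x = subsampled_cg(A, b)
print("relative full error:", np.linalg.norm(A @ x - b) / np.linalg.norm(b))
```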

  20. Model-Based Comprehensive Analysis of School Closure Policies for Mitigating Influenza Epidemics and Pandemics.

    Science.gov (United States)

    Fumanelli, Laura; Ajelli, Marco; Merler, Stefano; Ferguson, Neil M; Cauchemez, Simon

    2016-01-01

    School closure policies are among the non-pharmaceutical measures taken into consideration to mitigate the spread of influenza epidemics and pandemics. However, a systematic review of the effectiveness of alternative closure policies has yet to emerge. Here we perform a model-based analysis of four types of school closure, ranging from the nationwide closure of all schools at the same time to reactive gradual closure, starting from class-by-class, then grades and finally the whole school. We consider policies based on triggers that are feasible to monitor, such as school absenteeism and national ILI surveillance systems. We found that, under specific constraints on the average number of weeks lost per student, reactive school-by-school, gradual, and county-wide closure give comparable outcomes in terms of optimal infection attack rate reduction, peak incidence reduction or peak delay. Optimal implementations generally require short closures of one week each; this duration is long enough to break the transmission chain without leading to unnecessarily long periods of class interruption. Moreover, we found that gradual and county closures may be slightly more easily applicable in practice as they are less sensitive to the value of the excess absenteeism threshold triggering the start of the intervention. These findings suggest that policy makers could consider school closure policies more diffusely as a response strategy to influenza epidemics and pandemics, and the fact that some countries already have some experience of gradual or regional closures for seasonal influenza outbreaks demonstrates that the logistic and feasibility challenges of school closure strategies can be overcome to some extent.
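
    A toy version of the reactive trigger logic analysed above: a deterministic SIR model of a single school in which crossing an absenteeism-like threshold reduces transmission for one week. Parameter values and the closure effect size are assumptions, not the record's calibrated model.

```python
def simulate(closure_threshold=0.02, closure_days=7, beta_open=0.30,
             beta_closed=0.15, gamma=1 / 4, n=1000, i0=5, days=200):
    """Toy SIR epidemic in a school population with a reactive closure:
    when the infectious fraction (a stand-in for excess absenteeism) crosses
    the threshold, transmission is reduced for `closure_days`."""
    s, i, r = n - i0, i0, 0
    closed_until = -1
    peak, attack = 0.0, 0.0
    for day in range(days):
        if day > closed_until and i / n >= closure_threshold:
            closed_until = day + closure_days
        beta = beta_closed if day <= closed_until else beta_open
        new_inf = beta * s * i / n
        new_rec = gamma * i
        s -= new_inf
        i += new_inf - new_rec
        r += new_rec
        peak = max(peak, i / n)
        attack = r / n
    return peak, attack

for thr in (1.0, 0.05, 0.02, 0.01):     # 1.0 means the school never closes
    peak, attack = simulate(closure_threshold=thr)
    print(f"trigger at {thr:4.2f}: peak {peak:.3f}, attack rate {attack:.3f}")
```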

  1. Denaturing gradient gel electrophoresis

    International Nuclear Information System (INIS)

    Kocherginskaya, S.A.; Cann, I.K.O.; Mackie, R.I.

    2005-01-01

    It is worthwhile considering that only some 30 species make up the bulk of the bacterial population in human faeces at any one time based on the classical cultivation-based approach. The situation in the rumen is similar. Thus, it is practical to focus on specific groups of interest within the complex community. These may be the predominant or the most active species, specific physiological groups or readily identifiable (genetic) clusters of phylogenetically related organisms. Several 16S rDNA fingerprinting techniques can be invaluable for selecting and monitoring sequences or phylogenetic groups of interest and are described below. Over the past few decades, considerable attention was focussed on the identification of pure cultures of microbes on the basis of genetic polymorphisms of DNA encoding rRNA such as ribotyping, amplified fragment length polymorphism and randomly amplified polymorphic DNA. However, many of these methods require prior cultivation and are less suitable for use in analysis of complex mixed populations although important in describing cultivated microbial diversity in molecular terms. Much less attention was given to molecular characterization of complex communities. In particular, research into diversity and community structure over time has been revolutionized by the advent of molecular fingerprinting techniques for complex communities. Denaturing or temperature gradient gel electrophoresis (DGGE/TGGE) methods have been successfully applied to the analysis of human, pig, cattle, dog and rodent intestinal populations

  2. Robust Allocation of Reserve Policies for a Multiple-Cell Based Power System

    DEFF Research Database (Denmark)

    Hu, Junjie; Lan, Tian; Heussen, Kai

    2018-01-01

    modifications. The LDR method can effectively adapt the participation factors of reserve providers to respond to system imbalance signals. The policies considered the covariance of historic system imbalance signals to reduce the overall reserve cost. When applying this method to the cell-based power system...... for a certain horizon, the influence of different time resolutions on policy-making is also investigated, which presents guidance for its practical application. The main results illustrate that: (a) the LDR-based method shows better performance, by producing smaller reserve costs compared to the costs given...... by a reference method; and (b) the cost index decreases with increased time intervals, however, longer intervals might result in insufficient reserves, due to low time resolution. On the other hand, shorter time intervals require heavy computational time. Thus, it is important to choose a proper time interval...

  3. Developing an evidence-based approach to Public Health Nutrition: translating evidence into policy.

    Science.gov (United States)

    Margetts, B; Warm, D; Yngve, A; Sjöström, M

    2001-12-01

    The aim of this paper is to highlight the importance of an evidence-based approach to the development, implementation and evaluation of policies aimed at improving nutrition-related health in the population. Public Health Nutrition was established to realise a population-level approach to the prevention of the major nutrition-related health problems world-wide. The scope is broad and integrates activity from local, national, regional and international levels. The aim is to inform and develop coherent and effective policies that address the key rate-limiting steps critical to improving nutrition-related public health. This paper sets out the rationale for an evidence-based approach to Public Health Nutrition developed under the umbrella of the European Network for Public Health Nutrition.

  4. Status of costing hospital nursing work within Australian casemix activity-based funding policy.

    Science.gov (United States)

    Heslop, Liza

    2012-02-01

    Australia has a long history of patient level costing initiated when casemix funding was implemented in several states in the early 1990s. Australia includes, to some extent, hospital payment based on nursing intensity adopted within casemix funding policy and the Diagnostic Related Group system. Costing of hospital nursing services in Australia has not changed significantly in the last few decades despite widespread introduction of casemix funding policy at the state level. Recent Commonwealth of Australia National Health Reform presents change to the management of the delivery of health care including health-care costing. There is agreement for all Australian jurisdictions to progress to casemix-based activity funding. Within this context, nurse costing infrastructure presents contemporary issues and challenges. An assessment is made of the progress of costing nursing services within casemix funding models in Australian hospitals. Valid and reliable Australian-refined nursing service weights might overcome present cost deficiencies and limitations. © 2012 Blackwell Publishing Asia Pty Ltd.

  5. The evolution of HIV policy in Vietnam: from punitive control measures to a more rights-based approach.

    Science.gov (United States)

    Nguyen Ha, Pham; Pharris, Anastasia; Huong, Nguyen Thanh; Chuc, Nguyen Thi Kim; Brugha, Ruairi; Thorson, Anna

    2010-08-28

    Policymaking in Vietnam has traditionally been the preserve of the political elite, not open to the scrutiny of those outside the Communist Party. This paper aims to analyse Vietnam's HIV policy development in order to describe and understand the policy content, policy-making processes, actors and obstacles to policy implementation. Nine policy documents on HIV were analysed and 17 key informant interviews were conducted in Hanoi and Quang Ninh Province, based on a predesigned interview guide. Framework analysis, a type of qualitative content analysis, was applied for data analysis. Our main finding was that during the last two decades, developments in HIV policy in Vietnam were driven in a top-down way by the state organs, with support and resources coming from international agencies. Four major themes were identified: HIV policy content, the policy-making processes, the actors involved and human resources for policy implementation. Vietnam's HIV policy has evolved from one focused on punitive control measures to a more rights-based approach, encompassing harm reduction and payment of health insurance for medical costs of patients with HIV-related illness. Low salaries and staff reluctance to work with patients, many of whom are drug users and female sex workers, were described as the main barriers to low health staff motivation. Health policy analysis approaches can be applied in a traditional one party state and can demonstrate how similar policy changes take place, as those found in pluralistic societies, but through more top-down and somewhat hidden processes. Enhanced participation of other actors, like civil society in the policy process, is likely to contribute to policy formulation and implementation that meets the diverse needs and concerns of its population.

  6. The evolution of HIV policy in Vietnam: from punitive control measures to a more rights-based approach

    Directory of Open Access Journals (Sweden)

    Pham Nguyen Ha

    2010-08-01

    Full Text Available Aim: Policymaking in Vietnam has traditionally been the preserve of the political elite, not open to the scrutiny of those outside the Communist Party. This paper aims to analyse Vietnam's HIV policy development in order to describe and understand the policy content, policy-making processes, actors and obstacles to policy implementation. Methods: Nine policy documents on HIV were analysed and 17 key informant interviews were conducted in Hanoi and Quang Ninh Province, based on a predesigned interview guide. Framework analysis, a type of qualitative content analysis, was applied for data analysis. Results: Our main finding was that during the last two decades, developments in HIV policy in Vietnam were driven in a top-down way by the state organs, with support and resources coming from international agencies. Four major themes were identified: HIV policy content, the policy-making processes, the actors involved and human resources for policy implementation. Vietnam's HIV policy has evolved from one focused on punitive control measures to a more rights-based approach, encompassing harm reduction and payment of health insurance for medical costs of patients with HIV-related illness. Low salaries and staff reluctance to work with patients, many of whom are drug users and female sex workers, were described as the main barriers to low health staff motivation. Conclusion: Health policy analysis approaches can be applied in a traditional one party state and can demonstrate how similar policy changes take place, as those found in pluralistic societies, but through more top-down and somewhat hidden processes. Enhanced participation of other actors, like civil society in the policy process, is likely to contribute to policy formulation and implementation that meets the diverse needs and concerns of its population.

  7. The relative efficiency of market-based environmental policy instruments with imperfect compliance

    OpenAIRE

    Rousseau, Sandra; Proost, Stef

    2004-01-01

    This paper examines to what extent incomplete compliance of environmental regulation mitigates the distortions caused by pre-existing labour taxes. We study the relative cost efficiency of three market-based instruments: emission taxes, tradable permits and output taxes. In a first-best setting and given that monitoring and enforcement is costless, we find that the same utility levels can be reached with and without incomplete compliance. However, allowing for violations makes the policy i...

  8. A strain gradient plasticity theory with application to wire torsion

    KAUST Repository

    Liu, J. X.; El Sayed, Tamer S.

    2014-01-01

    Based on the framework of the existing strain gradient plasticity theories, we have examined three kinds of relations for the plastic strain dependence of the material intrinsic length scale, and thus developed updated strain gradient plasticity

  9. European Union research in support of environment and health: Building scientific evidence base for policy.

    Science.gov (United States)

    Karjalainen, Tuomo; Hoeveler, Arnd; Draghia-Akli, Ruxandra

    2017-06-01

    Opinion polls show that the European Union citizens are increasingly concerned about the impact of environmental factors on their health. In order to respond and provide solid scientific evidence for the numerous policies related to the protection of human health and the environment managed at the Union level, the European Union made a substantial investment in research and innovation in the past two decades through its Framework Programmes for Research and Technological Development, including the current programme, Horizon 2020, which started in 2014. This policy review paper analysed the portfolio of forty collaborative projects relevant to environment and health, which received a total amount of around 228 million euros from the EU. It gives details on their contents and general scientific trends observed, the profiles of the participating countries and institutions, and the potential policy implications of the results obtained. The increasing knowledge base is needed to make informed policy decisions in Europe and beyond, and should be useful to many stakeholders including the scientific community and regulatory authorities. Copyright © 2017. Published by Elsevier Ltd.

  10. Social gradients in child and adolescent antisocial behavior: a systematic review protocol

    Directory of Open Access Journals (Sweden)

    Piotrowska Patrycja J

    2012-08-01

    Full Text Available Abstract Background The relationship between social position and physical health is well-established across a range of studies. The evidence base regarding social position and mental health is less well developed, particularly regarding the development of antisocial behavior. Some evidence demonstrates a social gradient in behavioral problems, with children from low-socioeconomic backgrounds experiencing more behavioral difficulties than children from high-socioeconomic families. Antisocial behavior is a heterogeneous concept that encompasses behaviors as diverse as physical fighting, vandalism, stealing, status violation and disobedience to adults. Whether all forms of antisocial behavior show identical social gradients is unclear from previous published research. The mechanisms underlying social gradients in antisocial behavior, such as neighborhood characteristics and family processes, have not been fully elucidated. This review will synthesize findings on the social gradient in antisocial behavior, considering variation across the range of antisocial behaviors and evidence regarding the mechanisms that might underlie the identified gradients. Methods In this review, an extensive manual and electronic literature search will be conducted for papers published from 1960 to 2011. The review will include empirical and quantitative studies of children and adolescents ( Discussion This systematic review has been proposed in order to synthesize cross-disciplinary evidence of the social gradient in antisocial behavior and mechanisms underlying this effect. The results of the review will inform social policies aiming to reduce social inequalities and levels of antisocial behavior, and identify gaps in the present literature to guide further research.

  11. A new approach to mixed H2/H infinity controller synthesis using gradient-based parameter optimization methods

    Science.gov (United States)

    Ly, Uy-Loi; Schoemig, Ewald

    1993-01-01

    In the past few years, the mixed H(sub 2)/H-infinity control problem has been the object of much research interest since it allows the incorporation of robust stability into the LQG framework. The general mixed H(sub 2)/H-infinity design problem has yet to be solved analytically. Numerous schemes have considered upper bounds for the H(sub 2)-performance criterion and/or imposed restrictive constraints on the class of systems under investigation. Furthermore, many modern control applications rely on dynamic models obtained from finite-element analysis and thus involve high-order plant models. Hence the capability to design low-order (fixed-order) controllers is of great importance. In this research a new design method was developed that optimizes the exact H(sub 2)-norm of a certain subsystem subject to robust stability in terms of H-infinity constraints and a minimal number of system assumptions. The derived algorithm is based on a differentiable scalar time-domain penalty function to represent the H-infinity constraints in the overall optimization. The scheme is capable of handling multiple plant conditions, and hence multiple performance criteria and H-infinity constraints, and incorporates additional constraints such as fixed-order and/or fixed-structure controllers. The defined penalty function is applicable to any constraint that is expressible in the form of a real symmetric matrix inequality.

  12. Validation of measured poleward TEC gradient using multi-station GPS with Artificial Neural Network based TEC model in low latitude region for developing predictive capability of ionospheric scintillation

    Science.gov (United States)

    Sur, D.; Paul, A.

    2017-12-01

    The equatorial ionosphere shows sharp diurnal and latitudinal Total Electron Content (TEC) variations over a major part of the day. Equatorial ionosphere also exhibits intense post-sunset ionospheric irregularities. Accurate prediction of TEC in these low latitudes is not possible from standard ionospheric models. An Artificial Neural Network (ANN) based Vertical TEC (VTEC) model has been designed using TEC data in low latitude Indian longitude sector for accurate prediction of VTEC. GPS TEC data from the stations Calcutta (22.58°N, 88.38°E geographic, magnetic dip 32°), Baharampore (24.09°N, 88.25°E geographic, magnetic dip 35°) and Siliguri (26.72°N, 88.39°E geographic; magnetic dip 40°) are used as training dataset for the duration of January 2007-September 2011. Poleward VTEC gradients from northern EIA crest to region beyond EIA crest have been calculated from measured VTEC and compared with that obtained from ANN based VTEC model. TEC data from Calcutta and Siliguri are used to compute VTEC gradients during April 2013 and August-September 2013. It has been observed that poleward VTEC gradient computed from ANN based TEC model has shown good correlation with measured values during vernal and autumnal equinoxes of high solar activity periods of 2013. Possible correlation between measured poleward TEC gradients and post-sunset scintillations (S4 ≥ 0.4) from northern crest of EIA has been observed in this paper. From the observation, a suitable threshold poleward VTEC gradient has been proposed for possible occurrence of post-sunset scintillations at northern crest of EIA along 88°E longitude. Poleward VTEC gradients obtained from ANN based VTEC model are used to forecast possible ionospheric scintillation after post-sunset period using the threshold value. It has been observed that these predicted VTEC gradients obtained from ANN based VTEC model can forecast post-sunset L-band scintillation with an accuracy of 67% to 82% in this dynamic low latitude
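
    The core of the forecasting idea above can be stated in a few lines: estimate the poleward VTEC gradient from two latitudinally separated stations and flag possible post-sunset scintillation when it exceeds a calibrated threshold. The sketch below illustrates only that thresholding step, not the ANN model itself; the threshold value and VTEC numbers are placeholders, while the station latitudes reuse the Calcutta and Siliguri coordinates quoted in the abstract.

```python
# Minimal sketch (not the authors' ANN model): flag possible post-sunset
# scintillation when the poleward VTEC gradient between two stations exceeds
# a chosen threshold. The threshold and VTEC values below are illustrative
# placeholders, not values taken from the paper.

def poleward_vtec_gradient(vtec_south, vtec_north, lat_south_deg, lat_north_deg):
    """Return the VTEC gradient in TECU per degree of latitude (positive poleward)."""
    return (vtec_north - vtec_south) / (lat_north_deg - lat_south_deg)

def scintillation_likely(gradient_magnitude_tecu_per_deg, threshold_tecu_per_deg=5.0):
    """Crude threshold test; a real threshold must be calibrated from observations."""
    return gradient_magnitude_tecu_per_deg >= threshold_tecu_per_deg

if __name__ == "__main__":
    # Hypothetical evening VTEC at an EIA-crest station (Calcutta latitude) and
    # a station poleward of the crest (Siliguri latitude).
    g = poleward_vtec_gradient(vtec_south=50.0, vtec_north=25.0,
                               lat_south_deg=22.58, lat_north_deg=26.72)
    print(f"gradient = {g:.2f} TECU/deg, scintillation likely: {scintillation_likely(abs(g))}")
```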

  13. Travelling gradient thermocouple calibration

    International Nuclear Information System (INIS)

    Broomfield, G.H.

    1975-01-01

    A short discussion of the origins of the thermocouple EMF is used to re-introduce the idea that the Peltier and Thomson effects are indistinguishable from one another. Thermocouples may be viewed as devices which generate an EMF at junctions or as integrators of EMFs developed in thermal gradients. The thermal gradient view is considered the more appropriate; because of its better accord with theory and behaviour, the correct approach to calibration and to the investigation of service effects becomes immediately obvious. Inhomogeneities arise in thermocouples during manufacture and in service. The results of travelling gradient measurements are used to show that such effects are revealed with a resolution which depends on the length of the gradient, although they may be masked during simple immersion calibration. Proposed tests on thermocouples irradiated in a nuclear reactor are discussed.
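
    For readers unfamiliar with the "integrator of EMFs" picture, the standard thermoelectric relations below (not reproduced from the report itself) summarise the thermal-gradient view: the EMF accumulates wherever the wire lies in a temperature gradient, weighted by the local Seebeck coefficient, which is why a travelling gradient reveals inhomogeneities that an isothermal immersion calibration can mask.

```latex
% Thermal-gradient view of the thermocouple EMF (standard thermoelectric
% relations, not quoted from the report): the EMF is accumulated wherever the
% circuit lies in a temperature gradient, weighted by the local Seebeck
% coefficient S, so an inhomogeneous segment contributes only while it sits
% in a gradient -- which is what a travelling-gradient scan exploits.
\[
  E \;=\; \oint S\bigl(x, T(x)\bigr)\,\frac{\mathrm{d}T}{\mathrm{d}x}\,\mathrm{d}x ,
  \qquad \text{and for homogeneous wires A and B:} \qquad
  E \;=\; \int_{T_{\mathrm{ref}}}^{T_{\mathrm{meas}}} \bigl(S_A(T) - S_B(T)\bigr)\,\mathrm{d}T .
\]
```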

  14. Simulation of co-phase error correction of optical multi-aperture imaging system based on stochastic parallel gradient descent algorithm

    Science.gov (United States)

    He, Xiaojun; Ma, Haotong; Luo, Chuanxin

    2016-10-01

    The optical multi-aperture imaging system is an effective way to magnify the aperture and increase the resolution of a telescope optical system; the difficulty lies in detecting and correcting the co-phase error. This paper presents a method based on the stochastic parallel gradient descent (SPGD) algorithm to correct the co-phase error. Compared with the current method, the SPGD method avoids the need to detect the co-phase error. This paper analyzed the influence of piston error and tilt error on image quality based on a double-aperture imaging system, introduced the basic principle of the SPGD algorithm, and discussed the influence of the SPGD algorithm's key parameters (the gain coefficient and the disturbance amplitude) on error-control performance. The results show that SPGD can efficiently correct the co-phase error. The convergence speed of the SPGD algorithm improves as the gain coefficient and disturbance amplitude increase, but the stability of the algorithm is reduced. An adaptive gain coefficient can address this problem appropriately. These results can provide a theoretical reference for the co-phase error correction of multi-aperture imaging systems.
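
    As a concrete illustration of the technique named above, the following minimal sketch implements a generic SPGD loop: all control channels are perturbed in parallel with random plus/minus steps, the metric is evaluated on both sides of the perturbation, and the update is proportional to the metric difference times the perturbation. The image-quality metric is replaced here by a toy quadratic, and the gain, perturbation amplitude and iteration count are illustrative values, not the paper's tuned parameters.

```python
import numpy as np

# Generic stochastic parallel gradient descent (SPGD) sketch. The "metric" is
# a toy quadratic stand-in for an image-sharpness metric; gain, perturbation
# amplitude and iteration count are illustrative choices only.

def spgd(metric, u, gain=1.0, perturb=0.1, iterations=500, rng=None):
    rng = np.random.default_rng(rng)
    for _ in range(iterations):
        delta = perturb * rng.choice([-1.0, 1.0], size=u.shape)  # Bernoulli perturbation
        dj = metric(u + delta) - metric(u - delta)                # two-sided metric difference
        u = u + gain * dj * delta                                 # parallel update of all channels
    return u

if __name__ == "__main__":
    target = np.array([0.3, -0.7, 1.2])               # "true" piston/tilt corrections (toy)
    metric = lambda u: -np.sum((u - target) ** 2)     # higher is better, maximum at target
    u_final = spgd(metric, np.zeros(3), rng=0)
    print(np.round(u_final, 3))                       # converges toward the target corrections
```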

  15. Sector-based political analysis of energy transition: Green shift in the forest policy regime in France

    International Nuclear Information System (INIS)

    Sergent, Arnaud

    2014-01-01

    This article examines energy transition political process from a sector-based approach, through the analysis of recent shift in the French forest policy regime. We demonstrate that, since 2007, energy transition policies have led to a harvesting turn within the French forest policy framework, meaning that priority is given to wood mobilisation, mainly for biomass uses. In addition, our findings suggest that the political authority wielded by the state over forest policy has shifted from forest administrative services to energy agencies and local authorities. Finally, we show that, although implementation of the harvesting turn is a cause of sectoral and inter-sectoral tensions, energy transition challenge also contributes to a process of (re)institutionalisation of mediation relationships among forestry stakeholders and wood-based industries representatives. The article concludes by arguing that sectors should retain relevant institutional frameworks for actors when choosing political arrangements required for implementing energy transition policy. - Highlights: • Implementing energy transition policy potentially challenges sector-based politics. • We propose a policy regime framework and socio-political investigations. • We analyse the political impact of energy transition policy on French forest sector. • Shifts occur in sectoral policy framework, authority, and mediation relationships

  16. T2*-based MR imaging (gradient echo or susceptibility-weighted imaging) in midline and off-midline intracranial germ cell tumors. A pilot study

    International Nuclear Information System (INIS)

    Morana, Giovanni; Tortora, Domenico; Severino, Mariasavina; Rossi, Andrea; Alves, Cesar Augusto; Finlay, Jonathan L.; Nozza, Paolo; Ravegnani, Marcello; Pavanello, Marco; Milanaccio, Claudia; Garre, Maria Luisa; Maghnie, Mohamad

    2018-01-01

    The role of T2*-based MR imaging in intracranial germ cell tumors (GCTs) has not been fully elucidated. The aim of this study was to evaluate the susceptibility-weighted imaging (SWI) or T2* gradient echo (GRE) features of germinomas and non-germinomatous germ cell tumors (NGGCTs) in midline and off-midline locations. We retrospectively evaluated all consecutive pediatric patients referred to our institution between 2005 and 2016, for newly diagnosed, treatment-naive intracranial GCT, who underwent MRI, including T2*-based MR imaging (T2* GRE sequences or SWI). Standard pre- and post-contrast T1- and T2-weighted imaging characteristics along with T2*-based MR imaging features of all lesions were evaluated. Diagnosis was performed in accordance with the SIOP CNS GCT protocol criteria. Twenty-four subjects met the inclusion criteria (17 males and 7 females). There were 17 patients with germinomas, including 5 basal ganglia primaries, and 7 patients with secreting NGGCT. All off-midline germinomas presented with SWI or GRE hypointensity; among midline GCT, all NGGCTs showed SWI or GRE hypointensity whereas all but one pure germinoma were isointense or hyperintense to normal parenchyma. A significant difference emerged on T2*-based MR imaging among midline germinomas, NGGCTs, and off-midline germinomas (p < 0.001). Assessment of the SWI or GRE characteristics of intracranial GCT may potentially assist in differentiating pure germinomas from NGGCT and in the characterization of basal ganglia involvement. T2*-based MR imaging is recommended in case of suspected intracranial GCT. (orig.)

  17. T2*-based MR imaging (gradient echo or susceptibility-weighted imaging) in midline and off-midline intracranial germ cell tumors. A pilot study

    Energy Technology Data Exchange (ETDEWEB)

    Morana, Giovanni; Tortora, Domenico; Severino, Mariasavina; Rossi, Andrea [Istituto Giannina Gaslini, Neuroradiology Unit, Genoa (Italy); Alves, Cesar Augusto [Hospital Das Clinicas, Radiology Department, Sao Paulo (Brazil); Finlay, Jonathan L. [Nationwide Children's Hospital and The Ohio State University, Division of Hematology, Oncology and BMT, Columbus, OH (United States); Nozza, Paolo [Istituto Giannina Gaslini, Pathology Unit, Genoa (Italy); Ravegnani, Marcello; Pavanello, Marco [Istituto Giannina Gaslini, Neurosurgery Unit, Genoa (Italy); Milanaccio, Claudia; Garre, Maria Luisa [Istituto Giannina Gaslini, Neuro-oncology Unit, Genoa (Italy); Maghnie, Mohamad [Istituto Giannina Gaslini, University of Genova, Pediatric Endocrine Unit, Genoa (Italy)

    2018-01-15

    The role of T2*-based MR imaging in intracranial germ cell tumors (GCTs) has not been fully elucidated. The aim of this study was to evaluate the susceptibility-weighted imaging (SWI) or T2* gradient echo (GRE) features of germinomas and non-germinomatous germ cell tumors (NGGCTs) in midline and off-midline locations. We retrospectively evaluated all consecutive pediatric patients referred to our institution between 2005 and 2016, for newly diagnosed, treatment-naive intracranial GCT, who underwent MRI, including T2*-based MR imaging (T2* GRE sequences or SWI). Standard pre- and post-contrast T1- and T2-weighted imaging characteristics along with T2*-based MR imaging features of all lesions were evaluated. Diagnosis was performed in accordance with the SIOP CNS GCT protocol criteria. Twenty-four subjects met the inclusion criteria (17 males and 7 females). There were 17 patients with germinomas, including 5 basal ganglia primaries, and 7 patients with secreting NGGCT. All off-midline germinomas presented with SWI or GRE hypointensity; among midline GCT, all NGGCTs showed SWI or GRE hypointensity whereas all but one pure germinoma were isointense or hyperintense to normal parenchyma. A significant difference emerged on T2*-based MR imaging among midline germinomas, NGGCTs, and off-midline germinomas (p < 0.001). Assessment of the SWI or GRE characteristics of intracranial GCT may potentially assist in differentiating pure germinomas from NGGCT and in the characterization of basal ganglia involvement. T2*-based MR imaging is recommended in case of suspected intracranial GCT. (orig.)

  18. Climate-based policies may increase life-cycle social costs of vehicle fleet operation

    International Nuclear Information System (INIS)

    Emery, Isaac; Mbonimpa, Eric; Thal, Alfred E.

    2017-01-01

    Sustainability guidelines and regulations in the United States often focus exclusively on carbon or petroleum reductions. Though some of these policies have resulted in substantial progress toward their goals, the effects of these efforts on other social and environmental externalities are often ignored. In this study, we examine the life-cycle air pollutant emissions for alternative fuel and vehicle purchase scenarios at a military installation near a typical urban area in the United States (U.S.). We find that scenarios which minimize petroleum use or greenhouse gas emissions do not concomitantly minimize criteria air pollutant emissions. We also employ social cost methodologies to quantify economic externalities due to climate change and health-related air pollutant impacts. Accounting for the social costs of climate change and air pollution from vehicle use reveals that criteria air pollutants may have a greater total impact than greenhouse gas emissions in locations similar to the urban area examined in this study. Use of first-generation biofuels, particularly corn grain ethanol, may reduce net petroleum use at the cost of increased total health impacts. More comprehensive policies may be needed to ensure that sustainability policies result in a net benefit to society. - Highlights: • U.S. energy and transportation policies focus on petroleum use and greenhouse gases. • Use of corn ethanol at a military base in Ohio, U.S. increases total social costs vs. gasoline. • Renewable electricity provides cost-effective climate and health protection. • DOD strategy to improve energy security may damage Americans' health. • More inclusive policies needed to protect health and climate.

  19. Magnetoelectric Transverse Gradient Sensor with High Detection Sensitivity and Low Gradient Noise

    OpenAIRE

    Zhang, Mingji; Or, Siu Wing

    2017-01-01

    We report, theoretically and experimentally, the realization of a high detection performance in a novel magnetoelectric (ME) transverse gradient sensor based on the large ME effect and the magnetic field gradient (MFG) technique in a pair of magnetically-biased, electrically-shielded, and mechanically-enclosed ME composites having a transverse orientation and an axial separation. The output voltage of the gradient sensor is directly obtained from the transverse MFG-induced difference in ME vo...
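
    The working principle can be summarised by the generic first-difference gradiometer relation below (a sketch, not the authors' derivation): two ME sensing elements separated by a baseline convert the local field into voltages through their magnetoelectric voltage coefficient, and the gradient is estimated from the voltage difference, so uniform (common-mode) fields cancel while a true gradient survives.

```latex
% Generic first-difference gradiometer relation (a sketch, not the authors'
% derivation): two ME elements with magnetoelectric voltage coefficient
% \alpha_{ME}, separated axially by a baseline d, output voltages V_1 and V_2.
\[
  \frac{\partial B_T}{\partial z} \;\approx\; \frac{B_{T,1} - B_{T,2}}{d}
  \;=\; \frac{V_1 - V_2}{\alpha_{ME}\, d} ,
\]
% Uniform (common-mode) fields cancel in the difference, which is what keeps
% the gradient noise low, while a genuine transverse-field gradient survives.
```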

  20. Policy trends and reforms in the German DRG-based hospital payment system.

    Science.gov (United States)

    Klein-Hitpaß, Uwe; Scheller-Kreinsen, David

    2015-03-01

    A central structural point in all DRG-based hospital payment systems is the conversion of relative weights into actual payments. In this context policy makers need to address (amongst other things) (a) how the price level of DRG payments is changed from one period to the following period and (b) whether and how hospital payments based on DRGs are to be differentiated beyond patient characteristics, e.g. by organizational, regional or state-level factors. Both policy problems can be, and in international comparison often are, empirically addressed. In Germany relative weights are derived from a highly sophisticated empirical cost calculation, whereas the annual changes of DRG-based payments (base rates) as well as the differentiation of DRG-based hospital payments beyond patient characteristics are not empirically addressed. Rather, a complex set of regulations and quasi-market negotiations is applied. Over the last decade there were also timid attempts to foster the use of empirical data to address these points. However, these reforms failed to increase the fairness, transparency and rationality of the mechanism to convert relative weights into actual DRG-based hospital payments. Copyright © 2015 The Authors. Published by Elsevier Ireland Ltd. All rights reserved.

  1. Computational Strain Gradient Crystal Plasticity

    DEFF Research Database (Denmark)

    Niordson, Christian Frithiof; Kysar, Jeffrey W.

    2011-01-01

    A model for strain gradient crystal visco-plasticity is formulated along the lines proposed by Fleck and Willis (2009) for isotropic plasticity. Size-effects are included in the model due to the addition of gradient terms in both the free energy and the dissipation potential. A finite element solution method is presented, which delivers the slip-rate field and the velocity field based on two minimum principles. Some plane deformation problems relevant for certain specific orientations of a face centered cubic crystal under plane loading conditions are studied, and effective in-plane parameters are developed based on the crystallographic properties of the material. The problem of cyclic shear of a single crystal between rigid platens is studied, as well as void growth of a cylindrical void.

  2. Project-Based Market Competition and Policy Implications for Sustainable Developments in Building and Construction Sectors

    Directory of Open Access Journals (Sweden)

    Min-Ren Yan

    2015-11-01

    Full Text Available Building and construction sectors are significant contributors to the global economy, but their energy consumption necessitates greater commitment to sustainable developments. There is therefore a growing demand for green innovation in the form of cleaner production and policies to meet the modern requirements of sustainability. However, public work is undertaken in an environment of project-based market competition, whereby contractors routinely bid for contracts under specific project awarding systems and variations accompany the unique scope of individual projects before the final goods or services are delivered. A comprehensive understanding of the characteristics of such systems and of contractors' behavior could help to identify the leverage points of policies. This paper proposes a system dynamics model, with quantitative analysis and simulations, to demonstrate the problems that arise under different project awarding systems and ineffective market performance. A framework of market efficiency and performance measures is proposed to evaluate the project-based competition mechanism. Managerial policy implications for market efficiency and sustainable developments can thus be systematically discussed and compared through iterative computer simulations and scenario analysis.

  3. Assessment of Direct-to-Consumer Genetic Testing Policy in Korea Based on Consumer Preference.

    Science.gov (United States)

    Jeong, Gicheol

    2017-01-01

    In June 2016, Korea permitted direct-to-consumer genetic testing (DTC-GT) on 42 genes. However, both the market and industry have not yet been fully activated. Considering the aforementioned context, this study provides important insights. The Korean DTC-GT policy assessment is based on consumer preference analysis using a discrete choice experiment. In August 2016, a web-based survey was conducted to collect data from 1,200 respondents. The estimation results show that consumers prefer a DTC-GT product that is cheap, tests various items or genes, offers accurate test results, and guarantees the confidentiality of all information. However, consumers are not entirely satisfied by current DTC-GT products due to the existence of insufficient and/or inadequate policies. First, the permitted testing of 42 genes is insufficient to satisfy consumers' curiosity regarding their genes. Second, the accuracy of the DTC-GT products has not been fully verified, assessed, and communicated to consumers. Finally, regulatory loopholes that allow information leaks in the DTC-GT process can occur. These findings imply that DTC-GT requires an improvement in government policy-making criteria and the implementation of practical measures to guarantee test accuracy and genetic information. © 2017 S. Karger AG, Basel.

  4. Biodiversity and Habitat Markets—Policy, Economic, and Ecological implications of Market-Based Conservation

    Science.gov (United States)

    Pindilli, Emily J.; Casey, Frank

    2015-10-26

    This report is a primer on market-like and market-based mechanisms designed to conserve biodiversity and habitat. The types of markets and market-based approaches that were implemented or are emerging to benefit biodiversity and habitat in the United States are examined. The central approaches considered in this report include payments for ecosystem services, conservation banks, habitat exchanges, and eco-labels. Based on literature reviews and input from experts and practitioners, the report characterizes each market-based approach including policy context and structure; the theoretical basis for applying market-based approaches; the ecological effectiveness of practices and tools for measuring performance; and the future outlook for biodiversity and habitat markets. This report draws from previous research and serves as a summary of pertinent information associated with biodiversity and habitat markets while providing references to materials that go into greater detail on specific topics.

  5. Policy windows for school-based health education about nutrition in Ecuador

    DEFF Research Database (Denmark)

    Torres, Irene

    2017-01-01

    The aim of this study is to identify opportunities in policy framing for critical health education (CHE) about food and nutrition in Ecuadorian schools. The research engages in a dialogue between the perspectives of critical nutrition and political ecology, as it seeks to clarify and develop ... through critical, democratic and collaborative processes, anchored in and supported by the local community. Based on a textual analysis of health, food and education policy documents, the study finds that concrete norms endorse a biomedical stance. Consequently, focus remains on prescribing individual behavior, and schools are regarded as intervention settings, rather than a site for generating change as would be the case of health promotion using a CHE viewpoint. However, the study finds the possibility for developing a CHE perspective in the overarching rationale of “good living”, which reaffirms ...

  6. Gradient Alloy for Optical Packaging

    Data.gov (United States)

    National Aeronautics and Space Administration — Advances in additive manufacturing, such as Laser Engineered Net Shaping (LENS), enable the fabrication of compositionally gradient microstructures, i.e. gradient...

  7. Bacterial Diversity in the Digestive Tracts of Four Indian Air-Breathing Fish Species Investigated by PCR Based Denaturing Gradient Gel Electrophoresis

    Directory of Open Access Journals (Sweden)

    Suxu He

    Full Text Available ABSTRACT An investigation was conducted to identify the allochthonous microbiota (entire intestine) and the autochthonous microbiota in the proximal intestine (PI) and distal intestine (DI) of four species of Indian air-breathing fish (climbing perch, Anabas testudineus; murrel, Channa punctatus; walking catfish, Clarias batrachus; and stinging catfish, Heteropneustes fossilis) by PCR-based denaturing gradient gel electrophoresis (DGGE). High similarities of the allochthonous microbiota were observed between climbing perch and murrel, and between walking catfish and stinging catfish, indicating similar feeding behavior. The autochthonous microbiota of the PI and DI from climbing perch and murrel revealed more similarity than that obtained from walking catfish and stinging catfish. The autochthonous microbiota of climbing perch and murrel were similar to the allochthonous microbiota, but no such similarity was observed in the case of walking catfish and stinging catfish. The fish genotype and intestinal bacteria are well matched and show a co-evolutionary relationship. Three fish species had their own unique bacteria: autochthonous Enterobacter cloacae, Edwardsiella tarda and Sphingobium sp. in the DI of climbing perch; Pseudomonas sp., allochthonous and autochthonous, in the PI of walking catfish; and uncultured bacterium (EU697160.1), uncultured bacterium (JF018065.1) and uncultured bacterium (EU697160.1) for stinging catfish. In murrel, no unique bacteria were detected.

  8. A TLBO based gradient descent learning-functional link higher order ANN: An efficient model for learning from non-linear data

    Directory of Open Access Journals (Sweden)

    Bighnaraj Naik

    2018-01-01

    Full Text Available All higher order ANNs (HONNs), including the functional link ANN (FLANN), are sensitive to random initialization of weights and rely on the learning algorithms adopted. Although the selection of efficient learning algorithms for HONNs helps to improve performance, initialization of the weights with optimized rather than random values also plays an important role in their efficiency. In this paper, the problem-solving approach of teaching learning based optimization (TLBO), along with the learning ability of gradient descent learning (GDL), is used to obtain the optimal set of weights of the FLANN learning model. TLBO does not require any specific parameters; rather, it requires only some common independent parameters such as the number of populations, the number of iterations and the stopping criteria, thereby eliminating the intricacy in the selection of algorithmic parameters for adjusting the set of weights of the FLANN model. The proposed TLBO-FLANN is implemented in MATLAB and compared with GA-FLANN, PSO-FLANN and HS-FLANN. TLBO-FLANN is tested on various 5-fold cross-validated benchmark data sets from the UCI machine learning repository and analyzed under the null hypothesis using the Friedman test, Holm's procedure and post hoc ANOVA statistical analysis (Tukey test & Dunnett test).
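
    To make the FLANN-plus-GDL part of the abstract concrete, the sketch below expands each input through a fixed trigonometric functional link and trains the single linear output layer by plain gradient descent on the mean squared error. The TLBO-based weight initialization is deliberately omitted, and the expansion order, learning rate and toy data set are illustrative choices, not those of the paper.

```python
import numpy as np

# Functional link ANN (FLANN) sketch: fixed trigonometric expansion of the
# inputs, single linear output layer, trained by gradient descent learning
# (GDL). The TLBO initialization step from the abstract is omitted here.

def expand(x, order=2):
    """Trigonometric functional expansion of one input vector x (per feature)."""
    feats = [x]
    for k in range(1, order + 1):
        feats.append(np.sin(k * np.pi * x))
        feats.append(np.cos(k * np.pi * x))
    return np.concatenate([[1.0], *feats])             # bias + expanded features

def train_flann(X, y, order=2, lr=0.05, epochs=500, seed=0):
    Phi = np.array([expand(x, order) for x in X])       # fixed nonlinear expansion
    w = np.random.default_rng(seed).normal(0, 0.1, Phi.shape[1])
    for _ in range(epochs):
        err = Phi @ w - y                                # linear output layer
        w -= lr * Phi.T @ err / len(y)                   # gradient of mean squared error
    return w

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    X = rng.uniform(-1, 1, size=(200, 1))
    y = np.sin(2 * X[:, 0]) + 0.05 * rng.normal(size=200)   # toy nonlinear regression target
    w = train_flann(X, y)
    y_hat = np.array([expand(x) for x in X]) @ w
    print("training MSE:", round(float(np.mean((y_hat - y) ** 2)), 4))
```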

  9. Flux-based Enrichment Ratios of Throughfall and Stemflow Found to Vary Significantly within Urban Fragments and Along an Urban-to-Rural Gradient

    Science.gov (United States)

    Dowtin, A. L.; Levia, D. F., Jr.

    2017-12-01

    Throughfall and stemflow are important inputs of water and solutes to forest soils in both rural and urban forests. In metropolitan wooded ecosystems, a number of factors can affect flux-based enrichment ratios, including combustion of fossil fuels and proximity to industry. Use of flux-based enrichment ratios provides a means by which this modification of net precipitation chemistry can be quantified for both throughfall and stemflow, and allows for a characterization of the relative contributions of stemflow and throughfall in the delivery of nutrients and pollutants to forest soils. This study utilizes five mixed deciduous forest stands along an urban-to-rural gradient (3 urban fragments, 1 suburban fragment, and a portion of 1 contiguous rural forest) within a medium-sized metropolitan region of the United States' Northeast megalopolis, to determine how the size, shape, structure, and geographic context of remnant forest fragments determine hydrologic and solute fluxes within them. In situ observations of throughfall and stemflow (the latter of which is limited to Quercus rubra and Quercus alba) within each study plot allow for an identification and characterization of the spatial variability in solute fluxes within and between the respective sites. Preliminary observations indicate significant intra-site variability in solute concentrations as observed in both throughfall and stemflow, with higher concentrations along the respective windward edges of the study plots than at greater depths into their interiors. Higher flux-based stemflow enrichment ratios, for both Q. rubra and Q. alba, were also evident for certain ions (i.e., S2-, NO3-) in the urban forest fragments, with significantly lower ratios observed at the suburban and rural sites. Findings from this research are intended to aid in quantifying the spatial variability of the hydrologic and hydrochemical ecosystem service provisions of remnant metropolitan forest fragments. This research is supported in

  10. A road map for leptospirosis research and health policies based on country needs in Latin America.

    Science.gov (United States)

    Pereira, Martha Maria; Schneider, Maria Cristina; Munoz-Zanzi, Claudia; Costa, Federico; Benschop, Jackie; Hartskeerl, Rudy; Martinez, Julio; Jancloes, Michel; Bertherat, Eric

    2018-02-19

    This report summarizes the presentations, discussions and the recommendations coming from the Oswaldo Cruz Institute/FIOCRUZ International Workshop for Leptospirosis Research Based on Country Needs and the 5th Global Leptospirosis Environmental Action Network meeting, which was held in the city of Rio de Janeiro, Brazil, 10-12 November 2015. The event focused on health policy and worked to develop a road map as a consensus document to help guide decision-making by policymakers, funding bodies, and health care professionals. The direction that leptospirosis research should take in the coming years was emphasized, taking into account the needs of countries of Latin America, as well as experiences from other world regions, as provided by international experts. The operational concepts of "One Health" and translational research underlaid the discussions and the resulting recommendations. Despite the wide geographic distribution of leptospirosis and its impact in terms of incidence, morbidity, and mortality, leptospirosis is not yet considered a "tool-ready" disease for global initiatives. Surveillance programs need new tools and strategies for early detection, prevention, and follow-up. The major recommendations developed at the Rio meeting cover both health policy and research. The health policy recommendations should be taken into account by decisionmakers, government officials, and the Pan American Health Organization. The priorities for research, technological development, and innovation should be considered by research institutions, universities, and stakeholders.

  11. Paper Fish and Policy Conflict: Catch Shares and Ecosystem-Based Management in Maine's Groundfishery

    Directory of Open Access Journals (Sweden)

    Jennifer F. Brewer

    2011-03-01

    Full Text Available The National Oceanic and Atmospheric Administration professes support for ecosystem-based fisheries management, as mandated by Congress in the Fishery Conservation and Management Act, and as endorsed by the Obama Administration's national ocean policy. Nonetheless, driving agency policies, including catch shares and fishing quotas, focus principally on individual species, diverting attention from ecosystem considerations such as habitat, migratory patterns, trophic relationships, fishing gear, and firm-level decision making. Environmental non-governmental organization (ENGO agendas manifest similar inconsistencies. A case study of Maine's groundfishery demonstrates implications of this policy conflict at the local level. There, multigenerational fishing villages have historically pursued diversified and adaptive livelihood strategies, supported by local ecological knowledge. This tradition is increasingly eroded by regulatory constraints, including catch shares. Field observation, interviews, survey data, and archival review reveal that industry-supported, ecosystem-focused proposals have been rejected by the New England Fishery Management Council, despite the apparent failure of single-species approaches to sustain fish populations, fished ecosystems, and fishing-dependent communities. The creation of groundfishery catch share sectors is likely to perpetuate industry consolidation and political entrenchment under more mobile capital, following precedent set by days-at-sea, and making area protections and gear restrictions less likely. Pending marine spatial planning efforts could enhance social-ecological resilience by creating new opportunities for transdisciplinary decision support, and broader public participation and accountability.

  12. High Gradient Accelerator Research

    International Nuclear Information System (INIS)

    Temkin, Richard

    2016-01-01

    The goal of the MIT program of research on high gradient acceleration is the development of advanced acceleration concepts that lead to a practical and affordable next generation linear collider at the TeV energy level. Other applications, which are more near-term, include accelerators for materials processing; medicine; defense; mining; security; and inspection. The specific goals of the MIT program are: • Pioneering theoretical research on advanced structures for high gradient acceleration, including photonic structures and metamaterial structures; evaluation of the wakefields in these advanced structures • Experimental research to demonstrate the properties of advanced structures both in low-power microwave cold test and high-power, high-gradient test at megawatt power levels • Experimental research on microwave breakdown at high gradient including studies of breakdown phenomena induced by RF electric fields and RF magnetic fields; development of new diagnostics of the breakdown process • Theoretical research on the physics and engineering features of RF vacuum breakdown • Maintaining and improving the Haimson / MIT 17 GHz accelerator, the highest frequency operational accelerator in the world, a unique facility for accelerator research • Providing the Haimson / MIT 17 GHz accelerator facility as a facility for outside users • Active participation in the US DOE program of High Gradient Collaboration, including joint work with SLAC and with Los Alamos National Laboratory; participation of MIT students in research at the national laboratories • Training the next generation of Ph. D. students in the field of accelerator physics.

  13. Exploring policy impacts for servicising in product-based markets : A generic agent-based model

    NARCIS (Netherlands)

    van der Veen, R.A.C.; Kisjes, K.H.; Nikolic, I.

    2017-01-01

    The shift to markets based on servicising, i.e. market-level transitions from product-based to service-based production and consumption patterns, may contribute to achieve absolute decoupling, i.e. the combined development of economic growth and environmental impact reduction. However, the

  14. Strengthening vaccination policies in Latin America: an evidence-based approach.

    Science.gov (United States)

    Tapia-Conyer, Roberto; Betancourt-Cravioto, Miguel; Saucedo-Martínez, Rodrigo; Motta-Murguía, Lourdes; Gallardo-Rincón, Héctor

    2013-08-20

    Despite many successes in the region, Latin American vaccination policies have significant shortcomings, and further work is needed to maintain progress and prepare for the introduction of newly available vaccines. In order to address the challenges facing Latin America, the Commission for the Future of Vaccines in Latin America (COFVAL) has made recommendations for strengthening evidence-based policy-making and reducing regional inequalities in immunisation. We have conducted a comprehensive literature review to assess the feasibility of these recommendations. Standardisation of performance indicators for disease burden, vaccine coverage, epidemiological surveillance and national health resourcing can ensure comparability of the data used to assess vaccination programmes, allowing deeper analysis of how best to provide services. Regional vaccination reference schemes, as used in Europe, can be used to develop best practice models for vaccine introduction and scheduling. Successful models exist for the continuous training of vaccination providers and decision-makers, with a new Latin American diploma aiming to contribute to the successful implementation of vaccination programmes. Permanent, independent vaccine advisory committees, based on the US Advisory Committee on Immunization Practices (ACIP), could facilitate the uptake of new vaccines and support evidence-based decision-making in the administration of national immunisation programmes. Innovative financing mechanisms for the purchase of new vaccines, such as advance market commitments and cost front-loading, have shown potential for improving vaccine coverage. A common regulatory framework for vaccine approval is needed to accelerate delivery and pool human, technological and scientific resources in the region. Finally, public-private partnerships between industry, government, academia and non-profit sectors could provide new investment to stimulate vaccine development in the region, reducing prices in the

  15. Gradient-based optimization with B-splines on sparse grids for solving forward-dynamics simulations of three-dimensional, continuum-mechanical musculoskeletal system models.

    Science.gov (United States)

    Valentin, J; Sprenger, M; Pflüger, D; Röhrle, O

    2018-05-01

    Investigating the interplay between muscular activity and motion is the basis for improving our understanding of healthy or diseased musculoskeletal systems. To analyze musculoskeletal systems, computational models are used. Despite some severe modeling assumptions, almost all existing musculoskeletal system simulations appeal to multibody simulation frameworks. Although continuum-mechanical musculoskeletal system models can compensate for some of these limitations, they are essentially not considered because of their computational complexity and cost. The proposed framework is the first activation-driven musculoskeletal system model, in which the exerted skeletal muscle forces are computed using 3-dimensional, continuum-mechanical skeletal muscle models and in which muscle activations are determined based on a constraint optimization problem. Numerical feasibility is achieved by computing sparse grid surrogates with hierarchical B-splines, and adaptive sparse grid refinement further reduces the computational effort. The choice of B-splines allows the use of all existing gradient-based optimization techniques without further numerical approximation. This paper demonstrates that the resulting surrogates have low relative errors (less than 0.76%) and can be used within forward simulations that are subject to constraint optimization. To demonstrate this, we set up several different test scenarios in which an upper limb model consisting of the elbow joint, the biceps and triceps brachii, and an external load is subjected to different optimization criteria. Even though this novel method has only been demonstrated for a 2-muscle system, it can easily be extended to musculoskeletal systems with 3 or more muscles. Copyright © 2018 John Wiley & Sons, Ltd.
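
    The general pattern described above, replacing an expensive forward model with a smooth spline surrogate whose analytic derivatives feed a gradient-based optimizer, can be sketched as follows. The example uses a dense tensor-product cubic B-spline surrogate from SciPy rather than the paper's adaptive sparse grids with hierarchical B-splines, and the "expensive" model is a cheap stand-in.

```python
import numpy as np
from scipy.interpolate import RectBivariateSpline
from scipy.optimize import minimize

# Surrogate-plus-gradient-optimization sketch: sample a costly model once,
# fit a smooth spline surrogate, then optimize using the surrogate's analytic
# derivatives. Dense tensor-product splines here, not the paper's sparse grids.

def expensive_model(x, y):
    """Stand-in for a costly forward simulation (e.g., one muscle-force evaluation)."""
    return (x - 0.3) ** 2 + 2.0 * (y + 0.4) ** 2 + 0.1 * np.sin(5 * x) * np.cos(3 * y)

xs = np.linspace(-1, 1, 21)
ys = np.linspace(-1, 1, 21)
Z = expensive_model(xs[:, None], ys[None, :])          # sample on a grid once
surrogate = RectBivariateSpline(xs, ys, Z, kx=3, ky=3)  # C^2-smooth cubic surrogate

def f(p):                        # surrogate objective
    return float(surrogate.ev(p[0], p[1]))

def grad(p):                     # analytic surrogate gradient, usable by L-BFGS-B
    return np.array([float(surrogate.ev(p[0], p[1], dx=1)),
                     float(surrogate.ev(p[0], p[1], dy=1))])

res = minimize(f, x0=np.array([0.8, 0.8]), jac=grad,
               method="L-BFGS-B", bounds=[(-1, 1), (-1, 1)])
print("surrogate minimum near:", np.round(res.x, 3))
```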

  16. Comparison of spin-echo and gradient recalled echo T1 weighted MR images for quantitative voxel-based clinical brain research

    International Nuclear Information System (INIS)

    Barnden, L.R.; Crouch, B.

    2010-01-01

    Full text: New methods to normalise inter-subject global variations in T1-weighted MR (T1w) signal levels have permitted their use in voxel-based population studies of brain dysfunction. Here we address the question of whether a spin-echo (SE) or a gradient recalled echo (GRE) T1w sequence is better for this purpose. GRE images are commonly referred to as 3D MRI. SE has superior signal/noise properties to GRE but is slower to acquire, so that typical slice thicknesses are 3-5 mm compared to 1-2 mm for GRE. GRE has better grey/white matter contrast, which should permit better spatial normalization. However, unlike SE, GRE is affected by subject-specific magnetic field inhomogeneities that distort the images. We acquired T1w brain images for 25 chronic fatigue syndrome (CFS) patients and 25 normal controls (NC) with TR/TE/flip-angle of 600 ms/15 ms/90 deg for SE and 5.76 ms/1.9 ms/9 deg for GRE. For GRE, the magnetic field inhomogeneity related signal level distortions could be corrected, but not the spatial distortions. After spatial normalization we subjected them to voxel-based statistical analysis with adjustment for global signal level using the SPM5 package. Initially, the same spatial normalization deformations were applied to both SE and GRE after coregistering them. Although the SPM regressions of SE and GRE yielded similar spatial distributions of significance, the SE regressions were consistently statistically stronger. For example, in one strong regression, the corrected cluster P value was twenty times stronger (1.0e-5 versus 1.0e-3). T1w SE images have proved better than T1w GRE images in quantitative analysis in a clinical research study. (author)

  17. Market power and output-based refunding of environmental policy revenues

    International Nuclear Information System (INIS)

    Fischer, Carolyn

    2011-01-01

    Output-based refunding of environmental policy revenues combines a tax on emissions with a production subsidy, typically in a revenue-neutral fashion. With imperfect competition, subsidies can alleviate output underprovision. However, when market shares are significant, endogenous refunding reduces abatement incentives and the marginal net tax or subsidy. If market shares differ, marginal abatement costs will not be equalized, and production is shifted among participants. In an asymmetric Cournot duopoly, endogenous refunding leads to higher output, emissions, and overall costs compared with a fixed rebate program targeting the same emissions intensity. These results hold whether emissions rates are determined simultaneously with output or strategically in a two-stage model. (author)

  18. No hospital left behind? Education policy lessons for value-based payment in healthcare.

    Science.gov (United States)

    Maurer, Kristin A; Ryan, Andrew M

    2016-01-01

    Value-based payment systems have been widely implemented in healthcare in an effort to improve the quality of care. However, these programs have not broadly improved quality, and some evidence suggests that they may increase inequities in care. No Child Left Behind is a parallel effort in education to address uneven achievement and inequalities. Yet, by penalizing the lowest performers, No Child Left Behind's approach to accountability has led to a number of unintended consequences. This article draws lessons from education policy, arguing that financial incentives should be designed to support the lowest performers to improve quality. © 2015 Society of Hospital Medicine.

  19. Spatial Planning and Policy Evaluation in an Urban Conurbation: a Regional Agent-Based Economic Model

    Directory of Open Access Journals (Sweden)

    Luzius Stricker

    2017-03-01

    Full Text Available This paper studies different functions and relations between 45 agglomerated municipalities in southern Switzerland (Ticino), using a territorial agent-based model (ABM). Our research adopts a bottom-up approach to urban systems, considering the agglomeration mechanism and the effects of different regional and urban policies, and simulates the individual actions of diverse agents on a real city. By simulating these individual actions and measuring the resulting system behaviour and outcomes over time, ABMs effectively provide a good test bed for evaluating the impact of different policies. The database is created by merging official Swiss secondary data for one reference year (2011) with Eurostat and OECD-Regpat data. The results highlight that understanding municipalities' functions on the territory appears to be essential for designing a solid institutional agglomeration (or city). From a methodological point of view, we contribute to improving the application of territorial ABMs. Finally, our results provide a robust base for evaluating, in a dynamic way, various political interventions in order to ensure sustainable development of the agglomeration and the surrounding territories. Applying the analyses and the model on a larger scale, including further regions and conurbations, and including more indicators and variables to obtain a more detailed and characteristic model, will constitute a further step of the research.

  20. A preventive maintenance policy based on dependent two-stage deterioration and external shocks

    International Nuclear Information System (INIS)

    Yang, Li; Ma, Xiaobing; Peng, Rui; Zhai, Qingqing; Zhao, Yu

    2017-01-01

    This paper proposes a preventive maintenance policy for a single-unit system whose failure has two competing and dependent causes, i.e., internal deterioration and sudden shocks. The internal failure process is divided into two stages, i.e. normal and defective. Shocks arrive according to a non-homogeneous Poisson process (NHPP) and lead to immediate failure of the system. The occurrence rate of a shock is affected by the state of the system. Both an age-based replacement and a finite number of periodic inspections are scheduled simultaneously to deal with the competing failures. The objective of this study is to determine the optimal preventive replacement interval, inspection interval and number of inspections such that the expected cost per unit time is minimized. A case study on oil pipeline maintenance is presented to illustrate the maintenance policy. - Highlights: • A maintenance model based on two-stage deterioration and sudden shocks is developed. • The impact of the internal system state on the external shock process is studied. • A new preventive maintenance strategy combining age-based replacements and periodic inspections is proposed. • Postponed replacement of a defective system is provided by restricting the number of inspections.
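
    A renewal-reward Monte Carlo sketch of this kind of policy is given below: the unit deteriorates through normal and defective stages, shocks arrive with a state-dependent intensity, periodic inspections trigger preventive replacement of a defective unit, an age limit forces preventive replacement otherwise, and the expected cost per unit time is estimated over many cycles. All rates, costs and intervals are hypothetical illustration values, not those of the paper's oil-pipeline case study, and a full optimization over the three decision variables is left out; the example merely compares a few inspection intervals.

```python
import numpy as np

# Renewal-reward Monte Carlo sketch of a two-stage deterioration model with
# state-dependent shocks, periodic inspections and an age-based replacement.
# All parameter values below are hypothetical illustration values.

rng = np.random.default_rng(42)

RATE_DEFECT = 1 / 5.0        # normal -> defective transition rate
RATE_FAIL = 1 / 2.0          # defective -> internal failure rate
SHOCK_NORMAL = 0.02          # shock intensity while normal
SHOCK_DEFECTIVE = 0.10       # shock intensity while defective (state-dependent NHPP)
C_INSPECT, C_PREVENT, C_FAIL = 1.0, 10.0, 50.0

def simulate_cycle(tau, T, n_insp):
    """One renewal cycle with inspection interval tau, age limit T, at most n_insp inspections.
    Returns (cycle_length, cycle_cost)."""
    t_defect = rng.exponential(1 / RATE_DEFECT)
    t_int_fail = t_defect + rng.exponential(1 / RATE_FAIL)
    # First shock time from a piecewise-constant intensity (higher once defective).
    u = rng.exponential(1.0)                      # unit-rate exponential "hazard budget"
    if u <= SHOCK_NORMAL * t_defect:
        t_shock = u / SHOCK_NORMAL
    else:
        t_shock = t_defect + (u - SHOCK_NORMAL * t_defect) / SHOCK_DEFECTIVE
    t_fail = min(t_int_fail, t_shock)

    cost = 0.0
    for k in range(1, n_insp + 1):                # periodic inspections at k * tau
        t_k = k * tau
        if t_k >= min(t_fail, T):                 # failure or age replacement comes first
            break
        cost += C_INSPECT
        if t_k >= t_defect:                       # defect found -> preventive replacement
            return t_k, cost + C_PREVENT
    if t_fail < T:                                # corrective replacement on failure
        return t_fail, cost + C_FAIL
    return T, cost + C_PREVENT                    # age-based preventive replacement

def cost_rate(tau, T, n_insp, n_cycles=20000):
    lengths, costs = zip(*(simulate_cycle(tau, T, n_insp) for _ in range(n_cycles)))
    return sum(costs) / sum(lengths)              # renewal-reward: expected cost per unit time

if __name__ == "__main__":
    for tau in (1.0, 2.0, 3.0):
        print(f"tau={tau}: cost rate ~ {cost_rate(tau, T=8.0, n_insp=3):.3f}")
```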

  1. An Efficient Key-Policy Attribute-Based Encryption Scheme with Constant Ciphertext Length

    Directory of Open Access Journals (Sweden)

    Changji Wang

    2013-01-01

    Full Text Available There is an acceleration of the adoption of cloud computing among enterprises. However, moving the infrastructure and sensitive data from the trusted domain of the data owner to a public cloud poses severe security and privacy risks. Attribute-based encryption (ABE) is a new cryptographic primitive which provides a promising tool for addressing the problem of secure, fine-grained data sharing and decentralized access control. Key-policy attribute-based encryption (KP-ABE) is an important type of ABE, in which senders encrypt messages under a set of attributes and private keys are associated with access structures that specify which ciphertexts the key holder will be allowed to decrypt. In most existing KP-ABE schemes, the ciphertext size grows linearly with the number of attributes embedded in the ciphertext. In this paper, we propose a new KP-ABE construction with constant ciphertext size. In our construction, the access policy can be expressed as any monotone access structure. Meanwhile, the ciphertext size is independent of the number of ciphertext attributes, and the number of bilinear pairing evaluations is reduced to a constant. We prove that our scheme is semantically secure in the selective-set model based on the general Diffie-Hellman exponent assumption.
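
    Stripped of the pairing-based cryptography (and of the constant-ciphertext-size machinery that is the paper's actual contribution), the key-policy idea reduces to an access-control check: the ciphertext carries an attribute set, the private key embeds a monotone access structure, and decryption is possible only if the structure is satisfied. The sketch below shows only that monotone-structure evaluation, with hypothetical attribute names.

```python
# Access-control semantics of KP-ABE only: a ciphertext is labelled with
# attributes, a private key embeds a monotone access structure, and decryption
# succeeds iff the structure is satisfied. No actual encryption is performed.

from dataclasses import dataclass
from typing import Sequence, Union

@dataclass
class Gate:
    op: str                                   # "AND" or "OR" (monotone gates only)
    children: Sequence["Policy"]

Policy = Union[str, Gate]                     # a leaf is an attribute name

def satisfied(policy: Policy, ciphertext_attrs: set) -> bool:
    """Evaluate a monotone access structure against the ciphertext's attribute set."""
    if isinstance(policy, str):
        return policy in ciphertext_attrs
    results = [satisfied(child, ciphertext_attrs) for child in policy.children]
    return all(results) if policy.op == "AND" else any(results)

if __name__ == "__main__":
    # Hypothetical key policy: ("cardiology" AND "attending") OR "auditor"
    key_policy = Gate("OR", [Gate("AND", ["cardiology", "attending"]), "auditor"])
    print(satisfied(key_policy, {"cardiology", "attending"}))   # True  -> may decrypt
    print(satisfied(key_policy, {"cardiology", "resident"}))    # False -> may not
```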

  2. Exploring Context and the Factors Shaping Team-Based Primary Healthcare Policies in Three Canadian Provinces: A Comparative Analysis.

    Science.gov (United States)

    Misfeldt, Renée; Suter, Esther; Mallinson, Sara; Boakye, Omenaa; Wong, Sabrina; Nasmith, Louise

    2017-08-01

    This paper discusses findings from a high-level scan of the contextual factors and actors that influenced policies on team-based primary healthcare in three Canadian provinces: British Columbia, Alberta and Saskatchewan. The team searched diverse sources (e.g., news reports, press releases, discussion papers) for contextual information relevant to primary healthcare teams. We also conducted qualitative interviews with key health system informants from the three provinces. Data from documents and interviews were analyzed qualitatively using thematic analysis. We then wrote narrative summaries highlighting pivotal policy and local system events and the influence of actors and context. Our overall findings highlight the value of reviewing the context, relationships and power dynamics, which come together and create "policy windows" at different points in time. We observed physician-centric policy processes with some recent moves to rebalance power and be inclusive of other actors and perspectives. The context review also highlighted the significant influence of changes in political leadership and prioritization in driving policies on team-based care. While this existed in different degrees in the three provinces, the push and pull of political and professional power dynamics shaped Canadian provincial policies governing team-based care. If we are to move team-based primary healthcare forward in Canada, the provinces need to review the external factors and the complex set of relationships and trade-offs that underscore the policy process. Copyright © 2017 Longwoods Publishing.

  3. Multicriteria-based decision aiding technique for assessing energy policy elements-demonstration to a case in Bangladesh

    International Nuclear Information System (INIS)

    Rahman, Md. Mizanur; Paatero, Jukka V.; Lahdelma, Risto; Wahid, Mazlan A.

    2016-01-01

    Highlights: • A multicriteria technique for assessing energy policy elements has been proposed. • Energy policy elements have been examined based on assigned criteria. • This assessment gives results which are representative of all stakeholders. • Policy elements which are chosen by this method promote sustainability. - Abstract: The adverse environmental consequences and diminishing trend of fossil fuel reserves indicate a serious need for vibrant and judicious energy policy. Energy policy involves a number of stakeholders, and needs to incorporate the interests and requirements of all the key stakeholder groups. This paper presents a methodological technique to assist with formulating, evaluating, and promoting the energy policy of a country in a transparent and representative way with clear scientific justifications and balanced assessments. The multicriteria decision analysis approach has been a widely used technique for evaluating different alternatives based on the interests of a multitude of stakeholders, and goals. This paper utilizes the SMAA (Stochastic Multicriteria Acceptability Analysis) tool, which can evaluate different alternatives by incorporating multiple criteria, in order to examine the preferences of different policy elements. We further extend this technique by incorporating the LEAP model (Long-range Energy Alternatives Planning system) to assess the emission impacts of different policy elements. We demonstrate the application of this evaluation technique by an analysis of four hypothetical policy elements namely Business-as usual (BAU), Renewables (REN), Renewable-biomass only (REN-b), and Energy conservation and efficient technologies (ECET). These are applied to the case of sharing fuel sources for power generation for the Bangladesh power sector. We found that the REN-b and REN policy elements were the best and second best alternatives with 41% and 32% acceptability respectively. This technique gives transparent information for

  4. Area-based initiatives – engines of innovation in planning and policy?

    DEFF Research Database (Denmark)

    Larsen, Jacob Norvig; Agger, Annika

    Nevertheless, there is still considerable uncertainty as to the most important outcomes of place-based initiatives. Evaluations have mostly focussed on direct quantitative socio-economic indicators. These have often been quite insignificant, while other effects have been largely neglected. This paper proposes ... and development in planning culture turns out to be a more substantial result than the reduction of social exclusion and economic deprivation. The paper analyses all available official evaluation studies of Danish place-based urban policy initiatives from the mid-1990s through 2010. In addition to this, recent studies of local planning culture change are discussed. The main findings are that during the past two decades a general change in planning culture has developed gradually, triggered by full-scale experimentation with place-based approaches in urban regeneration. Second, planners as well as public administrators ...

  5. Uniform gradient expansions

    CERN Document Server

    Giovannini, Massimo

    2015-01-01

    Cosmological singularities are often discussed by means of a gradient expansion that can also describe, during a quasi-de Sitter phase, the progressive suppression of curvature inhomogeneities. While the inflationary event horizon is being formed the two mentioned regimes coexist and a uniform expansion can be conceived and applied to the evolution of spatial gradients across the protoinflationary boundary. It is argued that conventional arguments addressing the preinflationary initial conditions are necessary but generally not sufficient to guarantee a homogeneous onset of the conventional inflationary stage.

  6. High gradient superconducting quadrupoles

    International Nuclear Information System (INIS)

    Lundy, R.A.; Brown, B.C.; Carson, J.A.; Fisk, H.E.; Hanft, R.H.; Mantsch, P.M.; McInturff, A.D.; Remsbottom, R.H.

    1987-07-01

    Prototype superconducting quadrupoles with a 5 cm aperture and gradient of 16 kG/cm have been built and tested as candidate magnets for the final focus at SLC. The magnets are made from NbTi Tevatron-style cable with 10 inner and 14 outer turns per quadrant. Quench performance and multipole data are presented. Design and data for a low-current, high-gradient quadrupole, similar in cross section but wound with a cable consisting of five insulated conductors, are also discussed.

  7. On fracture in finite strain gradient plasticity

    DEFF Research Database (Denmark)

    Martínez Pañeda, Emilio; Niordson, Christian Frithiof

    2016-01-01

    In this work a general framework for damage and fracture assessment including the effect of strain gradients is provided. Both mechanism-based and phenomenological strain gradient plasticity (SGP) theories are implemented numerically using finite deformation theory, and crack tip fields are investigated. Differences and similarities between the two approaches within continuum SGP modeling are highlighted and discussed. Local strain hardening promoted by geometrically necessary dislocations (GNDs) in the vicinity of the crack leads to much higher stresses, relative to classical plasticity, … in the multiple parameter version of the phenomenological SGP theory. Since this also dominates the mechanics of indentation testing, results suggest that length parameters characteristic of mode I fracture should be inferred from nanoindentation.

  8. Developing evidence-based ethical policies on the migration of health workers: conceptual and practical challenges

    Directory of Open Access Journals (Sweden)

    Adams Orvill

    2003-10-01

    It is estimated that in 2000 almost 175 million people, or 2.9% of the world's population, were living outside their country of birth, compared to 100 million, or 1.8% of the total population, in 1995. As the global labour market strengthens, it is increasingly highly skilled professionals who are migrating. Medical practitioners and nurses represent a small proportion of highly skilled workers who migrate, but the loss of health human resources for developing countries can mean that the capacity of the health system to deliver health care equitably is compromised. However, data to support claims on both the extent and the impact of migration in developing countries are patchy and often anecdotal, based on limited databases with highly inconsistent categories of education and skills. The aim of this paper is to examine some key issues related to the international migration of health workers in order to better understand its impact and to find entry points to developing policy options with which migration can be managed. The paper is divided into six sections. In the first, the different types of migration are reviewed. Some global trends are depicted in the second section. Scarcity of data on health worker migration is one major challenge and this is addressed in section three, which reviews and discusses different data sources. The consequences of health worker migration and the financial flows associated with it are presented in sections four and five, respectively. To illustrate the main issues addressed in the previous sections, a case study based mainly on the United Kingdom is presented in section six. This section includes a discussion on policies and ends by addressing the policy options from a broader perspective.

  9. Ultimate gradient in solid-state accelerators

    International Nuclear Information System (INIS)

    Whittum, D.H.

    1998-08-01

    The authors recall the motivation for research in high-gradient acceleration and the problems posed by a compact collider. They summarize the phenomena known to appear in operation of a solid-state structure with large fields, and research relevant to the question of the ultimate gradient. They take note of new concepts, and examine one in detail: a miniature particle accelerator based on an active millimeter-wave circuit and parallel particle beams.

  10. Evidence based policy making in the European Union. The role of the scientific community

    Energy Technology Data Exchange (ETDEWEB)

    Majcen, Spela [Euro-Mediterranean Univ. (EMUNI), Portoroz (Slovenia)

    2017-03-15

    At a time when the acquis of the European Union (EU) has reached a high level of technical complexity, particularly in policy fields such as environmental legislation, it is important to look at the kind of information and data on which policy decisions are based. This position paper looks at the extent to which an evidence-based decision-making process is being considered in the EU institutions when it comes to adopting legislation in the field of environment at the EU level. The paper calls for closer collaboration between scientists and decision-makers to ensure that correct data is understood and taken into consideration when drafting, amending, negotiating and adopting new legal texts at all levels of the EU decision-making process. It concludes that better awareness of the need for such collaboration among the decision-makers as well as the scientific community would benefit the process and quality of the final outcomes (legislation).

  11. Evidence based policy making in the European Union: the role of the scientific community.

    Science.gov (United States)

    Majcen, Špela

    2017-03-01

    At a time when the acquis of the European Union (EU) has reached a high level of technical complexity, particularly in policy fields such as environmental legislation, it is important to look at the kind of information and data on which policy decisions are based. This position paper looks at the extent to which an evidence-based decision-making process is being considered in the EU institutions when it comes to adopting legislation in the field of environment at the EU level. The paper calls for closer collaboration between scientists and decision-makers to ensure that correct data is understood and taken into consideration when drafting, amending, negotiating and adopting new legal texts at all levels of the EU decision-making process. It concludes that better awareness of the need for such collaboration among the decision-makers as well as the scientific community would benefit the process and quality of the final outcomes (legislation).

  12. Evidence based policy making in the European Union. The role of the scientific community

    International Nuclear Information System (INIS)

    Majcen, Spela

    2017-01-01

    At a time when the acquis of the European Union (EU) has reached a high level of technical complexity, particularly in policy fields such as environmental legislation, it is important to look at the kind of information and data on which policy decisions are based. This position paper looks at the extent to which an evidence-based decision-making process is being considered in the EU institutions when it comes to adopting legislation in the field of environment at the EU level. The paper calls for closer collaboration between scientists and decision-makers to ensure that correct data is understood and taken into consideration when drafting, amending, negotiating and adopting new legal texts at all levels of the EU decision-making process. It concludes that better awareness of the need for such collaboration among the decision-makers as well as the scientific community would benefit the process and quality of the final outcomes (legislation).

  13. Modeling urban expansion policy scenarios using an agent-based approach for Guangzhou Metropolitan Region of China

    Directory of Open Access Journals (Sweden)

    Guangjin Tian

    2014-09-01

    Policy makers and the human decision processes of urban planning have an impact on urban expansion. The behaviors and decision modes of regional authority, real estate developer, resident, and farmer agents and their interactions can be simulated by the analytical hierarchy process (AHP) method. The driving factors are regressed with urban dynamics instead of static land-use types. Agents' behaviors and decision modes have an impact on the urban dynamic pattern by adjusting parameter weights. We integrate an agent-based model (ABM) with AHP to investigate a complex decision-making process and future urban dynamic processes. Three policy scenarios for baseline development, rapid development, and green land protection have been applied to predict the future development patterns of the Guangzhou metropolitan region. A future policy scenario analysis can help policy makers to understand the possible results. These individuals can adjust their policies and decisions according to their different objectives.
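    As an illustration of the AHP step that such agent-based frameworks rely on, the sketch below derives priority weights from a pairwise comparison matrix using the row geometric mean approximation; the decision factors and comparison values are hypothetical and are not those of the Guangzhou model.

```python
# Illustrative AHP priority-weight derivation for agent decision factors (hypothetical data).
import numpy as np

factors = ["distance to roads", "land price", "neighborhood density", "policy restriction"]

# Saaty-style reciprocal pairwise comparison matrix A, where A[i, j] expresses
# how much more important factor i is than factor j (values are assumptions).
A = np.array([
    [1,   3,   5,   2],
    [1/3, 1,   3,   1/2],
    [1/5, 1/3, 1,   1/4],
    [1/2, 2,   4,   1],
], dtype=float)

# Approximate the principal eigenvector by the normalized geometric mean of each row.
geo_mean = A.prod(axis=1) ** (1.0 / A.shape[0])
weights = geo_mean / geo_mean.sum()

for f, w in zip(factors, weights):
    print(f"{f}: weight {w:.3f}")

# In an agent-based land-use model, each agent could score candidate cells as the
# weighted sum of its normalized factor values and develop the highest-scoring cell.
```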

  14. A condition-based maintenance policy for multi-component systems with Lévy copulas dependence

    International Nuclear Information System (INIS)

    Li, Heping; Deloux, Estelle; Dieulle, Laurence

    2016-01-01

    In this paper, we propose a new condition-based maintenance policy for multi-component systems taking into account stochastic and economic dependences. The stochastic dependence between components due to common environment is modelled by Lévy copulas. Its influence on the maintenance optimization is investigated with different dependence degrees. On the issue of economic dependence providing opportunities to group maintenance activities, a new maintenance decision rule is proposed which permits maintenance grouping. In order to evaluate the performance of the proposed maintenance policy, we compare it to the classical maintenance policies. - Highlights: • A new adaptive maintenance policy for grouping maintenance actions is proposed. • The impacts of both economic and stochastic dependences are investigated. • The performance of the proposed maintenance policy is evaluated under different system configurations. • The stochastic dependence is modelled by Lévy copulas. • The proposed maintenance decision rule can take full advantage of economic and stochastic dependences.

  15. Manipulating the Gradient

    Science.gov (United States)

    Gaze, Eric C.

    2005-01-01

    We introduce a cooperative learning, group lab for a Calculus III course to facilitate comprehension of the gradient vector and directional derivative concepts. The lab is a hands-on experience allowing students to manipulate a tangent plane and empirically measure the effect of partial derivatives on the direction of optimal ascent. (Contains 7…
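    The lab's two central ideas, the gradient vector and the directional derivative, can also be checked numerically; the short sketch below uses an illustrative surface (not taken from the article) and central differences.

```python
# Numerical illustration of the gradient and directional derivative (example function).
import numpy as np

def f(x, y):
    return x**2 + 3 * x * y + y**2      # illustrative surface z = f(x, y)

def gradient(func, x, y, h=1e-6):
    """Central-difference estimate of (df/dx, df/dy)."""
    dfdx = (func(x + h, y) - func(x - h, y)) / (2 * h)
    dfdy = (func(x, y + h) - func(x, y - h)) / (2 * h)
    return np.array([dfdx, dfdy])

p = (1.0, 2.0)
grad = gradient(f, *p)

# The directional derivative along a unit vector u is grad . u; it is largest
# when u points along the gradient, i.e., the direction of optimal ascent.
u = np.array([1.0, 1.0]) / np.sqrt(2)
print("gradient at p:", grad)
print("directional derivative along u:", grad @ u)
print("steepest-ascent direction:", grad / np.linalg.norm(grad))
```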

  16. Dynamic analysis of the urban-based low-carbon policy using system dynamics: Focused on housing and green space

    Energy Technology Data Exchange (ETDEWEB)

    Hong, Taehoon, E-mail: hong7@yonsei.ac.kr [Associate Professor, Department of Architectural Engineering, Yonsei University, Seoul, 120-749 (Korea, Republic of); Kim, Jimin, E-mail: cookie6249@yonsei.ac.kr; Jeong, Kwangbok, E-mail: kbjeong7@yonsei.ac.kr [Research Assistant and Ph.D. Student, Department of Architectural Engineering, Yonsei University, Seoul, 120-749 (Korea, Republic of); Koo, Choongwan, E-mail: cwkoo@yonsei.ac.kr [Postdoctoral Fellow, Department of Architectural Engineering, Yonsei University, Seoul, 120-749 (Korea, Republic of)

    2015-02-09

    To systematically manage the energy consumption of existing buildings, the government has to enforce greenhouse gas reduction policies. However, most of the policies are not properly executed because they do not consider various factors from the urban level perspective. Therefore, this study aimed to conduct a dynamic analysis of an urban-based low-carbon policy using system dynamics, with a specific focus on housing and green space. This study was conducted in the following steps: (i) establishing the variables of urban-based greenhouse gases (GHGs) emissions; (ii) creating a stock/flow diagram of urban-based GHGs emissions; (iii) conducting an information analysis using the system dynamics; and (iv) proposing the urban-based low-carbon policy. If a combined energy policy that uses the housing sector (30%) and the green space sector (30%) at the same time is implemented, 2020 CO2 emissions will be 7.23 million tons (i.e., 30.48% below 2020 business-as-usual), achieving the national carbon emissions reduction target (26.9%). The results of this study could contribute to managing and improving the fundamentals of the urban-based low-carbon policies to reduce greenhouse gas emissions.
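    A minimal stock-and-flow calculation of the kind a system-dynamics model performs is sketched below; the structure, starting level, growth rate, and policy coefficients are assumptions for illustration only and do not reproduce the authors' model.

```python
# Hedged stock-and-flow sketch: annual urban CO2 emissions under a combined policy.
co2 = 10.4              # stock: annual urban CO2 emissions, million tons (hypothetical)
growth = 0.02           # inflow rate: business-as-usual emissions growth per year (hypothetical)
housing_share = 0.30    # policy lever: reduction effort applied in the housing sector
green_share = 0.30      # policy lever: reduction effort applied in the green-space sector
policy_strength = 0.06  # outflow coefficient per unit of combined effort (hypothetical)

for year in range(2015, 2021):
    print(year, round(co2, 2))
    inflow = co2 * growth                                     # growth from urban activity
    outflow = co2 * policy_strength * (housing_share + green_share)
    co2 += inflow - outflow                                   # annual stock update (Euler step)
```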

  17. Dynamic analysis of the urban-based low-carbon policy using system dynamics: Focused on housing and green space

    International Nuclear Information System (INIS)

    Hong, Taehoon; Kim, Jimin; Jeong, Kwangbok; Koo, Choongwan

    2015-01-01

    To systematically manage the energy consumption of existing buildings, the government has to enforce greenhouse gas reduction policies. However, most of the policies are not properly executed because they do not consider various factors from the urban level perspective. Therefore, this study aimed to conduct a dynamic analysis of an urban-based low-carbon policy using system dynamics, with a specific focus on housing and green space. This study was conducted in the following steps: (i) establishing the variables of urban-based greenhouse gases (GHGs) emissions; (ii) creating a stock/flow diagram of urban-based GHGs emissions; (iii) conducting an information analysis using the system dynamics; and (iv) proposing the urban-based low-carbon policy. If a combined energy policy that uses the housing sector (30%) and the green space sector (30%) at the same time is implemented, 2020 CO2 emissions will be 7.23 million tons (i.e., 30.48% below 2020 business-as-usual), achieving the national carbon emissions reduction target (26.9%). The results of this study could contribute to managing and improving the fundamentals of the urban-based low-carbon policies to reduce greenhouse gas emissions.

  18. Study of Evaluation OSH Management System Policy Based On Safety Culture Dimensions in Construction Project

    Science.gov (United States)

    Latief, Yusuf; Armyn Machfudiyanto, Rossy; Arifuddin, Rosmariani; Mahendra Fira Setiawan, R.; Yogiswara, Yoko

    2017-07-01

    Safety culture in the construction industry strongly influences the socio-economic conditions that determine a country’s competitiveness. Based on the data, the accident rate of construction projects in Indonesia is very high. In the era of the Asian Economic Community (AEC), Indonesian contractors are required to improve competitiveness, one element of which is delivering projects with zero accidents. Primary and secondary data, validated by literature experts and questionnaire respondents, were analyzed using SmartPLS to obtain the pattern of relationships between safety culture dimensions and safety performance. The results showed that behaviors and cost of safety are the dimensions that significantly affect safety performance, and that visible policy based on Regulation of Public Work and Housing No 5/PRT/M/2014 needs to be strengthened to lower the accident rate.

  19. Application of theory-based evaluation for the critical analysis of national biofuel policy: A case study in Malaysia.

    Science.gov (United States)

    Abdul-Manan, Amir F N; Baharuddin, Azizan; Chang, Lee Wei

    2015-10-01

    Theory-based evaluation (TBE) is an effectiveness assessment technique that critically analyses the theory underlying an intervention. Whilst its use has been widely reported in the area of social programmes, it is less applied in the field of energy and climate change policy evaluations. This paper reports a recent study that has evaluated the effectiveness of the national biofuel policy (NBP) for the transport sector in Malaysia by adapting a TBE approach. Three evaluation criteria were derived from the official goals of the NBP, namely: (i) improve sustainability and environmental friendliness, (ii) reduce fossil fuel dependency, and (iii) enhance stakeholders' welfare. The policy theory underlying the NBP has been reconstructed through critical examination of the policy and regulatory documents, followed by a rigorous appraisal of the causal links within the policy theory through the application of scientific knowledge. This study has identified several weaknesses in the policy framework that may render the policy ineffective. Experiences with the use of a TBE approach for policy evaluations are also shared in this report. Copyright © 2015 Elsevier Ltd. All rights reserved.

  20. Comprehensive optimisation of China’s energy prices, taxes and subsidy policies based on the dynamic computable general equilibrium model

    International Nuclear Information System (INIS)

    He, Y.X.; Liu, Y.Y.; Du, M.; Zhang, J.X.; Pang, Y.X.

    2015-01-01

    Highlights: • Energy policy is defined as a compilation of energy price, tax and subsidy policies. • The maximisation of total social benefit is the optimisation objective. • A more rational carbon tax ranges from 10 to 20 Yuan/ton under the current situation. • The optimal coefficient pricing is more conducive to maximising total social benefit. - Abstract: Under the condition of increasingly serious environmental pollution, rational energy policy plays an important role in the practical significance of energy conservation and emission reduction. This paper defines energy policies as the compilation of energy prices, taxes and subsidy policies. Moreover, it establishes an optimisation model of China’s energy policy based on the dynamic computable general equilibrium model, which maximises the total social benefit, in order to explore the comprehensive influences of a carbon tax, the sales pricing mechanism and the renewable energy fund policy. The results show that when the change rates of gross domestic product and consumer price index are ±2%, ±5% and the renewable energy supply structure ratio is 7%, the more reasonable carbon tax ranges from 10 to 20 Yuan/ton, and the optimal coefficient pricing mechanism is more conducive to the objective of maximising the total social benefit. From the perspective of optimising the overall energy policies, if the upper limit of the change rate in the consumer price index is 2.2%, the existing renewable energy fund should be improved.

  1. School-Based Obesity-Prevention Policies and Practices and Weight-Control Behaviors among Adolescents.

    Science.gov (United States)

    Larson, Nicole; Davey, Cynthia S; Caspi, Caitlin E; Kubik, Martha Y; Nanney, Marilyn S

    2017-02-01

    The promotion of healthy eating and physical activity within school settings is an important component of population-based strategies to prevent obesity; however, adolescents may be vulnerable to weight-related messages, as rapid development during this life stage often leads to preoccupation with body size and shape. This study examines secular trends in secondary school curricula topics relevant to the prevention of unhealthy weight-control behaviors; describes cross-sectional associations between weight-related curricula content and students' use of weight-control behaviors; and assesses whether implementation of school-based obesity-prevention policies/practices is longitudinally related to students' weight-control behaviors. The Minnesota School Health Profiles and Minnesota Student Survey (grades 9 and 12) data were used along with National Center for Education Statistics data to examine secular trends, cross-sectional associations (n=141 schools), and longitudinal associations (n=42 schools). Students self-reported their height and weight along with past-year use of healthy (eg, exercise), unhealthy (eg, fasting), and extreme (eg, use laxatives) weight-control behaviors. Analyses used descriptive statistics, generalized estimating equations, and generalized linear regression models accounting for school-level demographics. There was no observable pattern during the years 2008 to 2014 in the mean number of curricula topics addressing unhealthy weight-control behaviors, despite an increase in the prevalence of curricula addressing acceptance of body-size differences. Including three vs fewer weight-control topics and specifically including the topic of eating disorders in the curricula was related to a lower school-level percent of students using any extreme weight-control behaviors. In contrast, an overall measure of implementing school-based obesity-prevention policies/practices (eg, prohibited advertising) was unrelated to use of unhealthy or extreme behaviors.

  2. Struggling between resources-based and sustainable development schemes-An analysis of Egypt's recent energy policy

    International Nuclear Information System (INIS)

    Suding, Paul H.

    2011-01-01

    This paper discusses Egypt's recent energy sector and policy developments against the objectives and issues of the energy policy strategy adopted in 2007. It reviews energy supply and demand, pricing and subsidies as well as institutional arrangements and respective reform projects from the perspective of assessing achievements. It identifies the consequences of the policy and the long-term outlook and reports on the internal policy struggle. The policy strategy of 2007 is directed at energy security, social and industrial development. Environmental or climate objectives play no role. Energy efficiency is at best considered an instrument. The implementation of the strategy has been successful on the supply side, but not on the demand side. Price reform, refocusing subsidies and sector reform were not achieved. This has negatively affected energy efficiency and diversification, energy availability and supply security, the State budget and the sector's financial capacity. It causes rising energy import requirements and increasing risks to the current account balance. In spite of that, the 'old guard' and industrial establishment favour resource-based development built on cheap energy and protract price reform, whereas another group of businessmen wants a sustainable development concept and to monetize the oil and gas production to invest in Egypt's competitiveness. - Research Highlights: → Egyptian energy policy has not delivered demand side results and institutional reform. → The consequences are disparities in supply, external balance, financing and subsidies. → The prevailing interest groups succeed in protracting the implementation of the policy.

  3. Assessing the impact of policy interventions on the adoption of plug-in electric vehicles: An agent-based model

    International Nuclear Information System (INIS)

    Silvia, Chris; Krause, Rachel M.

    2016-01-01

    Heightened concern regarding climate change and energy independence has increased interest in plug-in electric vehicles as one means to address these challenges and governments at all levels have considered policy interventions to encourage their adoption. This paper develops an agent-based model that simulates the introduction of four policy scenarios aimed at promoting electric vehicle adoption in an urban community and compares them against a baseline. These scenarios include reducing vehicle purchase price via subsidies, expanding the local public charging network, increasing the number and visibility of fully battery electric vehicles (BEVs) on the roadway through government fleet purchases, and a hybrid mix of these three approaches. The results point to the effectiveness of policy options that increased awareness of BEV technology. Specifically, the hybrid policy alternative was the most successful in encouraging BEV adoption. This policy increases the visibility and familiarity of BEV technology in the community and may help counter the idea that BEVs are not a viable alternative to gasoline-powered vehicles. - Highlights: •Various policy interventions to encourage electric vehicle adoption are examined. •An agent based model is used to simulate individual adoption decisions. •Policies that increase the familiarity of electric vehicles are most effective.
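    To make the simulation logic concrete, the following is a hedged sketch of an agent-based adoption model in which subsidy, charging availability, and fleet visibility shift adoption; the utility form, parameters, and thresholds are assumptions, not the authors' calibrated model.

```python
# Illustrative agent-based BEV adoption sketch (all parameters are hypothetical).
import random

random.seed(1)

N_AGENTS = 1000
YEARS = 10

def run(subsidy=0.0, charging=0.0, fleet_visibility=0.0):
    adopters = 0
    awareness = 0.05 + fleet_visibility          # visible fleet BEVs raise baseline awareness
    for _ in range(YEARS):
        share = adopters / N_AGENTS
        # Word of mouth: awareness grows with the number of visible adopters.
        awareness = min(1.0, awareness + 0.5 * share + 0.02)
        for _ in range(N_AGENTS - adopters):
            utility = (-0.4                      # baseline reluctance (price, range anxiety)
                       + subsidy                 # purchase-price subsidy
                       + charging                # public charging availability
                       + 0.3 * share)            # social influence of existing adopters
            if random.random() < awareness and utility > random.gauss(0, 0.2):
                adopters += 1
    return adopters / N_AGENTS

print("baseline adoption share:", run())
print("subsidy only:", run(subsidy=0.3))
print("hybrid (subsidy + charging + fleet):", run(subsidy=0.2, charging=0.1, fleet_visibility=0.1))
```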

  4. Unimodal and crossmodal gradients of spatial attention

    DEFF Research Database (Denmark)

    Föcker, J.; Hötting, K.; Gondan, Matthias

    2010-01-01

    Behavioral and event-related potential (ERP) studies have shown that spatial attention is gradually distributed around the center of the attentional focus. The present study compared uni- and crossmodal gradients of spatial attention to investigate whether the orienting of auditory and visual spatial attention is based on modality-specific or supramodal representations of space. Auditory and visual stimuli were presented from five speaker locations positioned in the right hemifield. Participants had to attend to the innermost or outmost right position in order to detect either visual or auditory deviant stimuli. Detection rates and event-related potentials (ERPs) indicated that spatial attention is distributed as a gradient. Unimodal spatial ERP gradients correlated with the spatial resolution of the modality. Crossmodal spatial gradients were always broader than the corresponding …

  5. Coreless Concept for High Gradient Induction Cell

    International Nuclear Information System (INIS)

    Krasnykh, Anatoly

    2008-01-01

    An induction linac cell for high gradients is discussed. The proposed solid-state coreless approach for the induction linac topology (SLIM®) is based on nanosecond-mode operation. This mode may achieve an acceleration gradient comparable with the gradients of rf accelerator structures. The induction system also has high electric efficiency. The key elements are a solid-state semiconductor switch and a high-electric-density dielectric with a thin section length. The energy in the induction system is stored in the magnetic field. The nanosecond current break-up produces the high voltage, and the induced voltage is used for acceleration. This manner of operation allows the use of low-voltage elements in the booster part while achieving a high accelerating gradient. The proposed topology was tested in POP (proof of principle) experiments.

  6. Flexoelectricity: strain gradient effects in ferroelectrics

    Energy Technology Data Exchange (ETDEWEB)

    Ma Wenhui [Department of Physics, Shantou Unversity, Shantou, Guangdong 515063 (China)

    2007-12-15

    Mechanical strain gradient induced polarization effect, or flexoelectricity, in perovskite-type ferroelectric and relaxor ferroelectric ceramics was investigated. The flexoelectric coefficients measured at room temperature ranged from about 1 μC m⁻¹ for lead zirconate titanate to 100 μC m⁻¹ for barium strontium titanate. Flexoelectric effects were discovered to be sensitive to chemical makeup, phase symmetry, and domain structures. Based on phenomenological discussion and experimental data on flexoelectricity, the present study proposed that a mechanical strain gradient field could influence polarization responses in a way analogous to an electric field. Flexoelectric coefficients were found to be nonlinearly enhanced by dielectric permittivity and strain gradient. Interfacial mismatch in epitaxial thin films can give rise to high strain gradients, enabling flexoelectric effects to make a significant impact in properly engineered ferroelectric heterostructure systems.

  7. Resource management and scheduling policy based on grid for AIoT

    Science.gov (United States)

    Zou, Yiqin; Quan, Li

    2017-07-01

    This paper presents research on a resource management and scheduling policy based on grid technology for the Agricultural Internet of Things (AIoT). The variety of complex and heterogeneous agricultural resources in AIoT makes it difficult to represent them in a unified way, but from an abstract perspective there are common models which can express their characteristics and features. Based on this, we propose a high-level model called the Agricultural Resource Hierarchy Model (ARHM), which can be used for modeling various resources, and introduce an agricultural resource modeling method based on this model. Compared with the traditional application-oriented three-layer model, ARHM hides the differences between applications and gives all applications a unified interface layer that can be implemented without distinction. Furthermore, the paper proposes a Web Service Resource Framework (WSRF)-based resource management method and the encapsulation structure for it. Finally, it focuses on the discussion of a multi-agent-based agricultural resource scheduler, which follows a collaborative service-provider pattern across multiple agricultural production domains.

  8. Providing Evidence-Based, Intelligent Support for Flood Resilient Planning and Policy: The PEARL Knowledge Base

    Directory of Open Access Journals (Sweden)

    George Karavokiros

    2016-09-01

    While flood risk is evolving as one of the most imminent natural hazards and the shift from a reactive decision environment to a proactive one sets the basis of the latest thinking in flood management, the need to equip decision makers with necessary tools to think about and intelligently select options and strategies for flood management is becoming ever more pressing. Within this context, the Preparing for Extreme and Rare Events in Coastal Regions (PEARL) intelligent knowledge base (PEARL KB) of resilience strategies is presented here as an environment that allows end-users to navigate from their observed problem to a selection of possible options and interventions worth considering, within an intuitive visual web interface supporting advanced interactivity. Incorporation of real case studies within the PEARL KB enables the extraction of evidence-based lessons from all over the world, while the KB’s collection of methods and tools directly supports the optimal selection of suitable interventions. The Knowledge-Base also gives access to the PEARL KB Flood Resilience Index (FRI) tool, which is an online tool for resilience assessment at a city level available to authorities and citizens. We argue that the PEARL KB equips authorities with tangible and operational tools that can improve strategic and operational flood risk management by assessing and eventually increasing resilience, while building towards the strengthening of risk governance. The online tools that the PEARL KB gives access to were demonstrated and tested in the city of Rethymno, Greece.

  9. Knowledge-based changes to health systems: the Thai experience in policy development.

    Science.gov (United States)

    Tangcharoensathien, Viroj; Wibulpholprasert, Suwit; Nitayaramphong, Sanguan

    2004-10-01

    Over the past two decades the government in Thailand has adopted an incremental approach to extending health-care coverage to the population. It first offered coverage to government employees and their dependents, and then introduced a scheme under which low-income people were exempt from charges for health care. This scheme was later extended to include elderly people, children younger than 12 years of age and disabled people. A voluntary public insurance scheme was implemented to cover those who could afford to pay for their own care. Private sector employees were covered by the Social Health Insurance scheme, which was implemented in 1991. Despite these efforts, 30% of the population remained uninsured in 2001. In October of that year, the new government decided to embark on a programme to provide universal health-care coverage. This paper describes how research into health systems and health policy contributed to the move towards universal coverage. Data on health systems financing and functioning had been gathered before and after the founding of the Health Systems Research Institute in early 1990. In 1991, a contract capitation model had been used to launch the Social Health Insurance scheme. The advantages of using a capitation model are that it contains costs and provides an acceptable quality of service as opposed to the cost escalation and inefficiency that occur under fee-for-service reimbursement models, such as the one used to provide medical benefits to civil servants. An analysis of the implementation of universal coverage found that politics moved universal coverage onto the policy agenda during the general election campaign in January 2001. The capacity for research on health systems and policy to generate evidence guided the development of the policy and the design of the system at a later stage. Because the reformists who sought to bring about universal coverage (who were mostly civil servants in the Ministry of Public Health and members of

  10. The science, policy and practice of nature-based solutions: An interdisciplinary perspective.

    Science.gov (United States)

    Nesshöver, Carsten; Assmuth, Timo; Irvine, Katherine N; Rusch, Graciela M; Waylen, Kerry A; Delbaere, Ben; Haase, Dagmar; Jones-Walters, Lawrence; Keune, Hans; Kovacs, Eszter; Krauze, Kinga; Külvik, Mart; Rey, Freddy; van Dijk, Jiska; Vistad, Odd Inge; Wilkinson, Mark E; Wittmer, Heidi

    2017-02-01

    In this paper, we reflect on the implications for science, policy and practice of the recently introduced concept of Nature-Based Solutions (NBS), with a focus on the European context. First, we analyse NBS in relation to similar concepts, and reflect on its relationship to sustainability as an overarching framework. From this, we derive a set of questions to be addressed and propose a general framework for how these might be addressed in NBS projects by funders, researchers, policy-makers and practitioners. We conclude that: To realise their full potential, NBS must be developed by including the experience of all relevant stakeholders such that 'solutions' contribute to achieving all dimensions of sustainability. As NBS are developed, we must also moderate the expectations placed on them since the precedent provided by other initiatives whose aim was to manage nature sustainably demonstrates that we should not expect NBS to be cheap and easy, at least not in the short-term. Copyright © 2016 British Geological Survey, NERC. Published by Elsevier B.V. All rights reserved.

  11. Configuration model of partial repairable spares under batch ordering policy based on inventory state

    Institute of Scientific and Technical Information of China (English)

    Ruan Minzhi; Luo Yi; Li Hua

    2014-01-01

    Rational planning of spares configuration is an effective approach to improve equipment availability as well as reduce life cycle cost (LCC). With an analysis of various impacts on the support system, a spares demand rate forecast model is constructed. According to the systemic analysis method, a spares support effectiveness evaluation indicator system is built, and then an initial spares configuration and optimization method is researched. For the issue of discard and consumption of partially repairable items, the expected backorders function is approximated by a Laplace demand distribution. Combining the (s-1, s) and (R, Q) inventory policies, the spares resupply model is established under a batch ordering policy based on inventory state, and the optimization analysis flow for spares configuration is proposed. Through application to shipborne equipment spares configuration, the given scenarios are analyzed under two constraint targets: one is the support effectiveness, and the other is the spares cost. Analysis reveals that the result is consistent with practical regulation; therefore, the model's correctness, the method's validity and the rationality of the optimization project are demonstrated to a certain extent.
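    The continuous-review (R, Q) resupply rule mentioned above can be illustrated with a small Monte Carlo sketch; the demand rate, lead time, and (R, Q) values below are hypothetical, and the sketch omits the paper's (s-1, s) layer and support-effectiveness constraints.

```python
# Hedged sketch of a continuous-review (R, Q) spares resupply simulation (illustrative values).
import random

random.seed(42)

DEMAND_PROB = 0.2      # probability of one spare demand per day (low-rate Poisson approximation)
LEAD_TIME = 30         # days for a replenishment batch to arrive
R, Q = 4, 6            # reorder point and batch order quantity
DAYS = 3650

on_hand = R + Q
backorders = 0
total_backordered = 0
pipeline = []          # list of (arrival_day, quantity) for outstanding orders
stockout_days = 0

for day in range(DAYS):
    # Receive orders arriving today and serve any waiting backorders first.
    arrived = sum(q for (t, q) in pipeline if t == day)
    pipeline = [(t, q) for (t, q) in pipeline if t != day]
    on_hand += arrived
    served = min(on_hand, backorders)
    on_hand -= served
    backorders -= served

    # Daily demand.
    if random.random() < DEMAND_PROB:
        if on_hand > 0:
            on_hand -= 1
        else:
            backorders += 1
            total_backordered += 1

    # Inventory position = on hand + on order - backorders; order a batch Q when it drops to R.
    position = on_hand + sum(q for (_, q) in pipeline) - backorders
    if position <= R:
        pipeline.append((day + LEAD_TIME, Q))

    if on_hand == 0:
        stockout_days += 1

print("days with no stock on hand:", stockout_days, "of", DAYS)
print("demands that had to be backordered:", total_backordered)
```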

  12. Linking community-based monitoring to water policy: Perceptions of citizen scientists.

    Science.gov (United States)

    Carlson, Tyler; Cohen, Alice

    2018-05-05

    This paper examines the relationships between Community-Based Water Monitoring (CBM) and government-led water initiatives. Drawing on a cross-Canada survey of over one hundred organizations, we explore the reasons why communities undertake CBM, the monitoring protocols they follow, and the extent to which CBM program members feel their findings are incorporated into formal (i.e., government-led) decision-making processes. Our results indicate that despite following standardized and credible monitoring protocols, fewer than half of CBM organizations report that their data is being used to inform water policy at any level of government. Moreover, respondents report higher rates of cooperation and data-sharing between CBM organizations themselves than between CBM organizations and their respective governments. These findings are significant, because many governments continue to express support for CBM. We explore the barriers between CBM data collection and government policy, and suggest that structural barriers include lack of multi-year funding, inconsistent protocols, and poor communication. More broadly, we argue that the distinction between formal and informal programming is unclear, and that addressing known CBM challenges will rely on a change in perception: CBM cannot simply be a less expensive alternative to government-driven data collection. Copyright © 2018 The Authors. Published by Elsevier Ltd.. All rights reserved.

  13. Measuring the Effect of Gender-Based Policies on Economic Growth

    OpenAIRE

    Pierre-Richard Agénor; Otaviano Canuto

    2012-01-01

    To this day, policy makers, policy advisers, and economists in development institutions do not have any practical tools to help them to assess the impacts of policies aimed at promoting gender equality and quantify the effect of these policies on growth. Yet, there has been limited effort in that direction. This note lays out such a tool, a framework for quantifying the growth effects of g...

  14. Improving anti-bullying laws and policies to protect youth from weight-based victimization: parental support for action.

    Science.gov (United States)

    Puhl, R M; Suh, Y; Li, X

    2017-04-01

    Weight-based bullying is a prevalent problem among youth with overweight and obesity, but remains neglected in existing policy-level strategies to address youth bullying. Parental support is an influential catalyst motivating political will for policy decisions affecting youth, but has received limited research attention. To assess levels of, and predictors of, parental support for school-based policies and state/federal legal measures to address weight-based bullying in 2014 and 2015. Identical online questionnaires were completed by two independent national samples of parents in 2014 and 2015 (N = 1804). Parental support for all policy actions was high (at least 81%) and significantly increased from 2014 to 2015 for legal measures that would a) require state anti-bullying laws to add protections against weight-based bullying, and b) enact a federal anti-bullying law that includes weight-based bullying. These findings can inform policy discourse about remedies for youth bullying, and suggest that parental support for improved legal protections against weight-based bullying is present, consistent, and strong. © 2016 World Obesity Federation.

  15. Bigravity from gradient expansion

    International Nuclear Information System (INIS)

    Yamashita, Yasuho; Tanaka, Takahiro

    2016-01-01

    We discuss how the ghost-free bigravity coupled with a single scalar field can be derived from a braneworld setup. We consider the DGP two-brane model without radion stabilization. The bulk configuration is solved for given boundary metrics, and it is substituted back into the action to obtain the effective four-dimensional action. In order to obtain the ghost-free bigravity, we consider the gradient expansion in which the brane separation is supposed to be sufficiently small so that the two boundary metrics are almost identical. The obtained effective theory is shown to be ghost free as expected; however, the interaction between the two gravitons takes the Fierz-Pauli form at the leading order of the gradient expansion, even though we do not use the approximation of linear perturbation. We also find that the radion remains as a scalar field in the four-dimensional effective theory, but its coupling to the metrics is non-trivial.

  16. Gradient-Index Optics

    Science.gov (United States)

    2010-03-31

    Subject terms: Imaging Optics, Nonimaging Optics, Gradient Index Optics, Camera, Concentrator. Combining imaging and nonimaging design capabilities to incorporate manufacturable GRIN lenses can provide imaging lens systems that are compact and …

  17. Momentum-weighted conjugate gradient descent algorithm for gradient coil optimization.

    Science.gov (United States)

    Lu, Hanbing; Jesmanowicz, Andrzej; Li, Shi-Jiang; Hyde, James S

    2004-01-01

    MRI gradient coil design is a type of nonlinear constrained optimization. A practical problem in transverse gradient coil design using the conjugate gradient descent (CGD) method is that wire elements move at different rates along the orthogonal directions (r, φ, z) and tend to cross, breaking the constraints. A momentum-weighted conjugate gradient descent (MW-CGD) method is presented to overcome this problem. This method takes advantage of the efficiency of the CGD method combined with momentum weighting, which is also an intrinsic property of the Levenberg-Marquardt algorithm, to adjust step sizes along the three orthogonal directions. A water-cooled, 12.8 cm inner diameter, three-axis torque-balanced gradient coil for rat imaging was developed based on this method, with an efficiency of 2.13, 2.08, and 4.12 mT·m⁻¹·A⁻¹ along X, Y, and Z, respectively. Experimental data demonstrate that this method can improve efficiency by 40% and field uniformity by 27%. This method has also been applied to the design of a gradient coil for the human brain, employing remote current return paths. The benefits of this design include improved gradient field uniformity and efficiency, with a shorter length than gradient coil designs using coaxial return paths. Copyright 2003 Wiley-Liss, Inc.
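    The core idea of damping update steps with a momentum (running-average) term can be shown on a toy problem; the sketch below is a generic momentum-weighted descent on an anisotropic quadratic and is not a reconstruction of the authors' coil-design cost functional or code.

```python
# Generic momentum-weighted gradient descent on a toy anisotropic objective (illustrative only).
import numpy as np

A = np.diag([1.0, 10.0, 100.0])      # toy curvature matrix standing in for a coil-design objective

def cost(x):
    return 0.5 * x @ A @ x

def grad(x):
    return A @ x

x = np.array([1.0, 1.0, 1.0])        # stand-ins for the (r, phi, z) design variables
velocity = np.zeros_like(x)
lr = 0.005
beta = 0.9                           # momentum weight: smooths and rebalances per-coordinate steps

for step in range(500):
    velocity = beta * velocity + (1 - beta) * grad(x)   # running average of the gradient
    x = x - lr * velocity                               # damped update along each direction

print("final point:", x, "cost:", cost(x))
```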

  18. Towards Static Analysis of Policy-Based Self-adaptive Computing Systems

    DEFF Research Database (Denmark)

    Margheri, Andrea; Nielson, Hanne Riis; Nielson, Flemming

    2016-01-01

    For supporting the design of self-adaptive computing systems, the PSCEL language offers a principled approach that relies on declarative definitions of adaptation and authorisation policies enforced at runtime. Policies permit managing system components by regulating their interactions and by dynamically introducing new actions to accomplish task-oriented goals. However, the runtime evaluation of policies and their effects on system components make the prediction of system behaviour challenging. In this paper, we introduce the construction of a flow graph that statically points out the policy evaluations that can take place at runtime and exploit it to analyse the effects of policy evaluations on the progress of system components.

  19. Applying market-based instruments to environmental policies in China and OECD countries

    International Nuclear Information System (INIS)

    1998-01-01

    China's rapid economic growth since the late 1970s has been a remarkable achievement, and is projected to continue. However, this prospect could be compromised by pollution of air, water, and land, the unsustainable exploitation of natural resources, and the environmental impacts on public health. Air pollution associated with the use of coal for energy and industrial purposes is a particularly serious challenge in China, with important domestic and transboundary implications. This book presents papers from an international workshop co-sponsored by the OECD and China's National Environmental Protection Agency on the application of economic instruments to control air pollution in China and OECD countries. It presents the state-of-the-art in this field, based upon contributions from Chinese and OECD country policy makers and experts.

  20. An optimal replacement policy for a repairable system based on its repairman having vacations

    Energy Technology Data Exchange (ETDEWEB)

    Yuan Li [School of Aerospace Engineering and Applied Mechanics, Tongji University, Shanghai 200092 (China); Xu Jian, E-mail: xujian@tongji.edu.c [School of Aerospace Engineering and Applied Mechanics, Tongji University, Shanghai 200092 (China)

    2011-07-15

    This paper studies a cold standby repairable system with two different components and one repairman who can take multiple vacations. If a component fails while the repairman is on vacation, the failed component waits for repair until the repairman is available. In the system, component 1 is assumed to have priority in use. After repair, component 1 follows a geometric process repair, while component 2 can be repaired as good as new after failures. Under these assumptions, a replacement policy N based on the number of failures of component 1 is studied. The system is replaced when the number of failures of component 1 reaches N. The explicit expression of the expected cost rate is given, so that the optimal replacement policy N* can be determined. Finally, a numerical example is given to illustrate the theoretical results of the model.
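    A renewal-reward Monte Carlo sketch of choosing N is given below; the lifetime and repair distributions, costs, and geometric ratios are hypothetical, and the simplification ignores the cold-standby component and repairman vacations, so it only illustrates how an expected cost rate can vary with N and attain an interior optimum.

```python
# Hedged renewal-reward sketch for a geometric-process replacement policy N (illustrative values).
import random

random.seed(7)

MEAN_UPTIME = 100.0    # mean of the first working time of component 1 (hypothetical)
A = 1.6                # geometric ratio: the k-th uptime has mean MEAN_UPTIME / A**(k-1)
MEAN_REPAIR = 10.0     # mean of the first repair time (hypothetical)
B = 1.3                # repair times lengthen geometrically after each failure
C_REPAIR = 20.0        # cost per unit repair time
C_REPLACE = 2000.0     # cost of replacing the system at the N-th failure

def cost_rate(N, runs=20000):
    """Monte Carlo estimate of the long-run expected cost per unit time under policy N."""
    total_cost = total_time = 0.0
    for _ in range(runs):
        t = cost = 0.0
        for k in range(1, N + 1):
            # k-th working time (exponential with geometrically shrinking mean).
            t += random.expovariate(A ** (k - 1) / MEAN_UPTIME)
            if k < N:  # the first N-1 failures are repaired; the N-th triggers replacement
                r = random.expovariate(1.0 / (MEAN_REPAIR * B ** (k - 1)))
                t += r
                cost += C_REPAIR * r
        cost += C_REPLACE
        total_cost += cost
        total_time += t
    return total_cost / total_time

for N in range(1, 9):
    print(f"N = {N}: estimated cost rate {cost_rate(N):.2f}")
```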