WorldWideScience

Sample records for general base glu-268

  1. Generalized eigenvalue based spectrum sensing

    KAUST Repository

    Shakir, Muhammad

    2012-01-01

    Spectrum sensing is one of the fundamental components of cognitive radio networks. In this chapter, a generalized spectrum sensing framework, referred to as the Generalized Mean Detector (GMD), is introduced. In this context, we generalize the detectors based on the eigenvalues of the received signal covariance matrix and transform the eigenvalue-based spectrum sensing detectors, namely (i) the Eigenvalue Ratio Detector (ERD) and two newly proposed detectors referred to as (ii) the GEometric Mean Detector (GEMD) and (iii) the ARithmetic Mean Detector (ARMD), into a unified framework of generalized spectrum sensing. The foundation of the proposed framework is the calculation of exact analytical moments of the random variables in the decision thresholds of the respective detectors. The decision threshold is calculated in closed form based on an approximation of the Cumulative Distribution Functions (CDFs) of the respective test statistics. In this context, we match the analytical moments of the two random variables of the respective test statistics with the moments of the Gaussian (or Gamma) distribution function. The performance of the eigenvalue-based detectors is compared with several traditional detectors, including the energy detector (ED), to validate the importance of the eigenvalue-based detectors and the performance of the GEMD and the ARMD, particularly in realistic wireless cognitive radio networks. Analytical and simulation results show that the newly proposed detectors yield a considerable performance advantage in realistic spectrum sensing scenarios. Moreover, the presented results based on the proposed approximation approaches are in excellent agreement with the empirical results. © 2012 Springer Science+Business Media Dordrecht.

  2. Generalized eigenvalue based spectrum sensing

    KAUST Repository

    Shakir, Muhammad; Alouini, Mohamed-Slim

    2012-01-01

    of the decision threshold of the respective detectors. The decision threshold has been calculated in a closed form which is based on the approximation of Cumulative Distribution Functions (CDFs) of the respective test statistics. In this context, we exchange

  3. Black hole based tests of general relativity

    International Nuclear Information System (INIS)

    Yagi, Kent; Stein, Leo C

    2016-01-01

    General relativity has passed all solar system experiments and neutron star based tests, such as binary pulsar observations, with flying colors. A more exotic arena for testing general relativity is in systems that contain one or more black holes. Black holes are the most compact objects in the Universe, providing probes of the strongest-possible gravitational fields. We are motivated to study strong-field gravity since many theories give large deviations from general relativity only at large field strengths, while recovering the weak-field behavior. In this article, we review how one can probe general relativity and various alternative theories of gravity by using electromagnetic waves from a black hole with an accretion disk, and gravitational waves from black hole binaries. We first review model-independent ways of testing gravity with electromagnetic/gravitational waves from a black hole system. We then focus on selected examples of theories that extend general relativity in rather simple ways. Some important characteristics of general relativity include (but are not limited to) (i) only tensor gravitational degrees of freedom, (ii) the graviton is massless, (iii) no quadratic or higher curvatures in the action, and (iv) the theory is four-dimensional. Altering a characteristic leads to a different extension of general relativity: (i) scalar–tensor theories, (ii) massive gravity theories, (iii) quadratic gravity, and (iv) theories with large extra dimensions. Within each theory, we describe black hole solutions, their properties, and current and projected constraints on each theory using black hole based tests of gravity. We close this review by listing some of the open problems in model-independent tests and within each specific theory. (paper)

  4. Advances in heuristically based generalized perturbation theory

    International Nuclear Information System (INIS)

    Gandini, A.

    1994-01-01

    A distinctive feature of the heuristically based generalized perturbation theory methodology is its systematic use of importance conservation concepts. As is well known, this use leads to fundamental reciprocity relationships. The alternative variational and differential approaches, instead, make consistent use of the properties of adjoint functions. The equivalence between the importance and the adjoint functions has been demonstrated in important cases. There are some instances, however, in which the commonly known operators governing the adjoint function are not adequate. In this paper, ways proposed to generalize these rules, as adopted within the heuristically based generalized perturbation theory methodology, are illustrated. When applied to the neutron/nuclide field characterizing the core evolution in a power reactor system, in which an intensive control variable (ρ) is also defined, these rules lead to an orthogonality relationship connected to this same control variable. A set of ρ-mode eigenfunctions may be correspondingly defined, and an extended concept of reactivity (generalizing that commonly associated with the multiplication factor) is proposed as more directly indicative of the controllability of a critical reactor system. (author). 25 refs

  5. Behavior Assessment in Children Following Hospital-Based General Anesthesia versus Office-Based General Anesthesia

    Directory of Open Access Journals (Sweden)

    LaQuia A. Vinson

    2016-08-01

    The purpose of this study was to determine whether differences in behavior exist following dental treatment under hospital-based general anesthesia (HBGA) or office-based general anesthesia (OBGA), in the percentage of patients exhibiting positive behavior and in the mean Frankl scores at recall visits. This retrospective study examined the records of a pediatric dental office over a 4-year period. Patients presenting before 48 months of age for an initial exam who were diagnosed with early childhood caries were included in the study. Following an initial exam, patients were treated under HBGA or OBGA. Patients were followed to determine their behavior at 6-, 12- and 18-month recall appointments. Fifty-four patients received treatment under HBGA and 26 were treated under OBGA. OBGA patients were significantly more likely to exhibit positive behavior at the 6- and 12-month recall visits (p = 0.038 and p = 0.029, respectively). Clinicians should consider future behavior when determining general anesthesia treatment modalities in children with early childhood caries presenting to their office.

  6. Non-self-adjoint Hamiltonians defined by generalized Riesz bases

    Energy Technology Data Exchange (ETDEWEB)

    Inoue, H., E-mail: h-inoue@math.kyushu-u.ac.jp [Graduate School of Mathematics, Kyushu University, 744 Motooka, Nishi-ku, Fukuoka 819-0395 (Japan); Takakura, M., E-mail: mayumi@fukuoka-u.ac.jp [Department of Applied Mathematics, Fukuoka University, Fukuoka 814-0180 (Japan)

    2016-08-15

    Bagarello, Inoue, and Trapani [J. Math. Phys. 55, 033501 (2014)] investigated some operators defined by Riesz bases. These operators are connected with quasi-Hermitian quantum mechanics and its relatives. In this paper, we introduce a notion of generalized Riesz bases, a generalization of Riesz bases, and investigate some operators defined by generalized Riesz bases by changing the framework of the operators defined in the work of Bagarello, Inoue, and Trapani.
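
    A finite-dimensional toy model of this setting (a generic quasi-Hermitian illustration, not the authors' generalized Riesz bases): a bounded invertible operator T maps an orthonormal basis to a Riesz basis, and the biorthogonal pair builds a non-self-adjoint Hamiltonian with purely real spectrum.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4
T = np.eye(n) + 0.3 * rng.standard_normal((n, n))  # bounded invertible T: f_k = T e_k is a Riesz basis
E = np.array([1.0, 2.0, 3.0, 4.0])                 # prescribed real spectrum

F = T                           # columns are the Riesz-basis vectors f_k
G = np.linalg.inv(T).conj().T   # biorthogonal family g_k, so that G^H F = I
H = F @ np.diag(E) @ G.conj().T # H = sum_k E_k |f_k><g_k| = T diag(E) T^{-1}, non-self-adjoint
```

    H is similar to a real diagonal matrix, so it is diagonalizable with real eigenvalues even though H differs from its adjoint.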

  7. [Generalized neonatal screening based on laboratory tests].

    Science.gov (United States)

    Ardaillou, Raymond; Le Gall, Jean-Yves

    2006-11-01

    Implementation of a generalized screening program for neonatal diseases must obey precise rules. The disease must be severe, recognizable at an early stage, amenable to an effective treatment, and detectable with an inexpensive and widely applicable test; it must also be a significant public health problem. Subjects with positive results must be offered immediate treatment or prevention. All screening programs must be regularly evaluated. In France, since 1978, a national screening program has been organized by a private association ("Association française pour le dépistage et la prévention des handicaps de l'enfant") and supervised by the "Caisse nationale d'assurance maladie" and the "Direction Générale de la Santé". Five diseases are now included in the screening program: phenylketonuria, hypothyroidism, congenital adrenal hyperplasia, cystic fibrosis and sickle cell disease (the latter only in at-risk newborns). Toxoplasmosis is a particular problem because only the children of mothers who were not tested during pregnancy or who seroconverted are screened. Neonatal screening for phenylketonuria and hypothyroidism is unanimously recommended. Screening for congenital adrenal hyperplasia is approved in most countries. The cases of sickle cell disease and cystic fibrosis are more complex because: (i) not all children who carry the mutations develop severe forms; (ii) there is no curative treatment; and (iii) parents may become anxious, even though the phenotype is sometimes mild or even asymptomatic. Supporters of screening stress the benefits of early diagnosis (which extends the life expectancy of these children, particularly in the case of sickle cell disease), the fact that it opens up the possibility of prenatal screening in future pregnancies, and the utility of informing heterozygous carriers identified by familial screening. Neonatal screening for other diseases is under discussion. Indeed, technical advances such as tandem mass spectrometry make it possible to detect about 50

  8. A General Simulator for Acid-Base Titrations

    Science.gov (United States)

    de Levie, Robert

    1999-07-01

    General formal expressions are provided to facilitate the automatic computer calculation of acid-base titration curves of arbitrary mixtures of acids, bases, and salts, without and with activity corrections based on the Davies equation. Explicit relations are also given for the buffer strength of mixtures of acids, bases, and salts.
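
    The kind of closed-form expression this abstract refers to can be illustrated for the simplest case: a weak monoprotic acid titrated with a strong base, ignoring activity corrections. The constants and concentrations below are example values (acetic acid with NaOH), not taken from the paper.

```python
import numpy as np

Ka, Kw = 1.8e-5, 1.0e-14   # example constants: acetic acid and water
Ca, Va = 0.10, 50.0        # analyte: 0.10 M weak acid, 50 mL
Cb = 0.10                  # titrant: 0.10 M strong base

def vb(pH):
    """Titrant volume (mL) that brings the mixture to a given pH (charge-balance master equation)."""
    h = 10.0 ** (-pH)
    oh = Kw / h
    alpha = Ka / (h + Ka)   # fraction of the acid present as its conjugate base
    delta = oh - h          # net hydroxide excess
    return Va * (Ca * alpha + delta) / (Cb - delta)

curve = vb(np.linspace(3.5, 11.0, 300))  # Vb as a function of pH; invert to plot pH vs. Vb
```

    Writing Vb as an explicit function of [H+] rather than solving a high-order polynomial at every point is precisely what makes such general titration simulators automatic.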

  9. A general model for membrane-based separation processes

    DEFF Research Database (Denmark)

    Soni, Vipasha; Abildskov, Jens; Jonsson, Gunnar Eigil

    2009-01-01

    behaviour will play an important role. In this paper, modelling of membrane-based processes for separation of gas and liquid mixtures are considered. Two general models, one for membrane-based liquid separation processes (with phase change) and another for membrane-based gas separation are presented....... The separation processes covered are: membrane-based gas separation processes, pervaporation and various types of membrane distillation processes. The specific model for each type of membrane-based process is generated from the two general models by applying the specific system descriptions and the corresponding...

  10. Matroidal Structure of Generalized Rough Sets Based on Tolerance Relations

    Directory of Open Access Journals (Sweden)

    Hui Li

    2014-01-01

    of the generalized rough set based on the tolerance relation. The matroid can also induce a new relation. We investigate the connection between the original tolerance relation and the induced relation.

  11. Generalized perturbation theory based on the method of cyclic characteristics

    Energy Technology Data Exchange (ETDEWEB)

    Assawaroongruengchot, M.; Marleau, G. [Institut de Genie Nucleaire, Departement de Genie Physique, Ecole Polytechnique de Montreal, 2900 Boul. Edouard-Montpetit, Montreal, Que. H3T 1J4 (Canada)

    2006-07-01

    A GPT algorithm for estimation of eigenvalues and reaction-rate ratios is developed for the neutron transport problems in 2D fuel assemblies with isotropic scattering. In our study the GPT formulation is based on the integral transport equations. The mathematical relationship between the generalized flux importance and generalized source importance functions is applied to transform the generalized flux importance transport equations into the integro-differential forms. The resulting adjoint and generalized adjoint transport equations are then solved using the method of cyclic characteristics (MOCC). Because of the presence of negative adjoint sources, a biasing/decontamination scheme is applied to make the generalized adjoint functions positive in such a way that it can be used for the multigroup re-balance technique. To demonstrate the efficiency of the algorithms, perturbative calculations are performed on a 17 x 17 PWR lattice. (authors)

  12. Generalized perturbation theory based on the method of cyclic characteristics

    International Nuclear Information System (INIS)

    Assawaroongruengchot, M.; Marleau, G.

    2006-01-01

    A GPT algorithm for estimation of eigenvalues and reaction-rate ratios is developed for the neutron transport problems in 2D fuel assemblies with isotropic scattering. In our study the GPT formulation is based on the integral transport equations. The mathematical relationship between the generalized flux importance and generalized source importance functions is applied to transform the generalized flux importance transport equations into the integro-differential forms. The resulting adjoint and generalized adjoint transport equations are then solved using the method of cyclic characteristics (MOCC). Because of the presence of negative adjoint sources, a biasing/decontamination scheme is applied to make the generalized adjoint functions positive in such a way that it can be used for the multigroup re-balance technique. To demonstrate the efficiency of the algorithms, perturbative calculations are performed on a 17 x 17 PWR lattice. (authors)

  13. Problem-Based Learning in a General Psychology Course.

    Science.gov (United States)

    Willis, Sandra A.

    2002-01-01

    Describes the adoption of problem-based learning (PBL) techniques in a general psychology course. States that the instructor used a combination of techniques, including think-pair-share, lecture/discussion, and PBL. Notes means and standard deviations for graded components of PBL format versus lecture/discussion format. (Contains 18 references.)…

  14. A General Polygon-based Deformable Model for Object Recognition

    DEFF Research Database (Denmark)

    Jensen, Rune Fisker; Carstensen, Jens Michael

    1999-01-01

    We propose a general scheme for object localization and recognition based on a deformable model. The model combines shape and image properties by warping an arbitrary prototype intensity template according to the deformation in shape. The shape deformations are constrained by a probabilistic distr...

  15. Generalized SMO algorithm for SVM-based multitask learning.

    Science.gov (United States)

    Cai, Feng; Cherkassky, Vladimir

    2012-06-01

    Exploiting additional information to improve traditional inductive learning is an active research area in machine learning. In many supervised-learning applications, training data can be naturally separated into several groups, and incorporating this group information into learning may improve generalization. Recently, Vapnik proposed a general approach to formalizing such problems, known as "learning with structured data" and its support vector machine (SVM) based optimization formulation called SVM+. Liang and Cherkassky showed the connection between SVM+ and multitask learning (MTL) approaches in machine learning, and proposed an SVM-based formulation for MTL called SVM+MTL for classification. Training the SVM+MTL classifier requires the solution of a large quadratic programming optimization problem which scales as O(n^3) with sample size n. So there is a need to develop computationally efficient algorithms for implementing SVM+MTL. This brief generalizes Platt's sequential minimal optimization (SMO) algorithm to the SVM+MTL setting. Empirical results show that, for typical SVM+MTL problems, the proposed generalized SMO achieves over 100 times speed-up, in comparison with general-purpose optimization routines.

  16. Edge detection methods based on generalized type-2 fuzzy logic

    CERN Document Server

    Gonzalez, Claudia I; Castro, Juan R; Castillo, Oscar

    2017-01-01

    In this book four new methods are proposed. In the first method, generalized type-2 fuzzy logic is combined with the morphological gradient technique. The second method combines general type-2 fuzzy systems (GT2 FSs) and the Sobel operator; in the third approach, the methodology based on the Sobel operator and GT2 FSs is improved so that it can be applied to color images. In the fourth approach, we propose a novel edge detection method in which a digital image is converted to a generalized type-2 fuzzy image. The book also includes a comparative study of type-1, interval type-2 and generalized type-2 fuzzy systems as tools to enhance edge detection in digital images when used in conjunction with the morphological gradient and the Sobel operator. The proposed generalized type-2 fuzzy edge detection methods were tested with benchmark images and synthetic images, in grayscale and color format. Another contribution of this book is that the generalized type-2 fuzzy edge detector method is applied in the preproc...
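
    The Sobel operator that several of these methods build on computes a gradient magnitude from two 3x3 convolutions; the fuzzy aggregation layered on top is beyond a short sketch. A plain-NumPy version:

```python
import numpy as np

def sobel_magnitude(img):
    """Gradient magnitude of a grayscale image using the 3x3 Sobel kernels (valid region only)."""
    kx = np.array([[-1.0, 0.0, 1.0], [-2.0, 0.0, 2.0], [-1.0, 0.0, 1.0]])
    ky = kx.T
    H, W = img.shape
    gx = np.empty((H - 2, W - 2))
    gy = np.empty((H - 2, W - 2))
    for i in range(H - 2):
        for j in range(W - 2):
            patch = img[i:i + 3, j:j + 3]
            gx[i, j] = (patch * kx).sum()   # horizontal gradient
            gy[i, j] = (patch * ky).sum()   # vertical gradient
    return np.hypot(gx, gy)
```

    A vertical step edge produces a strong response only in the two output columns whose 3x3 windows straddle the step, and zero response in flat regions.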

  17. Complexity analysis based on generalized deviation for financial markets

    Science.gov (United States)

    Li, Chao; Shang, Pengjian

    2018-03-01

    In this paper, a new modified method, complexity analysis based on generalized deviation, is proposed as a measure of the correlation between past price and future volatility in financial time series. In comparison with the earlier retarded volatility model, the new approach is both simple and computationally efficient. The method based on the generalized deviation function offers a thorough way of quantifying the rules of the financial market. The robustness of the method is verified by numerical experiments with both artificial and financial time series. Results show that the generalized deviation complexity analysis method not only identifies the volatility of financial time series but also provides a comprehensive way of distinguishing the characteristics of stock indices from those of individual stocks. Exponential functions can successfully fit the volatility curves and quantify the changes of complexity in stock market data. We then study the influence of the negative domain of the deviation coefficient and the differences between volatile periods and calm periods. After analyzing the data from the experimental model, we find that the generalized deviation model has definite advantages in exploring the relationship between historical returns and future volatility.

  18. Managing corneal foreign bodies in office-based general practice.

    Science.gov (United States)

    Fraenkel, Alison; Lee, Lawrence R; Lee, Graham A

    2017-03-01

    Patients with a corneal foreign body may first present to their general practitioner (GP). Safe and efficacious management of these presentations avoids sight-threatening and eye-threatening complications. Removal of a simple, superficial foreign body without a slit lamp is within The Royal Australian College of General Practitioners' (RACGP's) curriculum and scope of practice. Knowing the relevant procedural skills and indications for referral is equally important. The objective of this article is to provide an evidence-based and expert-based guide to the management of corneal foreign bodies in the GP's office. History is key to identifying patient characteristics and mechanisms of ocular injury that are red flags for referral. Examination techniques and methods of superficial foreign body removal without a slit lamp are outlined, as well as the procedural threshold for referral to an ophthalmologist.

  19. Generalized phase retrieval algorithm based on information measures

    OpenAIRE

    Shioya, Hiroyuki; Gohara, Kazutoshi

    2006-01-01

    An iterative phase retrieval algorithm based on the maximum entropy method (MEM) is presented. Introducing a new generalized information measure, we derive a novel class of algorithms which includes the conventionally used error reduction algorithm and a MEM-type iterative algorithm which is presented for the first time. These different phase retrieval methods are unified on the basis of the framework of information measures used in information theory.
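
    The conventional error reduction algorithm named above (the Gerchberg-Saxton scheme, when both object-domain and Fourier-domain magnitudes are known) alternates between the two domains, keeping the current phase and re-imposing the measured magnitude in each. A minimal sketch:

```python
import numpy as np

rng = np.random.default_rng(1)

def error_reduction(mag_obj, mag_fourier, n_iter=100):
    """Error-reduction phase retrieval: alternately impose the known magnitudes in each domain."""
    g = mag_obj * np.exp(1j * rng.uniform(0.0, 2.0 * np.pi, mag_obj.shape))
    for _ in range(n_iter):
        G = np.fft.fft2(g)
        G = mag_fourier * np.exp(1j * np.angle(G))  # keep phase, impose Fourier magnitude
        g = np.fft.ifft2(G)
        g = mag_obj * np.exp(1j * np.angle(g))      # keep phase, impose object magnitude
    return g

# Demo: recover a random phase screen from its two magnitude measurements.
mag_obj = np.ones((16, 16))
true_field = mag_obj * np.exp(1j * rng.uniform(0.0, 2.0 * np.pi, (16, 16)))
mag_fourier = np.abs(np.fft.fft2(true_field))
g = error_reduction(mag_obj, mag_fourier)
residual = np.linalg.norm(np.abs(np.fft.fft2(g)) - mag_fourier)
```

    The MEM-type variant in the paper replaces this hard projection with an update derived from a generalized information measure; that step is not reproduced here.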

  20. Invariant object recognition based on the generalized discrete radon transform

    Science.gov (United States)

    Easley, Glenn R.; Colonna, Flavia

    2004-04-01

    We introduce a method for classifying objects based on special cases of the generalized discrete Radon transform. We adjust the transform and the corresponding ridgelet transform by means of circular shifting and a singular value decomposition (SVD) to obtain a translation, rotation and scaling invariant set of feature vectors. We then use a back-propagation neural network to classify the input feature vectors. We conclude with experimental results and compare these with other invariant recognition methods.

  1. Multiple access chaotic digital communication based on generalized synchronization

    International Nuclear Information System (INIS)

    Lu Junguo

    2005-01-01

    A novel method for multiple-access chaotic digital communication, based on the concept of chaos generalized synchronization and the on-line least squares method, is proposed. This method can be used to transmit multiple digital information signals concurrently. We illustrate the method using a Lorenz system driving a Chua's circuit and then examine the robustness of the proposed method with respect to noise in the communication channel.

  2. Generalization of information-based concepts in forecast verification

    Science.gov (United States)

    Tödter, J.; Ahrens, B.

    2012-04-01

    This work deals with information-theoretical methods in probabilistic forecast verification. Recent findings concerning the Ignorance Score are briefly reviewed; then the generalization to continuous forecasts is shown. For ensemble forecasts, the presented measures can be calculated exactly. The Brier Score (BS) and its generalizations to the multi-categorical Ranked Probability Score (RPS) and to the Continuous Ranked Probability Score (CRPS) are the prominent verification measures for probabilistic forecasts. Particularly, their decompositions into measures quantifying the reliability, resolution and uncertainty of the forecasts are attractive. Information theory sets up the natural framework for forecast verification. Recently, it has been shown that the BS is a second-order approximation of the information-based Ignorance Score (IGN), which also contains easily interpretable components and can also be generalized to a ranked version (RIGN). Here, the IGN, its generalizations and decompositions are systematically discussed in analogy to the variants of the BS. Additionally, a Continuous Ranked IGN (CRIGN) is introduced in analogy to the CRPS. The applicability and usefulness of the conceptually appealing CRIGN are illustrated, together with an algorithm to evaluate its reliability, resolution and uncertainty components for ensemble-generated forecasts. This is also directly applicable to the more traditional CRPS.
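
    The two baseline scores being compared can be stated compactly for binary forecasts; the decompositions and ranked/continuous generalizations discussed in the abstract build on these:

```python
import numpy as np

def brier(p, o):
    """Brier score for binary probability forecasts p against outcomes o in {0, 1} (lower is better)."""
    p, o = np.asarray(p, float), np.asarray(o, float)
    return float(np.mean((p - o) ** 2))

def ignorance(p, o):
    """Ignorance score: mean negative log2 probability assigned to the outcome that occurred."""
    p, o = np.asarray(p, float), np.asarray(o, float)
    return float(-np.mean(np.log2(np.where(o == 1, p, 1 - p))))
```

    A perfect forecast scores 0 under both measures, while the Ignorance Score penalizes confident misses much more heavily than the Brier Score, reflecting its logarithmic character.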

  3. A General Accelerated Degradation Model Based on the Wiener Process.

    Science.gov (United States)

    Liu, Le; Li, Xiaoyang; Sun, Fuqiang; Wang, Ning

    2016-12-06

    Accelerated degradation testing (ADT) is an efficient tool for material service reliability and safety evaluations based on the analysis of performance degradation data. Traditional stochastic process models mainly address linear or linearized degradation paths. However, those methods are not applicable when the degradation process cannot be linearized. Hence, in this paper, a general ADT model based on the Wiener process is proposed to solve the problem of accelerated degradation data analysis. The general model can account for the unit-to-unit variation and temporal variation of the degradation process, and is suitable for both linear and nonlinear ADT analyses with single or multiple acceleration variables. The statistical inference is given to estimate the unknown parameters in both constant-stress and step-stress ADT. The simulation example and two real applications demonstrate that the proposed method can yield reliable lifetime evaluation results compared with the existing linear and time-scale-transformation Wiener processes in both linear and nonlinear ADT analyses.
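
    A common way to write such a time-scale-transformed Wiener model (used here as an assumed illustration, not necessarily the authors' exact parameterization) is X(t) = μΛ(t) + σB(Λ(t)) with a nonlinear scale Λ(t) = t^b. Simulating it shows the nonlinear mean path and the temporal variation growing with Λ(t):

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_path(t, mu, sigma, b):
    """One degradation path X(t) = mu*Lambda(t) + sigma*B(Lambda(t)) with Lambda(t) = t**b."""
    lam = t ** b                                           # nonlinear time-scale transformation
    dB = rng.standard_normal(t.size - 1) * np.sqrt(np.diff(lam))  # Brownian increments in Lambda-time
    return np.concatenate([[0.0], mu * lam[1:] + sigma * np.cumsum(dB)])

t = np.linspace(0.0, 4.0, 201)
paths = np.array([simulate_path(t, mu=1.5, sigma=0.3, b=0.7) for _ in range(1000)])
```

    In an ADT setting the drift μ would additionally depend on the stress level through an acceleration relation; that dependence is omitted from this sketch.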

  4. Learning in neural networks based on a generalized fluctuation theorem

    Science.gov (United States)

    Hayakawa, Takashi; Aoyagi, Toshio

    2015-11-01

    Information maximization has been investigated as a possible mechanism of learning governing the self-organization that occurs within the neural systems of animals. Within the general context of models of neural systems bidirectionally interacting with environments, however, the role of information maximization remains to be elucidated. For bidirectionally interacting physical systems, universal laws describing the fluctuation they exhibit and the information they possess have recently been discovered. These laws are termed fluctuation theorems. In the present study, we formulate a theory of learning in neural networks bidirectionally interacting with environments based on the principle of information maximization. Our formulation begins with the introduction of a generalized fluctuation theorem, employing an interpretation appropriate for the present application, which differs from the original thermodynamic interpretation. We analytically and numerically demonstrate that the learning mechanism presented in our theory allows neural networks to efficiently explore their environments and optimally encode information about them.

  5. The SNEDAX Data Base, General Description and Users' Instructions

    International Nuclear Information System (INIS)

    Helm, F.

    1996-09-01

    The SNEDAX Data Base contains information on assemblies built and experiments performed in the fast neutron critical facilities SNEAK (FZK Karlsruhe), MASURCA (CEA Cadarache), ZEBRA (AEA Winfrith) and RRR (Rossendorf Ringzonenreaktor). This report describes the general scope of SNEDAX, the transfer of information from the experimental facilities, and the capabilities to produce graphics and input files for frequently used computer codes. The first part contains general information about the contents and capabilities of the data base. The second part gives instructions for persons who actually work with it. The contents of both parts are arranged in a similar way, using as far as practical an analogous decimal coding of the sections. The figures and the annex give examples of the way in which the data are stored and how they are presented as graphics. The data base is described as it existed in the middle of 1996. It is recognized that many improvements are still desirable, in particular with respect to a consistent description of the experiments that gives a reasonable but not excessive amount of information. Since work in the field of fast critical experiments will be discontinued at FZK, it is planned that further administration and improvement will be taken over by CEA Cadarache with the support of IPPE Obninsk.

  6. Gradient-based adaptation of general gaussian kernels.

    Science.gov (United States)

    Glasmachers, Tobias; Igel, Christian

    2005-10-01

    Gradient-based optimization of gaussian kernel functions is considered. The gradient for the adaptation of scaling and rotation of the input space is computed to achieve invariance against linear transformations. This is done by using the exponential map as a parameterization of the kernel parameter manifold. By restricting the optimization to a constant-trace subspace, the kernel size can be controlled. This is, for example, useful to prevent overfitting when minimizing radius-margin generalization performance measures. The concepts are demonstrated by training hard margin support vector machines on toy data.

  7. Generalized logistic map and its application in chaos based cryptography

    Science.gov (United States)

    Lawnik, M.

    2017-12-01

    The logistic map is commonly used in, for example, chaos-based cryptography. However, its properties do not permit a safe construction of encryption algorithms. Thus, the scope of the paper is a proposal for a generalization of the logistic map by means of a well-recognized family of chaotic maps. In the next step, the Lyapunov exponent and the distribution of the iterative variable are analyzed. The obtained results confirm that the analyzed model can safely and effectively replace a classic logistic map in applications involving chaotic cryptography.
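
    For the classic logistic map, the Lyapunov exponent analysis mentioned above proceeds by averaging log|f'(x)| along an orbit; at r = 4 the exact value is ln 2, a useful sanity check for any generalized variant:

```python
import numpy as np

def lyapunov_logistic(r, x0=0.3, n=200000, burn=1000):
    """Estimate the Lyapunov exponent of x -> r*x*(1-x) by averaging log|f'(x)| along the orbit."""
    x, total = x0, 0.0
    for i in range(n + burn):
        if i >= burn:
            total += np.log(abs(r * (1.0 - 2.0 * x)) + 1e-300)  # guard against log(0)
        x = r * x * (1.0 - x)
    return total / n
```

    A positive exponent (chaos) is the minimum requirement for cryptographic use; in periodic windows such as r = 3.2 the exponent is negative, which is one reason the plain logistic map is considered unsafe.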

  8. Generalized model for Memristor-based Wien family oscillators

    KAUST Repository

    Talukdar, Abdul Hafiz Ibne

    2012-07-23

    In this paper, we report the unconventional characteristics of Memristors in Wien oscillators. Generalized mathematical models are developed to analyze four members of the Wien family using Memristors. Sustained oscillation is reported for all types even though oscillating resistance and time-dependent poles are present. We have also proposed an analytical model to estimate the desired amplitude of oscillation before the oscillation starts. These Memristor-based oscillation results, presented for the first time, are in good agreement with simulation results. © 2011 Elsevier Ltd.

  9. General Base-General Acid Catalysis in Human Histone Deacetylase 8.

    Science.gov (United States)

    Gantt, Sister M Lucy; Decroos, Christophe; Lee, Matthew S; Gullett, Laura E; Bowman, Christine M; Christianson, David W; Fierke, Carol A

    2016-02-09

    Histone deacetylases (HDACs) regulate cellular processes such as differentiation and apoptosis and are targeted by anticancer therapeutics in development and in the clinic. HDAC8 is a metal-dependent class I HDAC and is proposed to use a general acid-base catalytic pair in the mechanism of amide bond hydrolysis. Here, we report site-directed mutagenesis and enzymological measurements to elucidate the catalytic mechanism of HDAC8. Specifically, we focus on the catalytic function of Y306 and the histidine-aspartate dyads H142-D176 and H143-D183. Additionally, we report X-ray crystal structures of four representative HDAC8 mutants: D176N, D176N/Y306F, D176A/Y306F, and H142A/Y306F. These structures provide a useful framework for understanding enzymological measurements. The pH dependence of kcat/KM for wild-type Co(II)-HDAC8 is bell-shaped with two pKa values of 7.4 and 10.0. The upper pKa reflects the ionization of the metal-bound water molecule and shifts to 9.1 in Zn(II)-HDAC8. The H142A mutant has activity 230-fold lower than that of wild-type HDAC8, but the pKa1 value is not altered. Y306F HDAC8 is 150-fold less active than the wild-type enzyme; crystal structures show that Y306 hydrogen bonds with the zinc-bound substrate carbonyl, poised for transition state stabilization. The H143A and H142A/H143A mutants exhibit activity that is >80000-fold lower than that of wild-type HDAC8; the buried D176N and D176A mutants have significant catalytic effects, with more subtle effects caused by D183N and D183A. These enzymological and structural studies strongly suggest that H143 functions as a single general base-general acid catalyst, while H142 remains positively charged and serves as an electrostatic catalyst for transition state stabilization.
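
    The bell-shaped pH dependence of kcat/KM reported above corresponds to a standard two-ionization kinetic model in which only the form between the two pKa values is active. With the paper's pKa values of 7.4 and 10.0, activity at pH = pKa1 is half-maximal (the functional form below is the textbook model, assumed rather than quoted from the paper):

```python
import numpy as np

def kcat_km(pH, k_lim=1.0, pKa1=7.4, pKa2=10.0):
    """Bell-shaped pH profile: only the singly ionized form between the two pKas is active."""
    h = 10.0 ** (-pH)
    return k_lim / (1.0 + h / 10.0 ** (-pKa1) + 10.0 ** (-pKa2) / h)
```

    The shift of the upper pKa from 10.0 to 9.1 on substituting Zn(II) for Co(II) is what identifies it with ionization of the metal-bound water.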

  10. Ultrawide Bandwidth Receiver Based on a Multivariate Generalized Gaussian Distribution

    KAUST Repository

    Ahmed, Qasim Zeeshan

    2015-04-01

    Multivariate generalized Gaussian density (MGGD) is used to approximate the multiple access interference (MAI) and additive white Gaussian noise in a pulse-based ultrawide bandwidth (UWB) system. The MGGD probability density function (pdf) is shown to be a better approximation for a UWB system than the multivariate Gaussian, multivariate Laplacian and multivariate Gaussian-Laplacian mixture (GLM) densities. The similarity between the simulated and the approximated pdf is measured with a modified Kullback-Leibler distance (KLD), and MGGD is shown to have the smallest KLD among the Gaussian, Laplacian and GLM alternatives. A receiver based on the principle of minimum bit error rate is designed for the MGGD pdf. Because the receiver's requirements are stringent, an adaptive implementation is also developed in this paper; the training sequence of the desired user is its only requirement. © 2002-2012 IEEE.
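A univariate sketch of why the generalized Gaussian family can subsume the candidates compared above: the standard GGD density recovers a Laplacian at shape beta = 1 and a Gaussian at beta = 2, and a discrete Kullback-Leibler distance quantifies how far apart two normalized profiles are. The grid and parameter choices below are arbitrary assumptions, not the paper's.

```python
import math

def ggd_pdf(x, alpha=1.0, beta=2.0):
    """Univariate generalized Gaussian density; beta=2 -> Gaussian, beta=1 -> Laplacian."""
    c = beta / (2 * alpha * math.gamma(1 / beta))
    return c * math.exp(-((abs(x) / alpha) ** beta))

def normalize(vals):
    """Turn sampled density values into a discrete probability vector."""
    s = sum(vals)
    return [v / s for v in vals]

def kl_divergence(p, q):
    """Discrete Kullback-Leibler divergence between two normalized histograms."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

grid = [i / 100 for i in range(-500, 501)]
p = normalize([ggd_pdf(x, beta=1.0) for x in grid])  # heavy-tailed, Laplacian-like
q = normalize([ggd_pdf(x, beta=2.0) for x in grid])  # Gaussian-shaped
print(kl_divergence(p, q) > 0)  # the two shapes differ, so KL > 0
```

In the paper the same kind of KLD comparison is done against the empirical multivariate MAI-plus-noise distribution rather than between analytic shapes.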

  11. Consensus-based training and assessment model for general surgery.

    Science.gov (United States)

    Szasz, P; Louridas, M; de Montbrun, S; Harris, K A; Grantcharov, T P

    2016-05-01

    Surgical education is becoming competency-based with the implementation of in-training milestones. Training guidelines should reflect these changes and determine the specific procedures for such milestone assessments. This study aimed to develop a consensus view regarding operative procedures and tasks considered appropriate for junior and senior trainees, and the procedures that can be used as technical milestone assessments for trainee progression in general surgery. A Delphi process was followed where questionnaires were distributed to all 17 Canadian general surgery programme directors. Items were ranked on a 5-point Likert scale, with consensus defined as Cronbach's α of at least 0·70. Items rated 4 or above on the 5-point Likert scale by 80 per cent of the programme directors were included in the models. Two Delphi rounds were completed, with 14 programme directors taking part in round one and 11 in round two. The overall consensus was high (Cronbach's α = 0·98). The training model included 101 unique procedures and tasks, 24 specific to junior trainees, 68 specific to senior trainees, and nine appropriate to all. The assessment model included four procedures. A system of operative procedures and tasks for junior- and senior-level trainees has been developed along with an assessment model for trainee progression. These can be used as milestones in competency-based assessments. © 2016 BJS Society Ltd Published by John Wiley & Sons Ltd.

  12. Clinical TVA-based studies: a general review

    Directory of Open Access Journals (Sweden)

    Thomas eHabekost

    2015-03-01

    In combination with whole report and partial report tasks, the Theory of Visual Attention (TVA) can be used to estimate individual differences in five basic attentional parameters: the visual processing speed, the storage capacity of visual short-term memory, the perceptual threshold, the efficiency of top-down selectivity, and the spatial bias of attentional weighting. TVA-based assessment has been used in about 30 studies to investigate attentional deficits in a range of neurological and psychiatric conditions: (a) neglect and simultanagnosia, (b) reading disturbances, (c) aging and neurodegenerative diseases, and most recently (d) neurodevelopmental disorders. The article introduces TVA-based assessment, discusses its methodology and psychometric properties, and reviews the progress made in each of the four research fields. The empirical results demonstrate the general usefulness of TVA-based assessment for many types of clinical neuropsychological research. The method’s most important qualities are cognitive specificity and theoretical grounding, but it is also characterized by good reliability and sensitivity to minor deficits. The review concludes by pointing to promising new areas for clinical TVA-based research.

  13. A General Accelerated Degradation Model Based on the Wiener Process

    Directory of Open Access Journals (Sweden)

    Le Liu

    2016-12-01

    Accelerated degradation testing (ADT) is an efficient tool to conduct material service reliability and safety evaluations by analyzing performance degradation data. Traditional stochastic process models are mainly for linear or linearization degradation paths. However, those methods are not applicable for the situations where the degradation processes cannot be linearized. Hence, in this paper, a general ADT model based on the Wiener process is proposed to solve the problem for accelerated degradation data analysis. The general model can consider the unit-to-unit variation and temporal variation of the degradation process, and is suitable for both linear and nonlinear ADT analyses with single or multiple acceleration variables. The statistical inference is given to estimate the unknown parameters in both constant stress and step stress ADT. The simulation example and two real applications demonstrate that the proposed method can yield reliable lifetime evaluation results compared with the existing linear and time-scale transformation Wiener processes in both linear and nonlinear ADT analyses.
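A minimal simulation of the kind of model described: a Wiener degradation path X(t) = mu*Lambda(t) + sigma*B(Lambda(t)), where the nonlinear time scale Lambda(t) = t^b captures a non-linearizable degradation trend. The power-law time scale and all parameter values below are illustrative assumptions, not the paper's estimates.

```python
import math
import random

def simulate_degradation(mu, sigma, t_end=100.0, dt=1.0, b=1.2, seed=0):
    """Simulate one Wiener-process degradation path with a nonlinear
    time scale Lambda(t) = t**b:  X(t) = mu*Lambda(t) + sigma*B(Lambda(t))."""
    rng = random.Random(seed)
    path, x, lam_prev = [(0.0, 0.0)], 0.0, 0.0
    t = dt
    while t <= t_end + 1e-9:
        lam = t ** b
        # Independent Gaussian increment with variance sigma^2 * dLambda
        x += mu * (lam - lam_prev) + sigma * math.sqrt(lam - lam_prev) * rng.gauss(0, 1)
        path.append((t, x))
        lam_prev = lam
        t += dt
    return path

path = simulate_degradation(mu=0.05, sigma=0.2)
print(len(path), path[-1][0])
```

In an ADT setting, the drift mu would additionally be tied to the stress level through an acceleration model (e.g. Arrhenius), which is where the constant-stress and step-stress inference in the paper comes in.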

  14. Hydroxide as general base in the saponification of ethyl acetate.

    Science.gov (United States)

    Mata-Segreda, Julio F

    2002-03-13

    The second-order rate constant for the saponification of ethyl acetate at 30.0 degrees C in H₂O/D₂O mixtures of deuterium atom fraction n (a proton inventory experiment) obeys the relation k₂(n) = 0.122 s⁻¹ M⁻¹ (1 - n + 1.2n)(1 - n + 0.48n)/[(1 - n + 1.4n)(1 - n + 0.68n)³]. This result is interpreted as a process where formation of the tetrahedral intermediate is the rate-determining step and the transition-state complex is formed via nucleophilic interaction of a water molecule with general-base assistance from hydroxide ion, opposite to the direct nucleophilic collision commonly accepted. This mechanistic picture agrees with previous heavy-atom kinetic isotope effect data of Marlier on the alkaline hydrolysis of methyl formate.
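The reported Gross-Butler proton-inventory relation can be evaluated directly. The sketch below simply codes the polynomial quoted in the abstract (units s⁻¹ M⁻¹) and checks the two endpoints of the inventory.

```python
def k2(n):
    """Proton-inventory rate constant from the reported Gross-Butler relation
    for ethyl acetate saponification at 30.0 C (n = deuterium atom fraction)."""
    num = (1 - n + 1.2 * n) * (1 - n + 0.48 * n)
    den = (1 - n + 1.4 * n) * (1 - n + 0.68 * n) ** 3
    return 0.122 * num / den

print(round(k2(0.0), 3))            # pure H2O: the quoted 0.122 s^-1 M^-1
print(round(k2(1.0) / k2(0.0), 2))  # pure D2O is faster: 1.31 (inverse solvent isotope effect)
```

The inverse overall solvent isotope effect (k2 larger in D₂O) is what the transition-state fractionation factors in the numerator and denominator encode.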

  15. A Lab-Based, Lecture-Free General Physics Course

    Science.gov (United States)

    Schneider, Mark B.

    1997-04-01

    The past four years have seen the development of a discovery-style, lecture-free, lab-based General Physics course at Grinnell College. Similar in spirit to Priscilla Laws' Workshop Physics (P. Laws, Physics Today, Dec. 1991, p. 24.), this course is a calculus-based, two-semester sequence, which is offered in parallel with more conventional lecture sections, allowing students a choice of pedagogical styles. This new course is taught without a text, allowing a somewhat atypical ordering of topics and the early inclusion of a modern introduction to quantum and statistical mechanics. A complete set of laboratory materials was developed at Grinnell for this course, with activities in most cases considerably different from Laws' activities. A quick overview of the pedagogical style and topics covered will be given, and then several specific activities will be described in greater detail. The course has been shown to be a popular and viable alternative to the more conventional sections for majors and non-majors; ongoing efforts to assess the course will be described, especially those that make comparisons between this course and more conventional sections.

  16. A general exergy-based environmental impact index

    International Nuclear Information System (INIS)

    Diaz-Mendez, Sosimo E.; Rodriguez-Lelis, Jose Maria; Hernandez-Guerrero, Abel

    2011-01-01

    An ecosystem is a complex system in which biotic and abiotic factors interact and influence each other both directly and indirectly. Each of these factors has to fulfil a specific function in the different processes that occur inside the ecosystem, whether transporting energy, transforming it, or both. When anthropogenic emissions are produced, part of the useful energy of the ecosystem is used to assimilate or absorb those emissions, and the energy spent loses its function and becomes lost work in accordance with the Gouy-Stodola theorem. Thus, the work that an ecosystem can carry out varies as a function of the lost work produced by anthropogenic sources, and the permanency or loss of the ecosystem depends on how many irreversibilities it can support. The second law of thermodynamics, through a systematic use of exergy and lost work, is the basis of this paper, in which a general environmental impact index based on exergy is proposed. For the purpose of this work, the ecosystem is divided into subsystems (water, soil, atmosphere, organisms and society), all of them inter-related. The ideal work variation can be obtained for each subsystem within the selected ecosystem, and a global index can be determined by adding the partial lost work of each subsystem. This global index is then used to determine the trend followed by the ecosystem from its pristine, original or environmental baseline state. The applicability of this environmental impact index is illustrated with a simple combustion example
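The Gouy-Stodola theorem at the core of the index states that lost work equals the dead-state temperature times the entropy generated by irreversibilities. The sketch below codes that relation and a sum-over-subsystems index in the spirit of the abstract; the subsystem entropy-generation numbers are hypothetical placeholders.

```python
def lost_work(t0, s_gen):
    """Gouy-Stodola theorem: lost (unavailable) work = dead-state temperature
    times the entropy generated by irreversibilities."""
    return t0 * s_gen

def impact_index(subsystem_sgen, t0=298.15):
    """Illustrative global index: the sum of partial lost work over the
    subsystems named in the abstract (a sketch, not the paper's exact index)."""
    return sum(lost_work(t0, s) for s in subsystem_sgen.values())

# Hypothetical entropy-generation rates (kW/K) per subsystem
subsystems = {"water": 1.2, "soil": 0.4, "atmosphere": 2.1, "organisms": 0.3, "society": 0.9}
print(round(impact_index(subsystems), 1))  # total lost work in kW
```

Tracking this index over time against the baseline state is what lets the paper quantify the ecosystem's drift from its pristine condition.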

  17. A general framework for sensor-based human activity recognition.

    Science.gov (United States)

    Köping, Lukas; Shirahama, Kimiaki; Grzegorzek, Marcin

    2018-04-01

    Today's wearable devices like smartphones, smartwatches and intelligent glasses collect a large amount of data from their built-in sensors like accelerometers and gyroscopes. These data can be used to identify a person's current activity and in turn can be utilised for applications in the field of personal fitness assistants or elderly care. However, developing such systems is subject to certain restrictions: (i) since more and more new sensors will be available in the future, activity recognition systems should be able to integrate these new sensors with a small amount of manual effort and (ii) such systems should avoid high acquisition costs for computational power. We propose a general framework that achieves an effective data integration based on the following two characteristics: Firstly, a smartphone is used to gather and temporally store data from different sensors and transfer these data to a central server. Thus, various sensors can be integrated into the system as long as they have programming interfaces to communicate with the smartphone. The second characteristic is a codebook-based feature learning approach that can encode data from each sensor into an effective feature vector only by tuning a few intuitive parameters. In the experiments, the framework is realised as a real-time activity recognition system that integrates eight sensors from a smartphone, smartwatch and smartglasses, and its effectiveness is validated from different perspectives such as accuracies, sensor combinations and sampling rates. Copyright © 2018 Elsevier Ltd. All rights reserved.
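The codebook-based feature learning mentioned above can be sketched in a few lines: cluster raw samples into codewords, then encode each sensor window as a normalized histogram of nearest-codeword assignments. Everything below (the 1-D simplification, the synthetic accelerometer magnitudes, the cluster count) is an illustrative assumption, not the paper's pipeline.

```python
import random

def kmeans_1d(points, k, iters=25, seed=0):
    """Tiny 1-D k-means -- a stand-in for the codebook-construction step."""
    rng = random.Random(seed)
    centers = rng.sample(points, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            clusters[min(range(k), key=lambda i: abs(p - centers[i]))].append(p)
        centers = [sum(c) / len(c) if c else centers[i] for i, c in enumerate(clusters)]
    return centers

def encode(window, centers):
    """Codebook encoding: a normalized histogram of nearest-codeword counts,
    giving every sensor a fixed-length feature vector."""
    hist = [0] * len(centers)
    for p in window:
        hist[min(range(len(centers)), key=lambda i: abs(p - centers[i]))] += 1
    return [h / len(window) for h in hist]

# Hypothetical accelerometer magnitudes for two activities
rng = random.Random(1)
walking = [1.0 + 0.5 * rng.random() for _ in range(50)]
resting = [0.1 + 0.05 * rng.random() for _ in range(50)]
codebook = kmeans_1d(walking + resting, k=4)
print(encode(walking, codebook))
print(encode(resting, codebook))
```

Because every sensor is mapped to a fixed-length vector this way, a new sensor can be integrated by learning its own codebook, which is the "few intuitive parameters" property the abstract highlights.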

  18. A general framework for regularized, similarity-based image restoration.

    Science.gov (United States)

    Kheradmand, Amin; Milanfar, Peyman

    2014-12-01

    Any image can be represented as a function defined on a weighted graph, in which the underlying structure of the image is encoded in kernel similarity and associated Laplacian matrices. In this paper, we develop an iterative graph-based framework for image restoration based on a new definition of the normalized graph Laplacian. We propose a cost function, which consists of a new data fidelity term and a regularization term derived from the specific definition of the normalized graph Laplacian. The normalizing coefficients used in the definition of the Laplacian and the associated regularization term are obtained using fast symmetry-preserving matrix balancing. This yields desirable spectral properties for the normalized Laplacian: it is symmetric, positive semidefinite, and returns the zero vector when applied to a constant image. Our algorithm comprises outer and inner iterations: in each outer iteration, the similarity weights are recomputed using the previous estimate, and the updated objective function is minimized using inner conjugate gradient iterations. This procedure improves the performance of the algorithm for image deblurring, where we do not have access to a good initial estimate of the underlying image. In addition, the specific form of the cost function allows us to carry out a spectral analysis of the solutions of the corresponding linear equations. Moreover, the proposed approach is general in the sense that we have shown its effectiveness for different restoration problems, including deblurring, denoising, and sharpening. Experimental results verify the effectiveness of the proposed algorithm on both synthetic and real examples.
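The symmetry-preserving balancing step can be illustrated on a toy 1-D "image": build a Gaussian kernel similarity matrix, scale it iteratively as C^(-1/2) W C^(-1/2) until it is doubly stochastic, and form L = I - W. The tiny signal, kernel bandwidth, and the plain Sinkhorn-style iteration below are assumptions for illustration; the paper uses a fast balancing algorithm on full images.

```python
import math

def kernel_weights(signal, h=0.5):
    """Similarity matrix from a Gaussian kernel on pixel intensities."""
    n = len(signal)
    return [[math.exp(-((signal[i] - signal[j]) ** 2) / h ** 2) for j in range(n)]
            for i in range(n)]

def sinkhorn_balance(w, iters=200):
    """Symmetry-preserving balancing: repeatedly scale W -> C^-1/2 W C^-1/2
    until rows (and, by symmetry, columns) sum to 1."""
    n = len(w)
    for _ in range(iters):
        r = [math.sqrt(sum(row)) for row in w]
        w = [[w[i][j] / (r[i] * r[j]) for j in range(n)] for i in range(n)]
    return w

signal = [0.1, 0.2, 0.9, 0.85, 0.15]
wb = sinkhorn_balance(kernel_weights(signal))
# Normalized Laplacian L = I - W_b: symmetric, and it maps constant vectors to ~0
lap = [[(1.0 if i == j else 0.0) - wb[i][j] for j in range(5)] for i in range(5)]
row_sums = [sum(row) for row in lap]
print(max(abs(s) for s in row_sums))  # near zero: constant images are preserved
```

The near-zero row sums are exactly the "returns zero vector when applied to a constant image" property claimed for the normalized Laplacian.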

  19. A damage mechanics based general purpose interface/contact element

    Science.gov (United States)

    Yan, Chengyong

    laboratory test data presented in the literature. The results demonstrate that the proposed element and the damage law perform very well. The most important scientific contribution of this dissertation is the proposed damage criterion, based on the second law of thermodynamics and the entropy of the system. The proposed general-purpose interface/contact element is another contribution of this research. Compared to the previous ad hoc interface elements proposed in the literature, the new one is much more powerful and includes creep, plastic deformations, sliding, temperature, damage, cyclic behavior and fatigue life in a unified formulation.

  20. Multivariate statistical modelling based on generalized linear models

    CERN Document Server

    Fahrmeir, Ludwig

    1994-01-01

    This book is concerned with the use of generalized linear models for univariate and multivariate regression analysis. Its emphasis is to provide a detailed introductory survey of the subject based on the analysis of real data drawn from a variety of subjects including the biological sciences, economics, and the social sciences. Where possible, technical details and proofs are deferred to an appendix in order to provide an accessible account for non-experts. Topics covered include: models for multi-categorical responses, model checking, time series and longitudinal data, random effects models, and state-space models. Throughout, the authors have taken great pains to discuss the underlying theoretical ideas in ways that relate well to the data at hand. As a result, numerous researchers whose work relies on the use of these models will find this an invaluable account to have on their desks. "The basic aim of the authors is to bring together and review a large part of recent advances in statistical modelling of m...

  1. Cognitive performance modeling based on general systems performance theory.

    Science.gov (United States)

    Kondraske, George V

    2010-01-01

    General Systems Performance Theory (GSPT) was initially motivated by problems associated with quantifying different aspects of human performance. It has proved to be invaluable for measurement development and understanding quantitative relationships between human subsystem capacities and performance in complex tasks. It is now desired to bring focus to the application of GSPT to modeling of cognitive system performance. Previous studies involving two complex tasks (i.e., driving and performing laparoscopic surgery) and incorporating measures that are clearly related to cognitive performance (information processing speed and short-term memory capacity) were revisited. A GSPT-derived method of task analysis and performance prediction termed Nonlinear Causal Resource Analysis (NCRA) was employed to determine the demand on basic cognitive performance resources required to support different levels of complex task performance. This approach is presented as a means to determine a cognitive workload profile and the subsequent computation of a single number measure of cognitive workload (CW). Computation of CW may be a viable alternative to measuring it. Various possible "more basic" performance resources that contribute to cognitive system performance are discussed. It is concluded from this preliminary exploration that a GSPT-based approach can contribute to defining cognitive performance models that are useful for both individual subjects and specific groups (e.g., military pilots).

  2. Generating inferences from knowledge structures based on general automata

    Energy Technology Data Exchange (ETDEWEB)

    Koenig, E C

    1983-01-01

    The author shows that the model for knowledge structures for computers based on general automata accommodates procedures for establishing inferences. Algorithms are presented which generate inferences as output of a computer when its sentence input names appropriate knowledge elements contained in an associated knowledge structure already stored in the memory of the computer. The inferences are found to have either a single graph tuple or more than one graph tuple of associated knowledge. Six algorithms pertain to a single graph tuple and a seventh pertains to more than one graph tuple of associated knowledge. A named term is either the automaton, environment, auxiliary receptor, principal receptor, auxiliary effector, or principal effector. The algorithm pertaining to more than one graph tuple requires that the input sentence names the automaton, transformation response, and environment of one of the tuples of associated knowledge in a sequence of tuples. Interaction with the computer may be either in a conversation or examination mode. The algorithms are illustrated by an example. 13 references.

  3. Peer Instruction in an Algebra-Based General Physics Course

    Science.gov (United States)

    Listerman, Thomas W.

    1999-10-01

    We have restructured our algebra-based general physics course to increase peer instruction. For the last three years each lecture has been followed by a recitation class. In recitation class students break up into small groups to work on "study guides" concerning the previous lecture. The recitation instructor is available to answer questions and to provide encouragement. The study guides ask qualitative and quantitative questions to lead students step-by-step through the material. Two completed study guides and a homework assignment are submitted each week for grading, and the solutions are available later on the internet. Student surveys show the majority of students have a good attitude about the course, like to work in groups with their friends, and like the ready availability of the instructor for help. Both students and faculty seem to like the more frequent one-to-one contact of this format. We have also noticed that one student in each group tends to ask most of the questions and then "translates" the instructor's response into words the others understand. Lest you think "the millennium has arrived," student performance on multiple-choice tests has not improved markedly, some students strongly resist cooperation with others, and many students still think this is the hardest course they have ever taken.

  4. Renewal processes based on generalized Mittag-Leffler waiting times

    Science.gov (United States)

    Cahoy, Dexter O.; Polito, Federico

    2013-03-01

    The fractional Poisson process has recently attracted experts from several fields of study. Its natural generalization of the ordinary Poisson process makes the model more appealing for real-world applications. In this paper, we generalize the standard and fractional Poisson processes through the waiting time distribution, and show their relations to an integral operator with a generalized Mittag-Leffler function in the kernel. The waiting times of the proposed renewal processes have the generalized Mittag-Leffler and stretched-squashed Mittag-Leffler distributions, and the generalizations naturally provide greater flexibility in modeling real-life renewal processes. Algorithms to simulate sample paths and to estimate the model parameters are derived; these procedures are necessary to make the models usable in practice. State probabilities and other qualitative or quantitative features of the models are also discussed.
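Simulating such waiting times can be done with the inversion formula used in the fractional-Poisson simulation literature; the exact form below (and treating it as the right sampler for the paper's generalized distributions) is an assumption to check against the paper's own algorithms. At beta = 1 it reduces to the exponential waiting times of the ordinary Poisson process.

```python
import math
import random

def ml_waiting_time(beta, gamma, rng):
    """Draw a Mittag-Leffler-distributed waiting time, 0 < beta <= 1, using the
    inversion formula from the fractional-Poisson simulation literature
    (assumed form); beta = 1 recovers the exponential case."""
    u, v = 1.0 - rng.random(), 1.0 - rng.random()  # uniforms on (0, 1]
    if beta == 1.0:
        return -gamma * math.log(u)
    factor = (math.sin(beta * math.pi) / math.tan(beta * math.pi * v)
              - math.cos(beta * math.pi))
    return -gamma * math.log(u) * factor ** (1.0 / beta)

rng = random.Random(7)
heavy = [ml_waiting_time(0.8, 1.0, rng) for _ in range(1000)]  # heavy-tailed
expo = [ml_waiting_time(1.0, 1.0, rng) for _ in range(1000)]   # exponential limit
print(all(t > 0 for t in heavy), min(expo) > 0)
```

Summing consecutive draws gives renewal epochs, i.e. a sample path of the counting process.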

  5. Evidence-based treatment of atopic eczema in general practice

    African Journals Online (AJOL)

    banzi

    The exact cause is unknown but a ... Scratching may result in secondary infection and associated ... general practice. Atopic eczema is a common chronic condition ... There was either insufficient or no ... itch and indirectly improving sleep.

  6. Generalized model for Memristor-based Wien family oscillators

    KAUST Repository

    Talukdar, Abdul Hafiz Ibne; Radwan, Ahmed G.; Salama, Khaled N.

    2012-01-01

    In this paper, we report the unconventional characteristics of Memristors in Wien oscillators. Generalized mathematical models are developed to analyze four members of the Wien family using Memristors. Sustained oscillation is reported for all types

  7. Improving Generalization Based on l1-Norm Regularization for EEG-Based Motor Imagery Classification

    Directory of Open Access Journals (Sweden)

    Yuwei Zhao

    2018-05-01

    Multichannel electroencephalography (EEG) is widely used in typical brain-computer interface (BCI) systems. In general, a number of parameters are essential for an EEG classification algorithm because of the redundant features involved in EEG signals. However, the generalization of an EEG method is often adversely affected by its model complexity, which is closely tied to the number of undetermined parameters and can lead to heavy overfitting. To decrease the complexity and improve the generalization of the EEG method, we present a novel l1-norm-based approach that combines the decision values obtained from each EEG channel directly. By extracting the information from different channels on independent frequency bands (FB) with l1-norm regularization, the proposed method fits the training data with far fewer parameters than common spatial pattern (CSP) methods, thereby reducing overfitting. Moreover, an effective and efficient solution to minimize the optimization objective is proposed. The experimental results on dataset IVa of BCI competition III and dataset I of BCI competition IV show that the proposed method achieves high classification accuracy and increases generalization performance for the classification of MI EEG. As the training set ratio decreases from 80 to 20%, the average classification accuracy on the two datasets changes from 85.86 and 86.13% to 84.81 and 76.59%, respectively. The classification performance and generalization of the proposed method contribute to the practical application of MI-based BCI systems.
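The channel-combination idea can be sketched with a plain proximal-gradient (ISTA) loop: a squared loss over per-channel decision values plus an l1 penalty whose soft-thresholding step drives uninformative channel weights exactly to zero. The toy data, step size, and penalty weight below are assumptions, not the paper's algorithm or parameters.

```python
def soft_threshold(z, t):
    """Proximal operator of the l1 norm (drives small weights exactly to zero)."""
    return max(z - t, 0.0) if z > 0 else min(z + t, 0.0)

def l1_combine(scores, labels, lam=0.1, lr=0.1, iters=1000):
    """ISTA sketch: learn a sparse linear combination of per-channel decision
    values. scores[i] is one trial's vector of channel outputs, labels in {-1,+1}."""
    n_ch = len(scores[0])
    w = [0.0] * n_ch
    for _ in range(iters):
        grad = [0.0] * n_ch  # gradient of the mean squared loss
        for x, y in zip(scores, labels):
            err = sum(wi * xi for wi, xi in zip(w, x)) - y
            for j in range(n_ch):
                grad[j] += 2 * err * x[j] / len(scores)
        w = [soft_threshold(wi - lr * gi, lr * lam) for wi, gi in zip(w, grad)]
    return w

# Hypothetical trials: channel 0 is informative, channel 1 is noise
scores = [[1.0, 0.3], [0.9, -0.2], [-1.1, 0.1], [-0.8, -0.4]]
labels = [1, 1, -1, -1]
w = l1_combine(scores, labels)
print(w[0] > 0.8, abs(w[1]) < 0.05)  # sparse: the informative channel dominates
```

This sparsity is the mechanism by which the l1 penalty reduces the effective parameter count as the training set shrinks.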

  8. Fuel management optimization based on generalized perturbation theory

    International Nuclear Information System (INIS)

    White, J.R.; Chapman, D.M.; Biswas, D.

    1986-01-01

    A general methodology for optimization of assembly shuffling and burnable poison (BP) loadings for LWR reload design has been developed. The uniqueness of this approach lies in the coupling of Generalized Perturbation Theory (GPT) methods and standard Integer Programming (IP) techniques. An IP algorithm can simulate the discrete nature of the fuel shuffling and BP loading problems, and the use of GPT sensitivity data provides an efficient means for modeling the behavior of the important core performance parameters. The method is extremely flexible since the choice of objective function and the number and mix of constraints depend only on the ability of GPT to determine the appropriate sensitivity functions
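The GPT-plus-IP coupling can be sketched as follows: first-order GPT sensitivities give a linear model of how each discrete loading decision shifts the core performance parameters, and the integer program then searches the discrete choices against constraints. All sensitivity numbers, the positions, and the exhaustive search below are toy assumptions, not the paper's data or solver.

```python
from itertools import combinations

# Hypothetical first-order GPT sensitivities for 6 candidate positions:
# loading a burnable-poison (BP) rod shortens the cycle but lowers the peaking factor
d_cycle = [-8.0, -5.5, -4.0, -6.5, -3.0, -7.0]        # days of cycle length
d_peak = [-0.04, -0.02, -0.01, -0.05, -0.005, -0.03]  # change in peaking factor
base_peak, peak_limit = 1.52, 1.48

def best_bp_loading(n_bp):
    """Exhaustive integer program: choose n_bp poison positions satisfying the
    peaking constraint (predicted via the GPT linear model) with the smallest
    cycle-length penalty. Returns (penalty_days, positions, predicted_peak)."""
    best = None
    for combo in combinations(range(6), n_bp):
        peak = base_peak + sum(d_peak[i] for i in combo)
        if peak > peak_limit:
            continue  # constraint violated
        penalty = -sum(d_cycle[i] for i in combo)
        if best is None or penalty < best[0]:
            best = (penalty, combo, peak)
    return best

print(best_bp_loading(2))
```

For realistic assembly counts the enumeration is replaced by a proper IP solver, but the linear GPT sensitivities play exactly this role of a cheap surrogate for core calculations.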

  9. Robust estimators based on generalization of trimmed mean

    Czech Academy of Sciences Publication Activity Database

    Adam, Lukáš; Bejda, P.

    (2018) ISSN 0361-0918 Institutional support: RVO:67985556 Keywords : Breakdown point * Estimators * Geometric median * Location * Trimmed mean Subject RIV: BA - General Mathematics Impact factor: 0.457, year: 2016 http://library.utia.cas.cz/separaty/2017/MTR/adam-0481224.pdf

  10. Disease prevalence estimations based on contact registrations in general practice

    NARCIS (Netherlands)

    Hoogenveen, Rudolf; Westert, Gert; Dijkgraaf, Marcel; Schellevis, François; de Bakker, Dinny

    2002-01-01

    This paper describes how to estimate the prevalence of chronic diseases in a population using data from contact registrations in general practice with a limited time length. Instead of using only total numbers of observed patients adjusted for the length of the observation period, we propose the use

  11. Robotics in general surgery: an evidence-based review.

    Science.gov (United States)

    Baek, Se-Jin; Kim, Seon-Hahn

    2014-05-01

    Since its introduction, robotic surgery has been rapidly adopted to the extent that it has already assumed an important position in the field of general surgery. This rapid progress is quantitative as well as qualitative. In this review, we focus on the relatively common procedures to which robotic surgery has been applied in several fields of general surgery, including gastric, colorectal, hepato-biliary-pancreatic, and endocrine surgery, and we discuss the results to date and future possibilities. In addition, the advantages and limitations of the current robotic system are reviewed, and the advanced technologies and instruments to be applied in the near future are introduced. Such progress is expected to facilitate the widespread introduction of robotic surgery in additional fields and to solve existing problems.

  12. Identifying multiple influential spreaders based on generalized closeness centrality

    Science.gov (United States)

    Liu, Huan-Li; Ma, Chuang; Xiang, Bing-Bing; Tang, Ming; Zhang, Hai-Feng

    2018-02-01

    To maximize the spreading influence of multiple spreaders in complex networks, one important fact cannot be ignored: the multiple spreaders should be dispersively distributed in the network, which effectively reduces the redundancy of information spreading. For this purpose, we define a generalized closeness centrality (GCC) index by generalizing the closeness centrality index to a set of nodes. The problem then becomes how to identify multiple spreaders such that an objective function attains its minimal value. By comparing with the K-means clustering algorithm, we find that this optimization problem is very similar to minimizing the objective function in the K-means method. Therefore, finding multiple nodes with the highest GCC value can be approximately solved by the K-means method. Two typical transmission dynamics, the epidemic spreading process and the rumor spreading process, are implemented on real networks to verify the good performance of our proposed method.
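A toy version of the idea: score a node set by the inverse of the average distance from every node to its nearest set member (one plausible reading of the set-generalized closeness), and build the set greedily. The graph, the exact GCC definition, and the greedy heuristic standing in for the K-means optimization are all illustrative assumptions.

```python
from collections import deque

def bfs_distances(adj, source):
    """Unweighted shortest-path distances from source via BFS."""
    dist = {source: 0}
    q = deque([source])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return dist

def set_gcc(adj, spreaders):
    """Generalized closeness of a node set: inverse of the average distance
    from every node to its nearest spreader (assumed form)."""
    total = 0
    for v in adj:
        total += min(bfs_distances(adj, s).get(v, len(adj)) for s in spreaders)
    return len(adj) / total if total else float("inf")

def greedy_spreaders(adj, k):
    """Greedily add the node that most increases the set's GCC -- a cheap
    stand-in for the K-means-style optimization described above."""
    chosen = []
    for _ in range(k):
        best = max((n for n in adj if n not in chosen),
                   key=lambda n: set_gcc(adj, chosen + [n]))
        chosen.append(best)
    return chosen

# Two 4-cliques joined by a bridge: dispersed spreaders sit one on each side
adj = {0: [1, 2, 3], 1: [0, 2, 3], 2: [0, 1, 3], 3: [0, 1, 2, 4],
       4: [3, 5, 6, 7], 5: [4, 6, 7], 6: [4, 5, 7], 7: [4, 5, 6]}
print(sorted(greedy_spreaders(adj, 2)))  # one node in each community
```

The dispersion is visible in the result: the two selected nodes straddle the bridge rather than crowding one community, which is exactly the redundancy-reduction argument in the abstract.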

  13. A generalized complexity measure based on Rényi entropy

    Science.gov (United States)

    Sánchez-Moreno, Pablo; Angulo, Juan Carlos; Dehesa, Jesus S.

    2014-08-01

    The intrinsic statistical complexities of finite many-particle systems (i.e., those defined in terms of the single-particle density) quantify the degree of structure or pattern far beyond what entropy measures capture. They are intuitively constructed to be minimal at the opposite extremes of perfect order and maximal randomness. Starting from the pioneering LMC measure, which satisfies these requirements, some extensions of LMC-Rényi type have been published in the literature. The latter measures were shown to describe a variety of physical aspects of the internal disorder in atomic and molecular systems (e.g., quantum phase transitions, atomic shell filling) that are not grasped by their mother LMC quantity. However, they are not, in general, minimal for maximal randomness. In this communication, we propose a generalized LMC-Rényi complexity which overcomes this problem. Some applications which illustrate this fact are given.
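The two-extremes requirement can be checked numerically on a simple member of the LMC-Rényi family. The particular form C = exp(R_alpha - R_beta) with alpha < beta used below is an illustrative choice, not necessarily the paper's proposed measure.

```python
import math

def renyi_entropy(p, alpha):
    """Rényi entropy R_alpha of a discrete distribution (alpha != 1)."""
    return math.log(sum(pi ** alpha for pi in p if pi > 0)) / (1.0 - alpha)

def lmc_renyi_complexity(p, alpha=0.5, beta=2.0):
    """An LMC-Rényi-type complexity C = exp(R_alpha - R_beta), alpha < beta
    (one illustrative member of the two-parameter family discussed above)."""
    return math.exp(renyi_entropy(p, alpha) - renyi_entropy(p, beta))

uniform = [0.25] * 4           # maximal randomness
peaked = [1.0, 0.0, 0.0, 0.0]  # perfect order
mixed = [0.7, 0.1, 0.1, 0.1]   # structured: partially ordered
print(lmc_renyi_complexity(uniform), lmc_renyi_complexity(peaked))  # both 1.0
print(lmc_renyi_complexity(mixed) > 1.0)
```

For this form, both extremes sit at the minimum value 1 (since all Rényi entropies coincide there), while the structured distribution exceeds it; the paper's concern is constructing a family for which this minimality holds in general.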

  14. Implementing evidence-based medicine in general practice: a focus group based study

    Directory of Open Access Journals (Sweden)

    Aertgeerts Bert

    2005-09-01

    Background: Over the past years, concerns have been rising about the use of Evidence-Based Medicine (EBM) in health care. Calls for an increase in the practice of EBM seem to be obstructed by many barriers preventing the implementation of evidence-based thinking and acting in general practice. This study aims to explore the barriers of Flemish GPs (General Practitioners) to the implementation of EBM in routine clinical work and to identify possible strategies for integrating EBM in daily work. Methods: We used a qualitative research strategy to gather and analyse data. We organised focus groups between September 2002 and April 2003, and the focus group data were analysed using a combined strategy of 'between-case' analysis and a 'grounded theory' approach. Thirty-one general practitioners participated in four focus groups; purposeful sampling was used to recruit participants. Results: A basic classification model documents the influencing factors and actors on a micro-, meso- and macro-level. Patients, colleagues, competences, logistics and time were identified on the micro-level (the GPs' individual practice), commercial and consumer organisations on the meso-level (institutions, organisations and health care policy), and media and specific characteristics of evidence on the macro-level (policy level and international scientific community). Existing barriers and possible strategies to overcome these barriers were described. Conclusion: In order to implement EBM in routine general practice, an integrated approach on different levels needs to be developed.

  15. Generalized flow and determinism in measurement-based quantum computation

    Energy Technology Data Exchange (ETDEWEB)

    Browne, Daniel E [Department of Materials, University of Oxford, Parks Road, Oxford OX1 3PU (United Kingdom); Kashefi, Elham [Computing Laboratory and Christ Church College, University of Oxford, Parks Road, Oxford OX1 3QD (United Kingdom); Mhalla, Mehdi [Laboratoire d' Informatique de Grenoble, CNRS - Centre national de la recherche scientifique, Universite de Grenoble (France); Perdrix, Simon [Preuves, Programmes et Systemes (PPS), Universite Paris Diderot, Paris (France)

    2007-08-15

    We extend the notion of quantum information flow defined by Danos and Kashefi (2006 Phys. Rev. A 74 052310) for the one-way model (Raussendorf and Briegel 2001 Phys. Rev. Lett. 86 910) and present a necessary and sufficient condition for stepwise uniformly deterministic computation in this model. The generalized flow also applies in the extended model with measurements in the (X, Y), (X, Z) and (Y, Z) planes. We apply both the measurement calculus and the stabiliser formalism to derive our main theorem, which for the first time gives a full characterization of stepwise uniformly deterministic computation in the one-way model. We present several examples to show how our result improves over the traditional notion of flow, such as geometries (entanglement graphs with input and output) with no flow but with generalized flow, and we discuss how they lead to an optimal implementation of the unitaries. More importantly, one can also obtain a better quantum computation depth with the generalized flow than with flow. We believe our characterization result is particularly valuable for the study of algorithms and complexity in the one-way model.

  16. Generalized flow and determinism in measurement-based quantum computation

    International Nuclear Information System (INIS)

    Browne, Daniel E; Kashefi, Elham; Mhalla, Mehdi; Perdrix, Simon

    2007-01-01

    We extend the notion of quantum information flow defined by Danos and Kashefi (2006 Phys. Rev. A 74 052310) for the one-way model (Raussendorf and Briegel 2001 Phys. Rev. Lett. 86 910) and present a necessary and sufficient condition for stepwise uniformly deterministic computation in this model. The generalized flow also applies in the extended model with measurements in the (X, Y), (X, Z) and (Y, Z) planes. We apply both the measurement calculus and the stabiliser formalism to derive our main theorem, which for the first time gives a full characterization of stepwise uniformly deterministic computation in the one-way model. We present several examples to show how our result improves over the traditional notion of flow, such as geometries (entanglement graphs with inputs and outputs) that have no flow but do have generalized flow, and we discuss how they lead to an optimal implementation of the unitaries. More importantly, one can also obtain a better quantum computation depth with the generalized flow than with flow. We believe our characterization result is particularly valuable for the study of algorithms and complexity in the one-way model.

  17. A General Attribute and Rule Based Role-Based Access Control Model

    Institute of Scientific and Technical Information of China (English)

    2007-01-01

    Growing numbers of users, together with the many access control policies that involve different resource attributes in service-oriented environments, create various problems in protecting resources. This paper analyzes the relationships between resource attributes and user attributes across policies, and proposes a general attribute and rule based role-based access control (GAR-RBAC) model to meet these security needs. The model can dynamically assign users to roles via rules to accommodate growing numbers of users. These rules use attribute expressions and permissions as part of the authorization constraints, and are defined by analyzing the relations of resource attributes to user attributes in the many access policies defined by the enterprise. The model is a general access control model that can support many access control policies and can be applied more widely to services. The paper also describes how to use the GAR-RBAC model in Web service environments.
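
    A minimal sketch of the idea of assigning users to roles via rules over attributes; the rule format, role names and attributes below are invented for illustration and are not the GAR-RBAC specification:

    ```python
    # Each rule pairs a role with a predicate over user attributes.
    def assign_roles(user_attrs, rules):
        """Return the set of roles whose rule predicate accepts the user."""
        return {role for role, predicate in rules if predicate(user_attrs)}

    rules = [
        ("auditor",  lambda u: u.get("department") == "finance"),
        ("engineer", lambda u: u.get("department") == "r&d"),
        ("manager",  lambda u: u.get("level", 0) >= 5),
    ]

    alice = {"department": "finance", "level": 6}
    roles = assign_roles(alice, rules)   # roles derived dynamically from attributes
    ```

    Because roles are computed from attributes at request time rather than assigned statically, adding a new user requires no administrative role assignment, which is the scaling property the model targets.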

  18. Programmatic Environmental Assessment for Base General Plan Development, Schriever Air Force Base, Colorado

    Science.gov (United States)

    2012-06-01

    and is dominated by blue grama (Bouteloua gracilis), buffalo grass (Buchloe dactyloides), three-awned grass (Aristida purpurea), dropseed (Sporobolus...General Plan are to achieve optimal land use planning, protect the natural and human environment, and plan for future mission growth. The Proposed Action...future mission growth, and to improve environmental quality, recreation opportunities, and the safety and medical functions on Base. According to space

  19. General enumeration of RNA secondary structures based on new ...

    African Journals Online (AJOL)

    Crick base pairs between AU and GC. Based on the new representation, this paper also computes the number of various types of constrained secondary structures taking the minimum stack length 1 and minimum size m for each bonding loop as ...

  20. KNOWLEDGE SOCIETY, GENERAL FRAMEWORK FOR KNOWLEDGE BASED ECONOMY

    Directory of Open Access Journals (Sweden)

    Dragos CRISTEA

    2011-03-01

    Full Text Available This paper tries to present the existing relation between the knowledge society and the knowledge-based economy. We identify the main pillars of the knowledge society and present their importance for the development of knowledge societies. Further, we present two perspectives on knowledge societies, namely the science and learning perspectives, which directly affect knowledge-based economies. Finally, we conclude by identifying some important questions that must be answered regarding this new social paradigm.

  1. The scenario-based generalization of radiation therapy margins

    International Nuclear Information System (INIS)

    Fredriksson, Albin; Bokrantz, Rasmus

    2016-01-01

    We give a scenario-based treatment plan optimization formulation that is equivalent to planning with geometric margins if the scenario doses are calculated using the static dose cloud approximation. If the scenario doses are instead calculated more accurately, then our formulation provides a novel robust planning method that overcomes many of the difficulties associated with previous scenario-based robust planning methods. In particular, our method protects only against uncertainties that can occur in practice, it gives a sharp dose fall-off outside high dose regions, and it avoids underdosage of the target in ‘easy’ scenarios. The method shares the benefits of the previous scenario-based robust planning methods over geometric margins for applications where the static dose cloud approximation is inaccurate, such as irradiation with few fields and irradiation with ion beams. These properties are demonstrated on a suite of phantom cases planned for treatment with scanned proton beams subject to systematic setup uncertainty. (paper)

  2. Fundamentals of sketch-based passwords a general framework

    CERN Document Server

    Riggan, Benjamin S; Wang, Cliff

    2015-01-01

    This SpringerBrief explores graphical password systems and examines novel drawing-based methods in terms of security, usability, and human computer-interactions. It provides a systematic approach for recognizing, comparing, and matching sketch-based passwords in the context of modern computing systems. The book offers both a security and usability analysis of the accumulative framework used for incorporating handwriting biometrics and a human computer-interaction performance analysis. The chapters offer new perspectives and experimental results regarding model uniqueness, recognition tolerance

  3. A sequential hypothesis test based on a generalized Azuma inequality

    NARCIS (Netherlands)

    Reijsbergen, D.P.; Scheinhardt, Willem R.W.; de Boer, Pieter-Tjerk

    We present a new power-one sequential hypothesis test based on a bound for the probability that a bounded zero-mean martingale ever crosses a curve of the form $a(n+k)^b$. The proof of the bound is of independent interest.
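
    The stopping rule can be sketched as follows: accumulate the (bounded, zero-mean under the null) increments and stop once the running sum crosses the curve $a(n+k)^b$. The constants below are illustrative placeholders, not the calibrated values derived from the paper's bound:

    ```python
    def sequential_test(observations, a=3.0, k=10, b=0.6):
        """Power-one sequential test sketch: reject the zero-mean hypothesis
        once the partial sum crosses the boundary a*(n+k)**b; otherwise keep
        sampling.  Returns the decision and the number of samples used."""
        s = 0.0
        for n, x in enumerate(observations, start=1):
            s += x
            if abs(s) > a * (n + k) ** b:
                return ("reject", n)     # the martingale crossed the curve
        return ("continue", len(observations))

    # A drifting sequence (mean 0.5) eventually crosses the sublinear
    # boundary, while a zero sequence never does.
    drift = sequential_test([0.5] * 500)
    null = sequential_test([0.0] * 100)
    ```

    The key property motivating the $a(n+k)^b$ shape (with $b < 1$) is that a true zero-mean martingale crosses it with probability bounded by the paper's Azuma-type inequality, while any nonzero drift grows linearly and crosses it eventually, giving power one.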

  4. Population-based prevention of influenza in Dutch general practice

    NARCIS (Netherlands)

    Hak, E; Hermens, R P; van Essen, G A; Kuyvenhoven, M M; de Melker, R A

    BACKGROUND: Although the effectiveness of influenza vaccination in high-risk groups has been proven, vaccine coverage continues to be less than 50% in The Netherlands. To improve vaccination rates, data on the organizational factors, which should be targeted in population-based prevention of

  5. General enumeration of RNA secondary structures based on new ...

    African Journals Online (AJOL)

    coding, transferring and retrieving genetic information, and in directing cell metabolism. The nucleic acid includes DNA and RNA molecule. RNA molecule is a single-stranded nucleic acid of four different kinds of nucleotides. The four nucleotides only differ by one part, called bases. Hence, one usually identifies nucleotides.

  6. Defining Formats and Corpus- based Examples in the General ...

    African Journals Online (AJOL)

    Institute, University of Zimbabwe, Harare, Zimbabwe (langa@arts.uz.ac.zw). Abstract: In this article the writer ... sentative" in terms of size in order to be appropriately used as basis for such corpus-based dictionaries, the ISN editors ... (e) the format should suggest a preference rather than a restriction. For COBUILD, a good ...

  7. MANDIBULAR ASYMMETRY CHARACTERIZATION USING GENERALIZED TENSOR-BASED MORPHOMETRY.

    Science.gov (United States)

    Paniagua, Beatriz; Alhadidi, Abeer; Cevidanes, Lucia; Styner, Martin; Oguz, Ipek

    2011-12-31

    Quantitative assessment of facial asymmetry is crucial for successful planning of corrective surgery. We propose a tensor-based morphometry (TBM) framework to locate and quantify asymmetry using 3D CBCT images. To this end, we compute a rigid transformation between the mandible segmentation and its mirror image, which yields global rotation and translation with respect to the cranial base to guide the surgery's first stage. Next, we nonrigidly register the rigidly aligned images and use TBM methods to locally analyze the deformation field. This yields data on the location, amount and direction of "growth" (or "shrinkage") between the left and right sides. We visualize this data in a volumetric manner and via scalar and vector maps on the mandibular surface to provide the surgeon with optimal understanding of the patient's anatomy. We illustrate the feasibility and strength of our technique on 3 representative patients with a wide range of facial asymmetries.
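
    The core TBM quantity described here, local "growth" or "shrinkage" read off the nonrigid deformation field, is typically the (log) Jacobian determinant of the mapping. A minimal 2-D sketch, with the displacement-field layout `disp[y, x, component]` as our own assumption:

    ```python
    import numpy as np

    def log_jacobian_determinant(disp):
        """Voxel-wise log |J| of a 2-D displacement field disp[y, x, 2].
        The mapping is x -> x + u(x), so J = I + grad(u); positive log |J|
        indicates local expansion, negative log |J| local shrinkage."""
        dy_dy, dy_dx = np.gradient(disp[..., 0])   # derivatives of u_y
        dx_dy, dx_dx = np.gradient(disp[..., 1])   # derivatives of u_x
        det = (1.0 + dy_dy) * (1.0 + dx_dx) - dy_dx * dx_dy
        return np.log(det)

    # Zero displacement: |J| = 1 everywhere, so log |J| = 0 (perfect symmetry
    # between the mandible and its mirror image).
    field = np.zeros((8, 8, 2))
    logjac = log_jacobian_determinant(field)
    ```

    In the pipeline sketched by the abstract, this map would be computed over the mandible volume and projected onto the surface to give the scalar asymmetry maps shown to the surgeon.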

  8. General-base catalysed hydrolysis and nucleophilic substitution of activated amides in aqueous solutions

    NARCIS (Netherlands)

    Buurma, NJ; Blandamer, MJ; Engberts, JBFN; Buurma, Niklaas J.

    The reactivity of 1-benzoyl-3-phenyl-1,2,4-triazole (1a) was studied in the presence of a range of weak bases in aqueous solution. A change in mechanism is observed from general-base catalysed hydrolysis to nucleophilic substitution and general-base catalysed nucleophilic substitution. A slight

  9. Clonal Selection Based Artificial Immune System for Generalized Pattern Recognition

    Science.gov (United States)

    Huntsberger, Terry

    2011-01-01

    The last two decades have seen a rapid increase in the application of AIS (Artificial Immune Systems) modeled after the human immune system to a wide range of areas including network intrusion detection, job shop scheduling, classification, pattern recognition, and robot control. JPL (Jet Propulsion Laboratory) has developed an integrated pattern recognition/classification system called AISLE (Artificial Immune System for Learning and Exploration) based on biologically inspired models of B-cell dynamics in the immune system. When used for unsupervised or supervised classification, the method scales linearly with the number of dimensions, has performance that is relatively independent of the total size of the dataset, and has been shown to perform as well as traditional clustering methods. When used for pattern recognition, the method efficiently isolates the appropriate matches in the data set. The paper presents the underlying structure of AISLE and the results from a number of experimental studies.

  10. General-purpose microprocessor-based control chassis

    International Nuclear Information System (INIS)

    Halbig, J.K.; Klosterbuer, S.F.; Swenson, D.A.

    1979-12-01

    The objective of the Pion Generation for Medical Irradiations (PIGMI) program at the Los Alamos Scientific Laboratory is to develop the technology to build smaller, less expensive, and more reliable proton linear accelerators for medical applications. For this program, a powerful, simple, inexpensive, and reliable control and data acquisition system was developed. The system has a NOVA 3D computer with a real time disk-operating system (RDOS) that communicates with distributed microprocessor-based controllers which directly control data input/output chassis. At the heart of the controller is a microprocessor crate which was conceived at the Fermi National Accelerator Laboratory. This idea was applied to the design of the hardware and software of the controller

  11. On the Use of Generalized Volume Scattering Models for the Improvement of General Polarimetric Model-Based Decomposition

    Directory of Open Access Journals (Sweden)

    Qinghua Xie

    2017-01-01

    Full Text Available Recently, a general polarimetric model-based decomposition framework was proposed by Chen et al., which addresses several well-known limitations in previous decomposition methods and implements a simultaneous full-parameter inversion by using complete polarimetric information. However, it employs only four typical models to characterize the volume scattering component, which limits the parameter inversion performance. To overcome this issue, this paper presents two general polarimetric model-based decomposition methods by incorporating the generalized volume scattering model (GVSM) or the simplified adaptive volume scattering model (SAVSM), proposed by Antropov et al. and Huang et al., respectively, into the general decomposition framework proposed by Chen et al. By doing so, the final volume coherency matrix structure is selected from a wide range of volume scattering models within a continuous interval according to the data itself, without adding unknowns. Moreover, the new approaches rely on one nonlinear optimization stage instead of four as in the previous method proposed by Chen et al. In addition, the parameter inversion procedure adopts the modified algorithm proposed by Xie et al., which leads to higher accuracy and more physically reliable output parameters. A number of Monte Carlo simulations of polarimetric synthetic aperture radar (PolSAR) data are carried out and show that the proposed method with GVSM yields an overall improvement in the final accuracy of estimated parameters and outperforms both the version using SAVSM and the original approach. In addition, C-band Radarsat-2 and L-band AIRSAR fully polarimetric images over the San Francisco region are also used for testing purposes. A detailed comparison and analysis of decomposition results over different land-cover types are conducted. According to this study, the use of general decomposition models leads to a more accurate quantitative retrieval of target parameters. However, there

  12. An FPGA- Based General-Purpose Data Acquisition Controller

    Science.gov (United States)

    Robson, C. C. W.; Bousselham, A.; Bohm

    2006-08-01

    System development in advanced FPGAs allows considerable flexibility, both during development and in production use. A mixed firmware/software solution allows the developer to choose what shall be done in firmware or software, and to make that decision late in the process. However, this flexibility comes at the cost of increased complexity. We have designed a modular development framework to help to overcome these issues of increased complexity. This framework comprises a generic controller that can be adapted for different systems by simply changing the software or firmware parts. The controller can use both soft and hard processors, with or without an RTOS, based on the demands of the system to be developed. The resulting system uses the Internet for both control and data acquisition. In our studies we developed the embedded system in a Xilinx Virtex-II Pro FPGA, where we used both PowerPC and MicroBlaze cores, http, Java, and LabView for control and communication, together with the MicroC/OS-II and OSE operating systems

  13. General Dynamic Equivalent Modeling of Microgrid Based on Physical Background

    Directory of Open Access Journals (Sweden)

    Changchun Cai

    2015-11-01

    Full Text Available Microgrid is a new power system concept consisting of small-scale distributed energy resources, storage devices and loads. It is necessary to employ a simplified model of the microgrid in the simulation of a distribution network integrating large-scale microgrids. Based on the detailed models of the components, an equivalent model of the microgrid is proposed in this paper. The equivalent model comprises two parts, namely an equivalent machine component and an equivalent static component. The equivalent machine component describes the dynamics of the synchronous generator, asynchronous wind turbine and induction motor, while the equivalent static component describes the dynamics of the photovoltaics, storage and static load. The trajectory sensitivities of the equivalent model parameters with respect to the output variables are analyzed. The key parameters that play important roles in the dynamics of the output variables of the equivalent model are identified and included in further parameter estimation. Particle Swarm Optimization (PSO) is improved for the parameter estimation of the equivalent model. Simulations are performed in different microgrid operation conditions to evaluate the effectiveness of the equivalent model of the microgrid.
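
    The parameter-estimation step can be illustrated with a bare-bones PSO. The inertia and acceleration constants below are generic textbook values and the quadratic "response" is invented, so this is a sketch of the idea rather than the paper's improved PSO:

    ```python
    import random

    def pso(cost, dim, n_particles=20, iters=200, seed=1):
        """Minimal particle swarm optimizer: particles track their personal
        best and the swarm best, with inertia w and accelerations c1, c2."""
        rng = random.Random(seed)
        w, c1, c2 = 0.7, 1.5, 1.5
        xs = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n_particles)]
        vs = [[0.0] * dim for _ in range(n_particles)]
        pbest = [x[:] for x in xs]
        pcost = [cost(x) for x in xs]
        gbest = min(zip(pcost, pbest))[1][:]
        for _ in range(iters):
            for i, x in enumerate(xs):
                for d in range(dim):
                    vs[i][d] = (w * vs[i][d]
                                + c1 * rng.random() * (pbest[i][d] - x[d])
                                + c2 * rng.random() * (gbest[d] - x[d]))
                    x[d] += vs[i][d]
                c = cost(x)
                if c < pcost[i]:
                    pcost[i], pbest[i] = c, x[:]
                    if c < cost(gbest):
                        gbest = x[:]
        return gbest

    # Recover the two parameters of a simple quadratic "model response",
    # standing in for fitting equivalent-model parameters to trajectories.
    best = pso(lambda p: (p[0] - 2.0) ** 2 + (p[1] + 1.0) ** 2, dim=2)
    ```

    In the paper's setting the cost would be the mismatch between the detailed microgrid trajectories and the equivalent model's output, restricted to the parameters flagged by the trajectory-sensitivity analysis.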

  14. 49 CFR Appendix E to Part 238 - General Principles of Reliability-Based Maintenance Programs

    Science.gov (United States)

    2010-10-01

    ... STANDARDS Pt. 238, App. E Appendix E to Part 238—General Principles of Reliability-Based Maintenance... 49 Transportation 4 2010-10-01 2010-10-01 false General Principles of Reliability-Based... the design level of safety and reliability of the equipment; (2) To restore safety and reliability to...

  15. Speech Intelligibility Potential of General and Specialized Deep Neural Network Based Speech Enhancement Systems

    DEFF Research Database (Denmark)

    Kolbæk, Morten; Tan, Zheng-Hua; Jensen, Jesper

    2017-01-01

    In this paper, we study aspects of single microphone speech enhancement (SE) based on deep neural networks (DNNs). Specifically, we explore the generalizability capabilities of state-of-the-art DNN-based SE systems with respect to the background noise type, the gender of the target speaker...... general. Finally, we compare how a DNN-based SE system trained to be noise type general, speaker general, and SNR general performs relative to a state-of-the-art short-time spectral amplitude minimum mean square error (STSA-MMSE) based SE algorithm. We show that DNN-based SE systems, when trained...... a state-of-the-art STSA-MMSE based SE method, when tested using a range of unseen speakers and noise types. Finally, a listening test using several DNN-based SE systems tested in unseen speaker conditions show that these systems can improve SI for some SNR and noise type configurations but degrade SI...

  16. Generalized frameworks for first-order evolution inclusions based on Yosida approximations

    Directory of Open Access Journals (Sweden)

    Ram U. Verma

    2011-04-01

    Full Text Available First, general frameworks for first-order evolution inclusions are developed based on the A-maximal relaxed monotonicity, and then, using the Yosida approximation, the solvability of a general class of first-order nonlinear evolution inclusions is investigated. The role of the A-maximal relaxed monotonicity is significant in the sense that it not only empowers the first-order nonlinear evolution inclusions but also generalizes the existing Yosida approximations and their characterizations in the current literature.
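
    For context, the Yosida approximation invoked here is, in the classical maximal monotone setting (the paper works with the more general A-maximal relaxed monotone operators), the construction: for a maximal monotone operator $A$ on a Hilbert space and $\lambda > 0$,

    ```latex
    J_\lambda = (I + \lambda A)^{-1}, \qquad
    A_\lambda = \frac{1}{\lambda}\,\bigl(I - J_\lambda\bigr),
    ```

    where the resolvent $J_\lambda$ is single-valued and nonexpansive, $A_\lambda$ is single-valued and Lipschitz with constant $1/\lambda$, and $A_\lambda x \in A(J_\lambda x)$; replacing the multivalued inclusion by the Lipschitz operator $A_\lambda$ is what makes the solvability argument tractable.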

  17. Updating and using the international non-neutron experimental nuclear data base in ''Generalized EXFOR'' format

    International Nuclear Information System (INIS)

    Zhuravleva, G.M.; Chukreev, F.E.

    1985-10-01

    A software system for the automatic preparation of non-formalized textual information for the international exchange of nuclear data in the ''Generalized Exchange Format (EXFOR)'' is described. The ''Generalized EXFOR'' format is briefly outlined and data are given on the size of the international non-neutron experimental data base in this format. (author)

  18. Population-Based Pediatric Reference Intervals in General Clinical Chemistry: A Swedish Survey.

    Science.gov (United States)

    Ridefelt, Peter

    2015-01-01

    Very few high quality studies on pediatric reference intervals for general clinical chemistry and hematology analytes have been performed. Three recent prospective community-based projects utilising blood samples from healthy children in Sweden, Denmark and Canada have substantially improved the situation. The Swedish survey included 701 healthy children. Reference intervals for general clinical chemistry and hematology were defined.

  19. Quality aspects of Dutch general practice based data : A conceptual approach

    NARCIS (Netherlands)

    van den Dungen, C.; Hoeymans, N.; Schellevis, F.G.; van Oers, J.A.M.

    2013-01-01

    Background. General practice–based data, collected within general practice registration networks (GPRNs), are widely used in research. The quality of the data is important but the recording criteria about what type of information is collected and how this information should be recorded differ

  20. Effects of conventional and problem-based learning on clinical and general competencies and career development

    NARCIS (Netherlands)

    Cohen-Schotanus, Janke; Muijtjens, Arno M. M.; Schonrock-Adema, Johanna; Geertsma, Jelle; van der Vleuten, Cees P. M.

    OBJECTIVE: To test hypotheses regarding the longitudinal effects of problem-based learning (PBL) and conventional learning relating to students' appreciation of the curriculum, self-assessment of general competencies, summative assessment of clinical competence and indicators of career development.

  1. Uncertainty Analysis of Few Group Cross Sections Based on Generalized Perturbation Theory

    International Nuclear Information System (INIS)

    Han, Tae Young; Lee, Hyun Chul; Noh, Jae Man

    2014-01-01

    In this paper, the methodology of the sensitivity and uncertainty analysis code based on GPT is described and preliminary verification calculations on the PMR200 pin cell problem are carried out. The results are in good agreement with those of TSUNAMI. From this study, it is expected that the MUSAD code based on GPT can produce the uncertainty of the homogenized few group microscopic cross sections for a core simulator. For sensitivity and uncertainty analyses of general core responses, a two-step method is available: it utilizes the generalized perturbation theory (GPT) for homogenized few group cross sections in the first step and a stochastic sampling method for general core responses in the second step. The uncertainty analysis procedure based on GPT in the first step needs the generalized adjoint solution from a cell or lattice code. For this, the generalized adjoint solver was integrated into DeCART in our previous work. In this paper, the MUSAD (Modules of Uncertainty and Sensitivity Analysis for DeCART) code based on classical perturbation theory is extended to perform the sensitivity and uncertainty analysis for few group cross sections based on GPT. First, the uncertainty analysis method based on GPT is described and, in the next section, the preliminary results of the verification calculation on a VHTR pin cell problem are compared with the results by TSUNAMI of SCALE 6.1.
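
    For context, the GPT-based first step propagates cross-section covariances through the computed sensitivities by the standard first-order "sandwich rule" (the same quantity TSUNAMI reports); this is the textbook form, not necessarily MUSAD's exact implementation:

    ```latex
    \frac{\operatorname{var}(R)}{R^2} \;=\; S\, C_{\alpha\alpha}\, S^{T},
    \qquad
    S_i \;=\; \frac{\alpha_i}{R}\,\frac{\partial R}{\partial \alpha_i},
    ```

    where $R$ is the response (here a homogenized few-group cross section), $S$ is the vector of relative sensitivities obtained from the generalized adjoint solution, and $C_{\alpha\alpha}$ is the relative covariance matrix of the multigroup cross sections $\alpha_i$.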

  2. Development and Assessment of Green, Research-Based Instructional Materials for the General Chemistry Laboratory

    Science.gov (United States)

    Cacciatore, Kristen L.

    2010-01-01

    This research entails integrating two novel approaches for enriching student learning in chemistry into the context of the general chemistry laboratory. The first is a pedagogical approach based on research in cognitive science and the second is the green chemistry philosophy. Research has shown that inquiry-based approaches are effective in…

  3. A multivariate family-based association test using generalized estimating equations : FBAT-GEE

    NARCIS (Netherlands)

    Lange, C; Silverman, SK; Xu, [No Value; Weiss, ST; Laird, NM

    In this paper we propose a multivariate extension of family-based association tests based on generalized estimating equations. The test can be applied to multiple phenotypes and to phenotypic data obtained in longitudinal studies without making any distributional assumptions for the phenotypic

  4. A Decoupling Control Method for Shunt Hybrid Active Power Filter Based on Generalized Inverse System

    Directory of Open Access Journals (Sweden)

    Xin Li

    2017-01-01

    Full Text Available In this paper, a novel decoupling control method based on the generalized inverse system is presented to solve the problem of the SHAPF (Shunt Hybrid Active Power Filter) possessing the characteristics of 2-input-2-output nonlinearity and strong coupling. Based on an analysis of the operation principle, the mathematical model of the SHAPF is first built and verified to be invertible using the interactor algorithm; then the generalized inverse system of the SHAPF is obtained and connected in series with the original system so that the composite system is decoupled under generalized inverse system theory. A PI additional controller is finally designed to control the decoupled first-order pseudolinear system, making it possible to adjust the performance of the subsystem. Simulation results in MATLAB show that the presented generalized inverse system strategy can realise the dynamic decoupling of the SHAPF and that the control system has fine dynamic and static performance.

  5. Computer-based, Jeopardy™-like game in general chemistry for engineering majors

    Science.gov (United States)

    Ling, S. S.; Saffre, F.; Kadadha, M.; Gater, D. L.; Isakovic, A. F.

    2013-03-01

    We report on the design of a Jeopardy™-like computer game for enhancement of learning of general chemistry for engineering majors. While we examine several parameters of student achievement and attitude, our primary concern is addressing the motivation of students, which tends to be low in traditionally run chemistry lectures. The effect of game-playing is tested by comparing a paper-based game quiz, which constitutes the control group, with a computer-based game quiz, constituting the treatment group. The computer-based game quizzes are Java™-based applications that students run once a week in the second part of the last lecture of the week. Overall effectiveness of the semester-long program is measured through pretest-posttest conceptual testing of general chemistry. The objective of this research is to determine to what extent this "gamification" of the course delivery and course evaluation processes may be beneficial to undergraduates' learning of science in general, and chemistry in particular. We present data addressing gender-specific differences in performance, as well as background (pre-college) levels of general science and chemistry preparation. We outline a plan to extend this approach to general physics courses and to modern-science-driven electives, and we offer live, in-lecture examples of our computer gaming experience. We acknowledge support from Khalifa University, Abu Dhabi

  6. Allocation base of general production costs as optimization of prime costs

    Directory of Open Access Journals (Sweden)

    Levytska I.O.

    2017-03-01

    Full Text Available Qualified management aimed at optimizing financial results is a key factor in today's economy. Effective management decisions depend on the necessary information about the costs of the production process in all its aspects: their structure, types, and the accounting policies for reflecting costs. General production costs, the so-called indirect costs that are not directly related to the production process but provide for its functioning by supporting structural divisions and creating the necessary conditions of production, play a significant role in calculating the prime costs of goods (works, services). An accurate estimate of the prime costs of goods (works, services) therefore requires that the value of indirect costs (in other words, general production costs) be determined and that the base for their allocation be properly chosen. The choice of the allocation base of general production costs is a significant decision, depending on the nature of the business, which must guarantee a fair distribution with regard to the largest share of direct expenses in the total structure of production costs. The study clarifies the essence of general production costs based on an analysis of the key definitions of leading Ukrainian economists. The optimal approach is to allocate general production costs as direct production costs within each subsidiary division (department) separately, without selecting a single base for their total amount.
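
    The allocation step the abstract discusses can be sketched numerically; the overhead figure and the choice of direct labour cost as the allocation base are invented for illustration:

    ```python
    def allocate_overhead(overhead, base_by_dept):
        """Distribute general production (overhead) costs to departments in
        proportion to a chosen allocation base, e.g. direct labour cost."""
        total_base = sum(base_by_dept.values())
        rate = overhead / total_base            # overhead per unit of base
        return {dept: rate * base for dept, base in base_by_dept.items()}

    direct_labour = {"assembly": 60_000.0, "finishing": 40_000.0}
    allocated = allocate_overhead(25_000.0, direct_labour)
    # assembly carries 60% of the overhead, finishing 40%
    ```

    The paper's point is that the result, and hence the computed prime cost per product, changes with the chosen base, which is why the base should reflect each division's actual cost structure rather than a single firm-wide denominator.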

  7. Generalized Empirical Likelihood-Based Focused Information Criterion and Model Averaging

    Directory of Open Access Journals (Sweden)

    Naoya Sueishi

    2013-07-01

    Full Text Available This paper develops model selection and averaging methods for moment restriction models. We first propose a focused information criterion based on the generalized empirical likelihood estimator. We address the issue of selecting an optimal model, rather than a correct model, for estimating a specific parameter of interest. We then investigate a generalized empirical likelihood-based model averaging estimator that minimizes the asymptotic mean squared error. A simulation study suggests that our averaging estimator can be a useful alternative to existing post-selection estimators.

  8. A recurrent neural network based on projection operator for extended general variational inequalities.

    Science.gov (United States)

    Liu, Qingshan; Cao, Jinde

    2010-06-01

    Based on the projection operator, a recurrent neural network is proposed for solving extended general variational inequalities (EGVIs). Sufficient conditions are provided to ensure the global convergence of the proposed neural network based on Lyapunov methods. Compared with the existing neural networks for variational inequalities, the proposed neural network is a modified version of the general projection neural network existing in the literature and capable of solving the EGVI problems. In addition, simulation results on numerical examples show the effectiveness and performance of the proposed neural network.
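
    The projection dynamics underlying such networks can be sketched in one dimension. This is a plain projection neural network for a variational inequality on a box, not the paper's EGVI dynamics, and the step size and iteration count are illustrative:

    ```python
    def solve_vi_by_projection(F, lo, hi, x0=0.0, step=0.05, iters=2000):
        """Euler discretization of the projection-type recurrent network
        dx/dt = P_Omega(x - F(x)) - x for VI(F, Omega) on Omega = [lo, hi].
        An equilibrium (dx/dt = 0) is exactly a solution of the VI."""
        proj = lambda y: min(max(y, lo), hi)    # projection onto the box
        x = x0
        for _ in range(iters):
            x += step * (proj(x - F(x)) - x)
        return x

    # VI(F, [0, 1]) with F(x) = x - 2: F < 0 on the whole box, so the
    # solution is the upper boundary point x* = 1.
    x_star = solve_vi_by_projection(lambda x: x - 2.0, 0.0, 1.0)
    ```

    The Lyapunov-based convergence conditions of the abstract play the role that the monotonicity of $F$ plays in this toy example: they guarantee the trajectory settles at an equilibrium, which is a solution of the (extended general) variational inequality.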

  9. Adaptive fuzzy bilinear observer based synchronization design for generalized Lorenz system

    International Nuclear Information System (INIS)

    Baek, Jaeho; Lee, Heejin; Kim, Seungwoo; Park, Mignon

    2009-01-01

    This Letter proposes an adaptive fuzzy bilinear observer (FBO) based synchronization design for the generalized Lorenz system (GLS). The GLS can be described by a TS fuzzy bilinear generalized Lorenz model (FBGLM) with its states unmeasurable and its parameters unknown. We design an adaptive FBO based on the TS FBGLM for synchronization. Lyapunov theory is employed to guarantee the stability of the error dynamic system via linear matrix inequalities (LMIs) and to derive the adaptive laws for estimating the unknown parameters. A numerical example is given to demonstrate the validity of our proposed adaptive FBO approach for synchronization.

  10. Environmental Assessment: General Plan-Based Environmental Impact Analysis Process, Laughlin Air Force Base

    Science.gov (United States)

    2007-05-01

    BASED ENVIRONMENTAL IMPACT ANALYSIS PROCESS LAUGHLIN AIR FORCE BASE, TEXAS AGENCY: 47th Flying Training Wing (FTW), Laughlin Air Force Base (AFB), Texas...m3 micrograms per cubic meter US United States USACE United States Army Corps of Engineers USC United States Code USCB United States Census Bureau...effects and annoyance in that very few flight operations and ground engine runs occur between 2200 hours and 0700 hours. BMPs include restricting the

  11. Quantum image encryption based on generalized affine transform and logistic map

    Science.gov (United States)

    Liang, Hao-Ran; Tao, Xiang-Yang; Zhou, Nan-Run

    2016-07-01

    Quantum circuits of the generalized affine transform are devised based on the novel enhanced quantum representation of digital images. A novel quantum image encryption algorithm combining the generalized affine transform with logistic map is suggested. The gray-level information of the quantum image is encrypted by the XOR operation with a key generator controlled by the logistic map, while the position information of the quantum image is encoded by the generalized affine transform. The encryption keys include the independent control parameters used in the generalized affine transform and the logistic map. Thus, the key space is large enough to frustrate the possible brute-force attack. Numerical simulations and analyses indicate that the proposed algorithm is realizable, robust and has a better performance than its classical counterpart in terms of computational complexity.
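
    The gray-level stage of the scheme (XOR with a logistic-map keystream) has a direct classical analogue, sketched below. The map parameters acting as the key are illustrative, and the position-scrambling affine transform on pixel coordinates is only noted, not implemented:

    ```python
    def logistic_keystream(n, x0=0.3, r=3.99):
        """Byte keystream from the logistic map x <- r*x*(1-x); the seed x0
        and parameter r play the role of the secret key."""
        x, ks = x0, []
        for _ in range(n):
            x = r * x * (1.0 - x)
            ks.append(int(x * 256) % 256)
        return ks

    def xor_pixels(pixels, key):
        return [p ^ k for p, k in zip(pixels, key)]

    image = [12, 200, 47, 255, 0, 133]       # toy grey-level "image"
    key = logistic_keystream(len(image))
    cipher = xor_pixels(image, key)
    recovered = xor_pixels(cipher, key)       # XOR is its own inverse
    ```

    In the quantum scheme, this XOR acts on the gray-level qubits of the image representation, while a generalized affine transform $x' = (ax + b) \bmod N$ on the position register scrambles pixel locations; together the map parameters and the affine coefficients form the key space.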

  12. A Rule-Based Local Search Algorithm for General Shift Design Problems in Airport Ground Handling

    DEFF Research Database (Denmark)

    Clausen, Tommy

    We consider a generalized version of the shift design problem where shifts are created to cover a multiskilled demand and fit the parameters of the workforce. We present a collection of constraints and objectives for the generalized shift design problem. A local search solution framework with multiple neighborhoods and a loosely coupled rule engine based on simulated annealing is presented. Computational experiments on real-life data from various airport ground handling organizations show the performance and flexibility of the proposed algorithm.
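
    As a loose illustration of the simulated-annealing core of such a framework, the sketch below solves a toy single-skill shift design instance. The demand profile, shift set, cost function, and cooling schedule are all invented for illustration; the actual algorithm uses multiple neighborhoods and a rule engine:

```python
import math
import random

# Toy instance: hourly demand over one day, and candidate shifts (start, length).
demand = [0,0,0,0,0,0,2,3,4,4,3,3,4,4,3,2,2,3,3,2,1,1,0,0]
shifts = [(s, l) for s in range(24) for l in (6, 8)]

def cost(counts):
    """Under/over-coverage penalty when counts[i] copies of shifts[i] are used."""
    cover = [0] * 24
    for (s, l), c in zip(shifts, counts):
        for h in range(s, min(s + l, 24)):
            cover[h] += c
    return sum(abs(cover[h] - demand[h]) for h in range(24))

def anneal(iters=20000, t0=5.0, alpha=0.9995, seed=1):
    rng = random.Random(seed)
    counts = [0] * len(shifts)
    cur_cost = best_cost = cost(counts)
    t = t0
    for _ in range(iters):
        i = rng.randrange(len(shifts))          # neighborhood: +/- one shift copy
        delta = rng.choice((-1, 1))
        if counts[i] + delta < 0:
            continue
        counts[i] += delta
        new_cost = cost(counts)
        if new_cost <= cur_cost or rng.random() < math.exp((cur_cost - new_cost) / t):
            cur_cost = new_cost
            best_cost = min(best_cost, cur_cost)
        else:
            counts[i] -= delta                  # reject the worsening move
        t *= alpha                              # geometric cooling
    return best_cost

c = anneal()
print(c)   # residual coverage error; far below the empty-roster cost of 44
```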

  13. Generalized structured component analysis a component-based approach to structural equation modeling

    CERN Document Server

    Hwang, Heungsun

    2014-01-01

    Winner of the 2015 Sugiyama Meiko Award (Publication Award) of the Behaviormetric Society of Japan Developed by the authors, generalized structured component analysis is an alternative to two longstanding approaches to structural equation modeling: covariance structure analysis and partial least squares path modeling. Generalized structured component analysis allows researchers to evaluate the adequacy of a model as a whole, compare a model to alternative specifications, and conduct complex analyses in a straightforward manner. Generalized Structured Component Analysis: A Component-Based Approach to Structural Equation Modeling provides a detailed account of this novel statistical methodology and its various extensions. The authors present the theoretical underpinnings of generalized structured component analysis and demonstrate how it can be applied to various empirical examples. The book enables quantitative methodologists, applied researchers, and practitioners to grasp the basic concepts behind this new a...

  14. Generalized Yosida Approximations Based on Relatively A-Maximal m-Relaxed Monotonicity Frameworks

    Directory of Open Access Journals (Sweden)

    Heng-you Lan

    2013-01-01

    Full Text Available We introduce and study a new notion of relatively A-maximal m-relaxed monotonicity framework and discuss some properties of a new class of generalized relatively resolvent operators associated with the relatively A-maximal m-relaxed monotone operator and the new generalized Yosida approximations based on the relatively A-maximal m-relaxed monotonicity framework. Furthermore, we give some remarks to show that the theory of the new generalized relatively resolvent operator and Yosida approximations associated with relatively A-maximal m-relaxed monotone operators generalizes most of the existing notions on (relatively) maximal monotone mappings in Hilbert as well as Banach spaces and can be applied to study variational inclusion problems and first-order evolution equations as well as evolution inclusions.

  15. A comparative study on lecture based versus case based education on teaching general surgery to medical students

    Directory of Open Access Journals (Sweden)

    M. Moazeni Bistegani

    2013-06-01

    Full Text Available Introduction: Various methods of teaching have different learning outcomes, and combining teaching and training methods may boost learning. This study compared lecture-based and case-based teaching of general surgery to medical students. Methods: This quasi-experimental study was performed on two consecutive groups of 33 and 36 students taking the general surgery course. The two teaching styles were lecture-based and real-case-based methods. The final exam included twenty multiple-choice questions. The mean scores of each group were collected and analyzed with descriptive tests, Fisher's test and the t-test. Results: The mean final mark was 16.8/20 ± 1.8 for students who received real-case-based education and 12.7 ± 1.7 for the lecture group, a significant difference (P < 0.0001). In both groups, there were significant differences in the mean scores of questions with taxonomy two and three, but not in the questions with taxonomy one. Students' evaluation score of the teacher increased by 1.7/20 (8.7%) in the case-based group compared to the lecture group. Conclusions: Case-based teaching of general surgery led to better outcomes and more satisfied students. It is recommended that case-based education in surgery be encouraged.

  16. A Memristor-Based Hyperchaotic Complex Lü System and Its Adaptive Complex Generalized Synchronization

    Directory of Open Access Journals (Sweden)

    Shibing Wang

    2016-02-01

    Full Text Available This paper introduces a new memristor-based hyperchaotic complex Lü system (MHCLS) and investigates its adaptive complex generalized synchronization (ACGS). Firstly, the complex system is constructed based on a memristor-based hyperchaotic real Lü system, and its properties are analyzed theoretically. Secondly, its dynamical behaviors, including hyperchaos, chaos, transient phenomena, as well as periodic behaviors, are explored numerically by means of bifurcation diagrams, Lyapunov exponents, phase portraits, and time history diagrams. Thirdly, an adaptive controller and a parameter estimator are proposed to realize complex generalized synchronization and parameter identification of two identical MHCLSs with unknown parameters based on Lyapunov stability theory. Finally, the numerical simulation results of ACGS and its applications to secure communication are presented to verify the feasibility and effectiveness of the proposed method.
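
    The memristive complex Lü system is beyond a short sketch, but the underlying idea, synchronizing two identical chaotic systems with an adaptively tuned gain motivated by Lyapunov arguments, can be illustrated on the classical real Lorenz system. Everything below (the coupling form, the adaptation gain gamma, initial conditions, and the Euler integration) is an assumed toy setup, not the paper's controller:

```python
import numpy as np

def lorenz(s, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """Right-hand side of the classical (real) Lorenz system."""
    x, y, z = s
    return np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])

def adaptive_sync(T=50.0, dt=1e-3, gamma=1.0, seed=0):
    """Drive-response synchronization with an adaptively grown coupling gain k,
    using the textbook Lyapunov-motivated adaptation law k' = gamma * |e|^2."""
    rng = np.random.default_rng(seed)
    drive = np.array([1.0, 1.0, 20.0]) + rng.uniform(-1, 1, 3)
    resp = np.array([-5.0, 5.0, 10.0]) + rng.uniform(-1, 1, 3)
    k = 0.0
    for _ in range(int(T / dt)):
        e = resp - drive                        # synchronization error
        drive = drive + dt * lorenz(drive)      # forward-Euler steps
        resp = resp + dt * (lorenz(resp) - k * e)
        k = k + dt * gamma * (e @ e)            # gain grows until errors vanish
    return np.linalg.norm(resp - drive), k

err, k = adaptive_sync()
print(err)   # final error; expected to be near zero once k is large enough
```

    The gain k stops growing once the error vanishes, so it settles at a finite value sufficient for synchronization without needing prior knowledge of the required coupling strength.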

  17. Hanford general employee training: Computer-based training instructor's manual

    Energy Technology Data Exchange (ETDEWEB)

    1990-10-01

    The Computer-Based Training portion of the Hanford General Employee Training course is designed to be used in a classroom setting with a live instructor. Future references to "this course" refer only to the computer-based portion of the whole. This course covers the basic Safety, Security, and Quality issues that pertain to all employees of Westinghouse Hanford Company. The topics that are covered were taken from the recommendations and requirements for General Employee Training as set forth by the Institute of Nuclear Power Operations (INPO) in INPO 87-004, Guidelines for General Employee Training, applicable US Department of Energy orders, and Westinghouse Hanford Company procedures and policy. Besides presenting fundamental concepts, this course also contains information on resources that are available to assist students. It does this using Interactive Videodisk technology, which combines computer-generated text and graphics with audio and video provided by a videodisk player.

  18. A Bidirectional Generalized Synchronization Theorem-Based Chaotic Pseudo-random Number Generator

    Directory of Open Access Journals (Sweden)

    Han Shuangshuang

    2013-07-01

    Full Text Available Based on a bidirectional generalized synchronization theorem for discrete chaos systems, this paper introduces a new 5-dimensional bidirectional generalized chaos synchronization system (BGCSDS), whose prototype is a novel chaotic system introduced in [12]. Numerical simulation showed that two pairs of variables of the BGCSDS achieve generalized chaos synchronization via a transform H. A chaos-based pseudo-random number generator (CPNG) was designed using the new BGCSDS. The FIPS-140-2 tests issued by the National Institute of Standards and Technology (NIST) were used to verify the randomness of 1000 binary number sequences generated via the CPNG and the RC4 algorithm, respectively. The results showed that all the tested sequences passed the FIPS-140-2 tests. The confidence interval analysis showed that the statistical properties of the randomness of the sequences generated via the CPNG and the RC4 algorithm do not have significant differences.
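
    The FIPS-140-2 battery referenced here comprises four tests (monobit, poker, runs, and long run) applied to a 20,000-bit stream. The simplest of these, the monobit test, can be sketched directly; the thresholds below are the published FIPS-140-2 bounds:

```python
def monobit_pass(bits):
    """FIPS-140-2 monobit test: a 20,000-bit stream passes when the number
    of ones lies strictly between 9725 and 10275."""
    if len(bits) != 20000:
        raise ValueError("the FIPS-140-2 tests operate on 20,000-bit streams")
    ones = sum(bits)
    return 9725 < ones < 10275

# A balanced stream passes; a constant stream fails badly:
print(monobit_pass([0, 1] * 10000))  # True
print(monobit_pass([1] * 20000))     # False
```

    The remaining three tests constrain 4-bit block frequencies (poker) and the distribution and maximum length of runs, so a generator must do more than balance its ones and zeros to pass the full battery.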

  19. General practitioners, complementary therapies and evidence-based medicine: the defence of clinical autonomy.

    Science.gov (United States)

    Adams, J

    2000-12-01

    Amidst the substantial change currently gripping primary health care are two developments central to contemporary debate regarding the very nature, territory and identity of general practice - the integration of complementary and alternative medicine (CAM) and the rise of evidence-based medicine (EBM). This paper reports findings from a study based upon 25 in-depth interviews with general practitioners (GPs) personally practising complementary therapies alongside more conventional medicine to treat their NHS patients. The paper outlines the GPs' perceptions of EBM, its relationship to their personal development of CAM, and their notions of good clinical practice more generally. Analysis of the GPs' accounts demonstrates how CAM can be seen as a useful resource with which some GPs defend their clinical autonomy from what they perceive to be the threat of EBM. Copyright 2000 Harcourt Publishers Ltd.

  20. Effect of electromagnetic radiations from mobile phone base stations on general health and salivary function

    OpenAIRE

    Singh, Kushpal; Nagaraj, Anup; Yousuf, Asif; Ganta, Shravani; Pareek, Sonia; Vishnani, Preeti

    2016-01-01

    Objective: Cell phones use electromagnetic, nonionizing radiations in the microwave range, which some believe may be harmful to human health. The present study aimed to determine the effect of electromagnetic radiations (EMRs) on unstimulated/stimulated salivary flow rate and other health-related problems between the general populations residing in proximity to and far away from mobile phone base stations. Materials and Methods: A total of four mobile base stations were randomly selected from...

  1. Inference for exponentiated general class of distributions based on record values

    Directory of Open Access Journals (Sweden)

    Samah N. Sindi

    2017-09-01

    Full Text Available The main objective of this paper is to suggest and study a new exponentiated general class (EGC) of distributions. Maximum likelihood, Bayesian and empirical Bayesian estimators of the parameter of the EGC of distributions based on lower record values are obtained. Furthermore, Bayesian prediction of future records is considered. Based on lower record values, the exponentiated Weibull distribution, its special cases of distributions and the exponentiated Gompertz distribution are applied to the EGC of distributions.

  2. Generalizing Perspective-based Inspection to handle Object-Oriented Development Artifacts

    OpenAIRE

    Laitenberger, O.; Atkinson, C.

    1998-01-01

    The value of software inspection for uncovering defects early in the development lifecycle has been well documented. Of the various types of inspection methods published to date, experiments have shown perspective-based inspection to be one of the most effective, because of its enhanced coverage of the defect space. However, inspections in general, and perspective-based inspections in particular, have so far been applied predominantly in the context of conventional structured development meth...

  3. A residual life prediction model based on the generalized σ -N curved surface

    OpenAIRE

    Zongwen AN; Xuezong BAI; Jianxiong GAO

    2016-01-01

    In order to investigate the change rule of the residual life of a structure under random repeated load, firstly, starting from the statistical meaning of random repeated load, the joint probability density function of the maximum stress and minimum stress is derived based on the characteristics of order statistics (the maximum order statistic and the minimum order statistic); then, based on the equation of the generalized σ-N curved surface, considering the influence of the number of load cycles on fatigue life, a relation...

  4. General practice-based clinical trials in Germany - a problem analysis

    Directory of Open Access Journals (Sweden)

    Hummers-Pradier Eva

    2012-11-01

    Full Text Available Abstract Background In Germany, clinical trials and comparative effectiveness studies in primary care are still very rare, while their usefulness has been recognised in many other countries. A network of researchers from German academic general practice has explored the reasons for this discrepancy. Methods Based on a comprehensive literature review and expert group discussions, problem analyses as well as structural and procedural prerequisites for a better implementation of clinical trials in German primary care are presented. Results In Germany, basic biomedical science and technology is more reputed than clinical or health services research. Clinical trials are funded by industry or a single national programme, which is highly competitive, specialist-dominated, exclusive of pilot studies, and usually favours innovation rather than comparative effectiveness studies. Academic general practice is still not fully implemented, and existing departments are small. Most general practitioners (GPs) work in a market-based, competitive setting of small private practices, with a high case load. They have no protected time or funding for research, and mostly no research training or experience. Good Clinical Practice (GCP) training is compulsory for participation in clinical trials. The group defined three work packages to be addressed regarding clinical trials in German general practice: (1) problem analysis, and definition of (2) structural prerequisites and (3) procedural prerequisites. Structural prerequisites comprise specific support facilities for general practice-based research networks that could provide practices with a point of contact. Procedural prerequisites consist, for example, of a summary of specific relevant key measures, for example on a web platform. The platform should contain standard operating procedures (SOPs), templates, checklists and other supporting materials for researchers.
Conclusion All in all, our problem analyses revealed that

  5. General Chemistry Students' Conceptual Understanding and Language Fluency: Acid-Base Neutralization and Conductometry

    Science.gov (United States)

    Nyachwaya, James M.

    2016-01-01

    The objective of this study was to examine college general chemistry students' conceptual understanding and language fluency in the context of the topic of acids and bases. 115 students worked in groups of 2-4 to complete an activity on conductometry, where they were given a scenario in which a titration of sodium hydroxide solution and dilute…

  6. Algorithm/Architecture Co-design of the Generalized Sampling Theorem Based De-Interlacer.

    NARCIS (Netherlands)

    Beric, A.; Haan, de G.; Sethuraman, R.; Meerbergen, van J.

    2005-01-01

    De-interlacing is a major determinant of image quality in a modern display processing chain. The de-interlacing method based on the generalized sampling theorem (GST) applied to motion estimation and motion compensation provides the best de-interlacing results. With HDTV interlaced input material

  7. Robust Position Tracking for Electro-Hydraulic Drives Based on Generalized Feedforward Compensation Approach

    DEFF Research Database (Denmark)

    Schmidt, Lasse; Andersen, Torben Ole; Pedersen, Henrik C.

    2012-01-01

    This paper presents a robust tracking control concept based on accurate feedforward compensation for hydraulic valve-cylinder drives. The proposed feedforward compensator is obtained utilizing a generalized description of the valve flow that takes into account any asymmetry of valves and ... constant gain type feedforward compensator, when subjected to strong perturbations in supply pressure and Coulomb friction.

  8. Renormalization of total sets of states into generalized bases with a resolution of the identity

    International Nuclear Information System (INIS)

    Vourdas, A

    2017-01-01

    A total set of states for which we have no resolution of the identity (a ‘pre-basis’), is considered in a finite dimensional Hilbert space. A dressing formalism renormalizes them into density matrices which resolve the identity, and makes them a ‘generalized basis’, which is practically useful. The dressing mechanism is inspired by Shapley’s methodology in cooperative game theory, and it uses Möbius transforms. There is non-independence and redundancy in these generalized bases, which is quantified with a Shannon type of entropy. Due to this redundancy, calculations based on generalized bases are sensitive to physical changes and robust in the presence of noise. For example, the representation of an arbitrary vector in such generalized bases, is robust when noise is inserted in the coefficients. Also in a physical system with a ground state which changes abruptly at some value of the coupling constant, the proposed methodology detects such changes, even when noise is added to the parameters in the Hamiltonian of the system. (paper)

  9. Asymptotic theory of generalized estimating equations based on jack-knife pseudo-observations

    DEFF Research Database (Denmark)

    Overgaard, Morten; Parner, Erik Thorlund; Pedersen, Jan

    2017-01-01

    A general asymptotic theory of estimates from estimating functions based on jack-knife pseudo-observations is established by requiring that the underlying estimator can be expressed as a smooth functional of the empirical distribution. Using results in p-variation norms, the theory is applied...

  10. Infrared and Raman Spectroscopy: A Discovery-Based Activity for the General Chemistry Curriculum

    Science.gov (United States)

    Borgsmiller, Karen L.; O'Connell, Dylan J.; Klauenberg, Kathryn M.; Wilson, Peter M.; Stromberg, Christopher J.

    2012-01-01

    A discovery-based method is described for incorporating the concepts of IR and Raman spectroscopy into the general chemistry curriculum. Students use three sets of springs to model the properties of single, double, and triple covalent bonds. Then, Gaussian 03W molecular modeling software is used to illustrate the relationship between bond…

  11. An Improved Second-Order Generalized Integrator Based Quadrature Signal Generator

    DEFF Research Database (Denmark)

    Xin, Zhen; Wang, Xiongfei; Qin, Zian

    2016-01-01

    The second-order generalized integrator based quadrature signal generator (SOGI-QSG) is able to produce in-quadrature signals for many applications, such as frequency estimation, grid synchronization, and harmonic extraction. However, the SOGI-QSG is sensitive to input dc and harmonic components...

  12. The Inequivalence of an Online and Classroom Based General Psychology Course

    Science.gov (United States)

    Edmonds, Christopher L.

    2006-01-01

    One-hundred seventy-five students enrolled in either a traditional classroom lecture section of General Psychology or in an online section of the same course were compared on exam performance. When covariates of high school grade point average and SAT composite scores were entered into the analysis, students enrolled in the classroom based lecture…

  13. Intrinsic Functional Connectivity of Amygdala-Based Networks in Adolescent Generalized Anxiety Disorder

    Science.gov (United States)

    Roy, Amy K.; Fudge, Julie L.; Kelly, Clare; Perry, Justin S. A.; Daniele, Teresa; Carlisi, Christina; Benson, Brenda; Castellanos, F. Xavier; Milham, Michael P.; Pine, Daniel S.; Ernst, Monique

    2013-01-01

    Objective: Generalized anxiety disorder (GAD) typically begins during adolescence and can persist into adulthood. The pathophysiological mechanisms underlying this disorder remain unclear. Recent evidence from resting state functional magnetic resonance imaging (R-fMRI) studies in adults suggests disruptions in amygdala-based circuitry; the…

  14. Transitioning from Expository Laboratory Experiments to Course-Based Undergraduate Research in General Chemistry

    Science.gov (United States)

    Clark, Ted M.; Ricciardo, Rebecca; Weaver, Tyler

    2016-01-01

    General chemistry courses predominantly use expository experiments that shape student expectations of what a laboratory activity entails. Shifting within a semester to course-based undergraduate research activities that include greater decision-making, collaborative work, and "messy" real-world data necessitates a change in student…

  15. Team-Based Learning Reduces Attrition in a First-Semester General Chemistry Course

    Science.gov (United States)

    Comeford, Lorrie

    2016-01-01

    Team-based learning (TBL) is an instructional method that has been shown to reduce attrition and increase student learning in a number of disciplines. TBL was implemented in a first-semester general chemistry course, and its effect on attrition was assessed. Attrition from sections before implementing TBL (fall 2008 to fall 2009) was compared with…

  16. A projection-based approach to general-form Tikhonov regularization

    DEFF Research Database (Denmark)

    Kilmer, Misha E.; Hansen, Per Christian; Espanol, Malena I.

    2007-01-01

    We present a projection-based iterative algorithm for computing general-form Tikhonov regularized solutions to the problem min_x ||Ax - b||_2^2 + lambda^2 ||Lx||_2^2, where the regularization matrix L is not the identity. Our algorithm is designed for the common case where lambda is not known a priori...
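
    For small problems, the general-form objective can be solved directly by stacking the data-fit and regularization blocks into one least-squares system; the projection-based iterative algorithm of the record targets large problems where such a dense solve is infeasible. A minimal sketch, in which the demo matrices, the first-derivative L, and lam=1.0 are illustrative assumptions:

```python
import numpy as np

def tikhonov_general(A, b, L, lam):
    """Solve min_x ||A x - b||_2^2 + lam^2 ||L x||_2^2 by stacking the
    data-fit block A on top of the scaled regularization block lam*L."""
    K = np.vstack([A, lam * L])
    rhs = np.concatenate([b, np.zeros(L.shape[0])])
    x, *_ = np.linalg.lstsq(K, rhs, rcond=None)
    return x

rng = np.random.default_rng(0)
n = 20
A = rng.standard_normal((30, n))
x_true = np.linspace(0.0, 1.0, n)              # smooth underlying signal
b = A @ x_true + 0.01 * rng.standard_normal(30)
L = np.diff(np.eye(n), axis=0)                 # discrete first derivative (L != I)
x = tikhonov_general(A, b, L, lam=1.0)
# The solution satisfies the normal equations (A^T A + lam^2 L^T L) x = A^T b:
print(np.allclose(A.T @ A @ x + L.T @ L @ x, A.T @ b))  # True
```

    Penalizing ||Lx|| with a derivative operator L favors smooth solutions rather than small ones, which is precisely why the general form (L not the identity) matters in practice.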

  17. SACRD: a data base for fast reactor safety computer codes, general description

    International Nuclear Information System (INIS)

    Greene, N.M.; Forsberg, V.M.; Raiford, G.B.; Arwood, J.W.; Simpson, D.B.; Flanagan, G.F.

    1979-01-01

    SACRD is a data base of material properties and other handbook data needed in computer codes used for fast reactor safety studies. Data are available in the thermodynamics, heat transfer, fluid mechanics, structural mechanics, aerosol transport, meteorology, neutronics, and dosimetry areas. Tabular, graphical and parameterized data are provided in many cases. A general description of the SACRD system is presented in the report

  18. Feasibility of a Web-Based Cross-Over Paleolithic Diet Intervention in the General Population

    NARCIS (Netherlands)

    Nederhof, Esther; Bikker, Esther

    2017-01-01

    Introduction: The primary aim was to investigate feasibility of a web-based cross-over Paleolithic diet intervention in the general population. The secondary aim was to calculate the sample size needed to reach a statistically significant difference in effect of a Paleolithic-like diet on

  19. When are emotions related to group-based appraisals? : A comparison between group-based emotions and general group emotions

    NARCIS (Netherlands)

    Kuppens, Toon; Yzerbyt, Vincent Y.

    2014-01-01

    In the literature on emotions in intergroup relations, it is not always clear how exactly emotions are group-related. Here, we distinguish between emotions that involve appraisals of immediate group concerns (i.e., group-based emotions) and emotions that do not. Recently, general group emotions,

  20. Quantitative generalized ratiometric fluorescence spectroscopy for turbid media based on probe encapsulated by biologically localized embedding

    International Nuclear Information System (INIS)

    Yan, Xiu-Fang; Chen, Zeng-Ping; Cui, Yin-Yin; Hu, Yuan-Liang; Yu, Ru-Qin

    2016-01-01

    PEBBLE (probe encapsulated by biologically localized embedding) nanosensors encapsulating an intensity-based fluorescence indicator and an inert reference fluorescence dye inside the pores of a stable matrix can be used as generalized wavelength-ratiometric probes. However, the lack of an efficient quantitative model renders the choices of inert reference dyes and intensity-based fluorescence indicators used in PEBBLE-based generalized wavelength-ratiometric probes rather limited. In this contribution, an extended quantitative fluorescence model was derived specifically for generalized wavelength-ratiometric probes based on the PEBBLE technique (QFM_GRP) with a view to simplifying the design of PEBBLEs and hence further extending their application potential. The effectiveness of QFM_GRP has been tested on the quantitative determination of free Ca2+ in both simulated and real turbid media using a Ca2+-sensitive PEBBLE nanosensor encapsulating Rhod-2 and eosin B inside the micropores of a stable polyacrylamide matrix. Experimental results demonstrated that QFM_GRP could realize precise and accurate quantification of free Ca2+ in turbid samples, even though there is serious overlap between the fluorescence excitation peaks of eosin B and Ca2+-bound Rhod-2. The average relative predictive error of QFM_GRP for the test simulated turbid samples was 5.9%, about 2-4 times lower than the corresponding values of the partial least squares calibration model and the empirical ratiometric model based on the ratio of fluorescence intensities at the excitation peaks of Ca2+-bound Rhod-2 and eosin B. The recovery rates of QFM_GRP for the real and spiked turbid samples varied from 93.1% to 101%, comparable to the corresponding results of atomic absorption spectrometry. - Highlights: • An advanced model was derived for generalized wavelength-ratiometric PEBBLEs. • The model can simplify the design of generalized wavelength-ratiometric PEBBLEs. • The model realized accurate

  1. Generalized double-humped logistic map-based medical image encryption

    Directory of Open Access Journals (Sweden)

    Samar M. Ismail

    2018-03-01

    Full Text Available This paper presents the design of the generalized Double Humped (DH) logistic map, used for pseudo-random number key generation (PRNG). The generalized parameter added to the map provides more control over the map's chaotic range. A new special map with a zooming effect of the bifurcation diagram is obtained by manipulating the generalization parameter value. The dynamic behavior of the generalized map is analyzed, including the study of the fixed points and stability ranges, the Lyapunov exponent, and the complete bifurcation diagram. The option of designing any specific map is made possible through changing the general parameter, increasing the randomness and controllability of the map. An image encryption algorithm is introduced based on pseudo-random sequence generation using the proposed generalized DH map, offering secure communication transfer of medical MRI and X-ray images. Security analyses are carried out to consolidate system efficiency, including key sensitivity and key-space analyses, histogram analysis, correlation coefficients, MAE, NPCR and UACI calculations. System robustness against noise attacks has been proved along with the NIST test ensuring the system efficiency. A comparison between the proposed system and previous works is presented.

  2. Entropy-based derivation of generalized distributions for hydrometeorological frequency analysis

    Science.gov (United States)

    Chen, Lu; Singh, Vijay P.

    2018-02-01

    Frequency analysis of hydrometeorological and hydrological extremes is needed for the design of hydraulic and civil infrastructure facilities as well as water resources management. A multitude of distributions have been employed for frequency analysis of these extremes. However, no single distribution has been accepted as a global standard. Employing the entropy theory, this study derived five generalized distributions for frequency analysis that used different kinds of information encoded as constraints. These distributions were the generalized gamma (GG), the generalized beta distribution of the second kind (GB2), and the Halphen type A distribution (Hal-A), Halphen type B distribution (Hal-B) and Halphen type inverse B distribution (Hal-IB), among which the GG and GB2 distributions were previously derived by Papalexiou and Koutsoyiannis (2012) and the Halphen family was first derived using entropy theory in this paper. The entropy theory allowed the parameters of the distributions to be estimated in terms of the constraints used for their derivation. The distributions were tested using extreme daily and hourly rainfall data. Results show that the root mean square error (RMSE) values were very small, which indicated that the five generalized distributions fitted the extreme rainfall data well. Among them, according to the Akaike information criterion (AIC) values, the GB2 and the Halphen family generally gave a better fit. Therefore, these general distributions are among the best choices for frequency analysis. The entropy-based derivation led to a new way for frequency analysis of hydrometeorological extremes.
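
    The AIC-based model comparison used in the study can be sketched with a simple stand-in: fit each candidate distribution and prefer the one with the lower AIC = 2k - 2 ln L. The sketch below uses method-of-moments fits on synthetic gamma data, not the paper's entropy-based estimation or its rainfall records:

```python
import math

import numpy as np

def gamma_logpdf(x, k, theta):
    """Log-density of the gamma distribution with shape k and scale theta."""
    return (k - 1) * np.log(x) - x / theta - k * math.log(theta) - math.lgamma(k)

def aic(loglik, n_params):
    """Akaike information criterion: 2k - 2 ln L (lower is better)."""
    return 2 * n_params - 2 * loglik

rng = np.random.default_rng(7)
x = rng.gamma(shape=3.0, scale=2.0, size=2000)   # synthetic positive "extremes"

# Method-of-moments estimates (a stand-in for entropy/maximum-likelihood fits)
m, v = x.mean(), x.var()
k_hat, theta_hat = m * m / v, v / m              # gamma: shape, scale
lam_hat = 1.0 / m                                # exponential: rate

aic_gamma = aic(gamma_logpdf(x, k_hat, theta_hat).sum(), n_params=2)
aic_exp = aic(np.sum(math.log(lam_hat) - lam_hat * x), n_params=1)
print(aic_gamma < aic_exp)   # True: gamma fits gamma-generated data better
```

    The extra parameter of the gamma model is penalized by the 2k term, so AIC prefers it only when the gain in log-likelihood outweighs the added complexity, which is the trade-off behind the GB2/Halphen rankings reported above.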

  3. A General Provincial Situation Visualization System Based on iPhone Operating System of Shandong Province

    Science.gov (United States)

    Ye, Z.; Xiang, H.

    2014-04-01

    The paper discusses the basic principles and the problem solutions in the design and implementation of the mobile GIS system; based on the research results, we developed the General Provincial Situation Visualization System Based on iOS of Shandong Province. The system is developed in the Objective-C programming language and uses the ArcGIS Runtime SDK for iOS as the development tool to call the "World-map Shandong" services, implementing the General Provincial Situation Visualization System on iOS devices. The system is currently available for download in the App Store and was chosen as a typical application case of the ESRI China ArcGIS API for iOS.

  4. Design method of general-purpose driving circuit for CCD based on CPLD

    International Nuclear Information System (INIS)

    Zhang Yong; Tang Benqi; Xiao Zhigang; Wang Zujun; Huang Shaoyan

    2005-01-01

    To study the radiation damage effects and mechanisms of CCDs systematically, it is very important to develop a general-purpose test platform. The paper discusses the design method of a general-purpose driving circuit for CCDs based on CPLD and its realization approach. A main controller has been designed, based on MAX7000S using the MAX-PLUS II software, to read the data file from the external memory, set up the relevant parameter registers and produce the driving pulses in strict accordance with the parameter requests. The basic driving circuit module has been completed based on this method. The output waveform of the module matches the simulation waveform. The result indicates that the design method is feasible. (authors)

  5. QUANTITATIVE EVALUATION METHOD OF ELEMENTS PRIORITY OF CARTOGRAPHIC GENERALIZATION BASED ON TAXI TRAJECTORY DATA

    Directory of Open Access Journals (Sweden)

    Z. Long

    2017-09-01

    Full Text Available Considering the lack of quantitative criteria for the selection of elements in cartographic generalization, this study divided the hotspot areas of passengers into three levels, assigned them different weights, and then classified the elements from the different hotspots. On this basis, a method was proposed to quantify the priority of element selection. Subsequently, the quantitative priorities of different cartographic elements were summarized based on this method. In cartographic generalization, the method can be used to preferentially select the significant elements and discard those that are relatively non-significant.

  6. Effects of a team-based assessment and intervention on patient safety culture in general practice

    DEFF Research Database (Denmark)

    Hoffmann, B; Müller, V; Rochon, J

    2014-01-01

    Background: The measurement of safety culture in healthcare is generally regarded as a first step towards improvement. Based on a self-assessment of safety culture, the Frankfurt Patient Safety Matrix (FraTrix) aims to enable healthcare teams to improve safety culture in their organisations. In this study we assessed the effects of FraTrix on safety culture in general practice. Methods: We conducted an open randomised controlled trial in 60 general practices. FraTrix was applied over a period of 9 months during three facilitated team sessions in intervention practices. At baseline and after 12 months, scores were allocated for safety culture as expressed in practice structure and processes (indicators), in safety climate and in patient safety incident reporting. The primary outcome was the indicator error management. Results: During the team sessions, practice teams reflected on their safety

  7. Design and implementation of a general and automatic test platform based on NI PXI system

    Science.gov (United States)

    Shi, Long

    2018-05-01

    Aiming at difficulties of test equipment such as short product life, poor generality and high development cost, a general and automatic test platform based on the NI PXI system is designed in this paper, which is able to meet most test requirements of circuit boards. The test platform is divided into 5 layers; every layer is introduced in detail except for the "Equipment Under Test" layer. An output board of a track-side equipment, an important part of the high-speed train control system, is taken as an example of a functional circuit test performed on the platform. The results show that the test platform supports add-on function development and automatic testing, with wide compatibility and strong generality.

  8. Generalization bounds of ERM-based learning processes for continuous-time Markov chains.

    Science.gov (United States)

    Zhang, Chao; Tao, Dacheng

    2012-12-01

    Many existing results on statistical learning theory are based on the assumption that samples are independently and identically distributed (i.i.d.). However, the assumption of i.i.d. samples is not suitable for practical application to problems in which samples are time dependent. In this paper, we are mainly concerned with the empirical risk minimization (ERM) based learning process for time-dependent samples drawn from a continuous-time Markov chain. This learning process covers many kinds of practical applications, e.g., the prediction for a time series and the estimation of channel state information. Thus, it is significant to study its theoretical properties including the generalization bound, the asymptotic convergence, and the rate of convergence. It is noteworthy that, since samples are time dependent in this learning process, the concerns of this paper cannot (at least straightforwardly) be addressed by existing methods developed under the sample i.i.d. assumption. We first develop a deviation inequality for a sequence of time-dependent samples drawn from a continuous-time Markov chain and present a symmetrization inequality for such a sequence. By using the resultant deviation inequality and symmetrization inequality, we then obtain the generalization bounds of the ERM-based learning process for time-dependent samples drawn from a continuous-time Markov chain. Finally, based on the resultant generalization bounds, we analyze the asymptotic convergence and the rate of convergence of the learning process.

  9. Effect of electromagnetic radiations from mobile phone base stations on general health and salivary function.

    Science.gov (United States)

    Singh, Kushpal; Nagaraj, Anup; Yousuf, Asif; Ganta, Shravani; Pareek, Sonia; Vishnani, Preeti

    2016-01-01

    Cell phones use electromagnetic, nonionizing radiations in the microwave range, which some believe may be harmful to human health. The present study aimed to determine the effect of electromagnetic radiations (EMRs) on unstimulated/stimulated salivary flow rate and other health-related problems between the general populations residing in proximity to and far away from mobile phone base stations. A total of four mobile base stations were randomly selected from four zones of Jaipur, Rajasthan, India. Twenty individuals who were residing in proximity to the selected mobile phone towers were taken as the case group and the other 20 individuals (control group) who were living nearly 1 km away in the periphery were selected for salivary analysis. Questions related to sleep disturbances were measured using the Pittsburgh Sleep Quality Index (PSQI) and other health problems were included in the questionnaire. The Chi-square test was used for statistical analysis. It was revealed that a majority of the subjects who were residing near the mobile base station complained of sleep disturbances, headache, dizziness, irritability, concentration difficulties, and hypertension. A majority of the study subjects had significantly lesser stimulated salivary secretion (P < 0.05). Thus, an effect of mobile phone base stations on the health and well-being of the general population cannot be ruled out. Further studies are warranted to evaluate the effect of electromagnetic fields (EMFs) on general health and more specifically on oral health.

  10. Generic functional requirements for a NASA general-purpose data base management system

    Science.gov (United States)

    Lohman, G. M.

    1981-01-01

    Generic functional requirements for a general-purpose, multi-mission data base management system (DBMS) for application to remotely sensed scientific data bases are detailed. The motivation for utilizing DBMS technology in this environment is explained. The major requirements include: (1) a DBMS for scientific observational data; (2) a multi-mission capability; (3) user-friendliness; (4) extensive and integrated information about data; (5) robust languages for defining data structures and formats; (6) scientific data types and structures; (7) flexible physical access mechanisms; (8) ways of representing spatial relationships; (9) a high level nonprocedural interactive query and data manipulation language; (10) data base maintenance utilities; (11) high rate input/output and large data volume storage; and (12) adaptability to a distributed data base and/or data base machine configuration. Detailed functions are specified in a top-down hierarchic fashion. Implementation, performance, and support requirements are also given.

  11. Design and validation of general biology learning program based on scientific inquiry skills

    Science.gov (United States)

    Cahyani, R.; Mardiana, D.; Noviantoro, N.

    2018-03-01

    Scientific inquiry is highly recommended to teach science. The reality in the schools and colleges is that many educators still have not implemented inquiry learning because of their lack of understanding. The study aims to 1) analyze students’ difficulties in learning General Biology, 2) design a General Biology learning program based on multimedia-assisted scientific inquiry learning, and 3) validate the proposed design. The method used was Research and Development. The subjects of the study were 27 pre-service students of general elementary school/Islamic elementary schools. The workflow of program design includes identifying learning difficulties of General Biology, designing course programs, and designing instruments and assessment rubrics. The program design is made for four lecture sessions. Validation of all learning tools was performed by expert judges. The results showed that: 1) there are some problems identified in General Biology lectures; 2) the designed products include learning programs, multimedia characteristics, worksheet characteristics, and scientific attitudes; and 3) expert validation shows that all program designs are valid and can be used with minor revisions.

  12. A Hybrid Generalized Hidden Markov Model-Based Condition Monitoring Approach for Rolling Bearings.

    Science.gov (United States)

    Liu, Jie; Hu, Youmin; Wu, Bo; Wang, Yan; Xie, Fengyun

    2017-05-18

    The operating condition of rolling bearings affects productivity and quality in the rotating machine process. Developing an effective rolling bearing condition monitoring approach is critical to accurately identify the operating condition. In this paper, a hybrid generalized hidden Markov model-based condition monitoring approach for rolling bearings is proposed, where interval valued features are used to efficiently recognize and classify machine states in the machine process. In the proposed method, vibration signals are decomposed into multiple modes with variational mode decomposition (VMD). Parameters of the VMD, in the form of generalized intervals, provide a concise representation for aleatory and epistemic uncertainty and improve the robustness of identification. The multi-scale permutation entropy method is applied to extract state features from the decomposed signals in different operating conditions. Traditional principal component analysis is adopted to reduce feature size and computational cost. With the extracted features' information, the generalized hidden Markov model, based on generalized interval probability, is used to recognize and classify the fault types and fault severity levels. Finally, the experiment results show that the proposed method is effective at recognizing and classifying the fault types and fault severity levels of rolling bearings. This monitoring method is also efficient enough to quantify the two uncertainty components.
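
    The feature-extraction step described above can be sketched in code. The following is a minimal illustration of multi-scale permutation entropy only (the VMD decomposition, PCA reduction, and generalized hidden Markov classifier are omitted); the function names and parameter defaults are ours, not the authors'.

    ```python
    import math
    from itertools import permutations

    import numpy as np

    def permutation_entropy(x, m=3, delay=1):
        """Normalized Bandt-Pompe permutation entropy of a 1-D signal."""
        x = np.asarray(x, dtype=float)
        n_windows = len(x) - (m - 1) * delay
        counts = {p: 0 for p in permutations(range(m))}
        for i in range(n_windows):
            window = x[i:i + m * delay:delay]
            counts[tuple(np.argsort(window))] += 1   # ordinal pattern of this window
        p = np.array([c for c in counts.values() if c > 0], dtype=float)
        p /= p.sum()
        # normalize by log(m!) so the result lies in [0, 1]
        return float(-np.sum(p * np.log(p)) / math.log(math.factorial(m)))

    def coarse_grain(x, scale):
        """Non-overlapping averaging, the usual coarse-graining for multi-scale entropy."""
        n = len(x) // scale
        return np.asarray(x[:n * scale], dtype=float).reshape(n, scale).mean(axis=1)

    def multiscale_permutation_entropy(x, m=3, scales=(1, 2, 3)):
        """Permutation entropy of the signal at several coarse-graining scales."""
        return [permutation_entropy(coarse_grain(x, s), m) for s in scales]
    ```

    A noise-dominated vibration segment yields entropies near 1, while a strongly regular (e.g. fault-periodic) signature lowers them; the vector of values across scales is the kind of state feature such a classifier would consume.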

  13. A Hybrid Generalized Hidden Markov Model-Based Condition Monitoring Approach for Rolling Bearings

    Directory of Open Access Journals (Sweden)

    Jie Liu

    2017-05-01

    Full Text Available The operating condition of rolling bearings affects productivity and quality in the rotating machine process. Developing an effective rolling bearing condition monitoring approach is critical to accurately identify the operating condition. In this paper, a hybrid generalized hidden Markov model-based condition monitoring approach for rolling bearings is proposed, where interval valued features are used to efficiently recognize and classify machine states in the machine process. In the proposed method, vibration signals are decomposed into multiple modes with variational mode decomposition (VMD). Parameters of the VMD, in the form of generalized intervals, provide a concise representation for aleatory and epistemic uncertainty and improve the robustness of identification. The multi-scale permutation entropy method is applied to extract state features from the decomposed signals in different operating conditions. Traditional principal component analysis is adopted to reduce feature size and computational cost. With the extracted features’ information, the generalized hidden Markov model, based on generalized interval probability, is used to recognize and classify the fault types and fault severity levels. Finally, the experiment results show that the proposed method is effective at recognizing and classifying the fault types and fault severity levels of rolling bearings. This monitoring method is also efficient enough to quantify the two uncertainty components.

  14. Generalized One-Band Model Based on Zhang-Rice Singlets for Tetragonal CuO

    Science.gov (United States)

    Hamad, I. J.; Manuel, L. O.; Aligia, A. A.

    2018-04-01

    Tetragonal CuO (T-CuO) has attracted attention because of its structure similar to that of the cuprates. It has been recently proposed as a compound whose study can settle the long debate about the proper microscopic modeling for cuprates. In this work, we rigorously derive an effective one-band generalized t -J model for T-CuO, based on orthogonalized Zhang-Rice singlets, and estimate its parameters based on previous ab initio calculations. By means of the self-consistent Born approximation, we then evaluate the spectral function and the quasiparticle dispersion for a single hole doped in antiferromagnetically ordered half-filled T-CuO. Our predictions show very good agreement with angle-resolved photoemission spectra and with theoretical multiband results. We conclude that a generalized t -J model remains the minimal Hamiltonian for a correct description of single-hole dynamics in cuprates.

  15. Comparison of nonstationary generalized logistic models based on Monte Carlo simulation

    Directory of Open Access Journals (Sweden)

    S. Kim

    2015-06-01

    Full Text Available Recently, evidence of climate change has been observed in hydrologic data such as rainfall and flow data. The time-dependent characteristics of statistics in hydrologic data are widely referred to as nonstationarity. Various nonstationary GEV and generalized Pareto models have therefore been suggested for frequency analysis of nonstationary annual maximum and POT (peak-over-threshold) data, respectively. However, alternative models are required for nonstationary frequency analysis in order to capture the complex characteristics of nonstationary data under climate change. This study proposed a nonstationary generalized logistic model including time-dependent parameters. The parameters of the proposed model are estimated using the method of maximum likelihood based on the Newton-Raphson method. In addition, the proposed model is compared with existing models by Monte Carlo simulation to investigate the characteristics of the models and their applicability.
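
    As a concrete (and deliberately simplified) sketch of such a model, the code below fits a logistic distribution — the k = 0 special case of the generalized logistic — whose location varies linearly in time, mu(t) = mu0 + mu1*t. Plain gradient descent on the negative log-likelihood stands in for the paper's Newton-Raphson scheme, and all names are ours.

    ```python
    import numpy as np

    def fit_nonstationary_logistic(t, x, lr=0.05, iters=20000):
        """MLE for x_t ~ Logistic(mu0 + mu1*t, s) by gradient descent on the NLL.

        Per-sample NLL: log(s) + z + 2*log(1 + exp(-z)), with z = (x - mu(t)) / s.
        """
        t, x = np.asarray(t, float), np.asarray(x, float)
        mu0, mu1, log_s = float(np.mean(x)), 0.0, 0.0
        for _ in range(iters):
            s = np.exp(log_s)
            z = (x - mu0 - mu1 * t) / s
            g = 2.0 / (1.0 + np.exp(-z)) - 1.0       # d(nll_i)/dz = 2*sigmoid(z) - 1
            # chain rule: dz/dmu0 = -1/s, dz/dmu1 = -t/s, dz/dlog_s = -z
            mu0 -= lr * np.mean(-g / s)
            mu1 -= lr * np.mean(-g * t / s)
            log_s -= lr * np.mean(1.0 - g * z)
        return mu0, mu1, float(np.exp(log_s))
    ```

    On synthetic data with a known linear trend in the location parameter, the fitted slope recovers the trend, which is exactly the nonstationary signal such models are designed to capture.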

  16. Hesitant fuzzy linguistic multicriteria decision-making method based on generalized prioritized aggregation operator.

    Science.gov (United States)

    Wu, Jia-ting; Wang, Jian-qiang; Wang, Jing; Zhang, Hong-yu; Chen, Xiao-hong

    2014-01-01

    Based on linguistic term sets and hesitant fuzzy sets, the concept of hesitant fuzzy linguistic sets was introduced. The focus of this paper is the multicriteria decision-making (MCDM) problems in which the criteria are in different priority levels and the criteria values take the form of hesitant fuzzy linguistic numbers (HFLNs). A new approach to solving these problems is proposed, which is based on the generalized prioritized aggregation operator of HFLNs. Firstly, the new operations and comparison method for HFLNs are provided and some linguistic scale functions are applied. Subsequently, two prioritized aggregation operators and a generalized prioritized aggregation operator of HFLNs are developed and applied to MCDM problems. Finally, an illustrative example is given to demonstrate the effectiveness and feasibility of the proposed method, and the results are compared with an existing approach.

  17. The effectiveness of a semi-tailored facilitator-based intervention to optimise chronic care management in general practice

    DEFF Research Database (Denmark)

    Due, Tina Drud; Thorsen, Thorkil; Kousgaard, Marius Brostrøm

    2014-01-01

    BACKGROUND: The Danish health care sector is reorganising based on disease management programmes designed to secure integrated and high quality chronic care across hospitals, general practitioners and municipalities. The disease management programmes assign a central role to general practice; and...

  18. The Effectiveness of Abstinence-Based/Faith-Based Addiction Quitting Courses on General and Coping Self-Efficacy

    Directory of Open Access Journals (Sweden)

    Hosin Nazari, Sh

    2010-05-01

    Full Text Available Aim: One of the influential elements in the life of an individual is his or her level of self-efficacy. This research aimed to study the effectiveness of abstinence-based/faith-based addiction quitting courses on the general and coping self-efficacy of people in Tehran city who want to quit opium addiction through these courses. Method: In a semi-experimental research design, 80 people who referred to abstinence-based/faith-based addiction quitting courses were selected by census method. The general self-efficacy questionnaire of Jerusalem and Schwartzer (1981) and the coping self-efficacy questionnaire of Chesney (2006) were administered to the selected sample before and after treatment. Results: The results of a paired t-test indicated that abstinence-based/faith-based addiction quitting courses have a significant influence on the skills of impeding negative thoughts and excitements and gaining friends’ and colleagues’ support. Conclusion: The findings of this research concur with the findings of similar researches and indicate that, with appropriate training strategies, self-efficacy beliefs can be improved and boosted.

  19. When are emotions related to group-based appraisals? A comparison between group-based emotions and general group emotions.

    Science.gov (United States)

    Kuppens, Toon; Yzerbyt, Vincent Y

    2014-12-01

    In the literature on emotions in intergroup relations, it is not always clear how exactly emotions are group-related. Here, we distinguish between emotions that involve appraisals of immediate group concerns (i.e., group-based emotions) and emotions that do not. Recently, general group emotions, measured by asking people how they feel "as a group member" but without specifying an object for these emotions, have been conceptualized as reflecting appraisals of group concerns. In contrast, we propose that general group emotions are best seen as emotions about belonging to a group. In two studies, general group emotions were closely related to emotions that are explicitly measured as belonging emotions. Two further studies showed that general group emotions were not related to appraisals of immediate group concerns, whereas group-based emotions were. We argue for more specificity regarding the group-level aspects of emotion that are tapped by emotion measures. © 2014 by the Society for Personality and Social Psychology, Inc.

  20. General specifications for the development of a PC-based simulator of the NASA RECON system

    Science.gov (United States)

    Dominick, Wayne D. (Editor); Triantafyllopoulos, Spiros

    1984-01-01

    The general specifications for the design and implementation of an IBM PC/XT-based simulator of the NASA RECON system, including record designs, file structure designs, command language analysis, program design issues, error recovery considerations, and usage monitoring facilities are discussed. Once implemented, such a simulator will be utilized to evaluate the effectiveness of simulated information system access in addition to actual system usage as part of the total educational programs being developed within the NASA contract.

  1. State of the art in HGPT (Heuristically Based Generalized Perturbation) methodology

    International Nuclear Information System (INIS)

    Gandini, A.

    1993-01-01

    A distinctive feature of the heuristically based generalized perturbation theory (HGPT) methodology consists in the systematic use of importance conservation concepts. As is well known, this use leads to fundamental reciprocity relationships from which perturbation, or sensitivity, expressions can be derived. The state of the art of the HGPT methodology is illustrated here. Its application to a number of specific nonlinear fields of interest is discussed. (author)

  2. Environmental Assessment for the General Plan and Maintenance of Patrick Air Force Base, Florida

    Science.gov (United States)

    2005-05-01

    considered viable. Environmental Effects: The General Plan EA evaluated the environmental impacts of … year cycle. The potential environmental effects were assessed for the following environmental resource areas: air quality, water quality, geology …

  3. A Matrix Method Based on the Fibonacci Polynomials to the Generalized Pantograph Equations with Functional Arguments

    Directory of Open Access Journals (Sweden)

    Ayşe Betül Koç

    2014-01-01

    Full Text Available A pseudospectral method based on the Fibonacci operational matrix is proposed to solve generalized pantograph equations with linear functional arguments. By using this method, approximate solutions of the problems are easily obtained in the form of truncated Fibonacci series. Some illustrative examples are given to verify the efficiency and effectiveness of the proposed method. Then, the numerical results are compared with other methods.
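
    The Fibonacci polynomial basis underlying the operational matrix is easy to generate. The sketch below (our own helper, not the authors' solver) builds the polynomials from the recurrence F_1(x) = 1, F_2(x) = x, F_{k+1}(x) = x·F_k(x) + F_{k-1}(x), storing each as a coefficient list in ascending powers.

    ```python
    def fibonacci_polynomials(n):
        """First n Fibonacci polynomials as coefficient lists (ascending powers)."""
        polys = [[1], [0, 1]]                 # F_1(x) = 1, F_2(x) = x
        for _ in range(n - 2):
            prev, cur = polys[-2], polys[-1]
            nxt = [0] + cur                   # multiply F_k by x
            for i, c in enumerate(prev):      # add F_{k-1}
                nxt[i] += c
            polys.append(nxt)
        return polys[:n]

    def eval_poly(coeffs, x):
        """Evaluate an ascending-power coefficient list at x."""
        return sum(c * x ** i for i, c in enumerate(coeffs))
    ```

    A truncated approximate solution is then a linear combination of these basis polynomials; evaluating each F_k at x = 1 recovers the ordinary Fibonacci numbers, a quick sanity check on the recurrence.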

  4. Does an activity based remuneration system attract young doctors to general practice?

    Directory of Open Access Journals (Sweden)

    Abelsen Birgit

    2012-03-01

    Full Text Available Background The use of increasingly complex payment schemes in primary care may represent a barrier to recruiting general practitioners (GPs). The existing Norwegian remuneration system is fully activity based - 2/3 fee-for-service and 1/3 capitation. Given that the system has been designed and revised in close collaboration with the medical association, it is likely to correspond - at least to some degree - with the preferences of current GPs (men in majority). The objective of this paper was to study which preferences young doctors (women in majority), the potential entrants to general practice, have for activity-based vs. salary-based payment systems. Methods In November-December 2010 all last-year medical students and all interns in Norway (n = 1562) were invited to participate in an online survey. The respondents were asked their opinion on systems of remuneration for GPs; inclination to work as a GP; risk attitude; income preferences; work pace tolerance. The data was analysed using one-way ANOVA and multinomial logistic regression analysis. Results A total of 831 (53%) responded. Nearly half the sample (47%) did not consider the remuneration system to be important for their inclination to work as a GP; 36% considered the current system to make general practice more attractive, while 17% considered it to make general practice less attractive. Those who are attracted by the existing system were men and those who think high income is important, while those who are deterred by the system are risk averse and less happy with a high work pace. On the question of preferred remuneration system, half the sample preferred a mix of salary and activity based remuneration (the median respondent would prefer a 50/50 mix). Only 20% preferred a fully activity based system like the existing one. 
A salary system was preferred by women, and those less concerned with high income, while a fully activity based system was preferred by men, and those

  5. Does an activity based remuneration system attract young doctors to general practice?

    Science.gov (United States)

    2012-01-01

    Background The use of increasingly complex payment schemes in primary care may represent a barrier to recruiting general practitioners (GPs). The existing Norwegian remuneration system is fully activity based - 2/3 fee-for-service and 1/3 capitation. Given that the system has been designed and revised in close collaboration with the medical association, it is likely to correspond - at least to some degree - with the preferences of current GPs (men in majority). The objective of this paper was to study which preferences young doctors (women in majority), the potential entrants to general practice, have for activity-based vs. salary-based payment systems. Methods In November-December 2010 all last-year medical students and all interns in Norway (n = 1562) were invited to participate in an online survey. The respondents were asked their opinion on systems of remuneration for GPs; inclination to work as a GP; risk attitude; income preferences; work pace tolerance. The data was analysed using one-way ANOVA and multinomial logistic regression analysis. Results A total of 831 (53%) responded. Nearly half the sample (47%) did not consider the remuneration system to be important for their inclination to work as a GP; 36% considered the current system to make general practice more attractive, while 17% considered it to make general practice less attractive. Those who are attracted by the existing system were men and those who think high income is important, while those who are deterred by the system are risk averse and less happy with a high work pace. On the question of preferred remuneration system, half the sample preferred a mix of salary and activity based remuneration (the median respondent would prefer a 50/50 mix). Only 20% preferred a fully activity based system like the existing one. 
A salary system was preferred by women, and those less concerned with high income, while a fully activity based system was preferred by men, and those happy with a high work

  6. Does an activity based remuneration system attract young doctors to general practice?

    Science.gov (United States)

    Abelsen, Birgit; Olsen, Jan Abel

    2012-03-20

    The use of increasingly complex payment schemes in primary care may represent a barrier to recruiting general practitioners (GPs). The existing Norwegian remuneration system is fully activity based - 2/3 fee-for-service and 1/3 capitation. Given that the system has been designed and revised in close collaboration with the medical association, it is likely to correspond - at least to some degree - with the preferences of current GPs (men in majority). The objective of this paper was to study which preferences young doctors (women in majority), the potential entrants to general practice, have for activity-based vs. salary-based payment systems. In November-December 2010 all last-year medical students and all interns in Norway (n = 1562) were invited to participate in an online survey. The respondents were asked their opinion on systems of remuneration for GPs; inclination to work as a GP; risk attitude; income preferences; work pace tolerance. The data was analysed using one-way ANOVA and multinomial logistic regression analysis. A total of 831 (53%) responded. Nearly half the sample (47%) did not consider the remuneration system to be important for their inclination to work as a GP; 36% considered the current system to make general practice more attractive, while 17% considered it to make general practice less attractive. Those who are attracted by the existing system were men and those who think high income is important, while those who are deterred by the system are risk averse and less happy with a high work pace. On the question of preferred remuneration system, half the sample preferred a mix of salary and activity based remuneration (the median respondent would prefer a 50/50 mix). Only 20% preferred a fully activity based system like the existing one. A salary system was preferred by women, and those less concerned with high income, while a fully activity based system was preferred by men, and those happy with a high work pace. Given a concern

  7. Re-Investigation of Generalized Integrator Based Filters From a First-Order-System Perspective

    DEFF Research Database (Denmark)

    Xin, Zhen; Zhao, Rende; Mattavelli, Paolo

    2016-01-01

    The generalized integrator (GI)-based filters can be categorized into two types: one is related to quadrature signal generator (QSG), and the other is related to sequence filter (SF). The QSG is used for generating the in-quadrature sinusoidal signals and the SF works for extracting the symmetrical...... extended structures and thus restrict their applications. To overcome the drawback, this paper uses the first-order-system concept to re-investigate the GI-based filters, with which their working principles can be intuitively understood and their structure correlations can be easily discovered. Moreover...

  8. A method for assigning species into groups based on generalized Mahalanobis distance between habitat model coefficients

    Science.gov (United States)

    Williams, C.J.; Heglund, P.J.

    2009-01-01

    Habitat association models are commonly developed for individual animal species using generalized linear modeling methods such as logistic regression. We considered the issue of grouping species based on their habitat use so that management decisions can be based on sets of species rather than individual species. This research was motivated by a study of western landbirds in northern Idaho forests. The method we examined was to separately fit models to each species and to use a generalized Mahalanobis distance between coefficient vectors to create a distance matrix among species. Clustering methods were used to group species from the distance matrix, and multidimensional scaling methods were used to visualize the relations among species groups. Methods were also discussed for evaluating the sensitivity of the conclusions because of outliers or influential data points. We illustrate these methods with data from the landbird study conducted in northern Idaho. Simulation results are presented to compare the success of this method to alternative methods using Euclidean distance between coefficient vectors and to methods that do not use habitat association models. These simulations demonstrate that our Mahalanobis-distance-based method was nearly always better than Euclidean-distance-based methods or methods not based on habitat association models. The methods used to develop candidate species groups are easily explained to other scientists and resource managers since they mainly rely on classical multivariate statistical methods. © 2008 Springer Science+Business Media, LLC.
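
    The distance computation at the heart of the method can be sketched as follows. This is a generic implementation of the pairwise generalized Mahalanobis distance between fitted coefficient vectors (the clustering and multidimensional scaling steps, e.g. via scipy, are omitted, and the variable names are ours).

    ```python
    import numpy as np

    def mahalanobis_distance_matrix(coefs, cov):
        """Pairwise generalized Mahalanobis distances between coefficient vectors.

        coefs: (n_species, n_params) array of fitted habitat-model coefficients.
        cov:   (n_params, n_params) covariance used to weight the differences
               (e.g. a pooled estimate of the coefficient covariance).
        """
        inv = np.linalg.inv(cov)
        n = coefs.shape[0]
        d = np.zeros((n, n))
        for i in range(n):
            for j in range(i + 1, n):
                diff = coefs[i] - coefs[j]
                d[i, j] = d[j, i] = np.sqrt(diff @ inv @ diff)
        return d
    ```

    With `cov` set to the identity matrix this reduces to plain Euclidean distance, the baseline the simulations compare against; the resulting matrix can be fed directly to standard hierarchical-clustering routines.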

  9. Is the Generally Held View That Intravenous Dihydroergotamine Is Effective in Migraine Based on Wrong "General Consensus" of One Trial?

    DEFF Research Database (Denmark)

    Bekan, Goran; Tfelt-Hansen, Peer

    2016-01-01

    BACKGROUND: The claim that parenteral dihydroergotamine (DHE) is effective in migraine is based on one randomized, placebo-controlled, crossover trial from 1986. The aim of this review was to critically evaluate the original article. It was also found to be of interest to review quotes concerning...

  10. SIMULATION AND PREDICTION OF THE PROCESS BASED ON THE GENERAL LOGISTIC MAPPING

    Directory of Open Access Journals (Sweden)

    V. V. Skalozub

    2013-11-01

    Full Text Available Purpose. The aim of the research is to build a model of the generalized logistic mapping and to assess the possibilities of its use for forming a mathematical description, as well as operational forecasts, of parameters of complex dynamic processes described by time series. Methodology. The research results are obtained on the basis of mathematical modeling and simulation of nonlinear systems using the tools of chaotic dynamics. Findings. A model of the generalized logistic mapping, which is used to interpret the characteristics of dynamic processes, was proposed. We consider some examples of process representations based on the generalized logistic mapping for varying values of the model parameters. Procedures were proposed for modeling and interpreting data on the investigated processes, represented by time series, as well as for operational forecasting of parameters using the generalized logistic mapping model. Originality. The paper proposes an improved mathematical model, the generalized logistic mapping, designed for the study of nonlinear discrete dynamic processes. Practical value. The research carried out using the generalized logistic mapping on railway transport processes, in particular on the assessment of traffic volume parameters, indicates the great potential of its application in practice for solving problems of analysis, modeling and forecasting of complex nonlinear discrete dynamical processes. The proposed model can be used under conditions of uncertainty, irregularity and manifestations of the chaotic nature of technical, economic and other processes, including railway ones.
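
    The abstract does not spell out the exact functional form of the authors' generalization, so as an illustration the sketch below uses one common two-parameter generalization, x_{k+1} = r·x_k^alpha·(1 - x_k)^beta, which reduces to the classic logistic map at alpha = beta = 1. Once the parameters are fitted to an observed series, one-step operational forecasts are produced by iterating the map from the last observation.

    ```python
    def generalized_logistic_map(r, x0, n, alpha=1.0, beta=1.0):
        """Iterate x_{k+1} = r * x_k**alpha * (1 - x_k)**beta for n steps.

        Returns the full trajectory [x0, x1, ..., xn]; alpha = beta = 1
        recovers the classic logistic map x_{k+1} = r * x_k * (1 - x_k).
        """
        xs = [x0]
        for _ in range(n):
            x = xs[-1]
            xs.append(r * x ** alpha * (1.0 - x) ** beta)
        return xs
    ```

    For the classic case with r = 2 the trajectory settles on the fixed point x* = 1 - 1/r = 0.5, while larger r (and non-unit alpha, beta) produces the periodic and chaotic regimes the paper exploits for modeling irregular traffic-volume series.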

  11. On the Generalization of the Timoshenko Beam Model Based on the Micropolar Linear Theory: Static Case

    Directory of Open Access Journals (Sweden)

    Andrea Nobili

    2015-01-01

    Full Text Available Three generalizations of the Timoshenko beam model according to the linear theory of micropolar elasticity or its special cases, that is, the couple stress theory or the modified couple stress theory, recently developed in the literature, are investigated and compared. The analysis is carried out in a variational setting, making use of Hamilton’s principle. It is shown that both the Timoshenko and the (possibly modified) couple stress models are based on a microstructural kinematics which is governed by kinosthenic (ignorable) terms in the Lagrangian. Despite their differences, within a beam-plane theory all models bring in only one microstructural material parameter. Besides, the micropolar model formally reduces to the couple stress model upon introducing the proper constraint on the microstructure kinematics, although the material parameter is generally different. Line loading on the microstructure results in a nonconservative force potential. Finally, the Hamiltonian form of the micropolar beam model is derived and the canonical equations are presented along with their general solution. The latter exhibits a general oscillatory pattern for the microstructure rotation and stress, whose behavior matches the numerical findings.

  12. Generalized query-based active learning to identify differentially methylated regions in DNA.

    Science.gov (United States)

    Haque, Md Muksitul; Holder, Lawrence B; Skinner, Michael K; Cook, Diane J

    2013-01-01

Active learning is a supervised learning technique that reduces the number of examples required for building a successful classifier, because it can choose the data it learns from. This technique holds promise for many biological domains in which classified examples are expensive and time-consuming to obtain. Most traditional active learning methods ask very specific queries to the Oracle (e.g., a human expert) to label an unlabeled example. The example may consist of numerous features, many of which are irrelevant. Removing such features creates a shorter query with only relevant features, which is easier for the Oracle to answer. We propose a generalized query-based active learning (GQAL) approach that constructs generalized queries based on multiple instances. By constructing appropriately generalized queries, we can achieve higher accuracy compared to traditional active learning methods. We apply our active learning method to find differentially methylated DNA regions (DMRs). DMRs are DNA locations in the genome that are known to be involved in tissue differentiation, epigenetic regulation, and disease. We also apply our method to 13 other data sets and show that our method is better than another popular active learning technique.
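As background for this record, a minimal pool-based active-learning loop with uncertainty sampling can be sketched as follows; the nearest-centroid learner, the 1-D data and the oracle are invented placeholders, not the GQAL algorithm or the methylation data:

```python
def centroids(labeled):
    """Class means of the labeled points (a deliberately simple learner)."""
    sums, counts = {}, {}
    for x, y in labeled:
        sums[y] = sums.get(y, 0.0) + x
        counts[y] = counts.get(y, 0) + 1
    return {y: sums[y] / counts[y] for y in sums}

def most_uncertain(pool, cents):
    """Unlabeled point closest to the decision boundary between the classes."""
    return min(pool, key=lambda x: abs(abs(x - cents[0]) - abs(x - cents[1])))

oracle = lambda x: 0 if x < 5.0 else 1        # hidden ground truth to query
labeled = [(1.0, 0), (9.0, 1)]                # two seed labels
pool = [2.0, 3.5, 4.9, 5.1, 6.5, 8.0]         # unlabeled pool

for _ in range(3):                            # query only three labels
    cents = centroids(labeled)
    x = most_uncertain(pool, cents)
    pool.remove(x)
    labeled.append((x, oracle(x)))            # ask the Oracle for this label

cents = centroids(labeled)
predict = lambda x: 0 if abs(x - cents[0]) < abs(x - cents[1]) else 1
```

The loop deliberately spends its limited Oracle budget near the decision boundary, which is the core idea the GQAL work builds on (its contribution is making the queries themselves shorter and more general).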

  13. Cooperation dynamics of generalized reciprocity in state-based social dilemmas

    Science.gov (United States)

    Stojkoski, Viktor; Utkovski, Zoran; Basnarkov, Lasko; Kocarev, Ljupco

    2018-05-01

    We introduce a framework for studying social dilemmas in networked societies where individuals follow a simple state-based behavioral mechanism based on generalized reciprocity, which is rooted in the principle "help anyone if helped by someone." Within this general framework, which applies to a wide range of social dilemmas including, among others, public goods, donation, and snowdrift games, we study the cooperation dynamics on a variety of complex network examples. By interpreting the studied model through the lenses of nonlinear dynamical systems, we show that cooperation through generalized reciprocity always emerges as the unique attractor in which the overall level of cooperation is maximized, while simultaneously exploitation of the participating individuals is prevented. The analysis elucidates the role of the network structure, here captured by a local centrality measure which uniquely quantifies the propensity of the network structure to cooperation by dictating the degree of cooperation displayed both at the microscopic and macroscopic level. We demonstrate the applicability of the analysis on a practical example by considering an interaction structure that couples a donation process with a public goods game.
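The behavioral rule itself is simple enough to simulate directly. The sketch below assumes a ring network, a deterministic update and a single initial cooperator, all illustrative choices rather than the paper's setup:

```python
# Ring network: each of n nodes interacts with its two neighbors.
n = 10
neighbors = {i: [(i - 1) % n, (i + 1) % n] for i in range(n)}

# State: does node i help its neighbors in the current round?
helping = {i: i == 0 for i in range(n)}   # a single initial cooperator

for _ in range(n):
    received = {i: False for i in range(n)}
    for i in range(n):
        if helping[i]:
            for j in neighbors[i]:
                received[j] = True
    # "Help anyone if helped by someone": next round's state is this
    # round's received help.
    helping = received

cooperators = sum(helping.values())       # cooperation has spread: 1 -> 5
```

Even this toy version shows the qualitative claim: starting from one helper, helping propagates through the network instead of dying out (on a cycle it locks onto a parity pattern covering half the nodes).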

  14. The role of the control variable in the heuristically based generalized perturbation theory (HGPT)

    International Nuclear Information System (INIS)

    Gandini, A.

    1995-01-01

The heuristically based generalized perturbation theory (HGPT) applied to the neutron field of a reactor system is discussed in relation to the criticality reset procedure. This procedure is implicit within the GPT methodology, corresponding to the so-called filtering of the importance function relevant to the neutron field from the fundamental mode contamination. It is common practice to use the so-called γ-mode filter. In order to account for any possible reset option, a general definition is introduced of an intensive control variable (ρ) entering into the governing equations, and correspondingly a fundamental ρ-mode filtering of the importance function is defined, relevant to the real criticality reset mechanism (control) adopted. A simple example illustrates the need, in many circumstances of interest, of taking into proper account the correct filtering so as to avoid significant inaccuracies in the sensitivity calculation results.

  15. A Wearable Respiratory Biofeedback System Based on Generalized Body Sensor Network

    Science.gov (United States)

    Liu, Guan-Zheng; Huang, Bang-Yu

    2011-01-01

    Abstract Wearable medical devices have enabled unobtrusive monitoring of vital signs and emerging biofeedback services in a pervasive manner. This article describes a wearable respiratory biofeedback system based on a generalized body sensor network (BSN) platform. The compact BSN platform was tailored for the strong requirements of overall system optimizations. A waist-worn biofeedback device was designed using the BSN. Extensive bench tests have shown that the generalized BSN worked as intended. In-situ experiments with 22 subjects indicated that the biofeedback device was discreet, easy to wear, and capable of offering wearable respiratory trainings. Pilot studies on wearable training patterns and resultant heart rate variability suggested that paced respirations at abdominal level and with identical inhaling/exhaling ratio were more appropriate for decreasing sympathetic arousal and increasing parasympathetic activities. PMID:21545293

  16. Generalized synchronization-based multiparameter estimation in modulated time-delayed systems

    Science.gov (United States)

    Ghosh, Dibakar; Bhattacharyya, Bidyut K.

    2011-09-01

    We propose a nonlinear active observer based generalized synchronization scheme for multiparameter estimation in time-delayed systems with periodic time delay. A sufficient condition for parameter estimation is derived using Krasovskii-Lyapunov theory. The suggested tool proves to be globally and asymptotically stable by means of Krasovskii-Lyapunov method. With this effective method, parameter identification and generalized synchronization of modulated time-delayed systems with all the system parameters unknown, can be achieved simultaneously. We restrict our study for multiple parameter estimation in modulated time-delayed systems with single state variable only. Theoretical proof and numerical simulation demonstrate the effectiveness and feasibility of the proposed technique. The block diagram of electronic circuit for multiple time delay system shows that the method is easily applicable in practical communication problems.

  17. A general framework for global asymptotic stability analysis of delayed neural networks based on LMI approach

    International Nuclear Information System (INIS)

    Cao Jinde; Ho, Daniel W.C.

    2005-01-01

In this paper, global asymptotic stability is discussed for neural networks with time-varying delay. Several new criteria in matrix inequality form are given to ascertain the uniqueness and global asymptotic stability of the equilibrium point for neural networks with time-varying delay, based on the Lyapunov method and the Linear Matrix Inequality (LMI) technique. The proposed LMI approach has the advantage of considering the difference between neuronal excitatory and inhibitory effects; it is also computationally efficient, as it can be solved numerically using recently developed interior-point algorithms. In addition, the proposed results generalize and improve previous works. The obtained criteria also combine two existing conditions into one generalized condition in matrix form. An illustrative example is given to demonstrate the effectiveness of the proposed results.

  18. A measurement-based generalized source model for Monte Carlo dose simulations of CT scans.

    Science.gov (United States)

    Ming, Xin; Feng, Yuanming; Liu, Ransheng; Yang, Chengwen; Zhou, Li; Zhai, Hezheng; Deng, Jun

    2017-03-07

    The goal of this study is to develop a generalized source model for accurate Monte Carlo dose simulations of CT scans based solely on the measurement data without a priori knowledge of scanner specifications. The proposed generalized source model consists of an extended circular source located at x-ray target level with its energy spectrum, source distribution and fluence distribution derived from a set of measurement data conveniently available in the clinic. Specifically, the central axis percent depth dose (PDD) curves measured in water and the cone output factors measured in air were used to derive the energy spectrum and the source distribution respectively with a Levenberg-Marquardt algorithm. The in-air film measurement of fan-beam dose profiles at fixed gantry was back-projected to generate the fluence distribution of the source model. A benchmarked Monte Carlo user code was used to simulate the dose distributions in water with the developed source model as beam input. The feasibility and accuracy of the proposed source model was tested on a GE LightSpeed and a Philips Brilliance Big Bore multi-detector CT (MDCT) scanners available in our clinic. In general, the Monte Carlo simulations of the PDDs in water and dose profiles along lateral and longitudinal directions agreed with the measurements within 4%/1 mm for both CT scanners. The absolute dose comparison using two CTDI phantoms (16 cm and 32 cm in diameters) indicated a better than 5% agreement between the Monte Carlo-simulated and the ion chamber-measured doses at a variety of locations for the two scanners. Overall, this study demonstrated that a generalized source model can be constructed based only on a set of measurement data and used for accurate Monte Carlo dose simulations of patients' CT scans, which would facilitate patient-specific CT organ dose estimation and cancer risk management in the diagnostic and therapeutic radiology.
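The Levenberg-Marquardt fitting step can be illustrated in one dimension: recover an effective attenuation coefficient from a synthetic percent depth dose curve. The mono-exponential PDD model and all numbers below are assumptions for illustration; the paper fits a full energy spectrum, not a single coefficient:

```python
import math

def pdd(mu, depths):
    """Toy mono-exponential percent depth dose (PDD) model."""
    return [100.0 * math.exp(-mu * z) for z in depths]

depths = [0.0, 2.0, 4.0, 6.0, 8.0, 10.0]      # depths in water, cm
measured = pdd(0.045, depths)                  # synthetic "measurement"

mu, lam = 0.1, 1e-3                            # initial guess and LM damping
for _ in range(60):
    model = pdd(mu, depths)
    residuals = [m - p for m, p in zip(measured, model)]
    jac = [-100.0 * z * math.exp(-mu * z) for z in depths]   # d(model)/d(mu)
    step = sum(j * r for j, r in zip(jac, residuals)) / (
        sum(j * j for j in jac) + lam)
    sse = sum(r * r for r in residuals)
    sse_new = sum((m - p) ** 2 for m, p in zip(measured, pdd(mu + step, depths)))
    if sse_new < sse:                          # accept step, relax damping
        mu, lam = mu + step, lam * 0.5
    else:                                      # reject step, increase damping
        lam *= 2.0
```

The damping term lam interpolates between a Gauss-Newton step (small lam) and a short gradient step (large lam), which is the essence of the Levenberg-Marquardt algorithm the study uses.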

  20. Predicting microRNA precursors with a generalized Gaussian components based density estimation algorithm

    Directory of Open Access Journals (Sweden)

    Wu Chi-Yeh

    2010-01-01

Full Text Available Abstract Background MicroRNAs (miRNAs) are short non-coding RNA molecules, which play an important role in post-transcriptional regulation of gene expression. There have been many efforts to discover miRNA precursors (pre-miRNAs) over the years. Recently, ab initio approaches have attracted more attention because they do not depend on homology information and provide broader applications than comparative approaches. Kernel based classifiers such as support vector machine (SVM) are extensively adopted in these ab initio approaches due to the prediction performance they achieved. On the other hand, logic based classifiers such as decision tree, of which the constructed model is interpretable, have attracted less attention. Results This article reports the design of a predictor of pre-miRNAs with a novel kernel based classifier named the generalized Gaussian density estimator (G2DE) based classifier. The G2DE is a kernel based algorithm designed to provide interpretability by utilizing a few but representative kernels for constructing the classification model. The performance of the proposed predictor has been evaluated with 692 human pre-miRNAs and has been compared with two kernel based and two logic based classifiers. The experimental results show that the proposed predictor is capable of achieving prediction performance comparable to those delivered by the prevailing kernel based classification algorithms, while providing the user with an overall picture of the distribution of the data set. Conclusion Software predictors that identify pre-miRNAs in genomic sequences have been exploited by biologists to facilitate molecular biology research in recent years. The G2DE employed in this study can deliver prediction accuracy comparable with the state-of-the-art kernel based machine learning algorithms. Furthermore, biologists can obtain valuable insights about the different characteristics of the sequences of pre-miRNAs with the models generated by the G2DE.
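The idea of classifying by per-class density estimates can be sketched with one Gaussian per class. This is a deliberate simplification of the G2DE, which uses a few representative generalized Gaussian kernels per class, and the feature values below are invented:

```python
import math

def fit_gaussian(xs):
    """Fit mean and variance of a 1-D Gaussian to the samples."""
    m = sum(xs) / len(xs)
    v = sum((x - m) ** 2 for x in xs) / len(xs)
    return m, v

def log_density(x, m, v):
    """Log of the Gaussian probability density at x."""
    return -0.5 * math.log(2 * math.pi * v) - (x - m) ** 2 / (2 * v)

# Hypothetical feature values (e.g. a hairpin-structure score) per class.
pos = [0.8, 0.9, 1.1, 1.0, 0.95]
neg = [0.1, 0.3, 0.2, 0.25, 0.15]

pos_m, pos_v = fit_gaussian(pos)
neg_m, neg_v = fit_gaussian(neg)

def predict(x):
    """Assign the class whose fitted density rates x higher."""
    return ("pre-miRNA"
            if log_density(x, pos_m, pos_v) > log_density(x, neg_m, neg_v)
            else "other")
```

Because the decision is read off fitted densities, the model stays interpretable: one can inspect the per-class means and variances directly, which is the property the G2DE generalizes to a small set of kernels.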

  1. Admission rates in a general practitioner-based versus a hospital specialist based, hospital-at-home model

    DEFF Research Database (Denmark)

    Mogensen, Christian Backer; Ankersen, Ejnar Skytte; Lindberg, Mats J

    2018-01-01

BACKGROUND: Hospital at home (HaH) is an alternative to acute admission for elderly patients. It is unclear whether they should be cared for primarily by a hospital specialist or by the patient's own general practitioner (GP). The study assessed whether a GP based model was more effective than...... Denmark, including +65 years old patients with an acute medical condition that required acute hospital in-patient care. The patients were randomly assigned to the hospital specialist based model or the GP model of HaH care. Five physical and cognitive performance tests were performed at inclusion and after 7...... CONCLUSIONS: The GP based HaH model was more effective than the hospital specialist model in avoiding hospital admissions within 7 days among elderly patients with an acute medical condition, with no differences in mental or physical recovery rates or deaths between the two models. REGISTRATION: No. NCT......

  2. An Exemplar-Based Multi-View Domain Generalization Framework for Visual Recognition.

    Science.gov (United States)

    Niu, Li; Li, Wen; Xu, Dong; Cai, Jianfei

    2018-02-01

In this paper, we propose a new exemplar-based multi-view domain generalization (EMVDG) framework for visual recognition by learning robust classifiers that are able to generalize well to an arbitrary target domain based on training samples with multiple types of features (i.e., multi-view features). In this framework, we aim to address two issues simultaneously. First, the distribution of training samples (i.e., the source domain) is often considerably different from that of testing samples (i.e., the target domain), so the performance of the classifiers learnt on the source domain may drop significantly on the target domain. Moreover, the testing data are often unseen during the training procedure. Second, when the training data are associated with multi-view features, the recognition performance can be further improved by exploiting the relation among multiple types of features. To address the first issue, considering that it has been shown that fusing multiple SVM classifiers can enhance the domain generalization ability, we build our EMVDG framework upon exemplar SVMs (ESVMs), in which a set of ESVM classifiers are learnt, each one trained on one positive training sample and all the negative training samples. When the source domain contains multiple latent domains, the learnt ESVM classifiers are expected to be grouped into multiple clusters. To address the second issue, we propose two approaches under the EMVDG framework based on the consensus principle and the complementary principle, respectively. Specifically, we propose an EMVDG_CO method by adding a co-regularizer to enforce the cluster structures of ESVM classifiers on different views to be consistent based on the consensus principle. Inspired by multiple kernel learning, we also propose another EMVDG_MK method by fusing the ESVM classifiers from different views based on the complementary principle. In addition, we further extend our EMVDG framework to exemplar-based multi-view domain

  3. Person-Based Versus Generalized Impulsivity Disinhibition in Frontotemporal Dementia and Alzheimer Disease.

    Science.gov (United States)

    Paholpak, Pongsatorn; Carr, Andrew R; Barsuglia, Joseph P; Barrows, Robin J; Jimenez, Elvira; Lee, Grace J; Mendez, Mario F

    2016-09-19

While much disinhibition in dementia results from generalized impulsivity, in behavioral variant frontotemporal dementia (bvFTD) disinhibition may also result from impaired social cognition. The aim was to deconstruct disinhibition and its neural correlates in bvFTD vs. early-onset Alzheimer's disease (eAD). Caregivers of 16 bvFTD and 21 matched eAD patients completed the Frontal Systems Behavior Scale disinhibition items. The disinhibition items were further categorized into (1) a "person-based" subscale, predominantly associated with violating social propriety and personal boundaries, and (2) a "generalized-impulsivity" subscale, which included nonspecific impulsive acts. Subscale scores were correlated with grey matter volumes from tensor-based morphometry on magnetic resonance images. In comparison to the eAD patients, the bvFTD patients developed greater person-based disinhibition (P... In dementia, violations of social propriety and personal boundaries involved the fronto-parieto-temporal network of Theory of Mind, whereas nonspecific disinhibition involved the OFC and aTL. © The Author(s) 2016.

  4. Verification of FPGA-based NPP I and C systems. General approach and techniques

    International Nuclear Information System (INIS)

    Andrashov, Anton; Kharchenko, Vyacheslav; Sklyar, Volodymir; Reva, Lubov; Siora, Alexander

    2011-01-01

This paper presents a general approach and techniques for design and verification of Field Programmable Gate Array (FPGA)-based Instrumentation and Control (I and C) systems for Nuclear Power Plants (NPP). Appropriate regulatory documents used for I and C systems design, development, verification and validation (V and V) are discussed, considering the latest international standards and guidelines. Typical development and V and V processes of FPGA electronic designs for FPGA-based NPP I and C systems are presented. Some safety-related features of the implementation process are discussed. Corresponding development artifacts related to design and implementation activities are outlined. An approach to test-based verification of FPGA electronic design algorithms, used in FPGA-based reactor trip systems, is proposed. The results of application of test-based techniques for assessment of FPGA electronic design algorithms for the reactor trip system (RTS) produced by Research and Production Corporation (RPC) 'Radiy' are presented. Some principles of invariant-oriented verification for FPGA-based safety-critical systems are outlined. (author)

  5. Modulation Transfer Function of a Gaussian Beam Based on the Generalized Modified Atmospheric Spectrum

    Directory of Open Access Journals (Sweden)

    Chao Gao

    2016-01-01

Full Text Available This paper investigates the modulation transfer function of a Gaussian beam propagating through a horizontal path in weak-fluctuation non-Kolmogorov turbulence. Mathematical expressions are obtained based on the generalized modified atmospheric spectrum, which includes the spectral power law value of non-Kolmogorov turbulence, the finite inner and outer scales of turbulence, and other optical parameters of the Gaussian beam. The numerical results indicate that atmospheric turbulence has a weaker negative effect on the wireless optical communication system as the inner scale of turbulence increases. Additionally, an increase in the outer scale of turbulence makes a Gaussian beam more strongly influenced by atmospheric turbulence.

  6. Classic tests of General Relativity described by brane-based spherically symmetric solutions

    Energy Technology Data Exchange (ETDEWEB)

    Cuzinatto, R.R. [Universidade Federal de Alfenas, Instituto de Ciencia e Tecnologia, Pocos de Caldas, MG (Brazil); Pompeia, P.J. [Departamento de Ciencia e Tecnologia Aeroespacial, Instituto de Fomento e Coordenacao Industrial, Sao Jose dos Campos, SP (Brazil); Departamento de Ciencia e Tecnologia Aeroespacial, Instituto Tecnologico de Aeronautica, Sao Jose dos Campos, SP (Brazil); De Montigny, M. [University of Alberta, Theoretical Physics Institute, Edmonton, AB (Canada); University of Alberta, Campus Saint-Jean, Edmonton, AB (Canada); Khanna, F.C. [University of Alberta, Theoretical Physics Institute, Edmonton, AB (Canada); TRIUMF, Vancouver, BC (Canada); University of Victoria, Department of Physics and Astronomy, PO box 1700, Victoria, BC (Canada); Silva, J.M.H. da [Universidade Estadual Paulista, Departamento de Fisica e Quimica, Guaratingueta, SP (Brazil)

    2014-08-15

We discuss a way to obtain information about higher dimensions from observations by studying a brane-based spherically symmetric solution. The three classic tests of General Relativity are analyzed in detail: the perihelion shift of the planet Mercury, the deflection of light by the Sun, and the gravitational redshift of atomic spectral lines. The braneworld version of these tests exhibits an additional parameter b related to the fifth coordinate. This constant b can be constrained by comparison with observational data for massive and massless particles. (orig.)

  7. A New Second-Order Generalized Integrator Based Quadrature Signal Generator With Enhanced Performance

    DEFF Research Database (Denmark)

    Xin, Zhen; Qin, Zian; Lu, Minghui

    2016-01-01

    Due to the simplicity and flexibility of the structure of the Second-Order Generalized Integrator based Quadrature Signal Generator (SOGI-QSG), it has been widely used over the past decade for many applications such as frequency estimation, grid synchronization, and harmonic extraction. However......, the SOGI-QSG will produce errors when its input signal contains a dc component or harmonic components with unknown frequencies. The accuracy of the signal detection methods using it may hence be compromised. To overcome the drawback, the First-Order System (FOS) concept is first used to illustrate...

  8. A data acquisition system based on general VME system in WinXP

    International Nuclear Information System (INIS)

    Ning Zhe; Qian Sen; Wang Yifang; Heng Yuekun; Zhang Jiawei; Fu Zaiwei; Qi Ming; Zheng Yangheng

    2010-01-01

A general data acquisition system based on VME boards in the WinXP environment was developed and encapsulated using LabVIEW with a graphical interface. By integrating LabVIEW virtual instrument panels and calling the Dynamic Link Libraries (DLLs) of the crate controller, the VME modules were encapsulated into independent function modules for convenience of use. The BLT, MBLT and CBLT readout modes for different VME boards were studied. The modules can be selected and modified easily according to the requirements of different tests. Finally, successful applications of the high-resolution data acquisition (DAQ) software in several experimental environments are reported. (authors)

  9. An Application to the Prediction of LOD Change Based on General Regression Neural Network

    Science.gov (United States)

    Zhang, X. H.; Wang, Q. J.; Zhu, J. J.; Zhang, H.

    2011-07-01

    Traditional prediction of the LOD (length of day) change was based on linear models, such as the least square model and the autoregressive technique, etc. Due to the complex non-linear features of the LOD variation, the performances of the linear model predictors are not fully satisfactory. This paper applies a non-linear neural network - general regression neural network (GRNN) model to forecast the LOD change, and the results are analyzed and compared with those obtained with the back propagation neural network and other models. The comparison shows that the performance of the GRNN model in the prediction of the LOD change is efficient and feasible.
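A GRNN is essentially one-pass Gaussian-kernel regression, so a minimal sketch fits in a few lines; the toy series below merely stands in for the LOD data, which is not reproduced here:

```python
import math

def grnn_predict(train_x, train_y, x, sigma):
    """General regression neural network: Gaussian-kernel weighted average."""
    weights = [math.exp(-(x - xi) ** 2 / (2 * sigma ** 2)) for xi in train_x]
    return sum(w * y for w, y in zip(weights, train_y)) / sum(weights)

# Toy nonlinear series standing in for LOD residuals (values invented).
train_x = [0.0, 1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0]
train_y = [math.sin(0.9 * x) for x in train_x]

# Interpolate the series at an unseen point.
pred = grnn_predict(train_x, train_y, 3.5, sigma=0.5)
```

Unlike back propagation networks, there is no iterative training: the only free parameter is the kernel width sigma, which is one reason the GRNN is attractive for operational forecasting.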

  10. Reactive power and voltage control based on general quantum genetic algorithms

    DEFF Research Database (Denmark)

    Vlachogiannis, Ioannis (John); Østergaard, Jacob

    2009-01-01

This paper presents an improved evolutionary algorithm based on quantum computing for optimal steady-state performance of power systems. However, the proposed general quantum genetic algorithm (GQ-GA) can be applied to various combinatorial optimization problems. In this study the GQ-GA determines...... techniques such as enhanced GA, multi-objective evolutionary algorithm and particle swarm optimization algorithms, as well as the classical primal-dual interior-point optimal power flow algorithm. The comparison demonstrates the ability of the GQ-GA to reach more optimal solutions....

  11. The FPGA realization of the general cellular automata based cryptographic hash functions: Performance and effectiveness

    Directory of Open Access Journals (Sweden)

    P. G. Klyucharev

    2014-01-01

Full Text Available In the paper the author considers hardware implementation of the GRACE-H family of general cellular automata based cryptographic hash functions. VHDL is used as the language and an Altera FPGA as the platform for hardware implementation. Performance and effectiveness of the FPGA implementations of the GRACE-H hash functions were compared with the Keccak (SHA-3), SHA-256, BLAKE, Groestl, JH and Skein hash functions. According to the performed tests, the performance of the hardware implementation of the GRACE-H family hash functions significantly (up to 12 times) exceeded that of the hardware implementations of the previously known hash functions, and the effectiveness of the hardware implementation was also better (up to 4 times).

  12. Switch Based Opportunistic Spectrum Access for General Primary User Traffic Model

    KAUST Repository

    Gaaloul, Fakhreddine

    2012-06-18

    This letter studies cognitive radio transceiver that can opportunistically use the available channels of primary user (PU). Specifically, we investigate and compare two different opportunistic channel access schemes. The first scheme applies when the secondary user (SU) has access to only one channel. The second scheme, based on channel switching mechanism, applies when the SU has access to multiple channels but can at a given time monitor and access only one channel. For these access schemes, we derive the exact analytical results for the novel performance metrics of average access time and average waiting time under general PU traffic models.

  14. General analytical procedure for determination of acidity parameters of weak acids and bases.

    Science.gov (United States)

    Pilarski, Bogusław; Kaliszan, Roman; Wyrzykowski, Dariusz; Młodzianowski, Janusz; Balińska, Agata

    2015-01-01

The paper presents a new convenient, inexpensive, and reagent-saving general methodology for determining pKa values for components of mixtures of weak organic acids and bases of diverse chemical classes in water solution, without the need to separate individual analytes. The data obtained from simple pH-metric microtitrations are numerically processed into reliable pKa values for each component of the mixture. Excellent agreement has been obtained between the determined pKa values and reference literature data for the compounds studied.
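The principle behind recovering pKa from pH-metric titration data can be sketched for a single monoprotic weak acid: solve the charge balance at a given degree of neutralization and read pKa off the half-equivalence point. The acid parameters below are assumed (acetic-acid-like), activities are ignored, and the paper's multicomponent numerical processing is not reproduced:

```python
import math

KW = 1e-14                               # water autoprotolysis constant

def ph_weak_acid(ca, cb, ka):
    """pH of a weak acid (total conc. ca, mol/L) after adding strong base cb,
    by solving the charge balance  cb + [H+] = [A-] + [OH-]  with bisection."""
    def balance(h):
        a_minus = ca * ka / (ka + h)     # dissociated fraction of the acid
        return cb + h - a_minus - KW / h
    lo, hi = 1e-14, 1.0                  # bracket [H+] between 1e-14 and 1 M
    for _ in range(100):                 # bisect on a log scale
        mid = math.sqrt(lo * hi)
        if balance(mid) > 0:
            hi = mid
        else:
            lo = mid
    return -math.log10(math.sqrt(lo * hi))

pka_true = 4.76                          # assumed analyte
ka, ca = 10 ** -pka_true, 0.01

# At half-equivalence (half the acid neutralized) the pH equals pKa.
ph_half = ph_weak_acid(ca, cb=ca / 2, ka=ka)
```

Reading pH at half-equivalence is the textbook special case; the paper's contribution is performing the equivalent numerical extraction for mixtures of acids and bases without separating the analytes.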

  15. The influence of population characteristics on variation in general practice based morbidity estimations

    Directory of Open Access Journals (Sweden)

    van den Dungen C

    2011-11-01

Full Text Available Abstract Background General practice based registration networks (GPRNs) provide information on morbidity rates in the population. Morbidity rate estimates from different GPRNs, however, reveal considerable, unexplained differences. We studied the range and variation in morbidity estimates, as well as the extent to which the differences in morbidity rates between general practices and networks change if socio-demographic characteristics of the listed patient populations are taken into account. Methods The variation in incidence and prevalence rates of thirteen diseases among six Dutch GPRNs and the influence of age, gender, socioeconomic status (SES), urbanization level, and ethnicity are analyzed using multilevel logistic regression analysis. Results are expressed in median odds ratios (MOR). Results We observed large differences in morbidity rate estimates both at the level of general practices and at the level of networks. The differences in SES, urbanization level and ethnicity distribution among the networks' practice populations are substantial. The variation in morbidity rate estimates among networks did not decrease after adjusting for these socio-demographic characteristics. Conclusion Socio-demographic characteristics of populations do not explain the differences in morbidity estimations among GPRNs.

  16. Probabilistic endowment appraisal system based upon the formalization of geologic decisions. General description

    International Nuclear Information System (INIS)

    Harris, D.P.; Carrigan, F.J.

    1980-04-01

    The objectives of this study include the design of an appraisal system which has the following features: estimates uranium endowment, not resources; formalizes the geologist's geoscience and assists the geologist in the exercise of his geoscience; describes the probability distribution for uranium endowment; diminishes or at least does not contribute to psychometric biases; provides for anonymous exchange among multiple experts of tenets of geoscience, but not the exchange of endowment estimates; provides an endowment estimate based upon geoscience only; is not easily gamed or manipulated; and provides for a quick and easy review of geoscience and resource information. This report is reflective of its title, a general description. The appraisal system resulting from this research is complex in the detail of its design and use. However, the major concepts which are reflected by the system are simple. The purpose of this report is to establish clearly these major concepts and the manner in which the system applies these concepts. Many details, refinements, and caveats are purposefully suppressed in order to provide this general description. While this suppression is a loss to some readers, it is a benefit to a wider spectrum of readers. Those interested in the nuts and bolts of the system will also want to read the user's manual which accompanies this general description

  17. Generalized outcome-based strategy classification: comparing deterministic and probabilistic choice models.

    Science.gov (United States)

    Hilbig, Benjamin E; Moshagen, Morten

    2014-12-01

    Model comparisons are a vital tool for disentangling which of several strategies a decision maker may have used--that is, which cognitive processes may have governed observable choice behavior. However, previous methodological approaches have been limited to models (i.e., decision strategies) with deterministic choice rules. As such, psychologically plausible choice models--such as evidence-accumulation and connectionist models--that entail probabilistic choice predictions could not be considered appropriately. To overcome this limitation, we propose a generalization of Bröder and Schiffer's (Journal of Behavioral Decision Making, 19, 361-380, 2003) choice-based classification method, relying on (1) parametric order constraints in the multinomial processing tree framework to implement probabilistic models and (2) minimum description length for model comparison. The advantages of the generalized approach are demonstrated through recovery simulations and an experiment. In explaining previous methods and our generalization, we maintain a nontechnical focus--so as to provide a practical guide for comparing both deterministic and probabilistic choice models.
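The paper's classification method combines multinomial-processing-tree order constraints with minimum description length. As a much simpler illustration of the underlying idea of penalized model comparison, the sketch below scores a deterministic strategy (fixed error rate, no free parameters) against a probabilistic model (one fitted choice probability) with a BIC-style two-part code length; this is an assumed stand-in, not the Fisher-information MDL criterion used in the paper, and all numbers are synthetic:

```python
import math

def mdl_bic(n_correct: int, n_trials: int, p: float, k_params: int) -> float:
    """BIC-style two-part code length: -ln(likelihood) + (k/2) ln(n).
    Smaller code length = preferred model (illustrative approximation)."""
    ll = n_correct * math.log(p) + (n_trials - n_correct) * math.log(1.0 - p)
    return -ll + 0.5 * k_params * math.log(n_trials)

n, hits = 100, 60  # choices consistent with the strategy's prediction

# Deterministic strategy: predicts one option with fixed error rate 0.1 (0 free params)
det = mdl_bic(hits, n, 0.9, k_params=0)
# Probabilistic model: choice probability fitted freely at its MLE (1 free param)
prob = mdl_bic(hits, n, hits / n, k_params=1)
print(round(det, 2), round(prob, 2))
```

Here the data sit far from the deterministic prediction, so the probabilistic model wins despite its parameter penalty; with data near 90% accuracy the deterministic account would be preferred.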

  18. Optimization of power plants management structure based on the generalized criteria of the efficiency

    Directory of Open Access Journals (Sweden)

    Salov Aleksey

    2017-01-01

    Full Text Available In this article, the operation of power plants under economic restructuring is analyzed to ensure their successful entry into the market. Five management structures are compared, including the current structure, a typical structure, and structures re-designed by the authors. Partial efficiency criteria of the management structures are developed that characterize their most important properties: balance, integrity, controllability and stability. Since these local criteria alone do not allow a definite conclusion about which of the analyzed structures is most effective, a global efficiency criterion is formulated. This global criterion of the comparative effectiveness of the management systems is based on the DEA method (Data Envelopment Analysis) and takes into account the complex of proposed local criteria. The considered management structures are ranked based on the generalized criterion of efficiency.
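In the single-input, single-output special case, a DEA (CCR) efficiency score reduces to each unit's output/input ratio relative to the best observed ratio; the general multi-factor case requires solving one linear program per unit. A sketch with hypothetical data for four management structures:

```python
def dea_ccr_efficiency(inputs, outputs):
    """CCR efficiency scores, single-input single-output case:
    each unit's output/input ratio divided by the best ratio observed.
    (With multiple inputs/outputs, DEA solves an LP per decision unit.)"""
    ratios = [o / i for i, o in zip(inputs, outputs)]
    best = max(ratios)
    return [r / best for r in ratios]

# Hypothetical structures: staff count (input) vs. tasks handled (output)
staff = [10, 8, 12, 9]
tasks = [50, 48, 54, 36]
scores = dea_ccr_efficiency(staff, tasks)
print([round(s, 3) for s in scores])  # [0.833, 1.0, 0.75, 0.667]
```

A score of 1.0 marks the efficient frontier; the other units are ranked by how far they fall below it.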

  19. Construction of road network vulnerability evaluation index based on general travel cost

    Science.gov (United States)

    Leng, Jun-qiang; Zhai, Jing; Li, Qian-wen; Zhao, Lin

    2018-03-01

    With the development of China's economy and the continuous improvement of its urban road networks, the vulnerability of the urban road network has attracted increasing attention. Based on general travel cost, this work constructs a vulnerability evaluation index for the urban road network and evaluates vulnerability from the perspective of the user's generalised travel cost. Firstly, the generalised travel cost model is constructed based on vehicle cost, travel time, and traveller comfort. Then, the network efficiency index is selected as the evaluation index of vulnerability: it is composed of the traffic volume and the generalised travel cost, both obtained from the equilibrium state of the network. In addition, the research analyses the influence of decreased traffic capacity, road-section attribute values, and road-section location on vulnerability. Finally, the vulnerability index is used to analyse a local area network of Harbin and verify its applicability.
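A generalised travel cost of the form described (vehicle cost + monetized travel time + comfort penalty) and one plausible flow-weighted form of a network efficiency index can be sketched as follows; all parameter values and the exact index form are illustrative assumptions, not taken from the paper:

```python
def generalized_travel_cost(distance_km, time_h, fuel_cost_per_km=0.6,
                            value_of_time=30.0, comfort_penalty=0.0):
    """Generalised cost = vehicle operating cost + monetized travel time
    + a comfort penalty (all in the same currency unit).
    Parameter defaults are hypothetical."""
    return distance_km * fuel_cost_per_km + time_h * value_of_time + comfort_penalty

def network_efficiency(flows, costs):
    """One plausible efficiency index: total equilibrium flow per unit of
    total generalised cost incurred on the network."""
    total_cost = sum(q * c for q, c in zip(flows, costs))
    return sum(flows) / total_cost

c1 = generalized_travel_cost(10, 0.25)                        # free-flow link
c2 = generalized_travel_cost(10, 0.40, comfort_penalty=2.0)   # degraded link
print(c1, c2)                                                 # 13.5 20.0
print(round(network_efficiency([800, 400], [c1, c2]), 4))
```

Recomputing the index after reducing a link's capacity (raising its time and comfort terms) and comparing the before/after values gives the vulnerability measure.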

  20. Fundamentals of the fuzzy logic-based generalized theory of decisions

    CERN Document Server

    Aliev, Rafik Aziz

    2013-01-01

    Every day decision making and decision making in complex human-centric systems are characterized by imperfect decision-relevant information. The main drawback of existing decision theories is their inability to deal with imperfect information and to model vague preferences. Actually, a paradigm of non-numerical probabilities in decision making has a long history and arose also in Keynes's analysis of uncertainty. There is a need for further generalization: a move to decision theories with perception-based imperfect information described in natural language (NL). The languages of new decision models for human-centric systems should not be based on binary logic but on human-centric computational schemes able to operate on NL-described information. Development of new theories is now possible due to an increased computational power of information processing systems which allows for computations with imperfect information, particularly, imprecise and partially true information, which are much more complex than comput...

  1. A Generalized Weight-Based Particle-In-Cell Simulation Scheme

    International Nuclear Information System (INIS)

    Lee, W.W.; Jenkins, T.G.; Ethier, S.

    2010-01-01

    A generalized weight-based particle simulation scheme suitable for simulating magnetized plasmas, where the zeroth-order inhomogeneity is important, is presented. The scheme is an extension of the perturbative simulation schemes developed earlier for particle-in-cell (PIC) simulations. The new scheme is designed to simulate both the perturbed distribution (δf) and the full distribution (full-F) within the same code. The development is based on the concept of multiscale expansion, which separates the scale lengths of the background inhomogeneity from those associated with the perturbed distributions. The potential advantage of such an arrangement is to minimize the particle noise by using δf in the linear stage of the simulation, while retaining the flexibility of a full-F capability in the fully nonlinear stage of the development, when signals associated with plasma turbulence are at a much higher level than those from the intrinsic particle noise.

  2. Real-time traffic sign recognition based on a general purpose GPU and deep-learning.

    Science.gov (United States)

    Lim, Kwangyong; Hong, Yongwon; Choi, Yeongwoo; Byun, Hyeran

    2017-01-01

    We present a General Purpose Graphics Processing Unit (GPGPU) based real-time traffic sign detection and recognition method that is robust against illumination changes. There have been many approaches to traffic sign recognition in various research fields; however, previous approaches faced several limitations under low illumination or wide variance of light conditions. To overcome these drawbacks and improve processing speeds, we propose a method that 1) is robust against illumination changes, 2) uses GPGPU-based real-time traffic sign detection, and 3) performs region detection and recognition using a hierarchical model. This method produces stable results in low illumination environments. Both detection and hierarchical recognition are performed in real time, and the proposed method achieves a 0.97 F1-score on our collective dataset, which uses the Vienna Convention traffic rules (Germany and South Korea).

  3. Testing General Relativity with Low-Frequency, Space-Based Gravitational-Wave Detectors

    Directory of Open Access Journals (Sweden)

    John G. Baker

    2013-09-01

    Full Text Available We review the tests of general relativity that will become possible with space-based gravitational-wave detectors operating in the ∼ 10^{-5} – 1 Hz low-frequency band. The fundamental aspects of gravitation that can be tested include the presence of additional gravitational fields other than the metric; the number and tensorial nature of gravitational-wave polarization states; the velocity of propagation of gravitational waves; the binding energy and gravitational-wave radiation of binaries, and therefore the time evolution of binary inspirals; the strength and shape of the waves emitted from binary mergers and ringdowns; the true nature of astrophysical black holes; and much more. The strength of this science alone calls for the swift implementation of a space-based detector; the remarkable richness of astrophysics, astronomy, and cosmology in the low-frequency gravitational-wave band make the case even stronger.

  4. Internet-based remote consultations - general practitioner experience and attitudes in Norway and Germany.

    Science.gov (United States)

    Kampik, Timotheus; Larsen, Frank; Bellika, Johan Gustav

    2015-01-01

    The objective of the study was to identify experiences and attitudes of German and Norwegian general practitioners (GPs) towards Internet-based remote consultation solutions supporting communication between GPs and patients in the context of the German and Norwegian healthcare systems. Interviews with four German and five Norwegian GPs were conducted. The results were qualitatively analyzed. All interviewed GPs stated they would like to make use of Internet-based remote consultations in the future. Current experiences with remote consultations are existent to a limited degree. No GP reported to use a comprehensive remote consultation solution. The main features GPs would like to see in a remote consultation solution include asynchronous exchange of text messages, video conferencing with text chat, scheduling of remote consultation appointments, secure login and data transfer and the integration of the remote consultation solution into the GP's EHR system.

  5. Elder abuse: The role of general practitioners in community-based screening and multidisciplinary action

    Science.gov (United States)

    Ries, Nola M; Mansfield, Elise

    2018-04-01

    There are growing calls for elder abuse screening to be conducted by a range of community-based service providers, including general practitioners (GPs), practice nurses, home care workers and lawyers. Improved screening may be a valuable first step towards improving elder abuse detection and response; however, practitioners need evidence-based strategies for screening and follow-up. This article summarises several brief screening tools for various forms of elder abuse. Screening tool properties and evidence gaps are noted. As elder abuse often requires multidisciplinary responses, initiatives to connect health, legal and other service providers are highlighted. GPs are trusted professionals who are well placed to identify older patients at risk of, or experiencing, various forms of abuse. They should be aware of available screening tools and consider how best to incorporate them into their own practice. They also play an important role in multidisciplinary action to address elder abuse.

  6. Multilayer quantum secret sharing based on GHZ state and generalized Bell basis measurement in multiparty agents

    Science.gov (United States)

    Wang, Xiao-Jun; An, Long-Xi; Yu, Xu-Tao; Zhang, Zai-Chen

    2017-10-01

    A multilayer quantum secret sharing protocol based on the GHZ state is proposed. Alice has a secret carried by a quantum state and wants to distribute this secret to multiple agent nodes in the network. In this protocol, the secret is transmitted and shared layer by layer from the root, Alice, to layered agents. The number of agents in each layer is a geometric sequence with a specific common ratio. By sharing GHZ maximally entangled states and making generalized Bell basis measurements, a one-qubit state can be distributed to multiparty agents and the secret is shared. Only when all agents at the last layer cooperate can the secret be recovered. Compared with other protocols based on entangled states, this protocol adopts a layered construction so that the secret can be distributed to more agents with fewer GHZ-state particles. This quantum secret sharing protocol can be used in wireless networks to ensure the security of information delivery.

  7. Testing General Relativity with Low-Frequency, Space-Based Gravitational-Wave Detectors.

    Science.gov (United States)

    Gair, Jonathan R; Vallisneri, Michele; Larson, Shane L; Baker, John G

    2013-01-01

    We review the tests of general relativity that will become possible with space-based gravitational-wave detectors operating in the ∼ 10^{-5} – 1 Hz low-frequency band. The fundamental aspects of gravitation that can be tested include the presence of additional gravitational fields other than the metric; the number and tensorial nature of gravitational-wave polarization states; the velocity of propagation of gravitational waves; the binding energy and gravitational-wave radiation of binaries, and therefore the time evolution of binary inspirals; the strength and shape of the waves emitted from binary mergers and ringdowns; the true nature of astrophysical black holes; and much more. The strength of this science alone calls for the swift implementation of a space-based detector; the remarkable richness of astrophysics, astronomy, and cosmology in the low-frequency gravitational-wave band make the case even stronger.

  8. A residual life prediction model based on the generalized σ -N curved surface

    Directory of Open Access Journals (Sweden)

    Zongwen AN

    2016-06-01

    Full Text Available In order to investigate the change rule of the residual life of a structure under random repeated load, firstly, starting from the statistical meaning of random repeated load, the joint probability density function of maximum stress and minimum stress is derived based on the characteristics of order statistics (the maximum and minimum order statistics); then, based on the equation of the generalized σ-N curved surface and considering the influence of the number of load cycles on fatigue life, a relationship among minimum stress, maximum stress and residual life, that is the σmin(n)-σmax(n)-Nr(n) curved surface model, is established; finally, the validity of the proposed model is demonstrated by a practical case. The result shows that the proposed model can reflect the influence of maximum stress and minimum stress on the residual life of a structure under random repeated load, which can provide a theoretical basis for life prediction and reliability assessment of structures.
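The derivation starts from the joint density of the minimum and maximum of n i.i.d. stress samples, which has a standard closed form; a sketch, checked against the uniform(0,1) case where the density is n(n-1)(v-u)^(n-2):

```python
def joint_pdf_min_max(u, v, n, pdf, cdf):
    """Joint density of the minimum u and maximum v of n i.i.d. draws:
        f(u, v) = n (n-1) f(u) f(v) [F(v) - F(u)]^(n-2)   for u < v,
    the order-statistic result the stress model builds on."""
    if u >= v:
        return 0.0
    return n * (n - 1) * pdf(u) * pdf(v) * (cdf(v) - cdf(u)) ** (n - 2)

# Uniform(0,1) check: f(u,v) = n(n-1)(v-u)^(n-2)
uni_pdf = lambda x: 1.0 if 0.0 <= x <= 1.0 else 0.0
uni_cdf = lambda x: min(max(x, 0.0), 1.0)
print(joint_pdf_min_max(0.2, 0.7, 2, uni_pdf, uni_cdf))  # 2.0
print(joint_pdf_min_max(0.2, 0.7, 4, uni_pdf, uni_cdf))  # ~3.0 (12 * 0.5^2)
```

Substituting the actual stress distribution for `uni_pdf`/`uni_cdf` gives the joint density of minimum and maximum stress used in the model.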

  9. A Generalized Pivotal Quantity Approach to Analytical Method Validation Based on Total Error.

    Science.gov (United States)

    Yang, Harry; Zhang, Jianchun

    2015-01-01

    The primary purpose of method validation is to demonstrate that the method is fit for its intended use. Traditionally, an analytical method is deemed valid if its performance characteristics such as accuracy and precision are shown to meet prespecified acceptance criteria. However, these acceptance criteria are not directly related to the method's intended purpose, which is usually a guarantee that a high percentage of the test results of future samples will be close to their true values. Alternate "fit for purpose" acceptance criteria based on the concept of total error have been increasingly used. Such criteria allow for assessing method validity, taking into account the relationship between accuracy and precision. Although several statistical test methods have been proposed in the literature to test the "fit for purpose" hypothesis, the majority of the methods are not designed to protect against the risk of accepting unsuitable methods, thus having the potential to cause uncontrolled consumer's risk. In this paper, we propose a test method based on generalized pivotal quantity inference. Through simulation studies, the performance of the method is compared to five existing approaches. The results show that both the new method and the method based on β-content tolerance interval with a confidence level of 90%, hereafter referred to as the β-content (0.9) method, control Type I error and thus consumer's risk, while the other existing methods do not. It is further demonstrated that the generalized pivotal quantity method is less conservative than the β-content (0.9) method when the analytical methods are biased, whereas it is more conservative when the analytical methods are unbiased. Therefore, selection of either the generalized pivotal quantity or β-content (0.9) method for an analytical method validation depends on the accuracy of the analytical method. It is also shown that the generalized pivotal quantity method has better asymptotic properties than all of the current
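Generalized pivotal quantities for a normal mean and standard deviation can be simulated from summary statistics using the standard construction G_σ = s√(n-1)/√U with U ~ χ²(n-1) and G_μ = x̄ - G_σ·Z/√n with Z ~ N(0,1). The sketch below shows only this building block, not the paper's total-error test, and all numbers are hypothetical:

```python
import math
import random

def gpq_mean_sd(xbar, s, n, n_sim=5000, seed=1):
    """Simulate generalized pivotal quantities for a normal mean and SD
    from summary statistics (standard construction, illustrative only):
        G_sigma = s * sqrt(n-1) / sqrt(U),   U ~ chi^2_{n-1}
        G_mu    = xbar - G_sigma * Z / sqrt(n),   Z ~ N(0, 1)
    """
    rng = random.Random(seed)
    g_mu, g_sigma = [], []
    for _ in range(n_sim):
        u = rng.gammavariate((n - 1) / 2.0, 2.0)  # chi-square(n-1) draw
        z = rng.gauss(0.0, 1.0)
        gs = s * math.sqrt(n - 1) / math.sqrt(u)
        g_sigma.append(gs)
        g_mu.append(xbar - gs * z / math.sqrt(n))
    return g_mu, g_sigma

g_mu, g_sigma = gpq_mean_sd(xbar=100.2, s=1.5, n=12)
g_mu.sort()
# Empirical 2.5% and 97.5% quantiles give a ~95% generalized interval
print(round(g_mu[124], 2), round(g_mu[-125], 2))
```

A total-error criterion would then be evaluated over the joint (G_μ, G_σ) draws rather than over the mean alone.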

  10. Optimization of the dressing parameters in cylindrical grinding based on a generalized utility function

    Science.gov (United States)

    Aleksandrova, Irina

    2016-01-01

    The existing studies, concerning the dressing process, focus on the major influence of the dressing conditions on the grinding response variables. However, the choice of the dressing conditions is often made, based on the experience of the qualified staff or using data from reference books. The optimal dressing parameters, which are only valid for the particular methods and dressing and grinding conditions, are also used. The paper presents a methodology for optimization of the dressing parameters in cylindrical grinding. The generalized utility function has been chosen as an optimization parameter. It is a complex indicator determining the economic, dynamic and manufacturing characteristics of the grinding process. The developed methodology is implemented for the dressing of aluminium oxide grinding wheels by using experimental diamond roller dressers with different grit sizes made of medium- and high-strength synthetic diamonds type ??32 and ??80. To solve the optimization problem, a model of the generalized utility function is created which reflects the complex impact of dressing parameters. The model is built based on the results from the conducted complex study and modeling of the grinding wheel lifetime, cutting ability, production rate and cutting forces during grinding. They are closely related to the dressing conditions (dressing speed ratio, radial in-feed of the diamond roller dresser and dress-out time), the diamond roller dresser grit size/grinding wheel grit size ratio, the type of synthetic diamonds and the direction of dressing. Some dressing parameters are determined for which the generalized utility function has a maximum and which guarantee an optimum combination of the following: the lifetime and cutting ability of the abrasive wheels, the tangential cutting force magnitude and the production rate of the grinding process. The results obtained prove the possibility of control and optimization of grinding by selecting particular dressing

  11. The glmS ribozyme cofactor is a general acid-base catalyst.

    Science.gov (United States)

    Viladoms, Júlia; Fedor, Martha J

    2012-11-21

    The glmS ribozyme is the first natural self-cleaving ribozyme known to require a cofactor. The d-glucosamine-6-phosphate (GlcN6P) cofactor has been proposed to serve as a general acid, but its role in the catalytic mechanism has not been established conclusively. We surveyed GlcN6P-like molecules for their ability to support self-cleavage of the glmS ribozyme and found a strong correlation between the pH dependence of the cleavage reaction and the intrinsic acidity of the cofactors. For cofactors with low binding affinities, the contribution to rate enhancement was proportional to their intrinsic acidity. This linear free-energy relationship between cofactor efficiency and acid dissociation constants is consistent with a mechanism in which the cofactors participate directly in the reaction as general acid-base catalysts. A high value for the Brønsted coefficient (β ~ 0.7) indicates that a significant amount of proton transfer has already occurred in the transition state. The glmS ribozyme is the first self-cleaving RNA to use an exogenous acid-base catalyst.
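The Brønsted coefficient reported in the abstract (β ≈ 0.7) is the magnitude of the slope of log k against cofactor pKa across the GlcN6P-like series. A minimal least-squares sketch on synthetic data (the pKa/rate values below are invented purely to illustrate the fit):

```python
def bronsted_beta(pkas, log_k):
    """Least-squares slope of log10(k) vs. pKa; for general acid catalysis
    the slope is negative and its magnitude is the Bronsted coefficient."""
    n = len(pkas)
    mx = sum(pkas) / n
    my = sum(log_k) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(pkas, log_k))
             / sum((x - mx) ** 2 for x in pkas))
    return -slope

# Synthetic cofactor series: each unit drop in pKa raises log k by 0.7
pkas = [5.8, 6.6, 7.4, 8.2]
log_k = [-0.7 * p + 3.0 for p in pkas]
print(round(bronsted_beta(pkas, log_k), 3))  # 0.7
```

A β near 1 would mean complete proton transfer in the transition state; the value of ~0.7 indicates substantial but incomplete transfer, as the abstract notes.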

  12. Fast digital envelope detector based on generalized harmonic wavelet transform for BOTDR performance improvement

    International Nuclear Information System (INIS)

    Yang, Wei; Yang, Yuanhong; Yang, Mingwei

    2014-01-01

    We propose a fast digital envelope detector (DED) based on the generalized harmonic wavelet transform to improve the performance of coherent heterodyne Brillouin optical time domain reflectometry. The proposed DED can obtain undistorted envelopes due to the zero phase-shift ideal bandpass filter (BPF) characteristics of the generalized harmonic wavelet (GHW). Its envelope average ability benefits from the passband designing flexibility of the GHW, and its demodulation speed can be accelerated by using a fast algorithm that only analyses signals of interest within the passband of the GHW with reduced computational complexity. The feasibility and advantage of the proposed DED are verified by simulations and experiments. With an optimized bandwidth, Brillouin frequency shift accuracy improvements of 19.4% and 11.14%, as well as envelope demodulation speed increases of 39.1% and 24.9%, are experimentally attained by the proposed DED over Hilbert transform (HT) and Morlet wavelet transform (MWT) based DEDs, respectively. Spatial resolution by the proposed DED is undegraded, which is identical to the undegraded value by HT-DED with an allpass filter characteristic and better than the degraded value by MWT-DED with a Gaussian BPF characteristic. (paper)
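The zero-phase ideal band-pass behaviour attributed to the GHW can be mimicked with an FFT-domain analytic-signal filter: keep only the positive-frequency bins inside the passband (doubled), then take the magnitude of the inverse FFT. This sketch is not the wavelet transform itself, and the signal parameters are illustrative:

```python
import numpy as np

def bandpass_envelope(x, fs, f_lo, f_hi):
    """Envelope via a zero-phase ideal band-pass filter: retain only the
    positive-frequency bins in [f_lo, f_hi], doubled to form the analytic
    signal, then take the magnitude of the inverse FFT."""
    n = len(x)
    X = np.fft.fft(x)
    freqs = np.fft.fftfreq(n, d=1.0 / fs)
    mask = np.where((freqs >= f_lo) & (freqs <= f_hi), 2.0, 0.0)
    return np.abs(np.fft.ifft(X * mask))

fs, n = 1000.0, 1000
t = np.arange(n) / fs
x = np.cos(2 * np.pi * 100.0 * t)     # 100 Hz carrier, unit amplitude
env = bandpass_envelope(x, fs, 80.0, 120.0)
print(env.min().round(6), env.max().round(6))  # both ~1.0
```

Because the filter is applied as a real, non-negative frequency-domain mask, it introduces no phase shift, which is the property the DED exploits to keep the demodulated envelope undistorted.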

  13. DNA Processing and Reassembly on General Purpose FPGA-based Development Boards

    Directory of Open Access Journals (Sweden)

    SZÁSZ Csaba

    2017-05-01

    Full Text Available The great majority of researchers involved in microelectronics generally agree that many scientific challenges in the life sciences carry a substantial computational requirement that must be met before scientific progress can be made. The current trend in Deoxyribonucleic Acid (DNA) computing technologies is to develop special hardware platforms capable of providing the needed processing performance at lower cost. In this endeavor, FPGA-based (Field Programmable Gate Array) configurations aimed at accelerating genome sequencing and reassembly play a leading role. This paper emphasizes the benefits and advantages of using general purpose FPGA-based development boards in DNA reassembly applications, as opposed to special hardware architecture solutions. An original approach is unfolded which outlines the versatility of high-performance, ready-to-use manufacturer development platforms endowed with powerful hardware resources fully optimized for high-speed processing applications. The theoretical arguments are supported by an intuitive implementation example in which the designer is relieved of any hardware development effort and can concentrate exclusively on software design issues, greatly reducing application development cycles. The experiments prove that such boards available on the market are suitable for a wide range of DNA sequencing and reassembly applications.

  14. Progressive Amalgamation of Building Clusters for Map Generalization Based on Scaling Subgroups

    Directory of Open Access Journals (Sweden)

    Xianjin He

    2018-03-01

    Full Text Available Map generalization utilizes transformation operations to derive smaller-scale maps from larger-scale maps, and is a key procedure for the modelling and understanding of geographic space. Studies to date have largely applied a fixed tolerance to aggregate clustered buildings into a single object, resulting in the loss of details that meet cartographic constraints and may be of importance for users. This study aims to develop a method that amalgamates clustered buildings gradually without significant modification of geometry, while preserving the map details as much as possible under cartographic constraints. The amalgamation process consists of three key steps. First, individual buildings are grouped into distinct clusters by using the graph-based spatial clustering application with random forest (GSCARF) method. Second, building clusters are decomposed into scaling subgroups according to homogeneity with regard to the mean distance of subgroups. Thus, hierarchies of building clusters can be derived based on scaling subgroups. Finally, an amalgamation operation is progressively performed from the bottom-level subgroups to the top-level subgroups using the maximum distance of each subgroup as the amalgamating tolerance instead of using a fixed tolerance. As a consequence of this step, generalized intermediate scaling results are available, which can form the multi-scale representation of buildings. The experimental results show that the proposed method can generate amalgams with correct details, statistical area balance and orthogonal shape while satisfying cartographic constraints (e.g., minimum distance and minimum area).
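The GSCARF method itself is beyond a short example, but the basic grouping step (buildings whose mutual distance falls within a tolerance join one cluster) can be sketched with single-linkage grouping via union-find; the coordinates and tolerance below are hypothetical:

```python
from math import hypot

def cluster_by_distance(points, tol):
    """Single-linkage grouping with a distance threshold (union-find).
    A minimal stand-in for the paper's GSCARF clustering step."""
    parent = list(range(len(points)))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path halving
            i = parent[i]
        return i

    for i, (xi, yi) in enumerate(points):
        for j in range(i + 1, len(points)):
            xj, yj = points[j]
            if hypot(xi - xj, yi - yj) <= tol:
                parent[find(i)] = find(j)

    groups = {}
    for i in range(len(points)):
        groups.setdefault(find(i), []).append(i)
    return list(groups.values())

# Two building clusters ~1 unit wide, ~10 units apart
pts = [(0, 0), (1, 0), (0.5, 0.8), (10, 0), (11, 0.3)]
print(cluster_by_distance(pts, tol=2.0))  # [[0, 1, 2], [3, 4]]
```

In the paper's scheme, each cluster would then be decomposed into scaling subgroups, with the maximum intra-subgroup distance serving as the progressive amalgamation tolerance.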

  15. Generalized formulation of an encryption system based on a joint transform correlator and fractional Fourier transform

    International Nuclear Information System (INIS)

    Vilardy, Juan M; Millán, María S; Pérez-Cabré, Elisabet; Torres, Yezid

    2014-01-01

    We propose a generalization of the encryption system based on double random phase encoding (DRPE) and a joint transform correlator (JTC), from the Fourier domain to the fractional Fourier domain (FrFD) by using the fractional Fourier operators, such as the fractional Fourier transform (FrFT), fractional translation, fractional convolution and fractional correlation. Image encryption systems based on a JTC architecture in the FrFD usually produce low quality decrypted images. In this work, we present two approaches to improve the quality of the decrypted images, which are based on nonlinear processing applied to the encrypted function (that contains the joint fractional power spectrum, JFPS) and the nonzero-order JTC in the FrFD. When the two approaches are combined, the quality of the decrypted image is higher. In addition to the advantages introduced by the implementation of the DRPE using a JTC, we demonstrate that the proposed encryption system in the FrFD preserves the shift-invariance property of the JTC-based encryption system in the Fourier domain, with respect to the lateral displacement of both the key random mask in the decryption process and the retrieval of the primary image. The feasibility of this encryption system is verified and analyzed by computer simulations. (paper)

  16. Spatial downscaling of soil prediction models based on weighted generalized additive models in smallholder farm settings.

    Science.gov (United States)

    Xu, Yiming; Smith, Scot E; Grunwald, Sabine; Abd-Elrahman, Amr; Wani, Suhas P; Nair, Vimala D

    2017-09-11

    Digital soil mapping (DSM) is gaining momentum as a technique to help smallholder farmers secure soil security and food security in developing regions. However, communication of digital soil mapping information between diverse audiences becomes problematic due to the inconsistent scale of DSM information. Spatial downscaling can make use of accessible soil information at relatively coarse spatial resolution to provide valuable soil information at relatively fine spatial resolution. The objective of this research was to disaggregate coarse spatial resolution base maps of soil exchangeable potassium (Kex) and soil total nitrogen (TN) into fine spatial resolution downscaled maps using weighted generalized additive models (GAMs) in two smallholder villages in South India. By incorporating fine spatial resolution spectral indices in the downscaling process, the downscaled soil maps not only conserve the spatial information of the coarse spatial resolution soil maps but also depict the spatial details of soil properties at fine spatial resolution. The results of this study demonstrated that the difference between the fine spatial resolution downscaled maps and the fine spatial resolution base maps is smaller than the difference between the coarse spatial resolution base maps and the fine spatial resolution base maps. The appropriate and economical strategy to promote the DSM technique in smallholder farms is to develop relatively coarse spatial resolution soil prediction maps, or utilize available coarse spatial resolution soil maps at the regional scale, and to disaggregate these maps into fine spatial resolution downscaled soil maps at the farm scale.

  17. Nitrous oxide-based techniques versus nitrous oxide-free techniques for general anaesthesia.

    Science.gov (United States)

    Sun, Rao; Jia, Wen Qin; Zhang, Peng; Yang, KeHu; Tian, Jin Hui; Ma, Bin; Liu, Yali; Jia, Run H; Luo, Xiao F; Kuriyama, Akira

    2015-11-06

    Nitrous oxide has been used for over 160 years for the induction and maintenance of general anaesthesia. It has been used as a sole agent but is most often employed as part of a technique using other anaesthetic gases, intravenous agents, or both. Its low tissue solubility (and therefore rapid kinetics), low cost, and low rate of cardiorespiratory complications have made nitrous oxide by far the most commonly used general anaesthetic. The accumulating evidence regarding adverse effects of nitrous oxide administration has led many anaesthetists to question its continued routine use in a variety of operating room settings. Adverse events may result from both the biological actions of nitrous oxide and the fact that to deliver an effective dose, nitrous oxide, which is a relatively weak anaesthetic agent, needs to be given in high concentrations that restrict oxygen delivery (for example, a common mixture is 30% oxygen with 70% nitrous oxide). As well as the risk of low blood oxygen levels, concerns have also been raised regarding the risk of compromising the immune system, impaired cognition, postoperative cardiovascular complications, bowel obstruction from distention, and possible respiratory compromise. To determine if nitrous oxide-based anaesthesia results in similar outcomes to nitrous oxide-free anaesthesia in adults undergoing surgery. We searched the Cochrane Central Register of Controlled Trials (CENTRAL; 2014 Issue 10); MEDLINE (1966 to 17 October 2014); EMBASE (1974 to 17 October 2014); and ISI Web of Science (1974 to 17 October 2014). We also searched the reference lists of relevant articles, conference proceedings, and ongoing trials up to 17 October 2014 on specific websites (http://clinicaltrials.gov/, http://controlled-trials.com/, and http://www.centerwatch.com). We included randomized controlled trials (RCTs) comparing general anaesthesia where nitrous oxide was part of the anaesthetic technique used for the induction or maintenance of general

  18. General active space commutator-based coupled cluster theory of general excitation rank for electronically excited states: implementation and application to ScH.

    Science.gov (United States)

    Hubert, Mickaël; Olsen, Jeppe; Loras, Jessica; Fleig, Timo

    2013-11-21

    We present a new implementation of general excitation rank coupled cluster theory for electronically excited states based on the single-reference multi-reference formalism. The method may include active-space selected and/or general higher excitations by means of the general active space concept. It may employ molecular integrals over the four-component Lévy-Leblond Hamiltonian or the relativistic spin-orbit-free four-component Hamiltonian of Dyall. In an initial application to ground- and excited states of the scandium monohydride molecule we report spectroscopic constants using basis sets of up to quadruple-zeta quality and up to full iterative triple excitations in the cluster operators. Effects due to spin-orbit interaction are evaluated using two-component multi-reference configuration interaction for assessing the accuracy of the coupled cluster results.

  19. Molecular basis sets - a general similarity-based approach for representing chemical spaces.

    Science.gov (United States)

    Raghavendra, Akshay S; Maggiora, Gerald M

    2007-01-01

    A new method, based on generalized Fourier analysis, is described that utilizes the concept of "molecular basis sets" to represent chemical space within an abstract vector space. The basis vectors in this space are abstract molecular vectors. Inner products among the basis vectors are determined using an ansatz that associates molecular similarities between pairs of molecules with their corresponding inner products. Moreover, the fact that similarities between pairs of molecules are, in essentially all cases, nonzero implies that the abstract molecular basis vectors are nonorthogonal, but since the similarity of a molecule with itself is unity, the molecular vectors are normalized to unity. A symmetric orthogonalization procedure, which optimally preserves the character of the original set of molecular basis vectors, is used to construct appropriate orthonormal basis sets. Molecules can then be represented, in general, by sets of orthonormal "molecule-like" basis vectors within a proper Euclidean vector space. However, the dimension of the space can become quite large. Thus, the work presented here assesses the effect of basis set size on a number of properties including the average squared error and average norm of molecular vectors represented in the space; the results clearly show the expected reduction in average squared error and increase in average norm as the basis set size is increased. Several distance-based statistics are also considered. These include the distribution of distances and their differences with respect to basis sets of differing size and several comparative distance measures such as Spearman rank correlation and Kruskal stress. All of the measures show that, even though the dimension can be high, the chemical spaces they represent, nonetheless, behave in a well-controlled and reasonable manner. Other abstract vector spaces analogous to that described here can also be constructed provided that the appropriate inner products can be directly
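
    The symmetric orthogonalization step has a compact linear-algebra form. As a minimal numpy sketch (the similarity values below are invented for illustration, not taken from the study), Löwdin's S^{-1/2} transform turns a normalized but non-orthogonal similarity matrix into an orthonormal basis while staying as close as possible to the original vectors:

```python
import numpy as np

# Invented pairwise similarity matrix S for three molecules: symmetric,
# unit diagonal (self-similarity is 1), nonzero off-diagonal entries, so
# the molecular basis vectors are normalized but non-orthogonal.
S = np.array([[1.0, 0.6, 0.3],
              [0.6, 1.0, 0.5],
              [0.3, 0.5, 1.0]])

# Symmetric (Loewdin) orthogonalization: X = S^{-1/2}. The transformed
# basis has Gram matrix X^T S X = I, and among all orthogonalizations it
# is the one closest (in a least-squares sense) to the original basis.
w, U = np.linalg.eigh(S)
X = U @ np.diag(w ** -0.5) @ U.T

print(np.round(X.T @ S @ X, 10))   # identity: the new basis is orthonormal
```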

  20. Method for mapping population-based case-control studies: an application using generalized additive models

    Directory of Open Access Journals (Sweden)

    Aschengrau Ann

    2006-06-01

    Background Mapping spatial distributions of disease occurrence and risk can serve as a useful tool for identifying exposures of public health concern. Disease registry data are often mapped by town or county of diagnosis and contain limited data on covariates. These maps often possess poor spatial resolution, the potential for spatial confounding, and the inability to consider latency. Population-based case-control studies can provide detailed information on residential history and covariates. Results Generalized additive models (GAMs) provide a useful framework for mapping point-based epidemiologic data. Smoothing on location while controlling for covariates produces adjusted maps. We generate maps of odds ratios using the entire study area as a reference. We smooth using a locally weighted regression smoother (loess), a method that combines the advantages of nearest neighbor and kernel methods. We choose an optimal degree of smoothing by minimizing Akaike's Information Criterion. We use a deviance-based test to assess the overall importance of location in the model and pointwise permutation tests to locate regions of significantly increased or decreased risk. The method is illustrated with synthetic data and data from a population-based case-control study, using S-Plus and ArcView software. Conclusion Our goal is to develop practical methods for mapping population-based case-control and cohort studies. The method described here performs well for our synthetic data, reproducing important features of the data and adequately controlling the covariate. When applied to the population-based case-control data set, the method suggests spatial confounding and identifies statistically significant areas of increased and decreased odds ratios.
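
    The loess building block of this approach is simple to state: at each evaluation point, fit a weighted linear regression with tricube weights over the nearest neighbours. A one-dimensional numpy sketch on synthetic data (the study itself smooths over two-dimensional location inside a GAM; this only illustrates the smoother):

```python
import numpy as np

def loess(x, y, x0, span=0.3):
    """Local linear regression at x0 with tricube nearest-neighbour
    weights: a minimal version of the loess smoother."""
    n = len(x)
    k = max(2, int(np.ceil(span * n)))           # nearest-neighbour bandwidth
    d = np.abs(x - x0)
    h = np.sort(d)[k - 1]                        # distance to k-th neighbour
    w = np.clip(1 - (d / h) ** 3, 0, None) ** 3  # tricube kernel weights
    A = np.vstack([np.ones(n), x - x0]).T        # local linear design
    W = np.diag(w)
    beta = np.linalg.solve(A.T @ W @ A, A.T @ W @ y)
    return beta[0]                               # fitted value at x0

rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 80)
y = np.sin(2 * np.pi * x) + 0.1 * rng.standard_normal(80)
yhat = np.array([loess(x, y, xi) for xi in x])
print(float(np.max(np.abs(yhat - np.sin(2 * np.pi * x)))))
```

    The span parameter plays the role of the "degree of smoothing" chosen by AIC in the paper; here it is simply fixed.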

  1. Implementation of evidence-based knowledge in general practice.

    Science.gov (United States)

    Le, Jette Videbæk

    2017-12-01

    Background Keeping up with the evidence and implementing it into the daily care for patients are fundamental prerequisites for delivering a high quality of care in general practice. However, despite many years of research into dissemination and implementation of evidence-based recommendations, significant challenges remain. In recent years, organisational factors have become widely acknowledged as vitally important for ensuring successful implementation. Further knowledge is needed to understand more about which factors affect the seeking and implementation of evidence-based knowledge in general practice. Aim The overall aim was to investigate how evidence-based knowledge is sought and implemented in general practice and to analyse associations with GP characteristics and quality of care. Three separate studies, each covering a specific part of the overall aim, were undertaken: I. To examine how GPs implement clinical practice guidelines in everyday clinical practice, and how implementation approaches differ between practices. II. To assess GPs’ information seeking behaviour with regard to the use and perceived importance of scientific medical information sources and to investigate associations with GP characteristics. III. To investigate if there are associations between specific formalised implementation activities within general practice and quality of care – exemplified by the use of spirometry testing among first-time users of medication against obstructive lung diseases. Methods The study was designed as a mixed methods study combining qualitative interviews, questionnaire and register data. Study I was a qualitative interview study that involved purposefully selected GPs representing seven different practices. The interviews were analysed using systematic text condensation, and results were used to qualify the development of a national survey of general practitioners regarding their seeking and implementation of evidence-based knowledge. This survey was

  2. Are general surgery residents adequately prepared for hepatopancreatobiliary fellowships? A questionnaire-based study

    Science.gov (United States)

    Osman, Houssam; Parikh, Janak; Patel, Shirali; Jeyarajah, D Rohan

    2015-01-01

    Background The present study was conducted to assess the preparedness of hepatopancreatobiliary (HPB) fellows upon entering fellowship, identify challenges encountered by HPB fellows during the initial part of their HPB training, and identify potential solutions to these challenges that can be applied during residency training. Methods A questionnaire was distributed to all HPB fellows in accredited HPB fellowship programmes in two consecutive academic years (n = 42). Responses were then analysed. Results A total of 19 (45%) fellows responded. Prior to their fellowship, 10 (53%) were in surgical residency and the rest were in other surgical fellowships or surgical practice. Thirteen (68%) were graduates of university-based residency programmes. All fellows felt comfortable in performing basic laparoscopic procedures independently at the completion of residency and less comfortable in performing advanced laparoscopy. Eight (42%) fellows cited a combination of inadequate case volume and lack of autonomy during residency as the reasons for this lack of comfort. Thirteen (68%) identified inadequate preoperative workup and management as their biggest fear upon entering practice after general surgery training. A total of 17 (89%) fellows felt they were adequately prepared to enter HPB fellowship. Extra rotations in transplant, vascular or minimally invasive surgery were believed to be most helpful in preparing general surgery residents pursuing HPB fellowships. Conclusions Overall, HPB fellows felt themselves to be adequately prepared for fellowship. Advanced laparoscopic procedures and the perioperative management of complex patients are two of the challenges facing HPB fellows. General surgery residents who plan to pursue an HPB fellowship may benefit from spending extra rotations on certain subspecialties. Focus on perioperative workup and management should be an integral part of residency and fellowship training. PMID:25387852

  3. A neutron spectrum unfolding code based on generalized regression artificial neural networks

    International Nuclear Information System (INIS)

    Ortiz R, J. M.; Martinez B, M. R.; Castaneda M, R.; Solis S, L. O.; Vega C, H. R.

    2015-10-01

    The most delicate part of neutron spectrometry is the unfolding process: deriving the spectral information is not simple because the unknown is not given directly as a result of the measurements. Novel methods based on Artificial Neural Networks have been widely investigated. In prior works, back-propagation neural networks (BPNN) have been used to solve the neutron spectrometry problem; however, some drawbacks still exist with this kind of neural net, such as the optimal selection of the network topology and the long training time. Compared to a BPNN, a generalized regression neural network (GRNN) is usually much faster to train, mainly because the spread constant is the only free parameter of a GRNN. Another feature is that the network converges to a global minimum, and GRNNs are often more accurate than BPNNs in prediction. These characteristics make GRNNs of great interest in the neutron spectrometry domain. This work presents a computational tool based on a GRNN that is capable of solving the neutron spectrometry problem. The code automates the pre-processing, training and testing stages, the statistical analysis, and the post-processing of the information, using the count rates of seven Bonner spheres as the only input data. The code was designed for a Bonner sphere system based on a 6 LiI(Eu) neutron detector and a response matrix expressed in 60 energy bins taken from an International Atomic Energy Agency compilation. (Author)
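
    A GRNN is, in effect, a normalized radial-basis (Nadaraya-Watson) estimator, which is why the spread constant is its only free parameter. A numpy sketch on a one-dimensional toy mapping (the unfolding code described above maps seven Bonner-sphere count rates to a 60-bin spectrum, but the formula is the same with vector-valued outputs):

```python
import numpy as np

def grnn_predict(X_train, y_train, X_query, spread=0.05):
    """Generalized regression neural network (Specht): a normalized
    radial-basis estimate whose only free parameter is the spread."""
    d2 = ((X_query[:, None, :] - X_train[None, :, :]) ** 2).sum(-1)
    w = np.exp(-d2 / (2.0 * spread ** 2))   # pattern-layer activations
    return (w @ y_train) / w.sum(axis=1)    # summation / output layers

# One-dimensional toy problem standing in for spectrum unfolding.
X = np.linspace(0.0, 1.0, 200)[:, None]
y = np.sin(2 * np.pi * X[:, 0])
Xq = np.linspace(0.1, 0.9, 30)[:, None]
pred = grnn_predict(X, y, Xq)
print(float(np.max(np.abs(pred - np.sin(2 * np.pi * Xq[:, 0])))))
```

    Note that training a GRNN amounts to storing the samples; only the spread needs tuning, which is the speed advantage the abstract cites.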

  4. Generalized image contrast enhancement technique based on the Heinemann contrast discrimination model

    Science.gov (United States)

    Liu, Hong; Nodine, Calvin F.

    1996-07-01

    This paper presents a generalized image contrast enhancement technique, which equalizes the perceived brightness distribution based on the Heinemann contrast discrimination model. It is based on the mathematically proven existence of a unique solution to a nonlinear equation, and is formulated with easily tunable parameters. The model uses a two-step log-log representation of luminance contrast between targets and surround in a luminous background setting. The algorithm consists of two nonlinear gray scale mapping functions that have seven parameters, two of which are adjustable Heinemann constants. Another parameter is the background gray level. The remaining four parameters are nonlinear functions of the gray-level distribution of the given image, and can be uniquely determined once the previous three are set. Tests have been carried out to demonstrate the effectiveness of the algorithm for increasing the overall contrast of radiology images. The traditional histogram equalization can be reinterpreted as an image enhancement technique based on the knowledge of human contrast perception. In fact, it is a special case of the proposed algorithm.
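
    Since the paper treats classical histogram equalization as a special case of its mapping, the baseline transform is worth spelling out: each gray level is remapped through the normalized cumulative histogram. A minimal numpy version on an invented low-contrast image:

```python
import numpy as np

def equalize(img, levels=256):
    """Classical histogram equalization: remap each gray level through
    the normalized cumulative histogram (a monotone gray-scale map)."""
    hist = np.bincount(img.ravel(), minlength=levels)
    cdf = np.cumsum(hist) / img.size
    lut = np.round((levels - 1) * cdf).astype(np.uint8)
    return lut, lut[img]

rng = np.random.default_rng(0)
# Synthetic low-contrast image: gray values squeezed into [100, 140)
img = (100 + 40 * rng.random((64, 64))).astype(np.uint8)
lut, out = equalize(img)
print(img.min(), img.max(), out.min(), out.max())
```

    The proposed algorithm replaces this purely statistical lookup table with one derived from the Heinemann contrast discrimination model.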

  5. A neutron spectrum unfolding code based on generalized regression artificial neural networks

    Energy Technology Data Exchange (ETDEWEB)

    Ortiz R, J. M.; Martinez B, M. R.; Castaneda M, R.; Solis S, L. O. [Universidad Autonoma de Zacatecas, Unidad Academica de Ingenieria Electrica, Av. Ramon Lopez Velarde 801, Col. Centro, 98000 Zacatecas, Zac. (Mexico); Vega C, H. R., E-mail: morvymm@yahoo.com.mx [Universidad Autonoma de Zacatecas, Unidad Academica de Estudios Nucleares, Cipres No. 10, Fracc. La Penuela, 98068 Zacatecas, Zac. (Mexico)

    2015-10-15

    The most delicate part of neutron spectrometry is the unfolding process: deriving the spectral information is not simple because the unknown is not given directly as a result of the measurements. Novel methods based on Artificial Neural Networks have been widely investigated. In prior works, back-propagation neural networks (BPNN) have been used to solve the neutron spectrometry problem; however, some drawbacks still exist with this kind of neural net, such as the optimal selection of the network topology and the long training time. Compared to a BPNN, a generalized regression neural network (GRNN) is usually much faster to train, mainly because the spread constant is the only free parameter of a GRNN. Another feature is that the network converges to a global minimum, and GRNNs are often more accurate than BPNNs in prediction. These characteristics make GRNNs of great interest in the neutron spectrometry domain. This work presents a computational tool based on a GRNN that is capable of solving the neutron spectrometry problem. The code automates the pre-processing, training and testing stages, the statistical analysis, and the post-processing of the information, using the count rates of seven Bonner spheres as the only input data. The code was designed for a Bonner sphere system based on a {sup 6}LiI(Eu) neutron detector and a response matrix expressed in 60 energy bins taken from an International Atomic Energy Agency compilation. (Author)

  6. Prevalence of constipation among the general population: a community-based survey from India.

    Science.gov (United States)

    Rajput, Mamta; Saini, Sushma Kumari

    2014-01-01

    Constipation is a frequent health problem leading to great discomfort to the person and affects his or her quality of life. It is considered to be highly prevalent in the general population, but there is little supporting data. This study was undertaken with the objective of assessing the prevalence of constipation and its associated factors among the general population of Dadu Majra Colony, UT, Chandigarh, India. A total of 505 individuals were interviewed through a structured questionnaire based on the Rome II criteria for constipation. Results revealed that the prevalence of self-reported constipation within the last 1 year was 24.8%, whereas 16.8% of participants had constipation according to the Rome II criteria. Most of the subjects (83%) were within the age group of 18-59 years, with a mean age of 38.64 ± 15.57 years. Constipation was significantly more frequent in females than in males (20% vs. 13%) and in the nonworking population than in the working population (20% vs. 12%). Poor dietary habits, lower fluid intake per day, and less physical activity were found to be significant factors leading to constipation. About 18% of constipated subjects reported consulting a physician, whereas 8% reported the use of laxatives to relieve their constipation.

  7. General health influences episodes of xerostomia: a prospective population-based study.

    Science.gov (United States)

    da Silva, Luciana; Kupek, Emil; Peres, Karen G

    2017-04-01

    The aim of this study was to investigate the associated factors of changes in symptoms of xerostomia (SOX) in adults aged 20-59. A prospective population-based study was conducted in 2009 (n = 1720) and 2012 (n = 1222) in the urban area of Florianópolis, SC, Brazil. Information on SOX was collected in both years together with age, family income, years of schooling, smoking habit, alcohol consumption, changes in the body mass index (BMI; kg/m²), medicine use, self-reported diagnosis of chronic diseases, change in hypertension status and in the use and need for dentures, and number of remaining teeth. Associated factors with changes in SOX were investigated using multinomial logistic regression, considering those who had never reported this symptom as the reference. Prevalence of regular SOX was equal to 3.8% (95% CI: 2.9-5.1) and irregular (one period only) equal to 12.2% (95% CI: 10.2-14.5). Age, smoking habit, medicine use, self-reported diagnosis of depression, and weight gain increased the probability of regular SOX, whereas highest schooling level was associated with lower probability of this symptom. General and psychosocial health influenced the number of episodes of xerostomia symptoms, calling for multidisciplinary actions to prevent common risk behaviors for oral and general diseases. © 2016 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.

  8. Inferring topologies via driving-based generalized synchronization of two-layer networks

    Science.gov (United States)

    Wang, Yingfei; Wu, Xiaoqun; Feng, Hui; Lu, Jun-an; Xu, Yuhua

    2016-05-01

    The interaction topology among the constituents of a complex network plays a crucial role in the network’s evolutionary mechanisms and functional behaviors. However, some network topologies are usually unknown or uncertain. Meanwhile, coupling delays are ubiquitous in various man-made and natural networks. Hence, it is necessary to gain knowledge of the whole or partial topology of a complex dynamical network by taking into consideration communication delay. In this paper, topology identification of complex dynamical networks is investigated via generalized synchronization of a two-layer network. Particularly, based on the LaSalle-type invariance principle of stochastic differential delay equations, an adaptive control technique is proposed by constructing an auxiliary layer and designing proper control input and updating laws so that the unknown topology can be recovered upon successful generalized synchronization. Numerical simulations are provided to illustrate the effectiveness of the proposed method. The technique provides a certain theoretical basis for topology inference of complex networks. In particular, when the considered network is composed of systems with high-dimension or complicated dynamics, a simpler response layer can be constructed, which is conducive to circuit design. Moreover, it is practical to take into consideration perturbations caused by control input. Finally, the method is applicable to infer topology of a subnetwork embedded within a complex system and locate hidden sources. We hope the results can provide basic insight into further research endeavors on understanding practical and economical topology inference of networks.

  9. First and higher order, heuristically based generalized perturbation theory (HGPT) with optional control reset variable

    International Nuclear Information System (INIS)

    Gandini, A.

    1996-01-01

    The heuristically based generalized perturbation theory (HGPT), to first and higher order, applied to the neutron field of a reactor system, is discussed in relation to the criticality reset procedure. This procedure is implicit within the GPT methodology, corresponding to the so-called filtering of the importance function relevant to the neutron field from the fundamental mode contamination. It is common practice to use the so-called 'lambda'-mode filter. In order to account for any possible reset option, a general definition is introduced of an intensive control variable (ρ) entering into the governing equations, and correspondingly a fundamental ρ-mode filtering of the importance function is defined, relevant to the real criticality reset (control) mechanism adopted. A simple example illustrates the need to take into account the correct filtering, so as to avoid significant inaccuracies in the sensitivity calculation results. The extension of this filtering technique to other functions entering into the GPT perturbative formulations at first and higher order is also discussed. (author)

  10. Threshold Estimation of Generalized Pareto Distribution Based on Akaike Information Criterion for Accurate Reliability Analysis

    International Nuclear Information System (INIS)

    Kang, Seunghoon; Lim, Woochul; Cho, Su-gil; Park, Sanghyun; Lee, Tae Hee; Lee, Minuk; Choi, Jong-su; Hong, Sup

    2015-01-01

    In order to perform estimations with high reliability, it is necessary to deal with the tail part of the cumulative distribution function (CDF) in greater detail compared to an overall CDF. The use of a generalized Pareto distribution (GPD) to model the tail part of a CDF is receiving more research attention with the goal of performing estimations with high reliability. Current studies on GPDs focus on ways to determine the appropriate number of sample points and their parameters. However, even if a proper estimation is made, it can be inaccurate as a result of an incorrect threshold value. Therefore, in this paper, a GPD based on the Akaike information criterion (AIC) is proposed to improve the accuracy of the tail model. The proposed method determines an accurate threshold value using the AIC with the overall samples before estimating the GPD over the threshold. To validate the accuracy of the method, its reliability is compared with that obtained using a general GPD model with an empirical CDF.
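
    The AIC bookkeeping behind such a tail model can be sketched with scipy on invented data; this fits a GPD to the exceedances of one candidate threshold and compares it against the exponential special case (the paper's contribution, evaluating the AIC over the overall samples to select the threshold itself, is not reproduced here):

```python
import numpy as np
from scipy.stats import expon, genpareto

rng = np.random.default_rng(0)
data = rng.standard_exponential(5000)    # toy sample with a known tail

u = np.quantile(data, 0.90)              # one candidate threshold
exc = data[data > u] - u                 # exceedances over the threshold

# GPD tail model scored with AIC = 2k - 2*log-likelihood (k = 2 here).
xi, loc, sigma = genpareto.fit(exc, floc=0)
ll_gpd = genpareto.logpdf(exc, xi, loc, sigma).sum()
aic_gpd = 2 * 2 - 2 * ll_gpd

# Exponential tail (the xi = 0 special case, k = 1) for comparison.
loc_e, scale_e = expon.fit(exc, floc=0)
ll_exp = expon.logpdf(exc, loc_e, scale_e).sum()
aic_exp = 2 * 1 - 2 * ll_exp

print(f"u={u:.2f}  xi={xi:.3f}  sigma={sigma:.3f}")
print(f"AIC(GPD)={aic_gpd:.1f}  AIC(exponential)={aic_exp:.1f}")
```

    For exponential data the exceedances are again exponential, so the fitted shape parameter should be near zero.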

  11. Threshold Estimation of Generalized Pareto Distribution Based on Akaike Information Criterion for Accurate Reliability Analysis

    Energy Technology Data Exchange (ETDEWEB)

    Kang, Seunghoon; Lim, Woochul; Cho, Su-gil; Park, Sanghyun; Lee, Tae Hee [Hanyang University, Seoul (Korea, Republic of); Lee, Minuk; Choi, Jong-su; Hong, Sup [Korea Research Institute of Ships and Ocean Engineering, Daejeon (Korea, Republic of)

    2015-02-15

    In order to perform estimations with high reliability, it is necessary to deal with the tail part of the cumulative distribution function (CDF) in greater detail compared to an overall CDF. The use of a generalized Pareto distribution (GPD) to model the tail part of a CDF is receiving more research attention with the goal of performing estimations with high reliability. Current studies on GPDs focus on ways to determine the appropriate number of sample points and their parameters. However, even if a proper estimation is made, it can be inaccurate as a result of an incorrect threshold value. Therefore, in this paper, a GPD based on the Akaike information criterion (AIC) is proposed to improve the accuracy of the tail model. The proposed method determines an accurate threshold value using the AIC with the overall samples before estimating the GPD over the threshold. To validate the accuracy of the method, its reliability is compared with that obtained using a general GPD model with an empirical CDF.

  12. Monte Carlo closure for moment-based transport schemes in general relativistic radiation hydrodynamic simulations

    Science.gov (United States)

    Foucart, Francois

    2018-04-01

    General relativistic radiation hydrodynamic simulations are necessary to accurately model a number of astrophysical systems involving black holes and neutron stars. Photon transport plays a crucial role in radiatively dominated accretion discs, while neutrino transport is critical to core-collapse supernovae and to the modelling of electromagnetic transients and nucleosynthesis in neutron star mergers. However, evolving the full Boltzmann equations of radiative transport is extremely expensive. Here, we describe the implementation in the general relativistic SPEC code of a cheaper radiation hydrodynamic method that theoretically converges to a solution of Boltzmann's equation in the limit of infinite numerical resources. The algorithm is based on a grey two-moment scheme, in which we evolve the energy density and momentum density of the radiation. Two-moment schemes require a closure that fills in missing information about the energy spectrum and higher order moments of the radiation. Instead of the approximate analytical closure currently used in core-collapse and merger simulations, we complement the two-moment scheme with a low-accuracy Monte Carlo evolution. The Monte Carlo results can provide any or all of the missing information in the evolution of the moments, as desired by the user. As a first test of our methods, we study a set of idealized problems demonstrating that our algorithm performs significantly better than existing analytical closures. We also discuss the current limitations of our method, in particular open questions regarding the stability of the fully coupled scheme.

  13. Generalized image contrast enhancement technique based on Heinemann contrast discrimination model

    Science.gov (United States)

    Liu, Hong; Nodine, Calvin F.

    1994-03-01

    This paper presents a generalized image contrast enhancement technique which equalizes perceived brightness based on the Heinemann contrast discrimination model. This is a modified algorithm which presents an improvement over the previous study by Mokrane in its mathematically proven existence of a unique solution and in its easily tunable parameterization. The model uses a log-log representation of contrast luminosity between targets and the surround in a fixed luminosity background setting. The algorithm consists of two nonlinear gray-scale mapping functions which have seven parameters, two of which are adjustable Heinemann constants. Another parameter is the background gray level. The remaining four parameters are nonlinear functions of gray scale distribution of the image, and can be uniquely determined once the previous three are given. Tests have been carried out to examine the effectiveness of the algorithm for increasing the overall contrast of images. It can be demonstrated that the generalized algorithm provides better contrast enhancement than histogram equalization. In fact, the histogram equalization technique is a special case of the proposed mapping.

  14. Reference Information Based Remote Sensing Image Reconstruction with Generalized Nonconvex Low-Rank Approximation

    Directory of Open Access Journals (Sweden)

    Hongyang Lu

    2016-06-01

    Because of the contradiction between the spatial and temporal resolution of remote sensing images (RSI) and the quality loss in the process of acquisition, it is of great significance to reconstruct RSI in remote sensing applications. Recent studies have demonstrated that reference image-based reconstruction methods have great potential for higher reconstruction performance, while lacking accuracy and quality of reconstruction. For this application, a new compressed sensing objective function incorporating a reference image as prior information is developed. We resort to the reference prior information inherent in interior and exterior data simultaneously to build a new generalized nonconvex low-rank approximation framework for RSI reconstruction. Specifically, the innovation of this paper consists of the following three respects: (1) we propose a nonconvex low-rank approximation for reconstructing RSI; (2) we inject reference prior information to overcome over-smoothed edges and texture detail losses; (3) on this basis, we combine conjugate gradient algorithms and singular value thresholding (SVT) simultaneously to solve the proposed algorithm. The performance of the algorithm is evaluated both qualitatively and quantitatively. Experimental results demonstrate that the proposed algorithm improves by several dB in terms of peak signal-to-noise ratio (PSNR) and preserves image details significantly compared to most of the current approaches without reference images as priors. In addition, the generalized nonconvex low-rank approximation of our approach is naturally robust to noise, and therefore, the proposed algorithm can handle low resolution with noisy inputs in a more unified framework.
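
    The singular value thresholding step used inside such solvers has a three-line definition: take an SVD and shrink the singular values. A numpy sketch on an invented low-rank matrix (not remote-sensing data):

```python
import numpy as np

def svt(M, tau):
    """Singular value thresholding: shrink every singular value by tau
    and discard those that fall below zero. This is the proximal step
    associated with the nuclear norm in low-rank recovery."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    s = np.maximum(s - tau, 0.0)
    return U @ np.diag(s) @ Vt

rng = np.random.default_rng(0)
L = rng.standard_normal((30, 3)) @ rng.standard_normal((3, 30))  # rank 3
noisy = L + 0.05 * rng.standard_normal((30, 30))
rec = svt(noisy, tau=1.0)
print(np.linalg.norm(rec - L) / np.linalg.norm(L))  # small relative error
```

    With the threshold above the noise level, the small noise-driven singular values are zeroed and the rank-3 structure is recovered.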

  15. Generalized Switched-Inductor Based Buck-Boost Z-H Converter

    Directory of Open Access Journals (Sweden)

    E. Babaei

    2017-12-01

    In this paper, a generalized buck-boost Z-H converter based on switched inductors is proposed. This structure consists of a set of series-connected switched-inductor cells. The voltage conversion ratio of the proposed structure is adjusted by changing the number of cells and the duty cycle. Like the conventional Z-H converter, the shoot-through switching state and the diode before the LC network are eliminated. The proposed converter can provide high voltage gain at low duty cycles. Depending on the value of the duty cycle, the proposed structure works in two operating zones: in the first operating zone it works as a buck-boost converter, and in the second it works as a boost converter. In this paper, a complete analysis of the proposed converter is presented. In order to confirm the accuracy of the mathematical calculations, simulation results obtained with the PSCAD/EMTDC software are given.

  16. Wang-Landau Reaction Ensemble Method: Simulation of Weak Polyelectrolytes and General Acid-Base Reactions.

    Science.gov (United States)

    Landsgesell, Jonas; Holm, Christian; Smiatek, Jens

    2017-02-14

    We present a novel method for the study of weak polyelectrolytes and general acid-base reactions in molecular dynamics and Monte Carlo simulations. The approach combines the advantages of the reaction ensemble and the Wang-Landau sampling method. Deprotonation and protonation reactions are simulated explicitly with the help of the reaction ensemble method, while the accurate sampling of the corresponding phase space is achieved by the Wang-Landau approach. The combination of both techniques provides a sufficient statistical accuracy such that meaningful estimates for the density of states and the partition sum can be obtained. With regard to these estimates, several thermodynamic observables like the heat capacity or reaction free energies can be calculated. We demonstrate that the computation times for the calculation of titration curves with a high statistical accuracy can be significantly decreased when compared to the original reaction ensemble method. The applicability of our approach is validated by the study of weak polyelectrolytes and their thermodynamic properties.
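
    The Wang-Landau half of the combined scheme can be illustrated on a toy system with a known density of states, here two dice whose 'energy' is their sum; this shows only the flat-histogram machinery, not the reaction-ensemble chemistry:

```python
import numpy as np

rng = np.random.default_rng(0)
ln_g = np.zeros(11)          # running log density-of-states for E = 2..12
hist = np.zeros(11)          # visit histogram at the current stage
ln_f = 1.0                   # modification factor, halved on flatness
state = np.array([1, 1])     # two six-sided dice

while ln_f > 1e-4:
    for _ in range(2000):
        new = state.copy()
        new[rng.integers(2)] = rng.integers(1, 7)        # reroll one die
        e_old, e_new = state.sum() - 2, new.sum() - 2    # energy bin index
        # Accept with min(1, g(E_old)/g(E_new)) to flatten the histogram.
        if rng.random() < np.exp(ln_g[e_old] - ln_g[e_new]):
            state = new
        e = state.sum() - 2
        ln_g[e] += ln_f                                  # update estimate
        hist[e] += 1
    if hist.min() > 0.8 * hist.mean():                   # flatness criterion
        ln_f /= 2.0
        hist[:] = 0.0

g = np.exp(ln_g - ln_g[0])   # normalize so g(E=2) = 1
print(np.round(g, 2))
```

    The exact density of states is 1, 2, 3, 4, 5, 6, 5, 4, 3, 2, 1 for E = 2..12, which the estimate should approach as the modification factor shrinks.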

  17. Capturing hammerhead ribozyme structures in action by modulating general base catalysis.

    Directory of Open Access Journals (Sweden)

    Young-In Chi

    2008-09-01

    We have obtained precatalytic (enzyme-substrate complex) and postcatalytic (enzyme-product complex) crystal structures of an active full-length hammerhead RNA that cleaves in the crystal. Using the natural satellite tobacco ringspot virus hammerhead RNA sequence, the self-cleavage reaction was modulated by substituting the general base of the ribozyme, G12, with A12, a purine variant with a much lower pKa that does not significantly perturb the ribozyme's atomic structure. The active, but slowly cleaving, ribozyme thus permitted isolation of enzyme-substrate and enzyme-product complexes without modifying the nucleophile or leaving group of the cleavage reaction, nor any other aspect of the substrate. The predissociation enzyme-product complex structure reveals RNA and metal ion interactions potentially relevant to transition-state stabilization that are absent in precatalytic structures.

  18. BangA: An Efficient and Flexible Generalization-Based Algorithm for Privacy Preserving Data Publication

    Directory of Open Access Journals (Sweden)

    Adeel Anjum

    2017-01-01

    Privacy-Preserving Data Publishing (PPDP) has become a critical issue for companies and organizations that would release their data. k-Anonymization was proposed as a first generalization model to guarantee against identity disclosure of individual records in a data set. Point access methods (PAMs) are not well studied for the problem of data anonymization. In this article, we propose yet another approximation algorithm for anonymization, coined BangA, that combines useful features from Point Access Methods (PAMs) and clustering. Hence, it achieves fast computation and scalability as a PAM, and very high quality thanks to its density-based clustering step. Extensive experiments show the efficiency and effectiveness of our approach. Furthermore, we provide guidelines for extending BangA to achieve a relaxed form of differential privacy, which provides stronger privacy guarantees than traditional privacy definitions.
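
    BangA itself combines a point-access-method index with density-based clustering, but the property it enforces is easy to demonstrate with stdlib Python on invented records: a table is k-anonymous when every quasi-identifier combination occurs at least k times, which generalization achieves by coarsening values:

```python
from collections import Counter

def generalize_age(age, width):
    """Generalize an exact age into an interval of the given width."""
    lo = (age // width) * width
    return f"{lo}-{lo + width - 1}"

def is_k_anonymous(records, k):
    """Every quasi-identifier combination must occur at least k times."""
    groups = Counter(records)
    return min(groups.values()) >= k

# Invented toy records: quasi-identifiers are (age, sex).
raw = [(25, "F"), (27, "F"), (29, "F"), (41, "M"), (44, "M"), (48, "M")]
for width in (1, 10):
    released = [(generalize_age(a, width), s) for a, s in raw]
    print(width, is_k_anonymous(released, k=3))
```

    Exact ages fail 3-anonymity (every record is unique), while decade-wide intervals satisfy it; an anonymization algorithm searches for the coarsening that satisfies k with the least information loss.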

  19. Order Selection for General Expression of Nonlinear Autoregressive Model Based on Multivariate Stepwise Regression

    Science.gov (United States)

    Shi, Jinfei; Zhu, Songqing; Chen, Ruwen

    2017-12-01

    An order selection method based on multiple stepwise regressions is proposed for the General expression of Nonlinear AutoRegressive (GNAR) model, which converts the model-order selection problem into variable selection for a multiple linear regression equation. The partial autocorrelation function is adopted to define the linear terms in the GNAR model. The result is set as the initial model, and the nonlinear terms are then introduced gradually. Statistics are chosen to assess the improvement contributed by both the newly introduced and the originally included variables, and these statistics determine which model variables to retain or eliminate. The optimal model is thus obtained through data-fitting measures or significance tests. Simulation results and experiments on classic time-series data show that the proposed method is simple, reliable and applicable to practical engineering.
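The order-selection idea — treating candidate lag terms as regressors and adding them stepwise under a fit criterion — can be sketched as follows. This is a generic forward-AIC selection on a linear AR model, not the authors' GNAR procedure; all names and parameter values are illustrative.

```python
import numpy as np

def aic(rss, n, k):
    # Akaike information criterion for a Gaussian linear model
    return n * np.log(rss / n) + 2 * k

def stepwise_ar_order(x, max_lag=5):
    """Forward stepwise selection of autoregressive lag terms by AIC."""
    n = len(x) - max_lag
    y = x[max_lag:]
    # Candidate regressors: lagged copies of the series
    cands = {lag: x[max_lag - lag: len(x) - lag] for lag in range(1, max_lag + 1)}
    chosen, X = [], np.ones((n, 1))            # start from intercept only
    best = aic(np.sum((y - y.mean()) ** 2), n, 1)
    improved = True
    while improved:
        improved = False
        for lag in sorted(set(cands) - set(chosen)):
            Xt = np.column_stack([X, cands[lag]])
            beta, *_ = np.linalg.lstsq(Xt, y, rcond=None)
            rss = np.sum((y - Xt @ beta) ** 2)
            score = aic(rss, n, Xt.shape[1])
            if score < best - 1e-9:            # keep the best single addition
                best, best_lag, best_X = score, lag, Xt
                improved = True
        if improved:
            chosen.append(best_lag)
            X = best_X
    return sorted(chosen)

rng = np.random.default_rng(0)
e = rng.standard_normal(2000)
x = np.zeros(2000)
for t in range(2, 2000):                        # AR(2): x_t = 0.6 x_{t-1} - 0.3 x_{t-2} + e_t
    x[t] = 0.6 * x[t - 1] - 0.3 * x[t - 2] + e[t]
print(stepwise_ar_order(x))                     # lags retained by the forward AIC search
```

The paper's method replaces the plain lags with GNAR nonlinear terms and uses significance statistics rather than AIC, but the retain-or-eliminate loop has the same shape.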

  20. Reliability-based design optimization using a generalized subset simulation method and posterior approximation

    Science.gov (United States)

    Ma, Yuan-Zhuo; Li, Hong-Shuang; Yao, Wei-Xing

    2018-05-01

    The evaluation of the probabilistic constraints in reliability-based design optimization (RBDO) problems has always been significant and challenging work, and it strongly affects the performance of RBDO methods. This article deals with RBDO problems using a recently developed generalized subset simulation (GSS) method and a posterior approximation approach. The posterior approximation approach is used to transform all the probabilistic constraints into ordinary constraints, as in deterministic optimization. The assessment of the multiple failure probabilities required by the posterior approximation approach is achieved by GSS in a single run at all supporting points, which are selected by a proper experimental design scheme combining Sobol' sequences and Bucher's design. Subsequently, the transformed deterministic design optimization problem can be solved by optimization algorithms, for example, the sequential quadratic programming method. Three optimization problems are used to demonstrate the efficiency and accuracy of the proposed method.
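For context, a plain subset simulation estimator (the non-generalized building block that GSS extends to several failure events at once) can be sketched as follows; the limit-state function, sample sizes and level probability p0 are illustrative.

```python
import numpy as np

def subset_simulation(g, dim, n=2000, p0=0.1, seed=1):
    """Estimate P[g(X) <= 0] for X ~ N(0, I) by subset simulation.

    Intermediate thresholds are the p0-quantiles of g at each level;
    conditional samples are generated with a Metropolis random walk
    restricted to the current intermediate failure region."""
    rng = np.random.default_rng(seed)
    x = rng.standard_normal((n, dim))
    gx = np.array([g(xi) for xi in x])
    pf = 1.0
    for _ in range(20):                       # cap the number of levels
        idx = np.argsort(gx)
        thresh = gx[idx[int(p0 * n) - 1]]     # p0-quantile of current samples
        if thresh <= 0:                       # final level: count true failures
            return pf * np.mean(gx <= 0)
        pf *= p0
        seeds = x[idx[: int(p0 * n)]]         # samples already in the region
        xs, gs = [], []
        for s in seeds:
            cur, gcur = s.copy(), g(s)
            for _ in range(int(1 / p0)):      # steps per seed to repopulate
                prop = cur + rng.standard_normal(dim)
                # Metropolis accept for a standard normal target ...
                if rng.random() < np.exp(0.5 * (cur @ cur - prop @ prop)):
                    # ... then reject moves leaving the intermediate region
                    if g(prop) <= thresh:
                        cur, gcur = prop, g(prop)
                xs.append(cur.copy()); gs.append(gcur)
        x, gx = np.array(xs), np.array(gs)
    return pf * np.mean(gx <= 0)

beta = 3.0
pf = subset_simulation(lambda z: beta - z[0], dim=2)
print(pf)   # exact value for this linear limit state is Phi(-3) ~ 1.35e-3
```

GSS runs such chains once while bookkeeping indicator functions for all supporting design points, instead of one run per probabilistic constraint.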

  1. General Purpose Graphics Processing Unit Based High-Rate Rice Decompression and Reed-Solomon Decoding

    Energy Technology Data Exchange (ETDEWEB)

    Loughry, Thomas A. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2015-02-01

    As the volume of data acquired by space-based sensors increases, mission data compression/decompression and forward error correction code processing performance must likewise scale. This competency development effort was explored using the General Purpose Graphics Processing Unit (GPGPU) to accomplish high-rate Rice Decompression and high-rate Reed-Solomon (RS) decoding at the satellite mission ground station. Each algorithm was implemented and benchmarked on a single GPGPU. Distributed processing across one to four GPGPUs was also investigated. The results show that the GPGPU has considerable potential for performing satellite communication Data Signal Processing, with three times or better performance improvements and up to ten times reduction in cost over custom hardware, at least in the case of Rice Decompression and Reed-Solomon Decoding.

  2. A General Reliability Model for Ni-BaTiO3-Based Multilayer Ceramic Capacitors

    Science.gov (United States)

    Liu, Donhang

    2014-01-01

    The evaluation of multilayer ceramic capacitors (MLCCs) with Ni electrodes and BaTiO3 dielectric material for potential space project applications requires an in-depth understanding of their reliability. A general reliability model for Ni-BaTiO3 MLCCs is developed and discussed. The model consists of three parts: a statistical distribution; an acceleration function that describes how a capacitor's reliability life responds to external stresses; and an empirical function that captures the contribution of the structural and constructional characteristics of a multilayer capacitor device, such as the number of dielectric layers N, the dielectric thickness d, the average grain size, and the capacitor chip size A. Application examples based on the proposed reliability model for Ni-BaTiO3 MLCCs are also discussed.
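The abstract does not spell out the acceleration function; for BaTiO3-based MLCCs a commonly used empirical form is the Prokopowicz-Vaskas relation, sketched here with illustrative (not fitted) values for the voltage exponent and activation energy.

```python
import math

K_B = 8.617e-5  # Boltzmann constant, eV/K

def acceleration_factor(v_use, t_use_k, v_test, t_test_k, n=3.0, ea=1.0):
    """Prokopowicz-Vaskas acceleration factor for a BaTiO3 MLCC:

        life(use) / life(test) = (V_test/V_use)^n * exp(Ea/kB * (1/T_use - 1/T_test))

    n (voltage exponent) and Ea (activation energy, eV) are illustrative
    placeholders; real parts require values fitted from life-test data."""
    voltage_term = (v_test / v_use) ** n
    thermal_term = math.exp(ea / K_B * (1.0 / t_use_k - 1.0 / t_test_k))
    return voltage_term * thermal_term

# Accelerated test at 2x rated voltage and 125 C, use condition 45 C at rated voltage
af = acceleration_factor(v_use=50, t_use_k=318.15, v_test=100, t_test_k=398.15)
print(f"acceleration factor ~ {af:.0f}")
```

The paper's full model multiplies such an acceleration function by a statistical distribution and the empirical construction term in N, d, grain size and A.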

  3. Implementation of Finite Volume based Navier Stokes Algorithm Within General Purpose Flow Network Code

    Science.gov (United States)

    Schallhorn, Paul; Majumdar, Alok

    2012-01-01

    This paper describes a finite volume based numerical algorithm that allows multi-dimensional computation of fluid flow within a system-level network flow analysis. There are several thermo-fluid engineering problems that need higher fidelity solutions beyond the capacity of system-level codes. The proposed algorithm will allow NASA's Generalized Fluid System Simulation Program (GFSSP) to perform multi-dimensional flow calculations within the framework of GFSSP's typical system-level flow network consisting of fluid nodes and branches. The paper presents several classical two-dimensional fluid dynamics problems that have been solved by GFSSP's multi-dimensional flow solver. The numerical solutions are compared with analytical and benchmark solutions for Poiseuille flow, Couette flow and flow in a driven cavity.

  4. Prediction and reconstruction of future and missing unobservable modified Weibull lifetime based on generalized order statistics

    Directory of Open Access Journals (Sweden)

    Amany E. Aly

    2016-04-01

    Full Text Available When a system consists of independent components of the same type, appropriate actions may be taken as soon as a portion of them have failed. It is, therefore, important to be able to predict later failure times from earlier ones. One of the well-known failure distributions commonly used to model component life is the modified Weibull distribution (MWD). In this paper, two pivotal quantities are proposed to construct prediction intervals for future unobservable lifetimes based on generalized order statistics (gos) from the MWD. Moreover, a pivotal quantity is developed to reconstruct missing observations at the beginning of the experiment. Furthermore, Monte Carlo simulation studies are conducted and numerical computations are carried out to investigate the efficiency of the presented results. Finally, two illustrative examples for real data sets are analyzed.

  5. Generalized sample entropy analysis for traffic signals based on similarity measure

    Science.gov (United States)

    Shang, Du; Xu, Mengjia; Shang, Pengjian

    2017-05-01

    Sample entropy is a prevailing method used to quantify the complexity of a time series. In this paper a modified method of generalized sample entropy with surrogate data analysis is proposed as a new measure to assess the complexity of a complex dynamical system such as traffic signals. The method, based on a similarity distance, matches signal patterns in a different way and thereby reveals distinct complexity behaviors. Simulations are conducted over synthetic data and traffic signals to provide a comparative study that shows the power of the new method. Compared with previous sample entropy and surrogate data analysis, the new method has two main advantages. First, it overcomes the limitation on the relationship between the dimension parameter and the length of the series. Second, the modified sample entropy functions can be used to quantitatively distinguish time series from different complex systems by the similarity measure.
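For reference, the classical sample entropy that the paper modifies can be computed as follows (Chebyshev distance, not the paper's similarity measure; the parameters are the conventional m = 2 and r = 0.2 times the standard deviation).

```python
import numpy as np

def sample_entropy(x, m=2, r_factor=0.2):
    """Classical sample entropy SampEn(m, r) with the Chebyshev distance.

    SampEn = -ln(A / B), where B counts template pairs of length m within
    tolerance r and A counts pairs of length m + 1 within tolerance r."""
    x = np.asarray(x, dtype=float)
    r = r_factor * x.std()

    def count_matches(length):
        # All overlapping templates of the given length
        templates = np.array([x[i:i + length] for i in range(len(x) - length)])
        count = 0
        for i in range(len(templates)):
            d = np.max(np.abs(templates[i + 1:] - templates[i]), axis=1)
            count += np.sum(d <= r)          # unordered pairs, no self-matches
        return count

    b, a = count_matches(m), count_matches(m + 1)
    return -np.log(a / b)

rng = np.random.default_rng(42)
white = rng.standard_normal(3000)             # irregular: high entropy
regular = np.sin(np.arange(3000) * 0.1)       # predictable: low entropy
print(sample_entropy(white), sample_entropy(regular))
```

The paper's generalization swaps the Chebyshev distance in `count_matches` for a similarity measure, which is what removes the dependence between the dimension parameter and the series length.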

  6. A Non-symmetric Digital Image Secure Communication Scheme Based on Generalized Chaos Synchronization System

    International Nuclear Information System (INIS)

    Zhang Xiaohong; Min Lequan

    2005-01-01

    Based on a generalized chaos synchronization system and a discrete Sinai map, a non-symmetric true color (RGB) digital image secure communication scheme is proposed. The scheme first changes an ordinary 8-bit RGB digital image into unrecognizable disorder codes and then transforms the disorder codes into a 16-bit RGB digital image for transmission. A receiver uses a non-symmetric key to verify the authentication of the received data origin and decrypts the ciphertext. The scheme can encrypt and decrypt most formatted digital RGB images recognized by computers and recover the plaintext almost without any errors. The scheme is suitable for application in network image communications. The analysis of the key space, the sensitivity of key parameters, and the correlation of encrypted images implies that this scheme has sound security.

  7. Partial fingerprint identification algorithm based on the modified generalized Hough transform on mobile device

    Science.gov (United States)

    Qin, Jin; Tang, Siqi; Han, Congying; Guo, Tiande

    2018-04-01

    Partial fingerprint identification technology, which is mainly used in devices with small sensor areas such as cellphones, USB drives and computers, has attracted more attention in recent years owing to its unique advantages. However, owing to the lack of sufficient minutiae points, conventional methods do not perform well in this situation. We propose a new fingerprint matching technique which utilizes ridges as features to deal with partial fingerprint images, combining a modified generalized Hough transform with a scoring strategy based on machine learning. The algorithm effectively meets the real-time and space-saving requirements of resource-constrained devices. Experiments on an in-house database indicate that the proposed algorithm has excellent performance.
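A toy translation-only generalized Hough transform conveys the voting idea behind such ridge-based matching (the paper's modified GHT additionally handles rotation and a learned scoring stage; this sketch omits both, and all point sets are synthetic).

```python
import numpy as np

def ght_locate(template_pts, image_pts, grid_shape, quant=1.0):
    """Minimal translation-only generalized Hough transform.

    Each template point stores its offset from a reference point (an
    R-table collapsed to one entry, since orientation is ignored here);
    every image point then votes for all reference locations consistent
    with it. The accumulator peak is the best template position."""
    ref = template_pts.mean(axis=0)
    offsets = ref - template_pts                   # R-table entries
    acc = np.zeros(grid_shape, dtype=int)
    for p in image_pts:
        votes = np.round((p + offsets) / quant).astype(int)
        for vx, vy in votes:
            if 0 <= vx < grid_shape[0] and 0 <= vy < grid_shape[1]:
                acc[vx, vy] += 1
    return np.unravel_index(acc.argmax(), acc.shape), acc

# Synthetic test: an L-shaped "ridge" template shifted to (30, 40) plus clutter
template = np.array([[0, 0], [0, 1], [0, 2], [0, 3], [1, 0], [2, 0]], dtype=float)
rng = np.random.default_rng(7)
image = np.vstack([template + np.array([30.0, 40.0]),
                   rng.uniform(0, 100, size=(20, 2))])   # clutter points
peak, acc = ght_locate(template, image, grid_shape=(100, 100))
print(peak)   # accumulator peak near the true reference location
```

In the fingerprint setting the "points" would be sampled ridge features, and the accumulator peak score would feed the machine-learned matching score rather than being used directly.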

  8. Intermetallic nickel silicide nanocatalyst - a non-noble metal-based general hydrogenation catalyst.

    Science.gov (United States)

    Ryabchuk, Pavel; Agostini, Giovanni; Pohl, Marga-Martina; Lund, Henrik; Agapova, Anastasiya; Junge, Henrik; Junge, Kathrin; Beller, Matthias

    2018-06-01

    Hydrogenation reactions are essential processes in the chemical industry, giving access to a variety of valuable compounds including fine chemicals, agrochemicals, and pharmaceuticals. On an industrial scale, hydrogenations are typically performed with precious metal catalysts or with base metal catalysts, such as Raney nickel, which requires special handling due to its pyrophoric nature. We report a stable and highly active intermetallic nickel silicide catalyst that can be used for hydrogenations of a wide range of unsaturated compounds. The catalyst is prepared via a straightforward procedure using SiO2 as the silicon atom source. The process involves thermal reduction of Si-O bonds in the presence of Ni nanoparticles at temperatures below 1000°C. The presence of silicon as a secondary component in the nickel metal lattice plays the key role in its properties and is of crucial importance for improved catalytic activity. This novel catalyst allows for efficient reduction of nitroarenes, carbonyls, nitriles, N-containing heterocycles, and unsaturated carbon-carbon bonds. Moreover, the reported catalyst can be used for oxidation reactions in the presence of molecular oxygen and is capable of promoting acceptorless dehydrogenation of unsaturated N-containing heterocycles, opening avenues for H2 storage in organic compounds. The generality of the nickel silicide catalyst is demonstrated in the hydrogenation of over a hundred structurally diverse unsaturated compounds. The wide application scope and high catalytic activity of this novel catalyst make it an attractive alternative to known general hydrogenation catalysts, such as Raney nickel and noble metal-based catalysts.

  9. A novel cadaver-based educational program in general surgery training.

    Science.gov (United States)

    Lewis, Catherine E; Peacock, Warwick J; Tillou, Areti; Hines, O Joe; Hiatt, Jonathan R

    2012-01-01

    To describe the development of a cadaver-based educational program and report our residents' assessment of the new program. An anatomy-based educational program was developed using fresh frozen cadavers to teach surgical anatomy and operative skills to general surgery (GS) trainees. Residents were asked to complete a voluntary, anonymous survey evaluating perceptions of the program (6 questions formulated on a 5-point Likert scale) and comparing cadaver sessions to other types of learning (4 rank-order questions). Large university teaching hospital. Medical students, residents, and faculty members participated in the cadaver programs; only GS residents were asked to complete the survey. Since its implementation, 150 residents of all levels have participated in 13 sessions. A total of 40 surveys were returned, for a response rate of 89%. Overall, respondents held a positive view of the cadaver sessions and believed them to be useful for learning anatomy (94% agree or strongly agree), learning the steps of an operation (76% agree or strongly agree), and increasing confidence in doing an operation (53% agree or strongly agree). Trainees wanted to have more sessions (87% agree or strongly agree), and believed they would spend free time in the cadaver laboratory (58% agree or strongly agree). Compared with other learning modalities, cadaver sessions were ranked first for learning surgical anatomy, followed by textbooks, simulators, web sites, animate laboratories, and lectures. Respondents also ranked cadaver sessions first for increasing confidence in performing a procedure and for learning the steps of an operation. The cost of cadavers represented the major expense of the program. Fresh cadaver dissections represent a solution to the challenges of efficient, safe, and effective general surgery education. Residents have a positive attitude toward these teaching sessions and found them to be more effective than other learning modalities.

  10. The unclosing premature mortality gap in gout: a general population-based study.

    Science.gov (United States)

    Fisher, Mark C; Rai, Sharan K; Lu, Na; Zhang, Yuqing; Choi, Hyon K

    2017-07-01

    Gout, the most common inflammatory arthritis, is associated with premature mortality. Whether this mortality gap has improved over time, as observed in rheumatoid arthritis (RA), is unknown. Using an electronic medical record database representative of the UK general population, we identified incident gout cases and controls between 1999 and 2014. The gout cohort was divided based on year of diagnosis into early (1999-2006) and late (2007-2014) cohorts. We compared the mortality rates and HRs, adjusting for potential confounders between the cohorts. We conducted sensitivity analyses among patients with gout who received at least one prescription for urate-lowering therapy, which has been found to have a validity of 90%. In both cohorts, patients with gout showed similar levels of excess mortality compared with their corresponding comparison cohort (ie, 29.1 vs 23.5 deaths/1000 person-years and 23.0 vs 18.8 deaths/1000 person-years in the early and late cohorts, respectively). The corresponding mortality HRs were 1.25 (95% CI 1.21 to 1.30) and 1.24 (95% CI 1.20 to 1.29), and the multivariable HRs were 1.10 (95% CI 1.06 to 1.15) and 1.09 (95% CI 1.05 to 1.13), respectively (both p values for interaction >0.72). Our sensitivity analyses showed similar findings (both p values for interaction >0.88). This general population-based cohort study indicates that the level of premature mortality among patients with gout remains unimproved over the past 16 years, unlike RA during the same period. This unclosing premature mortality gap calls for improved management of gout and its comorbidities.

  11. Design of the SLAC RCE Platform: A General Purpose ATCA Based Data Acquisition System

    International Nuclear Information System (INIS)

    Herbst, R.; Claus, R.; Freytag, M.; Haller, G.; Huffer, M.; Maldonado, S.; Nishimura, K.; O'Grady, C.; Panetta, J.; Perazzo, A.; Reese, B.; Ruckman, L.; Thayer, J.G.; Weaver, M.

    2015-01-01

    The SLAC RCE platform is a general purpose clustered data acquisition system implemented on a custom ATCA compliant blade, called the Cluster On Board (COB). The core of the system is the Reconfigurable Cluster Element (RCE), which is a system-on-chip design based upon the Xilinx Zynq family of FPGAs, mounted on custom COB daughter-boards. The Zynq architecture couples a dual-core ARM Cortex-A9 processor with a high-performance 28 nm FPGA. The RCE has 12 external general purpose bi-directional high speed links, each supporting serial rates of up to 12 Gbps. Eight RCE nodes are included on a COB, each with a 10 Gbps connection to an on-board 24-port Ethernet switch integrated circuit. The COB is designed to be used with a standard full-mesh ATCA backplane, allowing multiple RCE nodes to be tightly interconnected with minimal interconnect latency. Multiple shelves can be clustered using the front panel 10 Gbps connections. The COB also supports local and inter-blade timing and trigger distribution. An experiment-specific Rear Transition Module adapts the 96 high speed serial links to specific experiments and allows an experiment-specific timing and busy feedback connection. This coupling of processors with a high performance FPGA fabric in a low latency, multiple node cluster allows high speed data processing that can be easily adapted to any physics experiment. RTEMS and Linux are both ported to the module. The RCE has been used in, or is the baseline for, several current and proposed experiments (LCLS, HPS, LSST, ATLAS-CSC, LBNE, DarkSide, ILC-SiD, etc.).

  12. A generalized formulation for noise-based seismic velocity change measurements

    Science.gov (United States)

    Gómez-García, C.; Brenguier, F.; Boué, P.; Shapiro, N.; Droznin, D.; Droznina, S.; Senyukov, S.; Gordeev, E.

    2017-12-01

    The observation of continuous seismic velocity changes is a powerful tool for detecting seasonal variations in crustal structure, volcanic unrest, co- and post-seismic evolution of stress in fault areas, or the effects of fluid injection. The standard approach for measuring such velocity changes relies on comparing travel times in the coda of a set of seismic signals, usually noise-based cross-correlations retrieved at different dates, against a reference trace, usually an average over dates. Good stability of the noise sources in both space and time is then the main assumption for reliable measurements. Unfortunately, these conditions are often not fulfilled, as happens when ambient-noise sources are non-stationary, such as the emissions of low-frequency volcanic tremors. We propose a generalized formulation for retrieving continuous time series of noise-based seismic velocity changes without any arbitrary reference cross-correlation function. We set up a general framework for future applications of this technique by performing synthetic tests. In particular, we study the reliability of the retrieved velocity changes in the case of seasonal-type trends, transient effects (similar to those produced by an earthquake or a volcanic eruption), and sudden velocity drops and recoveries caused by transient local source emissions. Finally, we apply this approach to a real dataset of noise cross-correlations. We choose the Klyuchevskoy volcanic group (Kamchatka) as a case study, where the recorded wavefield is hampered by loss of data and dominated by strongly localized volcanic tremor sources. Despite these wavefield contaminations, we retrieve clear seismic velocity drops associated with the eruptions of the Klyuchevskoy and Tolbachik volcanoes in 2010 and 2012, respectively.
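A common reference-trace measurement that this work generalizes away from is the stretching method; a minimal sketch on synthetic coda follows (the trace model, grid and sign convention are illustrative).

```python
import numpy as np

def stretching_dvv(ref, cur, t, eps_grid):
    """Grid-search stretching estimate of a homogeneous velocity change.

    For each trial eps, the reference is resampled to ref(t*(1+eps)) and
    correlated with the current trace; the best eps is returned together
    with its correlation coefficient (dv/v = eps by construction here)."""
    best = (-2.0, 0.0)                         # (correlation, eps)
    for eps in eps_grid:
        stretched = np.interp(t * (1.0 + eps), t, ref)
        cc = np.corrcoef(stretched, cur)[0, 1]
        if cc > best[0]:
            best = (cc, eps)
    return best[1], best[0]

# Synthetic coda: damped oscillation; "current" trace made with a 0.5% change
t = np.linspace(0, 20, 4000)
ref = np.exp(-0.1 * t) * np.sin(2 * np.pi * 3 * t)
true_eps = 0.005
cur = np.exp(-0.1 * t) * np.sin(2 * np.pi * 3 * t * (1 + true_eps))
eps_grid = np.linspace(-0.01, 0.01, 201)
dvv, cc = stretching_dvv(ref, cur, t, eps_grid)
print(dvv, cc)
```

The paper's formulation avoids fixing `ref` to one arbitrary trace and instead inverts all pairwise date-to-date measurements jointly, which is what makes it robust to non-stationary tremor sources.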

  13. Over 600 mobile base station measurements in Buenos Aires City, how far are general public limits?

    International Nuclear Information System (INIS)

    Aguirre, Anibal; Dalmas Di Giovanni, Norberto; Garcia Diaz, Javier; Douthat, Analia; Munoz, Claudio; Saint Nom, Roxana

    2008-01-01

    Full text: The general public's worries about non-ionizing radiation (NIR) produced by mobile base stations aren't new, especially in big cities like Buenos Aires, where the number of antennas for communication services (mobile, data links, FM broadcasting, TV and others) is considerable. Buenos Aires City has 3 million inhabitants in a surface of 203 km2 and about 700 mobile base stations, counting macro and micro cells. When public demand reached local government authorities, the first logical step was to promote a large measurement campaign to produce a radiation map of the city. The campaign was carried out over 6 months by two measurement teams, one of CITEFA NIR specialists and the other of ITBA NIR specialists. Local NIR measurement guidelines establish that between 12 and 16 points have to be measured around each NIR source within a 50 m or 100 m radius, depending on the features of the neighbourhood's buildings (buildings in the city centre are not the same as in a quiet residential district). The measured values had to be compared with the Argentinean exposure limits, which are the same as the ICNIRP limits. With over 7500 points measured across the entire city with wide-band instruments (200 kHz-40 GHz), we started our analysis. This information has been used by local authorities to update a radiation map that can be consulted on the local government's website. We compared the obtained values with several international general-public exposure limits. From this comparison we found that over 90% of the measured points were under the general-public ICNIRP limits for cell phone frequencies; the surprise was that about 80% of them were also below the Russian limit (6 V/m), the strictest in the world. From these results we conclude that if, in a big city with a very high density of NIR sources, the majority of values fall well below the limits, the situation will be even better in a small town with few NIR sources.

  14. Equihash: Asymmetric Proof-of-Work Based on the Generalized Birthday Problem

    Directory of Open Access Journals (Sweden)

    Alex Biryukov

    2017-04-01

    Full Text Available Proof-of-work is a central concept in modern cryptocurrencies and denial-of-service protection tools, but the requirement for fast verification has so far made it easy prey for GPU-, ASIC-, and botnet-equipped users. Attempts to rely on memory-intensive computations in order to remedy the disparity between architectures have resulted in slow or broken schemes. In this paper we solve this open problem and show how to construct an asymmetric proof-of-work (PoW) based on a computationally hard problem, which requires a great deal of memory to generate a proof (a "memory-hardness" feature) but is instant to verify. Our primary proposal, Equihash, is a PoW based on the generalized birthday problem and an enhanced version of Wagner's algorithm for it. We introduce the new technique of algorithm binding to prevent cost amortization and demonstrate that possible parallel implementations are constrained by memory bandwidth. Our scheme has tunable and steep time-space tradeoffs, which impose large computational penalties if less memory is used. Our solution is practical and ready to deploy: a reference implementation of a proof-of-work requiring 700 MB of RAM runs in 15 seconds on a 2.1 GHz CPU, increases the computation by a factor of 1000 if memory is halved, and produces a proof just 120 bytes long.
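The memory-hard core of Equihash is Wagner's algorithm for the generalized birthday problem. A single sort-and-match round (k = 2, i.e. a plain collision search, with BLAKE2b standing in for the scheme's actual hash and parameter choices) can be sketched as:

```python
import hashlib

def h(nonce, n_bits):
    """First n_bits of BLAKE2b(nonce) as an int (stand-in for Equihash's hash)."""
    d = hashlib.blake2b(nonce.to_bytes(8, "big"), digest_size=8).digest()
    return int.from_bytes(d, "big") >> (64 - n_bits)

def birthday_collision(n_bits, n_items):
    """One Wagner round (k = 2): find i < j with h(i) XOR h(j) == 0,
    i.e. a hash collision, by sort-and-scan in O(N log N).

    The full algorithm iterates this on XOR-combined lists, cancelling
    n_bits/(k+1) bits per round; the lists themselves are what force
    the memory use."""
    pairs = sorted((h(i, n_bits), i) for i in range(n_items))
    for (v1, i1), (v2, i2) in zip(pairs, pairs[1:]):
        if v1 == v2:
            return i1, i2
    return None

sol = birthday_collision(n_bits=16, n_items=1 << 12)   # 2^12 nonces, 16-bit values
print(sol)
```

Verification is instant because checking one returned pair needs only two hash evaluations, while finding it required holding and sorting the whole list — the asymmetry the paper exploits.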

  15. Fermion unification model based on the intrinsic SU(8) symmetry of a generalized Dirac equation

    Directory of Open Access Journals (Sweden)

    Eckart Marsch

    2015-10-01

    Full Text Available A natural generalization of the original Dirac spinor into a multi-component spinor is achieved, which corresponds to the single lepton and the three quarks of the first family of the standard model of elementary particle physics. Different fermions result from similarity transformations of the Dirac equation, but apparently there can be no more fermions according to the maximal multiplicity revealed in this study. Rotations in the fermion state space are achieved by the unitary generators of the U(1) and SU(3) groups, corresponding to quantum electrodynamics (QED), based on electric charge, and chromodynamics (QCD), based on colour charge. In addition to hypercharge, the dual degree of freedom of hyperspin emerges, which occurs due to the duplicity implied by the two related (Weyl and Dirac) representations of the Dirac equation. This yields the SU(2) symmetry of the weak interaction, which can be married to U(1) to generate the unified electroweak interaction as in the standard model. Therefore, the symmetry group encompassing all three groups mentioned above is SU(8), which can accommodate and unify the observed eight basic stable fermions.

  16. A secure communication scheme based on generalized function projective synchronization of a new 5D hyperchaotic system

    International Nuclear Information System (INIS)

    Wu, Xiangjun; Fu, Zhengye; Kurths, Jürgen

    2015-01-01

    In this paper, a new five-dimensional hyperchaotic system is proposed based on the Lü hyperchaotic system. Some of its basic dynamical properties, such as equilibria, Lyapunov exponents, bifurcations and various attractors, are investigated. Furthermore, a new secure communication scheme based on generalized function projective synchronization (GFPS) of this hyperchaotic system with an uncertain parameter is presented. The communication scheme is composed of the modulation, the chaotic transmitter, the chaotic receiver and the demodulation. The modulation mechanism is to modulate the message signal into the system parameter. The chaotic signals are then sent to the receiver via a public channel. At the receiver end, by designing the controllers and the parameter update rule, GFPS between the transmitter and receiver systems is achieved and the unknown parameter is estimated simultaneously. The message signal can finally be recovered using the identified parameter and the corresponding demodulation method. There is no limitation on the message size. Numerical simulations are performed to show the validity and feasibility of the presented secure communication scheme. (paper)

  17. Generalized composite multiscale permutation entropy and Laplacian score based rolling bearing fault diagnosis

    Science.gov (United States)

    Zheng, Jinde; Pan, Haiyang; Yang, Shubao; Cheng, Junsheng

    2018-01-01

    Multiscale permutation entropy (MPE) is a recently proposed nonlinear dynamic method for measuring the randomness and detecting nonlinear dynamic changes of time series, and it can be used effectively to extract nonlinear dynamic fault features from vibration signals of rolling bearings. To overcome the drawback of the coarse-graining process in MPE, an improved MPE method called generalized composite multiscale permutation entropy (GCMPE) is proposed in this paper. The influence of the parameters on GCMPE and its comparison with MPE are also studied by analyzing simulated data. GCMPE is applied to fault feature extraction from the vibration signal of rolling bearings; then, based on GCMPE, the Laplacian score for feature selection and a particle swarm optimization based support vector machine, a new fault diagnosis method for rolling bearings is put forward. Finally, the proposed method is applied to experimental data of rolling bearings. The analysis results show that the proposed method can effectively realize fault diagnosis of rolling bearings and has a higher fault recognition rate than existing methods.
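The composite coarse-graining idea that GCMPE builds on can be sketched as follows. This is a generic composite multiscale permutation entropy, not the authors' generalized variant, and the embedding parameters are illustrative.

```python
import numpy as np
from math import factorial, log

def permutation_entropy(x, m=3, tau=1):
    """Normalized permutation entropy of order m (Bandt-Pompe ordinal patterns)."""
    patterns = {}
    for i in range(len(x) - (m - 1) * tau):
        pat = tuple(np.argsort(x[i: i + m * tau: tau]))   # ordinal pattern
        patterns[pat] = patterns.get(pat, 0) + 1
    total = sum(patterns.values())
    h = -sum(c / total * log(c / total) for c in patterns.values())
    return h / log(factorial(m))                          # normalize to [0, 1]

def composite_mpe(x, scale, m=3):
    """Composite multiscale PE: average PE over every coarse-graining offset,
    which reduces the variance of the single-offset coarse-graining in MPE."""
    pes = []
    for offset in range(scale):
        end = offset + (len(x) - offset) // scale * scale
        coarse = np.asarray(x[offset:end]).reshape(-1, scale).mean(axis=1)
        pes.append(permutation_entropy(coarse, m))
    return float(np.mean(pes))

rng = np.random.default_rng(3)
noise = rng.standard_normal(5000)                 # uncorrelated: entropy near 1
trendy = np.cumsum(rng.standard_normal(5000))     # correlated random walk: lower
print(composite_mpe(noise, scale=5), composite_mpe(trendy, scale=5))
```

In the diagnosis pipeline of the paper, such entropy values over a range of scales form the feature vector that the Laplacian score then ranks before SVM classification.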

  18. Degradation data analysis based on a generalized Wiener process subject to measurement error

    Science.gov (United States)

    Li, Junxing; Wang, Zhihua; Zhang, Yongbo; Fu, Huimin; Liu, Chengrui; Krishnaswamy, Sridhar

    2017-09-01

    Wiener processes have received considerable attention in degradation modeling over the last two decades. In this paper, we propose a generalized Wiener process degradation model that takes unit-to-unit variation, time-correlated structure and measurement error into consideration simultaneously. The constructed methodology subsumes a series of models studied in the literature as limiting cases. A simple method is given to determine the transformed time scale forms of the Wiener process degradation model. Model parameters can then be estimated by maximum likelihood estimation (MLE). The cumulative distribution function (CDF) and the probability density function (PDF) of the Wiener process with measurement errors are given based on the concept of the first hitting time (FHT). The percentiles of performance degradation (PD) and the failure time distribution (FTD) are also obtained. Finally, a comprehensive simulation study is conducted to demonstrate the necessity of incorporating measurement errors in the degradation model and the efficiency of the proposed model. Two illustrative real applications, involving the degradation of carbon-film resistors and the wear of sliding metal, are given. The comparative results show that the constructed approach yields reasonable results and enhanced inference precision.
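A stripped-down version of such a model — linear drift, Brownian diffusion and additive measurement error, without the paper's time-scale transformation or unit-to-unit drift variation — can be simulated and naively fitted as follows; all parameter values are illustrative.

```python
import numpy as np

def simulate_degradation(n_units, n_times, mu, sigma, sigma_eps, seed=0):
    """Simulate a linear-drift Wiener degradation process with measurement
    error:  Y(t) = mu*t + sigma*B(t) + eps,  eps ~ N(0, sigma_eps^2).

    Returns the observation times and an (n_units, n_times) array of
    noisy degradation measurements."""
    rng = np.random.default_rng(seed)
    t = np.arange(1, n_times + 1, dtype=float)
    dt = np.diff(np.concatenate(([0.0], t)))
    # Independent Gaussian increments give the Wiener (Brownian) path
    increments = mu * dt + sigma * np.sqrt(dt) * rng.standard_normal((n_units, n_times))
    true_path = np.cumsum(increments, axis=1)
    return t, true_path + sigma_eps * rng.standard_normal((n_units, n_times))

t, y = simulate_degradation(n_units=50, n_times=100, mu=0.5, sigma=0.3, sigma_eps=0.2)
# Naive drift estimate: average terminal degradation divided by elapsed time
mu_hat = y[:, -1].mean() / t[-1]
print(mu_hat)
```

The paper replaces this naive moment estimate with a full MLE that separates sigma from sigma_eps, which is what enables the FHT-based failure time distribution.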

  19. A general science-based framework for dynamical spatio-temporal models

    Science.gov (United States)

    Wikle, C.K.; Hooten, M.B.

    2010-01-01

    Spatio-temporal statistical models are increasingly being used across a wide variety of scientific disciplines to describe and predict spatially-explicit processes that evolve over time. Correspondingly, in recent years there has been a significant amount of research on new statistical methodology for such models. Although descriptive models that approach the problem from the second-order (covariance) perspective are important, and innovative work is being done in this regard, many real-world processes are dynamic, and it can be more efficient in some cases to characterize the associated spatio-temporal dependence by the use of dynamical models. The chief challenge with the specification of such dynamical models has been related to the curse of dimensionality. Even in fairly simple linear, first-order Markovian, Gaussian error settings, statistical models are often overparameterized. Hierarchical models have proven invaluable in their ability to deal with this issue to some extent by allowing dependency among groups of parameters. In addition, this framework has allowed for the specification of science-based parameterizations (and associated prior distributions) in which classes of deterministic dynamical models (e.g., partial differential equations (PDEs), integro-difference equations (IDEs), matrix models, and agent-based models) are used to guide specific parameterizations. Most of the focus for the application of such models in statistics has been on the linear case. The problems mentioned above with linear dynamic models are compounded in the case of nonlinear models. In this sense, the need for coherent and sensible model parameterizations is not only helpful, it is essential. Here, we present an overview of a framework for incorporating scientific information to motivate dynamical spatio-temporal models. First, we illustrate the methodology with the linear case. We then develop a general nonlinear spatio-temporal framework that we call general quadratic

  20. 77 FR 13367 - General Electric-Hitachi Global Laser Enrichment, LLC, Proposed Laser-Based Uranium Enrichment...

    Science.gov (United States)

    2012-03-06

    ... NUCLEAR REGULATORY COMMISSION [NRC-2009-0157] General Electric-Hitachi Global Laser Enrichment, LLC, Proposed Laser-Based Uranium Enrichment Facility, Wilmington, NC AGENCY: Nuclear Regulatory... Impact Statement (EIS) for the proposed General Electric- Hitachi Global Laser Enrichment, LLC (GLE...

  1. How to Mutually Advance General Education and Major-Based Education: A Grounded Theory Study on the Course Level

    Science.gov (United States)

    Fang, Hualiang

    2018-01-01

    The author employs grounded theory to investigate the teaching process of an interdisciplinary general education course at A University as a case. The author finds that under the condition of rather concrete relations between the subject of a major-based course and that of an elected general education course, if the major course is taught with a…

  2. Development and Preliminary Impacts of the Implementation of an Authentic Research-Based Experiment in General Chemistry

    Science.gov (United States)

    Tomasik, Janice Hall; Cottone, Katelyn E.; Heethuis, Mitchell T.; Mueller, Anja

    2013-01-01

    Incorporating research-based lab activities into general chemistry at a large university can be challenging, considering the high enrollments and costs typically associated with the courses. Performing sweeping curricular overhauls of the general chemistry laboratory can be difficult, and in some cases discouraged, as many would rather maintain…

  3. Study on general design of dual-DMD based infrared two-band scene simulation system

    Science.gov (United States)

    Pan, Yue; Qiao, Yang; Xu, Xi-ping

    2017-02-01

    Mid-wave infrared (MWIR) and long-wave infrared (LWIR) two-band scene simulation systems are test equipment used for infrared two-band imaging seekers. Such a system must not only be qualified for the working wavebands, but also satisfy the essential requirement that its infrared radiation characteristics correspond to the real scene. Past single digital micromirror device (DMD) based infrared scene simulation systems do not take the large difference between target and background radiation into account, and they cannot modulate the two-band light beams separately. Consequently, a single-DMD based infrared scene simulation system cannot accurately express the thermal scene model built by the host computer, and is of limited practical use. To solve this problem, we design a dual-DMD based, dual-channel, co-aperture, compact-structure infrared two-band scene simulation system. The operating principle of the system is introduced in detail, and the energy transfer process of the hardware-in-the-loop simulation experiment is analyzed as well. An equation for the signal-to-noise ratio of the infrared detector in the seeker is also derived, guiding the overall system design. The general design scheme of the system is given, including the creation of the infrared scene model, overall control, optical-mechanical structure design and image registration. By analyzing and comparing past designs, we discuss the arrangement of the optical engine framework in the system. In line with the working principle and overall design, we summarize the key techniques in the system.

  4. The generalized lewis acid-base titration of palladium and niobium

    Science.gov (United States)

    Cima, M.; Brewer, L.

    1988-12-01

    The high thermodynamic stability of alloys composed of platinum group metals and group IVB and VB metals has been explained by an electronic interaction analogous to the Lewis acid-base concept for nontransition elements. The analogy is further demonstrated by the titration of palladium by addition of niobium. The activity of niobium in solid palladium was measured as a function of concentration by solid-state galvanic cells and study of the ternary oxide phase diagram. The galvanic cells were of the type Pt/NbO2,Nb2O4.8/YDT/NbOy,NbPd/Pt, where the solid electrolyte is yttria-doped thoria (YDT). Ternary phase diagrams for the Pd-Nb-O and Rh-Nb-O systems were obtained by characterizing samples equilibrated at 1000 °C. The phase relationships found in the ternary diagrams were also used to derive thermochemical data for the alloys. Thermochemical quantities for other acid-base stabilized alloys such as Nb-Rh, Ti-Pd, and Ti-Rh were also measured. The excess partial molar ΔGxs/R of niobium at infinite dilution was determined to be -31 kilo-Kelvin at 1000 °C, and the ΔG°/R of formation of a mole of NbPd3.55 is -21 kilo-Kelvin. These results and those for the other systems are used to assess the importance of valence electron configuration, nuclear charge, and crystal field effects in the context of generalized Lewis acid-base theory. It is concluded that both the nuclear charge of the atom and crystal field splitting of the valence orbitals significantly affect the basicity of the platinum group metals.

  5. Changes in Alpha Frequency and Power of the Electroencephalogram during Volatile-Based General Anesthesia

    Directory of Open Access Journals (Sweden)

    Darren Hight

    2017-05-01

    Oscillations in the electroencephalogram (EEG) at the alpha frequency (8–12 Hz) are thought to be ubiquitous during surgical anesthesia, but the details of how this oscillation responds to ongoing changes in volatile anesthetic concentration have not been well characterized. It is not known how often alpha oscillations are absent in the clinical context, how sensitively alpha frequency and power respond to changes in anesthetic concentration, and what effect increased age has on alpha frequency. Bipolar EEG was recorded frontally from 305 patients undergoing surgery with sevoflurane or desflurane providing general anesthesia. A new method of detecting the presence of alpha oscillations, based on the stability of the rate of change of the peak frequency in the alpha range, was developed. Linear concentration-response curves were fitted to assess the sensitivity of alpha power and frequency measures to changing levels of anesthesia. Alpha oscillations were inexplicably absent in around 4% of patients. Maximal alpha power increased with increasing volatile anesthetic concentrations in half of the patients, and decreased in the remaining patients. Alpha frequency decreased with increasing anesthetic concentrations in nearly 90% of patients. Increasing age was associated with decreased sensitivity to volatile anesthetic concentrations, and with decreased alpha frequency, which sometimes transitioned into the theta range (5–7 Hz). While peak alpha frequency shows a consistent slowing with increasing volatile concentrations, the peak power of the oscillation does not, suggesting that frequency might be more informative of depth of anesthesia than traditional power-based measures during volatile-based anesthesia. The alpha oscillation becomes slower with increasing age, even when the decreased anesthetic needs of older patients are taken into account.
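
    The peak-frequency idea can be illustrated with a simple spectral sketch: locate the dominant frequency within the 8–12 Hz band of a power spectrum. This is a hypothetical toy on synthetic data, not the stability-based detector developed in the study:

```python
import numpy as np

# Synthetic "EEG": a 10 Hz alpha oscillation buried in noise (assumed values)
fs = 250.0                              # sampling rate, Hz
t = np.arange(0, 10, 1 / fs)            # 10 s of data
rng = np.random.default_rng(1)
x = np.sin(2 * np.pi * 10.0 * t) + 0.5 * rng.standard_normal(t.size)

# Power spectrum via the real FFT
freqs = np.fft.rfftfreq(x.size, d=1 / fs)
power = np.abs(np.fft.rfft(x)) ** 2

# Restrict to the alpha band (8-12 Hz) and take the peak frequency
band = (freqs >= 8.0) & (freqs <= 12.0)
alpha_peak = freqs[band][np.argmax(power[band])]
print(alpha_peak)                       # close to 10 Hz
```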

  6. Polynomial Chaos–Based Bayesian Inference of K-Profile Parameterization in a General Circulation Model of the Tropical Pacific

    KAUST Repository

    Sraj, Ihab; Zedler, Sarah E.; Knio, Omar; Jackson, Charles S.; Hoteit, Ibrahim

    2016-01-01

    The authors present a polynomial chaos (PC)-based Bayesian inference method for quantifying the uncertainties of the K-profile parameterization (KPP) within the MIT general circulation model (MITgcm) of the tropical Pacific. The inference

  7. A neutron spectrum unfolding code based on generalized regression artificial neural networks

    International Nuclear Information System (INIS)

    Rosario Martinez-Blanco, Ma. del

    2016-01-01

    The most delicate part of neutron spectrometry is the unfolding process. The derivation of the spectral information is not simple because the unknown is not given directly as a result of the measurements. Novel methods based on Artificial Neural Networks have been widely investigated. In prior works, back propagation neural networks (BPNN) have been used to solve the neutron spectrometry problem; however, this kind of neural net still has some drawbacks, i.e. the optimum selection of the network topology and the long training time. Compared to BPNN, it is usually much faster to train a generalized regression neural network (GRNN), mainly because the spread constant is the only parameter used in a GRNN. Another feature is that the network will converge to a global minimum, provided that the optimal value of the spread has been determined and that the dataset adequately represents the problem space. In addition, GRNNs are often more accurate than BPNNs in prediction. These characteristics make GRNNs of great interest in the neutron spectrometry domain. This work presents a computational tool based on GRNN capable of solving the neutron spectrometry problem. This computational code automates the pre-processing, training and testing stages using a k-fold cross validation with 3 folds, the statistical analysis and the post-processing of the information, using 7 Bonner sphere rate counts as the only input data. The code was designed for a Bonner Spheres System based on a 6LiI(Eu) neutron detector and a response matrix expressed in 60 energy bins taken from an International Atomic Energy Agency compilation. - Highlights: • The main drawback of neutron spectrometry with BPNN is network topology optimization. • Compared to BPNN, it is usually much faster to train a GRNN. • GRNNs are often more accurate than BPNNs in prediction; these characteristics make GRNNs of great interest. • This computational code automates the pre-processing, training
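
    A GRNN is essentially a normalized radial-basis (kernel) regression with the spread as its single free parameter, which is why training is fast. A minimal sketch on a toy regression problem (the paper's Bonner-sphere inputs and response matrix are not reproduced):

```python
import numpy as np

def grnn_predict(X_train, y_train, X_query, sigma=0.5):
    """Generalized regression neural network: kernel-weighted average of targets."""
    # Squared Euclidean distances between query and training patterns
    d2 = ((X_query[:, None, :] - X_train[None, :, :]) ** 2).sum(axis=2)
    w = np.exp(-d2 / (2.0 * sigma ** 2))       # pattern-layer activations
    return (w @ y_train) / w.sum(axis=1)       # summation / output layers

# Toy problem: learn y = sin(x) from noisy samples
rng = np.random.default_rng(2)
X = rng.uniform(0, 2 * np.pi, size=(200, 1))
y = np.sin(X[:, 0]) + 0.05 * rng.standard_normal(200)

Xq = np.array([[np.pi / 2], [3 * np.pi / 2]])
pred = grnn_predict(X, y, Xq, sigma=0.3)
print(pred)                                    # approximately [1, -1]
```

    Note the single spread parameter sigma: choosing it (e.g. by cross validation, in the spirit of the paper's k-fold scheme) is the only tuning required.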

  8. [Promotion of community-based care in Africa: example of community general practice in Benin].

    Science.gov (United States)

    Caplain, Roland; Yacoubou, Ismaïl; Adedemy, Didier; Sani, Alidou; Takam, Sandrine; Desplats, Dominique

    2014-01-01

    Considerable effort has been made to provide rural African populations with basic health care, but the quality of this care remains unsatisfactory due to the absence of first-line GPs. This is a paradoxical situation in view of the large number of physicians trained in medical schools in French-speaking Africa and Madagascar. The lack of GPs working in rural areas is a real concern, as many young doctors remain unemployed in cities. For more than 20 years, the NGO Santé Sud has proposed a Community General Medicine concept, which, combined with a support system, has allowed the installation of more than 200 community GPs in Mali and Madagascar. The advantage of this concept is that it provides family medicine and primary health care in the same practice. Since 2009, Santé Sud has supported an installation project in rural areas of northern Benin, where community GPs work independently, as complementary partners of the public sector. Since 2013, the installation process has included a university degree created with the University of Parakou Faculty of Medicine. Based on this experience in Benin, the authors show that the presence of a first-line general practitioner is an original strategy that provides a major contribution to health promotion: reducing health inequalities between rural and urban populations, allowing women to receive medically assisted childbirth close to home, developing family planning activities, education and health care for chronic diseases, and strengthening health coverage by participating in vaccination campaigns. Due to their functions and proximity, community GPs represent an added value for health promotion.

  9. Web-based consultation between general practitioners and nephrologists: a cluster randomized controlled trial.

    Science.gov (United States)

    van Gelder, Vincent A; Scherpbier-de Haan, Nynke D; van Berkel, Saskia; Akkermans, Reinier P; de Grauw, Inge S; Adang, Eddy M; Assendelft, Pim J; de Grauw, Wim J C; Biermans, Marion C J; Wetzels, Jack F M

    2017-08-01

    Consultation of a nephrologist is important in aligning care for patients with chronic kidney disease (CKD) at the primary-secondary care interface. However, current consultation methods come with practical difficulties that can lead to postponed consultation or patient referral instead. This study aimed to investigate whether a web-based consultation platform, telenephrology, led to a lower referral rate of indicated patients. Furthermore, we assessed the consultation rate, quality of care, costs and general practitioners' (GPs') experiences with telenephrology. In a cluster randomized controlled trial, 47 general practices in the Netherlands were randomized to access to telenephrology or to enhanced usual care. A total of 3004 CKD patients aged 18 years or older who were under primary care were included (intervention group n = 1277, control group n = 1727) and 2693 completed the trial. All practices participated in a CKD management course and were given an overview of their CKD patients. The referral rates amounted to 2.3% (n = 29) in the intervention group and 3.0% (n = 52) in the control group, a non-significant difference (OR 0.61; 95% CI 0.31 to 1.23). The intervention group's consultation rate was 6.3% (n = 81) against 5.0% (n = 87) (OR 2.00; 95% CI 0.75-5.33). We found no difference in quality of care or costs. The majority of GPs had a positive opinion about telenephrology. The data in our study do not allow for conclusions on the effect of telenephrology on the rate of patient referrals and provider-to-provider consultations, compared to conventional methods. It was positively evaluated by GPs and was non-inferior in terms of quality of care and costs.

  10. Surprise! Infants Consider Possible Bases of Generalization for a Single Input Example

    Science.gov (United States)

    Gerken, LouAnn; Dawson, Colin; Chatila, Razanne; Tenenbaum, Josh

    2015-01-01

    Infants have been shown to generalize from a small number of input examples. However, existing studies allow two possible means of generalization. One is via a process of noting similarities shared by several examples. Alternatively, generalization may reflect an implicit desire to explain the input. The latter view suggests that generalization…

  11. A multichannel nonlinear adaptive noise canceller based on generalized FLANN for fetal ECG extraction

    International Nuclear Information System (INIS)

    Ma, Yaping; Wei, Guo; Sun, Jinwei; Xiao, Yegui

    2016-01-01

    In this paper, a multichannel nonlinear adaptive noise canceller (ANC) based on the generalized functional link artificial neural network (FLANN, GFLANN) is proposed for fetal electrocardiogram (FECG) extraction. A FIR filter and a GFLANN are equipped in parallel in each reference channel to respectively approximate the linearity and nonlinearity between the maternal ECG (MECG) and the composite abdominal ECG (AECG). A fast scheme is also introduced to reduce the computational cost of the FLANN and the GFLANN. Two (2) sets of ECG time sequences, one synthetic and one real, are utilized to demonstrate the improved effectiveness of the proposed nonlinear ANC. The real dataset is derived from the Physionet non-invasive FECG database (PNIFECGDB) including 55 multichannel recordings taken from a pregnant woman. It contains two subdatasets that consist of 14 and 8 recordings, respectively, with each recording being 90 s long. Simulation results based on these two datasets reveal, on the whole, that the proposed ANC does enjoy higher capability to deal with nonlinearity between MECG and AECG as compared with previous ANCs in terms of fetal QRS (FQRS)-related statistics and morphology of the extracted FECG waveforms. In particular, for the second real subdataset, the F1-measure results produced by the PCA-based template subtraction (TS_pca) technique and six (6) single-reference channel ANCs using LMS- and RLS-based FIR filters, Volterra filter, FLANN, GFLANN, and adaptive echo state neural network (ESN_a) are 92.47%, 93.70%, 94.07%, 94.22%, 94.90%, 94.90%, and 95.46%, respectively. The same F1-measure statistical results from five (5) multi-reference channel ANCs (LMS- and RLS-based FIR filters, Volterra filter, FLANN, and GFLANN) for the second real subdataset turn out to be 94.08%, 94.29%, 94.68%, 94.91%, and 94.96%, respectively. These results indicate that the ESN_a and GFLANN perform best, with the ESN_a being slightly better than the GFLANN but about four times
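
    The core idea of a FLANN-based ANC, a nonlinear functional expansion of the reference input followed by an LMS-adapted linear combiner, can be sketched as follows. All signals here are synthetic stand-ins, and the paper's generalized FLANN terms and multichannel structure are omitted:

```python
import numpy as np

def flann_expand(x, order=2):
    # Trigonometric functional links: [1, x, sin(p*pi*x), cos(p*pi*x)], p = 1..order
    feats = [1.0, x]
    for p in range(1, order + 1):
        feats.append(np.sin(p * np.pi * x))
        feats.append(np.cos(p * np.pi * x))
    return np.array(feats)

rng = np.random.default_rng(3)
N = 5000
ref = rng.uniform(-1, 1, N)                    # maternal reference (MECG stand-in)
interference = ref + 0.3 * ref ** 2            # nonlinear leakage into abdominal lead
fetal = 0.1 * np.sin(2 * np.pi * 0.2 * np.arange(N))   # weak fetal component
abdominal = fetal + interference               # composite AECG stand-in

mu = 0.02                                      # LMS step size
w = np.zeros(6)                                # order-2 expansion -> 6 features
err = np.empty(N)
for n in range(N):
    phi = flann_expand(ref[n])
    e = abdominal[n] - w @ phi                 # canceller output (extracted "FECG")
    w += mu * e * phi                          # LMS weight update
    err[n] = e

# After convergence the canceller output tracks the weak fetal component
print(np.mean((err[-1000:] - fetal[-1000:]) ** 2))
```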

  12. A Theory of Gravity and General Relativity based on Quantum Electromagnetism

    Science.gov (United States)

    Zheng-Johansson, J. X.

    2018-02-01

    Based on first principles solutions in a unified framework of quantum mechanics and electromagnetism we predict the presence of a universal attractive depolarisation radiation (DR) Lorentz force (F) between quantum entities, each being either an IED matter particle or light quantum, in a polarisable dielectric vacuum. Given two quantum entities i = 1, 2 of either kind, of characteristic frequencies ν_i^0, masses m_i^0 = hν_i^0/c^2 and separated at a distance r^0, the solution for F is F = -𝒢m_1^0m_2^0/(r^0)^2, where 𝒢 = χ_0^2e^4/(12π^2ε_0^2ρ_λ); χ_0 is the susceptibility and ρ_λ is the reduced linear mass density of the vacuum. This force F resembles in all respects Newton's gravity and is accurate at the weak-F limit; hence 𝒢 equals the gravitational constant G. The DR wave fields and hence the gravity are each propagated in the dielectric vacuum at the speed of light c; these cannot be shielded by matter. A test particle µ of mass m^0 therefore interacts gravitationally with all of the building particles of a given large mass M at r^0 apart, by a total gravitational force F = -GMm^0/(r^0)^2 and potential V = -∂F/∂r^0. For a finite V and hence a total Hamiltonian H = m^0c^2 + V, solution of the eigenvalue equation of µ presents a red-shift in the eigen frequency ν = ν^0(1 - GM/r^0c^2) and hence in other wave variables. The quantum solutions combined with the wave nature of the gravity further lead to a dilated gravito-optical distance r = r^0/(1 - GM/r^0c^2) and time t = t^0/(1 - GM/r^0c^2), and modified Newton's gravity and Einstein's mass-energy relation. Applications of these give predictions of the general relativistic effects manifested in the four classical test experiments of Einstein's general relativity (GR), in direct agreement with the experiments and the predictions given based on GR.

  13. COMPARING IMAGE-BASED METHODS FOR ASSESSING VISUAL CLUTTER IN GENERALIZED MAPS

    Directory of Open Access Journals (Sweden)

    G. Touya

    2015-08-01

    Map generalization abstracts and simplifies geographic information to derive maps at smaller scales. The automation of map generalization requires techniques to evaluate the global quality of a generalized map. The quality and legibility of a generalized map are related to the complexity of the map, or the amount of clutter in the map, i.e. the excessive amount of information and its disorganization. Computer vision research is highly interested in measuring clutter in images, and this paper proposes to compare some of the existing techniques from computer vision as applied to the evaluation of generalized maps. Four techniques from the literature are described and tested on a large set of maps generalized at different scales: edge density, subband entropy, quad tree complexity, and segmentation clutter. The results are analyzed against several criteria related to generalized maps: the identification of cluttered areas, the preservation of the global amount of information, the handling of occlusions and overlaps, foreground vs. background, and blank space reduction.
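
    Of the four measures, edge density is the simplest: the fraction of pixels whose gradient magnitude exceeds a threshold. A minimal sketch (the threshold and test images are illustrative assumptions):

```python
import numpy as np

def edge_density(img, threshold=0.1):
    # Fraction of pixels whose gradient magnitude exceeds the threshold
    gy, gx = np.gradient(img.astype(float))
    mag = np.hypot(gx, gy)
    return (mag > threshold).mean()

rng = np.random.default_rng(4)
busy = rng.integers(0, 2, size=(64, 64)).astype(float)   # cluttered "map"
smooth = np.tile(np.linspace(0.0, 1.0, 64), (64, 1))     # nearly empty "map"

print(edge_density(busy), edge_density(smooth))          # busy scores far higher
```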

  14. Study of pressure-volume relationships and higher derivatives of bulk modulus based on generalized equations of state

    International Nuclear Information System (INIS)

    Kushwah, S.S.; Shrivastava, H.C.; Singh, K.S.

    2007-01-01

    We have generalized the pressure-volume (P-V) relationships using simple polynomial and logarithmic expansions so as to make them consistent with the infinite-pressure extrapolation according to the model of Stacey. The formulations are used to evaluate P-V relationships and pressure derivatives of the bulk modulus up to third order (K', K'' and K''') for the Earth's core material, taking input parameters based on the seismological data. The results based on the equations of state (EOS) generalized in the present study are found to yield good agreement with the Stacey EOS. The generalized logarithmic EOS due to Poirier and Tarantola deviates substantially from the seismic values for P, K and K'. The generalized Rydberg EOS gives almost identical results to the Birch-Murnaghan third-order EOS. Both of them yield deviations from the seismic data which are in the opposite direction to those found from the generalized Poirier-Tarantola logarithmic EOS.
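
    For reference, the third-order Birch-Murnaghan EOS mentioned above can be evaluated directly; the parameter values below are merely illustrative, not the paper's seismologically derived inputs:

```python
import numpy as np

def birch_murnaghan_3(V, V0, K0, K0p):
    """Third-order Birch-Murnaghan P(V): pressure as a function of compression."""
    x = (V0 / V) ** (1.0 / 3.0)
    return 1.5 * K0 * (x**7 - x**5) * (1.0 + 0.75 * (K0p - 4.0) * (x**2 - 1.0))

V0, K0, K0p = 1.0, 140.0, 4.5        # K0 in GPa; K0' dimensionless (assumed values)
V = np.linspace(0.7, 1.0, 4)
P = birch_murnaghan_3(V, V0, K0, K0p)
print(P)                              # P = 0 at V = V0, rising under compression
```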

  15. Calculation of generalized Lorenz-Mie theory based on the localized beam models

    International Nuclear Information System (INIS)

    Jia, Xiaowei; Shen, Jianqi; Yu, Haitao

    2017-01-01

    It has been proved that localized approximation (LA) is the most efficient way to evaluate the beam shape coefficients (BSCs) in generalized Lorenz-Mie theory (GLMT). The numerical calculation of relevant physical quantities is a challenge for its practical applications due to the limit of computer resources. The study presents an improved algorithm of the GLMT calculation based on the localized beam models. The BSCs and the angular functions are calculated by multiplying them with pre-factors so as to keep their values in a reasonable range. The algorithm is primarily developed for the original localized approximation (OLA) and is further extended to the modified localized approximation (MLA). Numerical results show that the algorithm is efficient, reliable and robust. - Highlights: • In this work, we introduce the proper pre-factors to the Bessel functions, BSCs and the angular functions. With this improvement, all the quantities involved in the numerical calculation are scaled into a reasonable range of values so that the algorithm can be used for computing the physical quantities of the GLMT. • The algorithm is not only an improvement in numerical technique, it also implies that the set of basic functions involved in the electromagnetic scattering (and sonic scattering) can be reasonably chosen. • The algorithms of the GLMT computations introduced in previous references suggested that the order of the n and m sums is interchanged. In this work, the sum of azimuth modes is performed for each partial wave. This offers the possibility to speed up the computation, since the sum of partial waves can be optimized according to the illumination conditions and the sum of azimuth modes can be truncated by selecting a criterion discussed in . • Numerical results show that the algorithm is efficient, reliable and robust, even in very exotic cases. The algorithm presented in this paper is based on the original localized approximation and it can also be used for the

  16. Generalized renewal process for repairable systems based on finite Weibull mixture

    International Nuclear Information System (INIS)

    Veber, B.; Nagode, M.; Fajdiga, M.

    2008-01-01

    Repairable systems can be brought to one of several possible states following a repair: 'as good as new', 'as bad as old' and 'better than old but worse than new'. The probabilistic models traditionally used to estimate the expected number of failures account for the first two states, but they do not properly apply to the last one, which is more realistic in practice. In this paper, a probabilistic model that is applicable to all three after-repair states, called the generalized renewal process (GRP), is applied. Simplistically, GRP addresses the repair assumption by introducing the concept of virtual age into stochastic point processes, enabling them to represent the full spectrum of repair assumptions. The shape of measured or design life distributions of systems can vary considerably, and therefore frequently cannot be approximated by simple distribution functions. The scope of the paper is to prove that a finite Weibull mixture, with positive component weights only, can be used as the underlying distribution of the time to first failure (TTFF) of the GRP model, on condition that the unknown parameters can be estimated. To support the main idea, three examples are presented. In order to estimate the unknown parameters of the GRP model with an m-fold Weibull mixture, the EM algorithm is applied. The GRP model with m mixture component distributions is compared to the standard GRP model based on the two-parameter Weibull distribution by calculating the expected number of failures. It can be concluded that the suggested GRP model with a Weibull mixture with an arbitrary but finite number of components is suitable for predicting failures based on the past performance of the system.
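
    The virtual-age idea behind the GRP can be made concrete with a small Monte Carlo sketch. Here a single two-parameter Weibull is used for the TTFF (the paper's finite Weibull mixture and EM estimation are not reproduced), with a Kijima type-I virtual age and repair-effectiveness parameter q (q = 0 is 'as good as new', q = 1 'as bad as old'):

```python
import numpy as np

def expected_failures(beta, eta, q, horizon, n_sim=3000, seed=5):
    """Mean number of failures on [0, horizon] for a Weibull-based GRP."""
    rng = np.random.default_rng(seed)
    counts = np.zeros(n_sim)
    for s in range(n_sim):
        t, v = 0.0, 0.0              # calendar time, virtual age
        while True:
            u = rng.random()
            # Conditional Weibull sample given survival beyond virtual age v
            x = eta * ((v / eta) ** beta - np.log(u)) ** (1.0 / beta) - v
            t += x
            if t > horizon:
                break
            counts[s] += 1
            v = q * (v + x)          # Kijima type-I virtual age after repair
    return counts.mean()

# Increasing hazard (beta > 1): perfect repair yields fewer expected failures
n_perfect = expected_failures(beta=2.0, eta=1.0, q=0.0, horizon=3.0)
n_minimal = expected_failures(beta=2.0, eta=1.0, q=1.0, horizon=3.0)
print(n_perfect, n_minimal)          # roughly 3 vs 9
```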

  17. Violence Affects Physical and Mental Health Differently: The General Population Based Tromsø Study.

    Directory of Open Access Journals (Sweden)

    Oddgeir Friborg

    This general population-based study examined associations between violence and mental health, musculoskeletal pain, and early disability pension. The prevalence and consequences of good vs. poor adjustment (resilience vs. vulnerability) following encounters with violence were also examined. Data were based on the sixth wave of the "Tromsø Study" (N = 12,981; 65.7% response rate; 53.4% women; M_age = 57.5 years, SD_age = 12.7 years). Self-reported data on psychological violence (threats) and physical violence (beaten/kicked), mental health (anxiety/depression), musculoskeletal pain (MSP), and granting of disability pension (DP) were collected. Men suffered more violent events during childhood than women did, and vice versa during adulthood. Psychological violence implied poorer mental health and slightly more MSP than physical violence. The risk of MSP was highest for violence occurring during childhood in women and during the last year for men. A dose-response relationship between an increasing number of violent encounters and poorer health was observed. About 58% of individuals reported no negative impact of violence (hence, the resilience group), whereas 42% considered themselves more vulnerable following encounters with violence. Regression analyses indicated comparable mental health but slightly more MSP in the resilience group compared to the unexposed group, whereas the vulnerable group had significantly worse health overall and a higher risk of early granting of DP. Resilience is not an all-or-nothing matter, as physical ailments may characterize individuals adapting well following encounters with violence.

  18. Efficient generalized Golub-Kahan based methods for dynamic inverse problems

    Science.gov (United States)

    Chung, Julianne; Saibaba, Arvind K.; Brown, Matthew; Westman, Erik

    2018-02-01

    We consider efficient methods for computing solutions to and estimating uncertainties in dynamic inverse problems, where the parameters of interest may change during the measurement procedure. Compared to static inverse problems, incorporating prior information in both space and time in a Bayesian framework can become computationally intensive, in part, due to the large number of unknown parameters. In these problems, explicit computation of the square root and/or inverse of the prior covariance matrix is not possible, so we consider efficient, iterative, matrix-free methods based on the generalized Golub-Kahan bidiagonalization that allow automatic regularization parameter and variance estimation. We demonstrate that these methods for dynamic inversion can be more flexible than standard methods and develop efficient implementations that can exploit structure in the prior, as well as possible structure in the forward model. Numerical examples from photoacoustic tomography, space-time deblurring, and passive seismic tomography demonstrate the range of applicability and effectiveness of the described approaches. Specifically, in passive seismic tomography, we demonstrate our approach on both synthetic and real data. To demonstrate the scalability of our algorithm, we solve a dynamic inverse problem with approximately 43 000 measurements and 7.8 million unknowns in under 40 s on a standard desktop.
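
    The building block of these methods, Golub-Kahan bidiagonalization, requires only matrix-vector products with A and A^T; the generalized variant additionally weights the inner products by noise and prior covariance matrices, which this plain sketch omits:

```python
import numpy as np

def golub_kahan(A, b, k):
    """k steps of Golub-Kahan bidiagonalization: A V_k = U_{k+1} B_k."""
    m, n = A.shape
    U = np.zeros((m, k + 1))
    V = np.zeros((n, k))
    alpha, beta = np.zeros(k), np.zeros(k + 1)

    beta[0] = np.linalg.norm(b)
    U[:, 0] = b / beta[0]
    for i in range(k):
        w = A.T @ U[:, i] - (beta[i] * V[:, i - 1] if i > 0 else 0.0)
        alpha[i] = np.linalg.norm(w)
        V[:, i] = w / alpha[i]
        w = A @ V[:, i] - alpha[i] * U[:, i]
        beta[i + 1] = np.linalg.norm(w)
        U[:, i + 1] = w / beta[i + 1]
    return U, V, alpha, beta

rng = np.random.default_rng(6)
A = rng.standard_normal((30, 20))
b = rng.standard_normal(30)
k = 10
U, V, alpha, beta = golub_kahan(A, b, k)

# Assemble the (k+1) x k lower-bidiagonal factor B_k and verify the relation
B = np.zeros((k + 1, k))
B[np.arange(k), np.arange(k)] = alpha
B[np.arange(1, k + 1), np.arange(k)] = beta[1:]
print(np.allclose(A @ V, U @ B))     # True
```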

  19. On the generalization of the hazard rate twisting-based simulation approach

    KAUST Repository

    Rached, Nadhir B.

    2016-11-17

    Estimating the probability that a sum of random variables (RVs) exceeds a given threshold is a well-known challenging problem. Naive Monte Carlo simulation is the standard technique for the estimation of this type of probability. However, this approach is computationally expensive, especially when dealing with rare events. An alternative approach is the use of variance reduction techniques, known for their efficiency in requiring fewer computations to achieve the same accuracy requirement. Most of these methods have thus far been proposed to deal with specific settings under which the RVs belong to particular classes of distributions. In this paper, we propose a generalization of the well-known hazard rate twisting Importance Sampling-based approach that presents the advantage of being logarithmically efficient for arbitrary sums of RVs. The wide scope of applicability of the proposed method is mainly due to our particular way of selecting the twisting parameter. It is worth observing that this interesting feature is rarely satisfied by variance reduction algorithms, whose performance has often only been proven under some restrictive assumptions. It comes along with good efficiency, illustrated by selected simulation results comparing the performance of the proposed method with some existing techniques.
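
    For intuition, the classical exponential-twisting special case for light-tailed sums can be sketched directly; hazard rate twisting generalizes this idea to arbitrary (including heavy-tailed) distributions, and the twist-parameter choice below is the standard mean-shifting heuristic, not the paper's selection rule:

```python
import math
import numpy as np

# Rare event: P(X1 + ... + XN > gamma) for i.i.d. Exp(1) summands
rng = np.random.default_rng(7)
N, gamma, n_samples = 5, 30.0, 100_000

# Twist so that the mean of the sum under the twisted density equals gamma
theta = 1.0 - N / gamma                        # twisted rate is 1 - theta
X = rng.exponential(scale=1.0 / (1.0 - theta), size=(n_samples, N))
S = X.sum(axis=1)

# Likelihood ratio f(x)/f_theta(x) = exp(-theta * S) / (1 - theta)^N
weights = np.exp(-theta * S) / (1.0 - theta) ** N
estimate = np.mean((S > gamma) * weights)

# Exact Erlang(N, 1) tail probability for comparison
exact = math.exp(-gamma) * sum(gamma**k / math.factorial(k) for k in range(N))
print(estimate, exact)                         # both around 3.6e-9
```

    A naive Monte Carlo estimator would need on the order of 10^8 samples just to observe this event once on average, which is the motivation for twisting-based importance sampling.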

  20. Community-based inquiry improves critical thinking in general education biology.

    Science.gov (United States)

    Quitadamo, Ian J; Faiola, Celia L; Johnson, James E; Kurtz, Martha J

    2008-01-01

    National stakeholders are becoming increasingly concerned about the inability of college graduates to think critically. Research shows that, while both faculty and students deem critical thinking essential, only a small fraction of graduates can demonstrate the thinking skills necessary for academic and professional success. Many faculty are considering nontraditional teaching methods that incorporate undergraduate research because they more closely align with the process of doing investigative science. This study compared a research-focused teaching method called community-based inquiry (CBI) with traditional lecture/laboratory in general education biology to discover which method would elicit greater gains in critical thinking. Results showed significant critical-thinking gains in the CBI group but decreases in a traditional group and a mixed CBI/traditional group. Prior critical-thinking skill, instructor, and ethnicity also significantly influenced critical-thinking gains, with nearly all ethnicities in the CBI group outperforming peers in both the mixed and traditional groups. Females, who showed decreased critical thinking in traditional courses relative to males, outperformed their male counterparts in CBI courses. Through the results of this study, it is hoped that faculty who value both research and critical thinking will consider using the CBI method.

  1. Folding to Curved Surfaces: A Generalized Design Method and Mechanics of Origami-based Cylindrical Structures

    Science.gov (United States)

    Wang, Fei; Gong, Haoran; Chen, Xi; Chen, C. Q.

    2016-09-01

    Origami structures enrich the field of mechanical metamaterials with the ability to convert morphologically and systematically between two-dimensional (2D) thin sheets and three-dimensional (3D) spatial structures. In this study, an in-plane design method is proposed to approximate curved surfaces of interest with generalized Miura-ori units. Using this method, two combination types of crease lines are unified in one reprogrammable procedure, generating multiple types of cylindrical structures. Structural completeness conditions of the finite-thickness counterparts to the two types are also proposed. As an example of the design method, the kinematics and elastic properties of an origami-based circular cylindrical shell are analysed. The concept of Poisson’s ratio is extended to the cylindrical structures, demonstrating their auxetic property. An analytical model of rigid plates linked by elastic hinges, consistent with numerical simulations, is employed to describe the mechanical response of the structures. Under particular load patterns, the circular shells display novel mechanical behaviour such as snap-through and limiting folding positions. By analysing the geometry and mechanics of the origami structures, we extend the design space of mechanical metamaterials and provide a basis for their practical applications in science and engineering.

  2. The association between Internet addiction and personality disorders in a general population-based sample.

    Science.gov (United States)

    Zadra, Sina; Bischof, Gallus; Besser, Bettina; Bischof, Anja; Meyer, Christian; John, Ulrich; Rumpf, Hans-Jürgen

    2016-12-01

    Background and aims Data on Internet addiction (IA) and its association with personality disorders are rare. Previous studies are largely restricted to clinical samples and insufficient measurement of IA. Methods Cross-sectional analyses are based on data from a German sub-sample (n = 168; 86 males; 71 meeting criteria for IA) with increased levels of excessive Internet use, derived from a general population sample (n = 15,023). IA was assessed with a comprehensive standardized interview using the structure of the Composite International Diagnostic Interview and the criteria of Internet Gaming Disorder as suggested in DSM-5. Impulsivity, attention deficit hyperactivity disorder, and self-esteem were assessed with widely used questionnaires. Results Participants with IA showed higher frequencies of personality disorders (29.6%) compared to those without IA (9.3%; p < .001). In males with IA, Cluster C personality disorders were more prevalent than among non-addicted males. Compared to participants who had IA only, lower rates of remission of IA were found among participants with IA and an additional Cluster B personality disorder. Personality disorders were significantly associated with IA in multivariate analysis. Comorbidity of IA and personality disorders must be considered in prevention and treatment.

  3. Physiotherapists and General Practitioners attitudes towards 'Physio Direct' phone based musculoskeletal Physiotherapy services: a national survey.

    Science.gov (United States)

    Harland, Nicholas; Blacklidge, Brian

    2017-06-01

    Physiotherapy phone-based "Physio Direct" (PD) musculoskeletal triage-and-treat services are a relatively new phenomenon. This study explored Physiotherapist and GP attitudes towards PD services. Online national survey via cascade e-mail initiated by the study leads. 488 Physiotherapists and 68 GPs completed the survey. The survey asked three negatively worded and three positively worded Likert-scale questions regarding PD services. It also collected demographic data and more global attitudes, including a version of the friends and family test. Overall, both Physiotherapists and GPs have positive attitudes towards PD services. There was global agreement that PD triage was a good idea, but in both groups the majority of respondents who expressed a definite opinion thought that patients would still eventually need to be seen face to face. The vast majority of all respondents also thought patients should be given a choice about first accessing PD services. Physiotherapists with experience of PD services had more positive and less negative attitudes than those without experience. More detailed results are discussed. Relevant clinical stakeholders have generally positive attitudes towards PD services, but more so when they have experience of them. Counter to research findings, significant proportions of respondents believe patients accessing PD services will still need to be seen face to face. The significant majority of respondents believe patients should be given a choice whether they access PD services in the first instance or not. Copyright © 2016 Chartered Society of Physiotherapy. Published by Elsevier Ltd. All rights reserved.

  4. Quasi-Block Copolymers Based on a General Polymeric Chain Stopper.

    Science.gov (United States)

    Sanguramath, Rajashekharayya A; Nealey, Paul F; Shenhar, Roy

    2016-07-11

    Quasi-block copolymers (q-BCPs) are block copolymers consisting of conventional and supramolecular blocks, in which the conventional block is end-terminated by a functionality that interacts with the supramolecular monomer (a "chain stopper" functionality). A new design of q-BCPs based on a general polymeric chain stopper, which consists of polystyrene end-terminated with a sulfonate group (PS-SO3Li), is described. Through viscosity measurements and a detailed diffusion-ordered NMR spectroscopy study, it is shown that PS-SO3Li can effectively cap two types of model supramolecular monomers to form q-BCPs in solution. Furthermore, differential scanning calorimetry data and structural characterization of thin films by scanning force microscopy suggest the existence of the q-BCP architecture in the melt. The new design considerably simplifies the synthesis of polymeric chain stoppers, thus promoting the utilization of q-BCPs as smart, nanostructured materials. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  5. Anxiety and dysthymia: local prevalence estimates based on drug prescriptions by general practitioners in Turin (Italy).

    Science.gov (United States)

    Mamo, C; Farina, E; Cicio, R; Fanì, M

    2014-01-01

    The aim of the study was to obtain local estimates of the prevalence of anxiety and dysthymic disorders among attendees of primary care at the local level, useful for pursuing better management of health care services. The study was conducted in Health District no. 2 of Turin (an industrial town in northwest Italy). The criteria for identification of cases were based on the drug prescriptions made by general practitioners (GPs), selected in order to assure high specificity. The study involved 86 physicians (with 87,885 attendees). As expected, the crude and standardized prevalences were higher in women (anxiety: 2.9% vs 1.3% in men; dysthymia: 3.8% vs 1.7% in men), with a peak in women aged over 75 years (anxiety: 4.8%; dysthymia: 6.2%). In comparison to male GPs, female GPs had a higher prevalence of patients with anxiety disorders, whereas the prevalences of dysthymia were similar. Despite the discussed limitations, the methodology used allows sufficiently reliable estimates of the prevalence of common mental disorders to be obtained at the local level, providing information useful for organizing primary care in the Health District.

  6. Conceptual development of a complete LWR reload design methodology based on generalized perturbation theory

    International Nuclear Information System (INIS)

    White, J.R.

    1986-01-01

    A new approach for the physics design and analysis of LWR reload cores is developed and demonstrated through several practical applications. The new design philosophy uses first- and second-order response derivatives to predict the important reactor performance characteristics (power peaking, reactivity coefficients, etc.) for any number of possible material configurations (assembly shuffling and burnable poison loadings). The response derivatives are computed using generalized perturbation theory (GPT) techniques. This report describes in detail an idealized GPT-based design system. The idealized system would contain individual modules to generate the required first-order and higher-order sensitivity data. It would also contain at least two major application codes; one for core design optimization and the other for evaluation of several safety parameters of interest in off-normal situations. This ideal system would be fully automated, user-friendly, and quite flexible in its ability to provide a variety of design and analysis capabilities. Information gained from these three studies gives a good foundation for the development of a complete integrated design package

  7. Addition and multiplication of beta-expansions in generalized Tribonacci base

    Directory of Open Access Journals (Sweden)

    Petr Ambrož

    2007-05-01

    We study properties of β-numeration systems, where β > 1 is the real root of the polynomial x³ − mx² − x − 1, m ∈ ℕ, m ≥ 1. We consider arithmetic operations on the set of β-integers, i.e., on the set of numbers whose greedy expansion in base β has no fractional part. We show that the number of fractional digits arising under addition of β-integers is at most 5 for m ≥ 3 and 6 for m = 2, whereas under multiplication it is at most 6 for all m ≥ 2. We thus generalize the results known for the Tribonacci numeration system, i.e., for m = 1. We summarize the combinatorial properties of infinite words naturally defined by β-integers. We point out the differences between the structure of β-integers in the cases m = 1 and m ≥ 2.
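The greedy expansion underlying the notion of β-integers can be illustrated directly. The sketch below (function names are ours) locates the real root β of x³ − mx² − x − 1 by bisection and then computes greedy digits d = ⌊x/β^p⌋ from the leading power downward; numbers whose expansion stops at β⁰ are exactly the β-integers. It works in floating point, so it is an illustration rather than the exact arithmetic the paper requires.

```python
def beta_root(m):
    """Real root beta > 1 of x^3 - m*x^2 - x - 1, found by bisection."""
    f = lambda x: x**3 - m * x**2 - x - 1
    lo, hi = 1.0, m + 2.0          # f(1) = -m - 1 < 0 and f(m + 2) > 0 for m >= 1
    for _ in range(200):
        mid = (lo + hi) / 2
        if f(mid) < 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

def greedy_expansion(x, beta, ndigits=12):
    """Greedy digits of x >= 1 in base beta, from the leading power k down.
    Returns (k, digits) with digits[i] the coefficient of beta**(k - i)."""
    k = 0
    while beta ** (k + 1) <= x:    # find the leading power beta**k <= x
        k += 1
    digits = []
    for p in range(k, k - ndigits, -1):
        d = int(x / beta ** p)     # greedy: largest digit not exceeding x
        digits.append(d)
        x -= d * beta ** p
    return k, digits
```

For m = 1 this recovers the Tribonacci base β ≈ 1.8393; greedy digits are always strictly below β.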

  8. Student participation in World Wide Web-based curriculum development of general chemistry

    Science.gov (United States)

    Hunter, William John Forbes

    1998-12-01

    This thesis describes an action research investigation of improvements to instruction in General Chemistry at Purdue University. Specifically, the study was conducted to guide continuous reform of curriculum materials delivered via the World Wide Web by involving students, instructors, and curriculum designers. The theoretical framework for this study was based upon constructivist learning theory, and knowledge claims were developed using an inductive analysis procedure. The results of this study are assertions made in three domains: learning chemistry content via the World Wide Web, learning about learning via the World Wide Web, and learning about participation in an action research project. In the chemistry content domain, students were able to learn chemical concepts that utilized 3-dimensional visualizations, but not textual and graphical information delivered via the Web. In the learning-via-the-Web domain, the use of feedback, the placement of supplementary aids, navigation, and the perception of conceptual novelty were all important to students' use of the Web. In the participation-in-action-research domain, students learned about the complexity of curriculum development and valued their empowerment as part of the process.

  9. The adjoint method for general EEG and MEG sensor-based lead field equations

    International Nuclear Information System (INIS)

    Vallaghe, Sylvain; Papadopoulo, Theodore; Clerc, Maureen

    2009-01-01

    Most of the methods for the inverse source problem in electroencephalography (EEG) and magnetoencephalography (MEG) use a lead field as an input. The lead field is the function which relates any source in the brain to its measurements at the sensors. For complex geometries, there is no analytical formula of the lead field. The common approach is to numerically compute the value of the lead field for a finite number of point sources (dipoles). There are several drawbacks: the model of the source space is fixed (a set of dipoles), and the computation can be expensive for as many as 10 000 dipoles. The common idea to bypass these problems is to compute the lead field from a sensor point of view. In this paper, we use the adjoint method to derive general EEG and MEG sensor-based lead field equations. Within a simple framework, we provide a complete review of the explicit lead field equations, and we are able to extend these equations to non-pointlike sensors.

  10. Generalized hardware post-processing technique for chaos-based pseudorandom number generators

    KAUST Repository

    Barakat, Mohamed L.

    2013-06-01

    This paper presents a generalized post-processing technique for enhancing the pseudorandomness of digital chaotic oscillators through a nonlinear XOR-based operation with rotation and feedback. The technique allows full utilization of the chaotic output as pseudorandom number generators and improves throughput without a significant area penalty. Digital design of a third-order chaotic system with maximum function nonlinearity is presented with verified chaotic dynamics. The proposed post-processing technique eliminates statistical degradation in all output bits, thus maximizing throughput compared to other processing techniques. Furthermore, the technique is applied to several fully digital chaotic oscillators with performance surpassing previously reported systems in the literature. The enhancement in the randomness is further examined in a simple image encryption application resulting in a better security performance. The system is verified through experiment on a Xilinx Virtex 4 FPGA with throughput up to 15.44 Gbit/s and logic utilization less than 0.84% for 32-bit implementations. © 2013 ETRI.
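The flavor of such a post-processing stage can be conveyed in software. The sketch below is our illustrative model of an XOR-with-rotation-and-feedback stage on 32-bit words, not the paper's exact hardware operation: each raw chaotic output word is XORed with a rotated copy of the previous processed word, so residual bias in any one bit position is diffused across all bits over time.

```python
MASK32 = 0xFFFFFFFF

def rotl32(x, r):
    """Rotate a 32-bit word left by r bits."""
    r %= 32
    return ((x << r) | (x >> (32 - r))) & MASK32

def postprocess(raw_words, r=7):
    """XOR each raw word with a rotated copy of the previous *processed*
    word (feedback), returning the whitened output stream."""
    out, prev = [], 0
    for w in raw_words:
        y = (w & MASK32) ^ rotl32(prev, r)
        out.append(y)
        prev = y
    return out
```

The feedback term is what lets a single biased output bit influence every later output word, which is the intuition behind the reported elimination of statistical degradation in all output bits.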

  11. Pin-wise Reactor Analysis Based on the Generalized Equivalence Theory

    Energy Technology Data Exchange (ETDEWEB)

    Yu, Hwan Yeal; Heo, Woong; Kim, Yong Hee [KAIST, Daejeon (Korea, Republic of)

    2016-05-15

    In this paper, a pin-wise reactor analysis is performed based on the generalized equivalence theory. From the conventional fuel assembly lattice calculations, pin-wise 2-group cross sections and pin DFs are generated. Based on the numerical results on a small PWR benchmark, it is observed that the pin-wise core analysis provides quite accurate predictions of the effective multiplication factor, and the peak pin power error is bounded by about 3% in peripheral fuel assemblies facing the baffle-reflector. Also, it was found that relatively large pin power errors occur along the interface between clearly different fuel assemblies. It is expected that the GET-based pin-by-pin core calculation can be further developed into an advanced method for reactor analysis by improving the group constants and discontinuity factors. Recently, high-fidelity multi-dimensional analysis tools have been gaining more attention because of their accurate prediction of local parameters for core design and safety assessment. In terms of accuracy, direct whole-core transport is quite promising. However, it is clear that it is still very costly in terms of computing time and memory requirements. Another possible solution is the pin-by-pin core analysis, in which only small fuel pins are homogenized and the 3-D core analysis is still performed using a low-order operator such as the diffusion theory. In this paper, a pin-by-pin core analysis is performed using the hybrid CMFD (HCMFD) method. Hybrid CMFD is a new global-local iteration method that has been developed for efficient parallel calculation of pin-by-pin heterogeneous core analysis. For the HCMFD method, the one-node CMFD scheme is combined with a local two-node CMFD method in a non-linear way. Since the SPH method is iterative and SPH factors are not direction dependent, it is clear that the SPH method takes more computing cost and cannot take into account the different heterogeneity and transport effects at each pin interface. Unlike the SPH

  12. Quantitative analysis by laser-induced breakdown spectroscopy based on generalized curves of growth

    Energy Technology Data Exchange (ETDEWEB)

    Aragón, C., E-mail: carlos.aragon@unavarra.es; Aguilera, J.A.

    2015-08-01

    A method for quantitative elemental analysis by laser-induced breakdown spectroscopy (LIBS) is proposed. The method (Cσ-LIBS) is based on Cσ graphs, generalized curves of growth which allow including several lines of various elements at different concentrations. A so-called homogeneous double (HD) model of the laser-induced plasma is used, defined by an integration over a single region of the radiative transfer equation, combined with a separate treatment for neutral atoms (z = 0) and singly-charged ions (z = 1) in Cσ graphs and characteristic parameters. The procedure includes a criterion, based on a model limit, for eliminating data which, due to a high line intensity or concentration, are not well described by the HD model. An initial procedure provides a set of parameters (βA)^z, (ηNl)^z, T^z and N_e^z (z = 0, 1) which characterize the plasma and the LIBS system. After characterization, two different analytical procedures, resulting in relative and absolute concentrations, may be applied. To test the method, fused glass samples prepared from certified slags and pure compounds are analyzed. We determine concentrations of Ca, Mn, Mg, V, Ti, Si and Al relative to Fe in three samples prepared from slags, and absolute concentrations of Fe, Ca and Mn in three samples prepared from Fe₂O₃, CaCO₃ and Mn₂O₃. The accuracy obtained is 3.2% on average for relative concentrations and 9.2% for absolute concentrations. - Highlights: • Method for quantitative analysis by LIBS, based on Cσ graphs • Conventional calibration is replaced with characterization of the LIBS system. • All elements are determined from measurement of one or two Cσ graphs. • The method is tested with fused glass disks prepared from slags and pure compounds. • Accurate results for relative (3.2%) and absolute (9.2%) concentrations

  13. The positive effect on determinants of physical activity of a tailored, general practice-based physical activity intervention

    NARCIS (Netherlands)

    van Sluijs, E.M.F.; van Poppel-Bruinvels, M.N.M.; Twisk, J.W.R.; Brug, J.; van Mechelen, W.

    2005-01-01

    PACE (Physician-based Assessment and Counseling for Exercise) is an individualized theory-based minimal intervention strategy aimed at the enhancement of regular physical activity. The aim of this study was to evaluate the effectiveness of a PACE intervention applied by general practitioners (GPs)

  14. A Critical Examination of Frequency-Fixed Second-Order Generalized Integrator-Based Phase-Locked Loops

    DEFF Research Database (Denmark)

    Golestan, Saeed; Mousazadeh Mousavi, Seyyed-Yousef; Guerrero, Josep M.

    2017-01-01

    The implementation of a large number of single-phase phase-locked loops (PLLs) involves creating a fictitious quadrature signal. A popular approach for this purpose is using a second-order generalized integrator-based quadrature signal generator (SOGIQSG) because it results in an acceptable speed......-based PLLs (FFSOGI-PLLs) to highlight their real advantages and disadvantages....

  15. 77 FR 14838 - General Electric-Hitachi Global Laser Enrichment LLC, Commercial Laser-Based Uranium Enrichment...

    Science.gov (United States)

    2012-03-13

    ... Laser Enrichment LLC, Commercial Laser-Based Uranium Enrichment Facility, Wilmington, North Carolina... a license to General Electric-Hitachi Global Laser Enrichment LLC (GLE or the applicant) to authorize construction of a laser-based uranium enrichment facility and possession and use of byproduct...

  16. Detecting generalized synchronization of chaotic dynamical systems. A kernel-based method and choice of its parameter

    International Nuclear Information System (INIS)

    Suetani, Hiromichi; Iba, Yukito; Aihara, Kazuyuki

    2006-01-01

    An approach based on the kernel methods for capturing the nonlinear interdependence between two signals is introduced. It is demonstrated that the proposed approach is useful for characterizing generalized synchronization with a successful simple example. An attempt to choose an optimal kernel parameter based on cross validation is also discussed. (author)

  17. Exposure in emergency general surgery in a time-based residency ...

    African Journals Online (AJOL)

    Objective: This paper aimed to characterize the resident exposure to acute general surgical conditions during a three-month rotation in a general surgical unit. Setting: The Department of Surgery, University of Nairobi and Kenyatta National Referral and Teaching Hospital in Nairobi. Methods: Four residents (in their first to ...

  18. Epidemiology of unintentional injuries in childhood: a population-based survey in general practice.

    NARCIS (Netherlands)

    Otters, H.; Schellevis, F.G.; Damen, J.; Wouden, J.C. van der; Suijlekom-Smit, L.W.A.; Koes, B.W.

    2005-01-01

    This study aimed to assess the incidence of unintentional injuries presented in general practice, and to identify children at risk of experiencing an unintentional injury. We used the data of all 0–17-year-old children from a representative survey in 96 Dutch general practices in 2001. We computed

  19. Organisational determinants of production and efficiency in general practice: a population-based study

    DEFF Research Database (Denmark)

    Rose Olsen, Kim; Gyrd-Hansen, Dorte; Sørensen, Torben Højmark

    2013-01-01

    Shortage of general practitioners (GPs) and an increased political focus on primary care have enforced the interest in efficiency analysis in the Danish primary care sector. This paper assesses the association between organisational factors of general practices and production and efficiency. We a...

  20. A Computer-Based Laboratory Project for the Study of Stimulus Generalization and Peak Shift

    Science.gov (United States)

    Derenne, Adam; Loshek, Eevett

    2009-01-01

    This paper describes materials designed for classroom projects on stimulus generalization and peak shift. A computer program (originally written in QuickBASIC) is used for data collection and a Microsoft Excel file with macros organizes the raw data on a spreadsheet and creates generalization gradients. The program is designed for use with human…

  1. Structural Behavioral Study on the General Aviation Network Based on Complex Network

    Science.gov (United States)

    Zhang, Liang; Lu, Na

    2017-12-01

    The general aviation system is an open and dissipative system with complex structures and behavioral features. This paper has established the system model and network model for general aviation. We have analyzed integral attributes and individual attributes by applying complex network theory and concluded that the general aviation network has influential enterprise factors and node relations. By applying degree distribution functions, we have checked whether the network has the small-world effect, scale-free property, and network centrality that a complex network should have, and proved that the general aviation network system is a complex network. Therefore, we propose to achieve the evolution of the general aviation industrial chain into a collaborative innovation cluster of advanced-form industries by strengthening the network multiplication effect, stimulating innovation performance and spanning the structural hole path.
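As a sketch of the degree-distribution check described in such analyses, the empirical distribution P(k) can be computed directly from an edge list (pure-Python illustration; the function name and sample data are ours, not the paper's):

```python
from collections import Counter

def degree_distribution(edges):
    """Node degrees and the empirical degree distribution P(k)
    for an undirected edge list."""
    deg = Counter()
    for u, v in edges:
        deg[u] += 1
        deg[v] += 1
    n = len(deg)
    pk = {k: c / n for k, c in Counter(deg.values()).items()}
    return dict(deg), pk
```

Plotting P(k) against k on log-log axes and checking for an approximately straight line is the usual empirical signature of the scale-free property.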

  2. Optometry-based general population survey of pupil ruff atrophy and ocular hypertension.

    Science.gov (United States)

    Ang, Ghee S; Stevenson, Peter J; Sargent, Geoff; Grimmer, Peter; Corbett, Patricia; Jourdain, Erin; Wells, Anthony P

    2013-01-01

    To evaluate and describe the pupil ruff changes and relationship to intraocular pressure, pseudoexfoliation syndrome and glaucoma status in an optometric population in New Zealand. Prospective cross-sectional survey of an optometric population. Six hundred and twenty subjects over 50 years old routinely attending the participating optometry practices. Exclusion criteria included previous intraocular surgery, ophthalmic laser, uveitis, angle closure and secondary glaucoma. Multicentre study involving 11 optometry practices in the Wellington region, New Zealand. The pupillary ruff and associated gonioscopy findings of study participants were graded based on the previously published Pupil Ruff Atrophy grading system. Parameters evaluated include pupillary ruff absence and abnormality, pseudoexfoliation material and trabecular meshwork pigmentation. Correlations between inter-eye Pupil Ruff Atrophy grading differences and inter-eye intraocular pressure and cup:disc ratio differences. Six hundred and twenty subjects were included, with a mean age of 62.2 ± 9.1 years and mean intraocular pressure of 14.8 ± 3.4 mmHg. Four hundred and fourteen (66.8%) had bilateral pupil ruff changes and 12 (1.5%) had pseudoexfoliation. Inter-eye intraocular pressure asymmetry was significantly correlated with the amount of missing pupillary ruff (r = 0.111; P = 0.022) and trabecular meshwork pigmentation (r = 0.147; P = 0.002). Inter-eye cup:disc ratio asymmetry was not correlated with any of the Pupil Ruff Atrophy grading parameters. Asymmetry of pupillary ruff absence and trabecular meshwork pigmentation was correlated with intraocular pressure asymmetry (but not with cup:disc ratio asymmetry) in a general optometric population setting in New Zealand. © 2012 The Authors. Clinical and Experimental Ophthalmology © 2012 Royal Australian and New Zealand College of Ophthalmologists.

  3. A general and Robust Ray-Casting-Based Algorithm for Triangulating Surfaces at the Nanoscale

    Science.gov (United States)

    Decherchi, Sergio; Rocchia, Walter

    2013-01-01

    We present a general, robust, and efficient ray-casting-based approach to triangulating complex manifold surfaces arising in the nano-bioscience field. This feature is inserted in a more extended framework that: i) builds the molecular surface of nanometric systems according to several existing definitions, ii) can import external meshes, iii) performs accurate surface area estimation, iv) performs volume estimation, cavity detection, and conditional volume filling, and v) can color the points of a grid according to their locations with respect to the given surface. We implemented our methods in the publicly available NanoShaper software suite (www.electrostaticszone.eu). Robustness is achieved using the CGAL library and an ad hoc ray-casting technique. Our approach can deal with any manifold surface (including nonmolecular ones). Those explicitly treated here are the Connolly-Richards (SES), the Skin, and the Gaussian surfaces. Test results indicate that it is robust to rotation, scale, and atom displacement. This last aspect is evidenced by cavity detection of the highly symmetric structure of fullerene, which fails when attempted by MSMS and has problems in EDTSurf. In terms of timings, NanoShaper builds the Skin surface three times faster than the single threaded version in Lindow et al. on a 100,000 atoms protein and triangulates it at least ten times more rapidly than the Kruithof algorithm. NanoShaper was integrated with the DelPhi Poisson-Boltzmann equation solver. Its SES grid coloring outperformed the DelPhi counterpart. To test the viability of our method on large systems, we chose one of the biggest molecular structures in the Protein Data Bank, namely the 1VSZ entry, which corresponds to the human adenovirus (180,000 atoms after Hydrogen addition). We were able to triangulate the corresponding SES and Skin surfaces (6.2 and 7.0 million triangles, respectively, at a scale of 2 grids per Å) on a middle-range workstation. PMID:23577073

  4. Generalized Hough transform based time invariant action recognition with 3D pose information

    Science.gov (United States)

    Muench, David; Huebner, Wolfgang; Arens, Michael

    2014-10-01

    Human action recognition has emerged as an important field in the computer vision community due to its large number of applications such as automatic video surveillance, content based video-search and human robot interaction. In order to cope with the challenges that this large variety of applications present, recent research has focused more on developing classifiers able to detect several actions in more natural and unconstrained video sequences. The invariance discrimination tradeoff in action recognition has been addressed by utilizing a Generalized Hough Transform. As a basis for action representation we transform 3D poses into a robust feature space, referred to as pose descriptors. For each action class a one-dimensional temporal voting space is constructed. Votes are generated from associating pose descriptors with their position in time relative to the end of an action sequence. Training data consists of manually segmented action sequences. In the detection phase valid human 3D poses are assumed as input, e.g. originating from 3D sensors or monocular pose reconstruction methods. The human 3D poses are normalized to gain view-independence and transformed into (i) relative limb-angle space to ensure independence of non-adjacent joints or (ii) geometric features. In (i) an action descriptor consists of the relative angles between limbs and their temporal derivatives. In (ii) the action descriptor consists of different geometric features. In order to circumvent the problem of time-warping we propose to use a codebook of prototypical 3D poses which is generated from sample sequences of 3D motion capture data. This idea is in accordance with the concept of equivalence classes in action space. Results of the codebook method are presented using the Kinect sensor and the CMU Motion Capture Database.
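The one-dimensional temporal voting space described above can be sketched in a few lines. In this toy version (function names and data are ours), training pairs each pose-descriptor codeword with its offset to the end of the action; at detection time every observed codeword casts votes at its predicted end times, and a peak in the voting space signals a completed action:

```python
from collections import defaultdict

def train_voting_table(sequences):
    """sequences: lists of codeword ids, one per frame, each covering exactly
    one manually segmented action. Every occurrence stores its offset (in
    frames) to the end of the sequence."""
    table = defaultdict(list)
    for seq in sequences:
        end = len(seq) - 1
        for t, code in enumerate(seq):
            table[code].append(end - t)
    return table

def vote(stream, table):
    """Cast votes into a 1-D temporal voting space over the stream."""
    space = [0] * len(stream)
    for t, code in enumerate(stream):
        for offset in table.get(code, ()):
            end = t + offset
            if end < len(space):
                space[end] += 1
    return space
```

In a real system there would be one voting space per action class and the codewords would come from the prototypical-3D-pose codebook; the peak location recovers the action's end frame independently of when it started.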

  5. Experimental teaching reforms of optical fiber communication based on general education

    Science.gov (United States)

    Lan, L.; Liu, S.; Zhou, J. H.; Peng, Z. M.

    2017-08-01

    It is necessary for higher education to reform experimental teaching on the basis of general education. This paper puts forward an experimental teaching reform model for optical fiber communication in the context of general education. With reform measures such as improving the experimental content, enriching the experimental styles, modifying the experimental teaching method, and adjusting the evaluation method of experimental teaching, the concept of general education is carried throughout the experimental teaching of optical fiber communication. In this way, it facilitates the development of students and the improvement of experimental teaching quality.

  6. Estimation of the shape parameter of a generalized Pareto distribution based on a transformation to Pareto distributed variables

    OpenAIRE

    van Zyl, J. Martin

    2012-01-01

    Random variables of the generalized Pareto distribution can be transformed to those of the Pareto distribution. Explicit expressions exist for the maximum likelihood estimators of the parameters of the Pareto distribution. The performance of the estimation of the shape parameter of the generalized Pareto distribution using transformed observations, based on the probability weighted method, is tested. It was found to improve the performance of the probability weighted estimator and performs good wit...
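The transformation itself is easy to state: if X follows a generalized Pareto distribution with shape ξ > 0 and scale σ, then Y = 1 + ξX/σ follows a Pareto distribution with minimum 1 and tail index α = 1/ξ, for which the MLE α̂ = n / Σ log yᵢ is explicit. The sketch below demonstrates the identity with the true parameters plugged in; it illustrates the transformation only, not the paper's probability weighted moments procedure (function names are ours):

```python
import math
import random

def gpd_sample(xi, sigma, n, rng):
    """Inverse-CDF samples from a GPD with shape xi > 0 and scale sigma:
    F(x) = 1 - (1 + xi*x/sigma)**(-1/xi)."""
    return [sigma * ((1 - rng.random()) ** -xi - 1) / xi for _ in range(n)]

def to_pareto(xs, xi, sigma):
    """Transform GPD samples to Pareto samples with minimum 1."""
    return [1 + xi * x / sigma for x in xs]

def pareto_mle_alpha(ys):
    """Explicit MLE of the Pareto tail index (minimum fixed at 1)."""
    return len(ys) / sum(math.log(y) for y in ys)
```

Since log Y is exponential with mean ξ, the reciprocal 1/α̂ is simply the sample mean of log yᵢ and converges to the true shape parameter.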

  7. The impact of general anesthesia on child development and school performance: a population-based study.

    Science.gov (United States)

    Schneuer, Francisco J; Bentley, Jason P; Davidson, Andrew J; Holland, Andrew Ja; Badawi, Nadia; Martin, Andrew J; Skowno, Justin; Lain, Samantha J; Nassar, Natasha

    2018-04-27

    There has been considerable interest in the possible adverse neurocognitive effects of exposure to general anesthesia and surgery in early childhood. The aim of this data linkage study was to investigate developmental and school performance outcomes of children undergoing procedures requiring general anesthesia in early childhood. We included children born in New South Wales, Australia of 37+ weeks' gestation without major congenital anomalies or neurodevelopmental disability with either a school entry developmental assessment in 2009, 2012, or Grade-3 school test results in 2008-2014. We compared children exposed to general anesthesia aged <48 months to those without any hospitalization. Children with only 1 hospitalization with general anesthesia and no other hospitalization were assessed separately. Outcomes included being classified developmentally high risk at school entry and scoring below national minimum standard in school numeracy and reading tests. Of 211 978 children included, 82 156 had developmental assessment and 153 025 had school test results, with 12 848 (15.7%) and 25 032 (16.4%) exposed to general anesthesia, respectively. Children exposed to general anesthesia had 17%, 34%, and 23% increased odds of being developmentally high risk (adjusted odds ratio [aOR]: 1.17; 95% CI: 1.07-1.29); or scoring below the national minimum standard in numeracy (aOR: 1.34; 95% CI: 1.21-1.48) and reading (aOR: 1.23; 95% CI: 1.12-1.36), respectively. Although the risk for being developmentally high risk and poor reading attenuated for children with only 1 hospitalization and exposure to general anesthesia, the association with poor numeracy results remained. Children exposed to general anesthesia before 4 years have poorer development at school entry and school performance. While the association among children with 1 hospitalization with 1 general anesthesia and no other hospitalization was attenuated, poor numeracy outcome remained. Further investigation of

  8. Health sciences libraries' subscriptions to journals: expectations of general practice departments and collection-based analysis.

    Science.gov (United States)

    Barreau, David; Bouton, Céline; Renard, Vincent; Fournier, Jean-Pascal

    2018-04-01

    The aims of this study were to (i) assess the expectations of general practice departments regarding health sciences libraries' subscriptions to journals and (ii) describe the current general practice journal collections of health sciences libraries. A cross-sectional survey was distributed electronically to the thirty-five university general practice departments in France. General practice departments were asked to list ten journals to which they expected access via the subscriptions of their health sciences libraries. A ranked reference list of journals was then developed. Access to these journals was assessed through a survey sent to all health sciences libraries in France. Adequacy ratios (access/need) were calculated for each journal. All general practice departments completed the survey. The total reference list included 44 journals. This list was heterogeneous in terms of indexation/impact factor, language of publication, and scope (e.g., patient care, research, or medical education). Among the first 10 journals listed, La Revue Prescrire (96.6%), La Revue du Praticien-Médecine Générale (90.9%), the British Medical Journal (85.0%), Pédagogie Médicale (70.0%), Exercer (69.7%), and the Cochrane Database of Systematic Reviews (62.5%) had the highest adequacy ratios, whereas Family Practice (4.2%), the British Journal of General Practice (16.7%), Médecine (29.4%), and the European Journal of General Practice (33.3%) had the lowest adequacy ratios. General practice departments have heterogeneous expectations in terms of health sciences libraries' subscriptions to journals. It is important for librarians to understand the heterogeneity of these expectations, as well as local priorities, so that journal access meets users' needs.
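    The adequacy ratio reported above is a simple access/need fraction per journal; a minimal illustrative sketch (counts are hypothetical, not the study's data):

```python
# Adequacy ratio as defined in the abstract: libraries providing access to a
# journal divided by departments expressing a need for it. Counts below are
# hypothetical, not the study's data.
def adequacy_ratio(libraries_with_access: int, departments_needing: int) -> float:
    """Fraction of expressed need that is met (access/need)."""
    return libraries_with_access / departments_needing

print(f"{adequacy_ratio(3, 4):.1%}")  # 75.0%
```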

  9. Computational design of new molecular scaffolds for medicinal chemistry, part II: generalization of analog series-based scaffolds

    Science.gov (United States)

    Dimova, Dilyana; Stumpfe, Dagmar; Bajorath, Jürgen

    2018-01-01

    Aim: Extending and generalizing the computational concept of analog series-based (ASB) scaffolds. Materials & methods: Methodological modifications were introduced to further increase the coverage of analog series (ASs) and compounds by ASB scaffolds. From bioactive compounds, ASs were systematically extracted and second-generation ASB scaffolds isolated. Results: More than 20,000 second-generation ASB scaffolds with single or multiple substitution sites were extracted from active compounds, achieving more than 90% coverage of ASs. Conclusion: Generalization of the ASB scaffold approach has yielded a large knowledge base of scaffold-capturing compound series and target information. PMID:29379641

  10. A generalized logarithmic image processing model based on the gigavision sensor model.

    Science.gov (United States)

    Deng, Guang

    2012-03-01

    The logarithmic image processing (LIP) model is a mathematical theory providing generalized linear operations for image processing. The gigavision sensor (GVS) is a new imaging device that can be described by a statistical model. In this paper, by studying these two seemingly unrelated models, we develop a generalized LIP (GLIP) model. With the LIP model being its special case, the GLIP model not only provides new insights into the LIP model but also defines new image representations and operations for solving general image processing problems that are not necessarily related to the GVS. A new parametric LIP model is also developed. To illustrate the application of the new scalar multiplication operation, we propose an energy-preserving algorithm for tone mapping, which is a necessary step in image dehazing. By comparing with results using two state-of-the-art algorithms, we show that the new scalar multiplication operation is an effective tool for tone mapping.
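    The abstract does not reproduce the LIP operations themselves. For orientation, the classical LIP addition and scalar multiplication, i.e. the "generalized linear operations" that the GLIP model of the paper further generalizes, are commonly defined as below; this is a sketch of the standard LIP model only, with M an assumed gray-scale upper bound:

```python
# Classical LIP operations on gray levels in [0, M) (Jourlin-Pinoli form).
# These are the standard definitions, not the paper's GLIP extension.
M = 256.0  # assumed gray-scale upper bound

def lip_add(a, b):
    """LIP addition: a (+) b = a + b - a*b/M; result stays below M."""
    return a + b - a * b / M

def lip_scalar(lmbda, a):
    """LIP scalar multiplication: lambda (*) a = M - M*(1 - a/M)**lambda."""
    return M - M * (1.0 - a / M) ** lmbda

# LIP-adding a mid-gray to itself stays below M (unlike ordinary addition):
print(lip_add(128.0, 128.0))   # 192.0
print(lip_scalar(2.0, 128.0))  # 192.0
```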

  11. Questioning the differences between general public vs. patient based preferences towards EQ-5D-5L defined hypothetical health states.

    Science.gov (United States)

    Ogorevc, Marko; Murovec, Nika; Fernandez, Natacha Bolanos; Rupel, Valentina Prevolnik

    2017-03-28

    The purpose of this article is to explore whether any differences exist between the general population and patient based preferences towards EQ-5D-5L defined hypothetical health states. The article discusses the role of adaptation and self-interest in valuing health states and it also contributes rigorous empirical evidence to the scientific debate on the differences between the patient and general population preferences towards hypothetical health states. Patient preferences were elicited in 2015 with the EQ-5D-5L questionnaire using time trade-off and discrete choice experiment design and compared to the Spanish general population preferences, which were elicited using identical methods. Patients were chosen on a voluntary basis according to their willingness to participate in the survey. They were recruited from patient organisations and a hospital in Madrid, Spain. 282 metastatic breast cancer patients and 333 rheumatoid arthritis patients were included in the sample. The analysis revealed differences in preferences between the general population and patient groups. Based on the results of our analysis, it is suggested that the differences in preferences stem from patients being more able to accurately imagine "non-tangible" dimensions of health states (anxiety or depression, and pain or discomfort) than the general population with less experience in various health states. However, this does not mean that general public values should not be reflected in utilities derived for coverage decision making. Copyright © 2017 Elsevier B.V. All rights reserved.

  12. A unified MGF-based capacity analysis of diversity combiners over generalized fading channels

    KAUST Repository

    Yilmaz, Ferkan

    2012-03-01

    Unified exact ergodic capacity results for L-branch coherent diversity combiners including equal-gain combining (EGC) and maximal-ratio combining (MRC) are not known. This paper develops a novel generic framework for the capacity analysis of L-branch EGC/MRC over generalized fading channels. The framework is used to derive new results for the gamma-shadowed generalized Nakagami-m fading model which can be a suitable model for the fading environments encountered by high frequency (60 GHz and above) communications. The mathematical formalism is illustrated with some selected numerical and simulation results confirming the correctness of our newly proposed framework. © 2012 IEEE.
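    MGF-based capacity frameworks of this kind typically rest on the following standard integral identity (a well-known result in this literature; the paper's exact formulation may differ):

```latex
% Ergodic capacity (bits/s/Hz) of a link with end-to-end SNR \gamma,
% written in terms of the MGF M_\gamma(s) = E[e^{-s\gamma}]:
C \;=\; \frac{1}{\ln 2}\,\mathbb{E}\!\left[\ln(1+\gamma)\right]
  \;=\; \frac{1}{\ln 2}\int_{0}^{\infty}\frac{e^{-s}}{s}
        \bigl(1-\mathcal{M}_{\gamma}(s)\bigr)\,\mathrm{d}s .
```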

  13. Evidence-based classification of low back pain in the general population

    DEFF Research Database (Denmark)

    Leboeuf-Yde, Charlotte; Lemeunier, Nadège; Wedderkopp, Niels

    2013-01-01

    in the general population is a rather stable condition, characterized as either being present or absent. However, only one of the reviewed studies had used frequent data collection, which would be necessary when studying detailed course patterns over time. It was the purpose of this study to see......, if it was possible to identify whether LBP, when present, is rather episodic or chronic/persistent. Further, we wanted to see if it was possible to describe any specific course profiles of LBP in the general population....

  14. Relational coordination is associated with productivity in general practice: a survey and register based study

    DEFF Research Database (Denmark)

    Lundstrøm, Sanne Lykke; Edwards, Kasper; Reventlow, Susanne

    2014-01-01

    In this paper we investigate the association between relational coordination among the practice team in general practice and the number of consultations performed per staff member in a general practice, i.e. a proxy for productivity. We measured relational coordination using the Relational Coordination Survey...... and combined the results with register data. We found that relational coordination was statistically significantly associated with the number of consultations per staff member per year. We then divided consultations into three types: face-to-face, email, and phone consultations. We found a statistically significant...... association between relational coordination and the number of face-to-face consultations per staff member per year....

  15. Exact soliton solutions of the generalized Gross-Pitaevskii equation based on expansion method

    Directory of Open Access Journals (Sweden)

    Ying Wang

    2014-06-01

    Full Text Available We give a more generalized treatment of the 1D generalized Gross-Pitaevskii equation (GGPE) with variable term coefficients. An external harmonic trapping potential is fully considered, and the nonlinear interaction term is of arbitrary polytropic index of the superfluid wave function. We also eliminate the interdependence between the variable coefficients of the equation terms, avoiding the restrictions that occur in some other works. The exact soliton solutions of the GGPE are obtained through the combined use of a modified lens-type transformation and the F-expansion method, with dominant features such as soliton-type properties highlighted.

  16. A Novel Entropy-Based Decoding Algorithm for a Generalized High-Order Discrete Hidden Markov Model

    Directory of Open Access Journals (Sweden)

    Jason Chin-Tiong Chan

    2018-01-01

    Full Text Available The optimal state sequence of a generalized High-Order Hidden Markov Model (HHMM) is tracked from a given observational sequence using the classical Viterbi algorithm, which is based on the maximum-likelihood criterion. We introduce an entropy-based Viterbi algorithm for tracking the optimal state sequence of an HHMM. The entropy of a state sequence is a useful quantity, providing a measure of the uncertainty of an HHMM; there is no uncertainty if only one optimal state sequence is possible. This entropy-based decoding algorithm can be formulated in an extension or a reduction approach. We first extend the entropy-based algorithm for computing the optimal state sequence from a first-order HMM to a generalized HHMM with a single observational sequence. This extended algorithm performs the computation exponentially with respect to the order of the HMM; its computational complexity is due to the growth of the model parameters. We then introduce an efficient entropy-based decoding algorithm that uses a reduction approach, namely the entropy-based order-transformation forward algorithm (EOTFA), to compute the optimal state sequence of any generalized HHMM. The EOTFA algorithm transforms a generalized high-order HMM into an equivalent first-order HMM, on which an entropy-based decoding algorithm is developed. It performs the computation based on the observational sequence and requires O(TÑ²) calculations, where Ñ is the number of states in the equivalent first-order model and T is the length of the observational sequence.
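    The reduction approach bottoms out in ordinary first-order Viterbi decoding. Below is a minimal sketch of that base case (standard maximum-likelihood Viterbi, not the paper's entropy-based variant); its nested loops over states at each time step exhibit the quadratic-in-states per-step cost quoted in the abstract:

```python
import math

# Standard first-order Viterbi decoding (maximum-likelihood state path).
# Shown as the base case that an order-transformation reduces to; the
# entropy-based variant described in the abstract is not reproduced here.
def viterbi(obs, states, start_p, trans_p, emit_p):
    # delta[s] = best log-probability of any path ending in state s
    delta = {s: math.log(start_p[s]) + math.log(emit_p[s][obs[0]]) for s in states}
    back = []
    for o in obs[1:]:                      # T - 1 time steps ...
        prev, new_delta, ptr = delta, {}, {}
        for s in states:                   # ... times N target states ...
            best = max(states, key=lambda r: prev[r] + math.log(trans_p[r][s]))
            ptr[s] = best                  # ... times N predecessors = O(T*N^2)
            new_delta[s] = (prev[best] + math.log(trans_p[best][s])
                            + math.log(emit_p[s][o]))
        delta = new_delta
        back.append(ptr)
    # Backtrack from the best final state.
    last = max(states, key=lambda s: delta[s])
    path = [last]
    for ptr in reversed(back):
        path.append(ptr[path[-1]])
    return list(reversed(path))
```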

  17. Generalizing the order and the parameters of macro-operators by explanation-based learning - Extension of Explanation-Based Learning on Partial Order

    International Nuclear Information System (INIS)

    Li, Huihua

    1992-01-01

    Traditional generalization methods such as Fikes' macro-operator learning and Explanation-Based Learning (EBL) deal with totally ordered plans. They generalize only the plan operators and the conditions under which the generalized plan can be applied in its initial total order, but not the partial order among operators under which the generalized plan can be successfully executed. In this paper, we extend the notion of EBL to the partial order of plans. A new method is presented for learning, from a totally or partially ordered plan, partially ordered macro-operators (generalized plans), each of which requires a set of the weakest conditions for its reuse. It is also valuable for generalizing partially ordered plans. The operators are generalized in Fikes' triangle table. We introduce domain axioms to generate the constraints for the consistency of generalized states. After completing the triangle table with the information concerning operator destructions (interactions), we obtain a global explanation of the partial order on the operators. We then represent all the necessary ordering relations by a directed graph; exploiting this graph makes it possible to explicate the dependence between the partial orders and the constraints among the parameters of the generalized operators, and allows all the solutions to be obtained. (author) [fr

  18. Research on Incentive Mechanism of General Contractor and Subcontractors Dynamic Alliance in Construction Project Based on Team Cooperation

    Science.gov (United States)

    Yin, Honglian; Sun, Aihua; Liu, Quanru; Chen, Zhiyi

    2018-03-01

    Designing a rational incentive mechanism for the general contractor is key to motivating subcontractors to work hard and cooperate with one another, thereby ensuring that the overall goal of the project is achieved. Based on principal-agent theory, subcontractor effort is divided into two parts: individual effort and effort spent helping other subcontractors. Team-cooperation incentive models for multiple subcontractors are set up, and the corresponding incentive schemes and intensities are given. The results show that the general contractor may provide both individual and team incentives when subcontractors work independently, without affecting each other in time and space; otherwise, the general contractor may provide only individual incentives to entice team collaboration and mutual help between subcontractors. The conclusions can provide a reference for subcontract design in general contractor and subcontractor dynamic alliances.

  19. Generalized product

    OpenAIRE

    Greco, Salvatore; Mesiar, Radko; Rindone, Fabio

    2014-01-01

    Aggregation functions on [0,1] with annihilator 0 can be seen as a generalized product on [0,1]. We study the generalized product on the bipolar scale [–1,1], stressing the axiomatic point of view. Based on newly introduced bipolar properties, such as the bipolar increasingness, bipolar unit element, bipolar idempotent element, several kinds of generalized bipolar product are introduced and studied. A special stress is put on bipolar semicopulas, bipolar quasi-copulas and bipolar copulas.

  20. Comparison of stator-mounted permanent-magnet machines based on a general power equation

    DEFF Research Database (Denmark)

    Chen, Zhe; Hua, Wei; Cheng, Ming

    2009-01-01

    The stator-mounted permanent-magnet (SMPM) machines have some advantages compared with its counterparts, such as simple rotor, short winding terminals, and good thermal dissipation conditions for magnets. In this paper, a general power equation for three types of SMPM machine is introduced first...

  1. Robust performance enhancement using disturbance observers for hysteresis compensation based on generalized Prandtl–Ishlinskii model

    Czech Academy of Sciences Publication Activity Database

    El-Shaer, A.H.; Al Janaideh, M.; Krejčí, Pavel; Tomizuka, M.

    2013-01-01

    Roč. 135, č. 5 (2013), 051008 ISSN 0022-0434 R&D Projects: GA ČR GAP201/10/2315 Institutional support: RVO:67985840 Keywords : hysteresis * optimization * robust control Subject RIV: BA - General Mathematics Impact factor: 1.039, year: 2013 http://dynamicsystems.asmedigitalcollection.asme.org/article.aspx?articleid=1692306

  2. Quaternion based generalization of Chern–Simons theories in arbitrary dimensions

    Directory of Open Access Journals (Sweden)

    Alessandro D'Adda

    2017-08-01

    Full Text Available A generalization of Chern–Simons gauge theory is formulated in any dimension and arbitrary gauge group where gauge fields and gauge parameters are differential forms of any degree. The quaternion algebra structure of this formulation is shown to be equivalent to a three Z2-gradings structure, thus clarifying the quaternion role in the previous formulation.

  3. Study on sampling of continuous linear system based on generalized Fourier transform

    Science.gov (United States)

    Li, Huiguang

    2003-09-01

    In the study of signals and systems, a signal's spectrum and a system's frequency characteristic can be analyzed through the Fourier Transform (FT) and the Laplace Transform (LT). However, some singular signals such as the impulse function and the signum signal satisfy neither Riemann nor Lebesgue integration; in mathematics they are called generalized functions. This paper introduces a new definition, the Generalized Fourier Transform (GFT), and discusses generalized functions, the Fourier Transform, and the Laplace Transform within a unified framework. When a continuous linear system is sampled, this paper proposes a new method to judge whether the spectrum will overlap after the generalized Fourier transform (GFT). Causal and non-causal systems are studied, and a sampling method that maintains the system's dynamic performance is presented. The results apply to ordinary sampling as well as non-Nyquist sampling, and they have practical significance for research on the discretization of continuous linear systems and on non-Nyquist sampling of signals and systems. In particular, the condition for ensuring controllability and observability of MIMO continuous systems in references 13 and 14 is an application example of this paper.
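    For reference, the classical overlap (aliasing) criterion that such a test generalizes is the following standard result (not taken from the paper):

```latex
% Spectrum of an ideally sampled signal with period T (\omega_s = 2\pi/T):
X_s(j\omega) \;=\; \frac{1}{T}\sum_{k=-\infty}^{\infty}
                   X\bigl(j(\omega - k\omega_s)\bigr),
% and the replicas do not overlap (no aliasing) iff the signal is
% bandlimited to \omega_{\max} and
\omega_s \;>\; 2\,\omega_{\max}.
```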

  4. Designing A General Deep Web Access Approach Based On A Newly Introduced Factor; Harvestability Factor (HF)

    NARCIS (Netherlands)

    Khelghati, Mohammadreza; van Keulen, Maurice; Hiemstra, Djoerd

    2014-01-01

    The growing need of accessing more and more information draws attentions to huge amount of data hidden behind web forms defined as deep web. To make this data accessible, harvesters have a crucial role. Targeting different domains and websites enhances the need to have a general-purpose harvester

  5. Robust periodic steady state analysis of autonomous oscillators based on generalized eigenvalues

    NARCIS (Netherlands)

    Mirzavand, R.; Maten, ter E.J.W.; Beelen, T.G.J.; Schilders, W.H.A.; Abdipour, A.

    2011-01-01

    In this paper, we present a new gauge technique for the Newton Raphson method to solve the periodic steady state (PSS) analysis of free-running oscillators in the time domain. To find the frequency a new equation is added to the system of equations. Our equation combines a generalized eigenvector

  6. Robust periodic steady state analysis of autonomous oscillators based on generalized eigenvalues

    NARCIS (Netherlands)

    Mirzavand, R.; Maten, ter E.J.W.; Beelen, T.G.J.; Schilders, W.H.A.; Abdipour, A.; Michielsen, B.; Poirier, J.R.

    2012-01-01

    In this paper, we present a new gauge technique for the Newton Raphson method to solve the periodic steady state (PSS) analysis of free-running oscillators in the time domain. To find the frequency a new equation is added to the system of equations. Our equation combines a generalized eigenvector

  7. On the generalization of the hazard rate twisting-based simulation approach

    KAUST Repository

    Rached, Nadhir B.; Benkhelifa, Fatma; Kammoun, Abla; Alouini, Mohamed-Slim; Tempone, Raul

    2016-01-01

    requirement. Most of these methods have thus far been proposed to deal with specific settings under which the RVs belong to particular classes of distributions. In this paper, we propose a generalization of the well-known hazard rate twisting Importance

  8. Improving General Chemistry Course Performance through Online Homework-Based Metacognitive Training

    Science.gov (United States)

    Casselman, Brock L.; Atwood, Charles H.

    2017-01-01

    In a first-semester general chemistry course, metacognitive training was implemented as part of an online homework system. Students completed weekly quizzes and multiple practice tests to regularly assess their abilities on the chemistry principles. Before taking these assessments, students predicted their score, receiving feedback after…

  9. Generalized canonical analysis based on optimizing matrix correlations and a relation with IDIOSCAL

    NARCIS (Netherlands)

    Kiers, Henk A.L.; Cléroux, R.; Ten Berge, Jos M.F.

    1994-01-01

    Carroll's method for generalized canonical analysis of two or more sets of variables is shown to optimize the sum of squared inner-product matrix correlations between a consensus matrix and matrices with canonical variates for each set of variables. In addition, the method that analogously optimizes

  10. A general X-ray fluorescence spectrometric technique based on simple corrections for matrix effects

    International Nuclear Information System (INIS)

    Kruidhof, H.

    1978-01-01

    The method reported, which is relatively simple and generally applicable for most materials, involves a combination of borax fusion with matrix effect corrections. The latter are done with algorithms, which are derived from the intensity formulae, together with empirical coefficients. (Auth.)

  11. Comparative performance of diabetes-specific and general population-based cardiovascular risk assessment models in people with diabetes mellitus.

    Science.gov (United States)

    Echouffo-Tcheugui, J-B; Kengne, A P

    2013-10-01

    Multivariable models for estimating cardiovascular disease (CVD) risk in people with diabetes comprise general population-based models and those from diabetic cohorts. Whether one set of models should receive preference is unclear. We evaluated the evidence on direct comparisons of the performance of general population vs diabetes-specific CVD risk models in people with diabetes. MEDLINE and EMBASE databases were searched up to March 2013. Two reviewers independently identified studies that compared the performance of general CVD models vs diabetes-specific ones in the same group of people with diabetes. Independent, dual data extraction on study design, risk models, outcomes, and measures of performance was conducted. Eleven articles reporting on 22 pairwise comparisons of a diabetes-specific model (UKPDS, ADVANCE and DCS risk models) with a general population model (three variants of the Framingham model, the Prospective Cardiovascular Münster [PROCAM] score, CardioRisk Manager [CRM], the Joint British Societies Coronary Risk Chart [JBSRC], the Progetto Cuore algorithm and the CHD-Riskard algorithm) were eligible. Absolute differences in the C-statistic of diabetes-specific vs general population-based models varied from -0.13 to 0.09. Comparisons based on other performance measures were uncommon. Outcome definitions were congruent with those applied during model development. In 14 comparisons, the UKPDS, ADVANCE or DCS diabetes-specific models were superior to the general population CVD risk models. Authors reported better C-statistics for models they had developed themselves. The limited existing evidence suggests a possible discriminatory advantage of diabetes-specific over general population-based models for CVD risk stratification in diabetes. More robust head-to-head comparisons are needed to confirm this trend and strengthen recommendations. Copyright © 2013 Elsevier Masson SAS. All rights reserved.
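    The C-statistic compared throughout this review is the concordance probability; a minimal sketch on toy predictions (illustrative values only, not data from the review):

```python
from itertools import product

# C-statistic (concordance, equivalently the ROC AUC): the probability that a
# randomly chosen case receives a higher predicted risk than a non-case,
# counting ties as one half. Toy predictions only, not data from the review.
def c_statistic(risk_cases, risk_controls):
    pairs = list(product(risk_cases, risk_controls))
    wins = sum(1.0 if c > n else 0.5 if c == n else 0.0 for c, n in pairs)
    return wins / len(pairs)

print(c_statistic([0.9, 0.7, 0.6], [0.4, 0.6, 0.2]))
```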

  12. Risk assessment based on urinary bisphenol A levels in the general Korean population

    Energy Technology Data Exchange (ETDEWEB)

    Park, Jae-Hong; Hwang, Myung-Sil, E-mail: hwang1963@korea.kr; Ko, Ahra; Jeong, Da-Hyun; Lee, Jung-Mi; Moon, Guiim; Lee, Kwang-Soo; Kho, Young-Ho; Shin, Min-Ki; Lee, Hee-Seok; Kang, Hui-Seung; Suh, Jin-Hyang; Hwang, In-Gyun, E-mail: inghwang@korea.kr

    2016-10-15

    Bisphenol A (BPA) is a high-volume industrial chemical used in the global production of polycarbonate plastics and epoxy resins, which are used in food and drink containers, such as tableware (plates and mugs). Due to its broad applications, BPA has been detected in human blood, urine and breast milk as well as in environmental media, including water, indoor and outdoor air, and dust. Indeed, exposure to high concentrations of BPA can result in a variety of harmful effects, including reproductive toxicity, through a mechanism of endocrine disruption. Our comparison of reported BPA urinary concentrations among different countries revealed that exposures in Korea may be higher than those in other Asian countries and North America, but lower than or similar to those in European countries. The current study included a total of 2044 eligible subjects of all ages. The subjects were evenly divided between males and females (48.58% and 51.42%, respectively). The geometric mean (GM) of pre-adjusted (adjusted) urinary BPA concentrations was 1.83 μg/L (2.01 μg/g creatinine) for subjects of all ages, and there was no statistically significant difference in BPA concentrations between males (1.90 μg/L, 1.87 μg/g creatinine) and females (1.76 μg/L, 2.16 μg/g creatinine). Multiple regression analysis revealed only one significant association, between creatinine pre-adjusted urinary BPA concentration and age (β=−0.0868, p<0.001). The 95th percentile estimated daily intakes (EDIs) based on 24-hour recall (24HR), food frequency questionnaires (FFQ), and urinary BPA concentrations were 0.14, 0.13, and 0.22 μg/kg bw/day, respectively. According to the Ministry of Food and Drug Safety (MFDS), a tolerable daily intake (TDI) of 20 μg/kg bw/day was established for BPA from the available toxicological data. Recently, the European Food Safety Authority (EFSA) established a temporary TDI of 4 μg/kg bw/day based on current toxicological data. By comparing these TDIs with subjects
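    Estimating daily intake from spot urinary concentrations typically uses a simple mass-balance back-calculation. The sketch below assumes complete urinary excretion at steady state, with illustrative urine volume and body weight; it is not the study's actual model or parameters:

```python
# Back-of-envelope estimated daily intake (EDI) from a urinary BPA
# concentration, assuming complete urinary excretion at steady state.
# Urine volume and body weight are illustrative assumptions, not study inputs.
def edi_ug_per_kg_day(urine_conc_ug_per_L, urine_volume_L_per_day, body_weight_kg):
    """EDI (ug/kg bw/day) = concentration x daily urine volume / body weight."""
    return urine_conc_ug_per_L * urine_volume_L_per_day / body_weight_kg

edi = edi_ug_per_kg_day(1.83, 1.6, 60.0)  # GM concentration taken from the abstract
print(f"EDI ~ {edi:.3f} ug/kg bw/day (EFSA temporary TDI = 4, MFDS TDI = 20)")
```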

  13. Risk assessment based on urinary bisphenol A levels in the general Korean population

    International Nuclear Information System (INIS)

    Park, Jae-Hong; Hwang, Myung-Sil; Ko, Ahra; Jeong, Da-Hyun; Lee, Jung-Mi; Moon, Guiim; Lee, Kwang-Soo; Kho, Young-Ho; Shin, Min-Ki; Lee, Hee-Seok; Kang, Hui-Seung; Suh, Jin-Hyang; Hwang, In-Gyun

    2016-01-01

    Bisphenol A (BPA) is a high-volume industrial chemical used in the global production of polycarbonate plastics and epoxy resins, which are used in food and drink containers, such as tableware (plates and mugs). Due to its broad applications, BPA has been detected in human blood, urine and breast milk as well as in environmental media, including water, indoor and outdoor air, and dust. Indeed, exposure to high concentrations of BPA can result in a variety of harmful effects, including reproductive toxicity, through a mechanism of endocrine disruption. Our comparison of reported BPA urinary concentrations among different countries revealed that exposures in Korea may be higher than those in other Asian countries and North America, but lower than or similar to those in European countries. The current study included a total of 2044 eligible subjects of all ages. The subjects were evenly divided between males and females (48.58% and 51.42%, respectively). The geometric mean (GM) of pre-adjusted (adjusted) urinary BPA concentrations was 1.83 μg/L (2.01 μg/g creatinine) for subjects of all ages, and there was no statistically significant difference in BPA concentrations between males (1.90 μg/L, 1.87 μg/g creatinine) and females (1.76 μg/L, 2.16 μg/g creatinine). Multiple regression analysis revealed only one significant association, between creatinine pre-adjusted urinary BPA concentration and age (β=−0.0868, p<0.001). The 95th percentile estimated daily intakes (EDIs) based on 24-hour recall (24HR), food frequency questionnaires (FFQ), and urinary BPA concentrations were 0.14, 0.13, and 0.22 μg/kg bw/day, respectively. According to the Ministry of Food and Drug Safety (MFDS), a tolerable daily intake (TDI) of 20 μg/kg bw/day was established for BPA from the available toxicological data. Recently, the European Food Safety Authority (EFSA) established a temporary TDI of 4 μg/kg bw/day based on current toxicological data. By comparing these TDIs with subjects

  14. An Analysis of the Romanian General Accounting Plan. Opportunities for Adaptation to the Activity-Based Costing (ABC) Method

    Directory of Open Access Journals (Sweden)

    Irina-Alina Preda

    2008-11-01

    Full Text Available In this article, we analyze the causes that have led to the improvement of the Romanian general accounting plan according to the Activity-Based Costing (ABC) method. We explain the advantages presented by the dissociated organization of management accounting, in contrast with the tabular-statistical form. The article also describes the methodological steps to be taken in the process of recording book entries, according to the Activity-Based Costing (ABC) method in Romania.

  15. Section 3. General issues in management : Heuristics or experience-based techniques for making accounting judgments and learning

    OpenAIRE

    Schiller, Stefan

    2013-01-01

    The purpose of this paper is to further the development of initial accounting for internally generated intangible assets, relevant to both academics and practitioners, examining what happens when accountants are given principles-based discretion. This paper draws on existing insights into heuristics or experience-based techniques for making accounting judgments. Knowledge about judgment under uncertainty, and the general framework offered by the heuristics and biases program in particular, fo...

  16. Birkhoff’s theorem in Lovelock gravity for general base manifolds

    Science.gov (United States)

    Ray, Sourya

    2015-10-01

    We extend the Birkhoff’s theorem in Lovelock gravity for arbitrary base manifolds using an elementary method. In particular, it is shown that any solution of the form of a warped product of a two-dimensional transverse space and an arbitrary base manifold must be static. Moreover, the field equations restrict the base manifold such that all the non-trivial intrinsic Lovelock tensors of the base manifold are constants, which can be chosen arbitrarily, and the metric in the transverse space is determined by a single function of a spacelike coordinate which satisfies an algebraic equation involving the constants characterizing the base manifold along with the coupling constants.

  17. A Standardized Generalized Dimensionality Discrepancy Measure and a Standardized Model-Based Covariance for Dimensionality Assessment for Multidimensional Models

    Science.gov (United States)

    Levy, Roy; Xu, Yuning; Yel, Nedim; Svetina, Dubravka

    2015-01-01

    The standardized generalized dimensionality discrepancy measure and the standardized model-based covariance are introduced as tools to critique dimensionality assumptions in multidimensional item response models. These tools are grounded in a covariance theory perspective and associated connections between dimensionality and local independence.…

  18. Partial relay selection based on shadowing side information over generalized composite fading channels

    KAUST Repository

    Yilmaz, Ferkan

    2011-11-01

    In this paper, in contrast to the relay selection protocols available in the literature, we propose a partial relay selection protocol that utilizes only the shadowing side information of the relays, instead of their full channel side information, to select a relay in a dual-hop relaying system through the available limited feedback channels and power budget. We then present an exact unified performance expression combining the average bit error probability, ergodic capacity, and moment-generating function of the proposed partial relay selection over generalized fading channels. Referring to the unified performance expression introduced in [1], we explicitly offer a generic unified performance expression that can be easily calculated and that is applicable to a wide variety of fading scenarios. Finally, as an illustration of the mathematical formalism, some numerical and simulation results are generated for an extended generalized-K fading environment, and these numerical and simulation results are shown to be in perfect agreement. © 2011 IEEE.

  19. IT based social media impacts on Indonesian general legislative elections 2014

    OpenAIRE

    Abdillah, Leon Andretti

    2014-01-01

    The information technology applications in cyberspace (the internet) are currently dominated by social media. The author investigates and explores the advantages of social media implementation by political parties in the Indonesian general legislative elections 2014. There are twelve national political parties participating in the election as contestants, plus three local political parties in Aceh. In this research, the author focuses on national political parties only. The author visited, analyzed, a...

  20. GENERAL APPROACH TO MODELING NONLINEAR AMPLITUDE AND FREQUENCY DEPENDENT HYSTERESIS EFFECTS BASED ON EXPERIMENTAL RESULTS

    OpenAIRE

    Christopher Heine; Markus Plagemann

    2014-01-01

    A detailed description of the rubber parts' properties is gaining in importance in current simulation models for multi-body simulation. One application example is a multi-body simulation of the washing machine movement. Inside the washing machine, there are different force transmission elements, which consist completely or partly of rubber. Rubber parts or, generally, elastomers usually have amplitude-dependent and frequency-dependent force transmission properties. Rheological models are u...

  1. Marital status and generalized trust in other people: A population-based study

    OpenAIRE

    Lindström, Martin

    2012-01-01

    The association between marital status and generalized trust in other people was investigated. The public health survey in Skane 2008 is a cross-sectional study including 28,198 persons (55% participation rate) aged 18-80 in southern Sweden. Logistic regression models investigated associations between marital status and trust, adjusting for age, country of birth, education, emotional support, instrumental support and economic stress. 33.9% of the men and 35.7% of the women had low trust. The ...

  2. General catalogue of products and services - geology. AERO data base. 2. ed.

    International Nuclear Information System (INIS)

    1995-01-01

    The second edition of the catalogue aims to give the user an overview of the Brazilian aerogeophysical projects database (AERO), which belongs to SIGA (the Brazilian geological information system). The 151 documents (projects) are listed as follows: 52 projects performed by CPRM/DNPM - Departamento Nacional de Producao Mineral; 33 projects performed by CNEN - Comissao Nacional de Energia Nuclear and NUCLEBRAS; 7 projects executed by state government and private companies; and 59 projects executed for PETROBRAS

  3. Design of low-cost general purpose microcontroller based neuromuscular stimulator.

    Science.gov (United States)

    Koçer, S; Rahmi Canal, M; Güler, I

    2000-04-01

    In this study, a general purpose, low-cost, programmable, portable and high performance stimulator is designed and implemented. For this purpose, a microcontroller is used in the design of the stimulator. The duty cycle and amplitude of the designed system can be controlled using a keyboard. The performance test of the system has shown that the results are reliable. The overall system can be used as the neuromuscular stimulator under safe conditions.

  4. Assessment of emergency general surgery care based on formally developed quality indicators.

    Science.gov (United States)

    Ingraham, Angela; Nathens, Avery; Peitzman, Andrew; Bode, Allison; Dorlac, Gina; Dorlac, Warren; Miller, Preston; Sadeghi, Mahsa; Wasserman, Deena D; Bilimoria, Karl

    2017-08-01

    Emergency general surgery outcomes vary widely across the United States. The utilization of quality indicators can reduce variation and assist providers in administering care aligned with established recommendations. Previous quality indicators have not focused on emergency general surgery patients. We identified indicators of high-quality emergency general surgery care and assessed patient- and hospital-level compliance with these indicators. We utilized a modified Delphi technique (RAND Appropriateness Methodology) to develop quality indicators. Through 2 rankings, an expert panel ranked potential quality indicators for validity. We then examined historic compliance with select quality indicators after 4 nonelective procedures (cholecystectomy, appendectomy, colectomy, small bowel resection) at 4 academic centers. Of 25 indicators rated as valid, 13 addressed patient-level quality and 12 addressed hospital-level quality. Adherence to 18 indicators was assessed. Compliance with performing a cholecystectomy for acute cholecystitis within 72 hours of symptom onset ranged from 45% to 76%. Compliance with surgery start times within 3 hours from the decision to operate for uncontained perforated viscus ranged from 20% to 100%. Compliance with exploration of patients with small bowel obstructions with ischemia/impending perforation within 3 hours of the decision to operate was 0% to 88%. For 3 quality indicators (auditing 30-day unplanned readmissions/operations for patients previously managed nonoperatively, monitoring time to source control for intra-abdominal infections, and having protocols for bypass/transfer), none of the hospitals were compliant. Developing indicators for providers to assess their performance provides a foundation for specific initiatives. Adherence to quality indicators may improve the quality of the emergency general surgery care provided in areas where current outcomes are potentially modifiable. Copyright © 2017 Elsevier Inc. All rights reserved.

  5. Message Embedded Chaotic Masking Synchronization Scheme Based on the Generalized Lorenz System and Its Security Analysis

    Czech Academy of Sciences Publication Activity Database

    Čelikovský, Sergej; Lynnyk, Volodymyr

    2016-01-01

    Roč. 26, č. 8 (2016), 1650140-1-1650140-15 ISSN 0218-1274 R&D Projects: GA ČR GA13-20433S Institutional support: RVO:67985556 Keywords : Chaotic masking * generalized Lorenz system * message embedded synchronization Subject RIV: BC - Control Systems Theory Impact factor: 1.329, year: 2016 http://library.utia.cas.cz/separaty/2016/TR/celikovsky-0461536.pdf

  6. General herpetological collecting is size-based for five Pacific lizards

    Science.gov (United States)

    Rodda, Gordon H.; Yackel Adams, Amy A.; Campbell, Earl W.; Fritts, Thomas H.

    2015-01-01

    Accurate estimation of a species’ size distribution is a key component of characterizing its ecology, evolution, physiology, and demography. We compared the body size distributions of five Pacific lizards (Carlia ailanpalai, Emoia caeruleocauda, Gehyra mutilata, Hemidactylus frenatus, and Lepidodactylus lugubris) from general herpetological collecting (including visual surveys and glue boards) with those from complete censuses obtained by total removal. All species exhibited the same pattern: general herpetological collecting undersampled juveniles and oversampled mid-sized adults. The bias was greatest for the smallest juveniles and was not statistically evident for newly maturing and very large adults. All of the true size distributions of these continuously breeding species were skewed heavily toward juveniles, more so than the detections obtained from general collecting. A strongly skewed size distribution is not well characterized by the mean or maximum, though those are the statistics routinely reported for species’ sizes. We found body mass to be distributed more symmetrically than was snout–vent length, providing an additional rationale for collecting and reporting that size measure.

  7. General Plan-Based Environmental Impact Analysis Process Environmental Assessment Volume 1

    Science.gov (United States)

    2009-09-01

    ... feet; LBP = lead-based paint; Leq = equivalent sound level; Lmax = maximum sound level; MACA = Mid-Air Collision Avoidance; MFH = military family housing; MGD = ... Tyndall AFB engages in a program of public outreach to aviators, publishing Mid-Air Collision Avoidance (MACA) guides at its bases. ... (Environmental Consequences, Tyndall Air Force Base, Florida, September 2009) ... airports are primarily intended for pilots operating under VFR. The MACA contains ...

  8. Making the links between domestic violence and child safeguarding: an evidence-based pilot training for general practice.

    Science.gov (United States)

    Szilassy, Eszter; Drinkwater, Jess; Hester, Marianne; Larkins, Cath; Stanley, Nicky; Turner, William; Feder, Gene

    2017-11-01

    We describe the development of an evidence-based training intervention on domestic violence and child safeguarding for general practice teams. We aimed - in the context of a pilot study - to improve knowledge, skills, attitudes and self-efficacy of general practice clinicians caring for families affected by domestic violence. Our evidence sources included: a systematic review of training interventions aiming to improve professional responses to children affected by domestic violence; content mapping of relevant current training in England; qualitative assessment of general practice professionals' responses to domestic violence in families; and a two-stage consensus process with a multi-professional stakeholder group. Data were collected between January and December 2013. This paper reports key research findings and their implications for practice and policy; describes how the research findings informed the training development and outlines the principal features of the training intervention. We found lack of cohesion and co-ordination in the approach to domestic violence and child safeguarding. General practice clinicians have insufficient understanding of multi-agency work, a limited competence in gauging thresholds for child protection referral to children's services and little understanding of outcomes for children. While prioritising children's safety, they are more inclined to engage directly with abusive parents than with affected children. Our research reveals uncertainty and confusion surrounding the recording of domestic violence cases in families' medical records. These findings informed the design of the RESPONDS training, which was developed in 2014 to encourage general practice clinicians to overcome barriers and engage more extensively with adults experiencing abuse, as well as responding directly to the needs of children. We conclude that general practice clinicians need more support in managing the complexity of this area of practice. We need to

  9. Novel MGF-based expressions for the average bit error probability of binary signalling over generalized fading channels

    KAUST Repository

    Yilmaz, Ferkan

    2014-04-01

    The main idea in the moment generating function (MGF) approach is to alternatively express the conditional bit error probability (BEP) in a desired exponential form so that possibly multi-fold performance averaging is readily converted into a computationally efficient single-fold averaging - sometimes into a closed-form - by using the MGF of the signal-to-noise ratio. However, as presented in [1] and specifically indicated in [2], and also to the best of our knowledge, there does not exist an MGF-based approach in the literature to represent Wojnar's generic BEP expression in a desired exponential form. This paper presents novel MGF-based expressions for calculating the average BEP of binary signalling over generalized fading channels, specifically by expressing Wojnar's generic BEP expression in a desirable exponential form. We also propose MGF-based expressions to explore the amount of dispersion in the BEP for binary signalling over generalized fading channels.
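
    The mechanics of the MGF approach can be illustrated on the textbook special case of coherent BPSK over Rayleigh fading, where the average BEP reduces to the single finite-range integral Pb = (1/pi) * integral from 0 to pi/2 of M(-1/sin^2(theta)) d(theta), and a closed form is available for cross-checking. This is the classical MGF recipe, not the paper's Wojnar-form extension; the average SNR value is an arbitrary choice:

```python
import numpy as np

# Rayleigh fading: the MGF of the instantaneous SNR is M(s) = 1 / (1 - s * snr_avg)
snr_avg = 10.0  # average SNR (linear scale), chosen arbitrarily
mgf = lambda s: 1.0 / (1.0 - s * snr_avg)

# Single-fold averaging over theta (trapezoidal rule)
theta = np.linspace(1e-9, np.pi / 2, 200001)
f = mgf(-1.0 / np.sin(theta) ** 2)
pb_mgf = float(np.sum((f[1:] + f[:-1]) * np.diff(theta)) / 2.0 / np.pi)

# Known closed form for BPSK over Rayleigh fading, as a sanity check
pb_exact = 0.5 * (1.0 - np.sqrt(snr_avg / (1.0 + snr_avg)))
```

The two values agree to high accuracy, which is the practical appeal of the approach: one smooth integral over a finite interval replaces averaging the Q-function over the fading distribution.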

  10. The generalization ability of online SVM classification based on Markov sampling.

    Science.gov (United States)

    Xu, Jie; Yan Tang, Yuan; Zou, Bin; Xu, Zongben; Li, Luoqing; Lu, Yang

    2015-03-01

    In this paper, we consider online support vector machine (SVM) classification learning algorithms with uniformly ergodic Markov chain (u.e.M.c.) samples. We establish a bound on the misclassification error of an online SVM classification algorithm with u.e.M.c. samples based on reproducing kernel Hilbert spaces and obtain a satisfactory convergence rate. We also introduce a novel online SVM classification algorithm based on Markov sampling, and present numerical studies of the learning ability of online SVM classification based on Markov sampling on datasets from a benchmark repository. The numerical studies show that the learning performance of the online SVM classification algorithm based on Markov sampling is better than that of classical online SVM classification based on random sampling as the number of training samples grows.
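
    For orientation, a minimal online SVM sketch: hinge-loss stochastic gradient descent (Pegasos-style) on synthetic two-class data. The samples here are drawn i.i.d. rather than from the uniformly ergodic Markov chain the paper analyzes, and the data, regularization, and step counts are all illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy two-class Gaussian data (assumed setup, not the paper's benchmarks)
n = 500
X = np.vstack([rng.normal(2.0, 1.0, (n, 2)), rng.normal(-2.0, 1.0, (n, 2))])
y = np.hstack([np.ones(n), -np.ones(n)])

lam = 0.01
w = np.zeros(2)
b = 0.0

# Online hinge-loss SGD: one randomly sampled example per update.
# The paper instead draws the index sequence from a Markov chain.
for t in range(1, 5001):
    i = rng.integers(0, 2 * n)
    eta = 1.0 / (lam * t)              # standard Pegasos step size
    margin = y[i] * (X[i] @ w + b)
    w *= (1.0 - eta * lam)             # shrink from the L2 regularizer
    if margin < 1.0:                   # hinge loss is active: move toward sample
        w += eta * y[i] * X[i]
        b += eta * y[i]

acc = float(np.mean(np.sign(X @ w + b) == y))
```

Swapping the `rng.integers` draw for a transition kernel over sample indices turns this into the Markov-sampling variant studied in the paper.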

  11. Generalized classification of welds according to defect type based on radiation testing results

    International Nuclear Information System (INIS)

    Adamenko, A.A.; Demidko, V.G.

    1980-01-01

    A generalized classification of welds by defect type is constructed, with respect both to the real danger of a defect, which to a first approximation is proportional to the relative decrease in thickness, and to its potential danger, which can be determined by its sharpness. Under this classification, welded joints are divided into five classes in accordance with the COMECON guides. The division into classes follows a two-fold numerical criterion that is applicable when experimental data on the three linear dimensions of a defect are available. The classification is of particular importance for the automatic processing of radiation testing data

  12. The role of general practice in routes to diagnosis of lung cancer in Denmark: a population-based study of general practice involvement, diagnostic activity and diagnostic intervals.

    Science.gov (United States)

    Guldbrandt, Louise Mahncke; Fenger-Grøn, Morten; Rasmussen, Torben Riis; Jensen, Henry; Vedsted, Peter

    2015-01-22

    Lung cancer stage at diagnosis predicts possible curative treatment. In Denmark and the UK, lung cancer patients have lower survival rates than citizens in most other European countries, which may partly be explained by a comparatively longer diagnostic interval in these two countries. In Denmark, a pathway was introduced in 2008 allowing general practitioners (GPs) to refer patients suspected of having lung cancer directly to fast-track diagnostics. However, symptom presentation of lung cancer in general practice is known to be diverse and complex, and systematic knowledge of the routes to diagnosis is needed to enable earlier lung cancer diagnosis in Denmark. This study aims to describe the routes to diagnosis, the diagnostic activity preceding diagnosis and the diagnostic intervals for lung cancer in the Danish setting. We conducted a national registry-based cohort study on 971 consecutive incident lung cancer patients in 2010 using data from national registries and GP questionnaires. GPs were involved in 68.3% of cancer patients' diagnostic pathways, and 27.4% of lung cancer patients were referred from the GP to fast-track diagnostic work-up. A minimum of one X-ray was performed in 85.6% of all cases before diagnosis. Patients referred through a fast-track route more often had diagnostic X-rays (66.0%) than patients who did not go through fast-track (49.4%). Overall, 33.6% of all patients had two or more X-rays performed during the 90 days before diagnosis. Patients whose symptoms were interpreted as non-alarm symptoms or who were not referred to fast-track were more likely to experience a long diagnostic interval than patients whose symptoms were interpreted as alarm symptoms or who were referred to fast-track. Lung cancer patients followed several diagnostic pathways. The existing fast-track pathway must be supplemented to ensure earlier detection of lung cancer. 
The high incidence of multiple X-rays warrants a continued effort to develop more accurate lung

  13. CARDfile data base representativeness, Phase 1 : general characteristics including populations, vehicles, roads, and fatal accidents

    Science.gov (United States)

    1988-08-01

    This report details the results of an analysis performed to evaluate the : representativeness of the Crash Avoidance Research accident data base : (CARDfile). The accident records for 1983 and 1984 from six states (Indiana, : Maryland, Michigan, Penn...

  14. CARDfile Data Base Representativeness - Phase I: General Characteristics including Populations, Vehicles, Roads, and Fatal Accidents

    Science.gov (United States)

    1985-12-01

    This report details the results of an analysis performed to evaluate the representativeness of the Crash Avoidance Research accident data base (CARDfile). The accident records for 1983 and 1984 from six states (Indiana, Maryland, Michigan, Pennsylvan...

  15. Generalized Selectivity Description for Polymeric Ion-Selective Electrodes Based on the Phase Boundary Potential Model.

    Science.gov (United States)

    Bakker, Eric

    2010-02-15

    A generalized description of the response behavior of potentiometric polymer membrane ion-selective electrodes is presented on the basis of ion-exchange equilibrium considerations at the sample-membrane interface. This paper consolidates and extends previously reported theoretical advances in a more compact yet more comprehensive form. Specifically, the phase boundary potential model is used to derive the origin of the Nernstian response behavior in a single expression, which is valid for a membrane containing any charge type and complex stoichiometry of ionophore and ion-exchanger. This forms the basis for a generalized expression of the selectivity coefficient, which may be used for the selectivity optimization of ion-selective membranes containing electrically charged and neutral ionophores of any desired stoichiometry. It is shown to reduce to expressions published previously for specialized cases, and may be effectively applied to problems relevant in modern potentiometry. The treatment is extended to mixed ion solutions, offering a comprehensive yet formally compact derivation of the response behavior of ion-selective electrodes to a mixture of ions of any desired charge. It is compared to predictions by the less accurate Nicolsky-Eisenman equation. The influence of ion fluxes or any form of electrochemical excitation is not considered here, but may be readily incorporated if an ion-exchange equilibrium at the interface may be assumed in these cases.
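
    The Nicolsky-Eisenman description that the paper compares against fits in a few lines. The function below is an illustrative sketch: the argument names are invented, and the single-expression mixed-ion model (primary-ion activity plus selectivity-weighted interferent activities inside one logarithm) is the textbook form, not the paper's generalized treatment:

```python
import math

R, F = 8.314, 96485.0  # gas constant J/(mol K), Faraday constant C/mol

def nikolsky_eisenman(a_i, z_i, interferents=(), E0=0.0, T=298.15):
    """EMF of an ion-selective electrode in Nicolsky-Eisenman form.

    a_i, z_i     : activity and charge of the primary ion
    interferents : iterable of (a_j, z_j, K_ij) tuples, where K_ij is the
                   potentiometric selectivity coefficient
    """
    s = R * T / (z_i * F)  # Nernstian slope: ~59.2/z_i mV per decade at 25 C
    eff = a_i + sum(K * a_j ** (z_i / z_j) for a_j, z_j, K in interferents)
    return E0 + s * math.log(eff)
```

With no interferents the expression collapses to the plain Nernst equation; adding an interferent with a nonzero selectivity coefficient shifts the EMF upward for a cation-selective membrane, which is exactly the bias the paper's more accurate treatment corrects for mixed-charge cases.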

  16. Transferring and generalizing deep-learning-based neural encoding models across subjects.

    Science.gov (United States)

    Wen, Haiguang; Shi, Junxing; Chen, Wei; Liu, Zhongming

    2018-08-01

    Recent studies have shown the value of using deep learning models for mapping and characterizing how the brain represents and organizes information for natural vision. However, modeling the relationship between deep learning models and the brain (or encoding models), requires measuring cortical responses to large and diverse sets of natural visual stimuli from single subjects. This requirement limits prior studies to few subjects, making it difficult to generalize findings across subjects or for a population. In this study, we developed new methods to transfer and generalize encoding models across subjects. To train encoding models specific to a target subject, the models trained for other subjects were used as the prior models and were refined efficiently using Bayesian inference with a limited amount of data from the target subject. To train encoding models for a population, the models were progressively trained and updated with incremental data from different subjects. For the proof of principle, we applied these methods to functional magnetic resonance imaging (fMRI) data from three subjects watching tens of hours of naturalistic videos, while a deep residual neural network driven by image recognition was used to model visual cortical processing. Results demonstrate that the methods developed herein provide an efficient and effective strategy to establish both subject-specific and population-wide predictive models of cortical representations of high-dimensional and hierarchical visual features. Copyright © 2018 Elsevier Inc. All rights reserved.

  17. A general CFD framework for fault-resilient simulations based on multi-resolution information fusion

    Science.gov (United States)

    Lee, Seungjoon; Kevrekidis, Ioannis G.; Karniadakis, George Em

    2017-10-01

    We develop a general CFD framework for multi-resolution simulations to target multiscale problems but also resilience in exascale simulations, where faulty processors may lead to gappy, in space-time, simulated fields. We combine approximation theory and domain decomposition together with statistical learning techniques, e.g. coKriging, to estimate boundary conditions and minimize communications by performing independent parallel runs. To demonstrate this new simulation approach, we consider two benchmark problems. First, we solve the heat equation (a) on a small number of spatial "patches" distributed across the domain, simulated by finite differences at fine resolution and (b) on the entire domain simulated at very low resolution, thus fusing multi-resolution models to obtain the final answer. Second, we simulate the flow in a lid-driven cavity in an analogous fashion, by fusing finite difference solutions obtained with fine and low resolution assuming gappy data sets. We investigate the influence of various parameters for this framework, including the correlation kernel, the size of a buffer employed in estimating boundary conditions, the coarseness of the resolution of auxiliary data, and the communication frequency across different patches in fusing the information at different resolution levels. In addition to its robustness and resilience, the new framework can be employed to generalize previous multiscale approaches involving heterogeneous discretizations or even fundamentally different flow descriptions, e.g. in continuum-atomistic simulations.

  18. Do pregnant women contact their general practitioner? A register-based comparison of healthcare utilisation of pregnant and non-pregnant women in general practice.

    NARCIS (Netherlands)

    Feijen-de Jong, E.I.; Baarveld, F.; Jansen, D.E.M.C.; Ursum, J.; Reijneveld, S.A.; Schellevis, F.G.

    2013-01-01

    Background: Midwives and obstetricians are the key providers of care during pregnancy and postpartum. Information about the consultations with a general practitioner (GP) during this period is generally lacking. The aim of this study is to compare consultation rates, diagnoses and GP management of

  19. Do pregnant women contact their general practitioner? A register-based comparison of healthcare utilisation of pregnant and non-pregnant women in general practice

    NARCIS (Netherlands)

    Feijen-de Jong, Esther I.; Baarveld, Frank; Jansen, Danielle E. M. C.; Ursum, Jennie; Reijneveld, Sijmen A.; Schellevis, Francois G.

    2013-01-01

    Background: Midwives and obstetricians are the key providers of care during pregnancy and postpartum. Information about the consultations with a general practitioner (GP) during this period is generally lacking. The aim of this study is to compare consultation rates, diagnoses and GP management of

  20. Time-domain analysis of planar microstrip devices using a generalized Yee-algorithm based on unstructured grids

    Science.gov (United States)

    Gedney, Stephen D.; Lansing, Faiza

    1993-01-01

    The generalized Yee-algorithm is presented for the temporal full-wave analysis of planar microstrip devices. This algorithm has the significant advantage over the traditional Yee-algorithm in that it is based on unstructured and irregular grids. A strength of the generalized Yee-algorithm is that structures that contain curved conductors or complex three-dimensional geometries can be modeled more accurately, and much more conveniently, using standard automatic grid generation techniques. This generalized Yee-algorithm is based on the time-marching solution of the discrete form of Maxwell's equations in their integral form. To this end, the electric and magnetic fields are discretized over a dual, irregular, and unstructured grid. The primary grid is assumed to be composed of general fitted polyhedra distributed throughout the volume. The secondary grid (or dual grid) is built up of the closed polyhedra whose edges connect the centroids of adjacent primary cells, penetrating shared faces. Faraday's law and Ampere's law are used to update the fields normal to the primary and secondary grid faces, respectively. Subsequently, a correction scheme is introduced to project the normal fields onto the grid edges. It is shown that this scheme is stable, maintains second-order accuracy, and preserves the divergenceless nature of the flux densities. Finally, for computational efficiency the algorithm is structured as a series of sparse matrix-vector multiplications. Based on this scheme, the generalized Yee-algorithm has been implemented on vector and parallel high performance computers in a highly efficient manner.
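
    On a regular Cartesian grid the generalized algorithm reduces to the classic staggered Yee update, with Faraday's and Ampere's laws alternately advancing fields on the primary and dual grids. A minimal 1D sketch in normalized units (Courant number 1, perfectly conducting ends, soft Gaussian source; grid size, step count, and source parameters are arbitrary illustration values):

```python
import numpy as np

nx, nt = 200, 150
ez = np.zeros(nx)       # E-field on primary-grid nodes
hy = np.zeros(nx - 1)   # H-field staggered half a cell away (dual grid)

for n in range(nt):
    # Faraday's law: update H from the (finite-difference) curl of E
    hy += np.diff(ez)
    # Ampere's law: update E at interior nodes from the curl of H;
    # the fixed end values ez[0] = ez[-1] = 0 act as PEC boundaries
    ez[1:-1] += np.diff(hy)
    # Soft Gaussian source injected at the center node
    ez[nx // 2] += np.exp(-((n - 30.0) / 8.0) ** 2)
```

The same leapfrog structure carries over to the unstructured case in the paper; what changes is that the curl operators become sparse matrices assembled from the irregular primary/dual face geometry.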

  1. A General Purpose Connections type CTI Server Based on SIP Protocol and Its Implementation

    Science.gov (United States)

    Watanabe, Toru; Koizumi, Hisao

    In this paper, we propose a general purpose connections-type CTI (Computer Telephony Integration) server that provides various CTI services such as voice logging. The CTI server communicates with an IP-PBX using SIP (Session Initiation Protocol) and accumulates the voice packets of external-line telephone calls flowing between an extension IP telephone and a VoIP gateway connected to outside-line networks. The CTI server realizes CTI services such as voice logging, telephone conferencing, or IVR (interactive voice response) by accumulating and processing the sampled voice packets. Furthermore, the CTI server incorporates a web server function which can provide various CTI services, such as a Web telephone directory, via a Web browser to PCs, cellular telephones or smart-phones in mobile environments.

  2. Multiple component codes based generalized LDPC codes for high-speed optical transport.

    Science.gov (United States)

    Djordjevic, Ivan B; Wang, Ting

    2014-07-14

    A class of generalized low-density parity-check (GLDPC) codes suitable for optical communications is proposed, which consists of multiple local codes. It is shown that Hamming, BCH, and Reed-Muller codes can be used as local codes, and that the maximum a posteriori probability (MAP) decoding of these local codes by the Ashikhmin-Lytsin algorithm is feasible in terms of complexity and performance. We demonstrate that record coding gains can be obtained from properly designed GLDPC codes, derived from multiple component codes. We then show that several recently proposed classes of LDPC codes such as convolutional and spatially-coupled codes can be described using the concept of GLDPC coding, which indicates that GLDPC coding can be used as a unified platform for advanced FEC enabling ultra-high speed optical transport. The proposed class of GLDPC codes is also suitable for code-rate adaptation, to adjust the error correction strength depending on the optical channel conditions.
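
    A GLDPC code replaces the single parity checks of an LDPC code with short local block codes. The smallest local code mentioned, the (7,4) Hamming code, can be hard-decision syndrome-decoded in a few lines; this toy decoder illustrates the local-code idea only, not the soft MAP (Ashikhmin-Lytsin) decoding used in the paper:

```python
import numpy as np

# Parity-check matrix of the (7,4) Hamming code: column j is the binary
# representation of j+1, so a nonzero syndrome directly indexes the error bit.
H = np.array([[1, 0, 1, 0, 1, 0, 1],
              [0, 1, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]])

def decode(r):
    """Correct up to one bit error in the received word r; return the result."""
    s = H @ r % 2
    pos = int(s[0] + 2 * s[1] + 4 * s[2])  # 1-based error position, 0 = clean
    c = r.copy()
    if pos:
        c[pos - 1] ^= 1
    return c

# The all-zero word is a codeword of any linear code: every single-bit
# error on it must be corrected back to zeros.
codeword = np.zeros(7, dtype=int)
for i in range(7):
    r = codeword.copy()
    r[i] ^= 1
    assert np.array_equal(decode(r), codeword)
```

In a GLDPC code, many such local decoders run in parallel over overlapping subsets of the code bits and exchange information iteratively, which is what yields the large coding gains the paper reports.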

  3. Nuclear-data evaluation based on direct and indirect measurements with general correlations

    International Nuclear Information System (INIS)

    Muir, D.W.

    1988-01-01

    Optimum procedures for the statistical improvement, or updating, of an existing nuclear-data evaluation are reviewed and redeveloped from first principles, consistently employing a minimum-variance viewpoint. A set of equations is derived which provides improved values of the data and their covariances, taking into account information from supplementary measurements and allowing for general correlations among all measurements. The minimum-variance solutions thus obtained, which we call the method of 'partitioned least squares,' are found to be equivalent to a method suggested by Yu. V. Linnik and applied by a number of authors to the analysis of fission-reactor integral experiments; however, up to now, the partitioned-least-squares formulae have not found widespread use in the field of basic data evaluation. This approach is shown to give the same results as the more commonly applied Normal equations, but with reduced matrix inversion requirements. Examples are provided to indicate potential areas of application. (author)
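
    The minimum-variance update being redeveloped has the familiar generalized-least-squares form: given prior values x0 with covariance P0 and supplementary measurements y = Hx + e with covariance R, the improved evaluation is x1 = x0 + K(y - Hx0) with K = P0 H^T (H P0 H^T + R)^(-1) and P1 = (I - KH) P0. A sketch of this textbook form (the paper's partitioned formulation is algebraically equivalent but organized to shrink the matrix inversions):

```python
import numpy as np

def mv_update(x0, P0, H, y, R):
    """Minimum-variance (Bayesian least-squares) update of an evaluation.

    x0, P0  : prior evaluated data and their covariance
    H, y, R : sensitivity matrix, supplementary measurements, and their covariance
    Returns the updated values and covariance.
    """
    S = H @ P0 @ H.T + R                    # covariance of the predicted measurement
    K = P0 @ H.T @ np.linalg.inv(S)         # minimum-variance gain
    x1 = x0 + K @ (y - H @ x0)              # updated data
    P1 = (np.eye(len(x0)) - K @ H) @ P0     # updated (reduced) covariance
    return x1, P1
```

In the scalar case with H = 1 this reduces to inverse-variance weighting: a prior of 1 with variance 4 combined with a measurement of 3 with variance 1 gives 2.6 with variance 0.8, i.e. 1/P1 = 1/P0 + 1/R.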

  4. GENERAL APPROACH TO MODELING NONLINEAR AMPLITUDE AND FREQUENCY DEPENDENT HYSTERESIS EFFECTS BASED ON EXPERIMENTAL RESULTS

    Directory of Open Access Journals (Sweden)

    Christopher Heine

    2014-08-01

    A detailed description of the rubber parts' properties is gaining in importance in current simulation models for multi-body simulation. One application example is a multi-body simulation of the washing machine movement. Inside the washing machine, there are different force transmission elements, which consist completely or partly of rubber. Rubber parts or, generally, elastomers usually have amplitude-dependent and frequency-dependent force transmission properties. Rheological models are used to describe these properties. A method for characterizing the amplitude and frequency dependence of such a rheological model is presented within this paper. Within this method, the used rheological model can be reduced or expanded in order to capture various non-linear effects. An original contribution is the automated parameter identification, which is fully implemented in Matlab. The identified rheological models are intended for subsequent implementation in a multi-body model, which allows a significant enhancement of the overall model quality.

  5. Variance-based selection may explain general mating patterns in social insects.

    Science.gov (United States)

    Rueppell, Olav; Johnson, Nels; Rychtár, Jan

    2008-06-23

    Female mating frequency is one of the key parameters of social insect evolution. Several hypotheses have been suggested to explain multiple mating and considerable empirical research has led to conflicting results. Building on several earlier analyses, we present a simple general model that links the number of queen matings to variance in colony performance and this variance to average colony fitness. The model predicts selection for multiple mating if the average colony succeeds in a focal task, and selection for single mating if the average colony fails, irrespective of the proximate mechanism that links genetic diversity to colony fitness. Empirical support comes from interspecific comparisons, e.g. between the bee genera Apis and Bombus, and from data on several ant species, but more comprehensive empirical tests are needed.

  6. Based on a True Story: Using Movies as Source Material for General Chemistry Reports

    Science.gov (United States)

    Griep, Mark A.; Mikasen, Marjorie L.

    2005-10-01

    Research for chemical reports and case study analysis of chemical topics are two commonly used learning activities to engage and enrich student understanding of the content in introductory chemistry courses. Even though movies are excellent vehicles for exploring the human dimension of events, they have been used only sparingly as source material in introductory science courses. One reason for this sparing use has been the lack of a list of suitable movies. To fill this void, a list of one dozen highly rated movies is presented. The focus of these movies is either a scientist's chemical research or the societal impact of some chemical compound. The method by which two of these movies were used as source material for a written report in a general chemistry course is described. The student response to the exercise was enthusiastic.

  7. Model Reduction Based on Proper Generalized Decomposition for the Stochastic Steady Incompressible Navier-Stokes Equations

    KAUST Repository

    Tamellini, L.; Le Maître, O.; Nouy, A.

    2014-01-01

    In this paper we consider a proper generalized decomposition method to solve the steady incompressible Navier-Stokes equations with random Reynolds number and forcing term. The aim of such a technique is to compute a low-cost reduced basis approximation of the full stochastic Galerkin solution of the problem at hand. A particular algorithm, inspired by the Arnoldi method for solving eigenproblems, is proposed for an efficient greedy construction of a deterministic reduced basis approximation. This algorithm decouples the computation of the deterministic and stochastic components of the solution, thus allowing reuse of preexisting deterministic Navier-Stokes solvers. It has the remarkable property of only requiring the solution of m uncoupled deterministic problems for the construction of an m-dimensional reduced basis rather than M coupled problems of the full stochastic Galerkin approximation space, with m ≪ M (up to one order of magnitude for the problem at hand in this work). © 2014 Society for Industrial and Applied Mathematics.
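
Schematically, the reduced basis sought by such a proper generalized decomposition is a separated expansion (notation assumed for illustration, not taken verbatim from the paper):

```latex
u(x,\xi) \;\approx\; u_m(x,\xi) \;=\; \sum_{i=1}^{m} u_i(x)\,\lambda_i(\xi), \qquad m \ll M,
```

where each deterministic mode u_i(x) costs one uncoupled deterministic Navier-Stokes solve, while the stochastic coefficients λ_i(ξ) are comparatively cheap to compute; this is why only m deterministic problems are needed instead of the M coupled problems of the full Galerkin space.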

  8. Collaboration between physicians and a hospital-based palliative care team in a general acute-care hospital in Japan

    Directory of Open Access Journals (Sweden)

    Nishikitani Mariko

    2010-06-01

    Full Text Available Abstract Background Continual collaboration between physicians and hospital-based palliative care teams represents a very important contributor to focusing on patients' symptoms and maintaining their quality of life during all stages of their illness. However, the traditionally late introduction of palliative care has caused misconceptions about hospital-based palliative care teams (PCTs) among patients and general physicians in Japan. The objective of this study is to identify the factors related to physicians' attitudes toward continual collaboration with hospital-based PCTs. Methods This cross-sectional anonymous questionnaire-based survey was conducted to clarify physicians' attitudes toward continual collaboration with PCTs and to describe the factors that contribute to such attitudes. We surveyed 339 full-time physicians, including interns, employed in a general acute-care hospital in an urban area in Japan; the response rate was 53% (N = 155). We assessed the basic characteristics, experience, knowledge, and education of respondents. Multiple logistic regression analysis was used to determine the main factors affecting the physicians' attitudes toward PCTs. Results We found that the physicians who were aware of the World Health Organization (WHO) analgesic ladder were 6.7 times (OR = 6.7, 95% CI = 1.98-25.79) more likely to want to treat and care for their patients in collaboration with the hospital-based PCTs than were those physicians without such awareness. Conclusion Basic knowledge of palliative care is important in promoting physicians' positive attitudes toward collaboration with hospital-based PCTs.

  9. Planimetric Features Generalization for the Production of Small-Scale Map by Using Base Maps and the Existing Algorithms

    Directory of Open Access Journals (Sweden)

    M. Modiri

    2014-10-01

    Full Text Available Cartographic maps are representations of the Earth upon a flat surface at a smaller scale than reality. Large-scale maps cover relatively small regions in great detail and small-scale maps cover large regions such as nations, continents and the whole globe. The logical connection between features and map scale must be maintained as the scale changes, and it is important to recognize that even the most accurate maps sacrifice a certain amount of accuracy in scale to deliver greater visual usefulness to their users. Cartographic generalization, or map generalization, is the method whereby information is selected and represented on a map in a way that adapts to the scale of the display medium of the map, not necessarily preserving all intricate geographical or other cartographic details. Because of the problems facing the small-scale map production process and the time and money that surveying requires, generalization is used today as an executive approach. This paper proposes software that converts various data and information into a certain data model. This software can produce a generalized map from base maps using existing algorithms. Planimetric generalization algorithms and rules are described in this article. Finally, small-scale maps at the 1:100,000, 1:250,000 and 1:500,000 scales are produced automatically and shown at the end.

  10. Drought assessment in the Dongliao River basin: traditional approaches vs. generalized drought assessment index based on water resources systems

    Science.gov (United States)

    Weng, B. S.; Yan, D. H.; Wang, H.; Liu, J. H.; Yang, Z. Y.; Qin, T. L.; Yin, J.

    2015-08-01

    Drought is firstly a resource issue, and as it develops it evolves into a disaster issue. Drought events usually occur in a determinate yet random manner. Drought has become one of the major factors affecting sustainable socioeconomic development. In this paper, we propose the generalized drought assessment index (GDAI) based on water resources systems for assessing drought events. The GDAI considers water supply and water demand using a distributed hydrological model. We demonstrate the use of the proposed index in the Dongliao River basin in northeastern China. The results simulated by the GDAI are compared to observed drought disaster records in the Dongliao River basin. In addition, the temporal distribution of drought events and the spatial distribution of drought frequency from the GDAI are compared with the traditional approaches (i.e., the standardized precipitation index, the Palmer drought severity index, and the rate of water deficit index). Then, generalized drought times, generalized drought duration, and generalized drought severity were calculated by the theory of runs. Application of these runs at various drought levels (i.e., mild drought, moderate drought, severe drought, and extreme drought) during the period 1960-2010 shows that the centers of gravity of all drought levels are distributed in the middle reaches of the Dongliao River basin and change over time. The proposed methodology may help water managers in water-stressed regions to quantify the impact of drought and, consequently, to make decisions for coping with drought.
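
The "theory of runs" step mentioned above can be sketched in a few lines: a drought event is a maximal run of the index below a truncation level, its duration is the run length, and its severity is the accumulated deficit. This is a generic illustration with an invented index series, not the GDAI itself.

```python
def drought_runs(index, threshold):
    """Identify drought events by the theory of runs: a run is a maximal
    stretch where the index stays below the threshold. Returns
    (times, durations, severities); severity is the summed deficit."""
    events = []
    run_len, deficit = 0, 0.0
    for v in index:
        if v < threshold:
            run_len += 1
            deficit += threshold - v
        elif run_len:
            events.append((run_len, deficit))
            run_len, deficit = 0, 0.0
    if run_len:                       # close a run at the end of the record
        events.append((run_len, deficit))
    durations = [d for d, _ in events]
    severities = [s for _, s in events]
    return len(events), durations, severities

# Toy monthly index (hypothetical): two droughts of lengths 2 and 3.
times, dur, sev = drought_runs([1.2, -0.5, -1.0, 0.3, -0.2, -0.8, -0.1, 0.6], 0.0)
# times == 2, dur == [2, 3], sev ≈ [1.5, 1.1]
```

The same run extraction works for any of the indices compared in the paper; only the series and the truncation level change.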

  11. An Extreme Learning Machine Based on the Mixed Kernel Function of Triangular Kernel and Generalized Hermite Dirichlet Kernel

    Directory of Open Access Journals (Sweden)

    Senyue Zhang

    2016-01-01

    Full Text Available Because the kernel function of an extreme learning machine (ELM) strongly affects its performance, a novel extreme learning machine based on a generalized triangle Hermitian kernel function is proposed in this paper. First, the generalized triangle Hermitian kernel function was constructed as the product of the triangular kernel and the generalized Hermite Dirichlet kernel, and the proposed kernel function was proved to be a valid kernel for the extreme learning machine. Then, the learning methodology of the extreme learning machine based on the proposed kernel function was presented. The biggest advantage of the proposed kernel is that its kernel parameter takes values only in the natural numbers, which greatly shortens the computational time of parameter optimization and retains more of the sample data structure information. Experiments were performed on a number of binary classification, multiclassification, and regression datasets from the UCI benchmark repository. The experimental results demonstrated that the robustness and generalization performance of the proposed method outperform those of other extreme learning machines with different kernels. Furthermore, the learning speed of the proposed method is faster than that of support vector machine (SVM) methods.
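
A minimal sketch of the kernel-ELM training step referred to above, using the standard kernel-ELM closed form beta = (I/C + K)^-1 T. The triangular (hat) kernel here is only a stand-in for the paper's mixed kernel, whose exact formula is not given in the abstract; data and parameter values are invented.

```python
def tri_kernel(x, y, delta=2.0):
    """Triangular (hat) kernel -- a stand-in for the paper's mixed kernel."""
    return max(0.0, 1.0 - abs(x - y) / delta)

def solve(A, b):
    """Gaussian elimination with partial pivoting (small systems only)."""
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for i in range(n):
        p = max(range(i, n), key=lambda r: abs(M[r][i]))
        M[i], M[p] = M[p], M[i]
        for r in range(i + 1, n):
            f = M[r][i] / M[i][i]
            for c in range(i, n + 1):
                M[r][c] -= f * M[i][c]
    x = [0.0] * n
    for i in reversed(range(n)):
        x[i] = (M[i][n] - sum(M[i][c] * x[c] for c in range(i + 1, n))) / M[i][i]
    return x

def kernel_elm_fit(X, T, C=1e6):
    """Kernel ELM closed form: beta = (I/C + K)^-1 T."""
    K = [[tri_kernel(a, b) for b in X] for a in X]
    for i in range(len(X)):
        K[i][i] += 1.0 / C            # ridge term I/C
    return solve(K, T)

X = [0.0, 1.0, 2.0, 3.0, 4.0]
T = [x * x for x in X]                # toy regression target
beta = kernel_elm_fit(X, T)
predict = lambda x: sum(b * tri_kernel(x, xi) for b, xi in zip(beta, X))
# With large C the model interpolates the training targets closely.
```

Swapping in a different kernel (e.g. the paper's mixed kernel, once its formula is known) changes only `tri_kernel`; the training step is unchanged, which is why kernel choice dominates ELM performance.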

  12. Social capital and frequent attenders in general practice: a register-based cohort study.

    Science.gov (United States)

    Pasgaard, Alexander A; Mæhlisen, Maiken H; Overgaard, Charlotte; Ejlskov, Linda; Torp-Pedersen, Christian; Bøggild, Henrik

    2018-03-02

    Frequent attendance to primary care constitutes a large use of resources for the health care system. The association between frequent attendance and illness-related factors has been examined in several studies, but little is known about the association between frequent attendance and individual social capital. The aim of this study is to explore this association. The analysis is conducted on responders to the North Denmark Region Health Profile 2010 (n = 23,384), individually linked with information from administrative registers. Social capital is operationalized at the individual level, and includes cognitive (interpersonal trust and norms of reciprocity) as well as structural (social network and civic engagement) dimensions. Frequent attendance is defined as the upper-quartile of the total number of measured consultations with a general practitioner over a period of 148 weeks. Using multiple logistic regression, we found that frequent attendance was associated with a lower score in interpersonal trust [OR 0.86 (0.79-0.94)] and social network [OR 0.88 (0.79-0.98)] for women, when adjusted for age, education, income and SF12 health scores. Norms of reciprocity and civic engagement were not significantly associated with frequent attendance for women [OR 1.05 (0.99-1.11) and OR 1.01 (0.92-1.11) respectively]. None of the associations were statistically significant for men. This study suggests that for women, some aspects of social capital are associated with frequent attendance in general practice, and the statistically significant dimensions belonged to both cognitive and structural aspects of social capital. This association was not seen for men. This indicates a multifaceted and heterogeneous relationship between social capital and frequent attendance among genders.
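
For readers unfamiliar with the reported statistics: an odds ratio such as OR 0.86 (0.79-0.94) comes from a multiple logistic regression, but its unadjusted analogue can be computed from a 2x2 table with a Woolf (log-based) confidence interval. The counts below are invented for illustration and do not reproduce the study's adjusted estimates.

```python
from math import exp, log, sqrt

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio for a 2x2 table with a Woolf 95% CI.
    a, b: exposed with/without outcome; c, d: unexposed with/without."""
    or_ = (a * d) / (b * c)
    se = sqrt(1 / a + 1 / b + 1 / c + 1 / d)   # SE of log(OR)
    lo, hi = exp(log(or_) - z * se), exp(log(or_) + z * se)
    return or_, lo, hi

# Hypothetical counts: frequent attenders among high-trust vs
# low-trust women (numbers invented for illustration only).
or_, lo, hi = odds_ratio_ci(300, 900, 1400, 3600)
# OR ≈ 0.86, CI ≈ (0.74, 0.99)
```

An adjusted OR from a fitted logistic model is recovered the same way, as exp of the coefficient of the exposure variable; the regression simply conditions on the other covariates (age, education, income, SF12 scores).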

  13. Towards a general theory of neural computation based on prediction by single neurons.

    Directory of Open Access Journals (Sweden)

    Christopher D Fiorillo

    Full Text Available Although there has been tremendous progress in understanding the mechanics of the nervous system, there has not been a general theory of its computational function. Here I present a theory that relates the established biophysical properties of single generic neurons to principles of Bayesian probability theory, reinforcement learning and efficient coding. I suggest that this theory addresses the general computational problem facing the nervous system. Each neuron is proposed to mirror the function of the whole system in learning to predict aspects of the world related to future reward. According to the model, a typical neuron receives current information about the state of the world from a subset of its excitatory synaptic inputs, and prior information from its other inputs. Prior information would be contributed by synaptic inputs representing distinct regions of space, and by different types of non-synaptic, voltage-regulated channels representing distinct periods of the past. The neuron's membrane voltage is proposed to signal the difference between current and prior information ("prediction error" or "surprise"). A neuron would apply a Hebbian plasticity rule to select those excitatory inputs that are the most closely correlated with reward but are the least predictable, since unpredictable inputs provide the neuron with the most "new" information about future reward. To minimize the error in its predictions and to respond only when excitation is "new and surprising," the neuron selects amongst its prior information sources through an anti-Hebbian rule. The unique inputs of a mature neuron would therefore result from learning about spatial and temporal patterns in its local environment, and by extension, the external world. Thus the theory describes how the structure of the mature nervous system could reflect the structure of the external world, and how the complexity and intelligence of the system might develop from a population of

  14. Artificial intelligence (AI)-based relational matching and multimodal medical image fusion: generalized 3D approaches

    Science.gov (United States)

    Vajdic, Stevan M.; Katz, Henry E.; Downing, Andrew R.; Brooks, Michael J.

    1994-09-01

    A 3D relational image matching/fusion algorithm is introduced. It is implemented in the domain of medical imaging and is based on Artificial Intelligence paradigms--in particular, knowledge base representation and tree search. The 2D reference and target images are selected from 3D sets and segmented into non-touching and non-overlapping regions, using iterative thresholding and/or knowledge about the anatomical shapes of human organs. Selected image region attributes are calculated. Region matches are obtained using a tree search, and the error is minimized by evaluating a 'goodness' of matching function based on similarities of region attributes. Once the matched regions are found and the spline geometric transform is applied to regional centers of gravity, images are ready for fusion and visualization into a single 3D image of higher clarity.

  15. General rigid motion correction for computed tomography imaging based on locally linear embedding

    Science.gov (United States)

    Chen, Mianyi; He, Peng; Feng, Peng; Liu, Baodong; Yang, Qingsong; Wei, Biao; Wang, Ge

    2018-02-01

    Patient motion can degrade the quality of computed tomography images, which are typically acquired in cone-beam geometry. Rigid patient motion is characterized by six geometric parameters and is more challenging to correct in cone-beam than in fan-beam geometry. We extend our previous rigid patient motion correction method based on the principle of locally linear embedding (LLE) from fan-beam to cone-beam geometry and accelerate the computational procedure with the graphics processing unit (GPU)-based all scale tomographic reconstruction Antwerp toolbox. The major merit of our method is that we need neither fiducial markers nor motion-tracking devices. The numerical and experimental studies show that the LLE-based patient motion correction is capable of calibrating the six parameters of the patient motion simultaneously, reducing patient motion artifacts significantly.
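
The heart of LLE-based calibration is reconstructing a measured sample as a weighted, sum-to-one combination of its nearest reference samples and reusing those weights to interpolate the references' known parameters. Below is a stripped-down sketch with two neighbors and invented "projection" vectors; the paper's six-parameter, cone-beam setting is far richer.

```python
def lle_weights_2nn(x, n1, n2):
    """Weights (w, 1-w) minimizing ||x - (w*n1 + (1-w)*n2)||^2,
    the k=2 special case of the LLE weight problem (weights sum to one)."""
    d = [a - b for a, b in zip(n1, n2)]
    num = sum((xi - bi) * di for xi, bi, di in zip(x, n2, d))
    den = sum(di * di for di in d)
    w = num / den
    return w, 1.0 - w

def estimate_parameter(x, refs):
    """refs: list of (projection_vector, motion_parameter) pairs.
    Pick the two nearest references and interpolate their parameters
    with the LLE weights."""
    dist = lambda a, b: sum((ai - bi) ** 2 for ai, bi in zip(a, b))
    (p1, t1), (p2, t2) = sorted(refs, key=lambda r: dist(x, r[0]))[:2]
    w1, w2 = lle_weights_2nn(x, p1, p2)
    return w1 * t1 + w2 * t2

# Toy "projections" simulated at known motion parameters (hypothetical):
refs = [((s, 2.0 * s), s) for s in (0.0, 1.0, 2.0, 3.0)]
# A measured projection from an unknown parameter of 1.4 lands between
# the references at 1.0 and 2.0 and is recovered by interpolation.
x = (1.4, 2.8)
# estimate_parameter(x, refs) ≈ 1.4
```

Because the weights come from the data itself, no fiducial markers or tracking hardware are needed, which is the merit the abstract emphasizes.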

  16. A Full Mesh ATCA-based General Purpose Data Processing Board (Pulsar II)

    Energy Technology Data Exchange (ETDEWEB)

    Ajuha, S. [Univ. of Sao Paulo (Brazil)]; et al.

    2017-06-29

    The Pulsar II is a custom ATCA full mesh enabled FPGA-based processor board which has been designed with the goal of creating a scalable architecture abundant in flexible, non-blocking, high bandwidth interconnections. The design has been motivated by silicon-based tracking trigger needs for LHC experiments. In this technical memo we describe the Pulsar II hardware and its performance, such as the performance test results with full mesh backplanes from different vendors, how the backplane is used for the development of low-latency time-multiplexed data transfer schemes and how the inter-shelf and intra-shelf synchronization works.

  17. A Full Mesh ATCA-based General Purpose Data Processing Board (Pulsar II)

    CERN Document Server

    Ajuha, S; Costa de Paiva, Thiago; Das, Souvik; Eusebi, Ricardo; Finotti Ferreira, Vitor; Hahn, Kristian; Hu, Zhen; Jindariani, Sergo; Konigsberg, Jacobo; Liu, Tiehui Ted; Low, Jia Fu; Okumura, Yasuyuki; Olsen, Jamieson; Arruda Ramalho, Lucas; Rossin, Roberto; Ristori, Luciano; Akira Shinoda, Ailton; Tran, Nhan; Trovato, Marco; Ulmer, Keith; Vaz, Mario; Wen, Xianshan; Wu, Jin-Yuan; Xu, Zijun; Yin, Han; Zorzetti, Silvia

    2017-01-01

    The Pulsar II is a custom ATCA full mesh enabled FPGA-based processor board which has been designed with the goal of creating a scalable architecture abundant in flexible, non-blocking, high bandwidth interconnections. The design has been motivated by silicon-based tracking trigger needs for LHC experiments. In this technical memo we describe the Pulsar II hardware and its performance, such as the performance test results with full mesh backplanes from different vendors, how the backplane is used for the development of low-latency time-multiplexed data transfer schemes and how the inter-shelf and intra-shelf synchronization works.

  18. A General Schema for Constructing One-Point Bases in the Lambda Calculus

    DEFF Research Database (Denmark)

    Goldberg, Mayer

    2001-01-01

    In this paper, we present a schema for constructing one-point bases for recursively enumerable sets of lambda terms. The novelty of the approach is that we make no assumptions about the terms for which the one-point basis is constructed: They need not be combinators and they may contain constants...... and free variables. The significance of the construction is twofold: In the context of the lambda calculus, it characterises one-point bases as ways of "packaging" sets of terms into a single term; and in the context of realistic programming languages, it implies that we can define a single procedure...

  19. General classification of maturation reaction-norm shape from size-based processes

    DEFF Research Database (Denmark)

    Christensen, Asbjørn; Andersen, Ken Haste

    2011-01-01

    for growth and mortality is based on processes at the level of the individual, and is motivated by the energy budget of fish. MRN shape is a balance between opposing factors and depends on subtle details of size dependence of growth and mortality. MRNs with both positive and negative slopes are predicted...

  20. Distance-Based Image Classification: Generalizing to New Classes at Near Zero Cost

    NARCIS (Netherlands)

    Mensink, T.; Verbeek, J.; Perronnin, F.; Csurka, G.

    2013-01-01

    We study large-scale image classification methods that can incorporate new classes and training images continuously over time at negligible cost. To this end, we consider two distance-based classifiers, the k-nearest neighbor (k-NN) and nearest class mean (NCM) classifiers, and introduce a new
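
The nearest class mean (NCM) classifier is what makes "new classes at near zero cost" possible: adding a class only requires computing one more mean. A minimal sketch with toy data follows (the paper additionally learns a metric, which is omitted here).

```python
def ncm_fit(X, y):
    """Nearest class mean: the only training cost is per-class means."""
    groups = {}
    for xi, yi in zip(X, y):
        groups.setdefault(yi, []).append(xi)
    return {c: [sum(col) / len(v) for col in zip(*v)] for c, v in groups.items()}

def ncm_predict(means, x):
    """Assign x to the class whose mean is nearest (squared Euclidean)."""
    d2 = lambda a, b: sum((ai - bi) ** 2 for ai, bi in zip(a, b))
    return min(means, key=lambda c: d2(means[c], x))

means = ncm_fit([(0, 0), (1, 0), (10, 10), (11, 10)], ["a", "a", "b", "b"])
# ncm_predict(means, (0.4, 0.1)) → "a"

# Adding a new class later costs one mean, with no retraining:
means["c"] = [5.0, -5.0]
# ncm_predict(means, (5.2, -4.0)) → "c"
```

Contrast with k-NN, the other classifier studied: k-NN also absorbs new classes for free but must keep and search all training images, whereas NCM stores one vector per class.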

  1. Generalization of heterogeneous alpine vegetation in air photo-based image classification, Latnjajaure catchment, northern Sweden

    Directory of Open Access Journals (Sweden)

    Lindblad, K. E. M.

    2006-12-01

    Full Text Available

    Mapping alpine vegetation at a meso-scale (catchment level using remote sensing presents difficulties due to a patchy distribution and heterogeneous spectral appearance of the plant cover. We discuss issues of generalization and accuracy assessment in this case study when using a digital CIR air photo for an automatic classification of the dominant plant communities. Spectral information from an aerial photograph was supplemented by classified plant communities in field and by topographical information derived from a DEM. 150 control points were tracked in the field using a GPS. The outcome from three alternative classifications was analysed by Kappa statistics, user’s and producer’s accuracy. Overall accuracy did not differ between the classifications although producer’s and user’s accuracy for separate classes differed together with total surface (ha and distribution. Manual accuracy assessment when recording the occurrence of the correct class within a radius of 5 meters from the control points generated an improvement of 16 % of the total accuracy. About 10 plant communities could be classified with acceptable accuracy where the chosen classification scheme determined the final outcome. If a high resolution pixel mosaic is generalized to units that match the positional accuracy of simple GPS this generalization may also influence the information content of the image.



    We mapped the alpine vegetation at a meso-scale (experimental catchment level) using remote sensing. This methodology presents difficulties owing to the mosaic distribution of the vegetation and the heterogeneity of the obtained spectrum. We discuss the possibilities for generalizing the results and the degree of accuracy achieved in this experimental case using digital CIR aerial photography applied to an automatic classification of

  2. ARCADO - Adding random case analysis to direct observation in workplace-based formative assessment of general practice registrars.

    Science.gov (United States)

    Ingham, Gerard; Fry, Jennifer; Morgan, Simon; Ward, Bernadette

    2015-12-10

    Workplace-based formative assessments using consultation observation are currently conducted during the Australian general practice training program. Assessment reliability is improved by using multiple assessment methods. The aim of this study was to explore experiences of general practice medical educator assessors and registrars (trainees) when adding random case analysis to direct observation (ARCADO) during formative workplace-based assessments. A sample of general practice medical educators and matched registrars were recruited. Following the ARCADO workplace assessment, semi-structured qualitative interviews were conducted. The data was analysed thematically. Ten registrars and eight medical educators participated. Four major themes emerged - formative versus summative assessment; strengths (acceptability, flexibility, time efficiency, complementarity and authenticity); weaknesses (reduced observation and integrity risks); and contextual factors (variation in assessment content, assessment timing, registrar-medical educator relationship, medical educator's approach and registrar ability). ARCADO is a well-accepted workplace-based formative assessment perceived by registrars and assessors to be valid and flexible. The use of ARCADO enabled complementary insights that would not have been achieved with direct observation alone. Whilst there are some contextual factors to be considered in its implementation, ARCADO appears to have utility as formative assessment and, subject to further evaluation, high-stakes assessment.

  3. General purpose - expert system for the analysis and design of base plates

    International Nuclear Information System (INIS)

    Al-Shawaf, T.D.; Hahn, W.F.; Ho, A.D.

    1987-01-01

    As an expert system, the IMPLATE program uses plant specific information to make decisions in modeling and analysis of baseplates. The user supplies a minimum of information which is checked for validity and reasonableness. Once this data is supplied, the program automatically generates a compatible mesh and finite element model from its data base accounting for the attachments, stiffeners, anchor bolts and plate/concrete interface. Based on the loading direction, the program deletes certain degrees of freedom and performs a linear or a nonlinear solution, whichever is appropriate. Load step sizes and equilibrium iteration are automatically selected by the program to ensure a convergent solution. Once the analysis is completed, a code check is then performed and a summary of results is produced. Plots of the plate deformation pattern and stress contours are also generated. (orig.)

  4. Optimization of Multiresonant Wireless Power Transfer Network Based on Generalized Coupled Matrix

    Directory of Open Access Journals (Sweden)

    Qiang Zhao

    2017-01-01

    Full Text Available Magnetic coupling resonant wireless power transfer network (MCRWPTN) systems can realize wireless power transfer for electrical equipment in real time and with high efficiency at a certain spatial scale, which resolves the contradiction between power transfer efficiency and power transfer distance in wireless power transfer. A fully coupled resonant energy transfer model for multirelay coils and ports is established. A dynamic adaptive impedance matching control based on the full coupling matrix, together with an annealing-based particle swarm optimization algorithm, is developed for the MCRWPTN. Furthermore, as an example, a network with twenty nodes is analyzed, and the transmission coefficient with the highest power transfer efficiency is found using the optimization algorithm while the coupling constraints are considered simultaneously. Finally, the effectiveness of the proposed method is proved by the simulation results.

  5. An Improved Minimum Error Interpolator of CNC for General Curves Based on FPGA

    Directory of Open Access Journals (Sweden)

    Jiye HUANG

    2014-05-01

    Full Text Available This paper presents an improved minimum-error interpolation algorithm for general curve generation in computer numerical control (CNC). Compared with conventional interpolation algorithms such as the By-Point Comparison method, the Minimum-Error method and the Digital Differential Analyzer (DDA) method, the proposed improved Minimum-Error interpolation algorithm finds a balance between accuracy and efficiency. The new algorithm is applicable to linear, circular, elliptical and parabolic curves. The proposed algorithm is realized on a field programmable gate array (FPGA) in the Verilog HDL language, simulated with the ModelSim software, and finally verified on a two-axis CNC lathe. The algorithm has the following advantages: firstly, the maximum interpolation error is only half of the minimum step size; and secondly, the computing time is only two clock cycles of the FPGA. Simulations and actual tests have proved the high accuracy and efficiency of the algorithm, which shows that it is highly suited for real-time applications.
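
To make the family of algorithms concrete, here is the classic By-Point Comparison circular interpolator mentioned above (first quadrant, unit steps). The paper's improved Minimum-Error method halves this step-bounded error, but the decision structure is similar. This is a generic textbook sketch, not the paper's algorithm.

```python
from math import hypot

def interpolate_arc_ccw(R):
    """By-Point Comparison circular interpolation, first quadrant,
    counterclockwise from (R, 0) to (0, R), unit steps.
    F = x^2 + y^2 - R^2 tells whether the tool is outside the arc
    (>= 0, step -x inward) or inside it (< 0, step +y outward)."""
    x, y = R, 0
    path = [(x, y)]
    while (x, y) != (0, R):
        if x * x + y * y - R * R >= 0:
            x -= 1          # on or outside the arc: move inward
        else:
            y += 1          # inside the arc: move outward
        path.append((x, y))
    return path

path = interpolate_arc_ccw(20)
# Every interpolated point stays within one step of the true arc:
assert all(abs(hypot(x, y) - 20) <= 1.0 for x, y in path)
```

The deviation function F needs only integer adds and compares, which is what makes this family attractive for FPGA implementation; the improvement claimed in the paper tightens the error bound from one step to half a step.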

  6. A Fast General-Purpose Clustering Algorithm Based on FPGAs for High-Throughput Data Processing

    CERN Document Server

    Annovi, A; The ATLAS collaboration; Castegnaro, A; Gatta, M

    2012-01-01

    We present a fast general-purpose algorithm for high-throughput clustering of data "with a two-dimensional organization". The algorithm is designed to be implemented with FPGAs or custom electronics. The key feature is a processing time that scales linearly with the amount of data to be processed. This means that clustering can be performed in pipeline with the readout, without suffering from combinatorial delays due to looping multiple times through all the data. This feature makes this algorithm especially well suited for problems where the data has high density, e.g. in the case of tracking devices working under high-luminosity conditions such as those of the LHC or Super-LHC. The algorithm is organized in two steps: the first step (core) clusters the data; the second step analyzes each cluster of data to extract the desired information. The current algorithm is developed as a clustering device for modern high-energy physics pixel detectors. However, the algorithm has a much broader field of applications. In ...
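
A software analogue of the two-step structure described above: step one groups two-dimensionally organized hits into clusters of adjacent pixels; step two would then analyze each cluster (e.g., compute a centroid). This flood-fill sketch illustrates the logic only; the FPGA implementation is pipelined rather than iterative, which is how it achieves linear processing time.

```python
from collections import deque

def cluster_hits(hits):
    """Group pixel hits (x, y) into clusters of 8-connected neighbors
    via breadth-first flood fill."""
    remaining = set(hits)
    clusters = []
    while remaining:
        seed = remaining.pop()
        queue, cluster = deque([seed]), [seed]
        while queue:
            x, y = queue.popleft()
            for dx in (-1, 0, 1):
                for dy in (-1, 0, 1):
                    n = (x + dx, y + dy)
                    if n in remaining:      # adjacent unvisited hit
                        remaining.remove(n)
                        queue.append(n)
                        cluster.append(n)
        clusters.append(cluster)
    return clusters

hits = [(0, 0), (0, 1), (1, 1), (5, 5), (6, 5)]
clusters = cluster_hits(hits)
# Two clusters: {(0,0), (0,1), (1,1)} and {(5,5), (6,5)}
```

Each hit is visited exactly once, so this software version is also linear in the number of hits, mirroring the scaling property the abstract emphasizes.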

  7. General practitioners' attitude to sport and exercise medicine services: a questionnaire-based survey.

    Science.gov (United States)

    Kassam, H; Tzortziou Brown, V; O'Halloran, P; Wheeler, P; Fairclough, J; Maffulli, N; Morrissey, D

    2014-12-01

    Sport and exercise medicine (SEM) aims to manage sporting injuries and promote physical activity. This study explores general practitioners' (GPs) awareness, understanding and utilisation of their local SEM services. A questionnaire survey, including patient case scenarios, was administered between February and May 2011. 693 GPs working in Cardiff and Vale, Leicester and Tower Hamlets were invited to participate. 244 GPs responded to the questionnaire (35.2% response rate). Less than half (46%; 112/244) were aware of their nearest SEM service and only 38% (92/244) had a clear understanding on referral indications. The majority (82%; 199/244) felt confident advising less active patients about exercise. There were divergent management opinions about the case scenarios of patients who were SEM referral candidates. Overall, GPs were significantly more likely to refer younger patients and patients with sport-related problems rather than patients who would benefit from increasing their activity levels in order to prevent or manage chronic conditions (pHealth Service which may be resulting in suboptimal utilisation especially for patients who could benefit from increasing their activity levels.

  8. Development of a Corrosion Potential Measuring System Based on the Generalization of DACS Physical Scale Modeling

    Directory of Open Access Journals (Sweden)

    Song Dalei

    2015-01-01

    Full Text Available A feasible method for evaluating the protection effect and corrosion state of marine cathodic protection (CP) systems is to collect sufficient electric potential data around a submarine pipeline and then establish the mapping relations between these data and the corrosion states of pipelines. However, it is difficult for scientists and researchers to obtain those data accurately due to the harsh marine environment and the absence of a dedicated potential measurement device. In this paper, to alleviate these two problems, firstly, the theory of dimension and conductivity scaling (DACS) physical scale modeling of marine impressed current cathodic protection (ICCP) systems is generalized to marine CP systems; secondly, a potential measurement device is developed and an analogue experiment is designed according to DACS physical scale modeling to verify the feasibility of the measuring system. The experimental results show that 92 percent of the measurement errors are less than 0.25 mV, thereby providing an economical and feasible measuring system for obtaining electric potential data around an actual submarine pipeline under CP.

  9. A standardised questionnaire for evaluating hospital-based rotations in general practice vocational training.

    Science.gov (United States)

    Viniol, Annika; Lommler-Thamer, Martina; Baum, Erika; Donner-Banzhoff, Norbert

    2015-05-01

    The residency period of vocational training is a fundamental component for a general practitioner's (GP's) qualification. During the residency, teaching is given by consultants and staff from different medical disciplines who do not necessarily know the GP registrar's training objectives. To develop a standardised feedback questionnaire to evaluate residency attachments and enable GP registrars and their supervisors to discuss the actual training environment and areas for improvement. A cross-sectional study was carried out to assess GP registrars' ratings of items preselected by an expert group to reflect important elements in high-quality vocational training. We recruited GP registrars in Germany via mailing lists and online discussion forums and used the importance-severity-score method to score the content and importance of the selected items. On the basis of these ratings, we eliminated 74 items. This version of the questionnaire then underwent an intra-observer-reliability evaluation by the same GP registrars after eight weeks. Items with a correlation of less than 0.4 (Pearson correlation coefficient) were dropped or rephrased. Our initial questionnaire featured 117 potential items. We reduced this to 43 items after the first study; two additional items were dropped following the reliability test. To our knowledge, this is the first standardised feedback questionnaire evaluating the residency period of vocational training. We hope our work will improve the quality of GPs' vocational training. The importance-severity-score method is a fast and efficient method for developing instruments using personal judgement to evaluate constructs.

  10. Structure factors for tunneling ionization rates of molecules: General Hartree-Fock-based integral representation

    Science.gov (United States)

    Madsen, Lars Bojer; Jensen, Frank; Dnestryan, Andrey I.; Tolstikhin, Oleg I.

    2017-07-01

    In the leading-order approximation of the weak-field asymptotic theory (WFAT), the dependence of the tunneling ionization rate of a molecule in an electric field on its orientation with respect to the field is determined by the structure factor of the ionizing molecular orbital. The WFAT yields an expression for the structure factor in terms of a local property of the orbital in the asymptotic region. However, in general quantum chemistry approaches molecular orbitals are expanded in a Gaussian basis which does not reproduce their asymptotic behavior correctly. This hinders the application of the WFAT to polyatomic molecules, which are attracting increasing interest in strong-field physics. Recently, an integral-equation approach to the WFAT for tunneling ionization of one electron from an arbitrary potential has been developed. The structure factor is expressed in an integral form as a matrix element involving the ionizing orbital. The integral is not sensitive to the asymptotic behavior of the orbital, which resolves the difficulty mentioned above. Here, we extend the integral representation for the structure factor to many-electron systems treated within the Hartree-Fock method and show how it can be implemented on the basis of standard quantum chemistry software packages. We validate the methodology by considering noble-gas atoms and the CO molecule, for which accurate structure factors exist in the literature. We also present benchmark results for CO2 and for NH3 in the pyramidal and planar geometries.

  11. Cultural transmission and the evolution of human behaviour: a general approach based on the Price equation.

    Science.gov (United States)

    El Mouden, C; André, J-B; Morin, O; Nettle, D

    2014-02-01

    Transmitted culture can be viewed as an inheritance system somewhat independent of genes that is subject to processes of descent with modification in its own right. Although many authors have conceptualized cultural change as a Darwinian process, there is no generally agreed formal framework for defining key concepts such as natural selection, fitness, relatedness and altruism for the cultural case. Here, we present and explore such a framework using the Price equation. Assuming an isolated, independently measurable culturally transmitted trait, we show that cultural natural selection maximizes cultural fitness, a distinct quantity from genetic fitness, and also that cultural relatedness and cultural altruism are not reducible to or necessarily related to their genetic counterparts. We show that antagonistic coevolution will occur between genes and culture whenever cultural fitness is not perfectly aligned with genetic fitness, as genetic selection will shape psychological mechanisms to avoid susceptibility to cultural traits that bear a genetic fitness cost. We discuss the difficulties with conceptualizing cultural change using the framework of evolutionary theory, the degree to which cultural evolution is autonomous from genetic evolution, and the extent to which cultural change should be seen as a Darwinian process. We argue that the nonselection components of evolutionary change are much more important for culture than for genes, and that this and other important differences from the genetic case mean that different approaches and emphases are needed for cultural than genetic processes. © 2013 The Authors. Journal of Evolutionary Biology © 2013 European Society For Evolutionary Biology.
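For readers unfamiliar with it, the Price equation invoked above has the following standard two-term form (textbook notation, not the authors' culture-specific symbols):

```latex
\bar{w}\,\Delta\bar{z} \;=\; \underbrace{\mathrm{Cov}(w_i, z_i)}_{\text{selection}} \;+\; \underbrace{\mathrm{E}(w_i\,\Delta z_i)}_{\text{transmission}}
```

where z_i is the (cultural) trait value of entity i and w_i its fitness. The covariance term is the selection component; the expectation term holds the nonselection (transmission) component, which the authors argue weighs far more heavily in cultural than in genetic change.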

  12. A Business Process Management System based on a General Optimum Criterion

    Directory of Open Access Journals (Sweden)

    Vasile MAZILESCU

    2009-01-01

    Full Text Available Business Process Management Systems (BPMS) provide a broad range of facilities to manage operational business processes. These systems should provide support for the complete Business Process Management (BPM) life-cycle [16]: (re)design, configuration, execution, control, and diagnosis of processes. BPMS can be seen as successors of Workflow Management (WFM) systems. However, already in the seventies people were working on office automation systems which are comparable with today's WFM systems. Recently, WFM vendors started to position their systems as BPMS. Our paper's goal is a proposal for a Tasks-to-Workstations Assignment Algorithm (TWAA) for assembly lines, which is a special implementation of a stochastic descent technique, in the context of BPMS, especially at the control level. Both cases, single and mixed-model, are treated. For a family of product models having the same generic structure, the mixed-model assignment problem can be formulated through an equivalent single-model problem. A general optimum criterion is considered. As with assembly line balancing, this kind of optimisation problem leads to a graph partitioning problem meeting precedence and feasibility constraints. The proposed definition of the "neighbourhood" function involves an efficient way of treating the partition and precedence constraints. Moreover, the Stochastic Descent Technique (SDT) allows an implicit treatment of the feasibility constraint. The proposed algorithm converges with probability 1 to an optimal solution.

  13. Rigorous Line-Based Transformation Model Using the Generalized Point Strategy for the Rectification of High Resolution Satellite Imagery

    Directory of Open Access Journals (Sweden)

    Kun Hu

    2016-09-01

    Full Text Available High precision geometric rectification of High Resolution Satellite Imagery (HRSI) is the basis of digital mapping and Three-Dimensional (3D) modeling. Taking advantage of line features as basic geometric control conditions instead of control points, the Line-Based Transformation Model (LBTM) provides a practical and efficient way of image rectification. It can accurately build the mathematical relationship between image space and the corresponding object space, while dramatically reducing the workload of ground control and feature recognition. Based on a generalization and analysis of existing LBTMs, a novel rigorous LBTM is proposed in this paper, which can further eliminate the geometric deformation caused by sensor inclination and terrain variation. This improved nonlinear LBTM is constructed using a generalized point strategy and resolved by least squares overall adjustment. Geo-positioning accuracy experiments with IKONOS, GeoEye-1 and ZiYuan-3 satellite imagery are performed to compare the rigorous LBTM with other relevant line-based and point-based transformation models. Both theoretical analysis and experimental results demonstrate that the rigorous LBTM is more accurate and reliable without adding extra ground control. The geo-positioning accuracy of satellite imagery rectified by the rigorous LBTM can reach about one pixel with eight control lines and can be further improved by optimizing the horizontal and vertical distribution of the control lines.

  14. Generalized Tensor-Based Morphometry of HIV/AIDS Using Multivariate Statistics on Deformation Tensors

    OpenAIRE

    Lepore, Natasha; Brun, Caroline; Chou, Yi-Yu; Chiang, Ming-Chang; Dutton, Rebecca A.; Hayashi, Kiralee M.; Luders, Eileen; Lopez, Oscar L.; Aizenstein, Howard J.; Toga, Arthur W.; Becker, James T.; Thompson, Paul M.

    2008-01-01

    This paper investigates the performance of a new multivariate method for tensor-based morphometry (TBM). Statistics on Riemannian manifolds are developed that exploit the full information in deformation tensor fields. In TBM, multiple brain images are warped to a common neuroanatomical template via 3-D nonlinear registration; the resulting deformation fields are analyzed statistically to identify group differences in anatomy. Rather than study the Jacobian determinant (volume expansion factor...

  15. Development of restaurant serviceology based on the methodology of general theory of service

    OpenAIRE

    Glushchenko V.; Glushchenko I.; Katz S.; Olshevskaya K.; Pryazhnikova A.; Stashkova E.

    2018-01-01

    The positions of restaurant service (service in restaurant business — restaurantology) are formed as a scientific basis for designing a business and assessing the quality of services in restaurant business, developing the service sector in restaurant business, exploring and forming theoretical bases for the development of economy and management in the restaurant business in the globalization of the market for such a kind of services, development of service and information technologies and com...

  16. A generalized polynomial chaos based ensemble Kalman filter with high accuracy

    International Nuclear Information System (INIS)

    Li Jia; Xiu Dongbin

    2009-01-01

    As one of the most widely adopted sequential data assimilation methods in many areas, especially those involving complex nonlinear dynamics, the ensemble Kalman filter (EnKF) has been under extensive investigation regarding its properties and efficiency. Compared to other variants of the Kalman filter (KF), the EnKF is straightforward to implement, as it employs random ensembles to represent solution states. This, however, introduces sampling errors that degrade the accuracy of the EnKF. Though sampling errors can be easily reduced by using a large number of samples, in practice this is undesirable, as each ensemble member is a solution of the system of state equations and can be time consuming to compute for large-scale problems. In this paper we present an efficient EnKF implementation via generalized polynomial chaos (gPC) expansion. The key ingredients of the proposed approach are (1) solving the system of stochastic state equations via the gPC methodology to gain efficiency; and (2) sampling the gPC approximation of the stochastic solution with an arbitrarily large number of samples, at virtually no additional computational cost, to drastically reduce the sampling errors. The resulting algorithm thus achieves high accuracy at reduced computational cost, compared to classical implementations of the EnKF. Numerical examples are provided to verify the convergence property and accuracy improvement of the new algorithm. We also prove that for linear systems with Gaussian noise, the first-order gPC Kalman filter method is equivalent to the exact Kalman filter.
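The gPC surrogate itself is beyond a short sketch, but the EnKF analysis step that it accelerates (by making large ensembles nearly free to sample from the polynomial expansion) can be illustrated as follows. The function name and the scalar toy problem are our own, not the paper's:

```python
import numpy as np

rng = np.random.default_rng(42)

def enkf_analysis(X, y, H, R):
    """Stochastic (perturbed-observation) EnKF analysis step.
    X: (n, N) forecast ensemble, y: (m,) observation,
    H: (m, n) observation operator, R: (m, m) observation covariance."""
    n, N = X.shape
    A = X - X.mean(axis=1, keepdims=True)          # state anomalies
    HX = H @ X
    HA = HX - HX.mean(axis=1, keepdims=True)       # observed anomalies
    # Kalman gain K = P_f H^T (H P_f H^T + R)^{-1}, in ensemble form
    K = (A @ HA.T) @ np.linalg.inv(HA @ HA.T + (N - 1) * R)
    # perturbed observations keep the analysis spread statistically consistent
    Y = y[:, None] + rng.multivariate_normal(np.zeros(len(y)), R, N).T
    return X + K @ (Y - HX)

# Toy problem: scalar state, prior ensemble centred far from the observation.
X = rng.normal(0.0, 1.0, size=(1, 500))            # forecast ensemble, mean ~0
y = np.array([5.0])                                 # observation
H = np.eye(1)
R = 0.1 * np.eye(1)
Xa = enkf_analysis(X, y, H, R)                      # analysis ensemble
```

The analysis mean moves toward the observation and the ensemble spread shrinks, which is exactly the behaviour the gPC sampling makes cheap to obtain with very large N.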

  17. CONSTANCES: a general prospective population-based cohort for occupational and environmental epidemiology: cohort profile.

    Science.gov (United States)

    Goldberg, Marcel; Carton, Matthieu; Descatha, Alexis; Leclerc, Annette; Roquelaure, Yves; Santin, Gaëlle; Zins, Marie

    2017-01-01

    WHY THE COHORT WAS SET UP?: CONSTANCES is a general-purpose cohort with a focus on occupational and environmental factors. CONSTANCES was designed as a randomly selected sample of French adults aged 18-69 years at inception; 200 000 participants will be included. At enrolment, the participants are invited to complete questionnaires and to attend a health screening centre (HSC) for a health examination. A biobank will be set up. The follow-up includes a yearly self-administered questionnaire, a periodic visit to an HSC and linkage to social and national health administrative databases. Data collected for participants include social and demographic characteristics, socioeconomic status, life events and behaviours. Regarding occupational and environmental factors, a wealth of data on organisational, chemical, biological, biomechanical and psychosocial lifelong exposure, as well as residential characteristics, is collected at enrolment and during follow-up. The health data cover a wide spectrum: self-reported health scales, reported prevalent and incident diseases, long-term chronic diseases and hospitalisations, sick-leaves, handicaps, limitations, disabilities and injuries, healthcare usage and services provided, and causes of death. To take into account non-participation and attrition, a random cohort of non-participants was set up and will be followed through the same national databases as participants. Inclusion began at the end of 2012 and more than 110 000 participants had already been included by September 2016. Several research projects on occupational and environmental risks have already been submitted in response to a public call for nested projects. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://www.bmj.com/company/products-services/rights-and-licensing/.

  18. Generalized functional linear models for gene-based case-control association studies.

    Science.gov (United States)

    Fan, Ruzong; Wang, Yifan; Mills, James L; Carter, Tonia C; Lobach, Iryna; Wilson, Alexander F; Bailey-Wilson, Joan E; Weeks, Daniel E; Xiong, Momiao

    2014-11-01

    By using functional data analysis techniques, we developed generalized functional linear models for testing association between a dichotomous trait and multiple genetic variants in a genetic region while adjusting for covariates. Both fixed and mixed effect models are developed and compared. Extensive simulations show that Rao's efficient score tests of the fixed effect models are very conservative since they generate lower type I errors than nominal levels, and global tests of the mixed effect models generate accurate type I errors. Furthermore, we found that the Rao's efficient score test statistics of the fixed effect models have higher power than the sequence kernel association test (SKAT) and its optimal unified version (SKAT-O) in most cases when the causal variants are both rare and common. When the causal variants are all rare (i.e., minor allele frequencies less than 0.03), the Rao's efficient score test statistics and the global tests have similar or slightly lower power than SKAT and SKAT-O. In practice, it is not known whether rare variants or common variants in a gene region are disease related. All we can assume is that a combination of rare and common variants influences disease susceptibility. Thus, the improved performance of our models when the causal variants are both rare and common shows that the proposed models can be very useful in dissecting complex traits. We compare the performance of our methods with SKAT and SKAT-O on real neural tube defects and Hirschsprung's disease datasets. The Rao's efficient score test statistics and the global tests are more sensitive than SKAT and SKAT-O in the real data analysis. Our methods can be used in either gene-disease genome-wide/exome-wide association studies or candidate gene analyses. © 2014 WILEY PERIODICALS, INC.

  19. A temporal subtraction method for thoracic CT images based on generalized gradient vector flow

    International Nuclear Information System (INIS)

    Miyake, Noriaki; Kim, H.; Maeda, Shinya; Itai, Yoshinori; Tan, J.K.; Ishikawa, Seiji; Katsuragawa, Shigehiko

    2010-01-01

    A temporal subtraction image, which is obtained by subtracting a previous image from a current one, can be used to enhance interval changes (such as formation of new lesions and changes in existing abnormalities) on medical images by removing most of the normal structures. If image registration is incorrect, not only the interval changes but also the normal structures appear as artifacts on the temporal subtraction image. For the temporal subtraction technique on 2-D X-ray images, effectiveness has been shown through many clinical evaluation experiments, and practical use is advancing. Moreover, since MDCT (Multi-Detector row Computed Tomography) can easily be introduced in the medical field, the development of temporal subtraction for thoracic CT images is expected. In our study, a temporal subtraction technique for thoracic CT images is developed. In this technique, vector fields are derived by use of GGVF (Generalized Gradient Vector Flow) from the previous and current CT images. Afterwards, VOIs (Volumes of Interest) are set up on the previous and current CT image pairs. The shift vectors are calculated by nearest neighbor matching of the vector fields in these VOIs. A search kernel on the previous CT image is set up from the obtained shift vector. The previous CT voxel that best resembles the current reference voxel is detected by the voxel value and the GGVF vector in the kernel, and the previous CT image is transformed to the coordinates of the reference voxel. Finally, the temporal subtraction image is made by subtracting the warped image from the current one. To verify the proposed method, the results of application to 7 cases and the effectiveness are described. (author)
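The core subtraction idea can be shown in a few lines. Here a rigid integer shift stands in for the dense GGVF-driven warping of the paper, purely for illustration:

```python
import numpy as np

def temporal_subtraction(current, previous, shift):
    """Subtract a shifted previous volume from the current one: aligned normal
    structures cancel and only interval changes remain. The integer shift is a
    crude stand-in for the paper's dense GGVF-based warping."""
    warped = np.roll(previous, shift, axis=(0, 1, 2))
    return current.astype(float) - warped.astype(float)

# Synthetic example: the "current" scan is the previous one shifted by (1, 2, 0)
# plus one new bright lesion; subtraction with the right shift isolates it.
rng = np.random.default_rng(0)
previous = rng.integers(0, 50, size=(8, 8, 8))
current = np.roll(previous, (1, 2, 0), axis=(0, 1, 2)).copy()
current[4, 4, 4] += 100                      # the interval change
diff = temporal_subtraction(current, previous, (1, 2, 0))
```

With an incorrect shift, the normal anatomy would survive in `diff` as artifacts, which is exactly the registration failure mode the abstract describes.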

  20. Generalized Dissimilarity Modeling of Late-Quaternary Variations in Pollen-Based Compositional Dissimilarity

    Science.gov (United States)

    Williams, J. W.; Blois, J.; Ferrier, S.; Manion, G.; Fitzpatrick, M.; Veloz, S.; He, F.; Liu, Z.; Otto-Bliesner, B. L.

    2011-12-01

    In Quaternary paleoecology and paleoclimatology, compositionally dissimilar fossil assemblages usually indicate dissimilar environments; this relationship underpins assemblage-level techniques for paleoenvironmental reconstruction such as mutual climatic ranges or the modern analog technique. However, there has been relatively little investigation into the form of the relationship between compositional dissimilarity and climatic dissimilarity. Here we apply generalized dissimilarity modeling (GDM; Ferrier et al. 2007) as a tool for modeling the expected non-linear relationships between compositional and climatic dissimilarity. We use the CCSM3.0 transient paleoclimatic simulations from the SynTrace working group (Liu et al. 2009) and a new generation of fossil pollen maps from eastern North America (Blois et al. 2011) to 1) assess the spatial relationships between compositional dissimilarity and climatic dissimilarity and 2) whether these spatial relationships change over time. We used a taxonomic list of 106 genus-level pollen types, six climatic variables (winter precipitation and mean temperature, summer precipitation and temperature, seasonality of precipitation, and seasonality of temperature) that were chosen to minimize collinearity, and a cross-referenced pollen and climate dataset mapped for time slices spaced 1000 years apart. When GDM was trained for one time slice, the correlation between predicted and observed spatial patterns of community dissimilarity for other times ranged between 0.3 and 0.73. The selection of climatic predictor variables changed over time, as did the form of the relationship between compositional turnover and climatic predictors. Summer temperature was the only variable selected for all time periods. 
These results thus suggest that the relationship between compositional dissimilarity in pollen assemblages (and, by implication, beta diversity in plant communities) and climatic dissimilarity can change over time, for reasons to be

  1. A general U-block model-based design procedure for nonlinear polynomial control systems

    Science.gov (United States)

    Zhu, Q. M.; Zhao, D. Y.; Zhang, Jianhua

    2016-10-01

    The proposition of the U-model concept (in terms of 'providing concise and applicable solutions for complex problems') and a corresponding basic U-control design algorithm originated in the first author's PhD thesis. The term U-model appeared (not rigorously defined) for the first time in the first author's other journal paper, which established a framework for using linear polynomial control system design approaches to design nonlinear polynomial control systems (in brief, linear polynomial approaches → nonlinear polynomial plants). This paper represents the next milestone work - using linear state-space approaches to design nonlinear polynomial control systems (in brief, linear state-space approaches → nonlinear polynomial plants). The overall aim of the study is to establish a framework, defined as the U-block model, which provides a generic prototype for using linear state-space-based approaches to design control systems with smooth nonlinear plants/processes described by polynomial models. For analysing feasibility and effectiveness, the sliding mode control design approach is selected as an exemplary case study. Numerical simulation studies provide a user-friendly step-by-step procedure for readers/users with interest in their ad hoc applications. In formality, this is the first paper to present the U-model-oriented control system design in a formal way and to study the associated properties and theorems. The previous publications, in the main, have been algorithm-based studies and simulation demonstrations. In some sense, this paper can be treated as a landmark for U-model-based research, moving it from the intuitive/heuristic stage to rigorous/formal/comprehensive studies.

  2. History of the application of the generalized Lewis acid-base theory to metals

    International Nuclear Information System (INIS)

    Brewer, L.

    1988-11-01

    The history of my experiences with intermetallics has been found useful by students seeking my advice on which directions in science they should be emphasizing. In response to their question, I point to a mobile in my office consisting of seven hands pointing in different directions. Science comes up with so many unexpected developments that one's education should have a broad enough base to allow one to branch out in any direction to take advantage of unexpected opportunities. My historical presentation will be a personal account that I hope will serve as a guide to students. There have been many unexpected abrupt changes in my research

  3. A general parallelization strategy for random path based geostatistical simulation methods

    Science.gov (United States)

    Mariethoz, Grégoire

    2010-07-01

    The size of simulation grids used for numerical models has increased by many orders of magnitude in the past years, and this trend is likely to continue. Efficient pixel-based geostatistical simulation algorithms have been developed, but for very large grids and complex spatial models, the computational burden remains heavy. As cluster computers become widely available, using parallel strategies is a natural step for increasing the usable grid size and the complexity of the models. These strategies must profit from the possibilities offered by machines with a large number of processors. On such machines, the bottleneck is often the communication time between processors. We present a strategy distributing grid nodes among all available processors while minimizing communication and latency times. It consists of centralizing the simulation on a master processor that calls the other slave processors as if they were functions simulating one node at a time. The key is to decouple the sending and the receiving operations to avoid synchronization. Centralization allows for a conflict management system ensuring that nodes being simulated simultaneously do not interfere in terms of neighborhood. The strategy is computationally efficient and is versatile enough to be applicable to all random path based simulation methods.
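The centralized dispatch with neighbourhood conflict management can be sketched as below. This is a toy illustration with threads standing in for slave processors; `simulate_node`, the path and the neighbourhood structure are placeholders, not the paper's algorithm:

```python
from concurrent.futures import ThreadPoolExecutor, wait, FIRST_COMPLETED

def simulate_node(node, neighbor_values):
    # placeholder kernel: a real geostatistical simulator would draw from the
    # local conditional distribution given already-simulated neighbors
    return node + sum(neighbor_values)

def master(path, neighborhood, n_workers=4):
    """Centralized dispatch along a random path: a node is launched only if
    its neighborhood does not overlap any node currently in flight, so
    simultaneous simulations never interfere."""
    values, in_flight, pending = {}, {}, list(path)
    with ThreadPoolExecutor(max_workers=n_workers) as pool:
        while pending or in_flight:
            busy = set()
            for node in in_flight.values():
                busy |= neighborhood[node] | {node}
            for node in list(pending):              # launch conflict-free nodes
                if len(in_flight) >= n_workers:
                    break
                region = neighborhood[node] | {node}
                if region & busy:
                    continue
                pending.remove(node)
                busy |= region
                nbrs = [values[m] for m in neighborhood[node] if m in values]
                in_flight[pool.submit(simulate_node, node, nbrs)] = node
            if in_flight:                           # harvest finished nodes
                done, _ = wait(in_flight, return_when=FIRST_COMPLETED)
                for fut in done:
                    values[in_flight.pop(fut)] = fut.result()
    return values

# 1-D chain of 9 nodes, each node's neighborhood is its adjacent nodes
path = list(range(9))
nbhd = {i: {j for j in (i - 1, i + 1) if 0 <= j < 9} for i in range(9)}
vals = master(path, nbhd, n_workers=3)
```

Submitting work and harvesting results are separate steps in the loop, mirroring the decoupled send/receive operations the abstract highlights.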

  4. Generalized tensor-based morphometry of HIV/AIDS using multivariate statistics on deformation tensors.

    Science.gov (United States)

    Lepore, N; Brun, C; Chou, Y Y; Chiang, M C; Dutton, R A; Hayashi, K M; Luders, E; Lopez, O L; Aizenstein, H J; Toga, A W; Becker, J T; Thompson, P M

    2008-01-01

    This paper investigates the performance of a new multivariate method for tensor-based morphometry (TBM). Statistics on Riemannian manifolds are developed that exploit the full information in deformation tensor fields. In TBM, multiple brain images are warped to a common neuroanatomical template via 3-D nonlinear registration; the resulting deformation fields are analyzed statistically to identify group differences in anatomy. Rather than study the Jacobian determinant (volume expansion factor) of these deformations, as is common, we retain the full deformation tensors and apply a manifold version of Hotelling's T^2 test to them, in a Log-Euclidean domain. In 2-D and 3-D magnetic resonance imaging (MRI) data from 26 HIV/AIDS patients and 14 matched healthy subjects, we compared multivariate tensor analysis versus univariate tests of simpler tensor-derived indices: the Jacobian determinant, the trace, geodesic anisotropy, and eigenvalues of the deformation tensor, and the angle of rotation of its eigenvectors. We detected consistent, but more extensive patterns of structural abnormalities, with multivariate tests on the full tensor manifold. Their improved power was established by analyzing cumulative p-value plots using false discovery rate (FDR) methods, appropriately controlling for false positives. This increased detection sensitivity may empower drug trials and large-scale studies of disease that use tensor-based morphometry.
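The Log-Euclidean embedding of SPD deformation tensors and a two-sample Hotelling T^2 statistic on it can be sketched as follows. This is a generic numpy illustration; the paper's registration pipeline and FDR analysis are not reproduced:

```python
import numpy as np

def spd_log(S):
    """Matrix logarithm of a symmetric positive-definite tensor."""
    w, V = np.linalg.eigh(S)
    return (V * np.log(w)) @ V.T

def log_vec(S):
    """Log-Euclidean embedding: vectorize log(S), off-diagonals scaled by
    sqrt(2) so the Euclidean norm matches the Frobenius norm."""
    L = spd_log(S)
    i, j = np.triu_indices(L.shape[0])
    return np.where(i == j, 1.0, np.sqrt(2.0)) * L[i, j]

def hotelling_t2(group_a, group_b):
    """Two-sample Hotelling T^2 on log-Euclidean vectors of SPD tensors."""
    A = np.array([log_vec(S) for S in group_a])
    B = np.array([log_vec(S) for S in group_b])
    n1, n2 = len(A), len(B)
    d = A.mean(0) - B.mean(0)
    Sp = ((n1 - 1) * np.cov(A.T) + (n2 - 1) * np.cov(B.T)) / (n1 + n2 - 2)
    return (n1 * n2) / (n1 + n2) * d @ np.linalg.solve(Sp, d)

# Example: identical groups give T^2 = 0; a volume-scaled group gives T^2 > 0.
rng = np.random.default_rng(1)
def rand_spd(scale=1.0):
    A = rng.normal(size=(2, 2))
    return scale * (A @ A.T + np.eye(2))    # random 2x2 SPD tensor
group_a = [rand_spd() for _ in range(10)]
group_b = [rand_spd(4.0) for _ in range(10)]
t2_null = hotelling_t2(group_a, group_a)
t2_alt = hotelling_t2(group_a, group_b)
```

Working on the matrix logarithm is what lets an ordinary multivariate test respect the curved geometry of the tensor manifold.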

  5. Decentralized Job Scheduling in the Cloud Based on a Spatially Generalized Prisoner’s Dilemma Game

    Directory of Open Access Journals (Sweden)

    Gąsior Jakub

    2015-12-01

    Full Text Available We present in this paper a novel distributed solution to a security-aware job scheduling problem in cloud computing infrastructures. We assume that the assignment of the available resources is governed exclusively by the specialized brokers assigned to individual users submitting their jobs to the system. The goal of this scheme is to allocate a limited quantity of resources to a specific number of jobs while minimizing their execution failure probability and total completion time. Our approach is based on the Pareto dominance relationship and implemented at the individual user level. To select the best scheduling strategies from the resulting Pareto frontiers and construct a global scheduling solution, we developed a decision-making mechanism based on the game-theoretic model of the Spatial Prisoner’s Dilemma, realized by selfish agents operating in a two-dimensional cellular automata space. Their behavior is conditioned by the objectives of the various entities involved in the scheduling process and driven towards a Nash equilibrium solution by the employed social welfare criteria. The performance of the proposed scheduler is verified by a number of numerical experiments. The results show the effectiveness and scalability of the scheme in the presence of a large number of jobs and resources involved in the scheduling process.
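The Spatial Prisoner's Dilemma dynamic on a cellular grid can be illustrated with the classic Nowak-May imitation rule. This is a generic toy model; the paper's payoff structure, broker objectives and social welfare criteria are richer:

```python
import numpy as np

def step(grid, b=1.8):
    """One synchronous round of the Nowak-May spatial Prisoner's Dilemma.
    1 = cooperate, 0 = defect. Each cell plays its 8 neighbours and itself
    (cooperator earns 1 per cooperating partner, defector earns b), then
    copies the strategy of the highest-scoring cell in its neighbourhood."""
    shifts = [(di, dj) for di in (-1, 0, 1) for dj in (-1, 0, 1)]
    payoff = np.zeros_like(grid, dtype=float)
    for di, dj in shifts:
        nb = np.roll(np.roll(grid, di, 0), dj, 1)
        payoff += np.where(grid == 1, nb, b * nb)
    best, best_pay = grid.copy(), payoff.copy()
    for di, dj in shifts:                      # imitate the best neighbour
        nb_pay = np.roll(np.roll(payoff, di, 0), dj, 1)
        nb_strat = np.roll(np.roll(grid, di, 0), dj, 1)
        better = nb_pay > best_pay
        best_pay = np.where(better, nb_pay, best_pay)
        best = np.where(better, nb_strat, best)
    return best
```

Uniform populations are fixed points of the rule: an all-cooperator or all-defector grid is unchanged by `step`, while mixed grids evolve the familiar shifting cooperation clusters.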

  6. A General Cognitive System Architecture Based on Dynamic Vision for Motion Control

    Directory of Open Access Journals (Sweden)

    Ernst D. Dickmanns

    2003-10-01

    Full Text Available Animation of spatio-temporal generic models for 3-D shape and motion of objects and subjects, based on feature sets evaluated in parallel from several image streams, is considered to be the core of dynamic vision. Subjects are a special kind of objects capable of sensing environmental parameters and of initiating own actions in combination with stored knowledge. Object / subject recognition and scene understanding are achieved on different levels and scales. Multiple objects are tracked individually in the image streams for perceiving their actual state ('here and now'. By analyzing motion of all relevant objects / subjects over a larger time scale on the level of state variables in the 'scene tree representation' known from computer graphics, the situation with respect to decision taking is assessed. Behavioral capabilities of subjects are represented explicitly on an abstract level for characterizing their potential behaviors. These are generated by stereotypical feed-forward and feedback control applications on a separate systems dynamics level with corresponding methods close to the actuator hardware. This dual representation on an abstract level (for decision making and on the implementation level allows for flexibility and easy adaptation or extension. Results are shown for road vehicle guidance based on three cameras on a gaze control platform.

  7. Study on Fault Diagnosis of Rolling Bearing Based on Time-Frequency Generalized Dimension

    Directory of Open Access Journals (Sweden)

    Yu Yuan

    2015-01-01

    Full Text Available Condition monitoring and fault diagnosis of mechanical equipment play an important role in modern engineering. Rolling bearings are the most common components of mechanical equipment that sustain and transfer load; therefore, fault diagnosis of rolling bearings has great significance. Fractal theory provides an effective method to describe the complexity and irregularity of the vibration signals of rolling bearings. In this paper a novel multifractal fault diagnosis approach based on time-frequency domain signals is proposed. The method and numerical algorithm of multifractal analysis in the time-frequency domain are provided. For the grid type J and order parameter q in the algorithm, the value range of J and the cut-off condition of q are optimized based on their effect on the dimension calculation. Simulation experiments demonstrate that effective signal identification can be accomplished by the multifractal method in the time-frequency domain, and that it is related to factors such as signal energy and distribution. Further fault diagnosis experiments on bearings show that the multifractal method in the time-frequency domain can accomplish fault diagnosis, including fault judgment and fault type identification, and that faults can be detected at an early stage. Therefore, the multifractal method in the time-frequency domain is a practicable method for fault diagnosis of bearings.
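The generalized (Rényi) dimension D_q underlying such multifractal analysis can be estimated from a box-mass distribution as sketched below. This is a definition-level illustration, not the paper's time-frequency algorithm:

```python
import numpy as np

def renyi_dimension(P, q, eps):
    """Estimate the generalized (Renyi) dimension D_q at box size eps from a
    normalized box-mass distribution P (sum(P) == 1); requires q != 1.
    D_q = log(sum_i p_i^q) / ((q - 1) * log(eps))."""
    p = np.asarray(P, dtype=float)
    p = p[p > 0]                        # empty boxes do not contribute
    return np.log(np.sum(p ** q)) / ((q - 1) * np.log(eps))

# Uniform measure on a line covered by 128 boxes of size 1/128: every D_q
# equals 1 for any q != 1 (a monofractal); a multifractal signal would show
# a q-dependent spectrum instead.
P = np.full(128, 1.0 / 128.0)
d0 = renyi_dimension(P, q=0.0, eps=1.0 / 128.0)   # capacity dimension
d2 = renyi_dimension(P, q=2.0, eps=1.0 / 128.0)   # correlation dimension
```

For q = 1 the information dimension is obtained as a limit (sum of p log p over log eps) rather than from this formula.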

  8. Generalized Superconductivity. Generalized Levitation

    International Nuclear Information System (INIS)

    Ciobanu, B.; Agop, M.

    2004-01-01

    In the recent papers, the gravitational superconductivity is described. We introduce the concept of generalized superconductivity observing that any nongeodesic motion and, in particular, the motion in an electromagnetic field, can be transformed in a geodesic motion by a suitable choice of the connection. In the present paper, the gravitoelectromagnetic London equations have been obtained from the generalized Helmholtz vortex theorem using the generalized local equivalence principle. In this context, the gravitoelectromagnetic Meissner effect and, implicitly, the gravitoelectromagnetic levitation are given. (authors)

  9. Power Flow Calculation for Weakly Meshed Distribution Networks with Multiple DGs Based on Generalized Chain-table Storage Structure

    DEFF Research Database (Denmark)

    Chen, Shuheng; Hu, Weihao; Chen, Zhe

    2014-01-01

    Based on a generalized chain-table storage structure (GCTSS), a novel power flow method is proposed, which can be used to solve the power flow of weakly meshed distribution networks with multiple distributed generators (DGs). GCTSS is designed based on chain-table technology and its target is to describe the topology of radial distribution networks with a clear logic and a small memory size. The strategies of compensating the equivalent currents of break-point branches and the reactive power outputs of PV-type DGs are presented on the basis of the superposition theorem. Their formulations are simplified to the final multi-variable linear functions. Furthermore, an accelerating factor is applied to the outer-layer reactive power compensation for improving the convergence procedure. Finally, the proposed power flow method is implemented in the programming language VC++ 6.0, and numerical tests have been...
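The backward/forward sweep at the core of such radial power flow solvers (which the GCTSS topology description serves) can be sketched for a plain feeder as follows. The mesh break-point and PV-node compensation layers of the paper are omitted, and the bus ordering (parents numbered before children) is an assumption of this toy:

```python
def backward_forward_sweep(loads, branch, v_source=1.0 + 0j, tol=1e-10, max_iter=100):
    """Radial feeder power flow (per unit). loads[i]: complex power drawn at
    bus i; branch[i] = (parent, z): upstream bus and series impedance of the
    branch feeding bus i. Bus 0 is the source; parent < child is assumed."""
    n = len(loads)
    V = [v_source] * n
    for _ in range(max_iter):
        # backward sweep: injection currents, accumulated towards the source
        I = [(loads[i] / V[i]).conjugate() for i in range(n)]
        for i in range(n - 1, 0, -1):
            parent, _ = branch[i]
            I[parent] += I[i]
        # forward sweep: voltage drops from the source towards the leaves
        V_new = [v_source] + [0j] * (n - 1)
        for i in range(1, n):
            parent, z = branch[i]
            V_new[i] = V_new[parent] - z * I[i]
        if max(abs(a - b) for a, b in zip(V, V_new)) < tol:
            return V_new
        V = V_new
    return V

# 3-bus feeder: source (bus 0) -> bus 1 -> bus 2, light loads
loads = [0j, 0.10 + 0.05j, 0.05 + 0.02j]
branch = [None, (0, 0.01 + 0.02j), (1, 0.01 + 0.01j)]
V = backward_forward_sweep(loads, branch)
```

A weakly meshed network is handled in the paper by breaking loops and compensating the break-point branch currents on top of exactly this radial sweep.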

  10. Prototype performance studies of a Full Mesh ATCA-based General Purpose Data Processing Board

    CERN Document Server

    Okumura, Yasuyuki; Liu, Tiehui Ted; Yin, Hang

    2013-01-01

    High luminosity conditions at the LHC pose many unique challenges for potential silicon-based track trigger systems. One of the major challenges is data formatting, where hits from thousands of silicon modules must first be shared and organized into overlapping eta-phi trigger towers. Communication between nodes requires high bandwidth, low latency, and flexible real-time data sharing, for which a full mesh backplane is a natural solution. A custom Advanced Telecommunications Computing Architecture data processing board has been designed with the goal of creating a scalable architecture abundant in flexible, non-blocking, high-bandwidth board-to-board communication channels while keeping the design as simple as possible. We have performed the first prototype board testing, and our first attempt at designing the prototype system has proven successful. Leveraging the experience gained through designing, building and testing the prototype board system, we are in the final stages of laying out the next generatio...

  11. Influence of mobile phone traffic on base station exposure of the general public.

    Science.gov (United States)

    Joseph, Wout; Verloock, Leen

    2010-11-01

    The influence of mobile phone traffic on temporal radiofrequency exposure due to base stations during 7 d is compared for five different sites with Erlang data (representing average mobile phone traffic intensity during a period of time). The time periods of high exposure and high traffic during a day are compared and good agreement is obtained. The minimal required measurement periods to obtain accurate estimates for maximal and average long-period exposure (7 d) are determined. It is shown that these periods may be very long, indicating the necessity of new methodologies to estimate maximal and average exposure from short-period measurement data. Therefore, a new method to calculate the fields at a time instant from fields at another time instant using normalized Erlang values is proposed. This enables the estimation of maximal and average exposure during a week from short-period measurements using only Erlang data and avoids the necessity of long measurement times.
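
The proposed rescaling from one time instant to another can be illustrated in one line. The square-root law here is our assumption (received power taken proportional to traffic, so the E-field scales with the square root of the normalized Erlang ratio), not a formula quoted from the paper.

```python
import math

def field_at(erlang_target, erlang_ref, e_field_ref):
    """Estimate the E-field at one time instant from a measurement at another,
    assuming received power scales linearly with traffic (Erlang), hence the
    field scales with the square root of the traffic ratio."""
    return e_field_ref * math.sqrt(erlang_target / erlang_ref)
```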

  12. A chaos-based evolutionary algorithm for general nonlinear programming problems

    International Nuclear Information System (INIS)

    El-Shorbagy, M.A.; Mousa, A.A.; Nasr, S.M.

    2016-01-01

    In this paper we present a chaos-based evolutionary algorithm (EA) for solving nonlinear programming problems, named the chaotic genetic algorithm (CGA). CGA integrates a genetic algorithm (GA) and a chaotic local search (CLS) strategy to accelerate the optimum-seeking operation and to speed up convergence to the global solution. The integration of the global search represented by the genetic algorithm with the CLS procedure should offer the advantages of both optimization methods while offsetting their disadvantages. In this way, it is intended to enhance global convergence and to avoid getting stuck at a local solution. The inherent characteristics of chaos can enhance optimization algorithms by enabling them to escape from local solutions and accelerate convergence to the global solution. Twelve chaotic maps have been analyzed in the proposed approach. Simulation results on the CEC'2005 benchmark set show that the application of chaotic mapping may be an effective strategy to improve the performance of EAs.
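
The CLS half of such a hybrid can be sketched with the logistic map, one of the chaotic maps commonly analyzed in this setting; the shrinking-radius schedule, seed, and acceptance rule below are illustrative choices, not the paper's exact formulation.

```python
def logistic_map(x, mu=4.0):
    # Fully chaotic logistic map on (0, 1) for mu = 4.
    return mu * x * (1.0 - x)

def chaotic_local_search(f, best, lo, hi, steps=200, radius=0.5, seed=0.37):
    """Refine `best` (a list) for objective `f` by perturbing it with a
    chaotic sequence inside a shrinking, box-constrained neighbourhood."""
    x_chaos = seed
    best_val = f(best)
    for _ in range(steps):
        x_chaos = logistic_map(x_chaos)
        # Map the chaotic variable in (0, 1) to a perturbation around `best`,
        # clipped to the box [lo, hi].
        cand = [max(l, min(h, b + radius * (h - l) * (x_chaos - 0.5)))
                for b, l, h in zip(best, lo, hi)]
        val = f(cand)
        if val < best_val:          # accept only improvements
            best, best_val = cand, val
        radius *= 0.99              # shrink the search neighbourhood
    return best, best_val
```

In a full CGA this step would be interleaved with GA generations, refining the incumbent before the next round of selection and crossover.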

  13. Generalized location-based resource allocation for OFDMA cognitive radio systems

    KAUST Repository

    Ben Ghorbel, Mahdi

    2010-09-01

    Cognitive radio is one of the hot topics in emerging and future wireless communication. Cognitive users can share channels with primary users under the condition of non-interference. In order to compute this interference, the cognitive system usually uses the channel state information of the primary user, which is often impractical to obtain. However, using location information, this interference can be estimated by pathloss computation. In this paper, we introduce a low-complexity resource allocation algorithm for orthogonal frequency division multiple access (OFDMA) based cognitive radio systems, which uses relative location information between primary and secondary users to estimate the interference. The algorithm considers interference with multiple primary users having different thresholds. The simulation results show the efficiency of the proposed algorithm by comparing it with an optimal exhaustive search method. © 2010 IEEE.
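
The pathloss-based interference estimate reduces to a few lines. The log-distance model parameters (exponent, reference loss) and the per-subcarrier power cap below are illustrative assumptions, not the paper's calibrated values.

```python
import math

def pathloss_db(d_m, exponent=3.5, pl0_db=30.0, d0_m=1.0):
    # Log-distance pathloss: PL(d) = PL0 + 10*n*log10(d/d0).
    return pl0_db + 10.0 * exponent * math.log10(d_m / d0_m)

def allowed_power_dbm(threshold_dbm, d_m):
    """Maximum transmit power on a subcarrier such that the estimated
    interference at a primary user d_m metres away stays below its
    interference threshold (all quantities in dB/dBm)."""
    return threshold_dbm + pathloss_db(d_m)
```

With several primary users, the binding cap for a subcarrier would be the minimum of `allowed_power_dbm` over all of them.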

  14. A Full Mesh ATCA-based General Purpose Data Processing Board: Pulsar II

    CERN Document Server

    Olsen, J; Okumura, Y

    2014-01-01

    High luminosity conditions at the LHC pose many unique challenges for potential silicon-based track trigger systems. Among those challenges is data formatting, where hits from thousands of silicon modules must first be shared and organized into overlapping trigger towers. Other challenges exist for Level-1 track triggers, where many parallel data paths may be used for high-speed time-multiplexed data transfers. Communication between processing nodes requires high bandwidth, low latency, and flexible real-time data sharing, for which a full mesh backplane is a natural fit. A custom full-mesh-enabled ATCA board called the Pulsar II has been designed with the goal of creating a scalable architecture abundant in flexible, non-blocking, high-bandwidth board-to-board communication channels while keeping the design as simple as possible.

  15. Workplace-based assessment for general practitioners: using stakeholder perception to aid blueprinting of an assessment battery.

    Science.gov (United States)

    Murphy, Douglas J; Bruce, David; Eva, Kevin W

    2008-01-01

    The implementation of an assessment system may be facilitated by stakeholder agreement that appropriate qualities are being tested. This study investigated the extent to which stakeholders perceived 8 assessment formats (multiple-choice questions, objective structured clinical examination, video, significant event analysis, criterion audit, multi-source feedback, case analysis and patient satisfaction questionnaire) as able to assess varying qualities of doctors training in UK general practice. Educationalists, general practice trainers and registrars completed a blueprinting style of exercise to rate the extent to which each evaluation format was perceived to assess each of 8 competencies derived primarily from the General Medical Council document 'Good Medical Practice'. There were high levels of agreement among stakeholders regarding the perceived qualities tested by the proposed formats (G = 0.82-0.93). Differences were found in participants' perceptions of how well qualities were able to be assessed and in the ability of the respective formats to test each quality. Multi-source feedback (MSF) was expected to assess a wide range of qualities, whereas Probity, Health and Ability to work with colleagues were limited in terms of how well they could be tested by the proposed formats. Awareness of the perceptions of stakeholders should facilitate the development and implementation of workplace-based assessment (WPBA) systems. These data shed light on the acceptability of various formats in a way that will inform further investigation of WPBA formats' validity and feasibility, while also providing evidence on which to base educational efforts regarding the value of each format.

  16. Multi-band Microwave Antennas and Devices based on Generalized Negative-Refractive-Index Transmission Lines

    Science.gov (United States)

    Ryan, Colan Graeme Matthew

    Focused on the quad-band generalized negative-refractive-index transmission line (G-NRI-TL), this thesis presents a variety of novel printed G-NRI-TL multi-band microwave device and antenna prototypes. A dual-band coupled-line coupler, an all-pass G-NRI-TL bridged-T circuit, a dual-band metamaterial leaky-wave antenna, and a multi-band G-NRI-TL resonant antenna are all new developments resulting from this research. In addition, to continue the theme of multi-band components, negative-refractive-index transmission lines are used to create a dual-band circularly polarized transparent patch antenna and a two-element wideband decoupled meander antenna system. High coupling over two independently-specified frequency bands is the hallmark of the G-NRI-TL coupler: it is 0.35 λ0 long but achieves approximately -3 dB coupling over both bands with a maximum insertion loss of 1 dB. This represents greater design flexibility than conventional coupled-line couplers and less loss than subsequent G-NRI-TL couplers. The single-ended bridged-T G-NRI-TL offers a metamaterial unit cell with an all-pass magnitude response up to 8 GHz, while still preserving the quad-band phase response of the original circuit. It is shown how the all-pass response leads to wider bandwidths and improved matching in quad-band inverters, power dividers, and hybrid couplers. The dual-band metamaterial leaky-wave antenna presented here was the first to be reported in the literature, and it allows broadside radiation at both 2 GHz and 6 GHz without experiencing the broadside stopband common to conventional periodic antennas. Likewise, the G-NRI-TL resonant antenna is the first reported instance of such a device, achieving quad-band operation between 2.5 GHz and 5.6 GHz, with a minimum radiation efficiency of 80%. Negative-refractive-index transmission line loading is applied to two devices: an NRI-TL meander antenna achieves a measured 52% impedance bandwidth, while a square patch antenna incorporates

  17. General imaging of advanced 3D mask objects based on the fully-vectorial extended Nijboer-Zernike (ENZ) theory

    Science.gov (United States)

    van Haver, Sven; Janssen, Olaf T. A.; Braat, Joseph J. M.; Janssen, Augustus J. E. M.; Urbach, H. Paul; Pereira, Silvania F.

    2008-03-01

    In this paper we introduce a new mask imaging algorithm that is based on the source point integration method (or Abbe method). The method presented here distinguishes itself from existing methods by exploiting the through-focus imaging feature of the Extended Nijboer-Zernike (ENZ) theory of diffraction. An introduction to ENZ theory and its application to general imaging is provided, after which we describe the mask imaging scheme that can be derived from it. The remainder of the paper is devoted to illustrating the advantages of the new method over existing (Hopkins-based) methods. To this end, several simulation results are included that illustrate the advantages arising from the accurate incorporation of isolated structures, the rigorous treatment of the object (mask topography), and the fully vectorial through-focus image formation of the ENZ-based algorithm.

  18. The Traditional Model Does Not Explain Attitudes Toward Euthanasia: A Web-Based Survey of the General Public in Finland.

    Science.gov (United States)

    Terkamo-Moisio, Anja; Kvist, Tarja; Laitila, Teuvo; Kangasniemi, Mari; Ryynänen, Olli-Pekka; Pietilä, Anna-Maija

    2017-08-01

    The debate about euthanasia is ongoing in several countries, including Finland. However, there is a lack of information on current attitudes toward euthanasia among the general Finnish public. The traditional model for predicting individuals' attitudes to euthanasia is based on their age, gender, educational level, and religiosity. However, a new evaluation of religiosity is needed due to the limited operationalization of this factor in previous studies. This study explores the connections between the factors of the traditional model and attitudes toward euthanasia among the general public in the Finnish context. The Finnish public's attitudes toward euthanasia have become remarkably more positive over the last decade. Further research is needed on the factors that predict euthanasia attitudes. We suggest two different explanatory models for consideration: one that emphasizes the value of individual autonomy and another that approaches euthanasia from the perspective of fears of death or of the process of dying.

  19. Bit Error Rate Performance Analysis of a Threshold-Based Generalized Selection Combining Scheme in Nakagami Fading Channels

    Directory of Open Access Journals (Sweden)

    Kousa Maan

    2005-01-01

    Full Text Available The severity of fading on mobile communication channels calls for the combining of multiple diversity sources to achieve acceptable error rate performance. Traditional approaches perform the combining of the different diversity sources using either the conventional selective diversity combining (CSC), equal-gain combining (EGC), or maximal-ratio combining (MRC) schemes. CSC and MRC are the two extremes of the compromise between performance quality and complexity. Some researchers have proposed a generalized selection combining scheme (GSC) that combines the best branches out of the available diversity resources. In this paper, we analyze a generalized selection combining scheme based on a threshold criterion rather than a fixed-size subset of the best channels. In this scheme, only those diversity branches whose energy levels are above a specified threshold are combined. Closed-form analytical solutions for the BER performance of this scheme over Nakagami fading channels are derived. We also discuss the merits of this scheme over GSC.
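
The threshold rule itself is simple to state in code: combine (MRC-style, so branch SNRs add) every branch above the threshold. The fallback to the single best branch when no branch qualifies is our assumption for that corner case, not necessarily the paper's.

```python
def t_gsc_output_snr(branch_snrs, threshold):
    """Output SNR of threshold-based generalized selection combining:
    MRC over all branches whose instantaneous SNR meets the threshold."""
    selected = [g for g in branch_snrs if g >= threshold]
    if not selected:               # no qualifying branch: fall back to the best one
        selected = [max(branch_snrs)]
    return sum(selected)           # MRC of the selected branches adds their SNRs
```

Unlike fixed-size GSC, the number of combined branches here varies with the fading state, which is what drives the different BER behaviour analyzed in the paper.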

  20. Fuzzy stochastic generalized reliability studies on embankment systems based on first-order approximation theorem

    Directory of Open Access Journals (Sweden)

    Wang Yajun

    2008-12-01

    Full Text Available In order to address the complex uncertainties caused by the interfacing between the fuzziness and randomness of the safety problem for embankment engineering projects, and to evaluate the safety of embankment engineering projects more scientifically and reasonably, this study presents fuzzy logic modeling of the stochastic finite element method (SFEM) based on the harmonious finite element (HFE) technique using a first-order approximation theorem. Fuzzy mathematical models of safety repertories were introduced into the SFEM to analyze the stability of embankments and foundations and to describe the fuzzy failure procedure for the random safety performance function. The fuzzy models were developed with membership functions based on half-depressed gamma, half-depressed normal, and half-depressed echelon distributions. The fuzzy stochastic mathematical algorithm was used to comprehensively study the local failure mechanism of the main embankment section near Jingnan on the Yangtze River, in terms of numerical analysis of the probability integration of reliability on the random field affected by three fuzzy factors. The results show that the middle region of the embankment is the principal zone of concentrated failure due to local fractures. There is also some local shear failure on the embankment crust. This study provides a reference method for solving complex multi-uncertainty problems in engineering safety analysis.

  1. Response matrix Monte Carlo based on a general geometry local calculation for electron transport

    International Nuclear Information System (INIS)

    Ballinger, C.T.; Rathkopf, J.A.; Martin, W.R.

    1991-01-01

    A Response Matrix Monte Carlo (RMMC) method has been developed for solving electron transport problems. This method was born of the need for a reliable, computationally efficient transport method for low-energy electrons (below a few hundred keV) in all materials. Today, condensed-history methods are used, which reduce the computation time by modeling the combined effect of many collisions but fail at low energy because of the assumptions required to characterize the electron scattering. Analog Monte Carlo simulations are prohibitively expensive since electrons undergo coulombic scattering with little state change after a collision. The RMMC method attempts to combine the accuracy of an analog Monte Carlo simulation with the speed of the condensed-history methods. Like condensed history, the RMMC method uses probability distribution functions (PDFs) to describe the energy and direction of the electron after several collisions. However, unlike the condensed-history method, the PDFs are based on an analog Monte Carlo simulation over a small region. Condensed-history theories require assumptions about the electron scattering to derive the PDFs for direction and energy. Thus the RMMC method samples from PDFs which more accurately represent the electron random walk. Results show good agreement between the RMMC method and analog Monte Carlo. 13 refs., 8 figs.

  2. Safety analysis of thorium-based fuels in the General Electric Standard BWR

    International Nuclear Information System (INIS)

    Colby, M.J.; Townsend, D.B.; Kunz, C.L.

    1980-06-01

    A denatured (U-233/Th)O2 fuel assembly has been designed which is energy equivalent to, and hardware interchangeable with, a modern boiling water reactor (BWR) reference reload assembly. Relative to the reference UO2 fuel, the thorium fuel design shows better performance during normal and transient reactor operation for the BWR/6 product line and will meet or exceed current safety and licensing criteria. Power distributions are flattened and thermal operating margins are increased by the reduced steam void reactivity coefficients caused by U-233. However, a (U-233/Th)O2-fueled BWR will likely have reduced operating flexibility. A (U-235/Th)O2-fueled BWR should perform similarly to a UO2-fueled BWR under all operating conditions. A (Pu/Th)O2-fueled BWR may have reduced thermal margins and similar accident response, and be less stable than a UO2-fueled BWR. The assessment is based on comparisons of point-model and infinite-lattice predictions of various nuclear reactivity parameters, including void reactivity coefficients, Doppler reactivity coefficients, and control blade worths.

  3. Teaching Introductory Oceanography through Case Studies: Project based approach for general education students

    Science.gov (United States)

    Farnsworth, K. L.; House, M.; Hovan, S. A.

    2013-12-01

    A recent workshop sponsored by SERC-On the Cutting Edge brought together science educators from a range of schools across the country to discuss new approaches in teaching oceanography. In discussing student interest in our classes, we were struck by the fact that students are drawn to emotional or controversial topics such as whale hunting and tsunami hazard and that these kinds of topics are a great vehicle for introducing more complex concepts such as wave propagation, ocean upwelling and marine chemistry. Thus, we have developed an approach to introductory oceanography that presents students with real-world issues in the ocean sciences and requires them to explore the science behind them in order to improve overall ocean science literacy among non-majors and majors at 2 and 4 year colleges. We have designed a project-based curriculum built around topics that include, but are not limited to: tsunami hazard, whale migration, ocean fertilization, ocean territorial claims, rapid climate change, the pacific trash patch, overfishing, and ocean acidification. Each case study or project consists of three weeks of class time and is structured around three elements: 1) a media analysis; 2) the role of ocean science in addressing the issue; 3) human impact/response. Content resources range from textbook readings, popular or current print news, documentary film and television, and data available on the world wide web from a range of sources. We employ a variety of formative assessments for each case study in order to monitor student access and understanding of content and include a significant component of in-class student discussion and brainstorming guided by faculty input to develop the case study. Each study culminates in summative assessments ranging from exams to student posters to presentations, depending on the class size and environment. 
We envision this approach for a range of classroom environments including large group face-to-face instruction as well as hybrid

  4. A general SNP-based molecular barcode for Plasmodium falciparum identification and tracking

    Directory of Open Access Journals (Sweden)

    Rosen David

    2008-10-01

    Full Text Available Abstract Background Single nucleotide polymorphism (SNP) genotyping provides the means to develop a practical, rapid, inexpensive assay that will uniquely identify any Plasmodium falciparum parasite using a small amount of DNA. Such an assay could be used to distinguish recrudescence from re-infection in drug trials, to monitor the frequency and distribution of specific parasites in a patient population undergoing drug treatment or vaccine challenge, or for tracking samples and determining purity of isolates in the laboratory during culture adaptation and sub-cloning, as well as routine passage. Methods A panel of twenty-four SNP markers has been identified that exhibit a high minor allele frequency (average MAF > 35%), for which robust TaqMan genotyping assays were constructed. All SNPs were identified through whole genome sequencing, and MAF was estimated through Affymetrix array-based genotyping of a worldwide collection of parasites. These assays create a "molecular barcode" to uniquely identify a parasite genome. Results Using 24 such markers, no two parasites known to be of independent origin have yet been found to have the same allele signature. The TaqMan genotyping assays can be performed on a variety of samples including cultured parasites, frozen whole blood, or whole blood spotted onto filter paper, with a success rate > 99%. Less than 5 ng of parasite DNA is needed to complete a panel of 24 markers. The ability of this SNP panel to detect and identify parasites was compared to the standard molecular methods, MSP-1 and MSP-2 typing. Conclusion This work provides a facile field-deployable genotyping tool that can be used without special skills with standard lab equipment, and at reasonable cost, that will unambiguously identify and track P. falciparum parasites both from patient samples and in the laboratory.
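
Barcode comparison amounts to position-wise matching of the 24 allele calls. A sketch, using a hypothetical `'X'` convention for a failed TaqMan call (the barcode strings themselves are made up):

```python
def same_parasite(barcode_a, barcode_b):
    """Two 24-character SNP barcodes are called identical if they match at
    every successfully typed position; failed calls ('X') are uninformative."""
    assert len(barcode_a) == len(barcode_b) == 24
    for a, b in zip(barcode_a, barcode_b):
        if a == 'X' or b == 'X':
            continue              # missing call: skip this position
        if a != b:
            return False
    return True
```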

  5. Modeling and Analysis of a Fractional-Order Generalized Memristor-Based Chaotic System and Circuit Implementation

    Science.gov (United States)

    Yang, Ningning; Xu, Cheng; Wu, Chaojun; Jia, Rong; Liu, Chongxin

    2017-12-01

    Memristor is a nonlinear “missing circuit element” that can easily achieve chaotic oscillation. Memristor-based chaotic systems have received more and more attention. Research shows that fractional-order systems are closer to real systems. As an important parameter, the order can increase the flexibility and degrees of freedom of a system. In this paper, a fractional-order generalized memristor, which consists of a diode bridge and a parallel circuit with an equivalent unit circuit and a linear resistance, is proposed. Frequency and electrical characteristics of the fractional-order memristor are analyzed. A chain-structure circuit is used to implement the fractional-order unit circuit. Then, replacing the conventional Chua’s diode by the fractional-order generalized memristor, a fractional-order memristor-based chaotic circuit is proposed. A large amount of research work has been done to investigate the influence of the order on the dynamical behaviors of the fractional-order memristor-based chaotic circuit. As the order varies, the system enters the chaotic state from the periodic state through Hopf and period-doubling bifurcations. The chaotic state of the system has two types of attractors: single-scroll and double-scroll. The stability theory of fractional-order systems is used to determine the minimum order at which the Hopf bifurcation occurs. The influence of the initial value on the system is also analyzed. Circuit simulations are designed to verify the results of the theoretical analysis and numerical simulation.

  6. Role of the parameters involved in the plan optimization based on the generalized equivalent uniform dose and radiobiological implications

    International Nuclear Information System (INIS)

    Widesott, L; Strigari, L; Pressello, M C; Landoni, V; Benassi, M

    2008-01-01

    We investigated the role and the weight of the parameters involved in intensity modulated radiation therapy (IMRT) optimization based on the generalized equivalent uniform dose (gEUD) method, for prostate and head-and-neck plans. We systematically varied the parameters (gEUDmax and weight) involved in the gEUD-based optimization of the rectal wall and parotid glands. We found that a proper value of the weight factor, while still guaranteeing planning target volume coverage, produced similar organ-at-risk dose-volume (DV) histograms for different gEUDmax with fixed a = 1. Moreover, we formulated a simple relation that links the reference gEUDmax and the associated weight factor. As a secondary objective, we evaluated plans obtained with gEUD-based optimization against ones based on DV criteria, using normal tissue complication probability (NTCP) models. gEUD criteria seemed to improve sparing of the rectum and parotid glands with respect to DV-based optimization: the mean dose, V40 and V50 values for the rectal wall were decreased by about 10%, and the mean dose to the parotids decreased by about 20-30%. Beyond OAR sparing, we underline the halving of the OAR optimization time with the implementation of the gEUD-based cost function. Using NTCP models, we found differences between the two optimization criteria for the parotid glands, but not for the rectal wall.
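
The quantity being optimized is the standard generalized EUD, gEUD = ((1/N) Σ d_i^a)^(1/a), which reduces to the mean dose at a = 1 (the value fixed for the rectal wall above) and approaches the maximum dose for large a. A minimal implementation over an illustrative dose list:

```python
def geud(doses, a):
    """Generalized equivalent uniform dose of a dose distribution `doses`
    (one value per voxel or dose bin) with tissue parameter `a`."""
    n = len(doses)
    return (sum(d ** a for d in doses) / n) ** (1.0 / a)
```

In a gEUD-based cost function, terms like `max(0, geud(doses, a) - geud_max)` (weighted) would penalize an organ at risk exceeding its reference gEUDmax.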

  7. Impact of Dry Eye Syndrome on Vision-Related Quality of Life in a Non-Clinic-Based General Population

    Directory of Open Access Journals (Sweden)

    Le Qihua

    2012-07-01

    Full Text Available Abstract Background Dry eye syndrome (DES) is a common ocular disorder occurring in the general population. The purpose of this study is to evaluate the impact of DES on vision-related quality of life (QoL) in a non-clinic-based general population. Methods This population-based cross-sectional study enrolled subjects older than 40 years who took part in an epidemiological study on dry eye in Sanle Community, Shanghai. Apart from the collection of sociodemographics, dry eye symptoms, and other clinical data, a Chinese version of the 25-item National Eye Institute Visual Functioning Questionnaire (NEI VFQ-25) was administered to all subjects. Comparisons of the NEI VFQ-25 subscale item scores and composite score were made among subgroups divided according to the presence of dry eye symptoms or signs. Multivariate regression analysis was performed to investigate the relationship between the clinical variables and the VFQ-25 composite score. Results A total of 229 participants were enrolled in the study, with an average age of (60.7 ± 10.1) years. The majority of the participants were female (59.8%, 137/229). The total DES symptom scores (TDSS) in subjects either with definite DES or with only dry eye symptoms were significantly higher (F = 60.331, P < ...). Conclusions The symptoms of dry eye are associated with an adverse impact on vision-related QoL in a non-clinic-based general population, which is mainly manifested as more ocular pain and discomfort, as well as impaired mental health. Apart from clinical examination, it is also important to refer to subjective symptoms and QoL scores when assessing the severity of DES.

  8. PCR-based cDNA library construction: general cDNA libraries at the level of a few cells.

    OpenAIRE

    Belyavsky, A; Vinogradova, T; Rajewsky, K

    1989-01-01

    A procedure for the construction of general cDNA libraries is described which is based on the amplification of total cDNA in vitro. The first cDNA strand is synthesized from total RNA using an oligo(dT)-containing primer. After oligo(dG) tailing, the total cDNA is amplified by PCR using two primers complementary to the oligo(dA) and oligo(dG) ends of the cDNA. For insertion of the cDNA into a vector, controlled trimming of the 3' ends of the cDNA by Klenow enzyme was used. Starting from 10 J558L ...

  9. General general game AI

    OpenAIRE

    Togelius, Julian; Yannakakis, Georgios N.; 2016 IEEE Conference on Computational Intelligence and Games (CIG)

    2016-01-01

    Arguably the grand goal of artificial intelligence research is to produce machines with general intelligence: the capacity to solve multiple problems, not just one. Artificial intelligence (AI) has investigated the general intelligence capacity of machines within the domain of games more than any other domain given the ideal properties of games for that purpose: controlled yet interesting and computationally hard problems. This line of research, however, has so far focuse...

  10. A cubic map chaos criterion theorem with applications in generalized synchronization based pseudorandom number generator and image encryption

    Energy Technology Data Exchange (ETDEWEB)

    Yang, Xiuping, E-mail: yangxiuping-1990@163.com; Min, Lequan, E-mail: minlequan@sina.com; Wang, Xue, E-mail: wangxue-20130818@163.com [Schools of Mathematics and Physics, University of Science and Technology Beijing, Beijing 100083 (China)

    2015-05-15

    This paper sets up a chaos criterion theorem for a kind of cubic polynomial discrete map. Using this theorem, Zhou-Song's chaos criterion theorem on quadratic polynomial discrete maps, and a generalized synchronization (GS) theorem, an eight-dimensional chaotic GS system is constructed. Numerical simulations have been carried out to verify the effectiveness of the theoretical results. The chaotic GS system is used to design a chaos-based pseudorandom number generator (CPRNG). The FIPS 140-2 test suite and a generalized FIPS 140-2 test suite are used to test the randomness of two sets of 1000 key streams, each consisting of 20 000 bits generated by the CPRNG. The results show that 99.9%/98.5% of the key streams pass the FIPS 140-2/generalized FIPS 140-2 tests. Numerical simulations show that different keystreams have an average of 50.001% of their codes in common. The key space of the CPRNG is larger than 2^1345. As an application of the CPRNG, this study gives an image encryption example. Experimental results show that the linear coefficients between the plaintext and the ciphertext, and between the plaintext and the ciphertexts decrypted via the 100 key streams with perturbed keys, are less than 0.00428. This suggests that texts decrypted via keystreams generated with perturbed keys of the CPRNG are almost completely independent of the original image text, and brute-force attacks are needed to break the cryptographic system.
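
As a toy stand-in for the paper's eight-dimensional GS system, a single cubic chaotic map (the Chebyshev polynomial T3, which belongs to the cubic polynomial family the criterion theorem addresses) can already drive a bit generator. The bit-extraction rule and burn-in below are illustrative; nothing here reproduces the actual CPRNG's key schedule or its FIPS 140-2 statistics.

```python
def cubic_map(x):
    # Chebyshev cubic T3(x) = 4x^3 - 3x, chaotic on [-1, 1].
    return 4.0 * x ** 3 - 3.0 * x

def cprng_bits(seed, n_bits, burn_in=100):
    """Generate n_bits pseudorandom bits by iterating the cubic map and
    harvesting the sign of the state. `seed` plays the role of the key."""
    x = seed
    for _ in range(burn_in):       # discard the transient
        x = cubic_map(x)
    bits = []
    for _ in range(n_bits):
        x = cubic_map(x)
        bits.append(1 if x > 0.0 else 0)
    return bits
```

A real design would, as in the paper, post-process such raw bits and validate them against a statistical test suite before any cryptographic use.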

  11. A cubic map chaos criterion theorem with applications in generalized synchronization based pseudorandom number generator and image encryption.

    Science.gov (United States)

    Yang, Xiuping; Min, Lequan; Wang, Xue

    2015-05-01

    This paper sets up a chaos criterion theorem for a kind of cubic polynomial discrete map. Using this theorem, Zhou-Song's chaos criterion theorem on quadratic polynomial discrete maps, and a generalized synchronization (GS) theorem, an eight-dimensional chaotic GS system is constructed. Numerical simulations have been carried out to verify the effectiveness of the theoretical results. The chaotic GS system is used to design a chaos-based pseudorandom number generator (CPRNG). The FIPS 140-2 test suite and a generalized FIPS 140-2 test suite are used to test the randomness of two sets of 1000 key streams, each consisting of 20 000 bits generated by the CPRNG. The results show that 99.9%/98.5% of the key streams pass the FIPS 140-2/generalized FIPS 140-2 tests. Numerical simulations show that different keystreams have an average of 50.001% of their codes in common. The key space of the CPRNG is larger than 2^1345. As an application of the CPRNG, this study gives an image encryption example. Experimental results show that the linear coefficients between the plaintext and the ciphertext, and between the plaintext and the ciphertexts decrypted via the 100 key streams with perturbed keys, are less than 0.00428. This suggests that texts decrypted via keystreams generated with perturbed keys of the CPRNG are almost completely independent of the original image text, and brute-force attacks are needed to break the cryptographic system.

  12. Languages, communication potential and generalized trust in Sub-Saharan Africa: evidence based on the Afrobarometer Survey.

    Science.gov (United States)

    Buzasi, Katalin

    2015-01-01

The goal of this study is to investigate whether speaking languages other than the home language promotes generalized trust in Sub-Saharan Africa. Based on various psychological and economic theories, a simple model is provided to illustrate how languages might shape trust through various channels. Relying on data from the Afrobarometer Project, which provides information on home and additional languages, the Index of Communication Potential (ICP) is introduced to capture the linguistic situation in the 20 sample countries. The ICP, which can be computed at any desired level of aggregation, refers to the probability that an individual can communicate with a randomly selected person in the society based on common languages. The estimated two-level hierarchical models show that although individual-level communication potential does not seem to affect trust formation, living in an area with higher average communication potential increases the chance of exhibiting higher trust toward unknown people. Copyright © 2014 Elsevier Inc. All rights reserved.
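The ICP as described — the probability of sharing at least one language with a randomly selected other person — can be sketched directly. The toy repertoires below are invented for illustration, not Afrobarometer data, and the exact estimator used in the paper may weight or aggregate differently.

```python
# Sketch of the Index of Communication Potential (ICP): for each respondent,
# the share of other respondents with whom at least one language is shared.

def icp(repertoires):
    """repertoires: list of sets of languages, one set per individual.
    Returns one ICP value per individual."""
    n = len(repertoires)
    out = []
    for i, ri in enumerate(repertoires):
        shared = sum(1 for j, rj in enumerate(repertoires)
                     if j != i and ri & rj)
        out.append(shared / (n - 1))
    return out

people = [{"swahili"}, {"swahili", "english"}, {"english"}, {"luo", "english"}]
print([round(v, 2) for v in icp(people)])   # → [0.33, 1.0, 0.67, 0.67]
```

Averaging these individual values over a region gives the area-level communication potential used in the hierarchical models.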

  13. Association of general psychological factors with frequent attendance in primary care: a population-based cross-sectional observational study.

    Science.gov (United States)

    Hajek, André; Bock, Jens-Oliver; König, Hans-Helmut

    2017-03-24

Whereas several studies have examined the association between frequent attendance in primary care and illness-specific psychological factors, little is known about the relation between frequent attendance and general psychological factors. Thus, the aim of this study was to investigate the association between being a frequent attender in primary care and general psychological factors. Data were used from a large, population-based sample of community-dwelling individuals aged 40 and above in Germany in 2014 (n = 7,446). Positive and negative affect, life satisfaction, optimism, self-esteem, self-efficacy, self-regulation, and perceived stress were included as general psychological factors. The number of self-reported GP visits in the past twelve months was used to quantify frequency of attendance; individuals with more than 9 visits (the highest decile) were defined as frequent attenders. Multiple logistic regressions showed that being a frequent attender was associated with lower life satisfaction [OR: 0.79 (0.70-0.89)], higher negative affect [OR: 1.38 (1.17-1.62)], lower self-efficacy [OR: 0.74 (0.63-0.86)], lower self-esteem [OR: 0.65 (0.54-0.79)], lower self-regulation [OR: 0.74 (0.60-0.91)], and higher perceived stress [OR: 1.46 (1.28-1.66)], after adjusting for sociodemographic factors, morbidity and lifestyle factors. However, frequent attendance was not significantly associated with positive affect or optimism. The present study highlights the association between general psychological factors and frequent attendance. As frequent GP visits generate high health care costs and are potentially associated with increased referrals and use of secondary health care services, this knowledge might help to address these individuals with high needs.
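The study's outcome definition (frequent attender = more than 9 GP visits, the top decile) and the odds-ratio logic can be sketched as below. The visit counts and the binary "high stress" indicator are invented, and only an unadjusted 2×2-table odds ratio is computed; the paper used multiple logistic regression with covariate adjustment.

```python
# Sketch: flag frequent attenders (>9 visits/year, per the study's top-decile
# cutoff) and compute an unadjusted odds ratio against a binary factor.

def frequent_attender_flags(visits, threshold=9):
    """The study's definition: more than 9 GP visits in twelve months."""
    return [1 if v > threshold else 0 for v in visits]

def odds_ratio(exposed, outcome):
    """Unadjusted OR from two parallel 0/1 lists."""
    a = sum(1 for e, o in zip(exposed, outcome) if e and o)          # exposed case
    b = sum(1 for e, o in zip(exposed, outcome) if e and not o)      # exposed non-case
    c = sum(1 for e, o in zip(exposed, outcome) if not e and o)      # unexposed case
    d = sum(1 for e, o in zip(exposed, outcome) if not e and not o)  # unexposed non-case
    return (a * d) / (b * c)

visits = [2, 1, 12, 3, 0, 15, 4, 2, 11, 3]   # toy data, not an exact decile
stress = [0, 1, 1, 0, 0, 1, 1, 0, 0, 1]
fa = frequent_attender_flags(visits)
print(fa, round(odds_ratio(stress, fa), 2))  # → [0, 0, 1, 0, 0, 1, 0, 0, 1, 0] 2.67
```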

  14. Barriers to the implementation and uptake of simulation-based training programs in general surgery: a multinational qualitative study.

    Science.gov (United States)

    Hosny, Shady G; Johnston, Maximilian J; Pucher, Philip H; Erridge, Simon; Darzi, Ara

    2017-12-01

Despite evidence demonstrating the advantages of simulation training in general surgery, it is not widely integrated into surgical training programs worldwide. The aim of this study was to identify barriers and facilitators to the implementation and uptake of surgical simulation training programs. A multinational qualitative study was conducted using semi-structured interviews of general surgical residents and experts. Each interview was audio recorded, transcribed verbatim, and underwent emergent theme analysis. All data were anonymized and the results pooled. A total of 37 individuals participated in the study: 17 experts (Program Directors and Surgical Attendings with an interest in surgical education) and 20 residents drawn from the United States, Canada, the United Kingdom, France, and Japan. Barriers to simulation-based training were identified under key themes including financial cost, access, and translational benefit. Participants described cost (89%) and access (76%) as the principal barriers to uptake. Common facilitators included a mandatory requirement to complete simulation training (78%) and ongoing assessment of skills (78%). Participants felt that simulation training could improve patient outcomes (76%) but identified a lack of evidence to demonstrate benefit (38%). There was a consensus that simulation training has not been widely implemented (70%). There are multiple barriers to the implementation of surgical simulation training programs; however, there is agreement that these programs could potentially improve patient outcomes. Identifying these barriers enables the targeted use of facilitators to deliver simulation training programs. Copyright © 2017 Elsevier Inc. All rights reserved.

  15. A Note on the Large Sample Properties of Estimators Based on Generalized Linear Models for Correlated Pseudo-observations

    DEFF Research Database (Denmark)

    Jacobsen, Martin; Martinussen, Torben

    2016-01-01

Pseudo-values have proven very useful in censored data analysis in complex settings such as multi-state models. They were originally suggested by Andersen et al. (Biometrika, 90, 2003, 335), who also suggested estimating standard errors using classical generalized estimating equation results. These results were studied more formally in Graw et al. (Lifetime Data Anal., 15, 2009, 241), which derived some key results based on a second-order von Mises expansion. However, results concerning large sample properties of estimates based on regression models for pseudo-values still seem unclear. In this paper, we study these large sample properties in the simple setting of survival probabilities and show that the estimating function can be written as a U-statistic of second order, giving rise to an additional term that does not vanish asymptotically. We further show that previously advocated standard error...
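The pseudo-observations the paper studies are, in the survival-probability setting, jackknife quantities θ_i = n·Ŝ(t) − (n−1)·Ŝ₋ᵢ(t) built on the Kaplan-Meier estimator. A minimal sketch with toy data (the times and event indicators are invented; the paper's contribution is the asymptotic theory, not this computation):

```python
# Sketch: jackknife pseudo-observations for a survival probability S(t),
# theta_i = n*S_hat(t) - (n-1)*S_hat_{-i}(t), on a minimal Kaplan-Meier fit.

def km_survival(times, events, t):
    """Kaplan-Meier estimate of S(t); events: 1 = event, 0 = censored."""
    s = 1.0
    for u in sorted(set(tm for tm, e in zip(times, events) if e and tm <= t)):
        at_risk = sum(1 for tm in times if tm >= u)
        deaths = sum(1 for tm, e in zip(times, events) if tm == u and e)
        s *= 1.0 - deaths / at_risk
    return s

def pseudo_values(times, events, t):
    n = len(times)
    full = km_survival(times, events, t)
    out = []
    for i in range(n):
        loo = km_survival(times[:i] + times[i+1:], events[:i] + events[i+1:], t)
        out.append(n * full - (n - 1) * loo)   # jackknife pseudo-observation
    return out

times  = [2, 3, 5, 7, 8, 11]
events = [1, 0, 1, 1, 0, 1]
print([round(p, 3) for p in pseudo_values(times, events, t=6)])
# → [0.0, 0.75, -0.25, 1.083, 1.083, 1.083]
```

Regressing these pseudo-values on covariates via generalized estimating equations is the procedure whose large-sample behavior the paper analyzes.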

  16. Forest height estimation from mountain forest areas using general model-based decomposition for polarimetric interferometric synthetic aperture radar images

    Science.gov (United States)

    Minh, Nghia Pham; Zou, Bin; Cai, Hongjun; Wang, Chengyi

    2014-01-01

The estimation of forest parameters over mountain forest areas using polarimetric interferometric synthetic aperture radar (PolInSAR) images is of great interest in remote sensing applications. For mountain forest areas, scattering mechanisms are strongly affected by variations in the ground topography. Most previous studies modeling the microwave backscattering signatures of forest areas have been carried out over relatively flat terrain. Therefore, a new algorithm for forest height estimation over mountain forest areas using a general model-based decomposition (GMBD) for PolInSAR images is proposed. This algorithm enables the retrieval of not only the forest parameters but also the magnitude associated with each scattering mechanism. In addition, general double- and single-bounce scattering models are proposed to fit the cross-polarization and off-diagonal terms by separating their independent orientation angles, which remained unachieved in previous model-based decompositions. The efficiency of the proposed approach is demonstrated with simulated data from the PolSARProSim software and ALOS-PALSAR spaceborne PolInSAR datasets over the Kalimantan area, Indonesia. Experimental results indicate that forest height can be effectively estimated by GMBD.

  17. "iBIM"--internet-based interactive modules: an easy and interesting learning tool for general surgery residents.

    Science.gov (United States)

    Azer, Nader; Shi, Xinzhe; de Gara, Chris; Karmali, Shahzeer; Birch, Daniel W

    2014-04-01

The increased use of information technology supports a resident-centred educational approach that promotes autonomy, flexibility and time management, and helps residents to assess their competence, promoting self-awareness. We established a web-based e-learning tool to introduce general surgery residents to bariatric surgery and evaluated them to determine the most appropriate implementation strategy for Internet-based interactive modules (iBIM) in surgical teaching. Usernames and passwords were assigned to general surgery residents at the University of Alberta. They were directed to the Obesity101 website and prompted to complete a multiple-choice precourse test. Afterwards, they were able to access the interactive modules. Residents could review the course material as often as they wanted before completing a multiple-choice postcourse test and an exit survey. We used paired t tests to assess the difference between pre- and postcourse scores. Of the 34 residents who agreed to participate, 12 completed the project (35.3%). For these 12 residents, the mean precourse score was 50 ± 17.3 and the mean postcourse score was 67 ± 14 (p = 0.020). Most residents who participated in this study recommended using the iBIM as a study tool for bariatric surgery. Course evaluation scores suggest this novel approach was successful in transferring knowledge to surgical trainees. Further development of this tool and assessment of implementation strategies will determine how iBIM in bariatric surgery may be integrated into the curriculum.
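The paired t test used to compare pre- and postcourse scores can be sketched as follows. The six score pairs are invented (the study had n = 12 and reported p = 0.020); only the t statistic and degrees of freedom are computed, since a p-value needs a t-distribution CDF not in the standard library.

```python
# Sketch: paired t test on pre/post scores for the same residents.
import math

def paired_t(pre, post):
    """Return (t statistic, degrees of freedom) for paired samples."""
    n = len(pre)
    d = [b - a for a, b in zip(pre, post)]          # per-resident change
    mean_d = sum(d) / n
    var_d = sum((x - mean_d) ** 2 for x in d) / (n - 1)
    t = mean_d / math.sqrt(var_d / n)
    return t, n - 1

pre  = [40, 55, 30, 60, 45, 50]
post = [55, 70, 50, 65, 60, 72]
t, df = paired_t(pre, post)
print(round(t, 2), df)   # → 6.38 5
```

The t statistic would then be compared against the t distribution with df degrees of freedom to obtain the p-value reported in the abstract.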

  18. Computational Fluid Dynamics Modeling of Steam Condensation on Nuclear Containment Wall Surfaces Based on Semiempirical Generalized Correlations

    Directory of Open Access Journals (Sweden)

    Pavan K. Sharma

    2012-01-01

In water-cooled nuclear power reactors, significant quantities of steam and hydrogen could be produced within the primary containment following postulated design basis accidents (DBA) or beyond design basis accidents (BDBA). For accurate calculation of the temperature/pressure rise and of hydrogen transport in the nuclear reactor containment under such scenarios, a wall condensation heat transfer coefficient (HTC) is used. In the present work, the adaptation of a commercial CFD code, with the implementation of models for steam condensation on wall surfaces in the presence of noncondensable gases, is explained. Steam condensation has been modeled using the empirical average HTC, which was originally developed for "lumped-parameter" (volume-averaged) modeling of steam condensation in the presence of noncondensable gases. The present paper suggests a generalized HTC based on curve fitting of most of the reported semiempirical condensation models, which are valid for specific wall conditions. The present methodology has been validated against limited reported experimental data from the COPAIN experimental facility. This is the first step towards a CFD-based generalized analysis procedure for condensation modeling applicable to containment wall surfaces, which is being evolved further for specific wall surfaces within the multicompartment containment atmosphere.

  19. Improving incidence estimation in practice-based sentinel surveillance networks using spatial variation in general practitioner density

    Directory of Open Access Journals (Sweden)

    Cécile Souty

    2016-11-01

Background: In surveillance networks based on the voluntary participation of health-care professionals, there is little choice regarding the selection of participants' characteristics. External information about participants, for example local physician density, can help reduce bias in incidence estimates reported by the surveillance network. Methods: There is an inverse association between the number of reported influenza-like illness (ILI) cases and local general practitioner (GP) density. We formulated and compared estimates of ILI incidence using this relationship. To compare estimates, we simulated epidemics using a spatially explicit disease model and their observation by surveillance networks with different characteristics: random, maximum coverage, largest cities, etc. Results: In the French practice-based surveillance network (the "Sentinelles" network), GPs reported 3.6% (95% CI [3; 4]) fewer ILI cases as local GP density increased by 1 GP per 10,000 inhabitants. Incidence estimates varied markedly depending on the scenario for participant selection in surveillance, yet accounting for the GP density of participants reduced this bias. Applied to data from the Sentinelles network, changes in overall incidence ranged between 1.6% and 9.9%. Conclusions: Local GP density is a simple measure that provides a way to reduce bias in estimating disease incidence in general practice. It can contribute to improving disease monitoring when it is not possible to choose the characteristics of participants.
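The correction idea — rescaling a GP's reported counts using the estimated decline in reporting per unit of local GP density — can be sketched as below. The multiplicative form, the reference density, and the example numbers are assumptions for illustration; only the 3.6%-per-GP-per-10,000 figure comes from the abstract.

```python
# Sketch: adjust reported case counts for local GP density, using the
# abstract's estimate that reports fall by 3.6% per additional GP
# per 10,000 inhabitants.

def adjust_cases(reported, gp_density, ref_density=10.0, decline=0.036):
    """Rescale one GP's reported cases to a common reference density
    (assumed multiplicative model)."""
    factor = (1.0 - decline) ** (gp_density - ref_density)
    return reported / factor

# A GP in a dense area (12 GPs per 10,000) reports 50 cases; at the
# reference density this corresponds to more cases.
print(round(adjust_cases(50, 12.0), 1))   # → 53.8
```

Summing such adjusted counts across participants, instead of raw counts, is the kind of density-aware estimate the paper compares against the naive one.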

  20. A High-Precision Time-Frequency Entropy Based on Synchrosqueezing Generalized S-Transform Applied in Reservoir Detection

    Directory of Open Access Journals (Sweden)

    Hui Chen

    2018-06-01

Based on the fact that high frequencies are abnormally attenuated when seismic signals travel across reservoirs, a new method, named high-precision time-frequency entropy based on the synchrosqueezing generalized S-transform, is proposed for hydrocarbon reservoir detection in this paper. First, the proposed method obtains time-frequency spectra by the synchrosqueezing generalized S-transform (SSGST), which are concentrated around the real instantaneous frequency of the signals. Then, considering the characteristics and effects of noise, we impose a frequency constraint condition when calculating the entropy from the time-frequency spectra. A synthetic example verifies that the entropy is abnormally high where seismic signals are abnormally attenuated. Moreover, compared with the GST time-frequency entropy and the original SSGST time-frequency entropy on field data, the results of the proposed method show higher precision. The proposed method can not only accurately detect and locate hydrocarbon reservoirs, but also effectively suppress the impact of random noise.
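The core quantity — Shannon entropy of one time-frequency spectrum column, restricted to a frequency band (the "frequency constraint") — can be sketched as follows. The magnitude column and band limits are toy values, not an SSGST output.

```python
# Sketch: band-limited Shannon entropy of a time-frequency spectrum column.
import math

def tf_entropy(column, freqs, f_lo, f_hi):
    """Normalize |S(f)| within [f_lo, f_hi] and return its Shannon entropy."""
    band = [m for f, m in zip(freqs, column) if f_lo <= f <= f_hi]
    total = sum(band)
    p = [m / total for m in band if m > 0]
    return -sum(pi * math.log(pi) for pi in p)

freqs = [0, 10, 20, 30, 40, 50, 60, 70]                  # Hz bins
col = [0.1, 0.2, 1.5, 2.0, 1.8, 0.4, 0.1, 0.05]          # magnitudes at one time
print(round(tf_entropy(col, freqs, 10, 60), 3))          # → 1.436
```

Computed along the time axis, anomalously high entropy values flag the attenuation zones associated with reservoirs.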

  1. Accurate and computationally efficient prediction of thermochemical properties of biomolecules using the generalized connectivity-based hierarchy.

    Science.gov (United States)

    Sengupta, Arkajyoti; Ramabhadran, Raghunath O; Raghavachari, Krishnan

    2014-08-14

In this study we have used the connectivity-based hierarchy (CBH) method to derive accurate heats of formation for a range of biomolecules: 18 amino acids and 10 barbituric acid/uracil derivatives. The hierarchy is based on the connectivity of the different atoms in a large molecule. It results in error-cancellation reaction schemes that are automated, general, and readily applicable to a broad range of organic molecules and biomolecules. Herein, we first locate the stable conformational and tautomeric forms of these biomolecules using an accurate level of theory (viz. CCSD(T)/6-311++G(3df,2p)). Subsequently, the heats of formation of the amino acids are evaluated using the CBH-1 and CBH-2 schemes with routinely employed density functionals or wave function-based methods. The heats of formation obtained herein using modest levels of theory are in very good agreement with those obtained using the more expensive W1-F12 and W2-F12 methods for the amino acids and with G3 results for the barbituric acid derivatives. Overall, the present study (a) highlights the small effect of including multiple conformers in determining the heats of formation of biomolecules and (b) in concurrence with previous CBH studies, shows that use of the more effective error-cancelling isoatomic scheme (CBH-2) results in more accurate heats of formation with modestly sized basis sets along with common density functionals or wave function-based methods.

  2. Using Web-Based Questionnaires and Obstetric Records to Assess General Health Characteristics Among Pregnant Women: A Validation Study.

    Science.gov (United States)

    van Gelder, Marleen M H J; Schouten, Naomi P E; Merkus, Peter J F M; Verhaak, Chris M; Roeleveld, Nel; Roukema, Jolt

    2015-06-16

    Self-reported medical history information is included in many studies. However, data on the validity of Web-based questionnaires assessing medical history are scarce. If proven to be valid, Web-based questionnaires may provide researchers with an efficient means to collect data on this parameter in large populations. The aim of this study was to assess the validity of a Web-based questionnaire on chronic medical conditions, allergies, and blood pressure readings against obstetric records and data from general practitioners. Self-reported questionnaire data were compared with obstetric records for 519 pregnant women participating in the Dutch PRegnancy and Infant DEvelopment (PRIDE) Study from July 2011 through November 2012. These women completed Web-based questionnaires around their first prenatal care visit and in gestational weeks 17 and 34. We calculated kappa statistics (κ) and the observed proportions of positive and negative agreement between the baseline questionnaire and obstetric records for chronic conditions and allergies. In case of inconsistencies between these 2 data sources, medical records from the woman's general practitioner were consulted as the reference standard. For systolic and diastolic blood pressure, intraclass correlation coefficients (ICCs) were calculated for multiple data points. Agreement between the baseline questionnaire and the obstetric record was substantial (κ=.61) for any chronic condition and moderate for any allergy (κ=.51). For specific conditions, we found high observed proportions of negative agreement (range 0.88-1.00) and on average moderate observed proportions of positive agreement with a wide range (range 0.19-0.90). Using the reference standard, the sensitivity of the Web-based questionnaire for chronic conditions and allergies was comparable to or even better than the sensitivity of the obstetric records, in particular for migraine (0.90 vs 0.40, P=.02), asthma (0.86 vs 0.61, P=.04), inhalation allergies (0
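Cohen's kappa, the agreement statistic reported above (e.g. κ = .61 for "any chronic condition"), can be sketched for a binary condition as follows. The paired questionnaire/record ratings are invented for illustration.

```python
# Sketch: Cohen's kappa for questionnaire-vs-record agreement on one
# binary condition (1 = condition reported/recorded, 0 = not).

def cohens_kappa(x, y):
    """Chance-corrected agreement between two binary raters."""
    n = len(x)
    po = sum(1 for a, b in zip(x, y) if a == b) / n        # observed agreement
    px1, py1 = sum(x) / n, sum(y) / n
    pe = px1 * py1 + (1 - px1) * (1 - py1)                 # chance agreement
    return (po - pe) / (1 - pe)

questionnaire = [1, 0, 1, 1, 0, 0, 1, 0, 0, 0]
record        = [1, 0, 0, 1, 0, 0, 1, 0, 1, 0]
print(round(cohens_kappa(questionnaire, record), 3))       # → 0.583
```

Values around 0.41-0.60 are conventionally read as moderate agreement and 0.61-0.80 as substantial, matching the abstract's wording.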

  3. Quality and Safety of General Anesthesia with Propofol and Sevoflurane in Children Aged 1-14 Based on Laboratory Parameters.

    Science.gov (United States)

    Vanis-Vatrenjak, Selma; Mesic, Amira; Abdagic, Ines; Mujezinovic, Djenita; Zvizdic, Zlatan

    2015-08-01

    Knowledge of anatomic, physiological, biochemical and physical characteristics of children of all age groups, the existing illness and possible pathological response of the organism to the existing situation, require a pediatric anesthesiologist to participate in the preparation of a child for surgical treatment, to choose the best anesthesia technique and medications, and manipulative techniques to enable the scheduled surgical treatment with minimum anesthesia risks. The aim of this clinical study was to prove reliability and quality of propofol or sevoflurane general anesthesia in children in the age group of 1-14 years from the ASA I group and in the elective surgical treatments in duration of 60 minutes, based on preoperative and postoperative levels of laboratory findings (transaminases, blood sugar, urea and creatinine). the study included 160 patients randomized in two groups based on different approaches: total intravenous anesthesia was used for the propofol group (n=80) (TIVA) and the inhalation technique was used for the sevoflurane group (n=80). statistical evaluation of the obtained results indicates stability of laboratory findings in the immediate postoperative course (after 24 hours) in respect to the preoperative period. Based on the Mann Whitney test (P), preoperative and postoperative blood sugar levels in the sevoflurane vs. propofol group were P=0.152 vs. 0.021; creatinine levels P=0.113 vs. 0.325; urea levels P= 0.016 vs. 0.900; AST levels P=0,031 vs. 0,268 and ALT levels P=0.021 vs. 0.058. Level of significance was Psecurity and quality of general anesthesia in children age group 1-14 years, from the ASA I group. All analyzed laboratory levels in the postoperative course remained in their referential values in both groups of participants.

  4. Test-retest reliability of computer-based video analysis of general movements in healthy term-born infants.

    Science.gov (United States)

    Valle, Susanne Collier; Støen, Ragnhild; Sæther, Rannei; Jensenius, Alexander Refsum; Adde, Lars

    2015-10-01

A computer-based video analysis has recently been presented for quantitative assessment of general movements (GMs). This method's test-retest reliability, however, has not yet been evaluated. The aim of the current study was to evaluate the test-retest reliability of computer-based video analysis of GMs, and to explore the association between computer-based video analysis and the temporal organization of fidgety movements (FMs). In this test-retest reliability study, 75 healthy, term-born infants were recorded twice on the same day during the FMs period using a standardized video set-up. The computer-based movement variables "quantity of motion mean" (Qmean), "quantity of motion standard deviation" (QSD) and "centroid of motion standard deviation" (CSD) were analyzed, reflecting the amount of motion (Qmean, QSD) and the variability of the spatial center of motion (CSD) of the infant. In addition, the association between the variable CSD and the temporal organization of FMs was explored. Intraclass correlation coefficients (ICC 1.1 and ICC 3.1) were calculated to assess test-retest reliability. The ICC values for the variables CSD, Qmean and QSD were 0.80, 0.80 and 0.86 for ICC (1.1), respectively, and 0.80, 0.86 and 0.90 for ICC (3.1), respectively. CSD values were significantly lower in the recordings with continual FMs than in the recordings with intermittent FMs. The study demonstrated test-retest reliability of computer-based video analysis of GMs and a significant association between the computer-based video analysis and the temporal organization of FMs. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
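The ICC(1,1) statistic used above can be sketched from a one-way ANOVA on the two recordings per infant. The measurement pairs below are toy values, not the study's Qmean/QSD/CSD data.

```python
# Sketch: ICC(1,1) for test-retest data, two sessions (k = 2) per subject:
# ICC = (MSB - MSW) / (MSB + (k - 1) * MSW), from a one-way random-effects ANOVA.

def icc_1_1(pairs):
    """pairs: list of (session1, session2) measurements per subject."""
    n, k = len(pairs), 2
    grand = sum(a + b for a, b in pairs) / (n * k)
    msb = k * sum(((a + b) / k - grand) ** 2 for a, b in pairs) / (n - 1)
    msw = sum((a - (a + b) / k) ** 2 + (b - (a + b) / k) ** 2
              for a, b in pairs) / (n * (k - 1))
    return (msb - msw) / (msb + (k - 1) * msw)

data = [(10.2, 10.5), (8.1, 8.4), (12.0, 11.6), (9.3, 9.9), (11.1, 10.8)]
print(round(icc_1_1(data), 3))   # → 0.958
```

Values of 0.80-0.90, as reported for CSD, Qmean and QSD, indicate that between-infant variance dominates the session-to-session noise.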

  5. General Practitioners' Attitudes Toward a Web-Based Mental Health Service for Adolescents: Implications for Service Design and Delivery.

    Science.gov (United States)

    Subotic-Kerry, Mirjana; King, Catherine; O'Moore, Kathleen; Achilles, Melinda; O'Dea, Bridianne

    2018-03-23

Anxiety disorders and depression are prevalent among youth. General practitioners (GPs) are often the first point of professional contact for treating health problems in young people. A Web-based mental health service delivered in partnership with schools may facilitate increased access to psychological care among adolescents. However, for such a model to be implemented successfully, GPs' views need to be measured. This study aimed to examine the needs and attitudes of GPs toward a Web-based mental health service for adolescents, and to identify the factors that may affect the provision of this type of service and the likelihood of integration. Findings will inform the content and overall service design. GPs were interviewed individually about the proposed Web-based service. Qualitative analysis of the transcripts was performed using thematic coding. A short follow-up questionnaire was delivered to assess background characteristics, level of acceptability, and likelihood of integration of the Web-based mental health service. A total of 13 GPs participated in the interviews and 11 completed the follow-up online questionnaire. Findings suggest strong support for the proposed Web-based mental health service. A wide range of factors were found to influence the likelihood of GPs integrating a Web-based service into their clinical practice. Coordinated collaboration with parents, students, school counselors, and other mental health care professionals was considered important by nearly all GPs. Confidence in Web-based care, noncompliance of adolescents and GPs, accessibility, privacy, and confidentiality were identified as potential barriers to adopting the proposed Web-based service. GPs were open to a proposed Web-based service for the monitoring and management of anxiety and depression in adolescents, provided that a collaborative approach to care is used, the feedback regarding the client is clear, and privacy and security provisions are assured.

  6. Participants' evaluation of a group-based organisational assessment tool in Danish general practice: the Maturity Matrix.

    Science.gov (United States)

    Buch, Martin Sandberg; Edwards, Adrian; Eriksson, Tina

    2009-01-01

The Maturity Matrix is a group-based formative self-evaluation tool aimed at assessing the degree of organisational development in general practice and providing a starting point for local quality improvement. Earlier studies of the Maturity Matrix have shown that participants find the method a useful way of assessing their practice's organisational development. However, little is known about participants' views on the resulting efforts to implement intended changes. Aim: to explore users' perspectives on the Maturity Matrix method, the facilitation process, and drivers and barriers for implementation of intended changes. Methods: observation of two facilitated practice meetings, 17 semi-structured interviews with participating general practitioners (GPs) or their staff, and mapping of reasons for continuing or quitting the project. Setting: general practices in Denmark. Main outcomes: successful change was associated with a clearly identified anchor person within the practice, a shared and regular meeting structure, and an external facilitator who provides support and counselling during the implementation process. Failure to implement change was associated with a high patient-related workload, staff or GP turnover (which seemed to affect small practices more), no clearly identified anchor person or anchor persons who did not do anything, no continuous support from an external facilitator, and no formal commitment to working with the agreed changes. Future attempts to improve the impact of the Maturity Matrix, and similar tools for quality improvement, could include: (a) attention to variation caused by practice size, (b) systematic counselling on barriers to implementation and support to structure the change processes, (c) a commitment from participants that goes beyond participation in two-yearly assessments, and (d) an anchor person for each identified goal who takes on the responsibility for improvement in practice.

  7. Standard Technical Specifications General Electric plants, BWR/4:Bases (Sections 3.4-3.10). Volume 3, Revision 1

    International Nuclear Information System (INIS)

    1995-04-01

This report documents the results of the combined effort of the NRC and the industry to produce improved Standard Technical Specifications (STS), Revision 1 for General Electric BWR/4 Plants. The changes reflected in Revision 1 resulted from the experience gained from license amendment applications to convert to these improved STS or to adopt partial improvements to existing technical specifications. This NUREG is the result of extensive public technical meetings and discussions between the Nuclear Regulatory Commission (NRC) staff and various nuclear power plant licensees, Nuclear Steam Supply System (NSSS) Owners Groups, NSSS vendors, and the Nuclear Energy Institute (NEI). The improved STS were developed based on the criteria in the Final Commission Policy Statement on Technical Specifications Improvements for Nuclear Power Reactors, dated July 22, 1993. The improved STS will be used as the basis for individual nuclear power plant licensees to develop improved plant-specific technical specifications. This report contains three volumes. Volume 1 contains the Specifications for all chapters and sections of the improved STS. Volume 2 contains the Bases for Chapters 2.0 and 3.0, and Sections 3.1-3.3 of the improved STS. This document, Volume 3, contains the Bases for Sections 3.4-3.10 of the improved STS

  8. Standard Technical Specifications General Electric plants, BWR/4: Bases (Sections 2.0-3.3). Volume 2, Revision 1

    International Nuclear Information System (INIS)

    1995-04-01

This report documents the results of the combined effort of the NRC and the industry to produce improved Standard Technical Specifications (STS), Revision 1 for General Electric BWR/4 Plants. The changes reflected in Revision 1 resulted from the experience gained from license amendment applications to convert to these improved STS or to adopt partial improvements to existing technical specifications. This NUREG is the result of extensive public technical meetings and discussions between the Nuclear Regulatory Commission (NRC) staff and various nuclear power plant licensees, Nuclear Steam Supply System (NSSS) Owners Groups, NSSS vendors, and the Nuclear Energy Institute (NEI). The improved STS were developed based on the criteria in the Final Commission Policy Statement on Technical Specifications Improvements for Nuclear Power Reactors, dated July 22, 1993. The improved STS will be used as the basis for individual nuclear power plant licensees to develop improved plant-specific technical specifications. This report contains three volumes. Volume 1 contains the Specifications for all chapters and sections of the improved STS. This document, Volume 2, contains the Bases for Chapters 2.0 and 3.0, and Sections 3.1-3.3 of the improved STS. Volume 3 contains the Bases for Sections 3.4-3.10 of the improved STS

  9. The general behavior of NLO unintegrated parton distributions based on the single-scale evolution and the angular ordering constraint

    International Nuclear Information System (INIS)

    Hosseinkhani, H.; Modarres, M.

    2011-01-01

To overcome the complexity of the generalized two-hard-scale (k_t, μ) evolution equations, well known as the Ciafaloni-Catani-Fiorani-Marchesini (CCFM) evolution equations, and to calculate the unintegrated parton distribution functions (UPDF), Kimber, Martin and Ryskin (KMR) proposed a procedure based on (i) the inclusion of the single scale μ only at the last step of evolution and (ii) the angular ordering constraint (AOC) on the DGLAP terms (the DGLAP collinear approximation), to bring the second scale, k_t, into the UPDF evolution equations. In this work we use the MSTW2008 (Martin et al.) parton distribution functions (PDF) and calculate the UPDF for various values of x (the longitudinal fraction of the parton momentum), μ (the probe scale) and k_t (the parton transverse momentum), to see the general behavior of the three-dimensional UPDF at the NLO level up to the LHC working energy scales (μ²). It is shown that there exist some pronounced peaks in the three-dimensional UPDF f_a(x, k_t) with respect to the two variables x and k_t at various energies μ. These peaks get larger and move to larger values of k_t as the energy μ is increased. We hope these peaks can be detected in the LHC experiments at CERN and other laboratories in the less exclusive processes.
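Schematically, the KMR procedure referenced here builds the UPDF from the integrated PDFs in a single last evolution step, with the angular-ordering cutoff supplying the second scale (a sketch in standard KMR notation; normalization details and the treatment of the singular splitting terms differ between LO and NLO formulations):

```latex
% T_a = Sudakov factor, P_{ab} = DGLAP splitting functions,
% b(x, k_t^2) = integrated parton densities.
f_a(x, k_t^2, \mu^2) \;\simeq\; T_a(k_t,\mu)\,
  \frac{\alpha_s(k_t^2)}{2\pi}
  \sum_b \int_x^{1-\Delta} dz\; P_{ab}(z)\,
  b\!\left(\frac{x}{z},\, k_t^2\right),
\qquad
\Delta = \frac{k_t}{\mu + k_t}
```

Here the cutoff Δ embodies the angular ordering constraint (ii) above, and the Sudakov factor T_a resums the virtual contributions between k_t and μ.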

  10. Improving communication in general practice when mental health issues appear: piloting a set of six evidence-based skills.

    Science.gov (United States)

    Stensrud, Tonje Lauritzen; Gulbrandsen, Pål; Mjaaland, Trond Arne; Skretting, Sidsel; Finset, Arnstein

    2014-04-01

    To test a communication skills training program teaching general practitioners (GPs) a set of six evidence-based mental health related skills. A training program was developed and tested in a pilot test-retest study with 21 GPs. Consultations were videotaped and actors were used as patients. A coding scheme was created to assess the effect of training on GP behavior. Relevant utterances were categorized as examples of each of the six specified skills. The GPs' self-perceived learning needs and self-efficacy were measured with questionnaires. The mean number of GP utterances related to the six skills increased from 13.3 (SD 6.2) utterances before training to 23.6 (SD 7.2) utterances after training, an increase of 77.4%. Use of the skills exploring emotions, cognitions and resources, and of the skill Promote coping, increased significantly. Self-perceived learning needs and self-efficacy did not change significantly. The results from this pilot test are encouraging. GPs significantly enhanced their use of four out of six mental health related communication skills, and the effects were medium to large. This training approach appears to be an efficacious approach to mental health related communication skills training in general practice. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.

  11. A Generalized Perturbation Theory Solver In Rattlesnake Based On PETSc With Application To TREAT Steady State Uncertainty Quantification

    Energy Technology Data Exchange (ETDEWEB)

    Schunert, Sebastian; Wang, Congjian; Wang, Yaqi; Kong, Fande; Ortensi, Javier; Baker, Benjamin; Gleicher, Frederick; DeHart, Mark; Martineau, Richard

    2017-04-01

    Rattlesnake and MAMMOTH are the designated TREAT analysis tools currently being developed at the Idaho National Laboratory. Concurrent with the development of the multi-physics, multi-scale capabilities, sensitivity analysis and uncertainty quantification (SA/UQ) capabilities are required for predictive modeling of the TREAT reactor. For steady-state SA/UQ, which is essential for setting initial conditions for the transients, generalized perturbation theory (GPT) will be used. This work describes the implementation of a PETSc-based solver for the generalized adjoint equations, which constitute an inhomogeneous, rank-deficient problem. The standard approach is to use an outer iteration strategy with repeated removal of the fundamental mode contamination. The described GPT algorithm directly solves the GPT equations without the need for an outer iteration procedure by using Krylov subspaces that are orthogonal to the operator’s nullspace. Three test problems are solved and provide sufficient verification of Rattlesnake’s GPT capability. We conclude with a preliminary example evaluating the impact of the boron distribution in the TREAT reactor using perturbation theory.
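    The core idea above, solving a rank-deficient adjoint problem inside a Krylov subspace orthogonal to the operator's nullspace instead of repeatedly removing fundamental-mode contamination, can be sketched as follows. This is a minimal illustration, not the Rattlesnake/PETSc implementation: the operator, right-hand side, and one-dimensional nullspace are invented for the example, and plain conjugate gradients stands in for the production Krylov solver.

    ```python
    import numpy as np

    def solve_deflated(A, b, n, tol=1e-10, maxiter=200):
        """Solve A x = b for singular symmetric A with nullspace span{n},
        by running conjugate gradients entirely in the subspace orthogonal
        to n (removing the nullspace component once, up front)."""
        n = n / np.linalg.norm(n)
        P = lambda v: v - n * (n @ v)   # orthogonal projector onto n's complement
        b = P(b)                        # the source must be orthogonal to the nullspace
        x = np.zeros_like(b)
        r = b.copy()
        p = r.copy()
        for _ in range(maxiter):
            Ap = P(A @ p)
            alpha = (r @ r) / (p @ Ap)
            x += alpha * p
            r_new = r - alpha * Ap
            if np.linalg.norm(r_new) < tol:
                break
            p = r_new + ((r_new @ r_new) / (r @ r)) * p
            r = r_new
        return P(x)

    # Rank-deficient symmetric operator: nullspace = span{(1,1,1)}
    A = np.array([[ 2., -1., -1.],
                  [-1.,  2., -1.],
                  [-1., -1.,  2.]])
    nvec = np.ones(3)
    b = np.array([1., 0., -1.])         # already orthogonal to the nullspace
    x = solve_deflated(A, b, nvec)
    ```

    Because every iterate is projected back onto the complement of the nullspace, the returned solution is the unique one with no fundamental-mode component, the property the outer-iteration strategy achieves only approximately.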

  12. Evaluating the Generalization Value of Process-based Models in a Deep-in-time Machine Learning framework

    Science.gov (United States)

    Shen, C.; Fang, K.

    2017-12-01

    Deep Learning (DL) methods have made revolutionary strides in recent years. A core value proposition of DL is that abstract notions and patterns can be extracted purely from data, without the need for domain expertise. Process-based models (PBM), on the other hand, can be regarded as repositories of human knowledge or hypotheses about how systems function. Here, through computational examples, we argue that there is merit in integrating PBMs with DL due to the imbalance and lack of data in many situations, especially in hydrology. We trained a deep-in-time neural network, the Long Short-Term Memory (LSTM), to learn soil moisture dynamics from Soil Moisture Active Passive (SMAP) Level 3 product. We show that when PBM solutions are integrated into LSTM, the network is able to better generalize across regions. LSTM is able to better utilize PBM solutions than simpler statistical methods. Our results suggest PBMs have generalization value which should be carefully assessed and utilized. We also emphasize that when properly regularized, the deep network is robust and is of superior testing performance compared to simpler methods.

  13. Relationship between the generalized equivalent uniform dose formulation and the Poisson statistics-based tumor control probability model

    International Nuclear Information System (INIS)

    Zhou Sumin; Das, Shiva; Wang Zhiheng; Marks, Lawrence B.

    2004-01-01

    The generalized equivalent uniform dose (GEUD) model uses a power-law formalism, where the outcome is related to the dose via a power law. We herein investigate the mathematical compatibility between this GEUD model and the Poisson statistics based tumor control probability (TCP) model. The GEUD and TCP formulations are combined and subjected to a compatibility constraint equation. This compatibility constraint equates tumor control probability from the original heterogeneous target dose distribution to that from the homogeneous dose from the GEUD formalism. It is shown that this constraint equation possesses a unique, analytical closed-form solution which relates radiation dose to the tumor cell survival fraction. It is further demonstrated that, when there is no positive threshold or finite critical dose in the tumor response to radiation, this relationship is not bounded within the realistic cell survival limits of 0%-100%. Thus, the GEUD and TCP formalisms are, in general, mathematically inconsistent. However, when a threshold dose or finite critical dose exists in the tumor response to radiation, there is a unique mathematical solution for the tumor cell survival fraction that allows the GEUD and TCP formalisms to coexist, provided that all portions of the tumor are confined within certain specific dose ranges
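    The power-law formalism referred to above is the standard gEUD definition, gEUD = (Σ_i v_i d_i^a)^(1/a) over dose-volume bins with fractional volumes v_i. A minimal sketch (the dose values below are invented for illustration):

    ```python
    def geud(doses, volumes, a):
        """Generalized equivalent uniform dose via the power-law formalism:
        gEUD = (sum_i v_i * d_i**a) ** (1/a), v_i = fractional volumes."""
        total = sum(volumes)
        return sum((v / total) * d**a for d, v in zip(doses, volumes)) ** (1.0 / a)

    doses   = [60.0, 62.0, 58.0, 40.0]   # Gy, one entry per dose-volume bin
    volumes = [0.4, 0.3, 0.2, 0.1]       # fractional volumes (sum to 1)

    mean_dose = geud(doses, volumes, a=1)     # a = 1 reduces to the mean dose
    cold_spot = geud(doses, volumes, a=-20)   # a << 0 is dominated by cold spots (tumor-like)
    hot_spot  = geud(doses, volumes, a=20)    # a >> 0 is dominated by hot spots (serial organs)
    ```

    The parameter a interpolates between the minimum, mean, and maximum dose, which is exactly why equating gEUD-based and Poisson-TCP predictions constrains the allowed cell-survival relationship.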

  14. A bottom-up, scientist-based initiative for the communication of climate sciences with the general public

    Science.gov (United States)

    Bourqui, Michel; Bolduc, Cassandra; Charbonneau, Paul; Charrière, Marie; Hill, Daniel; Lopez, Angelica; Loubet, Enrique; Roy, Philippe; Winter, Barbara

    2015-04-01

    This talk introduces a scientists-initiated, new online platform whose aim is to contribute to making climate sciences become public knowledge. It takes a unique bottom-up approach, strictly founded on individual-based participation, high scientific standards and independence The main purpose is to build an open-access, multilingual and peer-reviewed journal publishing short climate articles in non-scientific language. The targeted public includes journalists, teachers, students, local politicians, economists, members of the agriculture sector, and any other citizens from around the world with an interest in climate sciences. This journal is meant to offer a simple and direct channel for scientists wishing to disseminate their research to the general public. A high standard of climate articles is ensured through: a) requiring that the main author is an active climate scientist, and b) an innovative peer-review process involving scientific and non-scientific referees with distinct roles. The platform fosters the direct participation of non-scientists through co-authoring, peer-reviewing, language translation. It furthermore engages the general public in the scientific inquiry by allowing non-scientists to invite manuscripts to be written on topics of their concern. The platform is currently being developed by a community of scientists and non-scientists. In this talk, I will present the basic ideas behind this new online platform, its current state and the plans for the next future. The beta version of the platform is available at: http://www.climateonline.bourquiconsulting.ch

  15. General filtering method for electronic speckle pattern interferometry fringe images with various densities based on variational image decomposition.

    Science.gov (United States)

    Li, Biyuan; Tang, Chen; Gao, Guannan; Chen, Mingming; Tang, Shuwei; Lei, Zhenkun

    2017-06-01

    Filtering off speckle noise from a fringe image is one of the key tasks in electronic speckle pattern interferometry (ESPI). In general, ESPI fringe images can be divided into three categories: low-density fringe images, high-density fringe images, and variable-density fringe images. In this paper, we first present a general filtering method based on variational image decomposition that can filter speckle noise for ESPI fringe images with various densities. In our method, a variable-density ESPI fringe image is decomposed into low-density fringes, high-density fringes, and noise. A low-density fringe image is decomposed into low-density fringes and noise. A high-density fringe image is decomposed into high-density fringes and noise. We give some suitable function spaces to describe low-density fringes, high-density fringes, and noise, respectively. Then we construct several models and numerical algorithms for ESPI fringe images with various densities. And we investigate the performance of these models via our extensive experiments. Finally, we compare our proposed models with the windowed Fourier transform method and coherence enhancing diffusion partial differential equation filter. These two methods may be the most effective filtering methods at present. Furthermore, we use the proposed method to filter a collection of the experimentally obtained ESPI fringe images with poor quality. The experimental results demonstrate the performance of our proposed method.

  16. Validation of the generalized model of two-phase thermosyphon loop based on experimental measurements of volumetric flow rate

    Science.gov (United States)

    Bieliński, Henryk

    2016-09-01

    The current paper presents the experimental validation of the generalized model of the two-phase thermosyphon loop. The generalized model is based on mass, momentum, and energy balances in the evaporators, rising tube, condensers and the falling tube. The theoretical analysis and the experimental data have been obtained for a new designed variant. The variant refers to a thermosyphon loop with both minichannels and conventional tubes. The thermosyphon loop consists of an evaporator on the lower vertical section and a condenser on the upper vertical section. The one-dimensional homogeneous and separated two-phase flow models were used in calculations. The latest minichannel heat transfer correlations available in literature were applied. A numerical analysis of the volumetric flow rate in the steady-state has been done. The experiment was conducted on a specially designed test apparatus. Ultrapure water was used as a working fluid. The results show that the theoretical predictions are in good agreement with the measured volumetric flow rate at steady-state.

  17. Combining Generalized Renewal Processes with Non-Extensive Entropy-Based q-Distributions for Reliability Applications

    Directory of Open Access Journals (Sweden)

    Isis Didier Lins

    2018-03-01

    Full Text Available The Generalized Renewal Process (GRP) is a probabilistic model for repairable systems that can represent the usual states of a system after a repair: as new, as old, or in a condition between new and old. It is often coupled with the Weibull distribution, widely used in the reliability context. In this paper, we develop novel GRP models based on probability distributions that stem from Tsallis’ non-extensive entropy, namely the q-Exponential and the q-Weibull distributions. The q-Exponential and Weibull distributions can model decreasing, constant or increasing failure intensity functions. However, the power law behavior of the q-Exponential probability density function for specific parameter values is an advantage over the Weibull distribution when adjusting data containing extreme values. The q-Weibull probability distribution, in turn, can also fit data with bathtub-shaped or unimodal failure intensities in addition to the behaviors already mentioned. Therefore, the q-Exponential-GRP is an alternative to the Weibull-GRP model and the q-Weibull-GRP generalizes both. The method of maximum likelihood is used for their parameters’ estimation by means of a particle swarm optimization algorithm, and Monte Carlo simulations are performed for the sake of validation. The proposed models and algorithms are applied to examples involving reliability-related data of complex systems and the obtained results suggest GRP plus q-distributions are promising techniques for the analyses of repairable systems.
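    The q-Exponential density mentioned above follows from replacing the exponential with the Tsallis q-exponential, f(t) = (2−q)·λ·e_q(−λt), where e_q(x) = [1 + (1−q)x]^(1/(1−q)); for q → 1 it recovers the ordinary exponential, and for 1 < q < 2 it acquires the power-law tail that the paper exploits for extreme values. A minimal sketch (the parameter values are invented for illustration):

    ```python
    import math

    def q_exp(x, q):
        """Tsallis q-exponential e_q(x) = [1 + (1-q) x]^(1/(1-q)); e_1(x) = exp(x)."""
        if abs(q - 1.0) < 1e-12:
            return math.exp(x)
        base = 1.0 + (1.0 - q) * x
        return base ** (1.0 / (1.0 - q)) if base > 0 else 0.0

    def q_exponential_pdf(t, lam, q):
        """Density of the q-Exponential distribution (1 <= q < 2):
        f(t) = (2 - q) * lam * e_q(-lam * t).
        For q > 1 the tail decays as a power law rather than exponentially."""
        return (2.0 - q) * lam * q_exp(-lam * t, q)

    lam = 0.5
    ordinary   = lam * math.exp(-lam * 2.0)           # classical exponential density at t = 2
    almost_q1  = q_exponential_pdf(2.0, lam, 1.0001)  # q -> 1 recovers it
    heavy_tail = q_exponential_pdf(50.0, lam, 1.5)    # far heavier tail than the exponential
    ```

    The heavy tail for q > 1 is the "advantage over the Weibull distribution when adjusting data containing extreme values" that the abstract refers to.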

  18. [Impact analysis of shuxuetong injection on abnormal changes of ALT based on generalized boosted models propensity score weighting].

    Science.gov (United States)

    Yang, Wei; Yi, Dan-Hui; Xie, Yan-Ming; Yang, Wei; Dai, Yi; Zhi, Ying-Jie; Zhuang, Yan; Yang, Hu

    2013-09-01

    To estimate the treatment effects of Shuxuetong injection on abnormal changes in the ALT index, that is, to explore whether Shuxuetong injection harms liver function in clinical settings, and to provide clinical guidance for its safe application. Clinical information on traditional Chinese medicine (TCM) injections was gathered from the hospital information systems (HIS) of eighteen general hospitals. This is a retrospective cohort study using abnormal changes in the ALT index as the outcome. A large number of confounding biases are taken into account through generalized boosted models (GBM) and a multiple logistic regression model (MLRM) to estimate the treatment effects of Shuxuetong injection on abnormal changes in the ALT index and to explore possible influencing factors. The advantages and application process of GBM are demonstrated with examples, which eliminate the biases from most confounding variables between groups. This serves to modify the estimation of the treatment effects of Shuxuetong injection on the ALT index, making the results more reliable. Based on large-scale clinical observational data from the HIS database, significant effects of Shuxuetong injection on abnormal changes in ALT have not been found.

  19. Optimization of the general acceptability through affective tests and response surface methodology of a dry cacao powder mixture based beverage

    Directory of Open Access Journals (Sweden)

    Elena Chau Loo Kung

    2013-09-01

    Full Text Available This research work had as its main objective optimizing, through affective tests and response surface methodology, the general acceptability of a beverage based on a dry cacao powder mixture. We obtained formulations of cacao powder mixtures at concentrations of 15%, 17.5% and 20%, with lecithin concentrations of 0.1%, 0.3% and 0.5%, maintaining constant contents of sugar (25%) and vanillin (1%), including cacao powder with different pH values: natural (pH 5) and alkalinized (pH 6.5 and pH 8), and water by difference to 100%, generating a total of fifteen treatments to be evaluated according to the Box-Behnken design for three factors. The treatments underwent satisfaction-level tests to establish the general acceptability. The treatment with a cacao powder concentration of 17.5%, pH 6.5 and a lecithin concentration of 0.3% obtained the best acceptability. The software Statgraphics Plus 5.1 was used to obtain the treatment with maximum acceptability, which corresponded to cacao powder at pH 6.81 and a concentration of 18.24%, with soy lecithin at 0.28%, consistent with what was obtained in the satisfaction-level tests. Finally, we characterized the optimum formulation physicochemically and microbiologically, and evaluated it sensorially, obtaining an acceptability of 6.17.

  20. Occupational therapy in hospital based care in the Netherlands: a comparison of occupational therapy in general care (nursing homes, rehabilitation centres and general hospitals) and psychiatric care.

    NARCIS (Netherlands)

    Driessen, M.J.; Dekker, J.; Zee, J. van der; Lankhorst, G.

    1996-01-01

    The case of a 26-year old woman with Chronic Fatigue Syndrome (CFS) is presented. Multidimensional assessment showing severe debilitating fatigue and considerable psychological, social and occupational impairment confirmed the diagnosis. Cognitive behavior therapy (CBT) was based on a tested causal

  1. General beam position controlling method for 3D optical systems based on the method of solving ray matrix equations

    Science.gov (United States)

    Chen, Meixiong; Yuan, Jie; Long, Xingwu; Kang, Zhenglong; Wang, Zhiguo; Li, Yingying

    2013-12-01

    A general beam position controlling method for 3D optical systems based on the method of solving ray matrix equations has been proposed in this paper. As a typical 3D optical system, nonplanar ring resonator of Zero-Lock Laser Gyroscopes has been chosen as an example to show its application. The total mismatching error induced by Faraday-wedge in nonplanar ring resonator has been defined and eliminated quite accurately with the error less than 1 μm. Compared with the method proposed in Ref. [14], the precision of the beam position controlling has been improved by two orders of magnitude. The novel method can be used to implement automatic beam position controlling in 3D optical systems with servo circuit. All those results have been confirmed by related alignment experiments. The results in this paper are important for beam controlling, ray tracing, cavity design and alignment in 3D optical systems.
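    The ray-matrix equations referred to above are, in the paraxial approximation, products of 2×2 ABCD transfer matrices acting on a ray state (height, angle). A minimal illustration of the machinery (not the paper's 3D nonplanar-resonator formulation; the element values are invented): a thin lens must bring a parallel ray to the axis one focal length downstream, the basic identity any ray-matrix alignment solver has to satisfy.

    ```python
    import numpy as np

    def free_space(d):
        """ABCD matrix for paraxial propagation over distance d."""
        return np.array([[1.0, d], [0.0, 1.0]])

    def thin_lens(f):
        """ABCD matrix for a thin lens of focal length f."""
        return np.array([[1.0, 0.0], [-1.0 / f, 1.0]])

    # Ray state is (height y, angle theta). A ray parallel to the axis at
    # height 2 mm enters a lens of focal length 100 mm; after propagating
    # one focal length it must cross the axis (y = 0).
    ray_in = np.array([2.0, 0.0])
    system = free_space(100.0) @ thin_lens(100.0)   # matrices compose right-to-left
    ray_out = system @ ray_in
    ```

    Solving for the element positions/orientations that drive such outputs to a target (here y = 0) is the "solving ray matrix equations" step that the paper generalizes to 3D systems.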

  2. Preferences for Web-Based Information Material for Low Back Pain: Qualitative Interview Study on People Consulting a General Practitioner

    DEFF Research Database (Denmark)

    Riis, Allan; Hjelmager, Meulengracht Ditte; Vinther, Dausel Line

    2018-01-01

    in Denmark. Methods: This is a phenomenological qualitative study. Adults who had consulted their general practitioner because of LBP within the past 14 days were included. Each participated in a semistructured interview, which was audiotaped and transcribed for text condensation. Interviews were conducted...... at the participant's home by 2 interviewers. Participants also completed a questionnaire that requested information on age, gender, internet usage, interest in searching new knowledge, LBP-related function, and pain. Results: Fifteen 45-min interviews were conducted. Participants had a median age of 40 years (range......-based information confusing, often difficult to comprehend, and not relevant for them, and they questioned the motives driving most hosting companies or organizations. The Patient Handbook, a Danish government-funded website that provides information to Danes about health, was mentioned as a trustworthy...

  3. Identification of general characteristics, motivation, and satisfaction of internet-based medical consultation service users in Croatia.

    Science.gov (United States)

    Klinar, Ivana; Balazin, Ana; Barsić, Bruno; Tiljak, Hrvoje

    2011-08-15

    To identify users' reasons to seek physician consultation on the internet instead of visiting a physician, and to explore their general characteristics, motivation, and satisfaction with the internet medical consultation service 'Your Questions.' Users of the free internet medical consultation service 'Your Questions' (www.plivazdravlje.hr) were invited to participate in a web-based survey designed to explore their general characteristics (age, sex, etc), reasons for using the service, the nature of their health problem or question, and their satisfaction with the service. Respondents were divided into two groups: users who consulted an internet physician only (Group I) and users who used internet consulting before or after visiting a physician (Group II). The response rate was 38% (1036/2747), with 79% female respondents. A fifth of the respondents (21%) consulted an internet physician only (Group I). Multivariate analysis revealed that the respondents in Group I were younger (median 24 vs 28 years in Group II), more interested in questions about pregnancy (odds ratio [OR], 1.984; 95% confidence interval [CI], 1.203-3.272), more often embarrassed to talk to a physician in person (OR, 1.828; 95% CI, 1.119-2.989), and more motivated to protect their privacy (OR, 1.727; 95% CI, 1.252-2.380). They also had greater satisfaction with the service (77% vs 60%). The identified reasons for using the internet-based medical consultation service were younger age, need for privacy protection, avoidance of embarrassment at the physician's office, and having a question related to pregnancy. This reveals the internet medical consultation service as a useful health promotion supplement that is particularly applicable to the population of young adults.

  4. Performance scores in general practice: a comparison between the clinical versus medication-based approach to identify target populations.

    Directory of Open Access Journals (Sweden)

    Olivier Saint-Lary

    Full Text Available CONTEXT: From one country to another, pay-for-performance mechanisms differ on one significant point: the identification of target populations, that is, the populations which serve as a basis for calculating the indicators. The aim of this study was to compare clinical versus medication-based identification of populations of patients with diabetes and hypertension over the age of 50 (for men) or 60 (for women), and any consequences this may have on the calculation of P4P indicators. METHODS: A comparative, retrospective, observational study was carried out with clinical and prescription data from a panel of general practitioners (GPs), the Observatory of General Medicine (OMG), for the year 2007. Two indicators regarding the prescription of statins and aspirin in these populations were calculated. RESULTS: We analyzed data from 21,690 patients collected by 61 GPs via electronic medical files. Following the clinical-based approach, 2,278 patients were diabetic, 8,271 had hypertension and 1,539 had both, against respectively 1,730, 8,511 and 1,304 following the medication-based approach (% agreement = 96%, kappa = 0.69). The main reasons for these differences were: forgetting to code the morbidities in the clinical approach; not taking into account the population of patients who were given lifestyle and diet rules only; or taking into account patients for whom morbidities other than hypertension could justify the use of antihypertensive drugs in the medication-based approach. The mean (confidence interval) per doctor was 33.7% (31.5-35.9) for the statin indicator and 38.4% (35.4-41.4) for the aspirin indicator when the target populations were identified on the basis of clinical criteria, whereas they were 37.9% (36.3-39.4) and 43.8% (41.4-46.3) on the basis of treatment criteria. CONCLUSION: The two approaches yield very "similar" scores but these scores cover different realities and offer food for thought on the possible usage of these indicators in the

  5. Prevalence of pain in the orofacial regions in patients visiting general dentists in the Northwest Practice-based REsearch Collaborative in Evidence-based DENTistry research network.

    Science.gov (United States)

    Horst, Orapin V; Cunha-Cruz, Joana; Zhou, Lingmei; Manning, Walter; Mancl, Lloyd; DeRouen, Timothy A

    2015-10-01

    This study aimed to measure prevalence of pain in the orofacial regions and determine association with demographics, treatment history, and oral health conditions in dental patients visiting clinics in the Northwest Practice-based REsearch Collaborative in Evidence-based DENTistry (PRECEDENT) research network. Data were recorded in a survey with systematic random sampling of patients (n = 1,668, 18 to 93 years old, 56% female) visiting 100 general dentists in the Northwest PRECEDENT research network. Prevalence ratios (PR) of orofacial pain by each variable were estimated by generalized estimating equations for Poisson regression. The prevalence of orofacial pain during the past year was 16.1% (95% confidence interval [CI], 13.4-18.9), of which the most prevalent pain locations were dentoalveolar (9.1%; 95% CI, 7.0-11.2) and musculoligamentous tissues (6.6%; 95% CI, 4.5-8.7). Other locations included soft tissues (0.5%; 95% CI, 0.2-0.8) and nonspecific areas (0.6%; 95% CI, 0.2-1.0). The prevalence of dentoalveolar but not musculoligamentous pain decreased with age. When comparing the 18- to 29-year-old patients, dentoalveolar pain decreased significantly in 45- to 64-year-old patients (PR, 0.59; 95% CI, 0.4-0.9) and in those 65 years or older (PR, 0.5; 95% CI, 0.3-0.9). Sex significantly affected the prevalence of musculoligamentous but not dentoalveolar pain. Women (PR, 3.2; 95% CI, 2.0-5.1) were more likely to have musculoligamentous pain. The prevalence of dentoalveolar and musculoligamentous pain did not vary significantly by ethnicity. Dentoalveolar pain was reported more frequently in patients who did not receive dental maintenance (PR, 2.9; 95% CI, 2.1-4.2) and those visiting community-based public health clinics (PR, 2.2; 95% CI, 1.2-3.7). One in 6 patients visiting a general dentist had experienced orofacial pain during the past year. Dentoalveolar and musculoligamentous pains were the most prevalent types of pain. Pain in the muscles and

  6. The singular value filter: a general filter design strategy for PCA-based signal separation in medical ultrasound imaging.

    Science.gov (United States)

    Mauldin, F William; Lin, Dan; Hossack, John A

    2011-11-01

    A general filtering method, called the singular value filter (SVF), is presented as a framework for principal component analysis (PCA) based filter design in medical ultrasound imaging. The SVF approach operates by projecting the original data onto a new set of bases determined from PCA using singular value decomposition (SVD). The shape of the SVF weighting function, which relates the singular value spectrum of the input data to the filtering coefficients assigned to each basis function, is designed in accordance with a signal model and statistical assumptions regarding the underlying source signals. In this paper, we applied SVF to the specific application of clutter artifact rejection in diagnostic ultrasound imaging. SVF was compared to a conventional PCA-based filtering technique, which we refer to as the blind source separation (BSS) method, as well as a simple frequency-based finite impulse response (FIR) filter used as a baseline for comparison. The performance of each filter was quantified in simulated lesion images as well as experimental cardiac ultrasound data. SVF was demonstrated in both simulation and experimental results, over a wide range of imaging conditions, to outperform the BSS and FIR filtering methods in terms of contrast-to-noise ratio (CNR) and motion tracking performance. In experimental mouse heart data, SVF provided excellent artifact suppression with an average CNR improvement of 1.8 dB and over 40% reduction in displacement tracking error. It was further demonstrated from simulation and experimental results that SVF provided superior clutter rejection, as reflected in larger CNR values, when filtering was achieved using complex pulse-echo received data and non-binary filter coefficients.
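    The projection-weight-reconstruct operation described above can be sketched as follows. This is a minimal illustration, not the paper's SVF: the weighting function here is a hard rejection of the dominant singular component (a stand-in for the clutter model), and the synthetic "clutter" data are invented for the example.

    ```python
    import numpy as np

    def singular_value_filter(X, weights):
        """Project data onto its SVD basis, scale each principal component
        by a weight in [0, 1], and reconstruct.  weights[i] multiplies the
        i-th singular component (1 keeps it, 0 rejects it)."""
        U, s, Vt = np.linalg.svd(X, full_matrices=False)
        return U @ np.diag(s * np.asarray(weights, float)) @ Vt

    rng = np.random.default_rng(0)
    ensemble = rng.standard_normal((8, 64))      # 8 echo lines of fast-time samples
    clutter = 20.0 * np.outer(np.ones(8), np.sin(np.linspace(0, 4, 64)))
    X = ensemble + clutter                        # strong, highly correlated "clutter"

    keep_all = singular_value_filter(X, np.ones(8))   # identity: reconstructs X
    w = np.ones(8); w[0] = 0.0                        # reject the dominant component
    filtered = singular_value_filter(X, w)
    ```

    The paper's contribution is the shape of the weighting function itself (derived from a signal model and singular value statistics, with non-binary coefficients); the sketch only shows the underlying linear-algebra mechanism.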

  7. Handling high predictor dimensionality in slope-unit-based landslide susceptibility models through LASSO-penalized Generalized Linear Model

    KAUST Repository

    Camilo, Daniela Castro

    2017-08-30

    Grid-based landslide susceptibility models at regional scales are computationally demanding when using a fine grid resolution. Conversely, Slope-Unit (SU) based susceptibility models allow investigating the same areas while offering two main advantages: 1) a smaller computational burden and 2) a more geomorphologically oriented interpretation. In this contribution, we generate SU-based landslide susceptibility for Sado Island in Japan. This island is characterized by deep-seated landslides, which we assume can be only partially explained by the first two statistical moments (mean and variance) of a set of predictors within each slope unit. As a consequence, in a nested experiment, we first analyse the distributions of a set of continuous predictors within each slope unit, computing the standard deviation and the quantiles from 0.05 to 0.95 with a step of 0.05. These are then used as predictors for landslide susceptibility. In addition, we combine shape indices for polygon features and the normalized extent of each class belonging to the outcropping lithology in a given SU. This procedure significantly enlarges the size of the predictor hyperspace, thus producing a high level of slope-unit characterization. In a second step, we adopt a LASSO-penalized Generalized Linear Model to shrink the predictor set back to a sensible and interpretable number, carrying only the most significant covariates in the models. As a result, we are able to document the geomorphic features (e.g., the 95% quantile of Elevation and the 5% quantile of Plan Curvature) that primarily control the SU-based susceptibility within the test area while producing high predictive performances. The implementation of the statistical analyses is included in a parallelized R script (LUDARA), which is made available here for the community to replicate analogous experiments.
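    The LASSO shrinkage step described above, keeping only the most significant covariates by L1-penalizing the coefficients, can be sketched on a plain Gaussian linear model (the paper fits a penalized GLM in R; here an iterative soft-thresholding solver and synthetic predictors are used purely for illustration).

    ```python
    import numpy as np

    def lasso_ista(X, y, lam, lr=None, iters=2000):
        """LASSO via iterative soft-thresholding (ISTA):
        minimize 0.5*||y - X b||^2 + lam * ||b||_1."""
        n, p = X.shape
        if lr is None:
            lr = 1.0 / np.linalg.norm(X, 2) ** 2      # 1 / Lipschitz constant
        b = np.zeros(p)
        for _ in range(iters):
            g = X.T @ (X @ b - y)                      # gradient of the smooth part
            z = b - lr * g
            b = np.sign(z) * np.maximum(np.abs(z) - lr * lam, 0.0)  # soft threshold
        return b

    rng = np.random.default_rng(1)
    X = rng.standard_normal((200, 10))                 # 10 candidate predictors
    beta_true = np.array([3.0, -2.0] + [0.0] * 8)      # only the first two matter
    y = X @ beta_true + 0.1 * rng.standard_normal(200)

    beta = lasso_ista(X, y, lam=20.0)
    ```

    The L1 penalty drives the eight irrelevant coefficients exactly to zero while only slightly shrinking the two informative ones, which is how the paper prunes its enlarged quantile-based predictor hyperspace back to an interpretable set.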

  8. Handling high predictor dimensionality in slope-unit-based landslide susceptibility models through LASSO-penalized Generalized Linear Model

    KAUST Repository

    Camilo, Daniela Castro; Lombardo, Luigi; Mai, Paul Martin; Dou, Jie; Huser, Raphaël

    2017-01-01

    Grid-based landslide susceptibility models at regional scales are computationally demanding when using a fine grid resolution. Conversely, Slope-Unit (SU) based susceptibility models allow investigating the same areas while offering two main advantages: 1) a smaller computational burden and 2) a more geomorphologically oriented interpretation. In this contribution, we generate SU-based landslide susceptibility for Sado Island in Japan. This island is characterized by deep-seated landslides, which we assume can be only partially explained by the first two statistical moments (mean and variance) of a set of predictors within each slope unit. As a consequence, in a nested experiment, we first analyse the distributions of a set of continuous predictors within each slope unit, computing the standard deviation and the quantiles from 0.05 to 0.95 with a step of 0.05. These are then used as predictors for landslide susceptibility. In addition, we combine shape indices for polygon features and the normalized extent of each class belonging to the outcropping lithology in a given SU. This procedure significantly enlarges the size of the predictor hyperspace, thus producing a high level of slope-unit characterization. In a second step, we adopt a LASSO-penalized Generalized Linear Model to shrink the predictor set back to a sensible and interpretable number, carrying only the most significant covariates in the models. As a result, we are able to document the geomorphic features (e.g., the 95% quantile of Elevation and the 5% quantile of Plan Curvature) that primarily control the SU-based susceptibility within the test area while producing high predictive performances. The implementation of the statistical analyses is included in a parallelized R script (LUDARA), which is made available here for the community to replicate analogous experiments.

  9. Simulation-based investigation of the generality of Lyzenga's multispectral bathymetry formula in Case-1 coral reef water

    Science.gov (United States)

    Manessa, Masita Dwi Mandini; Kanno, Ariyo; Sagawa, Tatsuyuki; Sekine, Masahiko; Nurdin, Nurjannah

    2018-01-01

    Lyzenga's multispectral bathymetry formula has attracted considerable interest due to its simplicity. However, there has been little discussion of the effect that variation in optical conditions and bottom types, which commonly appears in coral reef environments, has on this formula's results. The present paper evaluates Lyzenga's multispectral bathymetry formula for a variety of optical conditions and bottom types. A noiseless dataset of above-water remote sensing reflectance from WorldView-2 images over Case-1 shallow coral reef water is simulated using a radiative transfer model. The simulation-based assessment shows that Lyzenga's formula performs robustly, with adequate generality and good accuracy, under a range of conditions. As expected, the influence of bottom type on depth estimation accuracy is far greater than the influence of the other optical parameters, i.e., chlorophyll-a concentration and solar zenith angle. Further, based on the simulation dataset, Lyzenga's formula estimates depth when the bottom type is unknown almost as accurately as when the bottom type is known. This study provides a better understanding of Lyzenga's multispectral bathymetry formula under various optical conditions and bottom types.
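    Lyzenga's formula linearizes the shallow-water signal as X_j = ln(L_j − L_∞j) and regresses depth on these band variables, z = h_0 + Σ_j h_j X_j, so that the unknown bottom reflectance cancels between bands. A minimal sketch on noiseless synthetic two-band data (the attenuation coefficients, offsets, and deep-water signals below are invented for the example, not WorldView-2 values):

    ```python
    import numpy as np

    rng = np.random.default_rng(2)
    n = 500
    depth = rng.uniform(1.0, 15.0, n)      # true depths (m)
    log_rb = rng.uniform(-1.0, 0.0, n)     # log bottom reflectance (varies by bottom type)

    # Two-band shallow-water reflectance under Lyzenga's assumptions:
    # L_j = L_inf_j + exp(log_rb + a_j - 2 * K_j * depth)
    K = np.array([0.08, 0.17])             # effective attenuation per band
    a = np.array([0.2, 0.1])
    L_inf = np.array([0.02, 0.01])         # deep-water (infinite depth) signal
    L = L_inf + np.exp(log_rb[:, None] + a - 2.0 * K * depth[:, None])

    # Linearization: X_j = ln(L_j - L_inf_j) is linear in depth and log_rb,
    # so regressing depth on [1, X_1, X_2] cancels the unknown reflectance.
    X = np.log(L - L_inf)
    design = np.column_stack([np.ones(n), X])
    h, *_ = np.linalg.lstsq(design, depth, rcond=None)
    depth_hat = design @ h
    ```

    With two bands of different attenuation the regression recovers depth exactly in the noiseless case regardless of bottom brightness, which is why, as the abstract notes, the formula degrades mainly when the bottom type assumptions themselves break down.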

  10. New Smith Internal Model Control of Two-Motor Drive System Based on Neural Network Generalized Inverse

    Directory of Open Access Journals (Sweden)

    Guohai Liu

    2016-01-01

    Multimotor drive systems are widely applied in industrial control systems. Considering the multi-input multi-output, nonlinear, strong-coupling, and time-varying-delay characteristics of two-motor drive systems, this paper proposes a new Smith internal model (SIM) control method based on neural network generalized inverse (NNGI). This control strategy adopts the NNGI system to settle the decoupling issue and utilizes the SIM control structure to solve the delay problem. The NNGI method can decouple the original system into several composite pseudolinear subsystems and also complete the pole-zero allocation of the subsystems. Furthermore, based on the precise model of the pseudolinear system, the proposed SIM control structure is used to compensate for the network delay and enhance the interference-resisting ability of the whole system. Both simulation and experimental results are given, verifying that the proposed control strategy can effectively solve the decoupling problem and exhibits strong robustness to load impact disturbance under various operating conditions.

  11. Polynomial Chaos–Based Bayesian Inference of K-Profile Parameterization in a General Circulation Model of the Tropical Pacific

    KAUST Repository

    Sraj, Ihab

    2016-08-26

    The authors present a polynomial chaos (PC)-based Bayesian inference method for quantifying the uncertainties of the K-profile parameterization (KPP) within the MIT general circulation model (MITgcm) of the tropical Pacific. The inference of the uncertain parameters is based on a Markov chain Monte Carlo (MCMC) scheme that utilizes a newly formulated test statistic taking into account the different components representing the structures of turbulent mixing on both daily and seasonal time scales in addition to the data quality, and filters out the effects of parameter perturbations from those caused by changes in the wind. To avoid the prohibitive computational cost of integrating the MITgcm model at each MCMC iteration, a surrogate model of the test statistic is built using the PC method. Because of the noise in the model predictions, a basis-pursuit-denoising (BPDN) compressed sensing approach is employed to determine the PC coefficients of a representative surrogate model. The PC surrogate is then used to evaluate the test statistic in the MCMC step for sampling the posterior of the uncertain parameters. Results of the posteriors indicate good agreement with the default values for two parameters of the KPP model, namely the critical bulk and gradient Richardson numbers, while the posteriors of the remaining parameters were barely informative. © 2016 American Meteorological Society.
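The key computational trick above is that each MCMC step evaluates a cheap surrogate rather than the full MITgcm. A schematic random-walk Metropolis sampler over a stand-in one-parameter surrogate log-posterior (the Gaussian target and all numbers are purely illustrative, not from the study):

```python
import math
import random

def metropolis(log_post, x0, steps=20000, scale=0.1, seed=0):
    """Random-walk Metropolis sampling over a cheap surrogate log-posterior."""
    rng = random.Random(seed)
    x, lp = x0, log_post(x0)
    chain = []
    for _ in range(steps):
        xp = x + rng.gauss(0.0, scale)       # symmetric proposal
        lpp = log_post(xp)
        if math.log(rng.random()) < lpp - lp:  # accept/reject step
            x, lp = xp, lpp
        chain.append(x)
    return chain

# stand-in surrogate posterior for one mixing parameter (Gaussian, made up)
def log_post(theta):
    return -0.5 * ((theta - 0.3) / 0.05) ** 2

chain = metropolis(log_post, x0=0.0)
burn = chain[len(chain) // 4:]               # discard burn-in
m = sum(burn) / len(burn)
print(round(m, 2))                           # posterior mean, close to 0.3
```

In the paper the expensive `log_post` evaluation is replaced by a PC expansion fitted once, which is what makes tens of thousands of MCMC iterations affordable.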

  12. Generalized Fragility Relationships with Local Site Conditions for Probabilistic Performance-based Seismic Risk Assessment of Bridge Inventories

    Directory of Open Access Journals (Sweden)

    Sivathayalan S.

    2012-01-01

    The current practice of detailed seismic risk assessment cannot be easily applied to all the bridges in a large transportation network due to limited resources. This paper presents a new approach for seismic risk assessment of large bridge inventories in a city or national bridge network based on the framework of probabilistic performance-based seismic risk assessment. To account for the influence of local site effects, a procedure to generate site-specific hazard curves that includes seismic hazard microzonation information has been developed for seismic risk assessment of bridge inventories. Simulated ground motions compatible with the site-specific seismic hazard are used as input excitations in nonlinear time history analyses of representative bridges for calibration. A normalizing procedure to obtain generalized fragility relationships in terms of structural characteristic parameters (bridge span and size, and longitudinal and transverse reinforcement ratios) is presented. The seismic risk of bridges in a large inventory can then be easily evaluated using the normalized fragility relationships without carrying out detailed nonlinear time history analysis.
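Fragility relationships of the kind normalized here are commonly expressed as a lognormal CDF of the intensity measure. A minimal sketch; the median capacity and dispersion below are hypothetical placeholders, since the abstract does not give the calibrated values:

```python
import math

def fragility(im, theta, beta):
    """P(damage state exceeded | intensity measure im): lognormal CDF form."""
    return 0.5 * (1.0 + math.erf(math.log(im / theta) / (beta * math.sqrt(2.0))))

# hypothetical normalized parameters for one bridge class
theta, beta = 0.45, 0.6   # median capacity (g) and lognormal dispersion
for im in (0.1, 0.45, 1.0):
    print(f"Sa = {im:.2f} g -> P(exceedance) = {fragility(im, theta, beta):.3f}")
```

By construction the curve passes through probability 0.5 at the median capacity, so a normalized relationship only needs (theta, beta) expressed as functions of span, size and reinforcement ratios.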

  13. Generalized free-space diffuse photon transport model based on the influence analysis of a camera lens diaphragm.

    Science.gov (United States)

    Chen, Xueli; Gao, Xinbo; Qu, Xiaochao; Chen, Duofang; Ma, Xiaopeng; Liang, Jimin; Tian, Jie

    2010-10-10

    The camera lens diaphragm is an important component in a noncontact optical imaging system and has a crucial influence on the images registered on the CCD camera. However, this influence has not been taken into account in existing free-space photon transport models. To model the photon transport process more accurately, a generalized free-space photon transport model is proposed. It combines Lambertian source theory with an analysis of the influence of the camera lens diaphragm to simulate the photon transport process in free space. In addition, the radiance theorem is adopted to establish the energy relationship between the virtual detector and the CCD camera. The accuracy and feasibility of the proposed model are validated with a Monte-Carlo-based free-space photon transport model and a physical phantom experiment. A comparison study with our previous hybrid radiosity-radiance-theorem-based model demonstrates the improved performance and potential of the proposed model for simulating the photon transport process in free space.

  14. A General Self-Organized Tree-Based Energy-Balance Routing Protocol for Wireless Sensor Network

    Science.gov (United States)

    Han, Zhao; Wu, Jie; Zhang, Jie; Liu, Liefeng; Tian, Kaiyun

    2014-04-01

    Wireless sensor network (WSN) is a system composed of a large number of low-cost micro-sensors, used to collect and send various kinds of messages to a base station (BS). A WSN consists of low-cost nodes with limited battery power, and battery replacement is not easy for a WSN with thousands of physically embedded nodes, which means an energy-efficient routing protocol should be employed to offer a long working lifetime. To achieve this aim, we need not only to minimize total energy consumption but also to balance the WSN load. Researchers have proposed many protocols such as LEACH, HEED, PEGASIS, TBC and PEDAP. In this paper, we propose a General Self-Organized Tree-Based Energy-Balance routing protocol (GSTEB) which builds a routing tree using a process where, for each round, the BS assigns a root node and broadcasts this selection to all sensor nodes. Subsequently, each node selects its parent by considering only itself and its neighbors' information, making GSTEB a dynamic protocol. Simulation results show that GSTEB performs better than the other protocols in balancing energy consumption, thus prolonging the lifetime of the WSN.
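The per-round parent-selection idea can be sketched as follows. This is a deliberate simplification in which every node sees all other nodes; the actual GSTEB protocol restricts the choice to radio-range neighbours and uses the paper's own selection rule:

```python
import math

def build_round_tree(nodes, energy, bs, root_id):
    """One simplified GSTEB-style round: every non-root node picks as parent the
    node that is closer to the base station and has the most residual energy."""
    parent = {root_id: None}
    for nid, pos in nodes.items():
        if nid == root_id:
            continue
        closer = [m for m, mp in nodes.items()
                  if m != nid and math.dist(mp, bs) < math.dist(pos, bs)]
        # fall back to the BS-assigned root if no closer node exists
        parent[nid] = max(closer, key=lambda m: energy[m]) if closer else root_id
    return parent

bs = (0.0, 0.0)
nodes = {0: (1, 1), 1: (2, 2), 2: (3, 3)}   # node 0 assigned root by the BS
energy = {0: 5.0, 1: 9.0, 2: 1.0}
tree = build_round_tree(nodes, energy, bs, 0)
print(tree)   # node 2 relays via the energetic node 1, not directly via node 0
```

Because each parent is strictly closer to the BS, the resulting structure is cycle-free, and re-running the selection every round with updated residual energies is what balances the load.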

  15. Photometric redshift estimation via deep learning. Generalized and pre-classification-less, image based, fully probabilistic redshifts

    Science.gov (United States)

    D'Isanto, A.; Polsterer, K. L.

    2018-01-01

    Context. The need to analyze the available large synoptic multi-band surveys drives the development of new data-analysis methods. Photometric redshift estimation is one field of application where such new methods have improved the results substantially. Up to now, the vast majority of applied redshift estimation methods have utilized photometric features. Aims: We aim to develop a method to derive probabilistic photometric redshifts directly from multi-band imaging data, rendering pre-classification of objects and feature extraction obsolete. Methods: A modified version of a deep convolutional network was combined with a mixture density network. The estimates are expressed as Gaussian mixture models representing the probability density functions (PDFs) in redshift space. In addition to the traditional scores, the continuous ranked probability score (CRPS) and the probability integral transform (PIT) were applied as performance criteria. We adopted a feature-based random forest and a plain mixture density network to compare performance in experiments with data from SDSS (DR9). Results: We show that the proposed method is able to predict redshift PDFs independently of the type of source, for example galaxies, quasars or stars. The prediction performance is better than both presented reference methods and is comparable to results from the literature. Conclusions: The presented method is extremely general and allows us to solve any kind of probabilistic regression problem based on imaging data, for example estimating the metallicity or star formation rate of galaxies. This kind of methodology is tremendously important for the next generation of surveys.
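For a single Gaussian predictive component, the CRPS used above has a closed form (Gneiting & Raftery); the full Gaussian-mixture case sums analogous terms over components and is omitted here. A minimal sketch with made-up redshift numbers:

```python
import math

def crps_gaussian(mu, sigma, x):
    """Closed-form CRPS of a Gaussian predictive PDF N(mu, sigma^2) at truth x."""
    z = (x - mu) / sigma
    pdf = math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)
    cdf = 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))
    return sigma * (z * (2.0 * cdf - 1.0) + 2.0 * pdf - 1.0 / math.sqrt(math.pi))

# a sharp, well-centred predictive PDF scores lower (better) than a vague one
sharp = crps_gaussian(0.5, 0.05, 0.5)
vague = crps_gaussian(0.5, 0.30, 0.5)
print(round(sharp, 4), round(vague, 4))
```

Unlike a point-estimate score, the CRPS rewards both calibration and sharpness of the whole PDF, which is why it is paired with the PIT histogram for calibration checks.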

  16. Sex-based differences in income and response to proposed financial incentives among general practitioners in France.

    Science.gov (United States)

    Weeks, William B; Paraponaris, Alain; Ventelou, Bruno

    2013-11-01

    Women represent a growing proportion of the physician workforce, worldwide. Therefore, for the purposes of workforce planning, it is increasingly important to understand differences in how male and female physicians work and might respond to financial incentives. A recent survey allowed us to determine whether sex-based differences in either physician income or responses to a hypothetical increase in reimbursement exist among French General Practitioners (GPs). Our analysis of 828 male and 244 female GPs' responses showed that females earned 35% less per year from medical practice than their male counterparts. After adjusting for the fact that female GPs had practiced medicine fewer years, worked 11% fewer hours per year, and spent more time with each consultation, female GPs earned 11,194€, or 20.6%, less per year (95% CI: 7085€-15,302€ less per year). Male GPs were more likely than female GPs to indicate that they would work fewer hours if consultation fees were to be increased. Our findings suggest that, as the feminization of medicine increases, the need to address gender-based income disparities increases and the tools that French policymakers use to regulate the physician supply might need to change. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.

  17. Upper gastrointestinal symptoms, psychosocial co-morbidity and health care seeking in general practice: population based case control study

    Directory of Open Access Journals (Sweden)

    Schellevis François G

    2009-09-01

    Background: The pathophysiology of upper gastrointestinal (GI) symptoms is still poorly understood. Psychological symptoms have been found to be more common in patients with functional gastrointestinal complaints, but it is debated whether they are primarily linked to GI symptoms or rather represent motivations for health-care seeking. The purpose of our study was to compare co-morbidity, in particular psychological and social problems, between patients with and without upper GI symptoms. In addition, we investigated whether the prevalence of psychological and social problems is part of a broader pattern of illness-related health care use. Methods: Population-based case control study based on the second Dutch National Survey of general practice (conducted in 2001). Cases (adults visiting their primary care physician (PCP) with upper GI symptoms) and controls (individuals not having any of these complaints), matched for gender, age, PCP practice and ethnicity, were compared. Main outcome measures were contact frequency, prevalence of somatic as well as psychosocial diagnoses, prescription rate of (psycho)pharmacological agents, and referral rates. Data were analyzed using odds ratios, the Chi-square test and multivariable logistic regression analysis. Results: Data from 13,389 patients with upper GI symptoms and 13,389 control patients were analyzed. Patients with upper GI symptoms visited their PCP twice as frequently as controls (8.6 vs 4.4 times/year). Patients with upper GI symptoms presented not only more psychological and social problems, but also more other health problems to their PCP (odds ratios (ORs) ranging from 1.37 to 3.45). Patients with upper GI symptoms more frequently used drugs of any ATC class (ORs ranging from 1.39 to 2.90), including psychotropic agents. The observed differences were less pronounced when we adjusted for non-attending control patients. 
In multivariate regression analysis, contact frequency and not psychological or

  18. A Java-based fMRI processing pipeline evaluation system for assessment of univariate general linear model and multivariate canonical variate analysis-based pipelines.

    Science.gov (United States)

    Zhang, Jing; Liang, Lichen; Anderson, Jon R; Gatewood, Lael; Rottenberg, David A; Strother, Stephen C

    2008-01-01

    As functional magnetic resonance imaging (fMRI) becomes widely used, the demands for evaluation of fMRI processing pipelines and validation of fMRI analysis results are increasing rapidly. The current NPAIRS package, an IDL-based fMRI processing pipeline evaluation framework, lacks system interoperability and the ability to evaluate general linear model (GLM)-based pipelines using prediction metrics. Thus, it cannot fully evaluate fMRI analytical software modules such as FSL.FEAT and NPAIRS.GLM. To overcome these limitations, a Java-based fMRI processing pipeline evaluation system was developed. It integrated YALE (a machine learning environment) into Fiswidgets (an fMRI software environment) to obtain system interoperability and applied an algorithm to measure GLM prediction accuracy. The results demonstrated that the system can evaluate fMRI processing pipelines with univariate GLM and multivariate canonical variates analysis (CVA)-based models on real fMRI data, based on prediction accuracy (classification accuracy) and statistical parametric image (SPI) reproducibility. In addition, a preliminary study was performed in which four fMRI processing pipelines with GLM and CVA modules such as FSL.FEAT and NPAIRS.CVA were evaluated with the system. The results indicated that (1) the system can compare different fMRI processing pipelines with heterogeneous models (NPAIRS.GLM, NPAIRS.CVA and FSL.FEAT) and rank their performance by automatic performance scoring, and (2) the ranking of pipeline performance is highly dependent on the preprocessing operations. These results suggest that the system will be of value for the comparison, validation, standardization and optimization of functional neuroimaging software packages and fMRI processing pipelines.

  19. Generalized corrosion of nickel base alloys in high temperature aqueous media: a contribution to the comprehension of the mechanisms

    International Nuclear Information System (INIS)

    Marchetti-Sillans, L.

    2007-11-01

    In France, nickel base alloys, such as alloy 600 and alloy 690, are the materials constituting the steam generator (SG) tubes of pressurized water reactors (PWR). The generalized corrosion resulting from the interaction between these alloys and the PWR primary medium leads, on the one hand, to the formation of a thin protective oxide scale (∼ 10 nm) and, on the other hand, to the release of cations into the primary circuit, which entails an increase of the global radioactivity of this circuit. The goal of this work is to provide new elements for understanding the corrosion phenomena of nickel base alloys in PWR primary media, with emphasis on the effects of metallurgical and physico-chemical parameters on the nature and the growth mechanisms of the protective oxide scale. In this context, the passive film formed during the exposure of alloys 600, 690 and Ni-30Cr, under conditions simulating the PWR primary medium, has been analyzed by a set of characterization techniques (SEM, TEM, PEC and MPEC, XPS). The coupling of these methods leads to a fine description, in terms of nature and structure, of the multilayered oxide forming during the exposure of nickel base alloys to the primary medium. Thus, the protective part of the oxide scale is composed of a continuous layer of iron and nickel mixed chromite, and Cr2O3 nodules dispersed at the alloy / mixed chromite interface. The study of the protective scale growth mechanisms by tracer and marker experiments reveals that the formation of the mixed chromite is the consequence of an anionic mechanism, resulting from short-circuit paths such as grain-boundary diffusion. Besides, the impact of alloy surface defects has also been studied, underlining a double effect of this parameter, which influences the density of diffusion short-circuits in the oxide and the formation rate of Cr2O3 nodules. The sum of these results leads to a suggested description of the nickel base alloy corrosion mechanisms in PWR primary media and to tackle some

  20. Supporting patients treated for prostate cancer: a video vignette study with an email-based educational program in general practice.

    Science.gov (United States)

    Jiwa, Moyez; Halkett, Georgia; Meng, Xingqiong; Pillai, Vinita; Berg, Melissa; Shaw, Tim

    2014-02-26

    Men who have been treated for prostate cancer in Australia can consult their general practitioner (GP) for advice about symptoms or side effects at any time following treatment. However, there is no evidence that such men are consistently advised by GPs and patients experience substantial unmet need for reassurance and advice. The intent of the study was to evaluate a brief, email-based educational program for GPs to manage standardized patients presenting with symptoms or side effects months or years after prostate cancer treatment. GPs viewed six pairs of video vignettes of actor-patients depicting men who had been treated for prostate cancer. The actor-patients presented problems that were attributable to the treatment of cancer. In Phase 1, GPs indicated their diagnosis and stated if they would prescribe, refer, or order tests based on that diagnosis. These responses were compared to the management decisions for those vignettes as recommended by a team of experts in prostate cancer. After Phase 1, all the GPs were invited to participate in an email-based education program (Spaced Education) focused on prostate cancer. Participants received feedback and could compare their progress and their performance with other participants in the study. In Phase 2, all GPs, regardless of whether they had completed the program, were invited to view another set of six video vignettes with men presenting similar problems to Phase 1. They again offered a diagnosis and stated if they would prescribe, refer, or order tests based on that diagnosis. In total, 64 general practitioners participated in the project, 57 GPs participated in Phase 1, and 45 in Phase 2. The Phase 1 education program was completed by 38 of the 57 (59%) participants. There were no significant differences in demographics between those who completed the program and those who did not. 
Factors determining whether management of cases was consistent with expert opinion were number of sessions worked per week (OR 0

  1. A randomized clinical trial comparing an acceptance-based behavior therapy to applied relaxation for generalized anxiety disorder.

    Science.gov (United States)

    Hayes-Skelton, Sarah A; Roemer, Lizabeth; Orsillo, Susan M

    2013-10-01

    To examine whether an empirically and theoretically derived treatment combining mindfulness- and acceptance-based strategies with behavioral approaches would improve outcomes in generalized anxiety disorder (GAD) over an empirically supported treatment. This trial randomized 81 individuals (65.4% female, 80.2% identified as White, average age 32.92) diagnosed with GAD to receive 16 sessions of either an acceptance-based behavior therapy (ABBT) or applied relaxation (AR). Assessments at pretreatment, posttreatment, and 6-month follow-up included the following primary outcome measures: GAD clinician severity rating, Structured Interview Guide for the Hamilton Anxiety Rating Scale, Penn State Worry Questionnaire, Depression Anxiety Stress Scale, and the State-Trait Anxiety Inventory. Secondary outcomes included the Beck Depression Inventory-II, Quality of Life Inventory, and number of comorbid diagnoses. Mixed effect regression models showed significant, large effects for time for all primary outcome measures (ds = 1.27 to 1.61) but nonsignificant, small effects for condition and Condition × Time (ds = 0.002 to 0.20), indicating that clients in the 2 treatments improved comparably over treatment. For secondary outcomes, time was significant (ds = 0.74 to 1.38), but condition and Condition × Time effects were not (ds = 0.004 to 0.31). No significant differences emerged over follow-up (ds = 0.03 to 0.39), indicating maintenance of gains. Between 63.3 and 80.0% of clients in ABBT and 60.6 and 78.8% of clients in AR experienced clinically significant change across 5 calculations of change at posttreatment and follow-up. ABBT is a viable alternative for treating GAD. (PsycINFO Database Record (c) 2013 APA, all rights reserved).

  2. Area-based cell colony surviving fraction evaluation: A novel fully automatic approach using general-purpose acquisition hardware.

    Science.gov (United States)

    Militello, Carmelo; Rundo, Leonardo; Conti, Vincenzo; Minafra, Luigi; Cammarata, Francesco Paolo; Mauri, Giancarlo; Gilardi, Maria Carla; Porcino, Nunziatina

    2017-10-01

    The current methodology for the Surviving Fraction (SF) measurement in clonogenic assay, which is a technique to study the anti-proliferative effect of treatments on cell cultures, involves manual counting of cell colony forming units. This procedure is operator-dependent and error-prone. Moreover, the identification of the exact colony number is often not feasible due to the high growth rate leading to the adjacent colony merging. As a matter of fact, conventional assessment does not deal with the colony size, which is generally correlated with the delivered radiation dose or the administered cytotoxic agent. Considering that the Area Covered by Colony (ACC) is proportional to the colony number and size as well as to the growth rate, we propose a novel fully automatic approach exploiting Circle Hough Transform, to automatically detect the wells in the plate, and local adaptive thresholding, which calculates the percentage of ACC for the SF quantification. This measurement relies just on this covering percentage and does not consider the colony number, preventing inconsistencies due to intra- and inter-operator variability. To evaluate the accuracy of the proposed approach, we compared the SFs obtained by our automatic ACC-based method against the conventional counting procedure. The achieved results (r = 0.9791 and r = 0.9682 on MCF7 and MCF10A cells, respectively) showed values highly correlated with the measurements using the traditional approach based on colony number alone. The proposed computer-assisted methodology could be integrated in laboratory practice as an expert system for the SF evaluation in clonogenic assays. Copyright © 2017 Elsevier Ltd. All rights reserved.
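The ACC computation at the heart of the method can be sketched on a toy grayscale image. The well centre and radius are given directly here (in the paper they come from the Circle Hough Transform), and the local-mean-minus-offset rule is a toy stand-in for the local adaptive thresholding actually used:

```python
def acc_percentage(img, cx, cy, r, win=3, offset=10):
    """Share (%) of in-well pixels darker than their local mean minus an offset,
    a toy stand-in for local adaptive thresholding inside a detected well."""
    h, w = len(img), len(img[0])
    inside = covered = 0
    for y in range(h):
        for x in range(w):
            if (x - cx) ** 2 + (y - cy) ** 2 > r * r:
                continue                      # pixel lies outside the well
            ys = range(max(0, y - win), min(h, y + win + 1))
            xs = range(max(0, x - win), min(w, x + win + 1))
            vals = [img[j][i] for j in ys for i in xs]
            inside += 1
            if img[y][x] < sum(vals) / len(vals) - offset:
                covered += 1                  # counted as colony-covered area
    return 100.0 * covered / inside

# toy plate: light background (200) with one dark 3x3 colony (50) in the well
img = [[200] * 21 for _ in range(21)]
for y in range(9, 12):
    for x in range(9, 12):
        img[y][x] = 50
acc = acc_percentage(img, cx=10, cy=10, r=8)
print(round(acc, 2))   # colony area as a percentage of the well area
```

Note that the returned SF proxy depends only on the covered fraction, never on a colony count, which is exactly what makes the measure robust to merged adjacent colonies.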

  3. MOCUM: A two-dimensional method of characteristics code based on constructive solid geometry and unstructured meshing for general geometries

    International Nuclear Information System (INIS)

    Yang Xue; Satvat, Nader

    2012-01-01

    Highlight: ► A two-dimensional numerical code based on the method of characteristics is developed. ► Complex arbitrary geometries are represented by constructive solid geometry and decomposed by unstructured meshing. ► Excellent agreement between Monte Carlo and the developed code is observed. ► High efficiency is achieved by parallel computing. - Abstract: A transport theory code, MOCUM, based on the method of characteristics as the flux solver with an advanced general geometry processor, has been developed for two-dimensional rectangular and hexagonal lattice and full-core neutronics modeling. In the code, the core structure is represented by constructive solid geometry, which uses regularized Boolean operations to build complex geometries from simple polygons. Arbitrary-precision arithmetic is also used in the process of building geometry objects to eliminate the round-off error of the commonly used double-precision numbers. The constructed core frame is then decomposed and refined into a conforming Delaunay triangulation to ensure the quality of the meshes. The code is fully parallelized using OpenMP and is verified and validated against various benchmarks representing rectangular, hexagonal, plate-type and CANDU reactor geometries. Compared with Monte Carlo and deterministic reference solutions, MOCUM results are highly accurate. These characteristics make MOCUM a perfect tool for high-fidelity full-core calculations for current and Gen-IV reactor core designs. The detailed representation of reactor physics parameters can enhance safety margins with acceptable confidence levels, leading to more economically optimized designs.

  4. Computational Model of D-Region Ion Production Caused by Energetic Electron Precipitations Based on General Monte Carlo Transport Calculations

    Science.gov (United States)

    Kouznetsov, A.; Cully, C. M.

    2017-12-01

    During enhanced magnetic activity, large ejections of energetic electrons from the radiation belts are deposited in the upper polar atmosphere, where they play important roles in its physical and chemical processes, including the subionospheric propagation of VLF signals. Electron deposition can affect D-region ionization, which is estimated from ionization rates derived from energy depositions. We present a model of D-region ion production caused by an arbitrary (in energy and pitch angle) distribution of fast (10 keV - 1 MeV) electrons. The model relies on a set of pre-calculated results obtained using a general Monte Carlo approach with the latest version of the MCNP6 (Monte Carlo N-Particle) code for explicit electron tracking in magnetic fields. By expressing those results as ionization yield functions, the pre-calculated results are extended to cover arbitrary magnetic field inclinations and atmospheric density profiles, allowing computation of ionization rate altitude profiles in the range of 20 to 200 km at any geographic point of interest and date/time by adopting results from an external atmospheric density model (e.g. NRLMSISE-00). The pre-calculated MCNP6 results are stored in a CDF (Common Data Format) file, and an IDL routine library is written to provide an end-user interface to the model.

  5. Gravitational-Wave Tests of General Relativity with Ground-Based Detectors and Pulsar-Timing Arrays

    Directory of Open Access Journals (Sweden)

    Nicolás Yunes

    2013-11-01

    This review is focused on tests of Einstein's theory of general relativity with gravitational waves that are detectable by ground-based interferometers and pulsar-timing experiments. Einstein's theory has been greatly constrained in the quasi-linear, quasi-stationary regime, where gravity is weak and velocities are small. Gravitational waves will allow us to probe a complementary, yet previously unexplored regime: the non-linear and dynamical strong-field regime. Such a regime is, for example, applicable to coalescing compact binaries, where characteristic velocities can reach fifty percent of the speed of light and gravitational fields are large and dynamical. This review begins with the theoretical basis and the predicted gravitational-wave observables of modified gravity theories. It continues with a brief description of the detectors, including both gravitational-wave interferometers and pulsar-timing arrays, leading to a discussion of the data analysis formalism applicable to such tests. The review ends with a discussion of gravitational-wave tests for compact binary systems.

  6. Load Frequency Control in Isolated Micro-Grids with Electrical Vehicles Based on Multivariable Generalized Predictive Theory

    Directory of Open Access Journals (Sweden)

    Jun Yang

    2015-03-01

    In power systems, although the inertia energy in power sources can partly cover power unbalances caused by load disturbances or renewable energy fluctuations, it is still hard to maintain the frequency deviation within acceptable ranges. However, with the vehicle-to-grid (V2G) technique, electric vehicles (EVs) can act as mobile energy storage units, which could be a solution for load frequency control (LFC) in an isolated grid. In this paper, an LFC model of an isolated micro-grid with EVs, distributed generation and their constraints is developed. In addition, a controller based on multivariable generalized predictive control (MGPC) theory is proposed for LFC in the isolated micro-grid, where EVs and a diesel generator (DG) are coordinated to achieve satisfactory load-frequency performance. A benchmark isolated micro-grid with EVs, DG, and a wind farm is modeled in the Matlab/Simulink environment to demonstrate the effectiveness of the proposed method. Simulation results demonstrate that with MGPC, the energy stored in EVs can be managed intelligently according to LFC requirements. This improves system frequency stability under complex operating situations, including random renewable energy resources and continuous load disturbances.

  7. Interpersonal Problems, Mindfulness, and Therapy Outcome in an Acceptance-Based Behavior Therapy for Generalized Anxiety Disorder.

    Science.gov (United States)

    Millstein, Daniel J; Orsillo, Susan M; Hayes-Skelton, Sarah A; Roemer, Lizabeth

    2015-01-01

    To better understand the role interpersonal problems play in response to two treatments for generalized anxiety disorder (GAD); an acceptance-based behavior therapy (ABBT) and applied relaxation (AR), and to examine how the development of mindfulness may be related to change in interpersonal problems over treatment and at follow-up. Eighty-one individuals diagnosed with GAD (65.4% female, 80.2% identified as white, average age 32.92) were randomized to receive 16 sessions of either ABBT or AR. GAD severity, interpersonal problems, and mindfulness were measured at pre-treatment, post-treatment, 6-month follow-up, and 12-month follow-up. Mixed effect regression models did not reveal any significant effects of pre-treatment interpersonal problems on GAD severity over treatment. After controlling for post-treatment GAD severity, remaining post-treatment interpersonal problems predicted 6- but not 12-month GAD severity. Participants in both conditions experienced a large decrease in interpersonal problems over treatment. Increases in mindfulness over treatment and through follow-up were associated with decreases in interpersonal problems, even when accounting for reductions in overall GAD severity. Interpersonal problems may be an important target of treatment in GAD, even if pre-treatment interpersonal problems are not predictive of outcome. Developing mindfulness in individuals with GAD may help ameliorate interpersonal difficulties among this population.

  8. Analysis of Future Vehicle Energy Demand in China Based on a Gompertz Function Method and Computable General Equilibrium Model

    Directory of Open Access Journals (Sweden)

    Tian Wu

    2014-11-01

    This paper presents a model for the projection of Chinese vehicle stocks and road vehicle energy demand through 2050, based on low-, medium-, and high-growth scenarios. To derive a gross domestic product (GDP)-dependent Gompertz function, Chinese GDP is estimated using a recursive dynamic Computable General Equilibrium (CGE) model. The Gompertz function is estimated using historical data on vehicle development trends in North America, the Pacific Rim and Europe to overcome the problem of insufficient long-running data on Chinese vehicle ownership. Results indicate that the projected vehicle stock for 2050 is 300, 455 and 463 million for the low-, medium-, and high-growth scenarios respectively. Furthermore, the growth in China's vehicle stock will pass the inflection point of the Gompertz curve by 2020, but will not reach the saturation point during the period 2014-2050. Of the major road vehicle categories, cars are the largest energy consumers, followed by trucks and buses. Growth in Chinese vehicle demand is primarily determined by per capita GDP. Vehicle saturation levels solely influence the shape of the Gompertz curve, and population growth only weakly affects vehicle demand. The projected total energy consumption of road vehicles in 2050 is 380, 575 and 586 million tonnes of oil equivalent for the respective scenarios.
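A GDP-dependent Gompertz ownership curve has the form V(g) = Vmax · exp(−a · exp(−b · g)), with its inflection point (fastest ownership growth) at g = ln(a)/b. A sketch with made-up parameters, since the abstract does not report the fitted values:

```python
import math

def gompertz(gdp_pc, v_max, a, b):
    """Vehicle ownership (vehicles per 1000 people) vs per-capita GDP."""
    return v_max * math.exp(-a * math.exp(-b * gdp_pc))

v_max, a, b = 500.0, 6.0, 0.00012   # hypothetical saturation level and shape
inflection = math.log(a) / b        # GDP at which ownership growth is fastest
print(round(inflection, 1))                          # inflection-point GDP
print(round(gompertz(inflection, v_max, a, b), 1))   # equals v_max / e there
```

The paper's claim that growth passes the inflection point by 2020 but never reaches saturation by 2050 corresponds to projected GDP crossing ln(a)/b early while V stays well below Vmax over the horizon.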

  9. A Generalized Polynomial Chaos-Based Approach to Analyze the Impacts of Process Deviations on MEMS Beams.

    Science.gov (United States)

    Gao, Lili; Zhou, Zai-Fa; Huang, Qing-An

    2017-11-08

    A microstructure beam is one of the fundamental elements in MEMS devices like cantilever sensors, RF/optical switches, varactors, resonators, etc. It is still difficult to precisely predict the performance of MEMS beams with currently available simulators due to the inevitable process deviations. Feasible numerical methods are therefore required to improve the yield and profits of MEMS devices. In this work, process deviations are treated as stochastic variables, and a newly developed numerical method, generalized polynomial chaos (GPC), is applied to the simulation of the MEMS beam. A doubly-clamped polybeam has been used to verify the accuracy of GPC against Monte Carlo (MC) approaches. Performance predictions have been made for the residual stress by obtaining its distributions in GaAs Monolithic Microwave Integrated Circuit (MMIC)-based MEMS beams. The results show that the errors of the GPC approximations are within 1% of the MC simulations. Appropriate choices of 4th-order GPC expansions with orthogonal terms also succeed in greatly reducing the MC simulation labor. The mean value of the residual stress, concluded from experimental tests, differs by about 1.1% from that of the 4th-order GPC method. The probability that the 4th-order GPC approximation attains the mean test value of the residual stress is around 54.3%, and the corresponding yield is over 90% within two standard deviations of the mean.
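    As a minimal illustration of the generalized polynomial chaos idea (not the paper's MEMS beam model), the sketch below expands a toy response y = exp(x) of a standard Gaussian input in probabilists' Hermite polynomials. The coefficients c_k = E[y·He_k]/k! are computed by simple numerical quadrature, and the GPC mean c_0 recovers E[y] without any Monte Carlo sampling:

```python
import math

def hermite_e(k, x):
    """Probabilists' Hermite polynomial He_k(x) via the recurrence
    He_{n+1}(x) = x*He_n(x) - n*He_{n-1}(x)."""
    h0, h1 = 1.0, x
    if k == 0:
        return h0
    for n in range(1, k):
        h0, h1 = h1, x * h1 - n * h0
    return h1

def gpc_coefficients(response, order, xmax=8.0, steps=4000):
    """Project a response y(x), x ~ N(0,1), onto He_0..He_order:
    c_k = E[y(x) He_k(x)] / k!, evaluated by brute-force quadrature
    against the standard normal density."""
    dx = 2 * xmax / steps
    coeffs = []
    for k in range(order + 1):
        acc = 0.0
        for i in range(steps + 1):
            x = -xmax + i * dx
            weight = math.exp(-0.5 * x * x) / math.sqrt(2 * math.pi)
            acc += response(x) * hermite_e(k, x) * weight * dx
        coeffs.append(acc / math.factorial(k))
    return coeffs

# Toy stochastic response y = exp(x): analytically c_k = e^0.5 / k!,
# so the GPC mean c_0 equals E[e^x] = e^0.5.
c = gpc_coefficients(math.exp, order=4)
mean = c[0]
variance = sum(math.factorial(k) * c[k] ** 2 for k in range(1, 5))
```

Truncating at 4th order slightly underestimates the true variance e(e-1), mirroring the paper's observation that a 4th-order expansion is already an accurate, cheap surrogate for MC.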

  10. Comprehensive optimisation of China’s energy prices, taxes and subsidy policies based on the dynamic computable general equilibrium model

    International Nuclear Information System (INIS)

    He, Y.X.; Liu, Y.Y.; Du, M.; Zhang, J.X.; Pang, Y.X.

    2015-01-01

    Highlights: • Energy policy is defined as a combination of energy price, tax and subsidy policies. • The maximisation of total social benefit is the optimisation objective. • A more rational carbon tax ranges from 10 to 20 Yuan/ton under the current situation. • The optimal coefficient pricing is more conducive to maximising total social benefit. - Abstract: Under conditions of increasingly serious environmental pollution, rational energy policy is of practical significance for energy conservation and emission reduction. This paper defines energy policy as the combination of energy price, tax and subsidy policies. Moreover, it establishes an optimisation model of China’s energy policy based on the dynamic computable general equilibrium model, which maximises the total social benefit, in order to explore the comprehensive influences of a carbon tax, the sales pricing mechanism and the renewable energy fund policy. The results show that when the change rates of gross domestic product and the consumer price index are ±2% and ±5% and the renewable energy supply structure ratio is 7%, the more reasonable carbon tax ranges from 10 to 20 Yuan/ton, and the optimal coefficient pricing mechanism is more conducive to the objective of maximising the total social benefit. From the perspective of optimising the overall energy policies, if the upper limit of the change rate in the consumer price index is 2.2%, the existing renewable energy fund should be improved.

  11. A general approach to one-pot fabrication of crumpled graphene-based nanohybrids for energy applications.

    Science.gov (United States)

    Mao, Shun; Wen, Zhenhai; Kim, Haejune; Lu, Ganhua; Hurley, Patrick; Chen, Junhong

    2012-08-28

    Crumpled graphene oxide (GO)/graphene is a new type of carbon nanostructure that has drawn growing attention due to its three-dimensional open structure and excellent stability in aqueous solution. Here we report a general, one-step approach to produce crumpled graphene (CG)-nanocrystal hybrids by direct aerosolization of a GO suspension mixed with precursor ions. Nanocrystals spontaneously grow from the precursor ions and assemble on both the external and internal surfaces of the CG balls during the solvent evaporation and GO crumpling process. More importantly, CG-nanocrystal hybrids can be directly deposited onto various current-collecting substrates, enabling their tremendous potential for energy applications. As a proof of concept, we demonstrate the use of hybrid electrodes of CG-Mn3O4 and CG-SnO2 in an electrochemical supercapacitor and a lithium-ion battery, respectively. The performance of the resulting capacitor/battery is attractive and outperforms conventional flat graphene-based hybrid devices. This study provides a new and facile route to fabricating high-performance hybrid CG-nanocrystal electrodes for various energy systems.

  12. A Generalized Polynomial Chaos-Based Approach to Analyze the Impacts of Process Deviations on MEMS Beams

    Directory of Open Access Journals (Sweden)

    Lili Gao

    2017-11-01

    Full Text Available A microstructure beam is one of the fundamental elements in MEMS devices like cantilever sensors, RF/optical switches, varactors, resonators, etc. It is still difficult to precisely predict the performance of MEMS beams with currently available simulators due to the inevitable process deviations. Feasible numerical methods are therefore required to improve the yield and profits of MEMS devices. In this work, process deviations are treated as stochastic variables, and a newly developed numerical method, generalized polynomial chaos (GPC), is applied to the simulation of the MEMS beam. A doubly-clamped polybeam has been used to verify the accuracy of GPC against Monte Carlo (MC) approaches. Performance predictions have been made for the residual stress by obtaining its distributions in GaAs Monolithic Microwave Integrated Circuit (MMIC)-based MEMS beams. The results show that the errors of the GPC approximations are within 1% of the MC simulations. Appropriate choices of 4th-order GPC expansions with orthogonal terms also succeed in greatly reducing the MC simulation labor. The mean value of the residual stress, concluded from experimental tests, differs by about 1.1% from that of the 4th-order GPC method. The probability that the 4th-order GPC approximation attains the mean test value of the residual stress is around 54.3%, and the corresponding yield is over 90% within two standard deviations of the mean.

  13. The Optimal Price Ratio of Typical Energy Sources in Beijing Based on the Computable General Equilibrium Model

    Directory of Open Access Journals (Sweden)

    Yongxiu He

    2014-04-01

    Full Text Available In Beijing, China, the rational consumption of energy is affected by the insufficient linkage mechanism of the energy pricing system, unreasonable price ratios and other issues. This paper combines the characteristics of Beijing’s energy market and puts forward maximisation of the society-economy equilibrium indicator R, taking the mitigation cost into consideration, to determine a reasonable price ratio range. Based on the computable general equilibrium (CGE) model, and dividing four kinds of energy sources into three groups, the impact of price fluctuations of electricity and natural gas on the Gross Domestic Product (GDP), Consumer Price Index (CPI), energy consumption and CO2 and SO2 emissions can be simulated for various scenarios. On this basis, the integrated effects of electricity and natural gas price shocks on the Beijing economy and environment can be calculated. The results show that, relative to coal prices, the electricity and natural gas prices in Beijing are currently below reasonable levels; the solution to these unreasonable energy price ratios should begin by improving the energy pricing mechanism, through means such as the establishment of a sound dynamic adjustment mechanism between regulated prices and market prices. This provides a new idea for exploring the rationality of energy price ratios in imperfectly competitive energy markets.

  14. Gravitational-Wave Tests of General Relativity with Ground-Based Detectors and Pulsar-Timing Arrays.

    Science.gov (United States)

    Yunes, Nicolás; Siemens, Xavier

    2013-01-01

    This review is focused on tests of Einstein's theory of general relativity with gravitational waves that are detectable by ground-based interferometers and pulsar-timing experiments. Einstein's theory has been greatly constrained in the quasi-linear, quasi-stationary regime, where gravity is weak and velocities are small. Gravitational waves will allow us to probe a complementary, yet previously unexplored regime: the non-linear and dynamical strong-field regime. Such a regime is, for example, realized in coalescing compact binaries, where characteristic velocities can reach fifty percent of the speed of light and gravitational fields are large and dynamical. This review begins with the theoretical basis and the predicted gravitational-wave observables of modified gravity theories. The review continues with a brief description of the detectors, including both gravitational-wave interferometers and pulsar-timing arrays, leading to a discussion of the data analysis formalism that is applicable to such tests. The review ends with a discussion of gravitational-wave tests for compact binary systems.

  15. Localization of skeletal and aortic landmarks in trauma CT data based on the discriminative generalized Hough transform

    Science.gov (United States)

    Lorenz, Cristian; Hansis, Eberhard; Weese, Jürgen; Carolus, Heike

    2016-03-01

    Computed tomography is the modality of choice for poly-trauma patients to rapidly assess the skeletal and vascular integrity of the whole body. Often several scans with and without contrast medium, or with different spatial resolution, are acquired. Efficient reading of the resulting extensive set of image data is vital, since it is often time-critical to initiate the necessary therapeutic actions. A set of automatically found landmarks can facilitate navigation in the data and enables anatomy-oriented viewing. Following this intention, we selected a comprehensive set of 17 skeletal and 5 aortic landmarks. Landmark localization models for the Discriminative Generalized Hough Transform (DGHT) were automatically created based on a set of about 20 training images with ground truth landmark positions. A hierarchical setup with 4 resolution levels was used. Localization results were evaluated on a separate test set, consisting of 50 to 128 images (depending on the landmark) with available ground truth landmark locations. The image data cover a large amount of variability caused by differences in field-of-view, resolution, contrast agent, patient gender and pathologies. The median localization error for the set of aortic landmarks was 14.4 mm and for the set of skeletal landmarks 5.5 mm. Median localization errors for individual landmarks ranged from 3.0 mm to 31.0 mm. The runtime for the whole landmark set is about 5 s on a typical PC.
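    The Generalized Hough Transform underlying the DGHT can be sketched in its plain, non-discriminative form: every template point votes for the translation that would align the model's reference point with the image evidence, and the accumulator peak localizes the landmark. The shapes and coordinates below are invented for illustration, and a full GHT would additionally index the offset table by edge orientation:

```python
from collections import Counter

def ght_locate(template_points, image_points, reference=(0, 0)):
    """Minimal Generalized Hough Transform for pure 2-D translation:
    returns the accumulator cell with the most votes and its count."""
    # R-table for translation only: offsets from each model point to the
    # reference point.
    offsets = [(reference[0] - px, reference[1] - py)
               for px, py in template_points]
    accumulator = Counter()
    for ix, iy in image_points:
        for ox, oy in offsets:
            accumulator[(ix + ox, iy + oy)] += 1
    return accumulator.most_common(1)[0]  # ((x, y), votes)

# Template: an L-shaped "landmark"; image: the same shape translated by
# (10, 7) plus two clutter points.
template = [(0, 0), (1, 0), (2, 0), (0, 1), (0, 2)]
image = [(x + 10, y + 7) for x, y in template] + [(3, 3), (40, 1)]
pos, votes = ght_locate(template, image)
```

All five true shape points vote for the same cell, so the peak lands on the correct landmark position despite the clutter.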

  16. General chemistry

    International Nuclear Information System (INIS)

    Kwon, Yeong Sik; Lee, Dong Seop; Ryu, Haung Ryong; Jang, Cheol Hyeon; Choi, Bong Jong; Choi, Sang Won

    1993-07-01

    The book covers the latest general chemistry and is divided into twenty-three chapters. It deals with basic concepts and stoichiometry, the nature of gases, the structure of atoms, quantum mechanics, the electronic structure of ions and molecules, chemical thermodynamics, the nature of solids, changes of state and liquids, properties of solutions, chemical equilibrium, solutions and acid-base chemistry, equilibria in aqueous solution, electrochemistry, chemical reaction rates, molecular spectroscopy, hydrogen, oxygen and water, the metallic elements of groups IA, IIA and IIIA, carbon and the group IVA elements, the nonmetallic elements and inert gases, the transition metals, lanthanides and actinides, nuclear properties and radioactivity, and biochemistry and environmental chemistry.

  17. Plasma-based water treatment: development of a general mechanistic model to estimate the treatability of different types of contaminants

    International Nuclear Information System (INIS)

    Mededovic Thagard, Selma; Stratton, Gunnar R; Paek, Eunsu; Dai, Fei; Holsen, Thomas M; Bellona, Christopher L; Bohl, Douglas G; Dickenson, Eric R V

    2017-01-01

    To determine the types of applications for which plasma-based water treatment (PWT) is best suited, the treatability of 23 environmental contaminants was assessed through treatment in a gas discharge reactor with argon bubbling, termed the enhanced-contact reactor. The contaminants were treated in a mixture to normalize reaction conditions and convective transport limitations. Treatability was compared in terms of the observed removal rate constant (k_obs). To characterize the influence of interfacial processes on k_obs, a model was developed that accurately predicts k_obs for each compound, as well as the contributions to k_obs from each of the three general degradation mechanisms thought to occur at or near the gas–liquid interface: ‘sub-surface’, ‘surface’ and ‘above-surface’. Sub-surface reactions occur just underneath the gas–liquid interface between the contaminants and dissolved plasma-generated radicals, contributing significantly to the removal of compounds that lack surfactant-like properties and so are not highly concentrated at the interface. Surface reactions occur at the interface between the contaminants and dissolved radicals, contributing significantly to the removal of surfactant-like compounds that have high interfacial concentrations. The contaminants’ interfacial concentrations were calculated using surface-activity parameters determined through surface tension measurements. Above-surface reactions are proposed to take place in the plasma interior between highly energetic plasma species and exposed portions of compounds that extend out of the interface. This mechanism largely accounts for the degradation of surfactant-like contaminants that contain highly hydrophobic perfluorocarbon groups, which are most likely to protrude from the interface. For a few compounds, the degree of exposure to the plasma interior was supported by new and previously reported molecular dynamics simulation results. By reviewing the predicted

  18. GENERAL: Kinetic Behaviors of Catalysis-Driven Growth of Three-Species Aggregates on Base of Exchange-Driven Aggregations

    Science.gov (United States)

    Sun, Yun-Fei; Chen, Dan; Lin, Zhen-Quan; Ke, Jian-Hong

    2009-06-01

    We propose a solvable aggregation model to mimic the evolution of population A, asset B, and the quantifiable resource C in a society. In this system, the population and asset aggregates grow through self-exchanges with the rate kernels K1(k, j) = K1kj and K2(k, j) = K2kj, respectively. The actions of the population and asset aggregations on the aggregation evolution of resource aggregates are described by the population-catalyzed monomer death of resource aggregates and the asset-catalyzed monomer birth of resource aggregates, with the rate kernels J1(k, j) = J1k and J2(k, j) = J2k, respectively. Meanwhile, the asset and resource aggregates jointly catalyze the monomer birth of population aggregates with the rate kernel I1(k, i, j) = I1 k i^μ j^η, and the population and resource aggregates jointly catalyze the monomer birth of asset aggregates with the rate kernel I2(k, i, j) = I2 k i^ν j^η. The kinetic behaviors of species A, B, and C are investigated by means of the mean-field rate equation approach. The effects of the population-catalyzed death and asset-catalyzed birth on the evolution of resource aggregates, based on the self-exchanges of population and asset, appear in effective forms. The coefficients of the effective population-catalyzed death and the asset-catalyzed birth are expressed as J1e = J1/K1 and J2e = J2/K2, respectively. The aggregate size distribution of species C is found to be crucially dominated by the competition between the effective death and the effective birth. It satisfies the conventional scaling form, the generalized scaling form, and the modified scaling form in the cases of J1e < J2e, J1e = J2e, and J1e > J2e, respectively. Meanwhile, we also find the aggregate size distributions of populations and assets both fall into two distinct categories for different parameters μ, ν, and η: (i) when μ = ν = η = 0 and μ = ν = 0, η = 1, the population and asset aggregates obey the generalized scaling forms; and (ii) when μ = ν = 1, η = 0, and μ = ν = η = 1, the

  19. Plasma-based water treatment: development of a general mechanistic model to estimate the treatability of different types of contaminants

    Science.gov (United States)

    Mededovic Thagard, Selma; Stratton, Gunnar R.; Dai, Fei; Bellona, Christopher L.; Holsen, Thomas M.; Bohl, Douglas G.; Paek, Eunsu; Dickenson, Eric R. V.

    2017-01-01

    To determine the types of applications for which plasma-based water treatment (PWT) is best suited, the treatability of 23 environmental contaminants was assessed through treatment in a gas discharge reactor with argon bubbling, termed the enhanced-contact reactor. The contaminants were treated in a mixture to normalize reaction conditions and convective transport limitations. Treatability was compared in terms of the observed removal rate constant (k_obs). To characterize the influence of interfacial processes on k_obs, a model was developed that accurately predicts k_obs for each compound, as well as the contributions to k_obs from each of the three general degradation mechanisms thought to occur at or near the gas-liquid interface: ‘sub-surface’, ‘surface’ and ‘above-surface’. Sub-surface reactions occur just underneath the gas-liquid interface between the contaminants and dissolved plasma-generated radicals, contributing significantly to the removal of compounds that lack surfactant-like properties and so are not highly concentrated at the interface. Surface reactions occur at the interface between the contaminants and dissolved radicals, contributing significantly to the removal of surfactant-like compounds that have high interfacial concentrations. The contaminants’ interfacial concentrations were calculated using surface-activity parameters determined through surface tension measurements. Above-surface reactions are proposed to take place in the plasma interior between highly energetic plasma species and exposed portions of compounds that extend out of the interface. This mechanism largely accounts for the degradation of surfactant-like contaminants that contain highly hydrophobic perfluorocarbon groups, which are most likely to protrude from the interface. For a few compounds, the degree of exposure to the plasma interior was supported by new and previously reported molecular dynamics simulation results. By reviewing the predicted

  20. A Reconstruction of Development of the Periodic Table Based on History and Philosophy of Science and Its Implications for General Chemistry Textbooks

    Science.gov (United States)

    Brito, Angmary; Rodriguez, Maria A.; Niaz, Mansoor

    2005-01-01

    The objectives of this study are: (a) elaboration of a history and philosophy of science (HPS) framework based on a reconstruction of the development of the periodic table; (b) formulation of seven criteria based on the framework; and (c) evaluation of 57 freshman college-level general chemistry textbooks with respect to the presentation of the…

  1. The Role of Culture Theory in Cross-Cultural Training: A Multimethod Study of Culture-Specific, Culture-General, and Culture Theory-Based Assimilators.

    Science.gov (United States)

    Bhawuk, Dharm P. S.

    1998-01-01

    In a multimethod evaluation of cross-cultural training tools involving 102 exchange students at a midwestern university, a theory-based individualism and collectivism assimilator tool had significant advantages over culture-specific and culture-general assimilators and a control condition. Results support theory-based culture assimilators. (SLD)

  2. Incidence rates and risk factors of bipolar disorder in the general population: a population-based cohort study

    NARCIS (Netherlands)

    Kroon, Jojanneke S.; Wohlfarth, Tamar D.; Dieleman, Jeanne; Sutterland, Arjen L.; Storosum, Jitschak G.; Denys, Damiaan; de Haan, Lieuwe; Sturkenboom, Mirjam C. J. M.

    2013-01-01

    To estimate the incidence rates (IRs) of bipolar I and bipolar II disorders in the general population according to sociodemographic population characteristics. A cohort study (during the years 1996-2007) was conducted in a general practitioners research database with a longitudinal electronic record

  3. The Association of Frailty With Outcomes and Resource Use After Emergency General Surgery: A Population-Based Cohort Study.

    Science.gov (United States)

    McIsaac, Daniel I; Moloo, Husein; Bryson, Gregory L; van Walraven, Carl

    2017-05-01

    Older patients undergoing emergency general surgery (EGS) experience high rates of postoperative morbidity and mortality. Studies focused primarily on elective surgery indicate that frailty is an important predictor of adverse outcomes in older surgical patients. The population-level effect of frailty on EGS is poorly described. Therefore, our objective was to measure the association of preoperative frailty with outcomes in a population of older patients undergoing EGS. We created a population-based cohort study using linked administrative data in Ontario, Canada, that included community-dwelling individuals aged >65 years having EGS. Our main exposure was preoperative frailty, as defined by the Johns Hopkins Adjusted Clinical Groups frailty-defining diagnoses indicator. The Adjusted Clinical Groups frailty-defining diagnoses indicator is a binary variable that uses 12 clusters of frailty-defining diagnoses. Our main outcome measures were 1-year all-cause mortality (primary), intensive care unit admission, length of stay, institutional discharge, and costs of care (secondary). Of 77,184 patients, 19,779 (25.6%) were frail. Death within 1 year occurred in 6626 (33.5%) frail patients compared with 11,366 (19.8%) nonfrail patients. After adjustment for sociodemographic and surgical confounders, this resulted in a hazard ratio of 1.29 (95% confidence interval [CI] 1.25-1.33). The risk of death for frail patients varied significantly across the postoperative period and was particularly high immediately after surgery (hazard ratio on postoperative day 1 = 23.1, 95% CI 22.3-24.1). Frailty was adversely associated with all secondary outcomes, including a 5.82-fold increase in the adjusted odds of institutional discharge (95% CI 5.53-6.12). After EGS, frailty is associated with increased rates of mortality, institutional discharge, and resource use. Strategies that might improve perioperative outcomes in frail EGS patients need to be developed and tested.

  4. Feasibility and impact of a computer-guided consultation on guideline-based management of COPD in general practice.

    Science.gov (United States)

    Angus, Robert M; Thompson, Elizabeth B; Davies, Lisa; Trusdale, Ann; Hodgson, Chris; McKnight, Eddie; Davies, Andrew; Pearson, Mike G

    2012-12-01

    Applying guidelines is a universal challenge that is often not met. Intelligent software systems that facilitate real-time management during a clinical interaction may offer a solution. To determine if the use of a computer-guided consultation that facilitates the National Institute for Health and Clinical Excellence-based chronic obstructive pulmonary disease (COPD) guidance and prompts clinical decision-making is feasible in primary care and to assess its impact on diagnosis and management in reviews of COPD patients. Practice nurses, one-third of whom had no specific respiratory training, undertook a computer-guided review in the usual consulting room setting using a laptop computer with the screen visible to them and to the patient. A total of 293 patients (mean (SD) age 69.7 (10.1) years, 163 (55.6%) male) with a diagnosis of COPD were randomly selected from GP databases in 16 practices and assessed. Of 236 patients who had spirometry, 45 (19%) did not have airflow obstruction and the guided clinical history changed the primary diagnosis from COPD in a further 24 patients. In the 191 patients with confirmed COPD, the consultations prompted management changes including 169 recommendations for altered prescribing of inhalers (addition or discontinuation, inhaler dose or device). In addition, 47% of the 55 current smokers were referred for smoking cessation support, 12 (6%) for oxygen assessment, and 47 (24%) for pulmonary rehabilitation. Computer-guided consultations are practicable in general practice. Primary care COPD databases were confirmed to contain a significant proportion of incorrectly assigned patients. They resulted in interventions and the rationalisation of prescribing in line with recommendations. Only in 22 (12%) of those fully assessed was no management change suggested. The introduction of a computer-guided consultation offers the prospect of comprehensive guideline quality management.

  5. Patterns of Gray Matter Abnormalities in Idiopathic Generalized Epilepsy: A Meta-Analysis of Voxel-Based Morphology Studies.

    Directory of Open Access Journals (Sweden)

    Guo Bin

    Full Text Available We aimed to identify the consistent regions of gray matter volume (GMV) abnormalities in idiopathic generalized epilepsy (IGE), and to study the difference in GMV abnormalities among IGE subsyndromes by applying activation likelihood estimation (ALE) meta-analysis. A systematic review of VBM studies on GMV of patients with absence epilepsy (AE), juvenile myoclonic epilepsy (JME), IGE and controls indexed in PubMed and ScienceDirect from January 1999 to June 2016 was conducted. A total of 12 IGE studies, including 7 JME and 3 AE studies, were selected. Meta-analysis was performed on these studies by using the pooled and within-subtypes analysis (www.brainmap.org). Based on the above results, between-subtypes contrast analysis was carried out to detect the abnormal GMV regions common in and unique to each subtype as well. IGE demonstrated significant GMV increase in the right ventral lateral nucleus (VL) and right medial frontal gyrus, and significant GMV decrease in the bilateral pulvinar. For JME, significant GMV increase was seen in the right medial frontal gyrus and right anterior cingulate cortex (ACC), while significant GMV decrease was found in the right pulvinar. In AE, the most significant GMV increase was found in the right VL, and slight GMV reduction was seen in the right medial dorsal nucleus, right subcallosal gyrus, left caudate and left precuneus. No overlapping or unique regions with significant GMV abnormalities were found between JME and AE. This meta-analysis demonstrated that the thalamo-frontal network is a structure with significant GMV abnormality in IGE, and that the IGE subsyndromes show different regions of GMV abnormality. These observations may provide guidance for the clinical diagnosis of IGE.

  6. Upper arm elevation and repetitive shoulder movements: a general population job exposure matrix based on expert ratings and technical measurements.

    Science.gov (United States)

    Dalbøge, Annett; Hansson, Gert-Åke; Frost, Poul; Andersen, Johan Hviid; Heilskov-Hansen, Thomas; Svendsen, Susanne Wulff

    2016-08-01

    We recently constructed a general population job exposure matrix (JEM), The Shoulder JEM, based on expert ratings. The overall aim of this study was to convert expert-rated job exposures for upper arm elevation and repetitive shoulder movements to measurement scales. The Shoulder JEM covers all Danish occupational titles, divided into 172 job groups. For 36 of these job groups, we obtained technical measurements (inclinometry) of upper arm elevation and repetitive shoulder movements. To validate the expert-rated job exposures against the measured job exposures, we used Spearman rank correlations and the explained variance according to linear regression analyses (36 job groups). We used the linear regression equations to convert the expert-rated job exposures for all 172 job groups into predicted measured job exposures. Bland-Altman analyses were used to assess the agreement between the predicted and measured job exposures. The Spearman rank correlations were 0.63 for upper arm elevation and 0.64 for repetitive shoulder movements. The expert-rated job exposures explained 64% and 41% of the variance of the measured job exposures, respectively. The corresponding calibration equations were y = 0.5 %time + 0.16 × expert rating and y = 27 °/s + 0.47 × expert rating. The mean differences between predicted and measured job exposures were zero due to calibration; the 95% limits of agreement were ±2.9 %time for upper arm elevation >90° and ±33 °/s for repetitive shoulder movements. The updated Shoulder JEM can be used to present exposure-response relationships on measurement scales. Published by the BMJ Publishing Group Limited.
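    The calibration step, fitting a least-squares line that maps expert ratings onto the measured exposure scale, can be sketched as follows. The paired data here are hypothetical stand-ins for the study's 36 job groups:

```python
def ols_calibration(expert, measured):
    """Ordinary least squares line: measured ≈ intercept + slope * expert.
    This mirrors the form of the study's calibration equations,
    e.g. y = 0.5 %time + 0.16 × expert rating."""
    n = len(expert)
    mx = sum(expert) / n
    my = sum(measured) / n
    sxx = sum((x - mx) ** 2 for x in expert)
    sxy = sum((x - mx) * (y - my) for x, y in zip(expert, measured))
    slope = sxy / sxx
    intercept = my - slope * mx
    return intercept, slope

# Hypothetical expert ratings vs inclinometer %time with arm elevated >90
# degrees (illustrative numbers only, not the study's data).
expert = [0.5, 1.0, 2.0, 3.0, 4.0, 6.0]
measured = [0.7, 0.6, 1.2, 1.6, 2.1, 3.0]
a, b = ols_calibration(expert, measured)

# Predicted measured exposures for all job groups would then be a + b*x.
predicted = [a + b * x for x in expert]
```

Bland-Altman limits of agreement are then computed from the residuals `measured[i] - predicted[i]` across job groups.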

  7. The EB factory project. I. A fast, neural-net-based, general purpose light curve classifier optimized for eclipsing binaries

    International Nuclear Information System (INIS)

    Paegert, Martin; Stassun, Keivan G.; Burger, Dan M.

    2014-01-01

    We describe a new neural-net-based light curve classifier and provide it with documentation as a ready-to-use tool for the community. While optimized for identification and classification of eclipsing binary stars, the classifier is general purpose, and has been developed for speed in the context of upcoming massive surveys such as the Large Synoptic Survey Telescope. A challenge for classifiers in the context of neural-net training and massive data sets is to minimize the number of parameters required to describe each light curve. We show that a simple and fast geometric representation that encodes the overall light curve shape, together with a chi-square parameter to capture higher-order morphology information results in efficient yet robust light curve classification, especially for eclipsing binaries. Testing the classifier on the ASAS light curve database, we achieve a retrieval rate of 98% and a false-positive rate of 2% for eclipsing binaries. We achieve similarly high retrieval rates for most other periodic variable-star classes, including RR Lyrae, Mira, and delta Scuti. However, the classifier currently has difficulty discriminating between different sub-classes of eclipsing binaries, and suffers a relatively low (∼60%) retrieval rate for multi-mode delta Cepheid stars. We find that it is imperative to train the classifier's neural network with exemplars that include the full range of light curve quality to which the classifier will be expected to perform; the classifier performs well on noisy light curves only when trained with noisy exemplars. The classifier source code, ancillary programs, a trained neural net, and a guide for use, are provided.
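    A simplified stand-in for the classifier's input representation (a coarse geometric shape vector from the phase-folded light curve plus a chi-square against a constant-brightness model) might look like the sketch below. The bin count and assumed photometric error are illustrative choices, not the authors':

```python
def shape_features(phases, mags, n_bins=8, sigma=0.05):
    """Encode a phase-folded light curve as (i) a coarse geometric shape
    vector (median magnitude per phase bin, median-subtracted) and (ii) a
    reduced chi-square against a constant-brightness model. This is a
    simplified stand-in for the classifier's input, with an assumed
    per-point photometric error sigma."""
    bins = [[] for _ in range(n_bins)]
    for p, m in zip(phases, mags):
        bins[min(int(p * n_bins), n_bins - 1)].append(m)
    median = sorted(mags)[len(mags) // 2]
    shape = [(sorted(b)[len(b) // 2] - median) if b else 0.0 for b in bins]
    chi2 = sum((m - median) ** 2 for m in mags) / (sigma ** 2 * (len(mags) - 1))
    return shape, chi2

# An eclipsing-binary-like curve (flat with a dip near phase 0.5) gives a
# much larger chi-square than a constant curve on the same phase grid.
phases = [i / 100 for i in range(100)]
eclipse = [1.0 + (0.4 if 0.45 <= p <= 0.55 else 0.0) for p in phases]
flat = [1.0 for _ in phases]
_, chi2_eclipse = shape_features(phases, eclipse)
_, chi2_flat = shape_features(phases, flat)
```

The fixed-length shape vector plus the scalar chi-square would then form the small input layer of a neural-net classifier, keeping the parameter count per light curve low, as the abstract emphasizes.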

  8. Generalization of exponential based hyperelastic to hyper-viscoelastic model for investigation of mechanical behavior of rate dependent materials.

    Science.gov (United States)

    Narooei, K; Arman, M

    2018-03-01

    In this research, the exponential stretch-based hyperelastic strain energy was generalized to a hyper-viscoelastic model using the heredity integral of the deformation history to take into account strain rate effects on the mechanical behavior of materials. The heredity integral was approximated by the approach of Goh et al. to determine the model parameters, and the same estimation was used for constitutive modeling. To demonstrate the ability of the proposed hyper-viscoelastic model, the stress-strain response of thermoplastic elastomer gel tissue at strain rates from 0.001 to 100/s was studied. In addition to better agreement with the experimental data than the extended Mooney-Rivlin hyper-viscoelastic model, a stable material behavior was predicted for the pure shear and balanced biaxial deformation modes. To present an engineering application of the current model, the Kolsky bar impact test of gel tissue was simulated and the effects of specimen size and inertia on the uniformity of deformation were investigated. As the mechanical response of polyurea is available over the wide strain rate range of 0.0016-6500/s, the current model was also applied to fit these experimental data, with more accuracy than the extended Ogden hyper-viscoelastic model. In the final verification example, pig skin experimental data were used to determine the parameters of the hyper-viscoelastic model. Subsequently, a specimen of pig skin was loaded at different strain rates to a fixed strain and the change of stress with time (stress relaxation) was obtained. The stress relaxation results revealed that the peak stress increases with the applied strain rate up to a saturation loading rate, and that an equilibrium stress of magnitude 0.281 MPa is reached. Copyright © 2017 Elsevier Ltd. All rights reserved.
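    The heredity-integral idea can be sketched generically with a one-term Prony relaxation kernel. This is a linear-viscoelastic toy with assumed parameters (g_inf, g1, tau), not the paper's exponential hyperelastic formulation or the Goh et al. estimation scheme, but it reproduces the qualitative behaviors discussed above: rate-dependent peak stress and relaxation toward an equilibrium stress:

```python
import math

def hereditary_stress(times, strains, g_inf=0.3, g1=0.7, tau=0.5):
    """Numerically evaluate the heredity integral
        sigma(t) = integral_0^t G(t - s) d(eps)/ds ds
    with the relaxation kernel G(t) = g_inf + g1 * exp(-t / tau),
    using midpoint strain increments over the recorded history."""
    stresses = [0.0]
    for i in range(1, len(times)):
        sigma = 0.0
        for j in range(1, i + 1):
            deps = strains[j] - strains[j - 1]
            s_mid = 0.5 * (times[j] + times[j - 1])
            sigma += (g_inf + g1 * math.exp(-(times[i] - s_mid) / tau)) * deps
        stresses.append(sigma)
    return stresses

# Ramp-and-hold strain history: load to eps = 0.1 at rate 1/s, then hold.
# The stress peaks at the end of the ramp and relaxes toward the
# equilibrium value g_inf * 0.1.
dt = 0.01
times = [k * dt for k in range(501)]
strains = [min(t, 0.1) for t in times]
stress = hereditary_stress(times, strains)
```

Faster ramps raise the peak stress while the long-time (equilibrium) stress stays fixed, the same qualitative pattern the abstract reports for pig skin.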

  9. Cost-effectiveness analysis of clinic-based chloral hydrate sedation versus general anaesthesia for paediatric ophthalmological procedures.

    Science.gov (United States)

    Burnett, Heather F; Lambley, Rosemary; West, Stephanie K; Ungar, Wendy J; Mireskandari, Kamiar

    2015-11-01

    The inability of some children to tolerate detailed eye examinations often necessitates general anaesthesia (GA). The objective was to assess the incremental cost-effectiveness of paediatric eye examinations carried out in an outpatient sedation unit compared with GA. An episode-of-care cost-effectiveness analysis was conducted from a societal perspective. Model inputs were based on a retrospective cross-over cohort of Canadian children. Costs ($CAN), adverse events, and numbers of successful procedures were modelled in a decision analysis with one-way and probabilistic sensitivity analysis. The mean cost per patient was $406 (95% CI $401 to $411) for examination under sedation (EUS) and $1135 (95% CI $1125 to $1145) for examination under anaesthesia (EUA). The mean number of successful procedures per patient was 1.39 (95% CI 1.34 to 1.42) for EUS and 2.06 (95% CI 2.02 to 2.11) for EUA. EUA was $729 more costly on average than EUS (95% CI $719 to $738) but resulted in an additional 0.68 successful procedures per child. The result was robust to varying the cost assumptions. Cross-over designs offer a powerful way to assess the costs and effectiveness of two interventions because patients serve as their own controls. This study demonstrated significant savings when ophthalmological examinations were carried out in a hospital outpatient clinic, although with slightly fewer procedures completed. Published by the BMJ Publishing Group Limited.
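The incremental comparison in this record reduces to a simple ratio: extra cost divided by extra effect. A minimal sketch using the reported means (the function name is ours; the abstract's 0.68 extra procedures comes from unrounded means, so the ratio below is approximate):

```python
def icer(cost_a, eff_a, cost_b, eff_b):
    """Incremental cost-effectiveness ratio of strategy B over strategy A:
    extra cost per additional unit of effect."""
    return (cost_b - cost_a) / (eff_b - eff_a)

# Means reported in the abstract: EUS $406 / 1.39 successful procedures,
# EUA $1135 / 2.06 successful procedures per patient.
extra_cost = icer(406, 1.39, 1135, 2.06)  # ≈ $1088 per additional successful procedure
```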

  10. Inductive, Analogical, and Communicative Generalization

    Directory of Open Access Journals (Sweden)

    Adri Smaling

    2003-03-01

    Three forms of inductive generalization - statistical generalization, variation-based generalization and theory-carried generalization - are insufficient for case-to-case generalization, which is a form of analogical generalization. The quality of case-to-case generalization needs to be reinforced by setting up explicit analogical argumentation. Six criteria for evaluating analogical argumentation are discussed. Good analogical reasoning is an indispensable support to forms of communicative generalization - receptive and responsive (participative) generalization - as well as to exemplary generalization.

  11. Somatic symptom profiles in the general population: a latent class analysis in a Danish population-based health survey

    Directory of Open Access Journals (Sweden)

    Eliasen M

    2017-08-01

    Marie Eliasen,1 Torben Jørgensen,1–3 Andreas Schröder,4 Thomas Meinertz Dantoft,1 Per Fink,4 Chalotte Heinsvig Poulsen,1,5 Nanna Borup Johansen,1 Lene Falgaard Eplov,5 Sine Skovbjerg,1 Svend Kreiner2 1Research Centre for Prevention and Health, Centre for Health, The Capital Region of Denmark, Glostrup; 2Department of Public Health, University of Copenhagen, Copenhagen; 3Department of Clinical Medicine, Aalborg University, Aalborg; 4Research Clinic for Functional Disorders and Psychosomatics, Aarhus University Hospital, Aarhus C; 5Mental Health Centre Copenhagen, The Capital Region of Denmark, Hellerup, Denmark. Purpose: The aim of this study was to identify and describe somatic symptom profiles in the general adult population in order to enable further epidemiological research on multiple somatic symptoms. Methods: Information on 19 self-reported common somatic symptoms was obtained from a population-based questionnaire survey of 36,163 randomly selected adults in the Capital Region of Denmark (55.4% women). The participants stated whether they had been considerably bothered by each symptom within the 14 days prior to answering the questionnaire. We used latent class analysis to identify the somatic symptom profiles. The profiles were further described by their associations with age, sex, chronic disease, and self-perceived health. Results: We identified 10 different somatic symptom profiles defined by the number, type, and site of the symptoms. The majority of the population (74.0%) had a profile characterized by no considerably bothering symptoms, while a minor group of 3.9% had profiles defined by a high risk of multiple somatic symptoms. The remaining profiles were more likely to be characterized by a few specific symptoms. The profiles could further be described by their associations with age, sex, chronic disease, and self-perceived health. Conclusion: The identified somatic symptom profiles could be distinguished by number, type, and site of
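Latent class analysis of binary symptom indicators, as used in this record, fits a finite mixture of independent Bernoulli distributions, typically by expectation-maximization. A toy sketch of that model (not the software or settings used in the study; the function name, initialization, and class count are illustrative):

```python
import numpy as np

def lca_em(X, K, n_iter=100, seed=0):
    """Toy EM for a latent class model on binary symptom indicators.
    X: (n, d) 0/1 matrix; K: number of latent profiles.
    Returns class weights pi (K,) and per-class symptom probabilities theta (K, d)."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    pi = np.full(K, 1.0 / K)
    theta = rng.uniform(0.25, 0.75, size=(K, d))
    for _ in range(n_iter):
        # E-step: class responsibilities from Bernoulli log-likelihoods
        logp = X @ np.log(theta).T + (1 - X) @ np.log(1 - theta).T + np.log(pi)
        logp -= logp.max(axis=1, keepdims=True)
        r = np.exp(logp)
        r /= r.sum(axis=1, keepdims=True)
        # M-step: update mixing weights and symptom probabilities
        nk = r.sum(axis=0)
        pi = nk / n
        theta = np.clip((r.T @ X) / nk[:, None], 1e-6, 1 - 1e-6)
    return pi, theta
```

With well-separated classes (e.g., a large "no bothering symptoms" class and a small high-symptom class, mirroring the 74.0% / 3.9% split described above), the recovered class weights approximate the true proportions.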

  12. Preliminary Analysis of the General Performance and Mechanical Behavior of Irradiated FeCrAl Base Alloys and Weldments

    Energy Technology Data Exchange (ETDEWEB)

    Gussev, Maxim N. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Field, Kevin G. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Briggs, Samuel A. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Yamamoto, Yukinori [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States)

    2016-09-30

    The iron-based, iron-chromium-aluminum (FeCrAl) alloys are promising, robust materials for deployment in current and future nuclear power plants. This class of alloys demonstrates excellent performance in a range of environments and conditions, including high-temperature steam (>1000°C). Furthermore, these alloys have the potential for prolonged survival under loss-of-coolant accident (LOCA) conditions compared with the more traditional cladding materials, which are either Zr-based alloys or austenitic steels. However, one of the issues associated with FeCrAl alloys is cracking during welding. The present project investigates the possibility of mitigating welding-induced cracking via alloying and precise structure control of the weldments; in the framework of the project, several advanced alloys were developed and are being investigated prior to and after neutron irradiation to provide insight into the radiation tolerance and mechanical performance of the weldments. The present report provides preliminary results on the post-irradiation characterization and mechanical tests performed during United States Fiscal Year (FY) 2016. Chapter 1 provides a general introduction, and Chapter 2 describes the alloy compositions, welding procedure, specimen geometry, and manufacturing parameters, along with a brief discussion of the irradiation at the High Flux Isotope Reactor (HFIR). Chapter 3 is devoted to the analysis of mechanical tests performed at the hot cell facility; tensile curves and mechanical properties are discussed in detail, focusing on the irradiation temperature. Limited fractography results are also presented and analyzed. The discussion highlights the limitations of testing within a hot cell. Chapter 4 underlines the advantages of in-situ testing and discusses the preliminary results obtained with newly developed miniature specimens. Specimens were moved to the Low Activation Materials Development and Analysis (LAMDA) laboratory and prepared for

  13. General Corrosion studies of a Titanium and Incoloy based alloys under ammoniacal medium and at 290 deg. C

    International Nuclear Information System (INIS)

    Gokhale, B.K.; Keny, S.J.; Kumbhar, A.G.; Rangarajan, S.; Bera, S.; Nuwad Jitendra; Kumar, Sanjukta A.; Wagh, D. N.; Pradhan, S.

    2012-09-01

    For their use in future PWR applications, the general corrosion behaviors of two modified alloys, one titanium-based and one Incoloy-based, were studied and compared at high temperature and high pressure (290 deg. C, 7400 kPa) under an ammoniated atmosphere. Coupons were exposed to solutions of varying ammonia concentration (10, 50 and 100 ppm) at 290 deg. C under non-deaerated conditions in a static autoclave for 20 days. Surface characteristics of the exposed coupons were studied using XRF, SEM, EDAX and XPS. The solution in the autoclave was analyzed for its specific conductivity, pH, and the elemental concentrations leached from the alloys. The exposed titanium-based alloy showed deposition of white crystalline material (300-1000 nm in size) on the surface. Depletion of Ti and an increase in oxygen concentration on the exposed surface were observed. This indicated dissolution of Ti from the surface into solution at high temperature and pressure, its reaction with dissolved oxygen to form the oxide, and redeposition of the oxide on the surface. The oxide film composition was found to change drastically between the 10 and 50 ppm ammoniated solutions: Ti was enriched in the oxide film when the solution contained 50 ppm of ammonia, whereas the opposite effect was observed at 10 ppm. The presence of Ti⁴⁺ in the oxide environment and traces of Cr³⁺ were observed, but no nitrogen or Zr was detected. The specific conductivity of the exposed solution was found to increase by 30 μS/cm and the pH to decrease by 1.5 units. Slight leaching of Ti into solution was observed; no Zr was found in the leached solution. The presence of other elements such as Al, Cr and Ni in the exposed solution indicated leaching of the autoclave construction material (Hastelloy). This alloy showed good resistance to corrosion under the experimental conditions. The exposed surface of the Incoloy-based alloy showed Ni, Cr, Cu and Mn on the surface, with deposition of crystalline particles (200-300 nm in size). The exposed surface also showed a decrease in Cr

  14. The generalized successive approximation and Padé Approximants method for solving an elasticity problem of based on the elastic ground with variable coefficients

    Directory of Open Access Journals (Sweden)

    Mustafa Bayram

    2017-01-01

    In this study, we applied a generalized successive approximation technique to solve the elasticity problem based on elastic ground with variable coefficients. In the first stage, we calculated the generalized successive approximation of the given boundary value problem (BVP), and in the second stage we transformed it into a Padé series. At the end of the study, a test problem is given to clarify the method.
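A Padé [L/M] approximant of the kind this record transforms its series into can be built from Taylor coefficients by solving a small linear system for the denominator and then convolving for the numerator. A minimal sketch of that standard construction (not the paper's specific implementation):

```python
import numpy as np

def pade(c, L, M):
    """Compute the [L/M] Pade approximant P(x)/Q(x) from Taylor coefficients c
    (c[k] is the coefficient of x^k; requires len(c) >= L + M + 1). Q is
    normalized so that Q(0) = 1. Returns (p, q), lowest-order coefficient first."""
    # Denominator: solve sum_{j=1..M} q_j * c[L+k-j] = -c[L+k] for k = 1..M
    A = np.array([[c[L + k - j] if L + k - j >= 0 else 0.0
                   for j in range(1, M + 1)] for k in range(1, M + 1)])
    b = -np.array([c[L + k] for k in range(1, M + 1)])
    q = np.concatenate(([1.0], np.linalg.solve(A, b)))
    # Numerator from the Cauchy product p_k = sum_j q_j * c[k-j]
    p = np.array([sum(q[j] * c[k - j] for j in range(min(k, M) + 1))
                  for k in range(L + 1)])
    return p, q
```

For example, from the Taylor coefficients of exp(x) the [1/1] approximant comes out as (1 + x/2)/(1 − x/2).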

  15. Characteristics of service users and provider organisations associated with experience of out of hours general practitioner care in England: population based cross sectional postal questionnaire survey

    OpenAIRE

    Warren, Fiona C; Abel, Gary; Lyratzopoulos, Georgios; Elliott, Marc N; Richards, Suzanne; Barry, Heather E; Roland, Martin; Campbell, John L

    2015-01-01

    Objective: To investigate the experience of users of out of hours general practitioner services in England, UK. Design: Population based cross sectional postal questionnaire survey. Setting: General Practice Patient Survey 2012-13. Main outcome measures: Potential associations between sociodemographic factors (including ethnicity and ability to take time away from work during working hours to attend a healthcare consultation) and provider organisation type (not for profit, NHS, or commercial)...

  16. Changes in Functional Connectivity Associated with Direct Training and Generalization Effects of a Theory-Based Generative Naming Treatment

    Directory of Open Access Journals (Sweden)

    Swathi Kiran

    2014-04-01

    Nine persons with aphasia (PWA) improved on the trained abstract words; seven PWA also showed generalization to concrete words in the same context-category. The region with the highest node degree in the trained abstract-word network across PWA was the left inferior frontal gyrus pars triangularis (L IFGtri), while the region with the highest node degree in the generalized concrete-word network was the left precentral gyrus. Regions that showed increased connectivity for both training and generalization included L IFGtri, the right middle frontal gyrus (MFG), and bilateral angular gyrus. Regions that showed increased connectivity regardless of whether treatment was given, and whether or not it was successful, included the left MFG and bilateral superior frontal gyrus. Additionally, PWA who generalized showed more left- than right-hemisphere changes in both the abstract and concrete networks, while PWA who improved on the trained abstract words but did not generalize to concrete words showed more left- than right-hemisphere changes for the abstract network but more right- than left-hemisphere changes for the concrete network. These results suggest that (a) direct training and generalization tap into similar neural mechanisms, and (b) changes in the left hemisphere coincide with better treatment outcomes.
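The node degree statistic used in this record to identify hub regions such as L IFGtri is simply the row sum of a thresholded (binary) connectivity matrix. A minimal sketch (the function name and the binary-matrix convention are assumptions; the study's actual thresholding pipeline is not described here):

```python
import numpy as np

def node_degree(adj):
    """Node degrees from a binary, symmetric functional-connectivity adjacency
    matrix (1 = suprathreshold connection). Returns the degree of every node
    and the index of the hub (the node with the highest degree)."""
    adj = np.asarray(adj)
    deg = adj.sum(axis=1)
    return deg, int(deg.argmax())
```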

  17. The effectiveness of three sets of school-based instructional materials and community training on the acquisition and generalization of community laundry skills by students with severe handicaps.

    Science.gov (United States)

    Morrow, S A; Bates, P E

    1987-01-01

    This study examined the effectiveness of three sets of school-based instructional materials and community training on acquisition and generalization of a community laundry skill by nine students with severe handicaps. School-based instruction involved artificial materials (pictures), simulated materials (cardboard replica of a community washing machine), and natural materials (modified home model washing machine). Generalization assessments were conducted at two different community laundromats, on two machines represented fully by the school-based instructional materials and two machines not represented fully by these materials. After three phases of school-based instruction, the students were provided ten community training trials in one laundromat setting and a final assessment was conducted in both the trained and untrained community settings. A multiple probe design across students was used to evaluate the effectiveness of the three types of school instruction and community training. After systematic training, most of the students increased their laundry performance with all three sets of school-based materials; however, generalization of these acquired skills was limited in the two community settings. Direct training in one of the community settings resulted in more efficient acquisition of the laundry skills and enhanced generalization to the untrained laundromat setting for most of the students. Results of this study are discussed in regard to the issue of school versus community-based instruction and recommendations are made for future research in this area.

  18. Identification of National Road Maintenance Needs Based on Strategic Plan of Directorate General of Bina Marga (2015-2019

    Directory of Open Access Journals (Sweden)

    Rizky Ardhiarini

    2016-05-01

    The enhancement of connectivity between the main economic corridors of South Sumatera is a purpose of the Strategic Plan of the Directorate General of Bina Marga and also an objective of MP3EI, the Master Plan for the Acceleration and Expansion of Indonesia's Economic Development, an ambitious plan by the Indonesian government to accelerate its development into an advanced economy; achieving it requires roads in good condition in terms of both performance and pavement. To support optimal road condition, an identification of road management was conducted to determine road maintenance needs based on technical conditions and the importance level of development of the areas traversed. The proposed management program is expected to serve as a baseline for determining maintenance of the road network in South Sumatera from 2015 until 2019. This research used the Multi-criteria Analysis (MCA) method, with criteria consisting of: (1) road network performance, covering width of roadways, traffic flow, V/C ratio, travel speed, and vehicle travel time; and (2) pavement condition, with IRI, SDI, and the proportion of pavement in good condition as parameters. The multi-criteria analysis combined a road-condition assessment score with the importance level of development of the areas traversed, and was conducted for 2015 through 2019. The research concluded that maintenance needs in 2015 were dominated by routine maintenance (95.86% of the total length), and from 2016 until 2019 the needs were likewise dominated by routine maintenance (near 100% of the total length). With the maintenance applied, the shares of total road length meeting the achievement targets are as follows: (a) 100% with width of roadways ≥ 7 m; (b) 97.83% with V/C ratio …; (c) travel speed … 60 km/hour; (d) 17.32% with travel time (TT) …; (e) … 95%; (f) 90.37% with IRI < 4 m/km; and (g) 91.59% with SDI < 50. Yet with the achievement of 100% of total road length with a
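A weighted-sum multi-criteria score of the kind described, combining a road-condition score with an importance level and ranking segments by the result, can be sketched as follows (the weights, segment names, and 0-1 normalization are illustrative, not taken from the study):

```python
def mca_rank(segments, w_condition=0.6, w_importance=0.4):
    """Weighted-sum multi-criteria score per road segment. Each segment is
    (name, condition_score, importance_score) with scores normalized to 0-1;
    a higher combined score means higher maintenance priority."""
    scored = [(name, w_condition * cond + w_importance * imp)
              for name, cond, imp in segments]
    return sorted(scored, key=lambda s: s[1], reverse=True)
```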

  19. Automated contouring error detection based on supervised geometric attribute distribution models for radiation therapy: A general strategy

    International Nuclear Information System (INIS)

    Chen, Hsin-Chen; Tan, Jun; Dolly, Steven; Kavanaugh, James; Harold Li, H.; Altman, Michael; Gay, Hiram; Thorstad, Wade L.; Mutic, Sasa; Li, Hua; Anastasio, Mark A.; Low, Daniel A.

    2015-01-01

    Purpose: One of the most critical steps in radiation therapy treatment is accurate tumor and critical organ-at-risk (OAR) contouring. Both manual and automated contouring processes are prone to errors and to a large degree of inter- and intraobserver variability. These are often due to the limitations of imaging techniques in visualizing human anatomy as well as to inherent anatomical variability among individuals. Physicians/physicists have to reverify all the radiation therapy contours of every patient before using them for treatment planning, which is tedious, laborious, and still not an error-free process. In this study, the authors developed a general strategy based on novel geometric attribute distribution (GAD) models to automatically detect radiation therapy OAR contouring errors and facilitate the current clinical workflow. Methods: Considering the radiation therapy structures’ geometric attributes (centroid, volume, and shape), the spatial relationship of neighboring structures, as well as anatomical similarity of individual contours among patients, the authors established GAD models to characterize the interstructural centroid and volume variations, and the intrastructural shape variations of each individual structure. The GAD models are scalable and deformable, and constrained by their respective principal attribute variations calculated from training sets with verified OAR contours. A new iterative weighted GAD model-fitting algorithm was developed for contouring error detection. Receiver operating characteristic (ROC) analysis was employed in a unique way to optimize the model parameters to satisfy clinical requirements. A total of forty-four head-and-neck patient cases, each of which includes nine critical OAR contours, were utilized to demonstrate the proposed strategy. Twenty-nine out of these forty-four patient cases were utilized to train the inter- and intrastructural GAD models. These training data and the remaining fifteen testing data sets
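The geometric attributes at the core of the GAD models described above (e.g., the centroid and volume of each structure) and a simple population-based outlier check can be sketched as follows. This is a drastically simplified illustration of the underlying idea, not the authors' iterative model-fitting algorithm; the function names, the z-score rule, and the threshold k are assumptions:

```python
import numpy as np

def contour_attributes(mask, spacing=(1.0, 1.0, 1.0)):
    """Geometric attributes of a binary contour mask on a regular voxel grid:
    centroid (in physical units) and volume (voxel count times voxel volume)."""
    idx = np.argwhere(mask)
    centroid = idx.mean(axis=0) * np.asarray(spacing)
    volume = len(idx) * float(np.prod(spacing))
    return centroid, volume

def flag_outlier(value, train_values, k=2.5):
    """Flag an attribute as a potential contouring error when it lies more than
    k standard deviations from the mean of verified training contours."""
    mu, sd = np.mean(train_values), np.std(train_values)
    return abs(value - mu) > k * sd
```

In the actual strategy, such attribute distributions are learned jointly across neighboring structures and patients, and the detection threshold is tuned via ROC analysis rather than fixed.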

  20. Automated contouring error detection based on supervised geometric attribute distribution models for radiation therapy: A general strategy

    Energy Technology Data Exchange (ETDEWEB)

    Chen, Hsin-Chen; Tan, Jun; Dolly, Steven; Kavanaugh, James; Harold Li, H.; Altman, Michael; Gay, Hiram; Thorstad, Wade L.; Mutic, Sasa; Li, Hua, E-mail: huli@radonc.wustl.edu [Department of Radiation Oncology, Washington University, St. Louis, Missouri 63110 (United States); Anastasio, Mark A. [Department of Biomedical Engineering, Washington University, St. Louis, Missouri 63110 (United States); Low, Daniel A. [Department of Radiation Oncology, University of California Los Angeles, Los Angeles, California 90095 (United States)

    2015-02-15

    Purpose: One of the most critical steps in radiation therapy treatment is accurate tumor and critical organ-at-risk (OAR) contouring. Both manual and automated contouring processes are prone to errors and to a large degree of inter- and intraobserver variability. These are often due to the limitations of imaging techniques in visualizing human anatomy as well as to inherent anatomical variability among individuals. Physicians/physicists have to reverify all the radiation therapy contours of every patient before using them for treatment planning, which is tedious, laborious, and still not an error-free process. In this study, the authors developed a general strategy based on novel geometric attribute distribution (GAD) models to automatically detect radiation therapy OAR contouring errors and facilitate the current clinical workflow. Methods: Considering the radiation therapy structures’ geometric attributes (centroid, volume, and shape), the spatial relationship of neighboring structures, as well as anatomical similarity of individual contours among patients, the authors established GAD models to characterize the interstructural centroid and volume variations, and the intrastructural shape variations of each individual structure. The GAD models are scalable and deformable, and constrained by their respective principal attribute variations calculated from training sets with verified OAR contours. A new iterative weighted GAD model-fitting algorithm was developed for contouring error detection. Receiver operating characteristic (ROC) analysis was employed in a unique way to optimize the model parameters to satisfy clinical requirements. A total of forty-four head-and-neck patient cases, each of which includes nine critical OAR contours, were utilized to demonstrate the proposed strategy. Twenty-nine out of these forty-four patient cases were utilized to train the inter- and intrastructural GAD models. These training data and the remaining fifteen testing data sets