WorldWideScience

Sample records for missing preferences algorithms

  1. Collateral missing value imputation: a new robust missing value estimation algorithm for microarray data.

    Science.gov (United States)

    Sehgal, Muhammad Shoaib B; Gondal, Iqbal; Dooley, Laurence S

    2005-05-15

    Microarray data are used in a range of application areas in biology, although they often contain considerable numbers of missing values. These missing values can significantly affect subsequent statistical analysis and machine learning algorithms, so there is a strong motivation to estimate them as accurately as possible before such algorithms are applied. While many imputation algorithms have been proposed, more robust techniques need to be developed so that further analysis of biological data can be accurately undertaken. In this paper, an innovative missing value imputation algorithm called collateral missing value estimation (CMVE) is presented, which uses multiple covariance-based imputation matrices for the final prediction of missing values. The matrices are computed and optimized using least squares regression and linear programming methods. The new CMVE algorithm has been compared with existing estimation techniques including Bayesian principal component analysis imputation (BPCA), least square impute (LSImpute) and K-nearest neighbour (KNN). All these methods were rigorously tested on three separate non-time series (ovarian cancer) datasets and one time series (yeast sporulation) dataset. Each method was quantitatively analyzed using the normalized root mean square (NRMS) error measure, covering a wide range of randomly introduced missing value probabilities from 0.01 to 0.2. Experiments were also undertaken on the yeast dataset, which comprised 1.7% actual missing values, to test the hypothesis that CMVE performed better not only for randomly occurring but also for a real distribution of missing values. The results confirmed that CMVE consistently demonstrated superior and robust estimation of missing values compared with the other methods for both types of data, at the same order of computational complexity. A concise theoretical framework has also been formulated to validate the improved performance of the CMVE
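    The covariance-driven idea behind such estimators can be illustrated with a minimal regression-based imputation sketch. This is an assumption-laden toy, not the authors' CMVE implementation (which combines multiple estimates via least squares and linear programming); all function names here are ours.

```python
# Toy sketch: impute a missing entry by least-squares regression on the
# candidate row most correlated with the target over observed positions.
def pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy) if sx and sy else 0.0

def impute_by_regression(target, candidates, miss_idx):
    """Estimate target[miss_idx] from the candidate row most correlated
    with the target over the observed positions (simple least-squares fit)."""
    obs = [j for j in range(len(target)) if j != miss_idx]
    t_obs = [target[j] for j in obs]
    best = max(candidates, key=lambda c: abs(pearson([c[j] for j in obs], t_obs)))
    x = [best[j] for j in obs]
    n = len(x)
    mx, mt = sum(x) / n, sum(t_obs) / n
    slope = sum((a - mx) * (b - mt) for a, b in zip(x, t_obs)) \
        / sum((a - mx) ** 2 for a in x)
    return mt + slope * (best[miss_idx] - mx)
```

    A perfectly correlated donor row recovers the missing value exactly; real data would warrant combining several donors, as CMVE does.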

  2. Two-pass imputation algorithm for missing value estimation in gene expression time series.

    Science.gov (United States)

    Tsiporkova, Elena; Boeva, Veselka

    2007-10-01

    Gene expression microarray experiments frequently generate datasets with multiple values missing. However, most of the analysis, mining, and classification methods for gene expression data require a complete matrix of gene array values. Therefore, the accurate estimation of missing values in such datasets has been recognized as an important issue, and several imputation algorithms have already been proposed to the biological community. Most of these approaches, however, are not particularly suitable for time series expression profiles. In view of this, we propose a novel imputation algorithm, which is specially suited for the estimation of missing values in gene expression time series data. The algorithm utilizes Dynamic Time Warping (DTW) distance in order to measure the similarity between time expression profiles, and subsequently selects for each gene expression profile with missing values a dedicated set of candidate profiles for estimation. Three different DTW-based imputation (DTWimpute) algorithms have been considered: position-wise, neighborhood-wise, and two-pass imputation. These have initially been prototyped in Perl, and their accuracy has been evaluated on yeast expression time series data using several different parameter settings. The experiments have shown that the two-pass algorithm consistently outperforms the neighborhood-wise and the position-wise algorithms, in particular for datasets with a higher level of missing entries. The performance of the two-pass DTWimpute algorithm has further been benchmarked against the weighted K-Nearest Neighbors algorithm, which is widely used in the biological community; the former algorithm proved superior to the latter. Motivated by these findings, which clearly indicate the added value of the DTW techniques for missing value estimation in time series data, we have built an optimized C++ implementation of the two-pass DTWimpute algorithm.
The software also provides for a choice between three different
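    The core DTW machinery behind such an approach can be sketched in a few lines. This is an illustrative reconstruction under our own naming, not the authors' optimized C++ implementation, which adds position-wise and neighborhood-wise estimation on top of the distance.

```python
# Classic O(n*m) dynamic-programming DTW distance between two profiles,
# plus selection of the k nearest complete profiles as imputation donors.
def dtw(a, b):
    INF = float("inf")
    n, m = len(a), len(b)
    D = [[INF] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            # extend the cheapest of the three admissible warping moves
            D[i][j] = cost + min(D[i - 1][j], D[i][j - 1], D[i - 1][j - 1])
    return D[n][m]

def nearest_profiles(target, profiles, k=2):
    """Pick the k complete profiles closest to the target under DTW,
    e.g. as candidate donors for imputing the target's missing entries."""
    return sorted(profiles, key=lambda p: dtw(target, p))[:k]
```

    Unlike Euclidean distance, DTW tolerates small temporal shifts, which is why it suits time series expression profiles.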

  3. A New Missing Values Estimation Algorithm in Wireless Sensor Networks Based on Convolution

    Directory of Open Access Journals (Sweden)

    Feng Liu

    2013-04-01

    Nowadays, with the rapid development of Internet of Things (IoT) applications, the data missing phenomenon has become very common in wireless sensor networks. This problem can greatly and directly threaten the stability and usability of IoT applications that are built on wireless sensor networks. How to estimate the missing values has attracted wide interest, and some solutions have been proposed. Unlike previous works, in this paper we propose a new convolution-based missing value estimation algorithm. Convolution theory, usually applied in signal and image processing, can also be a practical and efficient way to estimate missing sensor data. The results show that the proposed algorithm is practical and effective, and can estimate the missing values accurately.
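    The basic idea of estimating a missing reading by convolving neighbouring samples with a smoothing kernel can be sketched as follows. The kernel weights and the renormalization over available neighbours are our assumptions for illustration, not the paper's exact scheme.

```python
# Toy sketch: estimate a missing sensor sample as a kernel-weighted average
# of its observed neighbours, renormalizing over positions actually present.
def convolve_estimate(series, miss_idx, kernel=(0.25, 0.5, 0.25)):
    half = len(kernel) // 2
    num = den = 0.0
    for k, w in enumerate(kernel):
        j = miss_idx + k - half
        # skip the missing position itself, out-of-range, and other gaps
        if j == miss_idx or not (0 <= j < len(series)) or series[j] is None:
            continue
        num += w * series[j]
        den += w
    return num / den if den else None
```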

  4. Missing texture reconstruction method based on error reduction algorithm using Fourier transform magnitude estimation scheme.

    Science.gov (United States)

    Ogawa, Takahiro; Haseyama, Miki

    2013-03-01

    A missing texture reconstruction method based on an error reduction (ER) algorithm, including a novel estimation scheme for Fourier transform magnitudes, is presented in this brief. In our method, the Fourier transform magnitude is estimated for a target patch including missing areas, and the missing intensities are estimated by retrieving its phase based on the ER algorithm. Specifically, by monitoring the errors converged in the ER algorithm, known patches whose Fourier transform magnitudes are similar to that of the target patch are selected from the target image. Then, the Fourier transform magnitude of the target patch is estimated from those of the selected known patches and their corresponding errors. Consequently, by using the ER algorithm, we can estimate both the Fourier transform magnitudes and phases to reconstruct the missing areas.

  5. An Improved DINEOF Algorithm for Filling Missing Values in Spatio-Temporal Sea Surface Temperature Data.

    Science.gov (United States)

    Ping, Bo; Su, Fenzhen; Meng, Yunshan

    2016-01-01

    In this study, an improved Data INterpolating Empirical Orthogonal Functions (DINEOF) algorithm for determining missing values in a spatio-temporal dataset is presented. Compared with the ordinary DINEOF algorithm, the iterative reconstruction procedure until convergence based on every fixed EOF to determine the optimal EOF mode is not necessary, and the convergence criterion is reached only once in the improved algorithm. Moreover, in the ordinary DINEOF algorithm, after the optimal EOF mode is determined, the initial matrix with missing data is iteratively reconstructed based on that mode until the reconstruction converges. However, the optimal EOF mode may not be the best EOF for some reconstructed matrices generated in the intermediate steps. Hence, instead of using a single EOF to fill in the missing data, in the improved algorithm the optimal EOFs for reconstruction are variable (because the optimal EOFs are variable, the improved algorithm is called the VE-DINEOF algorithm in this study). To validate the accuracy of the VE-DINEOF algorithm, a sea surface temperature (SST) dataset is reconstructed using the DINEOF, I-DINEOF (proposed in 2015) and VE-DINEOF algorithms. Four parameters (Pearson correlation coefficient, signal-to-noise ratio, root-mean-square error, and mean absolute difference) are used as measures of reconstruction accuracy. Compared with the DINEOF and I-DINEOF algorithms, the VE-DINEOF algorithm can significantly enhance the accuracy of reconstruction and shorten the computational time.
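    The fill-reconstruct-iterate loop common to all DINEOF variants can be sketched with a rank-1 stand-in for the EOF decomposition. This is illustrative only: real DINEOF algorithms use a full SVD/EOF basis with cross-validated mode selection, and the variable-EOF refinement described above goes beyond this toy.

```python
# Toy DINEOF-style loop: mean-fill the gaps, fit a rank-1 approximation by
# alternating least squares, rewrite only the missing entries from it, repeat.
def dineof_rank1(X, missing, iters=200):
    n, m = len(X), len(X[0])
    missing = set(missing)
    obs = [X[i][j] for i in range(n) for j in range(m) if (i, j) not in missing]
    mean = sum(obs) / len(obs)
    A = [[X[i][j] if (i, j) not in missing else mean for j in range(m)]
         for i in range(n)]
    u, v = [1.0] * n, [1.0] * m
    for _ in range(iters):
        for i in range(n):   # update u given v (per-row least squares)
            u[i] = sum(A[i][j] * v[j] for j in range(m)) / sum(w * w for w in v)
        for j in range(m):   # update v given u (per-column least squares)
            v[j] = sum(A[i][j] * u[i] for i in range(n)) / sum(w * w for w in u)
        for i, j in missing:  # rewrite only the missing entries
            A[i][j] = u[i] * v[j]
    return A
```

    On a matrix that is truly rank 1, the loop recovers the deleted entry; the DINEOF family's contribution is choosing how many EOF modes to keep, and (in VE-DINEOF) letting that choice vary across iterations.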

  6. EV Charging Algorithm Implementation with User Price Preference

    Energy Technology Data Exchange (ETDEWEB)

    Wang, Bin; Hu, Boyang; Qiu, Charlie; Chu, Peter; Gadh, Rajit

    2015-02-17

    In this paper, we propose and implement a smart Electric Vehicle (EV) charging algorithm to control EV charging infrastructure according to users' price preferences. EVSEs (Electric Vehicle Supply Equipment), equipped with bidirectional communication devices and smart meters, can be remotely monitored by the proposed charging algorithm running on the EV control center and a mobile app. On the server side, an ARIMA model is used to fit historical charging load data and perform day-ahead prediction. A pricing strategy with an energy bidding policy is proposed and implemented to generate a charging price list, which is broadcast to EV users through the mobile app. On the user side, EV drivers can submit their price preferences and daily travel schedules to negotiate with the control center so as to consume the expected energy while minimizing charging cost. The proposed algorithm is tested and validated through experimental implementations in UCLA parking lots.
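    A minimal version of price-preference-driven slot selection might look like the sketch below. This is our simplification: the deployed system described above also involves ARIMA forecasting, bidding, and user negotiation, none of which are modeled here.

```python
# Toy sketch: charge in the cheapest broadcast price slots inside the
# driver's availability window until the requested energy is covered.
def schedule_charging(prices, arrive, depart, kwh_needed, kw_per_slot):
    """prices: $/kWh per time slot. Returns (chosen slot indices, cost).
    The last slot may deliver slightly more than needed; cost counts full slots."""
    window = range(arrive, depart)
    slots_needed = -(-kwh_needed // kw_per_slot)  # ceiling division
    chosen = sorted(sorted(window, key=lambda t: prices[t])[:slots_needed])
    cost = sum(prices[t] * kw_per_slot for t in chosen)
    return chosen, cost
```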

  7. How Are Mate Preferences Linked with Actual Mate Selection? Tests of Mate Preference Integration Algorithms Using Computer Simulations and Actual Mating Couples.

    Science.gov (United States)

    Conroy-Beam, Daniel; Buss, David M

    2016-01-01

    Prior mate preference research has focused on the content of mate preferences. Yet in real life, people must select mates among potentials who vary along myriad dimensions. How do people incorporate information on many different mate preferences in order to choose which partner to pursue? Here, in Study 1, we compare seven candidate algorithms for integrating multiple mate preferences in a competitive agent-based model of human mate choice evolution. This model shows that a Euclidean algorithm is the most evolvable solution to the problem of selecting fitness-beneficial mates. Next, across three studies of actual couples (Study 2: n = 214; Study 3: n = 259; Study 4: n = 294) we apply the Euclidean algorithm toward predicting mate preference fulfillment overall and preference fulfillment as a function of mate value. Consistent with the hypothesis that mate preferences are integrated according to a Euclidean algorithm, we find that actual mates lie close in multidimensional preference space to the preferences of their partners. Moreover, this Euclidean preference fulfillment is greater for people who are higher in mate value, highlighting theoretically-predictable individual differences in who gets what they want. These new Euclidean tools have important implications for understanding real-world dynamics of mate selection.
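    The Euclidean integration algorithm described here has a direct computational form: a candidate's overall match is the straight-line distance between the chooser's ideal preference vector and the candidate's trait vector. A minimal sketch (variable names are ours):

```python
# Euclidean preference integration: smaller distance = better overall match.
def euclidean_distance(prefs, traits):
    return sum((p - t) ** 2 for p, t in zip(prefs, traits)) ** 0.5

def best_candidate(prefs, candidates):
    """Return the candidate closest to the ideal point in preference space."""
    return min(candidates, key=lambda c: euclidean_distance(prefs, c))
```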

  8. How Are Mate Preferences Linked with Actual Mate Selection? Tests of Mate Preference Integration Algorithms Using Computer Simulations and Actual Mating Couples.

    Directory of Open Access Journals (Sweden)

    Daniel Conroy-Beam

    Prior mate preference research has focused on the content of mate preferences. Yet in real life, people must select mates among potentials who vary along myriad dimensions. How do people incorporate information on many different mate preferences in order to choose which partner to pursue? Here, in Study 1, we compare seven candidate algorithms for integrating multiple mate preferences in a competitive agent-based model of human mate choice evolution. This model shows that a Euclidean algorithm is the most evolvable solution to the problem of selecting fitness-beneficial mates. Next, across three studies of actual couples (Study 2: n = 214; Study 3: n = 259; Study 4: n = 294) we apply the Euclidean algorithm toward predicting mate preference fulfillment overall and preference fulfillment as a function of mate value. Consistent with the hypothesis that mate preferences are integrated according to a Euclidean algorithm, we find that actual mates lie close in multidimensional preference space to the preferences of their partners. Moreover, this Euclidean preference fulfillment is greater for people who are higher in mate value, highlighting theoretically-predictable individual differences in who gets what they want. These new Euclidean tools have important implications for understanding real-world dynamics of mate selection.

  9. How shared preferences in music create bonds between people: values as the missing link.

    Science.gov (United States)

    Boer, Diana; Fischer, Ronald; Strack, Micha; Bond, Michael H; Lo, Eva; Lam, Jason

    2011-09-01

    How can shared music preferences create social bonds between people? A process model is developed in which music preferences as value-expressive attitudes create social bonds via conveyed value similarity. The musical bonding model links two research streams: (a) music preferences as indicators of similarity in value orientations and (b) similarity in value orientations leading to social attraction. Two laboratory experiments and one dyadic field study demonstrated that music can create interpersonal bonds between young people because music preferences can be cues for similar or dissimilar value orientations, with similarity in values then contributing to social attraction. One study tested and ruled out an alternative explanation (via personality similarity), illuminating the differential impact of perceived value similarity versus personality similarity on social attraction. Value similarity is the missing link in explaining the musical bonding phenomenon, which seems to hold for Western and non-Western samples and in experimental and natural settings.

  10. Simple nuclear norm based algorithms for imputing missing data and forecasting in time series

    OpenAIRE

    Butcher, Holly Louise; Gillard, Jonathan William

    2017-01-01

    There has been much recent progress on the use of the nuclear norm for the so-called matrix completion problem (the problem of imputing missing values of a matrix). In this paper we investigate the use of the nuclear norm for modelling time series, with particular attention to imputing missing data and forecasting. We introduce a simple alternating projections type algorithm based on the nuclear norm for these tasks, and consider a number of practical examples.

  11. Fast and Rigorous Assignment Algorithm Multiple Preference and Calculation

    Directory of Open Access Journals (Sweden)

    Ümit Çiftçi

    2010-03-01

    The goal of this paper is to develop an algorithm that evaluates students and then places them according to their ranked preferences. The developed algorithm is also used to implement software. The success and accuracy of the software, as well as of the algorithm, were tested by applying it to the ability test at Beykent University. This ability test is repeated several times in order to fill all available places in the Fine Art Faculty departments every academic year. The algorithm was shown to be very fast and rigorous in the 2008-2009 and 2009-2010 academic years. Keywords: assignment algorithm, student placement, ability test
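    One plausible reading of such a score-plus-ranked-preferences placement procedure is serial dictatorship: higher-scoring students choose first, each taking their best still-open choice. The abstract does not publish the exact rule, so the sketch below is an assumption, not the authors' algorithm.

```python
# Serial-dictatorship placement sketch (our assumption of the procedure).
def assign(students, capacity):
    """students: list of (name, score, [departments in preference order]).
    capacity: {department: open places}. Returns {name: department or None}."""
    remaining = dict(capacity)
    placement = {}
    for name, score, prefs in sorted(students, key=lambda s: -s[1]):
        for dept in prefs:
            if remaining.get(dept, 0) > 0:
                remaining[dept] -= 1
                placement[name] = dept
                break
        else:
            placement[name] = None  # all preferred departments full
    return placement
```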

  12. A New Missing Data Imputation Algorithm Applied to Electrical Data Loggers

    Directory of Open Access Journals (Sweden)

    Concepción Crespo Turrado

    2015-12-01

    Nowadays, data collection is a key process in the study of electrical power networks when searching for harmonics and imbalance among phases. In this context, the lack of data for any of the main electrical variables (phase-to-neutral voltage, phase-to-phase voltage, current in each phase, and power factor) adversely affects any time series study performed. When this occurs, a data imputation process must be carried out in order to substitute estimated values for the missing data. This paper presents a novel missing data imputation method based on multivariate adaptive regression splines (MARS) and compares it with the well-known technique of multivariate imputation by chained equations (MICE). The results obtained demonstrate that the proposed method outperforms the MICE algorithm.

  13. A Weighing Algorithm for Checking Missing Components in a Pharmaceutical Line

    Directory of Open Access Journals (Sweden)

    Alessandro Silvestri

    2014-11-01

    The goal of the present work is the development of an algorithm able to optimize the production line of a pharmaceutical firm. In particular, the proposed weighing procedure allows both checking for missing components in packaging and minimizing false rejects of packages by dynamic scales. The main problem is the simultaneous presence, in the same package, of different components with different variable weights. The consequence is uncertainty in recognizing the absence of one or more components.
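    The basic check can be sketched as follows: compare the measured package weight with the sum of nominal component weights, and attribute any deficit to the component whose weight best explains it. The component names, tolerance handling, and attribution rule are our illustrative assumptions, not the paper's procedure.

```python
# Toy sketch: flag a package whose measured weight is inconsistent with the
# nominal total, and guess which component is absent from the deficit.
def check_package(measured, nominal, tol):
    """nominal: {component: weight in grams}; tol: accepted +/- band.
    Returns (ok, suspected_missing_component_or_None)."""
    full = sum(nominal.values())
    if abs(measured - full) <= tol:
        return True, None
    deficit = full - measured
    # pick the component whose nominal weight best matches the deficit
    suspect = min(nominal, key=lambda c: abs(nominal[c] - deficit))
    return False, suspect
```

    Variable component weights are exactly why the tolerance band matters: too tight and good packages are falsely rejected, too loose and a light component slips through.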

  14. Missing value imputation algorithm based on distance maximization and missing data clustering

    Institute of Scientific and Technical Information of China (English)

    赵星; 王逊; 黄树成

    2018-01-01

    By improving the missing value filling algorithm based on K-means clustering, this paper proposes a filling algorithm based on distance maximization and missing data clustering. First, to remove the original algorithm's requirement that the number of clusters be entered in advance, an improved K-means clustering algorithm is designed: it determines the cluster centers by the maximum distance between data points, generates the number of clusters automatically, and improves clustering quality. Second, the clustering distance function is improved to use a partial distance measure, so that records containing missing values can themselves be clustered, which simplifies the steps of the original filling algorithm. Experiments on the STUDENT ALCOHOL CONSUMPTION dataset show that the proposed algorithm improves efficiency while effectively filling the missing data.
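    The partial distance measure that lets records with missing values be clustered is standard and easy to sketch: compute squared distance over the dimensions observed in both records, then rescale by the fraction observed. The exact scaling used in the paper is not given, so the common d/|observed| convention below is an assumption.

```python
# Partial distance for records with gaps (None marks a missing value):
# squared distance over co-observed dimensions, rescaled to full dimension.
def partial_distance(x, y):
    d = len(x)
    pairs = [(a, b) for a, b in zip(x, y) if a is not None and b is not None]
    if not pairs:
        return float("inf")  # no shared evidence at all
    return (d / len(pairs)) * sum((a - b) ** 2 for a, b in pairs)
```

    With this distance, the max-distance center selection described above can run directly on incomplete records: repeatedly pick as the next center the record farthest from all existing centers.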

  15. Optimal power system generation scheduling by multi-objective genetic algorithms with preferences

    International Nuclear Information System (INIS)

    Zio, E.; Baraldi, P.; Pedroni, N.

    2009-01-01

    Power system generation scheduling is an important issue from both the economic and environmental safety viewpoints. The scheduling involves decisions with regard to the units' start-up and shut-down times and to the assignment of the load demands to the committed generating units for minimizing the system operation costs and the emission of atmospheric pollutants. Like many other real-world engineering problems, power system generation scheduling involves multiple, conflicting optimization criteria for which there exists no single best solution with respect to all criteria considered. Multi-objective optimization algorithms, based on the principle of Pareto optimality, can then be designed to search for the set of nondominated scheduling solutions from which the decision-maker (DM) must a posteriori choose the preferred alternative. On the other hand, information is often available a priori regarding the DM's preference values with respect to the objectives. When possible, it is important to exploit this information during the search so as to focus it on the preferred region of the Pareto-optimal set. In this paper, ways are explored to use this preference information for driving a multi-objective genetic algorithm towards the preferred region of the Pareto-optimal front. Two methods are considered: the first one extends the concept of Pareto dominance by biasing the chromosome replacement step of the algorithm by means of numerical weights that express the DM's preferences; the second one drives the search algorithm by changing the shape of the dominance region according to linear trade-off functions specified by the DM. The effectiveness of the proposed approaches is first compared on a case study from the literature. Then, a nonlinear, constrained, two-objective power generation scheduling problem is effectively tackled
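    The first method's weight-biased comparison can be sketched as an ordinary Pareto dominance test plus a preference-weighted tie-breaker between mutually nondominated solutions. This is a simplified stand-in for the chromosome-replacement bias described, with illustrative weights; the paper's actual replacement operator is not reproduced here.

```python
# Minimizing both objectives: a dominates b iff a is no worse everywhere
# and strictly better somewhere.
def dominates(a, b):
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def preferred(a, b, weights):
    """Standard dominance first; among nondominated pairs, keep the solution
    with the lower preference-weighted cost (weights express DM priorities)."""
    if dominates(a, b):
        return a
    if dominates(b, a):
        return b
    wa = sum(w * x for w, x in zip(weights, a))
    wb = sum(w * x for w, x in zip(weights, b))
    return a if wa <= wb else b
```

    With weights (0.8, 0.2), a DM who cares mostly about the first objective (say, operation cost) will steer the population toward low-cost regions of the front even among nondominated candidates.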

  16. Bioinspired Computational Approach to Missing Value Estimation

    Directory of Open Access Journals (Sweden)

    Israel Edem Agbehadji

    2018-01-01

    Missing data occurs when values of variables in a dataset are not stored. Estimating these missing values is a significant step during the data cleansing phase of a big data management approach. Missing data may be due to nonresponse or omitted entries; if not handled properly, it can produce inaccurate results during data analysis. Although a traditional method such as the maximum likelihood method can extrapolate missing values, this paper proposes a bioinspired method based on the behavior of birds, specifically the Kestrel bird. The paper describes the behavior and characteristics of the Kestrel bird in modeling an algorithm to estimate missing values. The proposed algorithm (KSA) was compared with the WSAMP, Firefly, and BAT algorithms. The results were evaluated using the mean absolute error (MAE). Statistical tests (the Wilcoxon signed-rank test and the Friedman test) were conducted to assess the performance of the algorithms. The Wilcoxon test indicates that time does not have a significant effect on performance, while the quality of estimation between the paired algorithms differed significantly; the Friedman test ranked KSA as the best evolutionary algorithm.

  17. Tensor Completion for Estimating Missing Values in Visual Data

    KAUST Repository

    Liu, Ji

    2012-01-25

    In this paper, we propose an algorithm to estimate missing values in tensors of visual data. The values can be missing due to problems in the acquisition process or because the user manually identified unwanted outliers. Our algorithm works even with a small amount of samples and it can propagate structure to fill larger missing regions. Our methodology is built on recent studies about matrix completion using the matrix trace norm. The contribution of our paper is to extend the matrix case to the tensor case by proposing the first definition of the trace norm for tensors and then by building a working algorithm. First, we propose a definition for the tensor trace norm that generalizes the established definition of the matrix trace norm. Second, similarly to matrix completion, the tensor completion is formulated as a convex optimization problem. Unfortunately, the straightforward problem extension is significantly harder to solve than the matrix case because of the dependency among multiple constraints. To tackle this problem, we developed three algorithms: simple low rank tensor completion (SiLRTC), fast low rank tensor completion (FaLRTC), and high accuracy low rank tensor completion (HaLRTC). The SiLRTC algorithm is simple to implement and employs a relaxation technique to separate the dependent relationships and uses the block coordinate descent (BCD) method to achieve a globally optimal solution; the FaLRTC algorithm utilizes a smoothing scheme to transform the original nonsmooth problem into a smooth one and can be used to solve a general tensor trace norm minimization problem; the HaLRTC algorithm applies the alternating direction method of multipliers (ADMM) to our problem. Our experiments show potential applications of our algorithms and the quantitative evaluation indicates that our methods are more accurate and robust than heuristic approaches. The efficiency comparison indicates that FaLRTC and HaLRTC are more efficient than SiLRTC and between Fa

  18. Tensor Completion for Estimating Missing Values in Visual Data

    KAUST Repository

    Liu, Ji; Musialski, Przemyslaw; Wonka, Peter; Ye, Jieping

    2012-01-01

    In this paper, we propose an algorithm to estimate missing values in tensors of visual data. The values can be missing due to problems in the acquisition process or because the user manually identified unwanted outliers. Our algorithm works even with a small amount of samples and it can propagate structure to fill larger missing regions. Our methodology is built on recent studies about matrix completion using the matrix trace norm. The contribution of our paper is to extend the matrix case to the tensor case by proposing the first definition of the trace norm for tensors and then by building a working algorithm. First, we propose a definition for the tensor trace norm that generalizes the established definition of the matrix trace norm. Second, similarly to matrix completion, the tensor completion is formulated as a convex optimization problem. Unfortunately, the straightforward problem extension is significantly harder to solve than the matrix case because of the dependency among multiple constraints. To tackle this problem, we developed three algorithms: simple low rank tensor completion (SiLRTC), fast low rank tensor completion (FaLRTC), and high accuracy low rank tensor completion (HaLRTC). The SiLRTC algorithm is simple to implement and employs a relaxation technique to separate the dependent relationships and uses the block coordinate descent (BCD) method to achieve a globally optimal solution; the FaLRTC algorithm utilizes a smoothing scheme to transform the original nonsmooth problem into a smooth one and can be used to solve a general tensor trace norm minimization problem; the HaLRTC algorithm applies the alternating direction method of multipliers (ADMM) to our problem. Our experiments show potential applications of our algorithms and the quantitative evaluation indicates that our methods are more accurate and robust than heuristic approaches. The efficiency comparison indicates that FaLRTC and HaLRTC are more efficient than SiLRTC and between Fa

  19. Tensor completion for estimating missing values in visual data.

    Science.gov (United States)

    Liu, Ji; Musialski, Przemyslaw; Wonka, Peter; Ye, Jieping

    2013-01-01

    In this paper, we propose an algorithm to estimate missing values in tensors of visual data. The values can be missing due to problems in the acquisition process or because the user manually identified unwanted outliers. Our algorithm works even with a small amount of samples and it can propagate structure to fill larger missing regions. Our methodology is built on recent studies about matrix completion using the matrix trace norm. The contribution of our paper is to extend the matrix case to the tensor case by proposing the first definition of the trace norm for tensors and then by building a working algorithm. First, we propose a definition for the tensor trace norm that generalizes the established definition of the matrix trace norm. Second, similarly to matrix completion, the tensor completion is formulated as a convex optimization problem. Unfortunately, the straightforward problem extension is significantly harder to solve than the matrix case because of the dependency among multiple constraints. To tackle this problem, we developed three algorithms: simple low rank tensor completion (SiLRTC), fast low rank tensor completion (FaLRTC), and high accuracy low rank tensor completion (HaLRTC). The SiLRTC algorithm is simple to implement and employs a relaxation technique to separate the dependent relationships and uses the block coordinate descent (BCD) method to achieve a globally optimal solution; the FaLRTC algorithm utilizes a smoothing scheme to transform the original nonsmooth problem into a smooth one and can be used to solve a general tensor trace norm minimization problem; the HaLRTC algorithm applies the alternating direction method of multipliers (ADMMs) to our problem. Our experiments show potential applications of our algorithms and the quantitative evaluation indicates that our methods are more accurate and robust than heuristic approaches. 
The efficiency comparison indicates that FaLRTC and HaLRTC are more efficient than SiLRTC and between FaLRTC an

  20. Nature Disaster Risk Evaluation with a Group Decision Making Method Based on Incomplete Hesitant Fuzzy Linguistic Preference Relations

    Directory of Open Access Journals (Sweden)

    Ming Tang

    2018-04-01

    Because the natural disaster system is a very comprehensive and large system, the disaster reduction scheme must rely on risk analysis. Experts' knowledge and experiences play a critical role in disaster risk assessment. The hesitant fuzzy linguistic preference relation is an effective tool to express experts' preference information when comparing pairwise alternatives. Owing to a lack of knowledge or a heavy workload, information may be missing from the hesitant fuzzy linguistic preference relation. Thus, an incomplete hesitant fuzzy linguistic preference relation is constructed. In this paper, we first discuss some properties of the additive consistent hesitant fuzzy linguistic preference relation. Next, the incomplete hesitant fuzzy linguistic preference relation, the normalized hesitant fuzzy linguistic preference relation, and the acceptable hesitant fuzzy linguistic preference relation are defined. Afterwards, three procedures to estimate the missing information are proposed. The first one deals with the situation in which there are only n − 1 known judgments involving all the alternatives; the second one is used to estimate the missing information of the hesitant fuzzy linguistic preference relation with more known judgments; while the third procedure is used to deal with ignorance situations in which there is at least one alternative with totally missing information. Furthermore, an algorithm for group decision making with incomplete hesitant fuzzy linguistic preference relations is given. Finally, we illustrate our model with a case study about flood disaster risk evaluation. A comparative analysis is presented to demonstrate the advantage of our method.
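    For ordinary numeric fuzzy preference relations, the additive-consistency estimate underlying such completion procedures has a compact form: p_ij = p_ik + p_kj − 0.5, averaged over usable intermediate alternatives k. The paper itself works with hesitant fuzzy linguistic terms, so the sketch below is a simplified numeric analogue, not its actual procedure.

```python
# Estimate a missing pairwise judgment via additive consistency:
# p_ij = p_ik + p_kj - 0.5, averaged over intermediates k with known values.
def estimate_missing(P, i, j):
    """P: square matrix, values in [0, 1], None for missing entries."""
    n = len(P)
    ests = [P[i][k] + P[k][j] - 0.5
            for k in range(n)
            if k not in (i, j) and P[i][k] is not None and P[k][j] is not None]
    return sum(ests) / len(ests) if ests else None
```

    With n − 1 known judgments chaining through all alternatives (the paper's first situation), every missing entry becomes reachable through such intermediate paths.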

  21. Nature Disaster Risk Evaluation with a Group Decision Making Method Based on Incomplete Hesitant Fuzzy Linguistic Preference Relations.

    Science.gov (United States)

    Tang, Ming; Liao, Huchang; Li, Zongmin; Xu, Zeshui

    2018-04-13

    Because the natural disaster system is a very comprehensive and large system, the disaster reduction scheme must rely on risk analysis. Experts' knowledge and experiences play a critical role in disaster risk assessment. The hesitant fuzzy linguistic preference relation is an effective tool to express experts' preference information when comparing pairwise alternatives. Owing to a lack of knowledge or a heavy workload, information may be missing from the hesitant fuzzy linguistic preference relation. Thus, an incomplete hesitant fuzzy linguistic preference relation is constructed. In this paper, we first discuss some properties of the additive consistent hesitant fuzzy linguistic preference relation. Next, the incomplete hesitant fuzzy linguistic preference relation, the normalized hesitant fuzzy linguistic preference relation, and the acceptable hesitant fuzzy linguistic preference relation are defined. Afterwards, three procedures to estimate the missing information are proposed. The first one deals with the situation in which there are only n-1 known judgments involving all the alternatives; the second one is used to estimate the missing information of the hesitant fuzzy linguistic preference relation with more known judgments; while the third procedure is used to deal with ignorance situations in which there is at least one alternative with totally missing information. Furthermore, an algorithm for group decision making with incomplete hesitant fuzzy linguistic preference relations is given. Finally, we illustrate our model with a case study about flood disaster risk evaluation. A comparative analysis is presented to demonstrate the advantage of our method.

  2. Are decisions using cost-utility analyses robust to choice of SF-36/SF-12 preference-based algorithm?

    Directory of Open Access Journals (Sweden)

    Walton Surrey M

    2005-03-01

    Full Text Available Abstract Background Cost-utility analysis (CUA) using SF-36/SF-12 data has been facilitated by the development of several preference-based algorithms. The purpose of this study was to illustrate how decision-making could be affected by the choice of preference-based algorithms for the SF-36 and SF-12, and provide some guidance on selecting an appropriate algorithm. Methods Two sets of data were used: (1) a clinical trial of adult asthma patients; and (2) a longitudinal study of post-stroke patients. Incremental costs were assumed to be $2000 per year over standard treatment, and QALY gains realized over a 1-year period. Ten published algorithms were identified, denoted by first author: Brazier (SF-36), Brazier (SF-12), Shmueli, Fryback, Lundberg, Nichol, Franks (3 algorithms), and Lawrence. Incremental cost-utility ratios (ICURs) for each algorithm, stated in dollars per quality-adjusted life year ($/QALY), were ranked and compared between datasets. Results In the asthma patients, estimated ICURs ranged from Lawrence's SF-12 algorithm at $30,769/QALY (95% CI: 26,316 to 36,697) to Brazier's SF-36 algorithm at $63,492/QALY (95% CI: 48,780 to 83,333). ICURs for the stroke cohort varied slightly more dramatically. The MEPS-based algorithm by Franks et al. provided the lowest ICUR at $27,972/QALY (95% CI: 20,942 to 41,667). The Fryback and Shmueli algorithms provided ICURs that were greater than $50,000/QALY and did not have confidence intervals that overlapped with most of the other algorithms. The ICUR-based ranking of algorithms was strongly correlated between the asthma and stroke datasets (r = 0.60). Conclusion SF-36/SF-12 preference-based algorithms produced a wide range of ICURs that could potentially lead to different reimbursement decisions. Brazier's SF-36 and SF-12 algorithms have a strong methodological and theoretical basis and tended to generate relatively higher ICUR estimates, considerations that support a preference for these algorithms over the
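
    The ICUR arithmetic behind these comparisons is simple: a fixed incremental cost divided by the QALY gain implied by each algorithm's utility scores. A minimal sketch, with hypothetical utility values (not taken from the study) chosen only to show how small differences in the utility scale between algorithms move the ratio across a wide range:

    ```python
    # Sketch: incremental cost-utility ratio (ICUR) under the study's setup:
    # incremental cost fixed at $2000/year, QALY gain = change in the
    # preference-based utility score accrued over one year. The utility
    # values below are hypothetical, for illustration only.

    def icur(delta_cost, baseline_utility, followup_utility, years=1.0):
        qaly_gain = (followup_utility - baseline_utility) * years
        if qaly_gain <= 0:
            raise ValueError("treatment is dominated or yields no QALY gain")
        return delta_cost / qaly_gain

    # Two hypothetical scoring algorithms rating the same improvement:
    # a ~0.03-QALY difference between them roughly doubles the ICUR.
    print(round(icur(2000, 0.700, 0.765)))   # 0.065 QALY gain -> ~$30,769/QALY
    print(round(icur(2000, 0.700, 0.7315)))  # 0.0315 QALY gain -> ~$63,492/QALY
    ```

    This is why the choice of preference-based algorithm matters: with the cost side held fixed, the ICUR is fully determined by the utility delta, so algorithms with compressed utility scales systematically inflate the ratio.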

  3. Performance of algorithms that reconstruct missing transverse momentum in $\sqrt{s} = 8$ TeV proton-proton collisions in the ATLAS detector

    CERN Document Server

    Aad, G.; Abdallah, Jalal; Abdinov, Ovsat; Abeloos, Baptiste; Aben, Rosemarie; Abolins, Maris; AbouZeid, Ossama; Abramowicz, Halina; Abreu, Henso; Abreu, Ricardo; Abulaiti, Yiming; Acharya, Bobby Samir; Adamczyk, Leszek; Adams, David; Adelman, Jahred; Adomeit, Stefanie; Adye, Tim; Affolder, Tony; Agatonovic-Jovin, Tatjana; Agricola, Johannes; Aguilar-Saavedra, Juan Antonio; Ahlen, Steven; Ahmadov, Faig; Aielli, Giulio; Akerstedt, Henrik; Åkesson, Torsten Paul Ake; Akimov, Andrei; Alberghi, Gian Luigi; Albert, Justin; Albrand, Solveig; Alconada Verzini, Maria Josefina; Aleksa, Martin; Aleksandrov, Igor; Alexa, Calin; Alexander, Gideon; Alexopoulos, Theodoros; Alhroob, Muhammad; Alimonti, Gianluca; Alio, Lion; Alison, John; Alkire, Steven Patrick; Allbrooke, Benedict; Allen, Benjamin William; Allport, Phillip; Aloisio, Alberto; Alonso, Alejandro; Alonso, Francisco; Alpigiani, Cristiano; Alvarez Gonzalez, Barbara; Άlvarez Piqueras, Damián; Alviggi, Mariagrazia; Amadio, Brian Thomas; Amako, Katsuya; Amaral Coutinho, Yara; Amelung, Christoph; Amidei, Dante; Amor Dos Santos, Susana Patricia; Amorim, Antonio; Amoroso, Simone; Amram, Nir; Amundsen, Glenn; Anastopoulos, Christos; Ancu, Lucian Stefan; Andari, Nansi; Andeen, Timothy; Anders, Christoph Falk; Anders, Gabriel; Anders, John Kenneth; Anderson, Kelby; Andreazza, Attilio; Andrei, George Victor; Angelidakis, Stylianos; Angelozzi, Ivan; Anger, Philipp; Angerami, Aaron; Anghinolfi, Francis; Anisenkov, Alexey; Anjos, Nuno; Annovi, Alberto; Antonelli, Mario; Antonov, Alexey; Antos, Jaroslav; Anulli, Fabio; Aoki, Masato; Aperio Bella, Ludovica; Arabidze, Giorgi; Arai, Yasuo; Araque, Juan Pedro; Arce, Ayana; Arduh, Francisco Anuar; Arguin, Jean-Francois; Argyropoulos, Spyridon; Arik, Metin; Armbruster, Aaron James; Arnaez, Olivier; Arnold, Hannah; Arratia, Miguel; Arslan, Ozan; Artamonov, Andrei; Artoni, Giacomo; Artz, Sebastian; Asai, Shoji; Asbah, Nedaa; Ashkenazi, Adi; Åsman, Barbro; Asquith, Lily; Assamagan, Ketevi; 
Astalos, Robert; Atkinson, Markus; Atlay, Naim Bora; Augsten, Kamil; Avolio, Giuseppe; Axen, Bradley; Ayoub, Mohamad Kassem; Azuelos, Georges; Baak, Max; Baas, Alessandra; Baca, Matthew John; Bachacou, Henri; Bachas, Konstantinos; Backes, Moritz; Backhaus, Malte; Bagiacchi, Paolo; Bagnaia, Paolo; Bai, Yu; Baines, John; Baker, Oliver Keith; Baldin, Evgenii; Balek, Petr; Balestri, Thomas; Balli, Fabrice; Balunas, William Keaton; Banas, Elzbieta; Banerjee, Swagato; Bannoura, Arwa A E; Barak, Liron; Barberio, Elisabetta Luigia; Barberis, Dario; Barbero, Marlon; Barillari, Teresa; Barklow, Timothy; Barlow, Nick; Barnes, Sarah Louise; Barnett, Bruce; Barnett, Michael; Barnovska, Zuzana; Baroncelli, Antonio; Barone, Gaetano; Barr, Alan; Barranco Navarro, Laura; Barreiro, Fernando; Barreiro Guimarães da Costa, João; Bartoldus, Rainer; Barton, Adam Edward; Bartos, Pavol; Basalaev, Artem; Bassalat, Ahmed; Basye, Austin; Bates, Richard; Batista, Santiago Juan; Batley, Richard; Battaglia, Marco; Bauce, Matteo; Bauer, Florian; Bawa, Harinder Singh; Beacham, James; Beattie, Michael David; Beau, Tristan; Beauchemin, Pierre-Hugues; Beccherle, Roberto; Bechtle, Philip; Beck, Hans~Peter; Becker, Kathrin; Becker, Maurice; Beckingham, Matthew; Becot, Cyril; Beddall, Andrew; Beddall, Ayda; Bednyakov, Vadim; Bedognetti, Matteo; Bee, Christopher; Beemster, Lars; Beermann, Thomas; Begel, Michael; Behr, Janna Katharina; Belanger-Champagne, Camille; Bella, Gideon; Bellagamba, Lorenzo; Bellerive, Alain; Bellomo, Massimiliano; Belotskiy, Konstantin; Beltramello, Olga; Benary, Odette; Benchekroun, Driss; Bender, Michael; Bendtz, Katarina; Benekos, Nektarios; Benhammou, Yan; Benhar Noccioli, Eleonora; Benitez Garcia, Jorge-Armando; Benjamin, Douglas; Bensinger, James; Bentvelsen, Stan; Beresford, Lydia; Beretta, Matteo; Berge, David; Bergeaas Kuutmann, Elin; Berger, Nicolas; Berghaus, Frank; Beringer, Jürg; Bernard, Clare; Bernard, Nathan Rogers; Bernius, Catrin; Bernlochner, Florian Urs; 
Berry, Tracey; Berta, Peter; Bertella, Claudia; Bertoli, Gabriele; Bertolucci, Federico; Bertsche, Carolyn; Bertsche, David; Besjes, Geert-Jan; Bessidskaia Bylund, Olga; Bessner, Martin Florian; Besson, Nathalie; Betancourt, Christopher; Bethke, Siegfried; Bevan, Adrian John; Bhimji, Wahid; Bianchi, Riccardo-Maria; Bianchini, Louis; Bianco, Michele; Biebel, Otmar; Biedermann, Dustin; Biesuz, Nicolo Vladi; Biglietti, Michela; Bilbao De Mendizabal, Javier; Bilokon, Halina; Bindi, Marcello; Binet, Sebastien; Bingul, Ahmet; Bini, Cesare; Biondi, Silvia; Bjergaard, David Martin; Black, Curtis; Black, James; Black, Kevin; Blackburn, Daniel; Blair, Robert; Blanchard, Jean-Baptiste; Blanco, Jacobo Ezequiel; Blazek, Tomas; Bloch, Ingo; Blocker, Craig; Blum, Walter; Blumenschein, Ulrike; Blunier, Sylvain; Bobbink, Gerjan; Bobrovnikov, Victor; Bocchetta, Simona Serena; Bocci, Andrea; Bock, Christopher; Boehler, Michael; Boerner, Daniela; Bogaerts, Joannes Andreas; Bogavac, Danijela; Bogdanchikov, Alexander; Bohm, Christian; Boisvert, Veronique; Bold, Tomasz; Boldea, Venera; Boldyrev, Alexey; Bomben, Marco; Bona, Marcella; Boonekamp, Maarten; Borisov, Anatoly; Borissov, Guennadi; Bortfeldt, Jonathan; Bortolotto, Valerio; Bos, Kors; Boscherini, Davide; Bosman, Martine; Boudreau, Joseph; Bouffard, Julian; Bouhova-Thacker, Evelina Vassileva; Boumediene, Djamel Eddine; Bourdarios, Claire; Bousson, Nicolas; Boutle, Sarah Kate; Boveia, Antonio; Boyd, James; Boyko, Igor; Bracinik, Juraj; Brandt, Andrew; Brandt, Gerhard; Brandt, Oleg; Bratzler, Uwe; Brau, Benjamin; Brau, James; Braun, Helmut; Breaden Madden, William Dmitri; Brendlinger, Kurt; Brennan, Amelia Jean; Brenner, Lydia; Brenner, Richard; Bressler, Shikma; Bristow, Timothy Michael; Britton, Dave; Britzger, Daniel; Brochu, Frederic; Brock, Ian; Brock, Raymond; Brooijmans, Gustaaf; Brooks, Timothy; Brooks, William; Brosamer, Jacquelyn; Brost, Elizabeth; Bruckman de Renstrom, Pawel; Bruncko, Dusan; Bruneliere, Renaud; Bruni, 
Alessia; Bruni, Graziano; Brunt, Benjamin; Bruschi, Marco; Bruscino, Nello; Bryant, Patrick; Bryngemark, Lene; Buanes, Trygve; Buat, Quentin; Buchholz, Peter; Buckley, Andrew; Budagov, Ioulian; Buehrer, Felix; Bugge, Lars; Bugge, Magnar Kopangen; Bulekov, Oleg; Bullock, Daniel; Burckhart, Helfried; Burdin, Sergey; Burgard, Carsten Daniel; Burghgrave, Blake; Burke, Stephen; Burmeister, Ingo; Busato, Emmanuel; Büscher, Daniel; Büscher, Volker; Bussey, Peter; Butler, John; Butt, Aatif Imtiaz; Buttar, Craig; Butterworth, Jonathan; Butti, Pierfrancesco; Buttinger, William; Buzatu, Adrian; Buzykaev, Aleksey; Cabrera Urbán, Susana; Caforio, Davide; Cairo, Valentina; Cakir, Orhan; Calace, Noemi; Calafiura, Paolo; Calandri, Alessandro; Calderini, Giovanni; Calfayan, Philippe; Caloba, Luiz; Calvet, David; Calvet, Samuel; Calvet, Thomas Philippe; Camacho Toro, Reina; Camarda, Stefano; Camarri, Paolo; Cameron, David; Caminal Armadans, Roger; Camincher, Clement; Campana, Simone; Campanelli, Mario; Campoverde, Angel; Canale, Vincenzo; Canepa, Anadi; Cano Bret, Marc; Cantero, Josu; Cantrill, Robert; Cao, Tingting; Capeans Garrido, Maria Del Mar; Caprini, Irinel; Caprini, Mihai; Capua, Marcella; Caputo, Regina; Carbone, Ryne Michael; Cardarelli, Roberto; Cardillo, Fabio; Carli, Ina; Carli, Tancredi; Carlino, Gianpaolo; Carminati, Leonardo; Caron, Sascha; Carquin, Edson; Carrillo-Montoya, German D; Carter, Janet; Carvalho, João; Casadei, Diego; Casado, Maria Pilar; Casolino, Mirkoantonio; Casper, David William; Castaneda-Miranda, Elizabeth; Castelli, Angelantonio; Castillo Gimenez, Victoria; Castro, Nuno Filipe; Catinaccio, Andrea; Catmore, James; Cattai, Ariella; Caudron, Julien; Cavaliere, Viviana; Cavalli, Donatella; Cavalli-Sforza, Matteo; Cavasinni, Vincenzo; Ceradini, Filippo; Cerda Alberich, Leonor; Cerio, Benjamin; Santiago Cerqueira, Augusto; Cerri, Alessandro; Cerrito, Lucio; Cerutti, Fabio; Cerv, Matevz; Cervelli, Alberto; Cetin, Serkant Ali; Chafaq, Aziz; Chakraborty, 
Dhiman; Chan, Yat Long; Chang, Philip; Chapman, John Derek; Charlton, Dave; Chau, Chav Chhiv; Chavez Barajas, Carlos Alberto; Che, Siinn; Cheatham, Susan; Chegwidden, Andrew; Chekanov, Sergei; Chekulaev, Sergey; Chelkov, Gueorgui; Chelstowska, Magda Anna; Chen, Chunhui; Chen, Hucheng; Chen, Karen; Chen, Shenjian; Chen, Shion; Chen, Xin; Chen, Ye; Cheng, Hok Chuen; Cheng, Yangyang; Cheplakov, Alexander; Cheremushkina, Evgenia; Cherkaoui El Moursli, Rajaa; Chernyatin, Valeriy; Cheu, Elliott; Chevalier, Laurent; Chiarella, Vitaliano; Chiarelli, Giorgio; Chiodini, Gabriele; Chisholm, Andrew; Chislett, Rebecca Thalatta; Chitan, Adrian; Chizhov, Mihail; Choi, Kyungeon; Chouridou, Sofia; Chow, Bonnie Kar Bo; Christodoulou, Valentinos; Chromek-Burckhart, Doris; Chudoba, Jiri; Chuinard, Annabelle Julia; Chwastowski, Janusz; Chytka, Ladislav; Ciapetti, Guido; Ciftci, Abbas Kenan; Cinca, Diane; Cindro, Vladimir; Cioara, Irina Antonela; Ciocio, Alessandra; Cirotto, Francesco; Citron, Zvi Hirsh; Ciubancan, Mihai; Clark, Allan G; Clark, Brian Lee; Clark, Philip James; Clarke, Robert; Clement, Christophe; Coadou, Yann; Cobal, Marina; Coccaro, Andrea; Cochran, James H; Coffey, Laurel; Colasurdo, Luca; Cole, Brian; Cole, Stephen; Colijn, Auke-Pieter; Collot, Johann; Colombo, Tommaso; Compostella, Gabriele; Conde Muiño, Patricia; Coniavitis, Elias; Connell, Simon Henry; Connelly, Ian; Consorti, Valerio; Constantinescu, Serban; Conta, Claudio; Conti, Geraldine; Conventi, Francesco; Cooke, Mark; Cooper, Ben; Cooper-Sarkar, Amanda; Cornelissen, Thijs; Corradi, Massimo; Corriveau, Francois; Corso-Radu, Alina; Cortes-Gonzalez, Arely; Cortiana, Giorgio; Costa, Giuseppe; Costa, María José; Costanzo, Davide; Cottin, Giovanna; Cowan, Glen; Cox, Brian; Cranmer, Kyle; Crawley, Samuel Joseph; Cree, Graham; Crépé-Renaudin, Sabine; Crescioli, Francesco; Cribbs, Wayne Allen; Crispin Ortuzar, Mireia; Cristinziani, Markus; Croft, Vince; Crosetti, Giovanni; Cuhadar Donszelmann, Tulay; Cummings, Jane; 
Curatolo, Maria; Cúth, Jakub; Cuthbert, Cameron; Czirr, Hendrik; Czodrowski, Patrick; D'Auria, Saverio; D'Onofrio, Monica; Da Cunha Sargedas De Sousa, Mario Jose; Da Via, Cinzia; Dabrowski, Wladyslaw; Dafinca, Alexandru; Dai, Tiesheng; Dale, Orjan; Dallaire, Frederick; Dallapiccola, Carlo; Dam, Mogens; Dandoy, Jeffrey Rogers; Dang, Nguyen Phuong; Daniells, Andrew Christopher; Danninger, Matthias; Dano Hoffmann, Maria; Dao, Valerio; Darbo, Giovanni; Darmora, Smita; Dassoulas, James; Dattagupta, Aparajita; Davey, Will; David, Claire; Davidek, Tomas; Davies, Eleanor; Davies, Merlin; Davison, Peter; Davygora, Yuriy; Dawe, Edmund; Dawson, Ian; Daya-Ishmukhametova, Rozmin; De, Kaushik; de Asmundis, Riccardo; De Benedetti, Abraham; De Castro, Stefano; De Cecco, Sandro; De Groot, Nicolo; de Jong, Paul; De la Torre, Hector; De Lorenzi, Francesco; De Pedis, Daniele; De Salvo, Alessandro; De Sanctis, Umberto; De Santo, Antonella; De Vivie De Regie, Jean-Baptiste; Dearnaley, William James; Debbe, Ramiro; Debenedetti, Chiara; Dedovich, Dmitri; Deigaard, Ingrid; Del Peso, Jose; Del Prete, Tarcisio; Delgove, David; Deliot, Frederic; Delitzsch, Chris Malena; Deliyergiyev, Maksym; Dell'Acqua, Andrea; Dell'Asta, Lidia; Dell'Orso, Mauro; Della Pietra, Massimo; della Volpe, Domenico; Delmastro, Marco; Delsart, Pierre-Antoine; Deluca, Carolina; DeMarco, David; Demers, Sarah; Demichev, Mikhail; Demilly, Aurelien; Denisov, Sergey; Denysiuk, Denys; Derendarz, Dominik; Derkaoui, Jamal Eddine; Derue, Frederic; Dervan, Paul; Desch, Klaus Kurt; Deterre, Cecile; Dette, Karola; Deviveiros, Pier-Olivier; Dewhurst, Alastair; Dhaliwal, Saminder; Di Ciaccio, Anna; Di Ciaccio, Lucia; Di Donato, Camilla; Di Girolamo, Alessandro; Di Girolamo, Beniamino; Di Micco, Biagio; Di Nardo, Roberto; Di Simone, Andrea; Di Sipio, Riccardo; Di Valentino, David; Diaconu, Cristinel; Diamond, Miriam; Dias, Flavia; Diaz, Marco Aurelio; Diehl, Edward; Dietrich, Janet; Diglio, Sara; Dimitrievska, Aleksandra; Dingfelder, 
Jochen; Dita, Petre; Dita, Sanda; Dittus, Fridolin; Djama, Fares; Djobava, Tamar; Djuvsland, Julia Isabell; Barros do Vale, Maria Aline; Dobos, Daniel; Dobre, Monica; Doglioni, Caterina; Dohmae, Takeshi; Dolejsi, Jiri; Dolezal, Zdenek; Dolgoshein, Boris; Donadelli, Marisilvia; Donati, Simone; Dondero, Paolo; Donini, Julien; Dopke, Jens; Doria, Alessandra; Dova, Maria-Teresa; Doyle, Tony; Drechsler, Eric; Dris, Manolis; Du, Yanyan; Duarte-Campderros, Jorge; Dubreuil, Emmanuelle; Duchovni, Ehud; Duckeck, Guenter; Ducu, Otilia Anamaria; Duda, Dominik; Dudarev, Alexey; Duflot, Laurent; Duguid, Liam; Dührssen, Michael; Dunford, Monica; Duran Yildiz, Hatice; Düren, Michael; Durglishvili, Archil; Duschinger, Dirk; Dutta, Baishali; Dyndal, Mateusz; Eckardt, Christoph; Ecker, Katharina Maria; Edgar, Ryan Christopher; Edson, William; Edwards, Nicholas Charles; Eifert, Till; Eigen, Gerald; Einsweiler, Kevin; Ekelof, Tord; El Kacimi, Mohamed; Ellajosyula, Venugopal; Ellert, Mattias; Elles, Sabine; Ellinghaus, Frank; Elliot, Alison; Ellis, Nicolas; Elmsheuser, Johannes; Elsing, Markus; Emeliyanov, Dmitry; Enari, Yuji; Endner, Oliver Chris; Endo, Masaki; Ennis, Joseph Stanford; Erdmann, Johannes; Ereditato, Antonio; Ernis, Gunar; Ernst, Jesse; Ernst, Michael; Errede, Steven; Ertel, Eugen; Escalier, Marc; Esch, Hendrik; Escobar, Carlos; Esposito, Bellisario; Etienvre, Anne-Isabelle; Etzion, Erez; Evans, Hal; Ezhilov, Alexey; Fabbri, Laura; Facini, Gabriel; Fakhrutdinov, Rinat; Falciano, Speranza; Falla, Rebecca Jane; Faltova, Jana; Fang, Yaquan; Fanti, Marcello; Farbin, Amir; Farilla, Addolorata; Farina, Christian; Farooque, Trisha; Farrell, Steven; Farrington, Sinead; Farthouat, Philippe; Fassi, Farida; Fassnacht, Patrick; Fassouliotis, Dimitrios; Faucci Giannelli, Michele; Favareto, Andrea; Fayard, Louis; Fedin, Oleg; Fedorko, Wojciech; Feigl, Simon; Feligioni, Lorenzo; Feng, Cunfeng; Feng, Eric; Feng, Haolu; Fenyuk, Alexander; Feremenga, Last; Fernandez Martinez, Patricia; 
Fernandez Perez, Sonia; Ferrando, James; Ferrari, Arnaud; Ferrari, Pamela; Ferrari, Roberto; Ferreira de Lima, Danilo Enoque; Ferrer, Antonio; Ferrere, Didier; Ferretti, Claudio; Ferretto Parodi, Andrea; Fiedler, Frank; Filipčič, Andrej; Filipuzzi, Marco; Filthaut, Frank; Fincke-Keeler, Margret; Finelli, Kevin Daniel; Fiolhais, Miguel; Fiorini, Luca; Firan, Ana; Fischer, Adam; Fischer, Cora; Fischer, Julia; Fisher, Wade Cameron; Flaschel, Nils; Fleck, Ivor; Fleischmann, Philipp; Fletcher, Gareth Thomas; Fletcher, Gregory; Fletcher, Rob Roy MacGregor; Flick, Tobias; Floderus, Anders; Flores Castillo, Luis; Flowerdew, Michael; Forcolin, Giulio Tiziano; Formica, Andrea; Forti, Alessandra; Fournier, Daniel; Fox, Harald; Fracchia, Silvia; Francavilla, Paolo; Franchini, Matteo; Francis, David; Franconi, Laura; Franklin, Melissa; Frate, Meghan; Fraternali, Marco; Freeborn, David; Fressard-Batraneanu, Silvia; Friedrich, Felix; Froidevaux, Daniel; Frost, James; Fukunaga, Chikara; Fullana Torregrosa, Esteban; Fusayasu, Takahiro; Fuster, Juan; Gabaldon, Carolina; Gabizon, Ofir; Gabrielli, Alessandro; Gabrielli, Andrea; Gach, Grzegorz; Gadatsch, Stefan; Gadomski, Szymon; Gagliardi, Guido; Gagnon, Pauline; Galea, Cristina; Galhardo, Bruno; Gallas, Elizabeth; Gallop, Bruce; Gallus, Petr; Galster, Gorm Aske Gram Krohn; Gan, KK; Gao, Jun; Gao, Yanyan; Gao, Yongsheng; Garay Walls, Francisca; García, Carmen; García Navarro, José Enrique; Garcia-Sciveres, Maurice; Gardner, Robert; Garelli, Nicoletta; Garonne, Vincent; Gatti, Claudio; Gaudiello, Andrea; Gaudio, Gabriella; Gaur, Bakul; Gauthier, Lea; Gavrilenko, Igor; Gay, Colin; Gaycken, Goetz; Gazis, Evangelos; Gecse, Zoltan; Gee, Norman; Geich-Gimbel, Christoph; Geisler, Manuel Patrice; Gemme, Claudia; Genest, Marie-Hélène; Geng, Cong; Gentile, Simonetta; George, Simon; Gerbaudo, Davide; Gershon, Avi; Ghasemi, Sara; Ghazlane, Hamid; Giacobbe, Benedetto; Giagu, Stefano; Giannetti, Paola; Gibbard, Bruce; Gibson, Stephen; Gignac, 
Matthew; Gilchriese, Murdock; Gillam, Thomas; Gillberg, Dag; Gilles, Geoffrey; Gingrich, Douglas; Giokaris, Nikos; Giordani, MarioPaolo; Giorgi, Filippo Maria; Giorgi, Francesco Michelangelo; Giraud, Pierre-Francois; Giromini, Paolo; Giugni, Danilo; Giuliani, Claudia; Giulini, Maddalena; Gjelsten, Børge Kile; Gkaitatzis, Stamatios; Gkialas, Ioannis; Gkougkousis, Evangelos Leonidas; Gladilin, Leonid; Glasman, Claudia; Glatzer, Julian; Glaysher, Paul; Glazov, Alexandre; Goblirsch-Kolb, Maximilian; Goddard, Jack Robert; Godlewski, Jan; Goldfarb, Steven; Golling, Tobias; Golubkov, Dmitry; Gomes, Agostinho; Gonçalo, Ricardo; Goncalves Pinto Firmino Da Costa, Joao; Gonella, Laura; González de la Hoz, Santiago; Gonzalez Parra, Garoe; Gonzalez-Sevilla, Sergio; Goossens, Luc; Gorbounov, Petr Andreevich; Gordon, Howard; Gorelov, Igor; Gorini, Benedetto; Gorini, Edoardo; Gorišek, Andrej; Gornicki, Edward; Goshaw, Alfred; Gössling, Claus; Gostkin, Mikhail Ivanovitch; Goudet, Christophe Raymond; Goujdami, Driss; Goussiou, Anna; Govender, Nicolin; Gozani, Eitan; Graber, Lars; Grabowska-Bold, Iwona; Gradin, Per Olov Joakim; Grafström, Per; Gramling, Johanna; Gramstad, Eirik; Grancagnolo, Sergio; Gratchev, Vadim; Gray, Heather; Graziani, Enrico; Greenwood, Zeno Dixon; Grefe, Christian; Gregersen, Kristian; Gregor, Ingrid-Maria; Grenier, Philippe; Grevtsov, Kirill; Griffiths, Justin; Grillo, Alexander; Grimm, Kathryn; Grinstein, Sebastian; Gris, Philippe Luc Yves; Grivaz, Jean-Francois; Groh, Sabrina; Grohs, Johannes Philipp; Gross, Eilam; Grosse-Knetter, Joern; Grossi, Giulio Cornelio; Grout, Zara Jane; Guan, Liang; Guenther, Jaroslav; Guescini, Francesco; Guest, Daniel; Gueta, Orel; Guido, Elisa; Guillemin, Thibault; Guindon, Stefan; Gul, Umar; Gumpert, Christian; Guo, Jun; Guo, Yicheng; Gupta, Shaun; Gustavino, Giuliano; Gutierrez, Phillip; Gutierrez Ortiz, Nicolas Gilberto; Gutschow, Christian; Guyot, Claude; Gwenlan, Claire; Gwilliam, Carl; Haas, Andy; Haber, Carl; Hadavand, 
Haleh Khani; Haddad, Nacim; Hadef, Asma; Haefner, Petra; Hageböck, Stephan; Hajduk, Zbigniew; Hakobyan, Hrachya; Haleem, Mahsana; Haley, Joseph; Hall, David; Halladjian, Garabed; Hallewell, Gregory David; Hamacher, Klaus; Hamal, Petr; Hamano, Kenji; Hamilton, Andrew; Hamity, Guillermo Nicolas; Hamnett, Phillip George; Han, Liang; Hanagaki, Kazunori; Hanawa, Keita; Hance, Michael; Haney, Bijan; Hanke, Paul; Hanna, Remie; Hansen, Jørgen Beck; Hansen, Jorn Dines; Hansen, Maike Christina; Hansen, Peter Henrik; Hara, Kazuhiko; Hard, Andrew; Harenberg, Torsten; Hariri, Faten; Harkusha, Siarhei; Harrington, Robert; Harrison, Paul Fraser; Hartjes, Fred; Hasegawa, Makoto; Hasegawa, Yoji; Hasib, A; Hassani, Samira; Haug, Sigve; Hauser, Reiner; Hauswald, Lorenz; Havranek, Miroslav; Hawkes, Christopher; Hawkings, Richard John; Hawkins, Anthony David; Hayashi, Takayasu; Hayden, Daniel; Hays, Chris; Hays, Jonathan Michael; Hayward, Helen; Haywood, Stephen; Head, Simon; Heck, Tobias; Hedberg, Vincent; Heelan, Louise; Heim, Sarah; Heim, Timon; Heinemann, Beate; Heinrich, Lukas; Hejbal, Jiri; Helary, Louis; Hellman, Sten; Helsens, Clement; Henderson, James; Henderson, Robert; Heng, Yang; Henkelmann, Steffen; Henriques Correia, Ana Maria; Henrot-Versille, Sophie; Herbert, Geoffrey Henry; Hernández Jiménez, Yesenia; Herten, Gregor; Hertenberger, Ralf; Hervas, Luis; Hesketh, Gavin Grant; Hessey, Nigel; Hetherly, Jeffrey Wayne; Hickling, Robert; Higón-Rodriguez, Emilio; Hill, Ewan; Hill, John; Hiller, Karl Heinz; Hillier, Stephen; Hinchliffe, Ian; Hines, Elizabeth; Hinman, Rachel Reisner; Hirose, Minoru; Hirschbuehl, Dominic; Hobbs, John; Hod, Noam; Hodgkinson, Mark; Hodgson, Paul; Hoecker, Andreas; Hoeferkamp, Martin; Hoenig, Friedrich; Hohlfeld, Marc; Hohn, David; Holmes, Tova Ray; Homann, Michael; Hong, Tae Min; Hooberman, Benjamin Henry; Hopkins, Walter; Horii, Yasuyuki; Horton, Arthur James; Hostachy, Jean-Yves; Hou, Suen; Hoummada, Abdeslam; Howard, Jacob; Howarth, James; 
Hrabovsky, Miroslav; Hristova, Ivana; Hrivnac, Julius; Hryn'ova, Tetiana; Hrynevich, Aliaksei; Hsu, Catherine; Hsu, Pai-hsien Jennifer; Hsu, Shih-Chieh; Hu, Diedi; Hu, Qipeng; Huang, Yanping; Hubacek, Zdenek; Hubaut, Fabrice; Huegging, Fabian; Huffman, Todd Brian; Hughes, Emlyn; Hughes, Gareth; Huhtinen, Mika; Hülsing, Tobias Alexander; Huseynov, Nazim; Huston, Joey; Huth, John; Iacobucci, Giuseppe; Iakovidis, Georgios; Ibragimov, Iskander; Iconomidou-Fayard, Lydia; Ideal, Emma; Idrissi, Zineb; Iengo, Paolo; Igonkina, Olga; Iizawa, Tomoya; Ikegami, Yoichi; Ikeno, Masahiro; Ilchenko, Iurii; Iliadis, Dimitrios; Ilic, Nikolina; Ince, Tayfun; Introzzi, Gianluca; Ioannou, Pavlos; Iodice, Mauro; Iordanidou, Kalliopi; Ippolito, Valerio; Irles Quiles, Adrian; Isaksson, Charlie; Ishino, Masaya; Ishitsuka, Masaki; Ishmukhametov, Renat; Issever, Cigdem; Istin, Serhat; Iturbe Ponce, Julia Mariana; Iuppa, Roberto; Ivarsson, Jenny; Iwanski, Wieslaw; Iwasaki, Hiroyuki; Izen, Joseph; Izzo, Vincenzo; Jabbar, Samina; Jackson, Brett; Jackson, Matthew; Jackson, Paul; Jain, Vivek; Jakobi, Katharina Bianca; Jakobs, Karl; Jakobsen, Sune; Jakoubek, Tomas; Jamin, David Olivier; Jana, Dilip; Jansen, Eric; Jansky, Roland; Janssen, Jens; Janus, Michel; Jarlskog, Göran; Javadov, Namig; Javůrek, Tomáš; Jeanneau, Fabien; Jeanty, Laura; Jejelava, Juansher; Jeng, Geng-yuan; Jennens, David; Jenni, Peter; Jentzsch, Jennifer; Jeske, Carl; Jézéquel, Stéphane; Ji, Haoshuang; Jia, Jiangyong; Jiang, Hai; Jiang, Yi; Jiggins, Stephen; Jimenez Pena, Javier; Jin, Shan; Jinaru, Adam; Jinnouchi, Osamu; Johansson, Per; Johns, Kenneth; Johnson, William Joseph; Jon-And, Kerstin; Jones, Graham; Jones, Roger; Jones, Sarah; Jones, Tim; Jongmanns, Jan; Jorge, Pedro; Jovicevic, Jelena; Ju, Xiangyang; Juste Rozas, Aurelio; Köhler, Markus Konrad; Kaci, Mohammed; Kaczmarska, Anna; Kado, Marumi; Kagan, Harris; Kagan, Michael; Kahn, Sebastien Jonathan; Kajomovitz, Enrique; Kalderon, Charles William; Kaluza, Adam; Kama, 
Sami; Kamenshchikov, Andrey; Kanaya, Naoko; Kaneti, Steven; Kantserov, Vadim; Kanzaki, Junichi; Kaplan, Benjamin; Kaplan, Laser Seymour; Kapliy, Anton; Kar, Deepak; Karakostas, Konstantinos; Karamaoun, Andrew; Karastathis, Nikolaos; Kareem, Mohammad Jawad; Karentzos, Efstathios; Karnevskiy, Mikhail; Karpov, Sergey; Karpova, Zoya; Karthik, Krishnaiyengar; Kartvelishvili, Vakhtang; Karyukhin, Andrey; Kasahara, Kota; Kashif, Lashkar; Kass, Richard; Kastanas, Alex; Kataoka, Yousuke; Kato, Chikuma; Katre, Akshay; Katzy, Judith; Kawagoe, Kiyotomo; Kawamoto, Tatsuo; Kawamura, Gen; Kazama, Shingo; Kazanin, Vassili; Keeler, Richard; Kehoe, Robert; Keller, John; Kempster, Jacob Julian; Kentaro, Kawade; Keoshkerian, Houry; Kepka, Oldrich; Kerševan, Borut Paul; Kersten, Susanne; Keyes, Robert; Khalil-zada, Farkhad; Khandanyan, Hovhannes; Khanov, Alexander; Kharlamov, Alexey; Khoo, Teng Jian; Khovanskiy, Valery; Khramov, Evgeniy; Khubua, Jemal; Kido, Shogo; Kim, Hee Yeun; Kim, Shinhong; Kim, Young-Kee; Kimura, Naoki; Kind, Oliver Maria; King, Barry; King, Matthew; King, Samuel Burton; Kirk, Julie; Kiryunin, Andrey; Kishimoto, Tomoe; Kisielewska, Danuta; Kiss, Florian; Kiuchi, Kenji; Kivernyk, Oleh; Kladiva, Eduard; Klein, Matthew Henry; Klein, Max; Klein, Uta; Kleinknecht, Konrad; Klimek, Pawel; Klimentov, Alexei; Klingenberg, Reiner; Klinger, Joel Alexander; Klioutchnikova, Tatiana; Kluge, Eike-Erik; Kluit, Peter; Kluth, Stefan; Knapik, Joanna; Kneringer, Emmerich; Knoops, Edith; Knue, Andrea; Kobayashi, Aine; Kobayashi, Dai; Kobayashi, Tomio; Kobel, Michael; Kocian, Martin; Kodys, Peter; Koffas, Thomas; Koffeman, Els; Kogan, Lucy Anne; Kohlmann, Simon; Koi, Tatsumi; Kolanoski, Hermann; Kolb, Mathis; Koletsou, Iro; Komar, Aston; Komori, Yuto; Kondo, Takahiko; Kondrashova, Nataliia; Köneke, Karsten; König, Adriaan; Kono, Takanori; Konoplich, Rostislav; Konstantinidis, Nikolaos; Kopeliansky, Revital; Koperny, Stefan; Köpke, Lutz; Kopp, Anna Katharina; Korcyl, Krzysztof; Kordas, 
Kostantinos; Korn, Andreas; Korol, Aleksandr; Korolkov, Ilya; Korolkova, Elena; Kortner, Oliver; Kortner, Sandra; Kosek, Tomas; Kostyukhin, Vadim; Kotov, Vladislav; Kotwal, Ashutosh; Kourkoumeli-Charalampidi, Athina; Kourkoumelis, Christine; Kouskoura, Vasiliki; Koutsman, Alex; Kowalewski, Robert Victor; Kowalski, Tadeusz; Kozanecki, Witold; Kozhin, Anatoly; Kramarenko, Viktor; Kramberger, Gregor; Krasnopevtsev, Dimitriy; Krasny, Mieczyslaw Witold; Krasznahorkay, Attila; Kraus, Jana; Kravchenko, Anton; Kretz, Moritz; Kretzschmar, Jan; Kreutzfeldt, Kristof; Krieger, Peter; Krizka, Karol; Kroeninger, Kevin; Kroha, Hubert; Kroll, Joe; Kroseberg, Juergen; Krstic, Jelena; Kruchonak, Uladzimir; Krüger, Hans; Krumnack, Nils; Kruse, Amanda; Kruse, Mark; Kruskal, Michael; Kubota, Takashi; Kucuk, Hilal; Kuday, Sinan; Kuechler, Jan Thomas; Kuehn, Susanne; Kugel, Andreas; Kuger, Fabian; Kuhl, Andrew; Kuhl, Thorsten; Kukhtin, Victor; Kukla, Romain; Kulchitsky, Yuri; Kuleshov, Sergey; Kuna, Marine; Kunigo, Takuto; Kupco, Alexander; Kurashige, Hisaya; Kurochkin, Yurii; Kus, Vlastimil; Kuwertz, Emma Sian; Kuze, Masahiro; Kvita, Jiri; Kwan, Tony; Kyriazopoulos, Dimitrios; La Rosa, Alessandro; La Rosa Navarro, Jose Luis; La Rotonda, Laura; Lacasta, Carlos; Lacava, Francesco; Lacey, James; Lacker, Heiko; Lacour, Didier; Lacuesta, Vicente Ramón; Ladygin, Evgueni; Lafaye, Remi; Laforge, Bertrand; Lagouri, Theodota; Lai, Stanley; Lambourne, Luke; Lammers, Sabine; Lampen, Caleb; Lampl, Walter; Lançon, Eric; Landgraf, Ulrich; Landon, Murrough; Lang, Valerie Susanne; Lange, J örn Christian; Lankford, Andrew; Lanni, Francesco; Lantzsch, Kerstin; Lanza, Agostino; Laplace, Sandrine; Lapoire, Cecile; Laporte, Jean-Francois; Lari, Tommaso; Lasagni Manghi, Federico; Lassnig, Mario; Laurelli, Paolo; Lavrijsen, Wim; Law, Alexander; Laycock, Paul; Lazovich, Tomo; Le Dortz, Olivier; Le Guirriec, Emmanuel; Le Menedeu, Eve; LeBlanc, Matthew Edgar; LeCompte, Thomas; Ledroit-Guillon, Fabienne Agnes 
Marie; Lee, Claire Alexandra; Lee, Shih-Chang; Lee, Lawrence; Lefebvre, Guillaume; Lefebvre, Michel; Legger, Federica; Leggett, Charles; Lehan, Allan; Lehmann Miotto, Giovanna; Lei, Xiaowen; Leight, William Axel; Leisos, Antonios; Leister, Andrew Gerard; Leite, Marco Aurelio Lisboa; Leitner, Rupert; Lellouch, Daniel; Lemmer, Boris; Leney, Katharine; Lenz, Tatjana; Lenzi, Bruno; Leone, Robert; Leone, Sandra; Leonidopoulos, Christos; Leontsinis, Stefanos; Leroy, Claude; Lester, Christopher; Levchenko, Mikhail; Levêque, Jessica; Levin, Daniel; Levinson, Lorne; Levy, Mark; Lewis, Adrian; Leyko, Agnieszka; Leyton, Michael; Li, Bing; Li, Haifeng; Li, Ho Ling; Li, Lei; Li, Liang; Li, Shu; Li, Xingguo; Li, Yichen; Liang, Zhijun; Liao, Hongbo; Liberti, Barbara; Liblong, Aaron; Lichard, Peter; Lie, Ki; Liebal, Jessica; Liebig, Wolfgang; Limbach, Christian; Limosani, Antonio; Lin, Simon; Lin, Tai-Hua; Lindquist, Brian Edward; Lipeles, Elliot; Lipniacka, Anna; Lisovyi, Mykhailo; Liss, Tony; Lissauer, David; Lister, Alison; Litke, Alan; Liu, Bo; Liu, Dong; Liu, Hao; Liu, Hongbin; Liu, Jian; Liu, Jianbei; Liu, Kun; Liu, Lulu; Liu, Miaoyuan; Liu, Minghui; Liu, Yanlin; Liu, Yanwen; Livan, Michele; Lleres, Annick; Llorente Merino, Javier; Lloyd, Stephen; Lo Sterzo, Francesco; Lobodzinska, Ewelina; Loch, Peter; Lockman, William; Loebinger, Fred; Loevschall-Jensen, Ask Emil; Loew, Kevin Michael; Loginov, Andrey; Lohse, Thomas; Lohwasser, Kristin; Lokajicek, Milos; Long, Brian Alexander; Long, Jonathan David; Long, Robin Eamonn; Looper, Kristina Anne; Lopes, Lourenco; Lopez Mateos, David; Lopez Paredes, Brais; Lopez Paz, Ivan; Lopez Solis, Alvaro; Lorenz, Jeanette; Lorenzo Martinez, Narei; Losada, Marta; Lösel, Philipp Jonathan; Lou, XinChou; Lounis, Abdenour; Love, Jeremy; Love, Peter; Lu, Haonan; Lu, Nan; Lubatti, Henry; Luci, Claudio; Lucotte, Arnaud; Luedtke, Christian; Luehring, Frederick; Lukas, Wolfgang; Luminari, Lamberto; Lundberg, Olof; Lund-Jensen, Bengt; Lynn, David; 
Lysak, Roman; Lytken, Else; Ma, Hong; Ma, Lian Liang; Maccarrone, Giovanni; Macchiolo, Anna; Macdonald, Calum Michael; Maček, Boštjan; Machado Miguens, Joana; Madaffari, Daniele; Madar, Romain; Maddocks, Harvey Jonathan; Mader, Wolfgang; Madsen, Alexander; Maeda, Junpei; Maeland, Steffen; Maeno, Tadashi; Maevskiy, Artem; Magradze, Erekle; Mahlstedt, Joern; Maiani, Camilla; Maidantchik, Carmen; Maier, Andreas Alexander; Maier, Thomas; Maio, Amélia; Majewski, Stephanie; Makida, Yasuhiro; Makovec, Nikola; Malaescu, Bogdan; Malecki, Pawel; Maleev, Victor; Malek, Fairouz; Mallik, Usha; Malon, David; Malone, Caitlin; Maltezos, Stavros; Malyukov, Sergei; Mamuzic, Judita; Mancini, Giada; Mandelli, Beatrice; Mandelli, Luciano; Mandić, Igor; Maneira, José; Manhaes de Andrade Filho, Luciano; Manjarres Ramos, Joany; Mann, Alexander; Mansoulie, Bruno; Mantifel, Rodger; Mantoani, Matteo; Manzoni, Stefano; Mapelli, Livio; March, Luis; Marchiori, Giovanni; Marcisovsky, Michal; Marjanovic, Marija; Marley, Daniel; Marroquim, Fernando; Marsden, Stephen Philip; Marshall, Zach; Marti, Lukas Fritz; Marti-Garcia, Salvador; Martin, Brian Thomas; Martin, Tim; Martin, Victoria Jane; Martin dit Latour, Bertrand; Martinez, Mario; Martin-Haugh, Stewart; Martoiu, Victor Sorin; Martyniuk, Alex; Marx, Marilyn; Marzano, Francesco; Marzin, Antoine; Masetti, Lucia; Mashimo, Tetsuro; Mashinistov, Ruslan; Masik, Jiri; Maslennikov, Alexey; Massa, Ignazio; Massa, Lorenzo; Mastrandrea, Paolo; Mastroberardino, Anna; Masubuchi, Tatsuya; Mättig, Peter; Mattmann, Johannes; Maurer, Julien; Maxfield, Stephen; Maximov, Dmitriy; Mazini, Rachid; Mazza, Simone Michele; Mc Fadden, Neil Christopher; Mc Goldrick, Garrin; Mc Kee, Shawn Patrick; McCarn, Allison; McCarthy, Robert; McCarthy, Tom; McFarlane, Kenneth; Mcfayden, Josh; Mchedlidze, Gvantsa; McMahon, Steve; McPherson, Robert; Medinnis, Michael; Meehan, Samuel; Mehlhase, Sascha; Mehta, Andrew; Meier, Karlheinz; Meineck, Christian; Meirose, Bernhard; Mellado 
Garcia, Bruce Rafael; Meloni, Federico; Mengarelli, Alberto; Menke, Sven; Meoni, Evelin; Mercurio, Kevin Michael; Mergelmeyer, Sebastian; Mermod, Philippe; Merola, Leonardo; Meroni, Chiara; Merritt, Frank; Messina, Andrea; Metcalfe, Jessica; Mete, Alaettin Serhan; Meyer, Carsten; Meyer, Christopher; Meyer, Jean-Pierre; Meyer, Jochen; Meyer Zu Theenhausen, Hanno; Middleton, Robin; Miglioranzi, Silvia; Mijović, Liza; Mikenberg, Giora; Mikestikova, Marcela; Mikuž, Marko; Milesi, Marco; Milic, Adriana; Miller, David; Mills, Corrinne; Milov, Alexander; Milstead, David; Minaenko, Andrey; Minami, Yuto; Minashvili, Irakli; Mincer, Allen; Mindur, Bartosz; Mineev, Mikhail; Ming, Yao; Mir, Lluisa-Maria; Mistry, Khilesh; Mitani, Takashi; Mitrevski, Jovan; Mitsou, Vasiliki A; Miucci, Antonio; Miyagawa, Paul; Mjörnmark, Jan-Ulf; Moa, Torbjoern; Mochizuki, Kazuya; Mohapatra, Soumya; Mohr, Wolfgang; Molander, Simon; Moles-Valls, Regina; Monden, Ryutaro; Mondragon, Matthew Craig; Mönig, Klaus; Monk, James; Monnier, Emmanuel; Montalbano, Alyssa; Montejo Berlingen, Javier; Monticelli, Fernando; Monzani, Simone; Moore, Roger; Morange, Nicolas; Moreno, Deywis; Moreno Llácer, María; Morettini, Paolo; Mori, Daniel; Mori, Tatsuya; Morii, Masahiro; Morinaga, Masahiro; Morisbak, Vanja; Moritz, Sebastian; Morley, Anthony Keith; Mornacchi, Giuseppe; Morris, John; Mortensen, Simon Stark; Morvaj, Ljiljana; Mosidze, Maia; Moss, Josh; Motohashi, Kazuki; Mount, Richard; Mountricha, Eleni; Mouraviev, Sergei; Moyse, Edward; Muanza, Steve; Mudd, Richard; Mueller, Felix; Mueller, James; Mueller, Ralph Soeren Peter; Mueller, Thibaut; Muenstermann, Daniel; Mullen, Paul; Mullier, Geoffrey; Munoz Sanchez, Francisca Javiela; Murillo Quijada, Javier Alberto; Murray, Bill; Musheghyan, Haykuhi; Myagkov, Alexey; Myska, Miroslav; Nachman, Benjamin Philip; Nackenhorst, Olaf; Nadal, Jordi; Nagai, Koichi; Nagai, Ryo; Nagai, Yoshikazu; Nagano, Kunihiro; Nagasaka, Yasushi; Nagata, Kazuki; Nagel, Martin; Nagy, 
Elemer; Nairz, Armin Michael; Nakahama, Yu; Nakamura, Koji; Nakamura, Tomoaki; Nakano, Itsuo; Namasivayam, Harisankar; Naranjo Garcia, Roger Felipe; Narayan, Rohin; Narrias Villar, Daniel Isaac; Naryshkin, Iouri; Naumann, Thomas; Navarro, Gabriela; Nayyar, Ruchika; Neal, Homer; Nechaeva, Polina; Neep, Thomas James; Nef, Pascal Daniel; Negri, Andrea; Negrini, Matteo; Nektarijevic, Snezana; Nellist, Clara; Nelson, Andrew; Nemecek, Stanislav; Nemethy, Peter; Nepomuceno, Andre Asevedo; Nessi, Marzio; Neubauer, Mark; Neumann, Manuel; Neves, Ricardo; Nevski, Pavel; Newman, Paul; Nguyen, Duong Hai; Nickerson, Richard; Nicolaidou, Rosy; Nicquevert, Bertrand; Nielsen, Jason; Nikiforov, Andriy; Nikolaenko, Vladimir; Nikolic-Audit, Irena; Nikolopoulos, Konstantinos; Nilsen, Jon Kerr; Nilsson, Paul; Ninomiya, Yoichi; Nisati, Aleandro; Nisius, Richard; Nobe, Takuya; Nodulman, Lawrence; Nomachi, Masaharu; Nomidis, Ioannis; Nooney, Tamsin; Norberg, Scarlet; Nordberg, Markus; Novgorodova, Olga; Nowak, Sebastian; Nozaki, Mitsuaki; Nozka, Libor; Ntekas, Konstantinos; Nurse, Emily; Nuti, Francesco; O'grady, Fionnbarr; O'Neil, Dugan; O'Shea, Val; Oakham, Gerald; Oberlack, Horst; Obermann, Theresa; Ocariz, Jose; Ochi, Atsuhiko; Ochoa, Ines; Ochoa-Ricoux, Juan Pedro; Oda, Susumu; Odaka, Shigeru; Ogren, Harold; Oh, Alexander; Oh, Seog; Ohm, Christian; Ohman, Henrik; Oide, Hideyuki; Okawa, Hideki; Okumura, Yasuyuki; Okuyama, Toyonobu; Olariu, Albert; Oleiro Seabra, Luis Filipe; Olivares Pino, Sebastian Andres; Oliveira Damazio, Denis; Olszewski, Andrzej; Olszowska, Jolanta; Onofre, António; Onogi, Kouta; Onyisi, Peter; Oram, Christopher; Oreglia, Mark; Oren, Yona; Orestano, Domizia; Orlando, Nicola; Orr, Robert; Osculati, Bianca; Ospanov, Rustem; Otero y Garzon, Gustavo; Otono, Hidetoshi; Ouchrif, Mohamed; Ould-Saada, Farid; Ouraou, Ahmimed; Oussoren, Koen Pieter; Ouyang, Qun; Ovcharova, Ana; Owen, Mark; Owen, Rhys Edward; Ozcan, Veysi Erkcan; Ozturk, Nurcan; Pachal, Katherine; Pacheco 
Pages, Andres; Padilla Aranda, Cristobal; Pagáčová, Martina; Pagan Griso, Simone; Paige, Frank; Pais, Preema; Pajchel, Katarina; Palacino, Gabriel; Palestini, Sandro; Palka, Marek; Pallin, Dominique; Palma, Alberto; Panagiotopoulou, Evgenia; Pandini, Carlo Enrico; Panduro Vazquez, William; Pani, Priscilla; Panitkin, Sergey; Pantea, Dan; Paolozzi, Lorenzo; Papadopoulou, Theodora; Papageorgiou, Konstantinos; Paramonov, Alexander; Paredes Hernandez, Daniela; Parker, Michael Andrew; Parker, Kerry Ann; Parodi, Fabrizio; Parsons, John; Parzefall, Ulrich; Pascuzzi, Vincent; Pasqualucci, Enrico; Passaggio, Stefano; Pastore, Fernanda; Pastore, Francesca; Pásztor, Gabriella; Pataraia, Sophio; Patel, Nikhul; Pater, Joleen; Pauly, Thilo; Pearce, James; Pearson, Benjamin; Pedersen, Lars Egholm; Pedersen, Maiken; Pedraza Lopez, Sebastian; Pedro, Rute; Peleganchuk, Sergey; Pelikan, Daniel; Penc, Ondrej; Peng, Cong; Peng, Haiping; Penning, Bjoern; Penwell, John; Perepelitsa, Dennis; Perez Codina, Estel; Perini, Laura; Pernegger, Heinz; Perrella, Sabrina; Peschke, Richard; Peshekhonov, Vladimir; Peters, Krisztian; Peters, Yvonne; Petersen, Brian; Petersen, Troels; Petit, Elisabeth; Petridis, Andreas; Petridou, Chariclia; Petroff, Pierre; Petrolo, Emilio; Petrucci, Fabrizio; Pettersson, Nora Emilia; Peyaud, Alan; Pezoa, Raquel; Phillips, Peter William; Piacquadio, Giacinto; Pianori, Elisabetta; Picazio, Attilio; Piccaro, Elisa; Piccinini, Maurizio; Pickering, Mark Andrew; Piegaia, Ricardo; Pilcher, James; Pilkington, Andrew; Pin, Arnaud Willy J; Pina, João Antonio; Pinamonti, Michele; Pinfold, James; Pingel, Almut; Pires, Sylvestre; Pirumov, Hayk; Pitt, Michael; Pizio, Caterina; Plazak, Lukas; Pleier, Marc-Andre; Pleskot, Vojtech; Plotnikova, Elena; Plucinski, Pawel; Pluth, Daniel; Poettgen, Ruth; Poggioli, Luc; Pohl, David-leon; Polesello, Giacomo; Poley, Anne-luise; Policicchio, Antonio; Polifka, Richard; Polini, Alessandro; Pollard, Christopher Samuel; Polychronakos, Venetios; 
Pommès, Kathy; Pontecorvo, Ludovico; Pope, Bernard; Popeneciu, Gabriel Alexandru; Popovic, Dragan; Poppleton, Alan; Pospisil, Stanislav; Potamianos, Karolos; Potrap, Igor; Potter, Christina; Potter, Christopher; Poulard, Gilbert; Poveda, Joaquin; Pozdnyakov, Valery; Pozo Astigarraga, Mikel Eukeni; Pralavorio, Pascal; Pranko, Aliaksandr; Prell, Soeren; Price, Darren; Price, Lawrence; Primavera, Margherita; Prince, Sebastien; Proissl, Manuel; Prokofiev, Kirill; Prokoshin, Fedor; Protopapadaki, Eftychia-sofia; Protopopescu, Serban; Proudfoot, James; Przybycien, Mariusz; Puddu, Daniele; Puldon, David; Purohit, Milind; Puzo, Patrick; Qian, Jianming; Qin, Gang; Qin, Yang; Quadt, Arnulf; Quarrie, David; Quayle, William; Queitsch-Maitland, Michaela; Quilty, Donnchadha; Raddum, Silje; Radeka, Veljko; Radescu, Voica; Radhakrishnan, Sooraj Krishnan; Radloff, Peter; Rados, Pere; Ragusa, Francesco; Rahal, Ghita; Rajagopalan, Srinivasan; Rammensee, Michael; Rangel-Smith, Camila; Rauscher, Felix; Rave, Stefan; Ravenscroft, Thomas; Raymond, Michel; Read, Alexander Lincoln; Readioff, Nathan Peter; Rebuzzi, Daniela; Redelbach, Andreas; Redlinger, George; Reece, Ryan; Reeves, Kendall; Rehnisch, Laura; Reichert, Joseph; Reisin, Hernan; Rembser, Christoph; Ren, Huan; Rescigno, Marco; Resconi, Silvia; Rezanova, Olga; Reznicek, Pavel; Rezvani, Reyhaneh; Richter, Robert; Richter, Stefan; Richter-Was, Elzbieta; Ricken, Oliver; Ridel, Melissa; Rieck, Patrick; Riegel, Christian Johann; Rieger, Julia; Rifki, Othmane; Rijssenbeek, Michael; Rimoldi, Adele; Rinaldi, Lorenzo; Ristić, Branislav; Ritsch, Elmar; Riu, Imma; Rizatdinova, Flera; Rizvi, Eram; Robertson, Steven; Robichaud-Veronneau, Andree; Robinson, Dave; Robinson, James; Robson, Aidan; Roda, Chiara; Rodina, Yulia; Rodriguez Perez, Andrea; Roe, Shaun; Rogan, Christopher Sean; Røhne, Ole; Romaniouk, Anatoli; Romano, Marino; Romano Saez, Silvestre Marino; Romero Adam, Elena; Rompotis, Nikolaos; Ronzani, Manfredi; Roos, Lydia; Ros, 
Eduardo; Rosati, Stefano; Rosbach, Kilian; Rose, Peyton; Rosenthal, Oliver; Rossetti, Valerio; Rossi, Elvira; Rossi, Leonardo Paolo; Rosten, Jonatan; Rosten, Rachel; Rotaru, Marina; Roth, Itamar; Rothberg, Joseph; Rousseau, David; Royon, Christophe; Rozanov, Alexandre; Rozen, Yoram; Ruan, Xifeng; Rubbo, Francesco; Rubinskiy, Igor; Rud, Viacheslav; Rudolph, Matthew Scott; Rühr, Frederik; Ruiz-Martinez, Aranzazu; Rurikova, Zuzana; Rusakovich, Nikolai; Ruschke, Alexander; Russell, Heather; Rutherfoord, John; Ruthmann, Nils; Ryabov, Yury; Rybar, Martin; Rybkin, Grigori; Ryder, Nick; Ryzhov, Andrey; Saavedra, Aldo; Sabato, Gabriele; Sacerdoti, Sabrina; Sadrozinski, Hartmut; Sadykov, Renat; Safai Tehrani, Francesco; Saha, Puja; Sahinsoy, Merve; Saimpert, Matthias; Saito, Tomoyuki; Sakamoto, Hiroshi; Sakurai, Yuki; Salamanna, Giuseppe; Salamon, Andrea; Salazar Loyola, Javier Esteban; Salek, David; Sales De Bruin, Pedro Henrique; Salihagic, Denis; Salnikov, Andrei; Salt, José; Salvatore, Daniela; Salvatore, Pasquale Fabrizio; Salvucci, Antonio; Salzburger, Andreas; Sammel, Dirk; Sampsonidis, Dimitrios; Sanchez, Arturo; Sánchez, Javier; Sanchez Martinez, Victoria; Sandaker, Heidi; Sandbach, Ruth Laura; Sander, Heinz Georg; Sanders, Michiel; Sandhoff, Marisa; Sandoval, Carlos; Sandstroem, Rikard; Sankey, Dave; Sannino, Mario; Sansoni, Andrea; Santoni, Claudio; Santonico, Rinaldo; Santos, Helena; Santoyo Castillo, Itzebelt; Sapp, Kevin; Sapronov, Andrey; Saraiva, João; Sarrazin, Bjorn; Sasaki, Osamu; Sasaki, Yuichi; Sato, Koji; Sauvage, Gilles; Sauvan, Emmanuel; Savage, Graham; Savard, Pierre; Sawyer, Craig; Sawyer, Lee; Saxon, James; Sbarra, Carla; Sbrizzi, Antonio; Scanlon, Tim; Scannicchio, Diana; Scarcella, Mark; Scarfone, Valerio; Schaarschmidt, Jana; Schacht, Peter; Schaefer, Douglas; Schaefer, Ralph; Schaeffer, Jan; Schaepe, Steffen; Schaetzel, Sebastian; Schäfer, Uli; Schaffer, Arthur; Schaile, Dorothee; Schamberger, R Dean; Scharf, Veit; Schegelsky, Valery; 
Scheirich, Daniel; Schernau, Michael; Schiavi, Carlo; Schillo, Christian; Schioppa, Marco; Schlenker, Stefan; Schmieden, Kristof; Schmitt, Christian; Schmitt, Sebastian; Schmitt, Stefan; Schmitz, Simon; Schneider, Basil; Schnellbach, Yan Jie; Schnoor, Ulrike; Schoeffel, Laurent; Schoening, Andre; Schoenrock, Bradley Daniel; Schopf, Elisabeth; Schorlemmer, Andre Lukas; Schott, Matthias; Schouten, Doug; Schovancova, Jaroslava; Schramm, Steven; Schreyer, Manuel; Schuh, Natascha; Schultens, Martin Johannes; Schultz-Coulon, Hans-Christian; Schulz, Holger; Schumacher, Markus; Schumm, Bruce; Schune, Philippe; Schwanenberger, Christian; Schwartzman, Ariel; Schwarz, Thomas Andrew; Schwegler, Philipp; Schweiger, Hansdieter; Schwemling, Philippe; Schwienhorst, Reinhard; Schwindling, Jerome; Schwindt, Thomas; Sciolla, Gabriella; Scuri, Fabrizio; Scutti, Federico; Searcy, Jacob; Seema, Pienpen; Seidel, Sally; Seiden, Abraham; Seifert, Frank; Seixas, José; Sekhniaidze, Givi; Sekhon, Karishma; Sekula, Stephen; Seliverstov, Dmitry; Semprini-Cesari, Nicola; Serfon, Cedric; Serin, Laurent; Serkin, Leonid; Sessa, Marco; Seuster, Rolf; Severini, Horst; Sfiligoj, Tina; Sforza, Federico; Sfyrla, Anna; Shabalina, Elizaveta; Shaikh, Nabila Wahab; Shan, Lianyou; Shang, Ruo-yu; Shank, James; Shapiro, Marjorie; Shatalov, Pavel; Shaw, Kate; Shaw, Savanna Marie; Shcherbakova, Anna; Shehu, Ciwake Yusufu; Sherwood, Peter; Shi, Liaoshan; Shimizu, Shima; Shimmin, Chase Owen; Shimojima, Makoto; Shiyakova, Mariya; Shmeleva, Alevtina; Shoaleh Saadi, Diane; Shochet, Mel; Shojaii, Seyedruhollah; Shrestha, Suyog; Shulga, Evgeny; Shupe, Michael; Sicho, Petr; Sidebo, Per Edvin; Sidiropoulou, Ourania; Sidorov, Dmitri; Sidoti, Antonio; Siegert, Frank; Sijacki, Djordje; Silva, José; Silverstein, Samuel; Simak, Vladislav; Simard, Olivier; Simic, Ljiljana; Simion, Stefan; Simioni, Eduard; Simmons, Brinick; Simon, Dorian; Simon, Manuel; Simoniello, Rosa; Sinervo, Pekka; Sinev, Nikolai; Sioli, Maximiliano; 
Siragusa, Giovanni; Sivoklokov, Serguei; Sjölin, Jörgen; Sjursen, Therese; Skinner, Malcolm Bruce; Skottowe, Hugh Philip; Skubic, Patrick; Slater, Mark; Slavicek, Tomas; Slawinska, Magdalena; Sliwa, Krzysztof; Smakhtin, Vladimir; Smart, Ben; Smestad, Lillian; Smirnov, Sergei; Smirnov, Yury; Smirnova, Lidia; Smirnova, Oxana; Smith, Matthew; Smith, Russell; Smizanska, Maria; Smolek, Karel; Snesarev, Andrei; Snidero, Giacomo; Snyder, Scott; Sobie, Randall; Socher, Felix; Soffer, Abner; Soh, Dart-yin; Sokhrannyi, Grygorii; Solans Sanchez, Carlos; Solar, Michael; Soldatov, Evgeny; Soldevila, Urmila; Solodkov, Alexander; Soloshenko, Alexei; Solovyanov, Oleg; Solovyev, Victor; Sommer, Philip; Song, Hong Ye; Soni, Nitesh; Sood, Alexander; Sopczak, Andre; Sopko, Vit; Sorin, Veronica; Sosa, David; Sotiropoulou, Calliope Louisa; Soualah, Rachik; Soukharev, Andrey; South, David; Sowden, Benjamin; Spagnolo, Stefania; Spalla, Margherita; Spangenberg, Martin; Spanò, Francesco; Sperlich, Dennis; Spettel, Fabian; Spighi, Roberto; Spigo, Giancarlo; Spiller, Laurence Anthony; Spousta, Martin; St Denis, Richard Dante; Stabile, Alberto; Stahlman, Jonathan; Stamen, Rainer; Stamm, Soren; Stanecka, Ewa; Stanek, Robert; Stanescu, Cristian; Stanescu-Bellu, Madalina; Stanitzki, Marcel Michael; Stapnes, Steinar; Starchenko, Evgeny; Stark, Giordon; Stark, Jan; Staroba, Pavel; Starovoitov, Pavel; Stärz, Steffen; Staszewski, Rafal; Steinberg, Peter; Stelzer, Bernd; Stelzer, Harald Joerg; Stelzer-Chilton, Oliver; Stenzel, Hasko; Stewart, Graeme; Stillings, Jan Andre; Stockton, Mark; Stoebe, Michael; Stoicea, Gabriel; Stolte, Philipp; Stonjek, Stefan; Stradling, Alden; Straessner, Arno; Stramaglia, Maria Elena; Strandberg, Jonas; Strandberg, Sara; Strandlie, Are; Strauss, Michael; Strizenec, Pavol; Ströhmer, Raimund; Strom, David; Stroynowski, Ryszard; Strubig, Antonia; Stucci, Stefania Antonia; Stugu, Bjarne; Styles, Nicholas Adam; Su, Dong; Su, Jun; Subramaniam, Rajivalochan; Suchek, Stanislav; 
Sugaya, Yorihito; Suk, Michal; Sulin, Vladimir; Sultansoy, Saleh; Sumida, Toshi; Sun, Siyuan; Sun, Xiaohu; Sundermann, Jan Erik; Suruliz, Kerim; Susinno, Giancarlo; Sutton, Mark; Suzuki, Shota; Svatos, Michal; Swiatlowski, Maximilian; Sykora, Ivan; Sykora, Tomas; Ta, Duc; Taccini, Cecilia; Tackmann, Kerstin; Taenzer, Joe; Taffard, Anyes; Tafirout, Reda; Taiblum, Nimrod; Takai, Helio; Takashima, Ryuichi; Takeda, Hiroshi; Takeshita, Tohru; Takubo, Yosuke; Talby, Mossadek; Talyshev, Alexey; Tam, Jason; Tan, Kong Guan; Tanaka, Junichi; Tanaka, Reisaburo; Tanaka, Shuji; Tannenwald, Benjamin Bordy; Tapia Araya, Sebastian; Tapprogge, Stefan; Tarem, Shlomit; Tartarelli, Giuseppe Francesco; Tas, Petr; Tasevsky, Marek; Tashiro, Takuya; Tassi, Enrico; Tavares Delgado, Ademar; Tayalati, Yahya; Taylor, Aaron; Taylor, Geoffrey; Taylor, Pierre Thor Elliot; Taylor, Wendy; Teischinger, Florian Alfred; Teixeira-Dias, Pedro; Temming, Kim Katrin; Temple, Darren; Ten Kate, Herman; Teng, Ping-Kun; Teoh, Jia Jian; Tepel, Fabian-Phillipp; Terada, Susumu; Terashi, Koji; Terron, Juan; Terzo, Stefano; Testa, Marianna; Teuscher, Richard; Theveneaux-Pelzer, Timothée; Thomas, Juergen; Thomas-Wilsker, Joshuha; Thompson, Emily; Thompson, Paul; Thompson, Ray; Thompson, Stan; Thomsen, Lotte Ansgaard; Thomson, Evelyn; Thomson, Mark; Tibbetts, Mark James; Ticse Torres, Royer Edson; Tikhomirov, Vladimir; Tikhonov, Yury; Timoshenko, Sergey; Tiouchichine, Elodie; Tipton, Paul; Tisserant, Sylvain; Todome, Kazuki; Todorov, Theodore; Todorova-Nova, Sharka; Tojo, Junji; Tokár, Stanislav; Tokushuku, Katsuo; Tolley, Emma; Tomlinson, Lee; Tomoto, Makoto; Tompkins, Lauren; Toms, Konstantin; Tong, Baojia(Tony); Torrence, Eric; Torres, Heberth; Torró Pastor, Emma; Toth, Jozsef; Touchard, Francois; Tovey, Daniel; Trefzger, Thomas; Tricoli, Alessandro; Trigger, Isabel Marian; Trincaz-Duvoid, Sophie; Tripiana, Martin; Trischuk, William; Trocmé, Benjamin; Trofymov, Artur; Troncon, Clara; Trottier-McDonald, Michel; 
Trovatelli, Monica; Truong, Loan; Trzebinski, Maciej; Trzupek, Adam; Tseng, Jeffrey; Tsiareshka, Pavel; Tsipolitis, Georgios; Tsirintanis, Nikolaos; Tsiskaridze, Shota; Tsiskaridze, Vakhtang; Tskhadadze, Edisher; Tsui, Ka Ming; Tsukerman, Ilya; Tsulaia, Vakhtang; Tsuno, Soshi; Tsybychev, Dmitri; Tudorache, Alexandra; Tudorache, Valentina; Tuna, Alexander Naip; Tupputi, Salvatore; Turchikhin, Semen; Turecek, Daniel; Turgeman, Daniel; Turra, Ruggero; Turvey, Andrew John; Tuts, Michael; Tylmad, Maja; Tyndel, Mike; Ueda, Ikuo; Ueno, Ryuichi; Ughetto, Michael; Ukegawa, Fumihiko; Unal, Guillaume; Undrus, Alexander; Unel, Gokhan; Ungaro, Francesca; Unno, Yoshinobu; Unverdorben, Christopher; Urban, Jozef; Urquijo, Phillip; Urrejola, Pedro; Usai, Giulio; Usanova, Anna; Vacavant, Laurent; Vacek, Vaclav; Vachon, Brigitte; Valderanis, Chrysostomos; Valencic, Nika; Valentinetti, Sara; Valero, Alberto; Valery, Loic; Valkar, Stefan; Vallecorsa, Sofia; Valls Ferrer, Juan Antonio; Van Den Wollenberg, Wouter; Van Der Deijl, Pieter; van der Geer, Rogier; van der Graaf, Harry; van Eldik, Niels; van Gemmeren, Peter; Van Nieuwkoop, Jacobus; van Vulpen, Ivo; van Woerden, Marius Cornelis; Vanadia, Marco; Vandelli, Wainer; Vanguri, Rami; Vaniachine, Alexandre; Vardanyan, Gagik; Vari, Riccardo; Varnes, Erich; Varol, Tulin; Varouchas, Dimitris; Vartapetian, Armen; Varvell, Kevin; Vazeille, Francois; Vazquez Schroeder, Tamara; Veatch, Jason; Veloce, Laurelle Maria; Veloso, Filipe; Veneziano, Stefano; Ventura, Andrea; Venturi, Manuela; Venturi, Nicola; Venturini, Alessio; Vercesi, Valerio; Verducci, Monica; Verkerke, Wouter; Vermeulen, Jos; Vest, Anja; Vetterli, Michel; Viazlo, Oleksandr; Vichou, Irene; Vickey, Trevor; Vickey Boeriu, Oana Elena; Viehhauser, Georg; Viel, Simon; Vigne, Ralph; Villa, Mauro; Villaplana Perez, Miguel; Vilucchi, Elisabetta; Vincter, Manuella; Vinogradov, Vladimir; Vivarelli, Iacopo; Vlachos, Sotirios; Vladoiu, Dan; Vlasak, Michal; Vogel, Marcelo; Vokac, Petr; Volpi, 
Guido; Volpi, Matteo; von der Schmitt, Hans; von Toerne, Eckhard; Vorobel, Vit; Vorobev, Konstantin; Vos, Marcel; Voss, Rudiger; Vossebeld, Joost; Vranjes, Nenad; Vranjes Milosavljevic, Marija; Vrba, Vaclav; Vreeswijk, Marcel; Vuillermet, Raphael; Vukotic, Ilija; Vykydal, Zdenek; Wagner, Peter; Wagner, Wolfgang; Wahlberg, Hernan; Wahrmund, Sebastian; Wakabayashi, Jun; Walder, James; Walker, Rodney; Walkowiak, Wolfgang; Wallangen, Veronica; Wang, Chao; Wang, Chao; Wang, Fuquan; Wang, Haichen; Wang, Hulin; Wang, Jike; Wang, Jin; Wang, Kuhan; Wang, Rui; Wang, Song-Ming; Wang, Tan; Wang, Tingting; Wang, Xiaoxiao; Wanotayaroj, Chaowaroj; Warburton, Andreas; Ward, Patricia; Wardrope, David Robert; Washbrook, Andrew; Watkins, Peter; Watson, Alan; Watson, Ian; Watson, Miriam; Watts, Gordon; Watts, Stephen; Waugh, Ben; Webb, Samuel; Weber, Michele; Weber, Stefan Wolf; Webster, Jordan S; Weidberg, Anthony; Weinert, Benjamin; Weingarten, Jens; Weiser, Christian; Weits, Hartger; Wells, Phillippa; Wenaus, Torre; Wengler, Thorsten; Wenig, Siegfried; Wermes, Norbert; Werner, Matthias; Werner, Per; Wessels, Martin; Wetter, Jeffrey; Whalen, Kathleen; Wharton, Andrew Mark; White, Andrew; White, Martin; White, Ryan; White, Sebastian; Whiteson, Daniel; Wickens, Fred; Wiedenmann, Werner; Wielers, Monika; Wienemann, Peter; Wiglesworth, Craig; Wiik-Fuchs, Liv Antje Mari; Wildauer, Andreas; Wilkens, Henric George; Williams, Hugh; Williams, Sarah; Willis, Christopher; Willocq, Stephane; Wilson, John; Wingerter-Seez, Isabelle; Winklmeier, Frank; Winter, Benedict Tobias; Wittgen, Matthias; Wittkowski, Josephine; Wollstadt, Simon Jakob; Wolter, Marcin Wladyslaw; Wolters, Helmut; Wosiek, Barbara; Wotschack, Jorg; Woudstra, Martin; Wozniak, Krzysztof; Wu, Mengqing; Wu, Miles; Wu, Sau Lan; Wu, Xin; Wu, Yusheng; Wyatt, Terry Richard; Wynne, Benjamin; Xella, Stefania; Xu, Da; Xu, Lailin; Yabsley, Bruce; Yacoob, Sahal; Yakabe, Ryota; Yamaguchi, Daiki; Yamaguchi, Yohei; Yamamoto, Akira; Yamamoto, 
Shimpei; Yamanaka, Takashi; Yamauchi, Katsuya; Yamazaki, Yuji; Yan, Zhen; Yang, Haijun; Yang, Hongtao; Yang, Yi; Yang, Zongchang; Yao, Weiming; Yap, Yee Chinn; Yasu, Yoshiji; Yatsenko, Elena; Yau Wong, Kaven Henry; Ye, Jingbo; Ye, Shuwei; Yeletskikh, Ivan; Yen, Andy L; Yildirim, Eda; Yorita, Kohei; Yoshida, Rikutaro; Yoshihara, Keisuke; Young, Charles; Young, Christopher John; Youssef, Saul; Yu, David Ren-Hwa; Yu, Jaehoon; Yu, Jiaming; Yu, Jie; Yuan, Li; Yuen, Stephanie P; Yusuff, Imran; Zabinski, Bartlomiej; Zaidan, Remi; Zaitsev, Alexander; Zakharchuk, Nataliia; Zalieckas, Justas; Zaman, Aungshuman; Zambito, Stefano; Zanello, Lucia; Zanzi, Daniele; Zeitnitz, Christian; Zeman, Martin; Zemla, Andrzej; Zeng, Jian Cong; Zeng, Qi; Zengel, Keith; Zenin, Oleg; Ženiš, Tibor; Zerwas, Dirk; Zhang, Dongliang; Zhang, Fangzhou; Zhang, Guangyi; Zhang, Huijun; Zhang, Jinlong; Zhang, Lei; Zhang, Rui; Zhang, Ruiqi; Zhang, Xueyao; Zhang, Zhiqing; Zhao, Xiandong; Zhao, Yongke; Zhao, Zhengguo; Zhemchugov, Alexey; Zhong, Jiahang; Zhou, Bing; Zhou, Chen; Zhou, Lei; Zhou, Li; Zhou, Mingliang; Zhou, Ning; Zhu, Cheng Guang; Zhu, Hongbo; Zhu, Junjie; Zhu, Yingchun; Zhuang, Xuai; Zhukov, Konstantin; Zibell, Andre; Zieminska, Daria; Zimine, Nikolai; Zimmermann, Christoph; Zimmermann, Stephanie; Zinonos, Zinonas; Zinser, Markus; Ziolkowski, Michael; Živković, Lidija; Zobernig, Georg; Zoccoli, Antonio; zur Nedden, Martin; Zurzolo, Giovanni; Zwalinski, Lukasz

    2017-04-13

    The reconstruction and calibration algorithms used to calculate missing transverse momentum ($E_{\rm T}^{\rm miss}$) with the ATLAS detector exploit energy deposits in the calorimeter and tracks reconstructed in the inner detector as well as the muon spectrometer. Various strategies are used to suppress effects arising from additional proton--proton interactions, called pileup, concurrent with the hard-scatter processes. Tracking information is used to distinguish contributions from the pileup interactions using their vertex separation along the beam axis. The performance of the $E_{\rm T}^{\rm miss}$ reconstruction algorithms, especially with respect to the amount of pileup, is evaluated using data collected in proton--proton collisions at a centre-of-mass energy of 8 TeV during 2012, and results are shown for a data sample corresponding to an integrated luminosity of 20.3 fb$^{-1}$. The results of simulation modelling of $E_{\rm T}^{\rm miss}$ in events containing a $Z$ boson decaying to two charged leptons ...

  4. Application of Deconvolution Algorithm of Point Spread Function in Improving Image Quality: An Observer Preference Study on Chest Radiography.

    Science.gov (United States)

    Chae, Kum Ju; Goo, Jin Mo; Ahn, Su Yeon; Yoo, Jin Young; Yoon, Soon Ho

    2018-01-01

    To evaluate the preference of observers for the image quality of chest radiography processed with a deconvolution algorithm of the point spread function (PSF) (TRUVIEW ART algorithm, DRTECH Corp.), compared with that of original chest radiography, for visualization of anatomic regions of the chest. Fifty prospectively enrolled pairs of posteroanterior chest radiographs, collected with the standard protocol and with the additional TRUVIEW ART algorithm, were compared by four chest radiologists. This algorithm corrects scattered signals generated by a scintillator. Readers independently evaluated the visibility of 10 anatomical regions and overall image quality on a 5-point preference scale. The significance of the differences in readers' preferences was tested with a Wilcoxon signed-rank test. All four readers preferred the images processed with the algorithm to those without it for all 10 anatomical regions (mean, 3.6; range, 3.2-4.0), with statistically significant differences. The visibility of chest anatomical structures with the deconvolution algorithm of the PSF was superior to that of original chest radiography.
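    The reader comparison described above rests on the Wilcoxon signed-rank statistic over paired preference scores. A minimal pure-Python sketch follows; the 5-point scores are invented for illustration, and only the W⁺ statistic (sum of ranks of positive differences) is computed, not the p-value.

    ```python
    # Paired preference comparison sketch: each pair holds one reader's
    # 5-point scores for (algorithm, original) images; W+ is the sum of the
    # ranks of the positive score differences. Ties in |d| receive average
    # ranks and zero differences are dropped, following the usual convention.
    # The scores below are illustrative, not study data.

    def wilcoxon_w_plus(diffs):
        nz = [d for d in diffs if d != 0]                 # drop zero differences
        order = sorted(range(len(nz)), key=lambda i: abs(nz[i]))
        ranks = [0.0] * len(nz)
        i = 0
        while i < len(order):                             # average ranks over ties
            j = i
            while j < len(order) and abs(nz[order[j]]) == abs(nz[order[i]]):
                j += 1
            for k in range(i, j):
                ranks[order[k]] = (i + j + 1) / 2.0       # 1-based average rank
            i = j
        return sum(r for r, d in zip(ranks, nz) if d > 0)

    scores = [(4, 3), (5, 3), (3, 3), (4, 2), (5, 4)]     # (algorithm, original)
    w_plus = wilcoxon_w_plus([a - b for a, b in scores])
    ```

    In practice one would use a library routine that also returns the p-value; the sketch only shows where the rank sum comes from.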

  5. Missing transverse energy performance of the CMS detector

    Energy Technology Data Exchange (ETDEWEB)

    Chatrchyan, Serguei [Yerevan Physics Inst. (Armenia); et al.

    2011-09-01

    During 2010 the LHC delivered pp collisions with a centre-of-mass energy of 7 TeV. In this paper, the results of comprehensive studies of missing transverse energy as measured by the CMS detector are presented. The results cover the measurements of the scale and resolution for missing transverse energy, and the effects of multiple pp interactions within the same bunch crossings on the scale and resolution. Anomalous measurements of missing transverse energy are studied, and algorithms for their identification are described. The performance of several reconstruction algorithms for calculating missing transverse energy is compared. An algorithm, called missing-transverse-energy significance, which estimates the compatibility of the reconstructed missing transverse energy with zero, is described, and its performance is demonstrated.
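    A significance of the kind described above can be sketched generically: the measured MET vector is weighted by the inverse of its covariance, yielding a chi-square-like measure of compatibility with zero. The 2x2 covariance below is a stand-in; the actual CMS algorithm builds it from per-object resolutions.

    ```python
    # Generic MET-significance sketch: S = v^T C^{-1} v for the MET vector
    # v = (mex, mey) and its covariance C. A large S means the measured MET
    # is unlikely to be a pure resolution fluctuation around zero.
    # The covariance values used below are illustrative assumptions.

    def met_significance(mex, mey, cov):
        (a, b), (c, d) = cov
        det = a * d - b * c
        # expand v^T C^{-1} v with C^{-1} = [[d, -b], [-c, a]] / det
        return (d * mex * mex - (b + c) * mex * mey + a * mey * mey) / det

    # isotropic 10 GeV^2 resolution in each component (assumed, not CMS's)
    s = met_significance(3.0, 4.0, [[10.0, 0.0], [0.0, 10.0]])
    ```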

  6. Missing transverse energy performance of the CMS detector

    International Nuclear Information System (INIS)

    2011-01-01

    During 2010 the LHC delivered pp collisions with a centre-of-mass energy of 7 TeV. In this paper, the results of comprehensive studies of missing transverse energy as measured by the CMS detector are presented. The results cover the measurements of the scale and resolution for missing transverse energy, and the effects of multiple pp interactions within the same bunch crossings on the scale and resolution. Anomalous measurements of missing transverse energy are studied, and algorithms for their identification are described. The performance of several reconstruction algorithms for calculating missing transverse energy is compared. An algorithm, called missing-transverse-energy significance, which estimates the compatibility of the reconstructed missing transverse energy with zero, is described, and its performance is demonstrated.

  7. Performance of algorithms that reconstruct missing transverse momentum in √(s) = 8 TeV proton-proton collisions in the ATLAS detector

    Energy Technology Data Exchange (ETDEWEB)

    Aad, G. [CPPM, Aix-Marseille Univ. et CNRS/IN2P3, Marseille (France); Abbott, B. [Oklahoma Univ., Norman, OK (United States). Homer L. Dodge Dept. of Physics and Astronomy; Abdallah, J. [Iowa Univ.,Iowa City, IA (United States); Collaboration: ATLAS Collaboration; and others

    2017-04-15

    The reconstruction and calibration algorithms used to calculate missing transverse momentum (E{sub T}{sup miss}) with the ATLAS detector exploit energy deposits in the calorimeter and tracks reconstructed in the inner detector as well as the muon spectrometer. Various strategies are used to suppress effects arising from additional proton-proton interactions, called pileup, concurrent with the hard-scatter processes. Tracking information is used to distinguish contributions from the pileup interactions using their vertex separation along the beam axis. The performance of the E{sub T}{sup miss} reconstruction algorithms, especially with respect to the amount of pileup, is evaluated using data collected in proton-proton collisions at a centre-of-mass energy of 8 TeV during 2012, and results are shown for a data sample corresponding to an integrated luminosity of 20.3 fb{sup -1}. The simulation and modelling of E{sub T}{sup miss} in events containing a Z boson decaying to two charged leptons (electrons or muons) or a W boson decaying to a charged lepton and a neutrino are compared to data. The acceptance for different event topologies, with and without high transverse momentum neutrinos, is shown for a range of threshold criteria for E{sub T}{sup miss}, and estimates of the systematic uncertainties in the E{sub T}{sup miss} measurements are presented. (orig.)
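    The quantity being calibrated above has a compact definition: E_T^miss is the magnitude of the negative vector sum of the transverse momenta of all reconstructed objects (plus soft terms). A minimal sketch of that sum, with an invented object list:

    ```python
    import math

    # E_T^miss sketch: magnitude and azimuth of the negative vector sum of
    # the transverse momenta of the reconstructed objects, each given as a
    # (pt, phi) pair. The object list below is invented for illustration;
    # the real ATLAS algorithms add calibrated soft terms and overlap removal.

    def et_miss(objects):
        mpx = -sum(pt * math.cos(phi) for pt, phi in objects)
        mpy = -sum(pt * math.sin(phi) for pt, phi in objects)
        return math.hypot(mpx, mpy), math.atan2(mpy, mpx)

    # a single 50 GeV object recoiling against nothing -> 50 GeV of E_T^miss,
    # pointing opposite in azimuth
    met, met_phi = et_miss([(50.0, 0.0)])
    ```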

  8. Impact of Missing Passive Microwave Sensors on Multi-Satellite Precipitation Retrieval Algorithm

    Directory of Open Access Journals (Sweden)

    Bin Yong

    2015-01-01

    The impact of one or two missing passive microwave (PMW) input sensors on the end product of multi-satellite precipitation products is an interesting but obscure issue for both algorithm developers and data users. On 28 January 2013, the Version-7 TRMM Multi-satellite Precipitation Analysis (TMPA) products were reproduced and re-released by the National Aeronautics and Space Administration (NASA) Goddard Space Flight Center because the Advanced Microwave Sounding Unit-B (AMSU-B) and the Special Sensor Microwave Imager-Sounder-F16 (SSMIS-F16) input data were unintentionally disregarded in the prior retrieval. Thus, this study investigates the sensitivity of TMPA algorithm results to missing PMW sensors by intercomparing the “early” and “late” Version-7 TMPA real-time (TMPA-RT) precipitation estimates (i.e., without and with the AMSU-B and SSMIS-F16 sensors) with an independent high-density network of 200 tipping-bucket rain gauges over the Chinese Jinghe river basin (45,421 km²). The retrieval counts and retrieval frequency of the various PMW and infrared (IR) sensors incorporated into the TMPA system were also analyzed to identify and diagnose the impacts of sensor availability on TMPA-RT retrieval accuracy. Results show that the incorporation of AMSU-B and SSMIS-F16 substantially reduced systematic errors. The improvement exhibits rather strong seasonal and topographic dependencies. Our analyses suggest that one or two single PMW sensors might play a key role in affecting the end product of current combined microwave-infrared precipitation estimates. This finding supports algorithm developers' current endeavour to spatiotemporally incorporate as many PMW sensors as possible in the multi-satellite precipitation retrieval system called Integrated Multi-satellitE Retrievals for Global Precipitation Measurement mission (IMERG). This study also recommends that users of satellite precipitation products switch to the newest Version-7 TMPA datasets and ...
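    The gauge-based evaluation described above boils down to error statistics between satellite estimates and co-located gauge observations. A sketch of the two usual measures, relative bias and RMSE; the precipitation values are invented, not Jinghe basin data:

    ```python
    import math

    # Gauge-based evaluation sketch: relative bias (systematic error) and
    # root-mean-square error of satellite precipitation estimates against
    # co-located rain-gauge observations. All numbers are illustrative.

    def relative_bias(sat, gauge):
        return (sum(sat) - sum(gauge)) / sum(gauge)

    def rmse(sat, gauge):
        n = len(sat)
        return math.sqrt(sum((s - g) ** 2 for s, g in zip(sat, gauge)) / n)

    sat_mm   = [0.0, 2.1, 5.0, 1.2]   # satellite estimates (mm/day)
    gauge_mm = [0.0, 2.0, 4.0, 1.0]   # gauge observations  (mm/day)
    bias = relative_bias(sat_mm, gauge_mm)
    ```

    Comparing these statistics for the "early" and "late" TMPA-RT runs is what isolates the contribution of the re-introduced sensors.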

  9. Principal Component Analysis of Process Datasets with Missing Values

    Directory of Open Access Journals (Sweden)

    Kristen A. Severson

    2017-07-01

    Datasets with missing values, arising from causes such as sensor failure, inconsistent sampling rates, and merging data from different systems, are common in the process industry. Methods for handling missing data typically operate during data pre-processing, but missing data can also be handled during model building. This article considers missing data within the context of principal component analysis (PCA), a method originally developed for complete data that has widespread industrial application in multivariate statistical process control. Due to the prevalence of missing data and the success of PCA for handling complete data, several PCA algorithms that can act on incomplete data have been proposed. Here, algorithms for applying PCA to datasets with missing values are reviewed. A case study is presented to demonstrate the performance of the algorithms, and suggestions are made with respect to choosing which algorithm is most appropriate for particular settings. An alternating algorithm based on the singular value decomposition achieved the best results in the majority of test cases involving process datasets.
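    The alternating SVD-based idea can be illustrated with a deliberately tiny, pure-Python rank-1 version: initialize missing entries with column means, fit the leading singular pair by power iteration, overwrite only the missing entries with the rank-1 reconstruction, and repeat. This is a toy sketch of the general approach, not the specific algorithm reviewed in the article.

    ```python
    # Toy alternating rank-1 SVD imputation. X is a list of rows with None
    # (or anything) at missing positions; mask[i][j] is True where observed.
    # Missing entries start at the column mean of the observed values, then
    # are repeatedly replaced by the rank-1 reconstruction u * v^T.

    def rank1_impute(X, mask, iters=200):
        n, m = len(X), len(X[0])
        Xc = [row[:] for row in X]
        for j in range(m):                       # column-mean initialization
            obs = [X[i][j] for i in range(n) if mask[i][j]]
            mu = sum(obs) / len(obs)
            for i in range(n):
                if not mask[i][j]:
                    Xc[i][j] = mu
        for _ in range(iters):
            v = [1.0] * m                        # power iteration on Xc^T Xc
            for _ in range(20):
                u = [sum(Xc[i][j] * v[j] for j in range(m)) for i in range(n)]
                w = [sum(Xc[i][j] * u[i] for i in range(n)) for j in range(m)]
                norm = sum(x * x for x in w) ** 0.5
                v = [x / norm for x in w]
            u = [sum(Xc[i][j] * v[j] for j in range(m)) for i in range(n)]
            for i in range(n):                   # overwrite missing entries only
                for j in range(m):
                    if not mask[i][j]:
                        Xc[i][j] = u[i] * v[j]
        return Xc

    # the second column is twice the first, so the missing entry should
    # converge toward 6.0
    X = [[1.0, 2.0], [2.0, 4.0], [3.0, None]]
    mask = [[True, True], [True, True], [True, False]]
    done = rank1_impute(X, mask)
    ```

    Real implementations use a truncated SVD at the chosen number of components and a proper convergence criterion rather than a fixed iteration count.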

  10. Is It that Difficult to Find a Good Preference Order for the Incremental Algorithm?

    Science.gov (United States)

    Krahmer, Emiel; Koolen, Ruud; Theune, Mariet

    2012-01-01

    In a recent article published in this journal (van Deemter, Gatt, van der Sluis, & Power, 2012), the authors criticize the Incremental Algorithm (a well-known algorithm for the generation of referring expressions due to Dale & Reiter, 1995, also in this journal) because of its strong reliance on a pre-determined, domain-dependent Preference Order.…
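    The algorithm at issue is small enough to sketch directly. Dale & Reiter's idea: walk the attributes in a fixed, domain-dependent Preference Order and keep a property only if it rules out at least one remaining distractor, stopping once the target is uniquely identified. The toy domain and the Preference Order below are invented.

    ```python
    # Incremental Algorithm sketch (after Dale & Reiter 1995): build a
    # referring expression for `target` that excludes every distractor.
    # Returns None if the attributes cannot distinguish the target.

    def incremental_algorithm(target, distractors, preference_order):
        description = {}
        remaining = list(distractors)
        for attr in preference_order:
            value = target.get(attr)
            if value is None:
                continue
            ruled_out = [d for d in remaining if d.get(attr) != value]
            if ruled_out:                       # keep only discriminating properties
                description[attr] = value
                remaining = [d for d in remaining if d.get(attr) == value]
            if not remaining:
                break
        return description if not remaining else None

    entities = [
        {"type": "dog", "colour": "black", "size": "large"},
        {"type": "dog", "colour": "white", "size": "large"},
        {"type": "cat", "colour": "black", "size": "small"},
    ]
    # "the black dog": type and colour suffice, size is never reached
    ref = incremental_algorithm(entities[0], entities[1:], ["type", "colour", "size"])
    ```

    The debate referenced above concerns how much the output quality depends on choosing that `preference_order` well.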

  11. Missing value imputation: with application to handwriting data

    Science.gov (United States)

    Xu, Zhen; Srihari, Sargur N.

    2015-01-01

    Missing values make pattern analysis difficult, particularly with limited available data. In longitudinal research, missing values accumulate, thereby aggravating the problem. Here we consider how to deal with temporal data with missing values in handwriting analysis. In studying the development of individuality of handwriting, we encountered the fact that feature values are missing for several individuals at several time instances. Six algorithms, i.e., random imputation, mean imputation, most-likely independent value imputation, and three methods based on Bayesian networks (static Bayesian network, parameter EM, and structural EM), are compared on children's handwriting data. We evaluate the accuracy and robustness of the algorithms under different ratios of missing data and missing values, and useful conclusions are given. Specifically, the static Bayesian network is used for our data, which contain around 5% missing data, as it provides adequate accuracy at low computational cost.
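    Two of the simpler baselines in the comparison above can be sketched for a single feature column, with None marking a missing value; the numbers are illustrative.

    ```python
    import random

    # Baseline imputation sketches: mean imputation replaces each missing
    # entry with the mean of the observed values; random imputation draws a
    # replacement uniformly from the observed values.

    def mean_impute(column):
        observed = [v for v in column if v is not None]
        mu = sum(observed) / len(observed)
        return [mu if v is None else v for v in column]

    def random_impute(column, seed=0):
        rng = random.Random(seed)               # seeded for reproducibility
        observed = [v for v in column if v is not None]
        return [rng.choice(observed) if v is None else v for v in column]
    ```

    The Bayesian-network methods in the study go further by conditioning each imputed value on the other observed features rather than on the column alone.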

  12. Clustering with Missing Values: No Imputation Required

    Science.gov (United States)

    Wagstaff, Kiri

    2004-01-01

    Clustering algorithms can identify groups in large data sets, such as star catalogs and hyperspectral images. In general, clustering methods cannot analyze items that have missing data values. Common solutions either fill in the missing values (imputation) or ignore the missing data (marginalization). Imputed values are treated as just as reliable as the truly observed data, but they are only as good as the assumptions used to create them. In contrast, we present a method for encoding partially observed features as a set of supplemental soft constraints and introduce the KSC algorithm, which incorporates constraints into the clustering process. In experiments on artificial data and data from the Sloan Digital Sky Survey, we show that soft constraints are an effective way to enable clustering with missing values.
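The KSC algorithm itself encodes partially observed features as soft constraints; the simpler alternative the abstract mentions, marginalization, can be sketched as a k-means variant that measures distance only over the observed features of each item (all names below are ours, not the paper's):

```python
import numpy as np

def partial_distance(x, c):
    """Squared Euclidean distance over observed (non-NaN) features only,
    rescaled by the fraction of features that are observed."""
    obs = ~np.isnan(x)
    return np.sum((x[obs] - c[obs]) ** 2) * x.size / obs.sum()

def kmeans_marginalized(X, k, n_iter=20):
    """k-means that ignores (marginalizes over) missing feature values."""
    X = np.asarray(X, dtype=float)
    col_means = np.nanmean(X, axis=0)
    centers = np.where(np.isnan(X[:k]), col_means, X[:k])  # first k rows as seeds
    for _ in range(n_iter):
        labels = np.array([min(range(k), key=lambda j: partial_distance(x, centers[j]))
                           for x in X])
        for j in range(k):
            members = X[labels == j]
            if len(members):
                m = np.nanmean(members, axis=0)  # per-feature mean over observed values
                centers[j] = np.where(np.isnan(m), centers[j], m)
    return labels
```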

  13. Autism spectrum disorders and fetal hypoxia in a population-based cohort: Accounting for missing exposures via Expectation-Maximization algorithm

    Directory of Open Access Journals (Sweden)

    Yasui Yutaka

    2011-01-01

    Abstract Background Autism spectrum disorders (ASD) are associated with complications of pregnancy that implicate fetal hypoxia (FH); the excess of ASD in the male gender is poorly understood. We tested the hypothesis that risk of ASD is related to fetal hypoxia and investigated whether this effect is greater among males. Methods Provincial delivery records (PDR) identified the cohort of all 218,890 singleton live births in the province of Alberta, Canada, between 01-01-98 and 12-31-04. These were followed up for ASD via ICD-9 diagnostic codes assigned by physician billing until 03-31-08. Maternal and obstetric risk factors, including FH determined from blood tests of acidity (pH), were extracted from the PDR. The binary FH status was missing in approximately half of subjects. Assuming that characteristics of mothers and pregnancies would be correlated with FH, we used an Expectation-Maximization (EM) algorithm to estimate the FH-ASD association, allowing for both missing-at-random (MAR) and specific not-missing-at-random (NMAR) mechanisms. Results Data indicated that there was excess risk of ASD among males who were hypoxic at birth, not materially affected by adjustment for potential confounding due to birth year and socio-economic status: OR 1.13, 95% CI: 0.96, 1.33 (MAR assumption). Limiting analysis to full-term males, the adjusted OR under specific NMAR assumptions spanned a 95% CI of 1.0 to 1.6. Conclusion Our results are consistent with a weak effect of fetal hypoxia on risk of ASD among males. The EM algorithm is an efficient and flexible tool for modeling missing data in the studied setting.

  14. Getting patients in the door: medical appointment reminder preferences.

    Science.gov (United States)

    Crutchfield, Trisha M; Kistler, Christine E

    2017-01-01

    Between 23% and 34% of outpatient appointments are missed annually. Patients who frequently miss medical appointments have poorer health outcomes and are less likely to use preventive health care services. Missed appointments result in unnecessary costs and organizational inefficiencies. Appointment reminders may help reduce missed appointments, and particular types may be more effective than others. We used a survey with a discrete choice experiment (DCE) to learn why individuals miss appointments and to assess appointment reminder preferences. We enrolled a national sample of adults from an online survey panel to complete demographic and appointment-habit questions as well as a 16-task DCE designed in Sawtooth Software's Discover tool. We assessed preferences for four reminder attributes: initial reminder type, arrival of the initial reminder, reminder content, and number of reminders. We derived utilities and importance scores. We surveyed 251 adults nationally, with a mean age of 43 (range 18-83) years: 51% female, 84% White, and 8% African American. Twenty-three percent of individuals had missed one or more appointments in the past 12 months. The two primary reasons given for missing an appointment were transportation problems (28%) and forgetfulness (26%). Participants indicated that the initial reminder type (21%) was the most important attribute, followed by the number of reminders (10%). Overall, individuals preferred a single reminder, arriving via email, phone call, or text message, delivered less than 2 weeks prior to an appointment. Preferences for reminder content were less clear. The number of missed appointments and reasons for missing appointments are consistent with prior research. Patient-centered appointment reminders may improve appointment attendance by addressing some of the reasons individuals report missing appointments and by meeting patients' needs. Future research is necessary to determine if preferred reminders used in practice…

  15. Sample-Based Extreme Learning Machine with Missing Data

    Directory of Open Access Journals (Sweden)

    Hang Gao

    2015-01-01

    Extreme learning machine (ELM) has been extensively studied in the machine learning community during the last few decades due to its high efficiency and its unification of classification, regression, and related tasks. Despite these merits, existing ELM algorithms cannot efficiently handle missing data, which are relatively common in practical applications. The problem of missing data is commonly handled by imputation (i.e., replacing missing values with substitutes derived from the available information). However, imputation methods are not always effective. In this paper, we propose a sample-based learning framework to address this issue. Based on this framework, we develop two sample-based ELM algorithms, for classification and regression, respectively. Comprehensive experiments have been conducted on synthetic datasets, UCI benchmark datasets, and a real-world fingerprint image dataset. The results indicate that, without introducing extra computational complexity, the proposed algorithms learn more accurately and stably than other state-of-the-art ones, especially at higher missing ratios.

  16. Autoregressive-model-based missing value estimation for DNA microarray time series data.

    Science.gov (United States)

    Choong, Miew Keen; Charbit, Maurice; Yan, Hong

    2009-01-01

    Missing value estimation is important in DNA microarray data analysis. A number of algorithms have been developed to solve this problem, but they have several limitations. Most existing algorithms are not able to deal with the situation where a particular time point (column) of the data is missing entirely. In this paper, we present an autoregressive-model-based missing value estimation method (ARLSimpute) that takes into account the dynamic property of microarray temporal data and the local similarity structures in the data. ARLSimpute is especially effective for the situation where a particular time point contains many missing values or where the entire time point is missing. Experimental results suggest that the proposed algorithm is an accurate missing value estimator in comparison with other imputation methods on simulated as well as real microarray time series datasets.
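ARLSimpute combines autoregressive modeling with local similarity structure; the abstract does not give the details, but the core autoregressive idea can be conveyed with a minimal AR(1) sketch of our own (one series, one-step prediction, least-squares coefficient fit on observed consecutive pairs):

```python
import numpy as np

def ar1_impute(series):
    """Fit an AR(1) coefficient from observed consecutive pairs, then fill
    each missing point from its observed predecessor."""
    x = np.asarray(series, dtype=float).copy()
    obs = ~np.isnan(x)
    prev, curr = x[:-1], x[1:]
    ok = obs[:-1] & obs[1:]                       # pairs with both points observed
    phi = np.sum(prev[ok] * curr[ok]) / np.sum(prev[ok] ** 2)
    for t in range(1, len(x)):
        if np.isnan(x[t]) and not np.isnan(x[t - 1]):
            x[t] = phi * x[t - 1]                 # one-step AR(1) prediction
    return x
```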

  17. Integrative missing value estimation for microarray data.

    Science.gov (United States)

    Hu, Jianjun; Li, Haifeng; Waterman, Michael S; Zhou, Xianghong Jasmine

    2006-10-12

    Missing value estimation is an important preprocessing step in microarray analysis. Although several methods have been developed to solve this problem, their performance is unsatisfactory for datasets with high rates of missing data, high measurement noise, or limited numbers of samples. In fact, more than 80% of the time-series datasets in the Stanford Microarray Database contain fewer than eight samples. We present the integrative Missing Value Estimation method (iMISS), which incorporates information from multiple reference microarray datasets to improve missing value estimation. For each gene with missing data, we derive a consistent neighbor-gene list by taking reference datasets into consideration. To determine whether the given reference datasets are sufficiently informative for integration, we use a submatrix imputation approach. Our experiments showed that iMISS can significantly and consistently improve the accuracy of the state-of-the-art local least squares (LLS) imputation algorithm by up to 15% in our benchmark tests. We demonstrated that the order-statistics-based integrative imputation algorithms can achieve significant improvements over state-of-the-art missing value estimation approaches such as LLS and are especially well suited to imputing microarray datasets with a limited number of samples, high rates of missing data, or very noisy measurements. With the rapid accumulation of microarray datasets, the performance of our approach can be further improved by incorporating larger and more appropriate reference datasets.

  18. Dealing with gene expression missing data.

    Science.gov (United States)

    Brás, L P; Menezes, J C

    2006-05-01

    A comparative evaluation of different methods for estimating missing values in microarray data is presented: weighted K-nearest neighbours imputation (KNNimpute), regression-based methods such as local least squares imputation (LLSimpute) and partial least squares imputation (PLSimpute), and Bayesian principal component analysis (BPCA). The influence on prediction accuracy of several factors, such as the methods' parameters, the type of data relationships used in the estimation process (i.e. row-wise, column-wise or both), missing rate and pattern, and type of experiment [time series (TS), non-time series (NTS) or mixed (MIX) experiments], is elucidated. Improvements based on the iterative use of data (iterative LLS and PLS imputation, ILLSimpute and IPLSimpute), the need to perform initial imputations (modified PLS and Helland PLS imputation, MPLSimpute and HPLSimpute) and the type of relationships employed (KNNarray, LLSarray, HPLSarray and alternating PLS, APLSimpute) are proposed. Overall, it is shown that dataset properties (type of experiment, missing rate and pattern) affect the data similarity structure, thereby influencing the methods' performance. LLSimpute and ILLSimpute are preferable in the presence of data with a stronger similarity structure (TS and MIX experiments), whereas PLS-based methods (MPLSimpute, IPLSimpute and APLSimpute) are preferable when estimating NTS missing data.
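Of the methods compared above, KNNimpute is the most easily sketched. The following row-wise nearest-neighbour imputation is a simplified, unweighted variant of our own (the published KNNimpute weights donors by inverse distance):

```python
import numpy as np

def knn_impute(X, k=2):
    """For each missing entry, average the values of the k nearest rows
    (distance over commonly observed columns) that observe that entry."""
    X = np.asarray(X, dtype=float)
    out = X.copy()
    n = len(X)
    for i in range(n):
        miss = np.where(np.isnan(X[i]))[0]
        if miss.size == 0:
            continue
        dists = []
        for j in range(n):
            if j == i:
                continue
            shared = ~np.isnan(X[i]) & ~np.isnan(X[j])
            if shared.any():
                d = np.mean((X[i, shared] - X[j, shared]) ** 2)
                dists.append((d, j))
        dists.sort()                                  # nearest rows first
        for c in miss:
            donors = [X[j, c] for _, j in dists if not np.isnan(X[j, c])][:k]
            if donors:
                out[i, c] = np.mean(donors)
    return out
```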

  19. Integrative missing value estimation for microarray data

    Directory of Open Access Journals (Sweden)

    Zhou Xianghong

    2006-10-01

    Abstract Background Missing value estimation is an important preprocessing step in microarray analysis. Although several methods have been developed to solve this problem, their performance is unsatisfactory for datasets with high rates of missing data, high measurement noise, or limited numbers of samples. In fact, more than 80% of the time-series datasets in the Stanford Microarray Database contain fewer than eight samples. Results We present the integrative Missing Value Estimation method (iMISS), which incorporates information from multiple reference microarray datasets to improve missing value estimation. For each gene with missing data, we derive a consistent neighbor-gene list by taking reference datasets into consideration. To determine whether the given reference datasets are sufficiently informative for integration, we use a submatrix imputation approach. Our experiments showed that iMISS can significantly and consistently improve the accuracy of the state-of-the-art local least squares (LLS) imputation algorithm by up to 15% in our benchmark tests. Conclusion We demonstrated that the order-statistics-based integrative imputation algorithms can achieve significant improvements over state-of-the-art missing value estimation approaches such as LLS and are especially well suited to imputing microarray datasets with a limited number of samples, high rates of missing data, or very noisy measurements. With the rapid accumulation of microarray datasets, the performance of our approach can be further improved by incorporating larger and more appropriate reference datasets.

  20. A Review of Methods for Missing Data.

    Science.gov (United States)

    Pigott, Therese D.

    2001-01-01

    Reviews methods for handling missing data in a research study. Model-based methods, such as maximum likelihood using the EM algorithm and multiple imputation, hold more promise than ad hoc methods. Although model-based methods require more specialized computer programs and assumptions about the nature of missing data, these methods are appropriate…

  1. Missing value imputation for epistatic MAPs

    LENUS (Irish Health Repository)

    Ryan, Colm

    2010-04-20

    Abstract Background Epistatic miniarray profiling (E-MAP) is a high-throughput approach capable of quantifying aggravating or alleviating genetic interactions between gene pairs. The datasets resulting from E-MAP experiments typically take the form of a symmetric pairwise matrix of interaction scores. These datasets have a significant number of missing values, up to 35%, which can reduce the effectiveness of some data analysis techniques and prevent the use of others. An effective method for imputing interactions would therefore increase the types of analysis possible, as well as increase the potential to identify novel functional interactions between gene pairs. Several methods have been developed to handle missing values in microarray data, but it is unclear how applicable these methods are to E-MAP data because of its pairwise nature and the significantly larger number of missing values. Here we evaluate four alternative imputation strategies, three local (nearest-neighbour-based) and one global (PCA-based), that have been modified to work with symmetric pairwise data. Results We identify different categories of missing data based on their underlying cause, and show that values from the largest category can be imputed effectively. We compare local and global imputation approaches across a variety of distinct E-MAP datasets, showing that both are competitive and preferable to filling in with zeros. In addition, we show that these methods are effective in an E-MAP from a different species, suggesting that pairwise imputation techniques will be increasingly useful as analogous epistasis mapping techniques are developed in different species. We show that strongly alleviating interactions are significantly more difficult to predict than strongly aggravating interactions. Finally, we show that imputed interactions, generated using nearest-neighbour methods, are enriched for annotations in the same manner as measured interactions. Therefore our method potentially…

  2. CONVERGENCE OF ESTIMATORS IN MIXTURE MODELS BASED ON MISSING DATA

    Directory of Open Access Journals (Sweden)

    N Dwidayati

    2014-06-01

    Mixture models can estimate the proportion of patients who are cured and the survival function of those who are not cured. In this study, a mixture model is developed for cure-rate analysis in the presence of missing data. Several methods are available for analyzing missing data; one of them is the EM algorithm, which is based on two steps: (1) the Expectation step and (2) the Maximization step. The EM algorithm is an iterative approach for learning a model from data with missing values through four steps: (1) choose an initial set of parameters for the model, (2) determine the expected values for the missing data, (3) induce new model parameters from the combination of the expected values and the original data, and (4) if the parameters have not converged, repeat from step 2 using the new model. This study shows that in the EM algorithm, the log-likelihood for the missing data increases after each iteration; consequently, the likelihood sequence converges provided the likelihood is bounded below.
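The four-step iteration described above can be made concrete with a minimal EM implementation for a two-component, unit-variance 1-D Gaussian mixture (a toy version of our own, not the paper's cure-rate model); the returned log-likelihood track lets one check the monotone increase claimed in the abstract:

```python
import numpy as np

def em_mixture(x, n_iter=30):
    """EM for a two-component 1-D Gaussian mixture with unit variances.
    Returns mixing weight, component means, and per-iteration log-likelihood."""
    x = np.asarray(x, dtype=float)
    pi, mu = 0.5, np.array([x.min(), x.max()])   # step 1: initial parameters
    loglik = []
    for _ in range(n_iter):
        # E-step: unnormalized component densities and responsibilities
        w = np.stack([pi * np.exp(-0.5 * (x - mu[0]) ** 2),
                      (1 - pi) * np.exp(-0.5 * (x - mu[1]) ** 2)])
        total = w.sum(axis=0)
        loglik.append(np.sum(np.log(total / np.sqrt(2 * np.pi))))
        r = w / total
        # M-step: re-estimate parameters from expected memberships
        pi = r[0].mean()
        mu = np.array([np.sum(r[0] * x) / r[0].sum(),
                       np.sum(r[1] * x) / r[1].sum()])
    return pi, mu, loglik
```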

  3. An approach to decision-making with triangular fuzzy reciprocal preference relations and its application

    Science.gov (United States)

    Meng, Fanyong

    2018-02-01

    Triangular fuzzy reciprocal preference relations (TFRPRs) are powerful tools for expressing decision-makers' fuzzy judgments, permitting decision-makers to use triangular fuzzy ratios rather than real numbers to express their judgments. Consistency analysis is one of the most crucial issues in preference relations, as it guarantees a reasonable ranking order. However, previous consistency concepts cannot adequately address this type of preference relation. Based on the operational laws of triangular fuzzy numbers, this paper introduces an additive consistency concept for TFRPRs by using quasi-TFRPRs, which can be seen as a natural extension of the crisp case. Using this consistency concept, models for judging the additive consistency of TFRPRs and for estimating missing values in incomplete TFRPRs are constructed. Then, an algorithm for decision-making with TFRPRs is developed. Finally, two numerical examples are offered to illustrate the application of the proposed procedure, and a comparative analysis is performed.
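For the crisp special case (ordinary fuzzy reciprocal preference relations with values in [0, 1]), missing entries are commonly estimated from additive transitivity, p_ij = p_ik + p_kj - 0.5. The sketch below, a simplification of our own (the paper works with triangular fuzzy numbers, which would apply the same relation component-wise), averages this estimate over all usable intermediate alternatives k:

```python
import numpy as np

def estimate_missing(P):
    """Estimate missing entries (NaN) of a fuzzy reciprocal preference relation
    via additive transitivity, averaged over intermediate alternatives k
    for which both required entries are known."""
    P = np.asarray(P, dtype=float).copy()
    n = P.shape[0]
    for i in range(n):
        for j in range(n):
            if np.isnan(P[i, j]):
                cands = [P[i, k] + P[k, j] - 0.5
                         for k in range(n)
                         if k not in (i, j)
                         and not np.isnan(P[i, k]) and not np.isnan(P[k, j])]
                if cands:
                    P[i, j] = np.clip(np.mean(cands), 0.0, 1.0)
                    P[j, i] = 1.0 - P[i, j]   # restore reciprocity
    return P
```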

  4. TOPSIS-based consensus model for group decision-making with incomplete interval fuzzy preference relations.

    Science.gov (United States)

    Liu, Fang; Zhang, Wei-Guo

    2014-08-01

    Due to the vagueness of real-world environments and the subjective nature of human judgments, it is natural for experts to estimate their judgements by using incomplete interval fuzzy preference relations. In this paper, based on the technique for order preference by similarity to ideal solution method, we present a consensus model for group decision-making (GDM) with incomplete interval fuzzy preference relations. To do this, we first define a new consistency measure for incomplete interval fuzzy preference relations. Second, a goal programming model is proposed to estimate the missing interval preference values and it is guided by the consistency property. Third, an ideal interval fuzzy preference relation is constructed by using the induced ordered weighted averaging operator, where the associated weights of characterizing the operator are based on the defined consistency measure. Fourth, a similarity degree between complete interval fuzzy preference relations and the ideal one is defined. The similarity degree is related to the associated weights, and used to aggregate the experts' preference relations in such a way that more importance is given to ones with the higher similarity degree. Finally, a new algorithm is given to solve the GDM problem with incomplete interval fuzzy preference relations, which is further applied to partnership selection in formation of virtual enterprises.

  5. Preference learning for cognitive modeling: a case study on entertainment preferences

    DEFF Research Database (Denmark)

    Yannakakis, Georgios; Maragoudakis, Manolis; Hallam, John

    2009-01-01

    Learning from preferences, which provide means for expressing a subject's desires, constitutes an important topic in machine learning research. This paper presents a comparative study of four alternative instance preference learning algorithms (both linear and nonlinear). The case study investigated is learning to predict the expressed entertainment preferences of children when playing physical games, built on their personalized playing features (entertainment modeling). Two of the approaches are derived from the literature, the large-margin algorithm (LMA) and preference learning with Gaussian processes, while the remaining two are custom-designed for the problem under investigation: meta-LMA and neuroevolution. Preference learning techniques are combined with feature set selection methods, permitting the construction of effective preference models, given suitable individual…

  6. Missing value imputation in DNA microarrays based on conjugate gradient method.

    Science.gov (United States)

    Dorri, Fatemeh; Azmi, Paeiz; Dorri, Faezeh

    2012-02-01

    Analysis of gene expression profiles requires a complete matrix of gene array values; consequently, imputation methods have been suggested. In this paper, an algorithm based on the conjugate gradient (CG) method is proposed to estimate missing values. The k-nearest neighbours of the missing entry are first selected based on the absolute values of their Pearson correlation coefficients. Then a subset of genes among the k-nearest neighbours is labeled as the most similar ones. The CG algorithm, with this subset as its input, is then used to estimate the missing values. Our proposed CG-based algorithm (CGimpute) is evaluated on different datasets. The results are compared with the sequential local least squares (SLLSimpute), Bayesian principal component analysis (BPCAimpute), local least squares imputation (LLSimpute), iterated local least squares imputation (ILLSimpute) and adaptive k-nearest neighbours imputation (KNNKimpute) methods. The average normalized root mean square error (NRMSE) and relative NRMSE on different datasets with various missing rates show that CGimpute outperforms the other methods.
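The pipeline described above (correlation-based neighbour selection followed by a CG solve) can be sketched as follows. The neighbour count, the regression setup, and all names are our own guesses at a minimal version, with CG applied to the normal equations of a local least-squares fit:

```python
import numpy as np

def conjugate_gradient(A, b, n_iter=50, tol=1e-10):
    """Solve A x = b for symmetric positive (semi-)definite A."""
    x = np.zeros_like(b)
    r = b - A @ x
    p = r.copy()
    for _ in range(n_iter):
        rr = r @ r
        if rr < tol:
            break
        Ap = A @ p
        alpha = rr / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        p = r + (r @ r / rr) * p   # beta = new rr / old rr
    return x

def cg_impute_entry(X, i, c, k=2):
    """Estimate missing X[i, c]: pick the k rows most correlated with row i
    (|Pearson r| over commonly observed columns), regress row i's observed
    entries on theirs, and solve the normal equations with CG."""
    obs = ~np.isnan(X[i])
    corr = []
    for j in range(len(X)):
        if j == i or np.isnan(X[j, c]):
            continue
        shared = obs & ~np.isnan(X[j])
        if shared.sum() > 1:
            r = np.corrcoef(X[i, shared], X[j, shared])[0, 1]
            corr.append((abs(r), j))
    nbrs = [j for _, j in sorted(corr, reverse=True)[:k]]
    cols = obs.copy()
    for j in nbrs:
        cols &= ~np.isnan(X[j])
    A = X[np.ix_(nbrs, np.flatnonzero(cols))].T   # design matrix of neighbour values
    b = X[i, cols]
    w = conjugate_gradient(A.T @ A, A.T @ b)      # CG on the normal equations
    return X[nbrs, c] @ w
```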

  7. Algorithms for Learning Preferences for Sets of Objects

    Science.gov (United States)

    Wagstaff, Kiri L.; desJardins, Marie; Eaton, Eric

    2010-01-01

    A method is being developed that enables an artificial-intelligence system to learn a user's preferences for sets of objects and thereafter automatically select subsets of objects according to those preferences. The method was originally intended to enable automated selection, from among the large sets of images acquired by instruments aboard spacecraft, of image subsets considered scientifically valuable enough to justify the use of limited communication resources for transmission to Earth. The method is also applicable to other sets of objects: examples considered in the development of the method include food menus, radio-station music playlists, and assortments of colored blocks for creating mosaics. The method does not require the user to perform the often difficult task of quantitatively specifying preferences; instead, the user provides examples of preferred sets of objects. This method goes beyond related prior artificial-intelligence methods for learning which individual items are preferred by the user: it supports a concept of set-based preferences, which include not only preferences for individual items but also preferences regarding the types and degrees of diversity of items in a set. Consideration of diversity in this method involves the recognition that members of a set may interact with each other, in the sense that, when considered together, they may be regarded as complementary, redundant, or incompatible to various degrees. The effects of such interactions are loosely summarized in the term portfolio effect. The learning method relies on a preference representation language, denoted DD-PREF, to express set-based preferences. In DD-PREF, a preference is represented by a tuple that includes quality (depth) functions to estimate how desired a specific value is, weights for each feature preference, the desired diversity of feature values, and the relative importance of diversity versus depth. The system applies statistical…

  8. The Orthogonally Partitioned EM Algorithm: Extending the EM Algorithm for Algorithmic Stability and Bias Correction Due to Imperfect Data.

    Science.gov (United States)

    Regier, Michael D; Moodie, Erica E M

    2016-05-01

    We propose an extension of the EM algorithm that exploits the common assumption of unique parameterization, corrects for biases due to missing data and measurement error, converges for the specified model when a standard implementation of the EM algorithm has a low probability of convergence, and reduces a potentially complex algorithm to a sequence of smaller, simpler, self-contained EM algorithms. We use the theory surrounding the EM algorithm to derive the theoretical results of our proposal, showing that an optimal solution over the parameter space is obtained. A simulation study is used to explore the finite-sample properties of the proposed extension when there are missing data and measurement error. We observe that partitioning the EM algorithm into simpler steps may provide better bias reduction in the estimation of model parameters. The ability to break down a complicated problem into a series of simpler, more accessible problems will permit a broader implementation of the EM algorithm, permit the use of software packages that implement and/or automate the EM algorithm, and make the EM algorithm accessible to a wider and more general audience.

  9. Learning-Based Adaptive Imputation Methodwith kNN Algorithm for Missing Power Data

    Directory of Open Access Journals (Sweden)

    Minkyung Kim

    2017-10-01

    This paper proposes a learning-based adaptive imputation method (LAI) for imputing missing power data in an energy system. The method estimates the missing power data by using patterns that appear in the collected data. In order to capture the patterns in past power data, we model a feature vector from past data and its variations. The proposed LAI then learns the optimal length of the feature vector and the optimal historical length, which are significant hyperparameters of the proposed method, by utilizing intentionally introduced missing data. Based on a weighted distance between feature vectors representing a missing situation and past situations, missing power data are estimated by referring to the k most similar past situations within the optimal historical length. We further extend the proposed LAI to alleviate the effect of unexpected variation in power data, and refer to this new approach as the extended LAI method (eLAI). The eLAI selects between linear interpolation (LI) and the proposed LAI to improve accuracy under unexpected variations. Finally, through simulations under various energy consumption profiles, we verify that the proposed eLAI achieves about a 74% reduction of the average imputation error in an energy system, compared to existing imputation methods.
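The feature-vector matching idea can be illustrated with a stripped-down sketch of our own: a window of preceding values plays the role of the feature vector, and an inverse-distance weighted average over the k most similar past situations produces the estimate (fixed window length and k here, whereas the paper learns these hyperparameters from intentionally removed data):

```python
import numpy as np

def lai_estimate(series, t, window=3, k=2):
    """Estimate missing series[t] from the k past positions whose preceding
    window of values is closest to the window preceding t."""
    x = np.asarray(series, dtype=float)
    target = x[t - window:t]                       # feature vector of the missing situation
    cands = []
    for s in range(window, t):
        if np.isnan(x[s]):
            continue
        w = x[s - window:s]                        # feature vector of a past situation
        if np.isnan(w).any():
            continue
        cands.append((np.linalg.norm(w - target), x[s]))
    cands.sort()                                   # most similar situations first
    top = cands[:k]
    weights = np.array([1.0 / (d + 1e-9) for d, _ in top])
    vals = np.array([v for _, v in top])
    return float(np.sum(weights * vals) / np.sum(weights))
```

On a periodic consumption profile the most similar past window dominates the weighted average, so the estimate tracks the repeating pattern.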

  10. Bus Timetabling as a Fuzzy Multiobjective Optimization Problem Using Preference-based Genetic Algorithm

    Directory of Open Access Journals (Sweden)

    Surafel Luleseged Tilahun

    2012-05-01

    Transportation plays a vital role in the development of a country, and the car is the most commonly used means. However, in third-world countries long waiting times for public buses are a common problem, especially when people need to switch buses. The problem becomes critical when one considers buses joining different villages and cities. Theoretically, this problem can be solved by assigning more buses to a route, which is not possible due to economic constraints. Another option is to schedule the buses so that customers who want to switch buses at junction cities do not have to wait long. This paper discusses how to model single-frequency-route bus timetabling as a fuzzy multiobjective optimization problem, and how to solve it using a preference-based genetic algorithm by assigning appropriate fuzzy preferences to the needs of the customers. The idea is elaborated with an example.

  11. Survival prediction algorithms miss significant opportunities for improvement if used for case selection in trauma quality improvement programs.

    Science.gov (United States)

    Heim, Catherine; Cole, Elaine; West, Anita; Tai, Nigel; Brohi, Karim

    2016-09-01

    Quality improvement (QI) programs have been shown to reduce preventable mortality in trauma care. Detailed review of all trauma deaths is a time- and resource-consuming process, and the calculated probability of survival (Ps) has been proposed as an audit filter so that review is limited to deaths that were 'expected to survive'. However, no Ps-based algorithm has been validated, and no study has examined elements of preventability associated with deaths classified as 'expected'. The objective of this study was to examine whether trauma performance review can be streamlined using existing mortality prediction tools without missing important areas for improvement. We conducted a retrospective study of all trauma deaths reviewed by our trauma QI program. Deaths were classified as non-preventable, possibly preventable, probably preventable, or preventable. Opportunities for improvement (OPIs) involve failures in the process of care and were classified into clinical and system deviations from standards of care. TRISS and Ps were used for calculation of probability of survival. Peer-review charts were reviewed by a single investigator. Over 8 years, 626 patients were included. One third showed elements of preventability and 4% were preventable. Preventability occurred across the entire range of calculated Ps. Limiting review to unexpected deaths would have missed over 50% of all preventability issues and a third of preventable deaths. Of the patients, 37% showed opportunities for improvement. Neither TRISS nor Ps allowed reliable identification of OPIs, and limiting peer review to patients with unexpected deaths would have missed close to 60% of all issues in care. TRISS and Ps fail to identify a significant proportion of avoidable deaths and miss important opportunities for process and system improvement. Based on this, all trauma deaths should be subjected to expert panel review in order to maximize the output of performance improvement programs.

  12. Outlier Removal in Model-Based Missing Value Imputation for Medical Datasets

    Directory of Open Access Journals (Sweden)

    Min-Wei Huang

    2018-01-01

    Many real-world medical datasets contain some proportion of missing (attribute) values. In general, missing value imputation can be performed to solve this problem: estimates for the missing values are produced by a reasoning process based on the (complete) observed data. However, if the observed data contain noisy information or outliers, the estimates of the missing values may not be reliable, or may even be quite different from the real values. The aim of this paper is to examine whether a combination of instance selection from the observed data and missing value imputation offers better performance than missing value imputation alone. In particular, three instance selection algorithms, DROP3, GA, and IB3, and three imputation algorithms, KNNI, MLP, and SVM, are used in order to find the best combination. The experimental results show that performing instance selection can have a positive impact on missing value imputation for numerical medical datasets, and that specific combinations of instance selection and imputation methods can improve the imputation results for mixed-type medical datasets. However, instance selection does not have a consistently positive impact on the imputation results for categorical medical datasets.

  13. Multiple imputation by chained equations for systematically and sporadically missing multilevel data.

    Science.gov (United States)

    Resche-Rigon, Matthieu; White, Ian R

    2018-06-01

    In multilevel settings such as individual participant data meta-analysis, a variable is 'systematically missing' if it is wholly missing in some clusters and 'sporadically missing' if it is partly missing in some clusters. Previously proposed methods to impute incomplete multilevel data handle either systematically or sporadically missing data, but frequently both patterns are observed. We describe a new multiple imputation by chained equations (MICE) algorithm for multilevel data with arbitrary patterns of systematically and sporadically missing variables. The algorithm is described for multilevel normal data but can easily be extended for other variable types. We first propose two methods for imputing a single incomplete variable: an extension of an existing method and a new two-stage method which conveniently allows for heteroscedastic data. We then discuss the difficulties of imputing missing values in several variables in multilevel data using MICE, and show that even the simplest joint multilevel model implies conditional models which involve cluster means and heteroscedasticity. However, a simulation study finds that the proposed methods can be successfully combined in a multilevel MICE procedure, even when cluster means are not included in the imputation models.
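The chained-equations idea above can be illustrated for ordinary (single-level) numeric data: cycle through the incomplete columns, regress each on all the others, and replace its missing entries with the predictions. This sketch is a deliberate simplification (point predictions from OLS rather than draws from a posterior predictive, a single chain, and no multilevel structure):

```python
import numpy as np

def mice_numeric(X, n_iter=10):
    """One chain of chained equations for numeric data: initialise the
    missing entries with column means, then repeatedly regress each
    incomplete column (OLS with intercept) on all the other columns and
    overwrite its missing entries with the predictions."""
    X = np.asarray(X, dtype=float)
    miss = np.isnan(X)
    filled = X.copy()
    col_means = np.nanmean(X, axis=0)
    for j in range(X.shape[1]):
        filled[miss[:, j], j] = col_means[j]
    for _ in range(n_iter):
        for j in range(X.shape[1]):
            if not miss[:, j].any():
                continue
            others = np.delete(filled, j, axis=1)
            A = np.column_stack([np.ones(len(X)), others])
            obs = ~miss[:, j]
            beta, *_ = np.linalg.lstsq(A[obs], filled[obs, j], rcond=None)
            filled[miss[:, j], j] = A[miss[:, j]] @ beta
    return filled
```

The multilevel extensions discussed in the paper replace each univariate OLS step with a model that carries cluster means and heteroscedastic residual variances.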

  14. Sparse subspace clustering for data with missing entries and high-rank matrix completion.

    Science.gov (United States)

    Fan, Jicong; Chow, Tommy W S

    2017-09-01

    Many methods have recently been proposed for subspace clustering, but they are often unable to handle incomplete data because of missing entries. Using matrix completion methods to recover missing entries is a common way to solve the problem. Conventional matrix completion methods require that the matrix should be of low-rank intrinsically, but most matrices are of high-rank or even full-rank in practice, especially when the number of subspaces is large. In this paper, a new method called Sparse Representation with Missing Entries and Matrix Completion is proposed to solve the problems of incomplete-data subspace clustering and high-rank matrix completion. The proposed algorithm alternately computes the matrix of sparse representation coefficients and recovers the missing entries of a data matrix. The proposed algorithm recovers missing entries through minimizing the representation coefficients, representation errors, and matrix rank. Thorough experimental study and comparative analysis based on synthetic data and natural images were conducted. The presented results demonstrate that the proposed algorithm is more effective in subspace clustering and matrix completion compared with other existing methods. Copyright © 2017 Elsevier Ltd. All rights reserved.
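For context, the conventional low-rank completion that the paper takes as its point of departure can be sketched with alternating least squares. This is a generic baseline, not the proposed sparse-representation method, and the rank, iteration count, and regularizer below are illustrative choices:

```python
import numpy as np

def als_complete(M, mask, rank=1, n_iter=100, reg=1e-3):
    """Low-rank matrix completion by alternating least squares: fit
    M ~ U @ V.T on the observed entries only (mask == True), then read
    the missing entries off the reconstructed product."""
    m, n = M.shape
    rng = np.random.default_rng(0)
    U = rng.standard_normal((m, rank))
    V = rng.standard_normal((n, rank))
    eye = reg * np.eye(rank)
    for _ in range(n_iter):
        for i in range(m):          # update each row factor
            o = mask[i]
            A = V[o]
            U[i] = np.linalg.solve(A.T @ A + eye, A.T @ M[i, o])
        for j in range(n):          # update each column factor
            o = mask[:, j]
            A = U[o]
            V[j] = np.linalg.solve(A.T @ A + eye, A.T @ M[o, j])
    return U @ V.T
```

As the abstract notes, this kind of baseline only works when the data matrix is intrinsically low-rank, which is exactly the assumption the proposed method relaxes.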

  15. Investigating preferences for color-shape combinations with gaze driven optimization method based on evolutionary algorithms.

    Science.gov (United States)

    Holmes, Tim; Zanker, Johannes M

    2013-01-01

    Studying aesthetic preference is notoriously difficult because it targets individual experience. Eye movements provide a rich source of behavioral measures that directly reflect subjective choice. To determine individual preferences for simple composition rules, we here use fixation duration as the fitness measure in a Gaze Driven Evolutionary Algorithm (GDEA), which has been demonstrated as a tool to identify aesthetic preferences (Holmes and Zanker, 2012). In the present study, the GDEA was used to investigate the preferred combinations of color and shape which have been promoted in the Bauhaus arts school. We used the same three shapes (square, circle, triangle) used by Kandinsky (1923), with the three-color palette from the original experiment (A), an extended seven-color palette (B), and eight different shape orientations (C). Participants were instructed to look for their preferred circle, triangle or square in displays with eight stimuli of different shapes, colors and rotations, in an attempt to test for a strong preference for red squares, yellow triangles and blue circles in such an unbiased experimental design and with an extended set of possible combinations. We tested six participants extensively on the different conditions and found consistent preferences for color-shape combinations for individuals, but little evidence at the group level for clear color/shape preference consistent with Kandinsky's claims, apart from some weak link between yellow and triangles. Our findings suggest substantial inter-individual differences in the presence of stable individual associations of color and shapes, but also that these associations are robust within a single individual. These individual differences go some way toward challenging the claims of the universal preference for color/shape combinations proposed by Kandinsky, but also indicate that a much larger sample size would be needed to confidently reject that hypothesis. Moreover, these experiments highlight the vast potential of the GDEA in experimental aesthetics.

  16. Investigating preferences for colour-shape combinations with gaze driven optimization method based on evolutionary algorithms.

    Directory of Open Access Journals (Sweden)

    Tim eHolmes

    2013-12-01

    Full Text Available Studying aesthetic preference is notoriously difficult because it targets individual experience. Eye movements provide a rich source of behavioural measures that directly reflect subjective choice. To determine individual preferences for simple composition rules, we here use fixation duration as the fitness measure in a Gaze Driven Evolutionary Algorithm (GDEA), which has been used as a tool to identify aesthetic preferences (Holmes & Zanker, 2012). In the present study, the GDEA was used to investigate the preferred combinations of colour and shape which have been promoted in the Bauhaus arts school. We used the same 3 shapes (square, circle, triangle) used by Kandinsky (1923), with the 3-colour palette from the original experiment (A), an extended 7-colour palette (B), and 8 different shape orientations (C). Participants were instructed to look for their preferred circle, triangle or square in displays with 8 stimuli of different shapes, colours and rotations, in an attempt to test for a strong preference for red squares, yellow triangles and blue circles in such an unbiased experimental design and with an extended set of possible combinations. We tested 6 participants extensively on the different conditions and found consistent preferences for individuals, but little evidence at the group level for preference consistent with Kandinsky’s claims, apart from some weak link between yellow and triangles. Our findings suggest substantial inter-individual differences in the presence of stable individual associations of colour and shapes, but also that these associations are robust within a single individual. These individual differences go some way towards challenging the claims of the universal preference for colour/shape combinations proposed by Kandinsky, but also indicate that a much larger sample size would be needed to confidently reject that hypothesis. Moreover, these experiments highlight the vast potential of the GDEA in experimental aesthetics.

  17. A dynamic model of the marriage market-part 1: matching algorithm based on age preference and availability.

    Science.gov (United States)

    Matthews, A P; Garenne, M L

    2013-09-01

    The matching algorithm in a dynamic marriage market model is described in this first of two companion papers. Iterative Proportional Fitting is used to find a marriage function (an age distribution of new marriages for both sexes), in a stable reference population, that is consistent with the one-sex age distributions of new marriages, and includes age preference. The one-sex age distributions (which are the marginals of the two-sex distribution) are based on the Picrate model, and age preference on a normal distribution, both of which may be adjusted by choice of parameter values. For a population that is perturbed from the reference state, the total number of new marriages is found as the harmonic mean of target totals for men and women obtained by applying reference population marriage rates to the perturbed population. The marriage function uses the age preference function, assumed to be the same for the reference and the perturbed populations, to distribute the total number of new marriages. The marriage function also has an availability factor that varies as the population changes with time, where availability depends on the supply of unmarried men and women. To simplify exposition, only first marriage is treated, and the algorithm is illustrated by application to Zambia. In the second paper, remarriage and dissolution are included. Copyright © 2013 Elsevier Inc. All rights reserved.
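The core Iterative Proportional Fitting step described above is straightforward to sketch: starting from a seed matrix that encodes age preference, alternately rescale rows and columns until the two-sex marriage distribution matches the one-sex marginals. The seed and targets below are toy values, not the Picrate-based distributions of the paper, and the harmonic-mean adjustment of the total is omitted:

```python
import numpy as np

def ipf(seed, row_targets, col_targets, n_iter=100):
    """Iterative Proportional Fitting: alternately rescale the rows and
    columns of a positive seed matrix until its marginals match the
    target row and column sums."""
    M = np.asarray(seed, dtype=float).copy()
    r = np.asarray(row_targets, dtype=float)
    c = np.asarray(col_targets, dtype=float)
    for _ in range(n_iter):
        M *= (r / M.sum(axis=1))[:, None]   # match row sums
        M *= c / M.sum(axis=0)              # match column sums
    return M

# toy example: a flat age-preference seed fitted to unequal marginals
M = ipf(np.ones((2, 2)), [10.0, 10.0], [5.0, 15.0])
```

In the model, the seed would be the age-preference function and the targets the one-sex age distributions of new marriages.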

  18. An Improved Fuzzy Based Missing Value Estimation in DNA Microarray Validated by Gene Ranking

    Directory of Open Access Journals (Sweden)

    Sujay Saha

    2016-01-01

    Full Text Available Most gene expression data analysis algorithms require the entire gene expression matrix without any missing values. Hence, it is necessary to devise methods that impute missing data values accurately. A number of imputation algorithms exist to estimate those missing values. This work starts with a microarray dataset containing multiple missing values. We first apply a modified version of the existing fuzzy-theory-based method LRFDVImpute to impute multiple missing values of time series gene expression data, and then validate the result of imputation by a genetic algorithm (GA) based gene ranking methodology along with some regular statistical validation techniques, like the RMSE measure. Gene ranking, as far as we know, has not been used before to validate the result of missing value estimation. Firstly, the proposed method has been tested on the very popular Spellman dataset, and results show that error margins have been drastically reduced compared to some previous works, which indirectly validates the statistical significance of the proposed method. Then it has been applied to four other 2-class benchmark datasets, the Colorectal Cancer tumours dataset (GDS4382), the Breast Cancer dataset (GSE349-350), the Prostate Cancer dataset, and DLBCL-FL (Leukaemia), for both missing value estimation and ranking the genes, and the results show that the proposed method can reach 100% classification accuracy with very few dominant genes, which indirectly validates the biological significance of the proposed method.

  19. $E_{\text{T}}^{\text{miss}}$ performance in the ATLAS detector using 2015-2016 LHC p-p collisions

    CERN Document Server

    The ATLAS collaboration

    2018-01-01

    The reconstruction and calibration algorithms used to measure missing transverse momentum ($E_{\text{T}}^{\text{miss}}$) with the ATLAS detector utilise energy deposits in the calorimeter and tracks reconstructed in the inner detector and the muon spectrometer. The performance of the $E_{\text{T}}^{\text{miss}}$ reconstruction algorithms is evaluated using data collected in proton-proton collisions in 2015 and 2016 at a centre-of-mass energy of $13$ TeV. Results are shown for a data sample corresponding to an integrated luminosity of $36~$fb$^{-1}$. The performance of $E_{\text{T}}^{\text{miss}}$ built with jets reconstructed using a particle flow algorithm is presented and compared to that built with calorimeter jets. Various strategies are used to suppress effects arising from additional proton-proton interactions, called pileup. The tracking and vertexing information is used to distinguish contributions from pileup entering the $E_{\text{T}}^{\text{miss}}$ calculation. The modelling of $E_{\text{T}}^{\text{miss}}$ ...

  20. Amplification of DNA mixtures--Missing data approach

    DEFF Research Database (Denmark)

    Tvedebrink, Torben; Eriksen, Poul Svante; Mogensen, Helle Smidt

    2008-01-01

    This paper presents a model for the interpretation of results of STR typing of DNA mixtures based on a multivariate normal distribution of peak areas. From previous analyses of controlled experiments with mixed DNA samples, we exploit the linear relationship between peak heights and peak areas...... DNA samples, it is only possible to observe the cumulative peak heights and areas. Complying with this latent structure, we use the EM-algorithm to impute the missing variables based on a compound symmetry model. That is the measurements are subject to intra- and inter-loci correlations not depending...... on the actual alleles of the DNA profiles. Due to factorization of the likelihood, properties of the normal distribution and use of auxiliary variables, an ordinary implementation of the EM-algorithm solves the missing data problem. We estimate the parameters in the model based on a training data set. In order...

  1. Low Multilinear Rank Approximation of Tensors and Application in Missing Traffic Data

    Directory of Open Access Journals (Sweden)

    Huachun Tan

    2014-02-01

    Full Text Available The problem of missing data in multiway arrays (i.e., tensors) is common in many fields such as bibliographic data analysis, image processing, and computer vision. We consider the problems of approximating a tensor by another tensor with low multilinear rank in the presence of missing data and possibly reconstructing it (i.e., tensor completion). In this paper, we propose a weighted Tucker model which models only the known elements, capturing the latent structure of the data and reconstructing the missing elements. To treat the nonuniqueness of the proposed weighted Tucker model, a novel gradient descent algorithm based on a Grassmann manifold, termed Tucker weighted optimization (Tucker-Wopt), is proposed to guarantee global convergence to a local minimum of the problem. Based on extensive experiments, Tucker-Wopt is shown to successfully reconstruct tensors with noise and up to 95% missing data. Furthermore, experiments on traffic flow volume data demonstrate the usefulness of our algorithm in a real-world application.

  2. AUTOMATIC CLASSIFICATION OF VARIABLE STARS IN CATALOGS WITH MISSING DATA

    International Nuclear Information System (INIS)

    Pichara, Karim; Protopapas, Pavlos

    2013-01-01

    We present an automatic classification method for astronomical catalogs with missing data. We use Bayesian networks and a probabilistic graphical model that allows us to perform inference to predict missing values given observed data and dependency relationships between variables. To learn a Bayesian network from incomplete data, we use an iterative algorithm that utilizes sampling methods and expectation maximization to estimate the distributions and probabilistic dependencies of variables from data with missing values. To test our model, we use three catalogs with missing data (SAGE, Two Micron All Sky Survey, and UBVI) and one complete catalog (MACHO). We examine how classification accuracy changes when information from missing data catalogs is included, how our method compares to traditional missing data approaches, and at what computational cost. Integrating these catalogs with missing data, we find that classification of variable objects improves by a few percent and by 15% for quasar detection while keeping the computational cost the same

  3. AUTOMATIC CLASSIFICATION OF VARIABLE STARS IN CATALOGS WITH MISSING DATA

    Energy Technology Data Exchange (ETDEWEB)

    Pichara, Karim [Computer Science Department, Pontificia Universidad Católica de Chile, Santiago (Chile); Protopapas, Pavlos [Institute for Applied Computational Science, Harvard University, Cambridge, MA (United States)

    2013-11-10

    We present an automatic classification method for astronomical catalogs with missing data. We use Bayesian networks and a probabilistic graphical model that allows us to perform inference to predict missing values given observed data and dependency relationships between variables. To learn a Bayesian network from incomplete data, we use an iterative algorithm that utilizes sampling methods and expectation maximization to estimate the distributions and probabilistic dependencies of variables from data with missing values. To test our model, we use three catalogs with missing data (SAGE, Two Micron All Sky Survey, and UBVI) and one complete catalog (MACHO). We examine how classification accuracy changes when information from missing data catalogs is included, how our method compares to traditional missing data approaches, and at what computational cost. Integrating these catalogs with missing data, we find that classification of variable objects improves by a few percent and by 15% for quasar detection while keeping the computational cost the same.

  4. K-Nearest Neighbor Intervals Based AP Clustering Algorithm for Large Incomplete Data

    Directory of Open Access Journals (Sweden)

    Cheng Lu

    2015-01-01

    Full Text Available The Affinity Propagation (AP) algorithm is an effective algorithm for clustering analysis, but it cannot be directly applied to incomplete data. In view of the prevalence of missing data and the uncertainty of missing attributes, we put forward a modified AP clustering algorithm based on K-nearest neighbor intervals (KNNI) for incomplete data. Based on an Improved Partial Data Strategy, the proposed algorithm estimates the KNNI representation of missing attributes by using the attribute distribution information of the available data, and the similarity function is adapted to handle the resulting interval data. The improved AP algorithm can then be applied to incomplete data. Experiments on several UCI datasets show that the proposed algorithm achieves impressive clustering results.

  5. Maximizing the nurses' preferences in nurse scheduling problem: mathematical modeling and a meta-heuristic algorithm

    Science.gov (United States)

    Jafari, Hamed; Salmasi, Nasser

    2015-09-01

    The nurse scheduling problem (NSP) has received a great amount of attention in recent years. In the NSP, the goal is to assign shifts to nurses in order to satisfy the hospital's demand during the planning horizon while considering different objective functions. In this research, we focus on maximizing the nurses' preferences for working shifts and weekends off, taking into account several important factors such as the hospital's policies, labor laws, governmental regulations, and the status of nurses at the end of the previous planning horizon, in one of the largest hospitals in Iran, Milad Hospital. Due to the shortage of available nurses, the minimum total number of required nurses is first determined. Then, a mathematical programming model is proposed to solve the problem optimally. Since the research problem is NP-hard, a meta-heuristic algorithm based on simulated annealing (SA) is applied to heuristically solve the problem in a reasonable time. An initial feasible solution generator and several novel neighborhood structures are applied to enhance the performance of the SA algorithm. Inspired by our observations in Milad Hospital, random test problems are generated to evaluate the performance of the SA algorithm. The results of computational experiments indicate that the applied SA algorithm provides solutions with an average percentage gap of 5.49% compared to the upper bounds obtained from the mathematical model. Moreover, the applied SA algorithm provides significantly better solutions in a reasonable time than the schedules produced by the head nurses.

  6. Community detection using preference networks

    Science.gov (United States)

    Tasgin, Mursel; Bingol, Haluk O.

    2018-04-01

    Community detection is the task of identifying clusters or groups of nodes in a network where nodes within the same group are more connected with each other than with nodes in different groups. It has practical uses in identifying similar functions or roles of nodes in many biological, social and computer networks. With the availability of very large networks in recent years, the performance and scalability of community detection algorithms have become crucial: if the time complexity of an algorithm is high, it cannot run on large networks. In this paper, we propose a new community detection algorithm which takes a local approach and is able to run on large networks. The method is simple and effective: given a network, the algorithm constructs a preference network of nodes where each node has a single outgoing edge showing the node it prefers to be in the same community with. In such a preference network, each connected component is a community. Selection of the preferred node is performed using similarity-based metrics that can be calculated in the 1-neighborhood of a node; we use two alternatives for this purpose, namely the number of common neighbors of the selector node and each of its neighbors, and the spread capability of neighbors around the selector node, calculated by the gossip algorithm of Lind et al. Our algorithm is tested on both computer-generated LFR networks and real-life networks with ground-truth community structure. It identifies communities accurately and quickly, is local and scalable, and is suitable for distributed execution on large networks.
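The construction described above is easy to sketch using the first of the two similarity metrics (the number of common neighbors); tie-breaking details and the gossip-based alternative are omitted:

```python
def preference_communities(adj):
    """Build the preference network: each node points to the neighbour
    with which it shares the most common neighbours, and each connected
    component of the resulting graph is reported as one community.
    `adj` maps node -> set of neighbours (undirected)."""
    pref = {}
    for u, nbrs in adj.items():
        if not nbrs:
            pref[u] = u
            continue
        pref[u] = max(sorted(nbrs), key=lambda v: len(nbrs & adj[v]))

    # union-find over the preference edges gives the components
    parent = {u: u for u in adj}

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]   # path halving
            x = parent[x]
        return x

    for u, v in pref.items():
        parent[find(u)] = find(v)

    comms = {}
    for u in adj:
        comms.setdefault(find(u), set()).add(u)
    return list(comms.values())
```

On two triangles joined by a single bridge edge, every node prefers a partner inside its own triangle, so the two triangles come out as separate communities.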

  7. Imputation of missing data in time series for air pollutants

    Science.gov (United States)

    Junger, W. L.; Ponce de Leon, A.

    2015-02-01

    Missing data are a major concern in epidemiological studies of the health effects of environmental air pollutants. This article presents an imputation-based method suitable for multivariate time series data, which uses the EM algorithm under the assumption of a normal distribution. Different approaches are considered for filtering the temporal component. A simulation study was performed to assess the validity and performance of the proposed method in comparison with some frequently used methods. Simulations showed that when the amount of missing data was as low as 5%, complete-data analysis yielded satisfactory results regardless of the generating mechanism of the missing data, whereas validity began to degenerate when the proportion of missing values exceeded 10%. The proposed imputation method exhibited good accuracy and precision in different settings with respect to the patterns of missing observations. Most of the imputations produced valid results, even under missing not at random. The methods proposed in this study are implemented as a package called mtsdi for the statistical software system R.
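The EM-based normal-model imputation can be illustrated with a simplified alternating scheme: fill each missing entry with its conditional mean given the row's observed entries under the current mean and covariance, then re-estimate the moments. (Full EM also adds a conditional-covariance correction term when updating the covariance, and the mtsdi package additionally handles the temporal filtering discussed above; neither is shown here.)

```python
import numpy as np

def em_impute(X, n_iter=50):
    """Alternate between (E-like) filling each missing entry with its
    conditional mean given the row's observed entries under the current
    multivariate-normal fit, and (M-like) re-estimating the mean and
    covariance from the completed matrix."""
    X = np.asarray(X, dtype=float)
    miss = np.isnan(X)
    filled = np.where(miss, np.nanmean(X, axis=0), X)
    for _ in range(n_iter):
        mu = filled.mean(axis=0)
        # small ridge keeps the observed-block inverse well conditioned
        S = np.cov(filled, rowvar=False) + 1e-6 * np.eye(X.shape[1])
        for i in range(X.shape[0]):
            m = miss[i]
            if not m.any():
                continue
            o = ~m
            coef = S[np.ix_(m, o)] @ np.linalg.inv(S[np.ix_(o, o)])
            filled[i, m] = mu[m] + coef @ (filled[i, o] - mu[o])
    return filled
```

On strongly correlated columns the scheme converges to the regression prediction, which is why it recovers far more information than mean filling.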

  8. Robust K-Median and K-Means Clustering Algorithms for Incomplete Data

    Directory of Open Access Journals (Sweden)

    Jinhua Li

    2016-01-01

    Full Text Available Incomplete data with missing feature values are prevalent in clustering problems. Traditional clustering methods first estimate the missing values by imputation and then apply classical clustering algorithms for complete data, such as K-median and K-means. However, in practice, it is often hard to obtain an accurate estimation of the missing values, which deteriorates the performance of clustering. To enhance the robustness of clustering algorithms, this paper represents the missing values by interval data and introduces the concept of a robust cluster objective function. A minimax robust optimization (RO) formulation is presented to provide clustering results that are insensitive to estimation errors. To solve the proposed RO problem, we propose robust K-median and K-means clustering algorithms with low time and space complexity. Comparisons and analysis of experimental results on both artificially generated and real-world incomplete data sets validate the robustness and effectiveness of the proposed algorithms.

  9. A hybrid frame concealment algorithm for H.264/AVC.

    Science.gov (United States)

    Yan, Bo; Gharavi, Hamid

    2010-01-01

    In packet-based video transmissions, packet loss due to channel errors may result in the loss of a whole video frame. Recently, many error concealment algorithms have been proposed to combat channel errors; however, most existing algorithms can only deal with the loss of macroblocks and are not able to conceal a whole missing frame. To resolve this problem, we propose a new hybrid motion vector extrapolation (HMVE) algorithm to recover the whole missing frame; it provides more accurate estimation of the motion vectors of the missing frame than other conventional methods. Simulation results show that it is highly effective and significantly outperforms other existing frame recovery methods.

  10. Imputing data that are missing at high rates using a boosting algorithm

    Energy Technology Data Exchange (ETDEWEB)

    Cauthen, Katherine Regina [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Lambert, Gregory [Apple Inc., Cupertino, CA (United States); Ray, Jaideep [Sandia National Lab. (SNL-CA), Livermore, CA (United States); Lefantzi, Sophia [Sandia National Lab. (SNL-CA), Livermore, CA (United States)

    2016-09-01

    Traditional multiple imputation approaches may perform poorly for datasets with high rates of missingness unless a large number m of imputations is used. This paper implements an alternative machine-learning-based approach to imputing data that are missing at high rates. Here, we use boosting to create a strong learner from a weak learner fitted to a dataset missing many observations. This approach may be applied to a variety of types of learners (models). The approach is demonstrated by application to a spatiotemporal dataset for predicting dengue outbreaks in India from meteorological covariates. A Bayesian spatiotemporal CAR model is boosted to produce imputations, and the overall RMSE from a k-fold cross-validation is used to assess imputation accuracy.
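The boosting idea, strengthening a weak learner by repeatedly fitting its residuals before using it to predict missing responses, can be sketched with one-split regression stumps on 1-D inputs. This is a generic gradient-boosting illustration, not the boosted Bayesian spatiotemporal CAR model used in the report:

```python
import numpy as np

def boost_stumps(x, y, n_rounds=200, lr=0.1):
    """Gradient boosting of one-split regression stumps on 1-D inputs.
    Each round fits a stump to the current residuals and adds a shrunk
    copy of it to the ensemble; the returned closure predicts for new
    inputs (e.g. covariate values whose response is missing)."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    base = y.mean()
    pred = np.full_like(y, base)
    stumps = []
    for _ in range(n_rounds):
        resid = y - pred
        best = None
        for t in np.unique(x)[:-1]:         # candidate split points
            left = x <= t
            lm, rm = resid[left].mean(), resid[~left].mean()
            err = ((resid - np.where(left, lm, rm)) ** 2).sum()
            if best is None or err < best[0]:
                best = (err, t, lm, rm)
        _, t, lm, rm = best
        stumps.append((t, lr * lm, lr * rm))
        pred = pred + np.where(x <= t, lr * lm, lr * rm)

    def predict(xq):
        xq = np.asarray(xq, dtype=float)
        out = np.full(xq.shape, base)
        for t, lv, rv in stumps:
            out = out + np.where(xq <= t, lv, rv)
        return out

    return predict
```

The learning rate trades off the number of rounds against overfitting; in the imputation setting, `predict` would be evaluated at the records whose response is missing.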

  11. The Network Completion Problem: Inferring Missing Nodes and Edges in Networks

    Energy Technology Data Exchange (ETDEWEB)

    Kim, M; Leskovec, J

    2011-11-14

    Network structures, such as social networks, web graphs and networks from systems biology, play important roles in many areas of science and our everyday lives. In order to study networks, one first needs to collect reliable large-scale network data. While social and information networks have become ubiquitous, the challenge of collecting complete network data still persists. Often the collected network data are incomplete, with nodes and edges missing. Commonly, only a part of the network can be observed, and we would like to infer the unobserved part. We address this issue by studying the Network Completion Problem: given a network with missing nodes and edges, can we complete the missing part? We cast the problem in the Expectation Maximization (EM) framework, where we use the observed part of the network to fit a model of network structure, estimate the missing part of the network using the model, re-estimate the parameters, and so on. We combine EM with the Kronecker graphs model and design a scalable Metropolized Gibbs sampling approach that allows estimation of the model parameters as well as inference about the missing nodes and edges of the network. Experiments on synthetic and several real-world networks show that our approach can effectively recover the network even when about half of the nodes are missing. Our algorithm outperforms not only classical link-prediction approaches but also the state-of-the-art stochastic block modeling approach. Furthermore, it easily scales to networks with tens of thousands of nodes.

  12. Strangeness freeze-out: role of system size and missing resonances

    Directory of Open Access Journals (Sweden)

    Chatterjee Sandeep

    2018-01-01

    Full Text Available The conventional approach to strangeness freeze-out has been to consider a unified freeze-out scheme where strangeness freezes out along with the non-strange hadrons (1CFO), with or without an additional parameter accounting for out-of-equilibrium strangeness production (γS). Several alternative scenarios have been formulated lately. Here, we focus on flavor-dependent freeze-out with early freeze-out of strangeness (2CFO), in comparison to 1CFO and its variants, with respect to the roles played by the system size and by missing resonances that are predicted by different theoretical approaches but yet to be seen in experiments. In contrast to 1CFO with/without γS, which is insensitive to system size, 2CFO exhibits a clear system size dependence: while for Pb+Pb the χ2/NDF is around 0-2, for the smaller systems p+Pb and p+p the χ2/NDF is > 5 and larger than for 1CFO+γS. This clearly shows a system-size dependence of the preferred freeze-out scheme: while 2CFO is preferred in Pb+Pb, 1CFO+γS is preferred in p+Pb and p+p. We have further investigated the role of the missing resonances in strangeness freeze-out across SPS to LHC beam energies.

  13. Analysis of Individual Preferences for Tuning Noise-Reduction Algorithms

    NARCIS (Netherlands)

    Houben, Rolph; Dijkstra, Tjeerd M. H.; Dreschler, Wouter A.

    2012-01-01

    There is little research on user preference for different settings of noise reduction, especially for individual users. We therefore measured individual preferences for pairs of audio streams differing in the trade-off between noise reduction and speech distortion. A logistic probability model was

  14. Uncertainty and Preference Modelling for Multiple Criteria Vehicle Evaluation

    Directory of Open Access Journals (Sweden)

    Qiuping Yang

    2010-12-01

    Full Text Available A general framework for vehicle assessment is proposed based on both mass survey information and the evidential reasoning (ER) approach. Several methods for uncertainty and preference modeling are developed within the framework, including the measurement of uncertainty caused by missing information, the estimation of missing information in original surveys, the use of nonlinear functions for data mapping, and the use of nonlinear utility functions to combine distributed assessments into a single index. The results of the investigation show that various measures can be used to represent the different preferences of decision makers towards the same feedback from respondents. Based on the ER approach, credible and informative analysis can be conducted through a complete understanding of the assessment problem in question and full exploration of the available information.

  15. Optimization of the test intervals of a nuclear safety system by genetic algorithms, solution clustering and fuzzy preference assignment

    International Nuclear Information System (INIS)

    Zio, E.; Bazzo, R.

    2010-01-01

    In this paper, a procedure is developed for identifying a number of representative solutions manageable for decision-making in a multiobjective optimization problem concerning the test intervals of the components of a safety system of a nuclear power plant. Pareto Front solutions are identified by a genetic algorithm and then clustered by subtractive clustering into 'families'. On the basis of the decision maker's preferences, each family is then synthetically represented by a 'head of the family' solution. This is done by introducing a scoring system that ranks the solutions with respect to the different objectives: a fuzzy preference assignment is employed to this purpose. Level Diagrams are then used to represent, analyze and interpret the Pareto Fronts reduced to the head-of-the-family solutions

  16. Measurement of Missing Transverse Energy

    CERN Document Server

    The ATLAS Collaboration

    2009-01-01

    This note discusses the overall ATLAS detector performance for the reconstruction of the missing transverse energy, ETmiss. Two reconstruction algorithms are discussed and their performance is evaluated for a variety of simulated physics processes which probe different topologies and different total transverse energy regimes. In addition, effects of fake ETmiss, resulting from instrumental effects and from false reconstructions are investigated. Finally, studies with first data, corresponding to an integrated luminosity of 100 pb-1, are suggested which can be used to assess and calibrate the ETmiss performance at the startup of data taking.

  17. Hesitant Probabilistic Multiplicative Preference Relations in Group Decision Making

    Directory of Open Access Journals (Sweden)

    Zia Bashir

    2018-03-01

    Full Text Available The preference of one alternative over another is a useful way to express the opinion of the decision-maker. In the process of group decision-making, preference relations are used in preference modeling of the alternatives under given criteria. The probability is an important tool to deal with uncertainty and, in many scenarios of decision-making problems, the probabilities of different events affect the decision-making process directly. In order to deal with this issue, the hesitant probabilistic multiplicative preference relation (HPMPR is defined in this paper. Furthermore, consistency of the HPMPR and consensus among decision makers are studied here. In this respect, many algorithms are developed to achieve consistency of HPMPRs, reasonable consensus between decision-makers and a final algorithm is proposed comprehending all other algorithms, presenting a complete decision support model for group decision-making. Lastly, we present a case study with complete illustration of the proposed model and discuss the effects of probabilities on decision-making validating the importance of the introduction of probability in hesitant multiplicative preference relations.

  18. Which missing value imputation method to use in expression profiles: a comparative study and two selection schemes

    Directory of Open Access Journals (Sweden)

    Lotz Meredith J

    2008-01-01

    Full Text Available Abstract Background Gene expression data frequently contain missing values; however, most downstream analyses for microarray experiments require complete data. In the literature many methods have been proposed to estimate missing values via information of the correlation patterns within the gene expression matrix. Each method has its own advantages, but the specific conditions for which each method is preferred remain largely unclear. In this report we describe an extensive evaluation of eight current imputation methods on multiple types of microarray experiments, including time series, multiple exposures, and multiple exposures × time series data. We then introduce two complementary selection schemes for determining the most appropriate imputation method for any given data set. Results We found that the optimal imputation algorithms (LSA, LLS, and BPCA are all highly competitive with each other, and that no method is uniformly superior in all the data sets we examined. The success of each method can also depend on the underlying "complexity" of the expression data, where we take complexity to indicate the difficulty in mapping the gene expression matrix to a lower-dimensional subspace. We developed an entropy measure to quantify the complexity of expression matrices and found that, by incorporating this information, the entropy-based selection (EBS scheme is useful for selecting an appropriate imputation algorithm. We further propose a simulation-based self-training selection (STS scheme. This technique has been used previously for microarray data imputation, but for different purposes. The scheme selects the optimal or near-optimal method with high accuracy but at an increased computational cost. Conclusion Our findings provide insight into the problem of which imputation method is optimal for a given data set. Three top-performing methods (LSA, LLS and BPCA are competitive with each other. Global-based imputation methods (PLS, SVD, BPCA

  19. Which missing value imputation method to use in expression profiles: a comparative study and two selection schemes.

    Science.gov (United States)

    Brock, Guy N; Shaffer, John R; Blakesley, Richard E; Lotz, Meredith J; Tseng, George C

    2008-01-10

    Gene expression data frequently contain missing values; however, most downstream analyses for microarray experiments require complete data. In the literature many methods have been proposed to estimate missing values via information of the correlation patterns within the gene expression matrix. Each method has its own advantages, but the specific conditions for which each method is preferred remain largely unclear. In this report we describe an extensive evaluation of eight current imputation methods on multiple types of microarray experiments, including time series, multiple exposures, and multiple exposures × time series data. We then introduce two complementary selection schemes for determining the most appropriate imputation method for any given data set. We found that the optimal imputation algorithms (LSA, LLS, and BPCA) are all highly competitive with each other, and that no method is uniformly superior in all the data sets we examined. The success of each method can also depend on the underlying "complexity" of the expression data, where we take complexity to indicate the difficulty in mapping the gene expression matrix to a lower-dimensional subspace. We developed an entropy measure to quantify the complexity of expression matrices and found that, by incorporating this information, the entropy-based selection (EBS) scheme is useful for selecting an appropriate imputation algorithm. We further propose a simulation-based self-training selection (STS) scheme. This technique has been used previously for microarray data imputation, but for different purposes. The scheme selects the optimal or near-optimal method with high accuracy but at an increased computational cost. Our findings provide insight into the problem of which imputation method is optimal for a given data set. Three top-performing methods (LSA, LLS and BPCA) are competitive with each other.
Global-based imputation methods (PLS, SVD, BPCA) performed better on microarray data with lower complexity.
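
One plausible form of such an entropy measure of matrix "complexity" is the Shannon entropy of the normalized singular-value spectrum; the exact formula below is an assumption in the spirit of the EBS scheme, not the authors' published definition.

```python
# Sketch (assumed form): quantify the "complexity" of an expression matrix
# by the Shannon entropy of its normalized singular-value spectrum. A matrix
# whose variance concentrates in few components (low entropy) maps well to
# a low-dimensional subspace; high entropy indicates higher complexity.
import numpy as np

def svd_entropy(matrix):
    s = np.linalg.svd(matrix, compute_uv=False)
    p = (s ** 2) / np.sum(s ** 2)       # normalized eigen-spectrum
    p = p[p > 0]
    # divide by log(k) so the result lies in [0, 1]
    return float(-np.sum(p * np.log(p)) / np.log(len(s)))

rng = np.random.default_rng(0)
rank1 = np.outer(rng.normal(size=50), rng.normal(size=20))  # low complexity
noise = rng.normal(size=(50, 20))                           # high complexity
print(svd_entropy(rank1), svd_entropy(noise))
```

A selection scheme of this kind would route low-entropy matrices to global methods (PLS, SVD, BPCA) and high-entropy matrices to local, neighbor-based methods.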

  20. Missed opportunities for HPV immunization among young adult women

    Science.gov (United States)

    Oliveira, Carlos R.; Rock, Robert M.; Shapiro, Eugene D.; Xu, Xiao; Lundsberg, Lisbet; Zhang, Liye B.; Gariepy, Aileen; Illuzzi, Jessica L.; Sheth, Sangini S.

    2018-01-01

    .61 [95% confidence interval, 1.08–2.41], P < .02). Women who reported a non-English- or non-Spanish-preferred language had lower adjusted odds of having a missed opportunity (adjusted odds ratio, 0.25 [95% confidence interval, 0.07–0.87], P = .03). No other patient characteristics assessed in this study were significantly associated with having a missed opportunity. CONCLUSION A majority of young-adult women in this study had missed opportunities for human papillomavirus immunization, and significant racial disparity was observed. The greatest frequency of missed opportunities occurred with visits for either contraception or for sexually transmitted disease screening. PMID:29223597

  1. Animation of planning algorithms

    OpenAIRE

    Sun, Fan

    2014-01-01

    Planning is the process of creating a sequence of steps/actions that will satisfy a goal of a problem. The partial order planning (POP) algorithm is one Artificial Intelligence approach to problem planning. While studying the G52PAS module, I found that it is difficult for students to understand this planning algorithm just by reading its pseudocode and doing written exercises. Students cannot see clearly how each step actually works and might miss some steps because of their confusion. ...

  2. Color preference in red–green dichromats

    Science.gov (United States)

    Álvaro, Leticia; Moreira, Humberto; Lillo, Julio; Franklin, Anna

    2015-01-01

    Around 2% of males have red–green dichromacy, which is a genetic disorder of color vision where one type of cone photoreceptor is missing. Here we investigate the color preferences of dichromats. We aim (i) to establish whether the systematic and reliable color preferences of normal trichromatic observers (e.g., preference maximum at blue, minimum at yellow-green) are affected by dichromacy and (ii) to test theories of color preference with a dichromatic sample. Dichromat and normal trichromat observers named and rated how much they liked saturated, light, dark, and focal colors twice. Trichromats had the expected pattern of preference. Dichromats had a reliable pattern of preference that was different to trichromats, with a preference maximum rather than minimum at yellow and a much weaker preference for blue than trichromats. Color preference was more affected in observers who lacked the cone type sensitive to long wavelengths (protanopes) than in those who lacked the cone type sensitive to medium wavelengths (deuteranopes). Trichromats’ preferences were summarized effectively in terms of cone-contrast between color and background, and yellow-blue cone-contrast could account for dichromats’ pattern of preference, with some evidence for residual red–green activity in deuteranopes’ preference. Dichromats’ color naming also could account for their color preferences, with colors named more accurately and quickly being more preferred. This relationship between color naming and preference also was present for trichromat males but not females. Overall, the findings provide novel evidence on how dichromats experience color, advance the understanding of why humans like some colors more than others, and have implications for general theories of aesthetics. PMID:26170287

  3. Preferences in Data Production Planning

    Science.gov (United States)

    Golden, Keith; Brafman, Ronen; Pang, Wanlin

    2005-01-01

    This paper discusses the data production problem, which consists of transforming a set of (initial) input data into a set of (goal) output data. There are typically many choices among input data and processing algorithms, each leading to significantly different end products. To discriminate among these choices, the planner supports an input language that provides a number of constructs for specifying user preferences over data (and plan) properties. We discuss these preference constructs, how we handle them to guide search, and additional challenges in the area of preference management that this important application domain offers.

  4. Performance of Jet Algorithms in CMS

    CERN Document Server

    CMS Collaboration

    The CMS Combined Software and Analysis Challenge 2007 (CSA07) is well underway and expected to produce a wealth of physics analyses to be applied to the first incoming detector data in 2008. The JetMET group of CMS supports four different jet clustering algorithms for the CSA07 Monte Carlo samples, with two different parameterizations each: \\fastkt, \\siscone, \\midpoint, and \\itcone. We present several studies comparing the performance of these algorithms using QCD dijet and \\ttbar Monte Carlo samples. We specifically observe that the \\siscone algorithm performs as well as or better than the \\midpoint algorithm in all presented studies and propose that \\siscone be adopted as the preferred cone-based jet clustering algorithm in future CMS physics analyses, as it is preferred by theorists for its infrared and collinear safety to all orders of perturbative QCD. We furthermore encourage the use of the \\fastkt algorithm, which is found to perform as well as any other algorithm under study, features dramatically reduc...

  5. Student Preferences for Instructional Methods in an Accounting Curriculum

    Science.gov (United States)

    Abeysekera, Indra

    2015-01-01

    Student preferences among instructional methods are largely unexplored across the accounting curriculum. The algorithmic rigor of courses and the societal culture can influence these preferences. This study explored students' preferences of instructional methods for learning in six courses of the accounting curriculum that differ in algorithmic…

  6. Compensatory versus noncompensatory models for predicting consumer preferences

    Directory of Open Access Journals (Sweden)

    Anja Dieckmann

    2009-04-01

    Full Text Available Standard preference models in consumer research assume that people weigh and add all attributes of the available options to derive a decision, while there is growing evidence for the use of simplifying heuristics. Recently, a greedoid algorithm has been developed (Yee, Dahan, Hauser and Orlin, 2007; Kohli and Jedidi, 2007) to model lexicographic heuristics from preference data. We compare predictive accuracies of the greedoid approach and standard conjoint analysis in an online study with a rating and a ranking task. The lexicographic model derived from the greedoid algorithm was better at predicting ranking compared to rating data, but overall, it achieved lower predictive accuracy for hold-out data than the compensatory model estimated by conjoint analysis. However, a considerable minority of participants was better predicted by lexicographic strategies. We conclude that the new algorithm will not replace standard tools for analyzing preferences, but can boost the study of situational and individual differences in preferential choice processes.
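
The contrast between the two model families can be made concrete with a toy example. The attribute weights and the lexicographic aspect order below are illustrative assumptions, not the fitted greedoid or conjoint models from the study.

```python
# Toy contrast between a compensatory (weighted-additive) rule and a
# noncompensatory lexicographic heuristic for ranking product profiles.
# Attribute levels, weights, and the priority order are invented.

def weighted_additive(option, weights):
    """Compensatory rule: weigh and add all attribute levels."""
    return sum(w * option[attr] for attr, w in weights.items())

def lexicographic_key(option, order):
    """Noncompensatory rule: compare attributes one at a time in a fixed
    priority order; later attributes only break ties on earlier ones."""
    return tuple(option[attr] for attr in order)

options = {
    "A": {"brand": 1, "price": 3, "battery": 1},
    "B": {"brand": 0, "price": 3, "battery": 5},
}
weights = {"brand": 1.0, "price": 0.5, "battery": 0.7}
order = ["brand", "price", "battery"]

rank_add = sorted(options, key=lambda o: weighted_additive(options[o], weights), reverse=True)
rank_lex = sorted(options, key=lambda o: lexicographic_key(options[o], order), reverse=True)
print(rank_add, rank_lex)
```

Here B's strong battery compensates for its weaker brand under the additive rule, while the lexicographic rule decides on brand alone, so the two rules rank the options in opposite orders.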

  7. The features of endometrium receptor apparatus in women with missed abortion

    Directory of Open Access Journals (Sweden)

    Александра Николаевна Тищенко

    2015-05-01

    Full Text Available This article develops a complex of modern investigative methods covering the endometrium, local immune changes, and hormonal status, and proposes an approach to rehabilitating reproductive function in patients with missed abortion. Methods. The study involved 124 women, including 64 women with diagnosed missed abortion. The comparison group included 30 women with physiological pregnancy who underwent artificial abortion. In addition, 30 healthy women in the pregravid period were examined. The clinical and anamnestic, histological and immunological features of the endometrium and its functional activity were investigated. Results. Pathogenetic mechanisms of menstrual dysfunction were defined in women with missed abortion. Changes in the secretory transformation of the endometrium and a sharp decrease in its receptor and functional activity, confirming deficient development of the luteal phase, were observed. Based on this research, an algorithm for the restoration of menstrual function in women in the postgravid period was developed and implemented in practice. Conclusions. The data confirmed the feasibility of implementing the proposed examination and treatment of women with missed abortion.

  8. Optimal simultaneous superpositioning of multiple structures with missing data.

    Science.gov (United States)

    Theobald, Douglas L; Steindel, Phillip A

    2012-08-01

    Superpositioning is an essential technique in structural biology that facilitates the comparison and analysis of conformational differences among topologically similar structures. Performing a superposition requires a one-to-one correspondence, or alignment, of the point sets in the different structures. However, in practice, some points are usually 'missing' from several structures, for example, when the alignment contains gaps. Current superposition methods deal with missing data simply by superpositioning a subset of points that are shared among all the structures. This practice is inefficient, as it ignores important data, and it fails to satisfy the common least-squares criterion. In the extreme, disregarding missing positions prohibits the calculation of a superposition altogether. Here, we present a general solution for determining an optimal superposition when some of the data are missing. We use the expectation-maximization algorithm, a classic statistical technique for dealing with incomplete data, to find both maximum-likelihood solutions and the optimal least-squares solution as a special case. The methods presented here are implemented in THESEUS 2.0, a program for superpositioning macromolecular structures. ANSI C source code and selected compiled binaries for various computing platforms are freely available under the GNU open source license from http://www.theseus3d.org. dtheobald@brandeis.edu Supplementary data are available at Bioinformatics online.
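
The core E-step/M-step alternation can be illustrated in a drastically simplified setting. The sketch below estimates only a mean "structure" from 1-D coordinate sets with missing points; the real THESEUS method additionally estimates rotations, translations, and covariances, which are omitted here.

```python
# Highly simplified EM-style sketch of handling missing points: alternately
# (E) fill each structure's missing coordinates with the current mean
# structure, then (M) recompute the mean from the completed data. The
# rotation/translation and covariance estimation of the real algorithm
# are deliberately left out.

def em_mean(structures, n_iter=50):
    n = len(structures[0])
    # initialize the mean from observed values only
    mean = []
    for i in range(n):
        obs = [s[i] for s in structures if s[i] is not None]
        mean.append(sum(obs) / len(obs))
    for _ in range(n_iter):
        completed = [[mean[i] if s[i] is None else s[i] for i in range(n)]
                     for s in structures]                        # E-step
        mean = [sum(c[i] for c in completed) / len(completed)
                for i in range(n)]                               # M-step
    return mean

# Three 1-D "structures" of four positions each; None marks a missing point.
structs = [[1.0, 2.0, None, 4.0],
           [1.2, None, 3.1, 4.2],
           [0.8, 2.2, 2.9, None]]
mean_structure = em_mean(structs)
print(mean_structure)
```

Unlike discarding incomplete positions, this uses every observed coordinate; in the full method the same alternation also re-estimates the superposition transformations at each iteration.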

  9. The Effect of Answering in a Preferred Versus a Non-Preferred Survey Mode on Measurement

    Directory of Open Access Journals (Sweden)

    Jolene Smyth

    2014-12-01

    Full Text Available Previous research has shown that offering respondents their preferred mode can increase response rates, but the effect of doing so on how respondents process and answer survey questions (i.e., measurement is unclear. In this paper, we evaluate whether changes in question format have different effects on data quality for those responding in their preferred mode than for those responding in a non-preferred mode for three question types (multiple answer, open-ended, and grid. Respondents were asked about their preferred mode in a 2008 survey and were recontacted in 2009. In the recontact survey, respondents were randomly assigned to one of two modes such that some responded in their preferred mode and others did not. They were also randomly assigned to one of two questionnaire forms in which the format of individual questions was varied. On the multiple answer and open-ended items, those who answered in a non-preferred mode seemed to take advantage of opportunities to satisfice when the question format allowed or encouraged it (e.g., selecting fewer items in the check-all than the forced-choice format and being more likely to skip the open-ended item when it had a larger answer box, while those who answered in a preferred mode did not. There was no difference on a grid formatted item across those who did and did not respond by their preferred mode, but results indicate that a fully labeled grid reduced item missing rates vis-à-vis a grid with only column heading labels. Results provide insight into the effect of tailoring to mode preference on commonly used questionnaire design features.

  10. Accounting for one-channel depletion improves missing value imputation in 2-dye microarray data.

    Science.gov (United States)

    Ritz, Cecilia; Edén, Patrik

    2008-01-19

    For 2-dye microarray platforms, some missing values may arise from an un-measurably low RNA expression in one channel only. Information of such "one-channel depletion" is so far not included in algorithms for imputation of missing values. Calculating the mean deviation between imputed values and duplicate controls in five datasets, we show that KNN-based imputation gives a systematic bias of the imputed expression values of one-channel depleted spots. Evaluating the correction of this bias by cross-validation showed that the mean square deviation between imputed values and duplicates were reduced up to 51%, depending on dataset. By including more information in the imputation step, we more accurately estimate missing expression values.
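
A minimal KNN imputation over a genes-by-arrays matrix shows the mechanism the paper critiques: the numbers and neighborhood size are illustrative, and, as the paper notes, this baseline ignores one-channel depletion, so depleted spots are pulled toward typical neighbor values.

```python
# Minimal KNN imputation sketch for a genes-x-arrays matrix of log-ratios
# (NaN = missing). For each missing entry, the K rows most similar on the
# shared observed columns contribute their value in the missing column.
# This baseline does NOT model one-channel depletion, which is the source
# of the systematic bias described in the paper.
import numpy as np

def knn_impute(x, k=2):
    x = x.astype(float)
    filled = x.copy()
    for i, j in zip(*np.where(np.isnan(x))):
        dists = []
        for r in range(x.shape[0]):
            if r == i or np.isnan(x[r, j]):
                continue
            shared = ~np.isnan(x[i]) & ~np.isnan(x[r])
            if shared.any():
                d = np.sqrt(np.mean((x[i, shared] - x[r, shared]) ** 2))
                dists.append((d, x[r, j]))
        nearest = sorted(dists)[:k]                  # K closest rows
        filled[i, j] = np.mean([v for _, v in nearest])
    return filled

data = np.array([[0.1, 0.2, np.nan],
                 [0.1, 0.3, 0.5],
                 [0.0, 0.2, 0.4],
                 [2.0, 2.1, 2.2]])
imputed = knn_impute(data, k=2)
print(imputed)
```

If the NaN above arose from a depleted channel (a truly low expression), the imputed 0.45 overestimates it, which is exactly the bias the authors correct by modeling depletion explicitly.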

  11. Missing money and missing markets: Reliability, capacity auctions and interconnectors

    International Nuclear Information System (INIS)

    Newbery, David

    2016-01-01

    In the energy trilemma of reliability, sustainability and affordability, politicians treat reliability as over-riding. The EU assumes the energy-only Target Electricity Model will deliver reliability but the UK argues that a capacity remuneration mechanism is needed. This paper argues that capacity auctions tend to over-procure capacity, exacerbating the missing money problem they were designed to address. The bias is further exacerbated by failing to address some of the missing market problems also neglected in the debate. It examines the case for, criticisms of, and outcome of the first GB capacity auction and problems of trading between different capacity markets. - Highlights: •Energy-only markets can work if they avoid missing money and missing market problems. •Policy makers over-estimate the cost of so-called “loss of load events”. •Policy makers tend to over-procure capacity, exacerbating the missing money problem. •Rectifying missing market problems simplifies trade between different capacity markets. •Addressing missing market problems makes under-procurement cheaper than over-procurement.

  12. Multilevel Weighted Support Vector Machine for Classification on Healthcare Data with Missing Values.

    Directory of Open Access Journals (Sweden)

    Talayeh Razzaghi

    Full Text Available This work is motivated by the needs of predictive analytics on healthcare data as represented by Electronic Medical Records. Such data are invariably problematic: noisy, with missing entries, and with imbalance in the classes of interest, leading to serious bias in predictive modeling. Since standard data mining methods often produce poor performance measures, we argue for the development of specialized techniques for data preprocessing and classification. In this paper, we propose a new method to simultaneously classify large datasets and reduce the effects of missing values. It is based on a multilevel framework of the cost-sensitive SVM and the expectation-maximization (EM) imputation method for missing values, which relies on iterated regression analyses. We compare classification results of multilevel SVM-based algorithms on public benchmark datasets with imbalanced classes and missing values as well as real data in health applications, and show that our multilevel SVM-based method produces fast, accurate and robust classification results.

  13. The Missing Tooth: Case Illustrations of a Child's Assembled, Out-of-School Authorship

    Science.gov (United States)

    Winters, Kari-Lynn

    2012-01-01

    Case illustrations of a six-year-old boy's adventures with a missing tooth are used in this paper to re-define a broader notion of authorship. Drawing on theories of social semiotics, New Literacy Studies (NLS), and critical positioning, this notion of authorship not only interweaves the boy's preferred modes of meaning-making and communication,…

  14. Recent Advancements in Lightning Jump Algorithm Work

    Science.gov (United States)

    Schultz, Christopher J.; Petersen, Walter A.; Carey, Lawrence D.

    2010-01-01

    In the past year, the primary objectives were to show the usefulness of total lightning as compared to traditional cloud-to-ground (CG) networks, test the lightning jump algorithm configurations in other regions of the country, increase the number of thunderstorms within our thunderstorm database, and pinpoint environments that could prove difficult for any lightning jump configuration. A total of 561 thunderstorms have been examined in the past year (409 non-severe, 152 severe) from four regions of the country (North Alabama, Washington D.C., High Plains of CO/KS, and Oklahoma). Results continue to indicate that the 2σ lightning jump algorithm configuration holds the most promise in terms of prospective operational lightning jump algorithms, with a probability of detection (POD) of 81%, a false alarm rate (FAR) of 45%, a critical success index (CSI) of 49% and a Heidke Skill Score (HSS) of 0.66. The second best performing algorithm configuration was the Threshold 4 algorithm, which had a POD of 72%, a FAR of 51%, a CSI of 41% and an HSS of 0.58. Because a more complex algorithm configuration shows the most promise in terms of prospective operational lightning jump algorithms, accurate thunderstorm cell tracking work must be undertaken to track lightning trends on an individual thunderstorm basis over time. While these numbers for the 2σ configuration are impressive, the algorithm does have its weaknesses. Specifically, low-topped and tropical cyclone thunderstorm environments remain problematic for the 2σ lightning jump algorithm because of the impact of suppressed vertical depth on overall flash counts (i.e., a relative dearth in lightning). For example, in a sample of 120 thunderstorms from northern Alabama that contained 72 events missed by the 2σ algorithm, 36% of the misses were associated with these two environments (17 storms).
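
The verification scores quoted above are standard 2×2 contingency-table statistics. The counts in the example below are made up for illustration; only the formulas are standard.

```python
# Standard 2x2 verification scores used to evaluate lightning jump
# configurations. The counts below are illustrative, not the study's data.

def skill_scores(hits, misses, false_alarms, correct_nulls):
    pod = hits / (hits + misses)                       # probability of detection
    far = false_alarms / (hits + false_alarms)         # false alarm ratio
    csi = hits / (hits + misses + false_alarms)        # critical success index
    a, b, c, d = hits, false_alarms, misses, correct_nulls
    hss = 2 * (a * d - b * c) / ((a + c) * (c + d) + (a + b) * (b + d))
    return pod, far, csi, hss

pod, far, csi, hss = skill_scores(hits=80, misses=20,
                                  false_alarms=40, correct_nulls=360)
print(f"POD={pod:.2f} FAR={far:.2f} CSI={csi:.2f} HSS={hss:.2f}")
```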

  15. Principled Missing Data Treatments.

    Science.gov (United States)

    Lang, Kyle M; Little, Todd D

    2018-04-01

    We review a number of issues regarding missing data treatments for intervention and prevention researchers. Many of the common missing data practices in prevention research are still, unfortunately, ill-advised (e.g., use of listwise and pairwise deletion, insufficient use of auxiliary variables). Our goal is to promote better practice in the handling of missing data. We review the current state of missing data methodology and recent missing data reporting in prevention research. We describe antiquated, ad hoc missing data treatments and discuss their limitations. We discuss two modern, principled missing data treatments: multiple imputation and full information maximum likelihood, and we offer practical tips on how to best employ these methods in prevention research. The principled missing data treatments that we discuss are couched in terms of how they improve causal and statistical inference in the prevention sciences. Our recommendations are firmly grounded in missing data theory and well-validated statistical principles for handling the missing data issues that are ubiquitous in biosocial and prevention research. We augment our broad survey of missing data analysis with references to more exhaustive resources.
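
The pooling step of multiple imputation (Rubin's rules) is small enough to show directly; the per-imputation estimates below are made-up numbers for illustration.

```python
# Pooling step of multiple imputation (Rubin's rules): analyze each of the
# m completed datasets separately, then combine the m point estimates and
# their variances. The estimates and variances below are invented.

def rubins_rules(estimates, variances):
    m = len(estimates)
    q_bar = sum(estimates) / m                              # pooled estimate
    u_bar = sum(variances) / m                              # within-imputation variance
    b = sum((q - q_bar) ** 2 for q in estimates) / (m - 1)  # between-imputation variance
    t = u_bar + (1 + 1 / m) * b                             # total variance
    return q_bar, t

pooled_est, pooled_var = rubins_rules([0.48, 0.52, 0.50, 0.55, 0.45],
                                      [0.010] * 5)
print(pooled_est, pooled_var)
```

The between-imputation term is what distinguishes this from single imputation: it propagates the uncertainty due to the missing data into the final standard error.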

  16. Cox regression with missing covariate data using a modified partial likelihood method

    DEFF Research Database (Denmark)

    Martinussen, Torben; Holst, Klaus K.; Scheike, Thomas H.

    2016-01-01

    Missing covariate values is a common problem in survival analysis. In this paper we propose a novel method for the Cox regression model that is close to maximum likelihood but avoids the use of the EM-algorithm. It exploits that the observed hazard function is multiplicative in the baseline hazard...

  17. Automatic design of decision-tree induction algorithms

    CERN Document Server

    Barros, Rodrigo C; Freitas, Alex A

    2015-01-01

    Presents a detailed study of the major design components that constitute a top-down decision-tree induction algorithm, including aspects such as split criteria, stopping criteria, pruning, and the approaches for dealing with missing values. Whereas the strategy still employed nowadays is to use a 'generic' decision-tree induction algorithm regardless of the data, the authors argue on the benefits that a bias-fitting strategy could bring to decision-tree induction, in which the ultimate goal is the automatic generation of a decision-tree induction algorithm tailored to the application domain o

  18. Tensor Completion Algorithms in Big Data Analytics

    OpenAIRE

    Song, Qingquan; Ge, Hancheng; Caverlee, James; Hu, Xia

    2017-01-01

    Tensor completion is the problem of filling in the missing or unobserved entries of partially observed tensors. Due to the multidimensional character of tensors in describing complex datasets, tensor completion algorithms and their applications have received wide attention and achieved success in areas such as data mining, computer vision, signal processing, and neuroscience. In this survey, we provide a modern overview of recent advances in tensor completion algorithms from the perspective of big data an...
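
As a minimal 2-D illustration of the idea, the "hard impute" sketch below alternates between fitting a low-rank SVD to the current completed matrix and restoring the observed entries. Real tensor completion algorithms generalize this with CP/Tucker decompositions or nuclear-norm relaxations; the data here are invented.

```python
# Minimal matrix-completion analogue of tensor completion ("hard impute"):
# alternate between a rank-r SVD reconstruction and re-imposing the
# observed entries. Illustrative only.
import numpy as np

def hard_impute(x, rank=1, n_iter=500):
    mask = ~np.isnan(x)
    filled = np.where(mask, x, 0.0)
    for _ in range(n_iter):
        u, s, vt = np.linalg.svd(filled, full_matrices=False)
        low_rank = (u[:, :rank] * s[:rank]) @ vt[:rank]   # rank-r reconstruction
        filled = np.where(mask, x, low_rank)              # keep observed entries
    return filled

# Rank-1 ground truth with two entries removed.
truth = np.outer([1.0, 2.0, 3.0], [1.0, 2.0])
obs = truth.copy()
obs[0, 1] = np.nan
obs[2, 0] = np.nan
recovered = hard_impute(obs, rank=1)
print(np.round(recovered, 3))
```

Because the underlying matrix is exactly rank 1 and the observed pattern determines it uniquely, the iteration recovers the missing entries.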

  19. An Online Causal Inference Framework for Modeling and Designing Systems Involving User Preferences: A State-Space Approach

    Directory of Open Access Journals (Sweden)

    Ibrahim Delibalta

    2017-01-01

    Full Text Available We provide a causal inference framework to model the effects of machine learning algorithms on user preferences. We then use this mathematical model to prove that the overall system can be tuned to alter those preferences in a desired manner. A user can be an online shopper or a social media user, exposed to digital interventions produced by machine learning algorithms. A user preference can be anything from inclination towards a product to a political party affiliation. Our framework uses a state-space model to represent user preferences as latent system parameters which can only be observed indirectly via online user actions such as a purchase activity or social media status updates, shares, blogs, or tweets. Based on these observations, machine learning algorithms produce digital interventions such as targeted advertisements or tweets. We model the effects of these interventions through a causal feedback loop, which alters the corresponding preferences of the user. We then introduce algorithms in order to estimate and later tune the user preferences to a particular desired form. We demonstrate the effectiveness of our algorithms through experiments in different scenarios.
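
A scalar toy version of such a state-space model can be sketched as follows. The dynamics, noise levels, and intervention gain are illustrative assumptions, not the paper's model.

```python
# Toy scalar state-space model of the framework: a latent user preference
# x_t drifts under an intervention u_t and is observed only through noisy
# actions y_t. A scalar Kalman filter tracks the latent preference.
# All parameter values (q, r, b) are illustrative assumptions.
import random

def kalman_track(observations, interventions, q=0.01, r=0.25, b=0.1):
    x_hat, p = 0.0, 1.0               # initial state estimate and variance
    estimates = []
    for y, u in zip(observations, interventions):
        x_pred = x_hat + b * u        # predict: intervention nudges preference
        p_pred = p + q
        k = p_pred / (p_pred + r)     # Kalman gain
        x_hat = x_pred + k * (y - x_pred)   # update with the observed action
        p = (1 - k) * p_pred
        estimates.append(x_hat)
    return estimates

# Simulate a preference pushed up by a constant digital intervention.
random.seed(1)
true_x, ys, us = 0.0, [], []
for _ in range(200):
    true_x += 0.1 * 1.0               # intervention shifts the true preference
    us.append(1.0)
    ys.append(true_x + random.gauss(0, 0.5))  # noisy observed action
est = kalman_track(ys, us)
print(round(est[-1], 1))
```

Closing the loop, i.e. choosing each u_t based on the current estimate to steer the preference toward a target, is the "tuning" step the paper formalizes.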

  20. Probabilistic linkage to enhance deterministic algorithms and reduce data linkage errors in hospital administrative data.

    Science.gov (United States)

    Hagger-Johnson, Gareth; Harron, Katie; Goldstein, Harvey; Aldridge, Robert; Gilbert, Ruth

    2017-06-30

    BACKGROUND: The pseudonymisation algorithm used to link together episodes of care belonging to the same patients in England (HESID) has never undergone any formal evaluation to determine the extent of data linkage error. OBJECTIVE: To quantify improvements in linkage accuracy from adding probabilistic linkage to existing deterministic HESID algorithms. METHODS: Inpatient admissions to NHS hospitals in England (Hospital Episode Statistics, HES) over 17 years (1998 to 2015) for a sample of patients (born on the 13th or 28th of a month in 1992/1998/2005/2012). We compared the existing deterministic algorithm with one that included an additional probabilistic step, in relation to a reference standard created using enhanced probabilistic matching with additional clinical and demographic information. Missed and false matches were quantified and the impact on estimates of hospital readmission within one year was determined. RESULTS: HESID produced a high missed match rate, improving over time (8.6% in 1998 to 0.4% in 2015). Missed matches were more common for ethnic minorities, those living in areas of high socio-economic deprivation, foreign patients and those with 'no fixed abode'. Estimates of the readmission rate were biased for several patient groups owing to missed matches; this bias was reduced for nearly all groups. CONCLUSION: Probabilistic linkage of HES reduced missed matches and bias in estimated readmission rates, with clear implications for commissioning, service evaluation and performance monitoring of hospitals. The existing algorithm should be modified to address data linkage error, and a retrospective update of the existing data would address existing linkage errors and their implications.
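
A common form for the probabilistic step is Fellegi-Sunter match weighting. The identifier fields, m/u probabilities, and threshold below are illustrative assumptions, not the study's actual linkage model.

```python
# Sketch of a probabilistic linkage step (Fellegi-Sunter match weights).
# Each identifier contributes log2(m/u) when it agrees and
# log2((1-m)/(1-u)) when it disagrees; pairs scoring above a threshold
# are linked. All fields, probabilities, and records here are invented.
import math

FIELDS = {
    # field: (m = P(agree | same patient), u = P(agree | different patients))
    "nhs_number": (0.95, 0.001),
    "birth_date": (0.97, 0.003),
    "sex":        (0.99, 0.5),
    "postcode":   (0.90, 0.01),
}

def match_weight(rec_a, rec_b):
    w = 0.0
    for field, (m, u) in FIELDS.items():
        if rec_a[field] == rec_b[field]:
            w += math.log2(m / u)            # agreement weight
        else:
            w += math.log2((1 - m) / (1 - u))  # disagreement penalty
    return w

a = {"nhs_number": "123", "birth_date": "1992-05-13", "sex": "F", "postcode": "N1"}
b = {"nhs_number": "123", "birth_date": "1992-05-13", "sex": "F", "postcode": "E2"}
c = {"nhs_number": "999", "birth_date": "1970-01-01", "sex": "F", "postcode": "E2"}
print(match_weight(a, b) > 15, match_weight(a, c) > 15)
```

The tolerance to a single disagreeing field (the postcode above) is what lets a probabilistic step recover matches that a strict deterministic rule would miss, e.g. after a patient moves.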

  1. Maximum likelihood positioning algorithm for high-resolution PET scanners

    International Nuclear Information System (INIS)

    Gross-Weege, Nicolas; Schug, David; Hallen, Patrick; Schulz, Volkmar

    2016-01-01

    Purpose: In high-resolution positron emission tomography (PET), lightsharing elements are incorporated into typical detector stacks to read out scintillator arrays in which one scintillator element (crystal) is smaller than the size of the readout channel. In order to identify the hit crystal by means of the measured light distribution, a positioning algorithm is required. One commonly applied positioning algorithm uses the center of gravity (COG) of the measured light distribution. The COG algorithm is limited in spatial resolution by noise and intercrystal Compton scatter. The purpose of this work is to develop a positioning algorithm which overcomes this limitation. Methods: The authors present a maximum likelihood (ML) algorithm which compares a set of expected light distributions given by probability density functions (PDFs) with the measured light distribution. Instead of modeling the PDFs by using an analytical model, the PDFs of the proposed ML algorithm are generated assuming a single-gamma-interaction model from measured data. The algorithm was evaluated with a hot-rod phantom measurement acquired with the preclinical HYPERION II D PET scanner. In order to assess the performance with respect to sensitivity, energy resolution, and image quality, the ML algorithm was compared to a COG algorithm which calculates the COG from a restricted set of channels. The authors studied the energy resolution of the ML and the COG algorithm regarding incomplete light distributions (missing channel information caused by detector dead time). Furthermore, the authors investigated the effects of using a filter based on the likelihood values on sensitivity, energy resolution, and image quality. Results: A sensitivity gain of up to 19% was demonstrated in comparison to the COG algorithm for the selected operation parameters. Energy resolution and image quality were on a similar level for both algorithms. Additionally, the authors demonstrated that the performance of the ML
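
The contrast between the two positioning approaches can be sketched for a 1-D light distribution over readout channels. The per-crystal templates and the measured signal below are invented for illustration; the real algorithm derives its PDFs from measured data.

```python
# Toy contrast of the two positioning approaches: center of gravity (COG)
# versus maximum likelihood over per-crystal light-distribution templates.
# Templates and the measured signal are invented assumptions.
import math

# Assumed per-crystal templates: expected fraction of light per channel.
TEMPLATES = {
    "crystal_0": [0.70, 0.25, 0.05],
    "crystal_1": [0.25, 0.50, 0.25],
    "crystal_2": [0.05, 0.25, 0.70],
}

def cog(signal):
    """Center of gravity of the measured light distribution."""
    return sum(i * s for i, s in enumerate(signal)) / sum(signal)

def ml_crystal(signal):
    """Pick the template maximizing a Poisson log-likelihood of the signal."""
    def loglik(template):
        return sum(s * math.log(t) - t for s, t in zip(signal, template))
    return max(TEMPLATES, key=lambda k: loglik(TEMPLATES[k]))

signal = [60.0, 30.0, 10.0]     # light mostly in the first channel
print(cog(signal), ml_crystal(signal))
```

Because the ML step compares the whole measured distribution against expected shapes, it degrades more gracefully than COG when channels are noisy or missing, which is the motivation for the likelihood-based filter evaluated in the paper.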

  2. Multi-sparse dictionary colorization algorithm based on the feature classification and detail enhancement

    Science.gov (United States)

    Yan, Dan; Bai, Lianfa; Zhang, Yi; Han, Jing

    2018-02-01

    To address the missing details and limited performance of colorization based on sparse representation, we propose a conceptual model framework for colorizing gray-scale images, and on this framework we build a multi-sparse dictionary colorization algorithm based on feature classification and detail enhancement (CEMDC). The algorithm achieves a natural colorized effect for a gray-scale image that is consistent with human vision. First, the algorithm establishes a multi-sparse dictionary classification colorization model. Then, to improve the classification accuracy, a corresponding local constraint algorithm is proposed. Finally, we propose a detail-enhancement step based on the Laplacian pyramid, which is effective in recovering missing details and improving the speed of image colorization. In addition, the algorithm not only colorizes visual gray-scale images, but can also be applied to other areas, such as color transfer between color images and the colorization of gray fusion images and infrared images.

  3. The performance of the ATLAS missing transverse momentum high-level trigger in 2015 pp collisions at $13$ TeV

    CERN Document Server

    AUTHOR|(INSPIRE)INSPIRE-00534627

    The performance of the ATLAS missing transverse momentum (${E_\\text{T}^\\text{miss}}$) high-level trigger during 2015 operation is presented. In 2015, the Large Hadron Collider operated at a higher centre-of-mass energy and shorter bunch spacing ($\\sqrt{s} = 13$ TeV and $25$ ns, respectively) than in previous operation. In future operation, the Large Hadron Collider will operate at even higher instantaneous luminosity ($\\mathcal{O}(10^{34})\\text{ cm}^{-2}\\text{ s}^{-1}$) and produce a higher average number of interactions per bunch crossing, $\\langle \\mu \\rangle$. These operating conditions will pose significant challenges to the ${E_\\text{T}^\\text{miss}}$ trigger efficiency and rate. An overview of the new algorithms implemented to address these challenges, and of the existing algorithms, is given. An integrated luminosity of $1.4\\text{ fb}^{-1}$ with $\\langle \\mu \\rangle = 14$ was collected from pp collisions of the Large Hadron Collider by the ATLAS detector during October and November 2015 and was used to s...

  4. Handling missing data for the identification of charged particles in a multilayer detector: A comparison between different imputation methods

    Energy Technology Data Exchange (ETDEWEB)

    Riggi, S., E-mail: sriggi@oact.inaf.it [INAF - Osservatorio Astrofisico di Catania (Italy); Riggi, D. [Keras Strategy - Milano (Italy); Riggi, F. [Dipartimento di Fisica e Astronomia - Università di Catania (Italy); INFN, Sezione di Catania (Italy)

    2015-04-21

    Identification of charged particles in a multilayer detector by the energy loss technique may also be achieved by the use of a neural network. The performance of the network degrades when a large fraction of the information is missing, for instance due to detector inefficiencies. Algorithms which impute missing information have been developed over the past years. Among the various approaches, we focused on normal mixture models in comparison with standard mean imputation and multiple imputation methods. Further, to account for the intrinsic asymmetry of the energy loss data, we considered skew-normal mixture models and provided a closed-form implementation in the Expectation-Maximization (EM) algorithm framework to handle missing patterns. The method has been applied to a test case where the energy losses of pions, kaons and protons in a six-layer silicon detector are considered as input neurons to a neural network. Results are given in terms of reconstruction efficiency and purity of the various species in different momentum bins.
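
    The simplest baseline compared above, mean imputation, can be sketched in a few lines: each missing layer reading is replaced by the column mean of the observed values. This is a generic illustration (the mixture-model EM approach the paper develops refines exactly this kind of fill-in), and the energy-loss values below are invented.

```python
def mean_impute(rows):
    """Replace each None with the mean of the observed values in its column."""
    cols = range(len(rows[0]))
    means = []
    for j in cols:
        obs = [r[j] for r in rows if r[j] is not None]
        means.append(sum(obs) / len(obs))
    return [[means[j] if r[j] is None else r[j] for j in cols] for r in rows]

# Invented energy losses in three detector layers; None marks an inefficiency.
losses = [[1.2, None, 1.1],
          [1.0, 0.9, None],
          [1.4, 1.1, 1.3]]
print(mean_impute(losses))
```

Mean imputation ignores correlations between layers, which is precisely why mixture-model approaches tend to do better on asymmetric, correlated energy-loss data.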

  5. Radiologists' preferences for digital mammographic display. The International Digital Mammography Development Group.

    Science.gov (United States)

    Pisano, E D; Cole, E B; Major, S; Zong, S; Hemminger, B M; Muller, K E; Johnston, R E; Walsh, R; Conant, E; Fajardo, L L; Feig, S A; Nishikawa, R M; Yaffe, M J; Williams, M B; Aylward, S R

    2000-09-01

    To determine the preferences of radiologists among eight different image processing algorithms applied to digital mammograms obtained for screening and diagnostic imaging tasks. Twenty-eight images representing histologically proved masses or calcifications were obtained by using three clinically available digital mammographic units. Images were processed and printed on film by using manual intensity windowing, histogram-based intensity windowing, mixture model intensity windowing, peripheral equalization, multiscale image contrast amplification (MUSICA), contrast-limited adaptive histogram equalization, Trex processing, and unsharp masking. Twelve radiologists compared the processed digital images with screen-film mammograms obtained in the same patient for breast cancer screening and breast lesion diagnosis. For the screening task, screen-film mammograms were preferred to all digital presentations, although the acceptability of images processed with the Trex and MUSICA algorithms was not significantly different. All printed digital images were preferred to screen-film radiographs in the diagnosis of masses; mammograms processed with unsharp masking were significantly preferred. For the diagnosis of calcifications, no processed digital mammogram was preferred to screen-film mammograms. When digital mammograms were preferred to screen-film mammograms, radiologists selected different digital processing algorithms for each of the three mammographic reading tasks and for different lesion types. Soft-copy display will eventually allow radiologists to select among these options more easily.
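
    Of the algorithms listed above, unsharp masking is the simplest to illustrate: a blurred copy is subtracted from the original and the difference is amplified, which boosts edges. The 1-D sketch below is a generic illustration, not the study's implementation; the box-blur kernel and the `amount` parameter are arbitrary choices.

```python
def box_blur(signal, radius=1):
    """Simple moving-average blur with edge clamping."""
    out = []
    for i in range(len(signal)):
        lo, hi = max(0, i - radius), min(len(signal), i + radius + 1)
        out.append(sum(signal[lo:hi]) / (hi - lo))
    return out

def unsharp_mask(signal, amount=1.0, radius=1):
    """sharpened = original + amount * (original - blurred)."""
    blurred = box_blur(signal, radius)
    return [s + amount * (s - b) for s, b in zip(signal, blurred)]

edge = [0, 0, 0, 10, 10, 10]   # a step edge
print(unsharp_mask(edge))
```

The characteristic undershoot/overshoot either side of the step is what makes mass margins more conspicuous, and also why the technique can exaggerate noise around calcifications.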

  6. Performance of the CMS Jets and Missing Transverse Energy Trigger at LHC Run 2

    CERN Document Server

    Nachtman, Jane; Dordevic, Milos; Kaya, Mithat; Kaya, Ozlem; Kirschenmann, Henning; Zhang, Fengwangdong

    2017-01-01

    In preparation for collecting proton-proton collisions from the LHC at a center-of-mass energy of 13 TeV and rate of 40 MHz with increasing instantaneous luminosity, the CMS collaboration prepared an array of triggers utilizing jets and missing transverse energy for searches for new physics at the energy frontier as well as for SM precision measurements. The CMS trigger system must be able to sift through the collision events in order to extract events of interest at a rate of 1 kHz, applying sophisticated algorithms adapted for fast and effective operation. Particularly important is the calibration of the trigger objects, as corrections to the measured energy may be substantial. Equally important is the development of improved reconstruction algorithms to mitigate negative effects due to high numbers of overlapping proton-proton collisions and increased levels of beam-related effects. Work by the CMS collaboration on upgrading the high-level trigger for jets and missing transverse energy for the upgraded LHC o...

  7. A study of using smartphone to detect and identify construction workers' near-miss falls based on ANN

    Science.gov (United States)

    Zhang, Mingyuan; Cao, Tianzhuo; Zhao, Xuefeng

    2018-03-01

    As an effective fall-accident prevention method, insight into near-miss falls provides an efficient way to find the causes of fall accidents, classify the types of near-miss falls and control the potential hazards. In this context, the paper proposes a method to detect and identify near-miss falls that occur when a worker walks in a workplace, based on an artificial neural network (ANN). The energy variation generated by workers who experience near-miss falls is measured by sensors embedded in a smartphone. Two experiments were designed to train the algorithm to identify various types of near-miss falls and to test the recognition accuracy, respectively. Finally, a test was conducted by workers wearing smartphones as they walked around a simulated construction workplace. The motion data were collected, processed and input to the trained ANN to detect and identify near-miss falls. Thresholds were obtained to measure the relationship between near-miss falls and fall accidents in a quantitative way. This approach, which integrates the smartphone and the ANN, helps detect near-miss fall events and identify hazardous elements and vulnerable workers, providing opportunities to eliminate dangerous conditions on a construction site or to alert possible victims who need to change their behavior before a fall accident occurs.
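
    The detection step above rests on the energy variation of the accelerometer signal. As a hedged sketch of that idea only: a window of acceleration magnitudes from the phone is reduced to a signal-energy feature and compared against a decision boundary separating normal gait from a near-miss stumble. In the paper a trained ANN, not a fixed threshold, makes this decision; the threshold and sample values below are invented.

```python
def window_energy(accel, g=9.8):
    """Mean squared deviation of acceleration magnitude from gravity."""
    return sum((a - g) ** 2 for a in accel) / len(accel)

def near_miss(accel, threshold=4.0):   # invented threshold, stand-in for the ANN
    return window_energy(accel) > threshold

walking = [9.6, 9.9, 10.1, 9.7]        # invented steady-gait window
stumble = [9.8, 14.5, 3.2, 11.0]       # invented near-miss window
print(near_miss(walking), near_miss(stumble))
```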

  8. Tolerance to missing data using a likelihood ratio based classifier for computer-aided classification of breast cancer

    International Nuclear Information System (INIS)

    Bilska-Wolak, Anna O; Floyd, Carey E Jr

    2004-01-01

    While mammography is a highly sensitive method for detecting breast tumours, its ability to differentiate between malignant and benign lesions is low, which may result in as many as 70% of biopsies being unnecessary. The purpose of this study was to develop a highly specific computer-aided diagnosis algorithm to improve classification of mammographic masses. A classifier based on the likelihood ratio was developed to accommodate cases with missing data. Data for development included 671 biopsy cases (245 malignant), with biopsy-proved outcome. Sixteen features based on the BI-RADS TM lexicon and patient history had been recorded for the cases, with 1.3 ± 1.1 missing feature values per case. Classifier evaluation methods included receiver operating characteristic and leave-one-out bootstrap sampling. The classifier achieved 32% specificity at 100% sensitivity on the 671 cases with 16 features that had missing values. Utilizing just the seven features present for all cases resulted in decreased performance at 100% sensitivity with average 19% specificity. No cases and no feature data were omitted during classifier development, showing that it is more beneficial to utilize cases with missing values than to discard incomplete cases that cannot be handled by many algorithms. Classification of mammographic masses was commendable at high sensitivity levels, indicating that benign cases could be potentially spared from biopsy.
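
    One simple way a likelihood-ratio classifier can tolerate missing data, in the spirit of the study above (not necessarily the authors' exact estimator), is to multiply per-feature likelihood ratios only over the values actually recorded and silently skip missing features. The feature likelihood tables below are invented.

```python
def likelihood_ratio(case, p_malignant, p_benign):
    """case: dict feature -> value, or None when the feature is missing."""
    lr = 1.0
    for feat, val in case.items():
        if val is None:          # tolerate missing data: ignore the feature
            continue
        lr *= p_malignant[feat][val] / p_benign[feat][val]
    return lr

# Invented per-feature likelihood tables (stand-ins for BI-RADS features).
p_mal = {"margin": {"spiculated": 0.7, "smooth": 0.3},
         "shape": {"irregular": 0.6, "round": 0.4}}
p_ben = {"margin": {"spiculated": 0.2, "smooth": 0.8},
         "shape": {"irregular": 0.3, "round": 0.7}}

complete = {"margin": "spiculated", "shape": "irregular"}
partial = {"margin": "spiculated", "shape": None}
print(likelihood_ratio(complete, p_mal, p_ben))   # ≈ 7.0
print(likelihood_ratio(partial, p_mal, p_ben))    # ≈ 3.5
```

The incomplete case still yields a usable (if less extreme) ratio, which is why such cases need not be discarded.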

  9. A novel application of PageRank and user preference algorithms for assessing the relative performance of track athletes in competition.

    Science.gov (United States)

    Beggs, Clive B; Shepherd, Simon J; Emmonds, Stacey; Jones, Ben

    2017-01-01

    Ranking enables coaches, sporting authorities, and pundits to determine the relative performance of individual athletes and teams in comparison to their peers. While ranking is relatively straightforward in sports that employ traditional leagues, it is more difficult in sports where competition is fragmented (e.g. athletics, boxing, etc.), with not all competitors competing against each other. In such situations, complex points systems are often employed to rank athletes. However, these systems have the inherent weakness that they frequently rely on subjective assessments in order to gauge the calibre of the competitors involved. Here we show how two Internet derived algorithms, the PageRank (PR) and user preference (UP) algorithms, when utilised with a simple 'who beat who' matrix, can be used to accurately rank track athletes, avoiding the need for subjective assessment. We applied the PR and UP algorithms to the 2015 IAAF Diamond League men's 100m competition and compared their performance with the Keener, Colley and Massey ranking algorithms. The top five places computed by the PR and UP algorithms, and the Diamond League '2016' points system were all identical, with the Kendall's tau distance between the PR standings and '2016' points system standings being just 15, indicating that only 5.9% of pairs differed in their order between these two lists. By comparison, the UP and '2016' standings displayed a less strong relationship, with a tau distance of 95, indicating that 37.6% of the pairs differed in their order. When compared with the standings produced using the Keener, Colley and Massey algorithms, the PR standings appeared to be closest to the Keener standings (tau distance = 67, 26.5% pair order disagreement), whereas the UP standings were more similar to the Colley and Massey standings, with the tau distances between these ranking lists being only 48 (19.0% pair order disagreement) and 59 (23.3% pair order disagreement) respectively. 
In particular, the
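
    The 'who beat who' idea above can be sketched with plain power iteration: each loss contributes a link from the loser to the winner, so rank flows toward athletes who beat other highly ranked athletes. This is a hedged toy version with invented head-to-head counts (the names are placeholders, not 2015 Diamond League results); undefeated athletes are treated as dangling nodes.

```python
def pagerank(beats, d=0.85, iters=200):
    """beats[a][b] = number of times athlete a beat athlete b."""
    athletes = sorted(beats)
    n = len(athletes)
    losses = {x: sum(beats[w].get(x, 0) for w in athletes) for x in athletes}
    rank = {a: 1.0 / n for a in athletes}
    for _ in range(iters):
        # mass held by undefeated (dangling) athletes is spread uniformly
        dangling = sum(rank[x] for x in athletes if losses[x] == 0)
        new = {}
        for a in athletes:
            inflow = sum(rank[x] * beats[a].get(x, 0) / losses[x]
                         for x in athletes if losses[x] > 0)
            new[a] = (1 - d) / n + d * (inflow + dangling / n)
        rank = new
    return sorted(athletes, key=rank.get, reverse=True)

beats = {  # invented head-to-head results
    "A": {"B": 2, "C": 1},
    "B": {"C": 1},
    "C": {},
}
print(pagerank(beats))
```

No subjective calibre assessment enters anywhere: the standings follow purely from the head-to-head matrix.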

  10. A novel application of PageRank and user preference algorithms for assessing the relative performance of track athletes in competition.

    Directory of Open Access Journals (Sweden)

    Clive B Beggs

    Full Text Available Ranking enables coaches, sporting authorities, and pundits to determine the relative performance of individual athletes and teams in comparison to their peers. While ranking is relatively straightforward in sports that employ traditional leagues, it is more difficult in sports where competition is fragmented (e.g. athletics, boxing, etc.), with not all competitors competing against each other. In such situations, complex points systems are often employed to rank athletes. However, these systems have the inherent weakness that they frequently rely on subjective assessments in order to gauge the calibre of the competitors involved. Here we show how two Internet derived algorithms, the PageRank (PR) and user preference (UP) algorithms, when utilised with a simple 'who beat who' matrix, can be used to accurately rank track athletes, avoiding the need for subjective assessment. We applied the PR and UP algorithms to the 2015 IAAF Diamond League men's 100m competition and compared their performance with the Keener, Colley and Massey ranking algorithms. The top five places computed by the PR and UP algorithms, and the Diamond League '2016' points system were all identical, with the Kendall's tau distance between the PR standings and '2016' points system standings being just 15, indicating that only 5.9% of pairs differed in their order between these two lists. By comparison, the UP and '2016' standings displayed a less strong relationship, with a tau distance of 95, indicating that 37.6% of the pairs differed in their order. When compared with the standings produced using the Keener, Colley and Massey algorithms, the PR standings appeared to be closest to the Keener standings (tau distance = 67, 26.5% pair order disagreement), whereas the UP standings were more similar to the Colley and Massey standings, with the tau distances between these ranking lists being only 48 (19.0% pair order disagreement) and 59 (23.3% pair order disagreement) respectively. In

  11. Approximate dynamic programming approaches for appointment scheduling with patient preferences.

    Science.gov (United States)

    Li, Xin; Wang, Jin; Fung, Richard Y K

    2018-04-01

    During the appointment booking process in out-patient departments, the level of patient satisfaction can be affected by whether or not their preferences can be met, including the choice of physicians and preferred time slot. In addition, because the appointments are sequential, considering future possible requests is also necessary for a successful appointment system. This paper proposes a Markov decision process model for optimizing the scheduling of sequential appointments with patient preferences. In contrast to existing models, the evaluation of a booking decision in this model focuses on the extent to which preferences are satisfied. Characteristics of the model are analysed to develop a system for formulating booking policies. Based on these characteristics, two types of approximate dynamic programming algorithms are developed to avoid the curse of dimensionality. Experimental results suggest directions for further fine-tuning of the model, as well as improving the efficiency of the two proposed algorithms. Copyright © 2018 Elsevier B.V. All rights reserved.

  12. Comparing Online Algorithms for Bin Packing Problems

    DEFF Research Database (Denmark)

    Epstein, Leah; Favrholdt, Lene Monrad; Kohrt, Jens Svalgaard

    2012-01-01

    The relative worst-order ratio is a measure of the quality of online algorithms. In contrast to the competitive ratio, this measure compares two online algorithms directly instead of using an intermediate comparison with an optimal offline algorithm. In this paper, we apply the relative worst-order ratio to online algorithms for several common variants of the bin packing problem. We mainly consider pairs of algorithms that are not distinguished by the competitive ratio and show that the relative worst-order ratio prefers the intuitively better algorithm of each pair.

  13. Cascade-robustness optimization of coupling preference in interconnected networks

    International Nuclear Information System (INIS)

    Zhang, Xue-Jun; Xu, Guo-Qiang; Zhu, Yan-Bo; Xia, Yong-Xiang

    2016-01-01

    Highlights: • A specific memetic algorithm was proposed to optimize coupling links. • A small toy model was investigated to examine the underlying mechanism. • The MA optimized strategy exhibits a moderate assortative pattern. • A novel coupling coefficient index was proposed to quantify coupling preference. - Abstract: Recently, the robustness of interconnected networks has attracted extensive attention; one line of research investigates the influence of coupling preference. In this paper, the memetic algorithm (MA) is employed to optimize the coupling links of interconnected networks. Afterwards, a comparison is made between the MA optimized coupling strategy and traditional assortative, disassortative and random coupling preferences. It is found that the MA optimized coupling strategy, with a moderate assortative value, shows outstanding performance against cascading failures on both synthetic scale-free interconnected networks and real-world networks. We then explain this phenomenon from a microscopic point of view and propose a coupling coefficient index to quantify the coupling preference. Our work is helpful for the design of robust interconnected networks.

  14. Missed pills: frequency, reasons, consequences and solutions.

    Science.gov (United States)

    Chabbert-Buffet, Nathalie; Jamin, Christian; Lete, Iñaki; Lobo, Paloma; Nappi, Rossella E; Pintiaux, Axelle; Häusler, Günther; Fiala, Christian

    2017-06-01

    Oral hormonal contraception is an effective contraceptive method as long as regular daily intake is maintained. However, a daily routine is a constraint for many women and can lead to missed pills, pill discontinuation and/or unintended pregnancy. This article describes the frequency of inconsistent use, the consequences, the risk factors and the possible solutions. The article comprises a narrative review of the literature. Forgetting one to three pills per cycle is a frequent problem, affecting 15-51% of users, particularly adolescents. The reasons for this are age, inability to establish a routine, pill unavailability, side effects, loss of motivation and lack of involvement in the initial decision to use oral contraceptives. The consequences are 'escape ovulations' and, possibly, unintended pregnancy. Solutions are either to use a long-acting method or, for women who prefer to take oral contraceptives, to use a continuous or long-cycle regimen to reduce the risk of follicular development and thus the likelihood of ovulation and unintended pregnancy. A progestogen with a long half-life can increase ovarian suppression. For women deciding to use oral contraceptives, a shortened or eliminated hormone-free interval and a progestogen with a long half-life may be an option to reduce the negative consequences of missed oral contraceptive pills.

  15. Linear Time Local Approximation Algorithm for Maximum Stable Marriage

    Directory of Open Access Journals (Sweden)

    Zoltán Király

    2013-08-01

    Full Text Available We consider a two-sided market under incomplete preference lists with ties, where the goal is to find a maximum size stable matching. The problem is APX-hard, and a 3/2-approximation was given by McDermid [1]. This algorithm has a non-linear running time and, more importantly, needs global knowledge of all preference lists. We present a very natural, economically reasonable, local, linear time algorithm with the same ratio, using some ideas of Paluch [2]. In this algorithm every person makes decisions using only their own list, and some information asked from members of these lists (as in the case of the famous algorithm of Gale and Shapley). Some consequences for the Hospitals/Residents problem are also discussed.
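
    For contrast with the approximation algorithm above (which handles incomplete lists with ties), the classical Gale-Shapley deferred-acceptance procedure for complete, strict lists can be sketched as follows; the participants and preferences are invented.

```python
def gale_shapley(men_prefs, women_prefs):
    """Man-proposing deferred acceptance; returns a stable man -> woman matching."""
    free = list(men_prefs)              # men still without a partner
    engaged = {}                        # woman -> man
    next_choice = {m: 0 for m in men_prefs}
    # Precompute each woman's ranking of men (lower index = more preferred).
    wrank = {w: {m: i for i, m in enumerate(p)} for w, p in women_prefs.items()}
    while free:
        m = free.pop()
        w = men_prefs[m][next_choice[m]]    # m proposes to his best untried woman
        next_choice[m] += 1
        if w not in engaged:
            engaged[w] = m
        elif wrank[w][m] < wrank[w][engaged[w]]:   # w prefers the new proposer
            free.append(engaged[w])
            engaged[w] = m
        else:
            free.append(m)                  # w rejects m
    return {m: w for w, m in engaged.items()}

men = {"a": ["x", "y"], "b": ["x", "y"]}       # invented preference lists
women = {"x": ["b", "a"], "y": ["a", "b"]}
print(gale_shapley(men, women))
```

Each person consults only their own list plus the identity of the current proposer, which is exactly the kind of locality the paper's algorithm preserves.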

  16. Model-Free Adaptive Control Algorithm with Data Dropout Compensation

    Directory of Open Access Journals (Sweden)

    Xuhui Bu

    2012-01-01

    Full Text Available The convergence of the model-free adaptive control (MFAC) algorithm can be guaranteed when the system is subject to measurement data dropout. However, the convergence of the system output slows as the dropout rate increases. This paper proposes an MFAC algorithm with data compensation. The missing data are first estimated using the dynamical linearization method, and the estimated values are then used to update the control input. A convergence analysis of the proposed MFAC algorithm is given, and its effectiveness is also validated by simulations. It is shown that the proposed algorithm can compensate for the effect of data dropout, and that better output performance can be obtained.
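
    As a rough illustration of the compensation idea only (not the paper's MFAC control law or its dynamical-linearization estimator): when a measurement is dropped, a value extrapolated from the last received outputs is substituted so the update step never sees a gap. The sample sequence is invented.

```python
def compensate(measurements):
    """measurements: list of floats, with None marking dropped samples.
    Missing values are replaced by linear extrapolation from the last two
    compensated outputs (or a zero-order hold when only one is available)."""
    filled, history = [], []
    for y in measurements:
        if y is None and len(history) >= 2:
            y = 2 * history[-1] - history[-2]   # linear extrapolation
        elif y is None and history:
            y = history[-1]                     # hold last value
        filled.append(y)
        history.append(y)
    return filled

print(compensate([1.0, 2.0, None, 4.0, None, None]))
```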

  17. Measuring Consumer Preferences Using Conjoint Poker

    NARCIS (Netherlands)

    O. Toubia; M.G. de Jong (Martijn); D. Stieger; J.H. Fuller (John)

    2012-01-01

    We develop and test an incentive-compatible Conjoint Poker (CP) game. The preference data collected in the context of this game are comparable to incentive-compatible choice-based conjoint (CBC) analysis data. We develop a statistical efficiency measure and an algorithm to construct

  18. Preference Learning and Ranking by Pairwise Comparison

    Science.gov (United States)

    Fürnkranz, Johannes; Hüllermeier, Eyke

    This chapter provides an overview of recent work on preference learning and ranking via pairwise classification. The learning by pairwise comparison (LPC) paradigm is the natural machine learning counterpart to the relational approach to preference modeling and decision making. From a machine learning point of view, LPC is especially appealing as it decomposes a possibly complex prediction problem into a certain number of learning problems of the simplest type, namely binary classification. We explain how to approach different preference learning problems, such as label and instance ranking, within the framework of LPC. We primarily focus on methodological aspects, but also address theoretical questions as well as algorithmic and complexity issues.
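
    The decomposition described above can be sketched at prediction time: one binary model per label pair votes for its preferred label, and labels are ranked by accumulated votes. The per-pair "model" below is a stub function standing in for a trained binary classifier; the labels are invented.

```python
from itertools import combinations

def rank_labels(x, labels, pairwise_model):
    """Rank labels for instance x by summing pairwise preference votes."""
    votes = {lab: 0.0 for lab in labels}
    for a, b in combinations(labels, 2):
        p = pairwise_model(a, b, x)     # estimated P(a preferred to b | x)
        votes[a] += p
        votes[b] += 1 - p
    return sorted(labels, key=votes.get, reverse=True)

# Stub standing in for learned pairwise classifiers: always prefers the
# alphabetically earlier label with probability 0.8.
stub = lambda a, b, x: 0.8 if a < b else 0.2
print(rank_labels(None, ["rock", "jazz", "pop"], stub))
```

With k labels this requires k(k-1)/2 binary problems, each of the simplest type, which is the appeal (and the complexity cost) of LPC.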

  19. An Extensible Component-Based Multi-Objective Evolutionary Algorithm Framework

    DEFF Research Database (Denmark)

    Sørensen, Jan Corfixen; Jørgensen, Bo Nørregaard

    2017-01-01

    The ability to easily modify the problem definition is currently missing in Multi-Objective Evolutionary Algorithms (MOEA). Existing MOEA frameworks do not support dynamic addition and extension of the problem formulation. The existing frameworks require a re-specification of the problem definition...

  20. Where to go on your next trip? Optimizing travel destinations based on user preferences

    NARCIS (Netherlands)

    Kiseleva, Y.; Mueller, M.J.I.; Bernardi, L.; Davis, C.; Kovacek, I.; Einarsen, M.S.; Kamps, J.; Tuzhilin, A.; Hiemstra, D.

    2015-01-01

    Recommendation based on user preferences is a common task for e-commerce websites. New recommendation algorithms are often evaluated by offline comparison to baseline algorithms such as recommending random or the most popular items. Here, we investigate how these algorithms themselves perform and

  2. Missing value imputation for microarray gene expression data using histone acetylation information

    Directory of Open Access Journals (Sweden)

    Feng Jihua

    2008-05-01

    Full Text Available Abstract Background It is an important pre-processing step to accurately estimate missing values in microarray data, because complete datasets are required in numerous expression profile analyses in bioinformatics. Although several methods have been suggested, their performance is not satisfactory for datasets with high missing percentages. Results The paper explores the feasibility of imputing missing values with the help of the gene regulatory mechanism. An imputation framework called the histone acetylation information aided imputation method (HAIimpute) is presented. It incorporates histone acetylation information into the conventional KNN (k-nearest neighbor) and LLS (local least squares) imputation algorithms for final prediction of the missing values. The experimental results indicated that the use of acetylation information can provide significant improvements in microarray imputation accuracy. The HAIimpute methods consistently improve the widely used methods such as KNN and LLS in terms of normalized root mean squared error (NRMSE). Meanwhile, the genes imputed by the HAIimpute methods are more correlated with the original complete genes in terms of Pearson correlation coefficients. Furthermore, the proposed methods also outperform GOimpute, which is one of the existing related methods that use functional similarity as the external information. Conclusion We demonstrated that the use of histone acetylation information can greatly improve the performance of imputation, especially at high missing percentages. This idea can be generalized to various imputation methods to improve their performance. Moreover, with more knowledge accumulated on the gene regulatory mechanism in addition to histone acetylation, the performance of our approach can be further improved and verified.
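
    A bare-bones version of the KNN baseline that HAIimpute augments can be sketched as follows: a missing entry is filled with the average of that column over the k rows most similar on the jointly observed columns. The expression values are invented, and this omits the weighting and the acetylation side-information the paper adds.

```python
def knn_impute(rows, k=2):
    """Fill None entries using the k most similar rows (mean squared
    distance over columns observed in both rows)."""
    def dist(r, s):
        shared = [(a, b) for a, b in zip(r, s) if a is not None and b is not None]
        return sum((a - b) ** 2 for a, b in shared) / len(shared)
    out = [list(r) for r in rows]
    for i, r in enumerate(rows):
        for j, v in enumerate(r):
            if v is None:
                donors = [s for s in rows if s is not r and s[j] is not None]
                donors.sort(key=lambda s: dist(r, s))
                out[i][j] = sum(s[j] for s in donors[:k]) / k
    return out

data = [[1.0, 2.0, None],   # invented expression profiles
        [1.1, 2.1, 3.0],
        [0.9, 1.9, 3.2],
        [9.0, 9.0, 9.0]]
print(knn_impute(data))
```

The distant fourth profile is excluded by the neighbour selection, which is the whole point of KNN over global mean imputation.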

  3. Bayesian nonparametric generative models for causal inference with missing at random covariates.

    Science.gov (United States)

    Roy, Jason; Lum, Kirsten J; Zeldow, Bret; Dworkin, Jordan D; Re, Vincent Lo; Daniels, Michael J

    2018-03-26

    We propose a general Bayesian nonparametric (BNP) approach to causal inference in the point treatment setting. The joint distribution of the observed data (outcome, treatment, and confounders) is modeled using an enriched Dirichlet process. The combination of the observed data model and causal assumptions allows us to identify any type of causal effect: differences, ratios, or quantile effects, either marginally or for subpopulations of interest. The proposed BNP model is well-suited for causal inference problems, as it does not require parametric assumptions about the distribution of confounders and naturally leads to a computationally efficient Gibbs sampling algorithm. By flexibly modeling the joint distribution, we are also able to impute (via data augmentation) values for missing covariates within the algorithm under an assumption of ignorable missingness, obviating the need to create separate imputed data sets. This approach for imputing the missing covariates has the additional advantage of guaranteeing congeniality between the imputation model and the analysis model, and because we use a BNP approach, parametric models are avoided for imputation. The performance of the method is assessed using simulation studies. The method is applied to data from a cohort study of human immunodeficiency virus/hepatitis C virus co-infected patients. © 2018, The International Biometric Society.

  4. Ship detection in satellite imagery using rank-order greyscale hit-or-miss transforms

    Energy Technology Data Exchange (ETDEWEB)

    Harvey, Neal R [Los Alamos National Laboratory; Porter, Reid B [Los Alamos National Laboratory; Theiler, James [Los Alamos National Laboratory

    2010-01-01

    Ship detection from satellite imagery is something that has great utility in various communities. Knowing where ships are and their types provides useful intelligence information. However, detecting and recognizing ships is a difficult problem. Existing techniques suffer from too many false alarms. We describe approaches we have taken in trying to build ship detection algorithms with fewer false alarms. Our approach uses a version of the grayscale morphological Hit-or-Miss transform. While this is well known and used in its standard form, we use a version in which we use a rank-order selection for the dilation and erosion parts of the transform, instead of the standard maximum and minimum operators. This provides some slack in the fitting that the algorithm employs and provides a method for tuning the algorithm's performance for particular detection problems. We describe our algorithms, show the effect of the rank-order parameter on the algorithm's performance and illustrate the use of this approach for real ship detection problems with panchromatic satellite imagery.
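
    The rank-order idea above can be illustrated in 1-D with a flat structuring element (a toy sketch, not the authors' 2-D transform): standard greyscale erosion takes the window minimum and dilation the maximum, and replacing them with the r-th order statistic tolerates a few outlier pixels inside the window. The signal values are invented.

```python
def rank_order_filter(signal, radius, rank):
    """Sliding-window order-statistic filter over a flat structuring element.
    rank=0 reproduces erosion (min); rank=window_size-1 reproduces dilation (max);
    intermediate ranks give the 'slack' described above."""
    out = []
    for i in range(radius, len(signal) - radius):
        window = sorted(signal[i - radius:i + radius + 1])
        out.append(window[rank])
    return out

noisy = [5, 5, 0, 5, 5, 5, 5]           # one dropout pixel in a bright region
print(rank_order_filter(noisy, 1, 0))   # strict erosion: punched by the dropout
print(rank_order_filter(noisy, 1, 1))   # rank-1 selection: ignores it
```

A hit-or-miss detector built from such rank-order erosions/dilations keeps fitting a ship template even when a few pixels deviate, which is the claimed route to fewer false alarms.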

  5. Evaluating progressive-rendering algorithms in appearance design tasks.

    Science.gov (United States)

    Jiawei Ou; Karlik, Ondrej; Křivánek, Jaroslav; Pellacini, Fabio

    2013-01-01

    Progressive rendering is becoming a popular alternative to precomputational approaches to appearance design. However, progressive algorithms create images exhibiting visual artifacts at early stages. A user study investigated these artifacts' effects on user performance in appearance design tasks. Novice and expert subjects performed lighting and material editing tasks with four algorithms: random path tracing, quasirandom path tracing, progressive photon mapping, and virtual-point-light rendering. Both the novices and experts strongly preferred path tracing to progressive photon mapping and virtual-point-light rendering. None of the participants preferred random path tracing to quasirandom path tracing or vice versa; the same situation held between progressive photon mapping and virtual-point-light rendering. The user workflow didn’t differ significantly with the four algorithms. The Web Extras include a video showing how four progressive-rendering algorithms converged (at http://youtu.be/ck-Gevl1e9s), the source code used, and other supplementary materials.

  6. Trajectory averaging for stochastic approximation MCMC algorithms

    KAUST Repository

    Liang, Faming

    2010-10-01

    The subject of stochastic approximation was founded by Robbins and Monro [Ann. Math. Statist. 22 (1951) 400-407]. After five decades of continual development, it has developed into an important area in systems control and optimization, and it has also served as a prototype for the development of adaptive algorithms for on-line estimation and control of stochastic systems. Recently, it has been used in statistics with Markov chain Monte Carlo for solving maximum likelihood estimation problems and for general simulation and optimization. In this paper, we first show that the trajectory averaging estimator is asymptotically efficient for the stochastic approximation MCMC (SAMCMC) algorithm under mild conditions, and then apply this result to the stochastic approximation Monte Carlo algorithm [Liang, Liu and Carroll J. Amer. Statist. Assoc. 102 (2007) 305-320]. The application of the trajectory averaging estimator to other stochastic approximation MCMC algorithms, for example, a stochastic approximation MLE algorithm for missing data problems, is also considered in the paper. © Institute of Mathematical Statistics, 2010.
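
    The core of trajectory averaging can be illustrated on a toy Robbins-Monro recursion (a generic sketch, not the SAMCMC algorithm itself): alongside the raw iterate, one maintains the running average of the trajectory, which is typically a steadier estimator than the final iterate. The target mean, step-size exponent, and seed are arbitrary choices here.

```python
import random

# Toy Robbins-Monro recursion for the root of E[X] - theta = 0,
# i.e. estimating the mean of a noisy observation stream.
random.seed(0)
theta, avg = 0.0, 0.0
for n in range(1, 5001):
    x = random.gauss(3.0, 1.0)        # noisy observation, true mean 3.0
    theta += (x - theta) / n ** 0.7   # stochastic approximation step
    avg += (theta - avg) / n          # trajectory (Polyak) averaging
print(theta, avg)
```

Both estimates approach 3.0, but the averaged trajectory smooths out the step-size noise that remains in the raw iterate.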

  7. New or ν missing energy

    DEFF Research Database (Denmark)

    Franzosi, Diogo Buarque; Frandsen, Mads T.; Shoemaker, Ian M.

    2016-01-01

    Missing energy signals such as monojets are a possible signature of Dark Matter (DM) at colliders. However, neutrino interactions beyond the Standard Model may also produce missing energy signals. In order to conclude that new "missing particles" are observed, the hypothesis of BSM neutrino … flavor structures. Monojet data alone can be used to infer the mass of the "missing particle" from the shape of the missing energy distribution. In particular, 13 TeV LHC data will have sensitivity to DM masses greater than ~1 TeV. In addition to the monojet channel, NSI can be probed in multi-lepton searches, which we find to yield stronger limits at heavy mediator masses. The sensitivity offered by these multi-lepton channels provides a method to reject or confirm the DM hypothesis in missing energy searches.

  8. Design of a Ground-Launched Ballistic Missile Interceptor Using a Genetic Algorithm

    National Research Council Canada - National Science Library

    Anderson, Murray

    1999-01-01

    ...) minimize maximum U-loading. In 50 generations the genetic algorithm was able to develop two basic types of external aerodynamic designs that performed nearly the same, with miss distances less than 1.0 foot...

  9. Scalable Data Quality for Big Data: The Pythia Framework for Handling Missing Values.

    Science.gov (United States)

    Cahsai, Atoshum; Anagnostopoulos, Christos; Triantafillou, Peter

    2015-09-01

    Solving the missing-value (MV) problem with small estimation errors in large-scale data environments is a notoriously resource-demanding task. The most widely used MV imputation approaches are computationally expensive because they explicitly depend on the volume and the dimension of the data. Moreover, as datasets and their user community continuously grow, the problem can only be exacerbated. In an attempt to deal with such a problem, in our previous work, we introduced a novel framework coined Pythia, which employs a number of distributed data nodes (cohorts), each of which contains a partition of the original dataset. To perform MV imputation, Pythia, based on specific machine and statistical learning structures (signatures), selects the most appropriate subset of cohorts to locally perform a missing-value substitution algorithm (MVA). This selection relies on the principle that a particular subset of cohorts maintains the most relevant partition of the dataset. In addition, as Pythia uses only part of the dataset for imputation and accesses different cohorts in parallel, it improves efficiency, scalability, and accuracy compared to a single machine (coined Godzilla), which uses the entire massive dataset to compute imputation requests. Although this article is an extension of our previous work, we particularly investigate the robustness of the Pythia framework and show that Pythia is independent of any MVA and signature construction algorithm. To facilitate our research, we considered two well-known MVAs (namely K-nearest neighbor and expectation-maximization imputation algorithms), as well as two machine and neural computational learning signature construction algorithms based on adaptive vector quantization and competitive learning. We provide comprehensive experiments to assess the performance of Pythia against Godzilla and showcase the benefits stemming from this framework.
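    One of the two MVAs the authors plug into Pythia is k-nearest-neighbour imputation; run locally on a cohort's partition, it can be sketched as follows. This is a pure-Python illustration (distance over mutually observed columns, mean of the k nearest complete rows), not the paper's implementation.

```python
def knn_impute(rows, k=2):
    """Fill None entries with the mean of the k nearest complete rows,
    where distance is computed over the columns observed in both rows."""
    complete = [r for r in rows if None not in r]
    filled = []
    for row in rows:
        if None not in row:
            filled.append(list(row))
            continue
        def dist(c):
            return sum((a - b) ** 2 for a, b in zip(row, c) if a is not None)
        neighbours = sorted(complete, key=dist)[:k]
        filled.append([
            v if v is not None else sum(c[j] for c in neighbours) / len(neighbours)
            for j, v in enumerate(row)
        ])
    return filled
```

In the Pythia setting, `rows` would be the partition held by the selected cohorts rather than the full dataset.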

  10. Semiparametric Theory and Missing Data

    CERN Document Server

    Tsiatis, Anastasios A

    2006-01-01

    Missing data arise in almost all scientific disciplines. In many cases, missing data in an analysis are treated in a casual and ad hoc manner, leading to invalid inferences and erroneous conclusions. This book summarizes current knowledge of the theory of estimation for semiparametric models with missing data.

  11. FCMPSO: An Imputation for Missing Data Features in Heart Disease Classification

    Science.gov (United States)

    Salleh, Mohd Najib Mohd; Ashikin Samat, Nurul

    2017-08-01

    The application of data mining and machine learning to uncover hidden knowledge in clinical research is becoming greatly influential in medicine. Heart disease is a leading cause of death worldwide, and early prevention through efficient methods can help to reduce mortality. Medical data may contain many uncertainties, as they are fuzzy and vague in nature. Imprecise feature data, such as empty or missing values, can degrade the quality of classification results, even though the remaining complete features can still provide useful information. Therefore, an imputation approach based on Fuzzy C-Means and Particle Swarm Optimization (FCMPSO) is developed in the preprocessing stage to fill in the missing values. The completed dataset is then used to train a Decision Tree classifier. The experiments use a heart disease dataset, and performance is analysed using accuracy, precision, and ROC values. The results show that the performance of the Decision Tree increases after applying FCMPSO for imputation.
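    The Fuzzy C-Means half of the approach can be sketched as follows: cluster the complete rows, then fill a missing entry with the membership-weighted centroid value computed from the row's observed columns. This is an illustrative sketch only; the PSO component the paper uses to optimize the clustering is omitted, and the initialization is a hypothetical deterministic choice.

```python
def memberships(p, centroids, m=2.0, dims=None):
    """Fuzzy memberships of point p in each centroid, optionally
    restricted to the observed dimensions dims."""
    idx = list(dims) if dims is not None else list(range(len(p)))
    d = []
    for v in centroids:
        dd = sum((p[j] - v[j]) ** 2 for j in idx) ** 0.5
        d.append(max(dd, 1e-12))                      # avoid division by zero
    return [1.0 / sum((d[k] / d[l]) ** (2.0 / (m - 1.0)) for l in range(len(d)))
            for k in range(len(d))]

def fcm(points, m=2.0, iters=50):
    """Two-cluster Fuzzy C-Means on fully observed rows; deterministic
    initialization from the first and last row (illustrative)."""
    centroids = [list(points[0]), list(points[-1])]
    for _ in range(iters):
        u = [memberships(p, centroids, m) for p in points]
        for k in range(len(centroids)):
            w = [u[i][k] ** m for i in range(len(points))]
            tw = sum(w)
            centroids[k] = [sum(w[i] * points[i][j] for i in range(len(points))) / tw
                            for j in range(len(points[0]))]
    return centroids

def impute_row(row, centroids, m=2.0):
    """Fill None entries with the membership-weighted centroid value."""
    obs = [j for j, v in enumerate(row) if v is not None]
    u = memberships(row, centroids, m, dims=obs)
    return [v if v is not None
            else sum(u[k] * centroids[k][j] for k in range(len(centroids)))
            for j, v in enumerate(row)]
```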

  12. Medical image segmentation using genetic algorithms.

    Science.gov (United States)

    Maulik, Ujjwal

    2009-03-01

    Genetic algorithms (GAs) have been found to be effective in the domain of medical image segmentation, since the problem can often be mapped to one of search in a complex and multimodal landscape. The challenges in medical image segmentation arise from poor image contrast and artifacts that result in missing or diffuse organ/tissue boundaries. The resulting search space is therefore often noisy, with a multitude of local optima. Not only does the genetic algorithmic framework prove effective at escaping local optima, it also brings considerable flexibility to the segmentation procedure. In this paper, an attempt has been made to review the major applications of GAs to the domain of medical image segmentation.
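    As a toy instance of mapping segmentation to search, the following GA evolves a single intensity threshold by maximizing between-class variance (Otsu's criterion). This is an illustrative sketch of the GA-as-search idea, not a method from the reviewed literature; population size, crossover, and mutation scheme are arbitrary choices.

```python
import random

def between_class_variance(pixels, t):
    """Otsu's criterion: weighted variance between the two classes at threshold t."""
    lo = [p for p in pixels if p <= t]
    hi = [p for p in pixels if p > t]
    if not lo or not hi:
        return 0.0
    mean = sum(pixels) / len(pixels)
    m_lo, m_hi = sum(lo) / len(lo), sum(hi) / len(hi)
    w_lo, w_hi = len(lo) / len(pixels), len(hi) / len(pixels)
    return w_lo * (m_lo - mean) ** 2 + w_hi * (m_hi - mean) ** 2

def ga_threshold(pixels, pop_size=20, gens=30):
    """Evolve an 8-bit threshold: truncation selection, averaging crossover,
    jitter mutation."""
    random.seed(1)
    pop = [random.randint(0, 255) for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=lambda t: -between_class_variance(pixels, t))
        parents = pop[:pop_size // 2]
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            child = (a + b) // 2                       # crossover: average
            if random.random() < 0.3:                  # mutation: jitter
                child = min(255, max(0, child + random.randint(-10, 10)))
            children.append(child)
        pop = parents + children
    return max(pop, key=lambda t: between_class_variance(pixels, t))
```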

  13. Weighted Description Logics Preference Formulas for Multiattribute Negotiation

    Science.gov (United States)

    Ragone, Azzurra; di Noia, Tommaso; Donini, Francesco M.; di Sciascio, Eugenio; Wellman, Michael P.

    We propose a framework to compute the utility of an agreement w.r.t. a preference set in a negotiation process. In particular, we refer to preferences expressed as weighted formulas in a decidable fragment of first-order logic, and agreements expressed as a formula. We ground our framework in Description Logics (DL) endowed with disjunction, to be compliant with Semantic Web technologies. A logic-based approach to preference representation allows one, when a background knowledge base is exploited, to relax the often unrealistic assumption of additive independence among attributes. We provide suitable definitions of the problem and present algorithms to compute utility in our setting. We also validate our approach through an experimental evaluation.
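    The basic utility computation can be illustrated with propositional stand-ins for the weighted DL formulas: the utility of an agreement is the sum of the weights of the preference formulas it satisfies. The preferences and the offer below are hypothetical examples, and predicates over a dictionary stand in for DL satisfiability checking.

```python
# Each preference is (weight, predicate over the agreement); utility is the
# sum of weights of the preferences the agreement satisfies.
def utility(agreement, preferences):
    return sum(w for w, holds in preferences if holds(agreement))

preferences = [
    (0.5, lambda a: a["warranty_years"] >= 2),
    (0.3, lambda a: a["price"] <= 1000),
    (0.2, lambda a: a["colour"] == "black"),
]
offer = {"warranty_years": 3, "price": 1200, "colour": "black"}
```

Here `utility(offer, preferences)` credits the first and third preferences but not the price constraint; in the paper, satisfaction would instead be decided by DL reasoning against a background knowledge base.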

  14. "Asia's missing women" as a problem in applied evolutionary psychology?

    Science.gov (United States)

    Brooks, Robert

    2012-12-20

    In many parts of Asia, the Middle East and North Africa, women and children are so undervalued, neglected, abused, and so often killed, that sex ratios are now strongly male biased. In recent decades, sex-biased abortion has exacerbated the problem. In this article I highlight several important insights from evolutionary biology into both the origin and the severe societal consequences of "Asia's missing women", paying particular attention to interactions between evolution, economics and culture. Son preferences and associated cultural practices like patrilineal inheritance, patrilocality and the Indian Hindu dowry system arise among the wealthy and powerful elites for reasons consistent with models of sex-biased parental investment. Those practices then spread via imitation as technology gets cheaper and economic development allows the middle class to grow rapidly. I will consider evidence from India, China and elsewhere that grossly male-biased sex ratios lead to increased crime, violence, local warfare, political instability, drug abuse, prostitution and trafficking of women. The problem of Asia's missing women presents a challenge for applied evolutionary psychology to help us understand and ameliorate sex ratio biases and their most severe consequences.

  15. Cache and energy efficient algorithms for Nussinov's RNA Folding.

    Science.gov (United States)

    Zhao, Chunchun; Sahni, Sartaj

    2017-12-06

    An RNA folding/RNA secondary structure prediction algorithm determines the non-nested/pseudoknot-free structure by maximizing the number of complementary base pairs and minimizing the energy. Several implementations of Nussinov's classical RNA folding algorithm have been proposed. Our focus is to obtain run time and energy efficiency by reducing the number of cache misses. Three cache-efficient algorithms, ByRow, ByRowSegment and ByBox, for Nussinov's RNA folding are developed. Using a simple LRU cache model, we show that the Classical algorithm of Nussinov has the highest number of cache misses followed by the algorithms Transpose (Li et al.), ByRow, ByRowSegment, and ByBox (in this order). Extensive experiments conducted on four computational platforms-Xeon E5, AMD Athlon 64 X2, Intel I7 and PowerPC A2-using two programming languages-C and Java-show that our cache efficient algorithms are also efficient in terms of run time and energy. Our benchmarking shows that, depending on the computational platform and programming language, either ByRow or ByBox give best run time and energy performance. The C version of these algorithms reduce run time by as much as 97.2% and energy consumption by as much as 88.8% relative to Classical and by as much as 56.3% and 57.8% relative to Transpose. The Java versions reduce run time by as much as 98.3% relative to Classical and by as much as 75.2% relative to Transpose. Transpose achieves run time and energy efficiency at the expense of memory as it takes twice the memory required by Classical. The memory required by ByRow, ByRowSegment, and ByBox is the same as that of Classical. As a result, using the same amount of memory, the algorithms proposed by us can solve problems up to 40% larger than those solvable by Transpose.
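    The classical Nussinov recurrence that all of these cache-oriented variants reorganize can be sketched as follows. This is a minimal Python version for intuition only; the ByRow/ByRowSegment/ByBox loop orders from the paper are not reproduced here.

```python
def nussinov(seq, pairs=frozenset({"AU", "UA", "GC", "CG", "GU", "UG"})):
    """Classical O(n^3) Nussinov DP: N[i][j] = max complementary base pairs
    in seq[i..j], with base i either unpaired or paired to some k."""
    n = len(seq)
    N = [[0] * n for _ in range(n)]
    for span in range(1, n):                 # increasing subsequence length
        for i in range(n - span):
            j = i + span
            best = N[i + 1][j]               # i left unpaired
            for k in range(i + 1, j + 1):    # i paired with k
                if seq[i] + seq[k] in pairs:
                    left = N[i + 1][k - 1] if k > i + 1 else 0
                    right = N[k + 1][j] if k < j else 0
                    best = max(best, 1 + left + right)
            N[i][j] = best
    return N[0][n - 1] if n else 0
```

The cache-efficient algorithms compute exactly this table; they differ only in the order in which its cells are visited, which determines the cache-miss count.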

  16. Flexible Imputation of Missing Data

    CERN Document Server

    van Buuren, Stef

    2012-01-01

    Missing data form a problem in every scientific discipline, yet the techniques required to handle them are complicated and often lacking. One of the great ideas in statistical science--multiple imputation--fills gaps in the data with plausible values, the uncertainty of which is coded in the data itself. It also solves other problems, many of which are missing data problems in disguise. Flexible Imputation of Missing Data is supported by many examples using real data taken from the author's vast experience of collaborative research, and presents a practical guide for handling missing data unde

  17. Minkowski metrics in creating universal ranking algorithms

    Directory of Open Access Journals (Sweden)

    Andrzej Ameljańczyk

    2014-06-01

    Full Text Available The paper presents a general procedure for creating rankings of a set of objects, where the preference relation is based on an arbitrary ranking function. The analysis of admissible ranking functions begins by showing the fundamental drawbacks of the commonly used functions in the form of a weighted sum. As a special case of the ranking procedure in the space of the preference relation, a procedure based on the notion of an ideal element and the generalized Minkowski distance from that element is proposed. This procedure, presented as a universal ranking algorithm, eliminates most of the disadvantages of ranking functions in the form of a weighted sum. Keywords: ranking functions, preference relation, ranking clusters, categories, ideal point, universal ranking algorithm
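    The core of the proposed procedure, ranking objects by their generalized Minkowski distance to an ideal element, can be sketched as follows. Illustrative only: criteria are assumed numeric, with smaller distance to the ideal point meaning a better rank.

```python
def minkowski_rank(objects, ideal, p=2):
    """Rank objects by generalized Minkowski distance to the ideal point
    (p=1 Manhattan, p=2 Euclidean, large p approaches Chebyshev)."""
    def dist(x):
        return sum(abs(a - b) ** p for a, b in zip(x, ideal)) ** (1.0 / p)
    return sorted(objects, key=dist)
```

Varying `p` changes how strongly a single bad criterion penalizes an object, which is one way such a procedure avoids the compensation problems of a plain weighted sum.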

  18. Residue preference mapping of ligand fragments in the Protein Data Bank.

    Science.gov (United States)

    Wang, Lirong; Xie, Zhaojun; Wipf, Peter; Xie, Xiang-Qun

    2011-04-25

    The interaction between small molecules and proteins is one of the major concerns for structure-based drug design because the principles of protein-ligand interactions and molecular recognition are not thoroughly understood. Fortunately, the analysis of protein-ligand complexes in the Protein Data Bank (PDB) enables unprecedented possibilities for new insights. Herein, we applied molecule-fragmentation algorithms to split the ligands extracted from PDB crystal structures into small fragments. Subsequently, we have developed a ligand fragment and residue preference mapping (LigFrag-RPM) algorithm to map the profiles of the interactions between these fragments and the 20 proteinogenic amino acid residues. A total of 4032 fragments were generated from 71 798 PDB ligands by a ring cleavage (RC) algorithm. Among these ligand fragments, 315 unique fragments were characterized with the corresponding fragment-residue interaction profiles by counting residues close to these fragments. The interaction profiles revealed that these fragments have specific preferences for certain types of residues. The applications of these interaction profiles were also explored and evaluated in case studies, showing great potential for the study of protein-ligand interactions and drug design. Our studies demonstrated that the fragment-residue interaction profiles generated from the PDB ligand fragments can be used to detect whether these fragments are in their favorable or unfavorable environments. The ligand fragment and residue preference mapping (LigFrag-RPM) algorithm developed here also has the potential to guide lead chemistry modifications as well as binding-residue prediction.

  19. Airport Traffic Conflict Detection and Resolution Algorithm Evaluation

    Science.gov (United States)

    Jones, Denise R.; Chartrand, Ryan C.; Wilson, Sara R.; Commo, Sean A.; Ballard, Kathryn M.; Otero, Sharon D.; Barker, Glover D.

    2016-01-01

    Two conflict detection and resolution (CD&R) algorithms for the terminal maneuvering area (TMA) were evaluated in a fast-time batch simulation study at the National Aeronautics and Space Administration (NASA) Langley Research Center. One CD&R algorithm, developed at NASA, was designed to enhance surface situation awareness and provide cockpit alerts of potential conflicts during runway, taxi, and low altitude air-to-air operations. The second algorithm, Enhanced Traffic Situation Awareness on the Airport Surface with Indications and Alerts (SURF IA), was designed to increase flight crew awareness of the runway environment and facilitate an appropriate and timely response to potential conflict situations. The purpose of the study was to evaluate the performance of the aircraft-based CD&R algorithms during various runway, taxiway, and low altitude scenarios, multiple levels of CD&R system equipage, and various levels of horizontal position accuracy. Algorithm performance was assessed through various metrics including the collision rate, nuisance and missed alert rate, and alert toggling rate. The data suggests that, in general, alert toggling, nuisance and missed alerts, and unnecessary maneuvering occurred more frequently as the position accuracy was reduced. Collision avoidance was more effective when all of the aircraft were equipped with CD&R and maneuvered to avoid a collision after an alert was issued. In order to reduce the number of unwanted (nuisance) alerts when taxiing across a runway, a buffer is needed between the hold line and the alerting zone so alerts are not generated when an aircraft is behind the hold line. All of the results support RTCA horizontal position accuracy requirements for performing a CD&R function to reduce the likelihood and severity of runway incursions and collisions.

  20. Tractable Pareto Optimization of Temporal Preferences

    Science.gov (United States)

    Morris, Robert; Morris, Paul; Khatib, Lina; Venable, Brent

    2003-01-01

    This paper focuses on temporal constraint problems where the objective is to optimize a set of local preferences for when events occur. In previous work, a subclass of these problems has been formalized as a generalization of Temporal CSPs, and a tractable strategy for optimization has been proposed, where global optimality is defined as maximizing the minimum of the component preference values. This criterion for optimality, which we call 'Weakest Link Optimization' (WLO), is known to have limited practical usefulness because solutions are compared only on the basis of their worst value; thus, there is no requirement to improve the other values. To address this limitation, we introduce a new algorithm that re-applies WLO iteratively in a way that leads to improvement of all the values. We show the value of this strategy by proving that, with suitable preference functions, the resulting solutions are Pareto Optimal.
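    Iteratively re-applying WLO amounts to a leximin comparison of solutions: first maximize the worst preference value, then the next worst, and so on, which is what yields Pareto optimality. A minimal sketch over an explicit set of candidate solutions (illustrative; the paper works with temporal constraint propagation rather than an enumerated set):

```python
def iterated_wlo(solutions):
    """Pick the solution whose ascending-sorted preference values are
    lexicographically greatest: first maximize the worst value (plain WLO),
    then the second worst, and so on (iterated WLO / leximin)."""
    return max(solutions, key=lambda vals: sorted(vals))
```

In the example below, two candidates tie under plain WLO (worst value 0.5), but iterating the criterion prefers the one whose second-worst value is better.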

  1. A Gaze-Driven Evolutionary Algorithm to Study Aesthetic Evaluation of Visual Symmetry

    Directory of Open Access Journals (Sweden)

    Alexis D. J. Makin

    2016-03-01

    Full Text Available Empirical work has shown that people like visual symmetry. We used a gaze-driven evolutionary algorithm technique to answer three questions about symmetry preference. First, do people automatically evaluate symmetry without explicit instruction? Second, is perfect symmetry the best stimulus, or do people prefer a degree of imperfection? Third, does initial preference for symmetry diminish after familiarity sets in? Stimuli were generated as phenotypes from an algorithmic genotype, with genes for symmetry (coded as deviation from a symmetrical template, deviation–symmetry, DS gene and orientation (0° to 90°, orientation, ORI gene. An eye tracker identified phenotypes that were good at attracting and retaining the gaze of the observer. Resulting fitness scores determined the genotypes that passed to the next generation. We recorded changes to the distribution of DS and ORI genes over 20 generations. When participants looked for symmetry, there was an increase in high-symmetry genes. When participants looked for the patterns they preferred, there was a smaller increase in symmetry, indicating that people tolerated some imperfection. Conversely, there was no increase in symmetry during free viewing, and no effect of familiarity or orientation. This work demonstrates the viability of the evolutionary algorithm approach as a quantitative measure of aesthetic preference.

  2. An algorithm for optimal fusion of atlases with different labeling protocols

    DEFF Research Database (Denmark)

    Eugenio Iglesias, Juan; Sabuncu, Mert Rory; Aganj, Iman

    2015-01-01

    In this paper we present a novel label fusion algorithm suited for scenarios in which different manual delineation protocols with potentially disparate structures have been used to annotate the training scans (hereafter referred to as "atlases"). Such scenarios arise when atlases have missing str...

  3. Miss Lora juveelikauplus = Miss Lora jewellery store

    Index Scriptorium Estoniae

    2009-01-01

    On the interior design of the Miss Lora jewellery store in the Fama shopping centre in Narva (Tallinna mnt. 19c). Interior architects: Annes Arro and Hanna Karits. The store's fittings - display cabinets, carpet, light fixtures - were made to order. Includes a list of the interior architects' most important works.

  4. DNA Microarray Data Analysis: A Novel Biclustering Algorithm Approach

    Directory of Open Access Journals (Sweden)

    Tewfik Ahmed H

    2006-01-01

    Full Text Available Biclustering algorithms refer to a distinct class of clustering algorithms that perform simultaneous row-column clustering. Biclustering problems arise in DNA microarray data analysis, collaborative filtering, market research, information retrieval, text mining, electoral trends, exchange analysis, and so forth. When dealing with DNA microarray experimental data for example, the goal of biclustering algorithms is to find submatrices, that is, subgroups of genes and subgroups of conditions, where the genes exhibit highly correlated activities for every condition. In this study, we develop novel biclustering algorithms using basic linear algebra and arithmetic tools. The proposed biclustering algorithms can be used to search for all biclusters with constant values, biclusters with constant values on rows, biclusters with constant values on columns, and biclusters with coherent values from a set of data in a timely manner and without solving any optimization problem. We also show how one of the proposed biclustering algorithms can be adapted to identify biclusters with coherent evolution. The algorithms developed in this study discover all valid biclusters of each type, while almost all previous biclustering approaches will miss some.
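    The "coherent values" criterion the authors target has a simple arithmetic characterization: a submatrix fits the additive model a_ij = mu + r_i + c_j exactly when every 2x2 "checkerboard" difference vanishes. A sketch of that check, in the spirit of the basic linear-algebra tools the abstract mentions (illustrative; this is a verifier, not the authors' search algorithm):

```python
def is_coherent(matrix, rows, cols, tol=1e-9):
    """True if the submatrix (rows x cols) fits a_ij = mu + r_i + c_j,
    checked via vanishing 2x2 'checkerboard' differences against the
    first selected row and column."""
    r0, c0 = rows[0], cols[0]
    for i in rows[1:]:
        for j in cols[1:]:
            if abs(matrix[i][j] - matrix[r0][j]
                   - matrix[i][c0] + matrix[r0][c0]) > tol:
                return False
    return True
```

Constant biclusters, and biclusters constant on rows or columns, are special cases of the same model with some offsets forced to zero.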

  5. Inductive learning of thyroid functional states using the ID3 algorithm. The effect of poor examples on the learning result.

    Science.gov (United States)

    Forsström, J

    1992-01-01

    The ID3 algorithm for inductive learning was tested using preclassified material for patients suspected of having a thyroid illness. Classification followed a rule-based expert system for the diagnosis of thyroid function; thus, the knowledge to be learned was limited to the rules in the knowledge base of that expert system. The learning capability of the ID3 algorithm was tested with unselected learning material (with some inherent missing data) and with selected learning material (no missing data). The selected learning material was a subgroup of the unselected learning material. When the number of learning cases was increased, the accuracy of the program improved; once the learning material was large enough, further increases did not improve the results. A better learning result was achieved with the selected learning material, which contained no missing data, than with the unselected material. With this material we demonstrate a weakness in the ID3 algorithm: it cannot exploit the information available in good example cases if poor examples are added to the data.
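    The splitting criterion at the heart of ID3 is information gain. A minimal sketch follows, with hypothetical thyroid-style attribute names; examples with a missing (None) attribute value are simply dropped, which is one simple policy and not necessarily the one used in the study.

```python
from math import log2
from collections import Counter

def entropy(labels):
    """Shannon entropy of a list of class labels, in bits."""
    n = len(labels)
    return -sum((c / n) * log2(c / n) for c in Counter(labels).values())

def information_gain(examples, attr, label="class"):
    """ID3 splitting criterion: entropy reduction from splitting on attr.
    Examples missing the attribute (value None) are dropped before scoring."""
    usable = [e for e in examples if e[attr] is not None]
    base = entropy([e[label] for e in usable])
    n = len(usable)
    split = 0.0
    for v in {e[attr] for e in usable}:
        part = [e[label] for e in usable if e[attr] == v]
        split += len(part) / n * entropy(part)
    return base - split
```

ID3 greedily picks the attribute with the highest gain at each node, which is why noisy or poor examples can mask the signal carried by good ones.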

  6. Multi-objective engineering design using preferences

    Science.gov (United States)

    Sanchis, J.; Martinez, M.; Blasco, X.

    2008-03-01

    System design is a complex task when design parameters have to satisfy a number of specifications and objectives, which often conflict with one another. This challenging problem is called multi-objective optimization (MOO). The most common approach consists of optimizing a single cost index formed as a weighted sum of objectives. However, once the weights are chosen, the solution does not guarantee the best compromise among specifications, because there is an infinite number of possible solutions. A new approach can be stated, based on the designer's experience regarding the required specifications and the associated problems. This valuable information can be translated into preferences for design objectives, which lead the search process to the best solution in terms of these preferences. This article presents a new method, which enumerates these a priori objective preferences. As a result, a single objective is built automatically and no weight selection needs to be performed. Problems occurring because of the multimodal nature of the generated single cost index are managed with genetic algorithms (GAs).

  7. Automated Lead Optimization of MMP-12 Inhibitors Using a Genetic Algorithm.

    Science.gov (United States)

    Pickett, Stephen D; Green, Darren V S; Hunt, David L; Pardoe, David A; Hughes, Ian

    2011-01-13

    Traditional lead optimization projects involve long synthesis and testing cycles, favoring extensive structure-activity relationship (SAR) analysis and molecular design steps, in an attempt to limit the number of cycles that a project must run to optimize a development candidate. Microfluidic-based chemistry and biology platforms, with cycle times of minutes rather than weeks, lend themselves to unattended autonomous operation. The bottleneck in the lead optimization process is therefore shifted from synthesis or test to SAR analysis and design. As such, the way is open to an algorithm-directed process, without the need for detailed user data analysis. Here, we present results of two synthesis and screening experiments, undertaken using traditional methodology, to validate a genetic algorithm optimization process for future application to a microfluidic system. The algorithm has several novel features that are important for the intended application. For example, it is robust to missing data and can suggest compounds for retest to ensure reliability of optimization. The algorithm is first validated on a retrospective analysis of an in-house library embedded in a larger virtual array of presumed inactive compounds. In a second, prospective experiment with MMP-12 as the target protein, 140 compounds are submitted for synthesis over 10 cycles of optimization. Comparison is made to the results from the full combinatorial library that was synthesized manually and tested independently. The results show that compounds selected by the algorithm are heavily biased toward the more active regions of the library, while the algorithm is robust to both missing data (compounds where synthesis failed) and inactive compounds. This publication places the full combinatorial library and biological data into the public domain with the intention of advancing research into algorithm-directed lead optimization methods.

  8. “Asia's Missing Women” as a Problem in Applied Evolutionary Psychology?

    Directory of Open Access Journals (Sweden)

    Robert Brooks

    2012-12-01

    Full Text Available In many parts of Asia, the Middle East and North Africa, women and children are so undervalued, neglected, abused, and so often killed, that sex ratios are now strongly male biased. In recent decades, sex-biased abortion has exacerbated the problem. In this article I highlight several important insights from evolutionary biology into both the origin and the severe societal consequences of “Asia's missing women”, paying particular attention to interactions between evolution, economics and culture. Son preferences and associated cultural practices like patrilineal inheritance, patrilocality and the Indian Hindu dowry system arise among the wealthy and powerful elites for reasons consistent with models of sex-biased parental investment. Those practices then spread via imitation as technology gets cheaper and economic development allows the middle class to grow rapidly. I will consider evidence from India, China and elsewhere that grossly male-biased sex ratios lead to increased crime, violence, local warfare, political instability, drug abuse, prostitution and trafficking of women. The problem of Asia's missing women presents a challenge for applied evolutionary psychology to help us understand and ameliorate sex ratio biases and their most severe consequences.

  9. Performance of bias-correction methods for exposure measurement error using repeated measurements with and without missing data.

    Science.gov (United States)

    Batistatou, Evridiki; McNamee, Roseanne

    2012-12-10

    It is known that measurement error leads to bias in assessing exposure effects, which can, however, be corrected if independent replicates are available. When replicates are expensive, two-stage (2S) studies that produce data 'missing by design' may be preferred over a single-stage (1S) study, because in the second stage measurement of replicates is restricted to a sample of first-stage subjects. Motivated by an occupational study on the acute effect of carbon black exposure on respiratory morbidity, we compare the performance of several bias-correction methods for both designs in a simulation study: an instrumental variable method (EVROS IV) based on grouping strategies, which had been recommended especially when measurement error is large; the regression calibration method; and the simulation extrapolation method. For the 2S design, the problem of 'missing' data was either ignored or addressed by multiple imputation. In both the 1S and 2S designs, in the case of small or moderate measurement error, regression calibration was shown to be the preferred approach in terms of root mean square error. For 2S designs, regression calibration as implemented by Stata software is not recommended, in contrast to our implementation of this method; the 'problematic' implementation of regression calibration was, however, substantially improved by the use of multiple imputations. The EVROS IV method, under a good/fairly good grouping, outperforms the regression calibration approach in both design scenarios when exposure mismeasurement is severe. In both the 1S and 2S designs with moderate or large measurement error, simulation extrapolation severely failed to correct for bias. Copyright © 2012 John Wiley & Sons, Ltd.
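    For a simple linear model, the regression calibration correction compared here reduces to dividing the naive slope by the reliability ratio Var(X)/Var(W̄), which is estimable from replicate measurements. The following seeded simulation is an illustrative sketch with made-up parameters, not the study's design or data.

```python
import random

random.seed(42)
n = 20000
X = [random.gauss(0, 1) for _ in range(n)]            # true exposure
Y = [x + random.gauss(0, 0.5) for x in X]             # outcome, true slope = 1
W1 = [x + random.gauss(0, 1) for x in X]              # error-prone replicate 1
W2 = [x + random.gauss(0, 1) for x in X]              # error-prone replicate 2

def slope(xs, ys):
    """Ordinary least-squares slope of ys on xs."""
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    return (sum((a - mx) * (b - my) for a, b in zip(xs, ys))
            / sum((a - mx) ** 2 for a in xs))

Wbar = [(a + b) / 2 for a, b in zip(W1, W2)]
naive = slope(Wbar, Y)                                # attenuated toward zero

# Var(U) from within-pair differences: Var(W1 - W2) = 2 * Var(U)
sigma_u2 = sum((a - b) ** 2 for a, b in zip(W1, W2)) / (2 * n)
mw = sum(Wbar) / n
var_wbar = sum((w - mw) ** 2 for w in Wbar) / (n - 1)
reliability = (var_wbar - sigma_u2 / 2) / var_wbar    # Var(X) / Var(Wbar)
corrected = naive / reliability                       # regression calibration
```

With these parameters the naive slope is attenuated to roughly Var(X)/(Var(X) + Var(U)/2) = 2/3 of the truth, and the calibrated estimate recovers it.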

  10. TRANSPOSABLE REGULARIZED COVARIANCE MODELS WITH AN APPLICATION TO MISSING DATA IMPUTATION.

    Science.gov (United States)

    Allen, Genevera I; Tibshirani, Robert

    2010-06-01

    Missing data estimation is an important challenge with high-dimensional data arranged in the form of a matrix. Typically this data matrix is transposable, meaning that either the rows, columns or both can be treated as features. To model transposable data, we present a modification of the matrix-variate normal, the mean-restricted matrix-variate normal, in which the rows and columns each have a separate mean vector and covariance matrix. By placing additive penalties on the inverse covariance matrices of the rows and columns, these so-called transposable regularized covariance models allow for maximum likelihood estimation of the mean and non-singular covariance matrices. Using these models, we formulate EM-type algorithms for missing data imputation in both the multivariate and transposable frameworks. We present theoretical results exploiting the structure of our transposable models that allow these models and imputation methods to be applied to high-dimensional data. Simulations and results on microarray data and the Netflix data show that these imputation techniques often outperform existing methods and offer a greater degree of flexibility.

  11. A Memetic Differential Evolution Algorithm Based on Dynamic Preference for Constrained Optimization Problems

    Directory of Open Access Journals (Sweden)

    Ning Dong

    2014-01-01

    functions are executed, and comparisons with five state-of-the-art algorithms are made. The results illustrate that the proposed algorithm is competitive with and in some cases superior to the compared ones in terms of the quality, efficiency, and the robustness of the obtained results.

  12. Missed deadline notification in best-effort schedulers

    Science.gov (United States)

    Banachowski, Scott A.; Wu, Joel; Brandt, Scott A.

    2003-12-01

    It is common to run multimedia and other periodic, soft real-time applications on general-purpose computer systems. These systems use best-effort scheduling algorithms that cannot guarantee applications will receive responsive scheduling to meet deadline or timing requirements. We present a simple mechanism called Missed Deadline Notification (MDN) that allows applications to notify the system when they do not receive their desired level of responsiveness. Consisting of a single system call with no arguments, this simple interface allows the operating system to provide better support for soft real-time applications without any a priori information about their timing or resource needs. We implemented MDN in three different schedulers: Linux, BEST, and BeRate. We describe these implementations and their performance when running real-time applications and discuss policies to prevent applications from abusing MDN to gain extra resources.

  13. Handling risk attitudes for preference learning and intelligent decision support

    DEFF Research Database (Denmark)

    Franco de los Ríos, Camilo; Hougaard, Jens Leth; Nielsen, Kurt

    2015-01-01

    Intelligent decision support should allow integrating human knowledge with efficient algorithms for making interpretable and useful recommendations on real world decision problems. Attitudes and preferences articulate and come together under a decision process that should be explicitly modeled...

  14. Metal artifact reduction in x-ray computed tomography by using analytical DBP-type algorithm

    Science.gov (United States)

    Wang, Zhen; Kudo, Hiroyuki

    2012-03-01

    This paper investigates the common metal artifact problem in X-ray computed tomography (CT). Artifacts may render the reconstructed image non-diagnostic: beam-hardening correction is inaccurate for high-attenuation objects, so a satisfactory image cannot be reconstructed from projections with missing or distorted data. The traditional analytical metal artifact reduction (MAR) method proceeds in three steps: first, subtract the metallic-object part of the projection data from the originally acquired projection; second, complete the subtracted part of the projection using an interpolation method; third, reconstruct from the interpolated projection with the filtered back-projection (FBP) algorithm. The interpolation error introduced in the second step can impose unrealistic assumptions on the missing data, leading to DC-shift artifacts in the reconstructed images. We propose a differentiated back-projection (DBP) type MAR method that replaces the FBP algorithm in the third step with a DBP algorithm. In the FBP algorithm, the interpolated projection is filtered at each projection view angle before back-projection, so the interpolation error propagates through the whole projection. The DBP algorithm, by contrast, allows filtering after back-projection along the Hilbert filter direction, so the effect of the interpolation error is reduced and improved reconstructed image quality can be expected. In other words, choosing the DBP algorithm over the FBP algorithm means that projection data less contaminated by interpolation error are used in the reconstruction. A simulation study was performed to evaluate the proposed method using a given phantom.

  15. Physicians' preferences for asthma guidelines implementation.

    Science.gov (United States)

    Kang, Min-Koo; Kim, Byung-Keun; Kim, Tae-Wan; Kim, Sae-Hoon; Kang, Hye-Ryun; Park, Heung-Woo; Chang, Yoon-Seok; Kim, Sun-Sin; Min, Kyung-Up; Kim, You-Young; Cho, Sang-Heon

    2010-10-01

    Patient care based on asthma guidelines is cost-effective and leads to improved treatment outcomes. However, ineffective implementation strategies interfere with the use of these recommendations in clinical practice. This study investigated physicians' preferences for asthma guidelines, including content, supporting evidence, learning strategies, format, and placement in the clinical workplace. We obtained information through a questionnaire survey. The questionnaire was distributed to physicians attending continuing medical education courses and sent to other physicians by airmail, e-mail, and facsimile. A total of 183 physicians responded (male to female ratio, 2.3:1; mean age, 40.4±9.9 years); 89.9% of respondents were internists or pediatricians, and 51.7% were primary care physicians. Physicians preferred information that described asthma medications, classified the disease according to severity and level of control, and provided methods of evaluation/treatment/monitoring and management of acute exacerbation. The most effective strategies for encouraging the use of the guidelines were through continuing medical education and discussions with colleagues. Physicians required supporting evidence in the form of randomized controlled trials and expert consensus. They preferred that the guidelines be presented as algorithms or flow charts/flow diagrams on plastic sheets, pocket cards, or in electronic medical records. This study identified the items of the asthma guidelines preferred by physicians in Korea. Asthma guidelines with physicians' preferences would encourage their implementation in clinical practice.

  16. Controlling misses and false alarms in a machine learning framework for predicting uniformity of printed pages

    Science.gov (United States)

    Nguyen, Minh Q.; Allebach, Jan P.

    2015-01-01

    In our previous work [1], we presented a block-based technique to analyze printed page uniformity both visually and metrically. The features learned from the models were then employed in a Support Vector Machine (SVM) framework to classify the pages into one of the two categories of acceptable and unacceptable quality. In this paper, we introduce a set of tools for machine learning in the assessment of printed page uniformity. This work is primarily targeted to the printing industry, specifically the ubiquitous laser, electrophotographic printer. We use features that are well-correlated with the rankings of expert observers to develop a novel machine learning framework that allows one to achieve the minimum "false alarm" rate, subject to a chosen "miss" rate. Surprisingly, most of the research that has been conducted on machine learning does not consider this framework. During the process of developing a new product, test engineers will print hundreds of test pages, which can be scanned and then analyzed by an autonomous algorithm. Among these pages, most may be of acceptable quality. The objective is to find the ones that are not. These will provide critically important information to systems designers, regarding issues that need to be addressed in improving the printer design. A "miss" is defined to be a page that is not of acceptable quality to an expert observer that the prediction algorithm declares to be a "pass". Misses are a serious problem, since they represent problems that will not be seen by the systems designers. On the other hand, "false alarms" correspond to pages that an expert observer would declare to be of acceptable quality, but which are flagged by the prediction algorithm as "fails". In a typical printer testing and development scenario, such pages would be examined by an expert, and found to be of acceptable quality after all. "False alarm" pages result in extra pages to be examined by expert observers, which increases labor cost. But "false

  17. Handling missing values in the MDS-UPDRS.

    Science.gov (United States)

    Goetz, Christopher G; Luo, Sheng; Wang, Lu; Tilley, Barbara C; LaPelle, Nancy R; Stebbins, Glenn T

    2015-10-01

    This study was undertaken to define the number of missing values permissible to render valid total scores for each Movement Disorder Society Unified Parkinson's Disease Rating Scale (MDS-UPDRS) part. To handle missing values, imputation strategies serve as guidelines to reject an incomplete rating or create a surrogate score. We tested a rigorous, scale-specific, data-based approach to handling missing values for the MDS-UPDRS. From two large MDS-UPDRS datasets, we sequentially deleted item scores, either consistently (same items) or randomly (different items) across all subjects. Lin's Concordance Correlation Coefficient (CCC) compared scores calculated without missing values with prorated scores based on sequentially increasing missing values. The maximal number of missing values retaining a CCC greater than 0.95 determined the threshold for rendering a valid prorated score. A second confirmatory sample was selected from the MDS-UPDRS international translation program. To provide valid part scores applicable across all Hoehn and Yahr (H&Y) stages when the same items are consistently missing, one missing item from Part I, one from Part II, three from Part III, but none from Part IV can be allowed. To provide valid part scores applicable across all H&Y stages when random item entries are missing, one missing item from Part I, two from Part II, seven from Part III, but none from Part IV can be allowed. All cutoff values were confirmed in the validation sample. These analyses are useful for constructing valid surrogate part scores for MDS-UPDRS when missing items fall within the identified threshold and give scientific justification for rejecting partially completed ratings that fall below the threshold. © 2015 International Parkinson and Movement Disorder Society.
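The prorating rule described above can be sketched directly: scale the observed item total up to the full item count, and reject the rating when the number of missing items exceeds the part's threshold. The per-part thresholds below are the random-missing cutoffs from the abstract; the per-part item counts (13, 13, 33, 6) are an assumption about the MDS-UPDRS structure and should be checked against the scale itself.

```python
# Allowed number of randomly missing items per MDS-UPDRS part (from the study),
# and per-part item counts (assumed here: 13, 13, 33, 6).
MAX_MISSING = {1: 1, 2: 2, 3: 7, 4: 0}
N_ITEMS = {1: 13, 2: 13, 3: 33, 4: 6}

def prorated_part_score(part, item_scores):
    """Return a prorated part score, or None if too many items are missing.
    item_scores: list with None marking each missing item."""
    observed = [s for s in item_scores if s is not None]
    n_missing = len(item_scores) - len(observed)
    if len(item_scores) != N_ITEMS[part] or n_missing > MAX_MISSING[part]:
        return None  # reject ratings below the validated threshold
    # scale the observed total up to the full item count
    return sum(observed) * N_ITEMS[part] / len(observed)
```

For example, a Part I rating of all 2s with one missing item prorates to 24 × 13/12 = 26, whereas any missing Part IV item yields a rejected rating.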

  18. Missed hormonal contraceptives: new recommendations.

    Science.gov (United States)

    Guilbert, Edith; Black, Amanda; Dunn, Sheila; Senikas, Vyta

    2008-11-01

    To provide evidence-based guidance for women and their health care providers on the management of missed or delayed hormonal contraceptive doses in order to prevent unintended pregnancy. Medline, PubMed, and the Cochrane Database were searched for articles published in English, from 1974 to 2007, about hormonal contraceptive methods that are available in Canada and that may be missed or delayed. Relevant publications and position papers from appropriate reproductive health and family planning organizations were also reviewed. The quality of evidence is rated using the criteria developed by the Canadian Task Force on Preventive Health Care. This committee opinion will help health care providers offer clear information to women who have not been adherent in using hormonal contraception with the purpose of preventing unintended pregnancy. The Society of Obstetricians and Gynaecologists of Canada. SUMMARY STATEMENTS: 1. Instructions for what women should do when they miss hormonal contraception have been complex and women do not understand them correctly. (I) 2. The highest risk of ovulation occurs when the hormone-free interval is prolonged for more than seven days, either by delaying the start of combined hormonal contraceptives or by missing active hormone doses during the first or third weeks of combined oral contraceptives. (II) Ovulation rarely occurs after seven consecutive days of combined oral contraceptive use. (II) RECOMMENDATIONS: 1. Health care providers should give clear, simple instructions, both written and oral, on missed hormonal contraceptive pills as part of contraceptive counselling. (III-A) 2. Health care providers should provide women with telephone/electronic resources for reference in the event of missed or delayed hormonal contraceptives. (III-A) 3. In order to avoid an increased risk of unintended pregnancy, the hormone-free interval should not exceed seven days in combined hormonal contraceptive users. (II-A) 4. Back-up contraception should

  19. The prevention and handling of the missing data

    OpenAIRE

    Kang, Hyun

    2013-01-01

    Even in a well-designed and controlled study, missing data occurs in almost all research. Missing data can reduce the statistical power of a study and can produce biased estimates, leading to invalid conclusions. This manuscript reviews the problems and types of missing data, along with the techniques for handling missing data. The mechanisms by which missing data occurs are illustrated, and the methods for handling the missing data are discussed. The paper concludes with recommendations for ...

  20. The soundtrack of substance use: music preference and adolescent smoking and drinking.

    Science.gov (United States)

    Mulder, Juul; Ter Bogt, Tom F M; Raaijmakers, Quinten A W; Gabhainn, Saoirse Nic; Monshouwer, Karin; Vollebergh, Wilma A M

    2009-01-01

    A connection between preferences for heavy metal, rap, reggae, electronic dance music, and substance use has previously been established. However, evidence as to the gender-specific links between substance use and a wider range of music genres in a nationally representative sample of adolescents has to date been missing. In 2003, the Dutch government funded the Dutch National School Survey on Substance Use (DNSSSU), a self-report questionnaire administered to a representative school-based sample of 7,324 adolescents aged 12 to 16 years, which assessed music preference, tobacco and alcohol use, and a set of relevant covariates related to both substance use and music preference. Overall, when all other factors were controlled, punk/hardcore, techno/hardhouse, and reggae were associated with more substance use, while pop and classical music marked less substance use. While prior research showed that liking heavy metal and rap predicts substance use, in this study a preference for rap/hip-hop only indicated elevated smoking among girls, whereas heavy metal was associated with less smoking among boys and less drinking among girls. The types of music that mark increased substance use may vary historically and cross-culturally, but, in general, preferences for nonmainstream music are associated positively with substance use, and preferences for mainstream pop and types of music preferred by adults (classical music) mark less substance use among adolescents. As this is a correlational study, no valid conclusions about the direction of causation of the music-substance use link can be drawn.

  1. Sorting on STAR. [CDC computer algorithm timing comparison

    Science.gov (United States)

    Stone, H. S.

    1978-01-01

    Timing comparisons are given for three sorting algorithms written for the CDC STAR computer. One algorithm is Hoare's (1962) Quicksort, which is the fastest or nearly the fastest sorting algorithm for most computers. A second algorithm is a vector version of Quicksort that takes advantage of the STAR's vector operations. The third algorithm is an adaptation of Batcher's (1968) sorting algorithm, which makes especially good use of vector operations but has a complexity of N(log N)-squared as compared with a complexity of N log N for the Quicksort algorithms. In spite of its worse complexity, Batcher's sorting algorithm is competitive with the serial version of Quicksort for vectors up to the largest that can be treated by STAR. Vector Quicksort outperforms the other two algorithms and is generally preferred. These results indicate that unusual instruction sets can introduce biases in program execution time that counter results predicted by worst-case asymptotic complexity analysis.
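Batcher's network can be sketched in a few lines; the bitonic form is shown here serially (the paper does not specify which Batcher variant was adapted, so treat this as an illustration). Every pass of the inner loop is a fixed pattern of independent compare-exchanges, which is exactly what maps onto STAR's vector operations, at the cost of O(n log² n) comparisons versus Quicksort's O(n log n).

```python
def bitonic_sort(a):
    """Batcher-style bitonic sort (length must be a power of two).
    Each inner pass is a data-independent set of compare-exchanges,
    vectorizable on a machine like the CDC STAR."""
    n = len(a)
    assert n > 0 and n & (n - 1) == 0, "length must be a power of two"
    a = list(a)
    k = 2
    while k <= n:              # size of the bitonic sequences being merged
        j = k // 2
        while j >= 1:          # compare-exchange distance within a merge
            for i in range(n):
                l = i ^ j
                if l > i:
                    up = (i & k) == 0          # direction of this sub-sequence
                    if (up and a[i] > a[l]) or (not up and a[i] < a[l]):
                        a[i], a[l] = a[l], a[i]
            j //= 2
        k *= 2
    return a
```

The data-independence of the comparison pattern is what lets the worse asymptotic complexity be offset by vector hardware, matching the timing behavior reported above.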

  2. Analysing Vote Counting Algorithms Via Logic - And its Application to the CADE Election Scheme

    DEFF Research Database (Denmark)

    Schürmann, Carsten; Beckert, Bernhard; Gore, Rejeev

    2013-01-01

    We present a method for using first-order logic to specify the semantics of preferences as used in common vote counting algorithms. We also present a corresponding system that uses Celf linear-logic programs to describe voting algorithms and which generates explicit examples when the algorithm de...

  3. Stability, Optimality and Manipulation in Matching Problems with Weighted Preferences

    Directory of Open Access Journals (Sweden)

    Maria Silvia Pini

    2013-11-01

    The stable matching problem (also known as the stable marriage problem) is a well-known problem of matching men to women so that no man and woman who are not married to each other would both prefer each other to their current partners. Such a problem has a wide variety of practical applications, ranging from matching resident doctors to hospitals, to matching students to schools or, more generally, to any two-sided market. In the classical stable marriage problem, both men and women express a strict preference order over the members of the other sex, in a qualitative way. Here, we consider stable marriage problems with weighted preferences: each man (resp. woman) provides a score for each woman (resp. man). Such problems are more expressive than the classical stable marriage problems. Moreover, in some real-life situations, it is more natural to express scores (to model, for example, profits or costs) rather than a qualitative preference ordering. In this context, we define new notions of stability and optimality, and we provide algorithms to find marriages that are stable and/or optimal according to these notions. While expressivity greatly increases by adopting weighted preferences, we show that, in most cases, the desired solutions can be found by adapting existing algorithms for the classical stable marriage problem. We also consider the manipulability properties of the procedures that return such stable marriages. While we know that all procedures are manipulable by modifying the preference lists or by truncating them, here we consider whether manipulation can also occur by modifying only the weights while preserving the ordering and avoiding truncation. It turns out that, by adding weights, in some cases we may increase the possibility of manipulation, and this cannot be avoided by any reasonable restriction on the weights.
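The simplest adaptation the abstract alludes to is to derive orderings from the scores and run the classical man-proposing Gale-Shapley algorithm; the sketch below is illustrative and does not implement the paper's new score-based stability or optimality notions.

```python
def stable_marriage_from_scores(m_scores, w_scores):
    """Gale-Shapley on weighted preferences: m_scores[m][w] is man m's
    score for woman w, and symmetrically for w_scores. Orderings are
    derived from the scores (higher = more preferred).
    Returns a man -> woman matching."""
    n = len(m_scores)
    # preference lists derived from the scores
    m_pref = [sorted(range(n), key=lambda w: -m_scores[m][w]) for m in range(n)]
    next_prop = [0] * n           # next choice each man will propose to
    engaged_to = {}               # woman -> man
    free = list(range(n))
    while free:
        m = free.pop()
        w = m_pref[m][next_prop[m]]
        next_prop[m] += 1
        if w not in engaged_to:
            engaged_to[w] = m
        elif w_scores[w][m] > w_scores[w][engaged_to[w]]:
            free.append(engaged_to[w])   # w trades up; old partner is free again
            engaged_to[w] = m
        else:
            free.append(m)               # w rejects m
    return {m: w for w, m in engaged_to.items()}
```

Because only the orderings induced by the scores matter here, this procedure is exactly what makes weight-only manipulation (changing scores while preserving orderings) an interesting question for the richer score-based notions.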

  4. Evaluating Imputation Algorithms for Low-Depth Genotyping-By-Sequencing (GBS Data.

    Directory of Open Access Journals (Sweden)

    Ariel W Chan

    Well-powered genomic studies require genome-wide marker coverage across many individuals. For non-model species with few genomic resources, high-throughput sequencing (HTS) methods, such as Genotyping-By-Sequencing (GBS), offer an inexpensive alternative to array-based genotyping. Although affordable, datasets derived from HTS methods suffer from sequencing error, alignment errors, and missing data, all of which introduce noise and uncertainty to variant discovery and genotype calling. Under such circumstances, meaningful analysis of the data is difficult. Our primary interest lies in the issue of how one can accurately infer or impute missing genotypes in HTS-derived datasets. Many of the existing genotype imputation algorithms and software packages were primarily developed by and optimized for the human genetics community, a field where a complete and accurate reference genome has been constructed and SNP arrays have, in large part, been the common genotyping platform. We set out to answer two questions: 1) can we use existing imputation methods developed by the human genetics community to impute missing genotypes in datasets derived from non-human species and 2) are these methods, which were developed and optimized to impute ascertained variants, amenable for imputation of missing genotypes at HTS-derived variants? We selected Beagle v.4, a widely used algorithm within the human genetics community with reportedly high accuracy, to serve as our imputation contender. We performed a series of cross-validation experiments, using GBS data collected from the species Manihot esculenta by the Next Generation (NEXTGEN) Cassava Breeding Project. NEXTGEN currently imputes missing genotypes in their datasets using a LASSO-penalized, linear regression method (denoted 'glmnet'). We selected glmnet to serve as a benchmark imputation method for this reason. We obtained estimates of imputation accuracy by masking a subset of observed genotypes, imputing, and

  5. Evaluating Imputation Algorithms for Low-Depth Genotyping-By-Sequencing (GBS) Data.

    Science.gov (United States)

    Chan, Ariel W; Hamblin, Martha T; Jannink, Jean-Luc

    2016-01-01

    Well-powered genomic studies require genome-wide marker coverage across many individuals. For non-model species with few genomic resources, high-throughput sequencing (HTS) methods, such as Genotyping-By-Sequencing (GBS), offer an inexpensive alternative to array-based genotyping. Although affordable, datasets derived from HTS methods suffer from sequencing error, alignment errors, and missing data, all of which introduce noise and uncertainty to variant discovery and genotype calling. Under such circumstances, meaningful analysis of the data is difficult. Our primary interest lies in the issue of how one can accurately infer or impute missing genotypes in HTS-derived datasets. Many of the existing genotype imputation algorithms and software packages were primarily developed by and optimized for the human genetics community, a field where a complete and accurate reference genome has been constructed and SNP arrays have, in large part, been the common genotyping platform. We set out to answer two questions: 1) can we use existing imputation methods developed by the human genetics community to impute missing genotypes in datasets derived from non-human species and 2) are these methods, which were developed and optimized to impute ascertained variants, amenable for imputation of missing genotypes at HTS-derived variants? We selected Beagle v.4, a widely used algorithm within the human genetics community with reportedly high accuracy, to serve as our imputation contender. We performed a series of cross-validation experiments, using GBS data collected from the species Manihot esculenta by the Next Generation (NEXTGEN) Cassava Breeding Project. NEXTGEN currently imputes missing genotypes in their datasets using a LASSO-penalized, linear regression method (denoted 'glmnet'). We selected glmnet to serve as a benchmark imputation method for this reason. We obtained estimates of imputation accuracy by masking a subset of observed genotypes, imputing, and calculating the
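The masking-based cross-validation scheme used above can be sketched generically: hide a fraction of observed genotype calls, run any imputation method, and score it against the hidden truth. All names below are illustrative; `impute_fn` is whatever method is under test (e.g. Beagle or glmnet wrapped behind this interface).

```python
import random

def masked_accuracy(genotypes, impute_fn, mask_frac=0.1, seed=0):
    """Estimate imputation accuracy by hiding a fraction of observed
    genotype calls, imputing, and comparing to the hidden truth.
    genotypes: dict (sample, marker) -> call (None = already missing)."""
    rng = random.Random(seed)
    observed = [k for k, v in genotypes.items() if v is not None]
    masked = rng.sample(observed, int(mask_frac * len(observed)))
    hidden = {k: genotypes[k] for k in masked}
    test_set = dict(genotypes)
    for k in masked:
        test_set[k] = None          # hide the truth from the imputer
    imputed = impute_fn(test_set)   # user-supplied imputation method
    correct = sum(imputed[k] == hidden[k] for k in masked)
    return correct / len(masked)
```

Repeating this over several masks and mask fractions gives the accuracy estimates the study compares between Beagle and glmnet.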

  6. SU-E-P-31: Quantifying the Amount of Missing Tissue in a Digital Breast Tomosynthesis

    International Nuclear Information System (INIS)

    Goodenough, D; Olafsdottir, H; Olafsson, I; Fredriksson, J; Kristinsson, S; Oskarsdottir, G; Kristbjornsson, A; Mallozzi, R; Healy, A; Levy, J

    2015-01-01

    Purpose: To automatically quantify the amount of missing tissue in a digital breast tomosynthesis system using four stair-stepped chest-wall missing-tissue gauges in the Tomophan™ from the Phantom Laboratory and image processing from Image Owl. Methods: The Tomophan™ phantom incorporates four stair-stepped missing-tissue gauges by the chest wall, allowing measurement of missing chest wall at two different locations along the chest wall and at two different heights. Each of the four gauges has 12 steps in 0.5 mm increments rising from the chest wall. An image processing algorithm was developed by Image Owl that first finds the two slices containing the steps and then finds the signal through the highest step in all four gauges. Using the signal drop at the beginning of each gauge, the distance to the end of the image gives the length of the missing portion of the gauge in millimeters. Results: The Tomophan™ was imaged in digital breast tomosynthesis (DBT) systems from various vendors, resulting in 46 cases used for testing. The results showed that, on average, 1.9 mm of the 6 mm gauges are visible. A small focus group was asked to count the number of visible steps for each case, which resulted in good agreement between observer counts and computed data. Conclusion: First, the results indicate that the amount of missing chest wall can differ between vendors. Second, it was shown that an automated method to estimate the missing portion of the chest-wall gauges agreed well with observer assessments. This finding indicates that consistency testing may be simplified using the Tomophan™ phantom and automated image-processing analysis (Tomo QA). In general, the missing chest wall may be a function of the beam profile at the chest wall as the DBT system projects through its angular sampling. Research supported by Image Owl, Inc., The Phantom Laboratory, Inc. and Raforninn ehf; Mallozzi and Healy employed by The Phantom Laboratory, Inc.; Goodenough is a consultant to The

  7. SU-E-P-31: Quantifying the Amount of Missing Tissue in a Digital Breast Tomosynthesis

    Energy Technology Data Exchange (ETDEWEB)

    Goodenough, D [George Washington University, Washington, DC (United States); Olafsdottir, H; Olafsson, I; Fredriksson, J; Kristinsson, S; Oskarsdottir, G; Kristbjornsson, A [Raforninn Ehf., Reykjavik, Gullbringusysla (Iceland); Mallozzi, R; Healy, A; Levy, J [The Phantom Laboratory, Salem, NY (United States)

    2015-06-15

    Purpose: To automatically quantify the amount of missing tissue in a digital breast tomosynthesis system using four stair-stepped chest-wall missing-tissue gauges in the Tomophan™ from the Phantom Laboratory and image processing from Image Owl. Methods: The Tomophan™ phantom incorporates four stair-stepped missing-tissue gauges by the chest wall, allowing measurement of missing chest wall at two different locations along the chest wall and at two different heights. Each of the four gauges has 12 steps in 0.5 mm increments rising from the chest wall. An image processing algorithm was developed by Image Owl that first finds the two slices containing the steps and then finds the signal through the highest step in all four gauges. Using the signal drop at the beginning of each gauge, the distance to the end of the image gives the length of the missing portion of the gauge in millimeters. Results: The Tomophan™ was imaged in digital breast tomosynthesis (DBT) systems from various vendors, resulting in 46 cases used for testing. The results showed that, on average, 1.9 mm of the 6 mm gauges are visible. A small focus group was asked to count the number of visible steps for each case, which resulted in good agreement between observer counts and computed data. Conclusion: First, the results indicate that the amount of missing chest wall can differ between vendors. Second, it was shown that an automated method to estimate the missing portion of the chest-wall gauges agreed well with observer assessments. This finding indicates that consistency testing may be simplified using the Tomophan™ phantom and automated image-processing analysis (Tomo QA). In general, the missing chest wall may be a function of the beam profile at the chest wall as the DBT system projects through its angular sampling. Research supported by Image Owl, Inc., The Phantom Laboratory, Inc. and Raforninn ehf; Mallozzi and Healy employed by The Phantom Laboratory, Inc.; Goodenough is a consultant to The

  8. Algorithms imaging tests comparison following the first febrile urinary tract infection in children.

    Science.gov (United States)

    Tombesi, María M; Alconcher, Laura F; Lucarelli, Lucas; Ciccioli, Agustina

    2017-08-01

    To compare the diagnostic sensitivity, costs, and radiation doses of the imaging test algorithms developed by the Argentine Society of Pediatrics in 2003 and 2015, against British and American guidelines, after the first febrile urinary tract infection (UTI). Inclusion criteria: children ≤ 2 years old with their first febrile UTI who underwent ultrasound, voiding cystourethrography, and dimercaptosuccinic acid scintigraphy, according to the algorithm established by the Argentine Society of Pediatrics in 2003, treated between 2003 and 2010. The comparisons between algorithms were carried out through retrospective simulation. Eighty (80) patients met the inclusion criteria; 51 (63%) had vesicoureteral reflux (VUR); 6% of the cases were severe. Renal scarring was observed in 6 patients (7.5%). Cost: ARS 404,000. Radiation: 160 millisieverts. With the Argentine Society of Pediatrics' algorithm developed in 2015, the diagnosis of 4 VURs and 2 cases of renal scarring would have been missed, with a cost of ARS 301,800 and 124 millisieverts of radiation. British and American guidelines would have missed the diagnosis of all VURs and all cases of renal scarring, with related costs of ARS 23,000 and ARS 40,000, respectively, and no radiation. Intensive protocols are highly sensitive to VUR and renal scarring, but they imply high costs and doses of radiation, and result in questionable benefits. Sociedad Argentina de Pediatría

  9. The missed diagnosis

    International Nuclear Information System (INIS)

    Bundy, A.L.

    1988-01-01

    One of the questions that haunts the radiologist as he shuffles through piles of films is "What am I missing?" This same question takes on even more meaning when the radiologist is pressed for time, when he reluctantly checks the night work of the resident, when the patient left before more or better films could be obtained, or when the radiologist is involved in a subspecialty in which he is not properly trained. According to Dr. Berlin's survey, the missed diagnosis category accounted for the largest number of radiology malpractice cases. We all know that many diagnoses are more easily made using the "retrospectoscope." But is the plaintiff attorney also adept at using this instrument? Just how knowledgeable must the radiologist be in the use of the "prospectoscope"? A familiarity with cases that have already been tried should at least alert radiologists to the chances of their own involvement in litigation. While the missed diagnosis is by no means peculiar to the radiologist, it is one of the principal reasons that he may find himself in court

  10. A Total Variation Regularization Based Super-Resolution Reconstruction Algorithm for Digital Video

    Directory of Open Access Journals (Sweden)

    Zhang Liangpei

    2007-01-01

    Super-resolution (SR) reconstruction technique is capable of producing a high-resolution image from a sequence of low-resolution images. In this paper, we study an efficient SR algorithm for digital video. To effectively deal with the intractable problems in SR video reconstruction, such as inevitable motion estimation errors, noise, blurring, missing regions, and compression artifacts, the total variation (TV) regularization is employed in the reconstruction model. We use the fixed-point iteration method and preconditioning techniques to efficiently solve the associated nonlinear Euler-Lagrange equations of the corresponding variational problem in SR. The proposed algorithm has been tested in several cases of motion and degradation. It is also compared with the Laplacian regularization-based SR algorithm and other TV-based SR algorithms. Experimental results are presented to illustrate the effectiveness of the proposed algorithm.
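The role of the TV term can be illustrated with a toy: plain gradient descent on a smoothed TV-regularized denoising objective. This is only a stand-in for the variational step; the paper's method solves the full SR Euler-Lagrange equations with fixed-point iteration and preconditioning, and the step size, smoothing `eps`, and crude boundary handling here are all assumptions.

```python
import numpy as np

def tv_denoise(y, lam=0.1, step=0.1, n_iter=200, eps=1e-3):
    """Gradient descent on the smoothed TV objective
    0.5*||x - y||^2 + lam * sum sqrt(|grad x|^2 + eps),
    a toy stand-in for the variational step of TV super-resolution."""
    x = y.copy()
    for _ in range(n_iter):
        # forward differences (gradient of the image)
        gx = np.diff(x, axis=1, append=x[:, -1:])
        gy = np.diff(x, axis=0, append=x[-1:, :])
        mag = np.sqrt(gx**2 + gy**2 + eps)   # eps smooths TV at zero gradient
        px, py = gx / mag, gy / mag
        # divergence of the normalized gradient field (crude boundaries)
        div = (px - np.roll(px, 1, axis=1)) + (py - np.roll(py, 1, axis=0))
        # descend on data-fidelity plus TV terms
        x -= step * ((x - y) - lam * div)
    return x
```

TV's edge-preserving smoothing is what makes it attractive for SR video, where motion-estimation errors and compression artifacts behave like structured noise.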

  11. Urethral lymphogranuloma venereum infections in men with anorectal lymphogranuloma venereum and their partners: the missing link in the current epidemic?

    Science.gov (United States)

    de Vrieze, Nynke Hesselina Neeltje; van Rooijen, Martijn; Speksnijder, Arjen Gerard Cornelis Lambertus; de Vries, Henry John C

    2013-08-01

    Urethral lymphogranuloma venereum (LGV) is not screened for routinely. Among 341 men who have sex with men with anorectal LGV, we found that 7 (2.1%) had concurrent urethral LGV. Among 59 partners, 4 (6.8%) had urethral LGV infections. Urethral LGV is common, probably key in transmission, and missed by current routine LGV screening algorithms.

  12. Target-type probability combining algorithms for multisensor tracking

    Science.gov (United States)

    Wigren, Torbjorn

    2001-08-01

    Algorithms for the handling of target-type information in an operational multi-sensor tracking system are presented. The paper discusses recursive target-type estimation, computation of crosses from passive data (strobe track triangulation), and the computation of the quality of the crosses for deghosting purposes. The focus is on Bayesian algorithms that operate in the discrete target-type probability space, and on the approximations introduced for computational complexity reduction. The centralized algorithms are able to fuse discrete data from a variety of sensors and information sources, including IFF equipment, ESMs, IRSTs, as well as flight envelopes estimated from track data. All algorithms are asynchronous and can be tuned to handle clutter, erroneous associations, and missed and erroneous detections. A key to obtaining this ability is the inclusion of data forgetting, via a procedure for propagating target-type probability states between measurement time instances. Other important properties of the algorithms are their ability to handle ambiguous data and scenarios. These aspects are illustrated in a simulation study. The simulation setup includes 46 air targets of 6 different types that are tracked by 5 airborne sensor platforms using ESMs and IRSTs as data sources.
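The core recursion — a discrete Bayes update with forgetting toward the uniform distribution between measurements — can be sketched as follows. The blend-toward-uniform forgetting model and the `forget` rate are illustrative assumptions; the paper's propagation procedure may differ.

```python
def update_type_probs(prior, likelihood, forget=0.05):
    """One recursive Bayes step over a discrete target-type space, with
    exponential forgetting toward uniform applied between measurements
    so that stale or erroneous evidence decays over time.
    prior, likelihood: per-type lists of the same length."""
    n = len(prior)
    uniform = 1.0 / n
    # forgetting: blend the propagated prior toward the uniform distribution
    pred = [(1 - forget) * p + forget * uniform for p in prior]
    # Bayes update with the discrete measurement likelihood, then normalize
    post = [p * l for p, l in zip(pred, likelihood)]
    z = sum(post)
    return [p / z for p in post]
```

Because of the forgetting step, repeated consistent measurements saturate the type probability strictly below 1, which keeps the estimator responsive to a change of evidence.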

  13. Incomplete Early Childhood Immunization Series and Missing Fourth DTaP Immunizations; Missed Opportunities or Missed Visits?

    Science.gov (United States)

    Robison, Steve G

    2013-01-01

    The successful completion of early childhood immunizations is a proxy for overall quality of early care. Immunization status is usually assessed by up-to-date (UTD) rates covering combined series of different immunizations. However, series UTD rates often only bear on which single immunization is missing, rather than the success of all immunizations. In the US, most series UTD rates are limited by missing fourth DTaP-containing immunizations (diphtheria/tetanus/pertussis) due at 15 to 18 months of age. Missing fourth DTaP immunizations are associated either with a lack of visits at 15 to 18 months of age, or with visits at which no immunization was given. Typical immunization data, however, cannot distinguish between these two reasons. This study compared immunization records from the Oregon ALERT IIS with medical encounter records for two-year-olds in the Oregon Health Plan. Among those with 3 valid DTaPs by 9 months of age, 31.6% failed to receive a timely fourth DTaP; of those without a fourth DTaP, 42.1% did not have any provider visits from 15 through 18 months of age, while 57.9% had at least one provider visit. Those with a fourth DTaP averaged 2.45 encounters, while those with encounters but without a fourth DTaP averaged 2.23 encounters.

  14. Measuring adequacy of prenatal care: does missing visit information matter?

    Science.gov (United States)

    Kurtzman, Jordan H; Wasserman, Erin B; Suter, Barbara J; Glantz, J Christopher; Dozier, Ann M

    2014-09-01

    Kotelchuck's Adequacy of Prenatal Care Utilization (APNCU) Index is frequently used to classify levels of prenatal care. In the Finger Lakes Region (FLR) of upstate New York, prenatal care visit information late in pregnancy is often not documented on the birth certificate. We studied the extent of this missing information and its impact on the validity of regional APNCU scores. We calculated the "weeks between" a mother's last prenatal care visit and her infant's date of birth. We adjusted the APNCU algorithm creating the Last Visit Adequacy of Prenatal Care (LV-APNC) Index using the last recorded prenatal care visit date as the end point of care and the expected number of visits at that time. We compared maternal characteristics by care level with each index, examining rates of reclassification and number of "weeks between" by birth hospital. Stuart-Maxwell, McNemar, chi-square, and t-tests were used to determine statistical significance. Based on 58,462 births, the mean "weeks between" was 2.8 weeks. Compared with their APNCU Index score, 42.4 percent of mothers were reclassified using the LV-APNC Index. Major movement occurred from Intermediate (APNCU) to Adequate or Adequate Plus (LV-APNC) leaving the Intermediate Care group a more at-risk group of mothers. Those with Adequate or Adequate Plus Care (LV-APNC) increased by 31.6 percent, surpassing the Healthy People 2020 objective. In the FLR, missing visit information at the end of pregnancy results in an underestimation of mothers' prenatal care. Future research is needed to determine the extent of this missing visit information on the national level. © 2014 Wiley Periodicals, Inc.

  15. Time Series Forecasting with Missing Values

    OpenAIRE

    Shin-Fu Wu; Chia-Yung Chang; Shie-Jue Lee

    2015-01-01

    Time series prediction has become more popular in various kinds of applications such as weather prediction, control engineering, financial analysis, industrial monitoring, etc. To deal with real-world problems, we are often faced with missing values in the data due to sensor malfunctions or human errors. Traditionally, the missing values are simply omitted or replaced by means of imputation methods. However, omitting those missing values may cause temporal discontinuity. Imputation methods, o...

  16. A Relative-Localization Algorithm Using Incomplete Pairwise Distance Measurements for Underwater Applications

    Directory of Open Access Journals (Sweden)

    Kae Y. Foo

    2010-01-01

    Full Text Available The task of localizing underwater assets involves the relative localization of each unit using only pairwise distance measurements, usually obtained from time-of-arrival or time-delay-of-arrival measurements. In the fluctuating underwater environment, a complete set of pairwise distance measurements can often be difficult to acquire, thus hindering a straightforward closed-form solution in deriving the assets' relative coordinates. An iterative multidimensional scaling approach is presented based upon a weighted-majorization algorithm that tolerates missing or inaccurate distance measurements. Substantial modifications are proposed to optimize the algorithm, while the effects of refractive propagation paths are considered. A parametric study of the algorithm based upon simulation results is shown. An acoustic field trial was then carried out, presenting field measurements to highlight the practical implementation of this algorithm.

  17. Clustering for Binary Data Sets by Using Genetic Algorithm-Incremental K-means

    Science.gov (United States)

    Saharan, S.; Baragona, R.; Nor, M. E.; Salleh, R. M.; Asrah, N. M.

    2018-04-01

    This research was initially driven by the lack of clustering algorithms that specifically focus on binary data. To close this gap, a promising technique for analysing this type of data became the main subject of this research, namely Genetic Algorithms (GA). For the purpose of this research, GA was combined with the Incremental K-means (IKM) algorithm to cluster binary data streams. In GAIKM, the objective function is based on a few sufficient statistics that can be easily and quickly calculated on binary numbers. The use of IKM gives an advantage in terms of fast convergence. The results show that GAIKM is an efficient and effective new clustering algorithm compared with existing clustering algorithms and with IKM itself. In conclusion, GAIKM outperformed other clustering algorithms such as GCUK, IKM, Scalable K-means (SKM) and K-means clustering, and paves the way for future research involving missing data and outliers.
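    The incremental-k-means half of the idea can be sketched as follows: clusters are summarised by sufficient statistics (member count and per-feature sums of ones), so centroids update cheaply as binary points stream in. The GA wrapper of GAIKM is omitted, and seeding with the first k rows is an assumption made for this sketch.

```python
import numpy as np

def incremental_kmeans_binary(X, k):
    """Incremental k-means for binary vectors using sufficient
    statistics: per-cluster member counts and per-feature sums."""
    sums = X[:k].astype(float).copy()      # seed centroids: first k rows
    counts = np.ones(k)
    labels = np.empty(len(X), dtype=int)
    for i, x in enumerate(X):
        centroids = sums / counts[:, None]
        j = int(np.argmin(((x - centroids) ** 2).sum(axis=1)))
        labels[i] = j
        sums[j] += x                       # update sufficient statistics
        counts[j] += 1
    return labels, sums / counts[:, None]

# Two assumed binary prototypes, interleaved in the stream.
A = np.array([1, 1, 1, 1, 0, 0, 0, 0])
B = np.array([0, 0, 0, 0, 1, 1, 1, 1])
X = np.array([A, B, A, B, A, B, A, B])
labels, centroids = incremental_kmeans_binary(X, 2)
```

    Because only counts and sums are stored, the memory cost is independent of the stream length, which is what makes the incremental variant attractive for data streams.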

  18. STUDY ON MATERNAL MORTALITY AND NEAR MISS CASE

    Directory of Open Access Journals (Sweden)

    Ritanjali Behera

    2017-12-01

    Full Text Available BACKGROUND Maternal mortality has traditionally been the indicator of maternal health. More recently, the review of near-miss obstetric events has been found useful for investigating maternal mortality. Near-miss cases are those in which a woman nearly died but survived a complication that occurred during pregnancy or childbirth. Aims and Objectives: 1. To analyse near-miss cases and maternal deaths. 2. To determine maternal near-miss indicators and to analyse the causes and contributing factors for both. MATERIALS AND METHODS This prospective observational study was conducted in M.K.C.G. Medical College, Berhampur, from 1st October 2015 to 30th September 2017. All maternal deaths and near-miss cases as defined by WHO criteria were included. Information regarding demographic profile and reproductive parameters was collected, and results were analysed using percentages and proportions. RESULTS Out of 17977 deliveries, 201 were near-miss cases and 116 were maternal deaths. The MMR was 681, the near-miss incidence 1.18, and the maternal death to near-miss ratio 1:1.73. Hypertensive disorders of pregnancy (37.4%) were the leading cause, followed by haemorrhage (17.4%). Of the near-miss cases, 101 fulfilled clinical criteria, 61 laboratory criteria and 131 management-based criteria. CONCLUSION Hypertensive disorders of pregnancy and haemorrhage are the leading causes of maternal death, and for near-miss cases the most commonly involved organ system was the cardiovascular system. All near-miss cases should be interpreted as opportunities to improve health care services.

  19. Predicting Missing Marker Trajectories in Human Motion Data Using Marker Intercorrelations.

    Science.gov (United States)

    Gløersen, Øyvind; Federolf, Peter

    2016-01-01

    Missing information in motion capture data caused by occlusion or detachment of markers is a common problem that is difficult to avoid entirely. The aim of this study was to develop and test an algorithm for reconstruction of corrupted marker trajectories in datasets representing human gait. The reconstruction was facilitated using information of marker inter-correlations obtained from a principal component analysis, combined with a novel weighting procedure. The method was completely data-driven, and did not require any training data. We tested the algorithm on datasets with movement patterns that can be considered both well suited (healthy subject walking on a treadmill) and less suited (transitioning from walking to running and the gait of a subject with cerebral palsy) to reconstruct. Specifically, we created 50 copies of each dataset, and corrupted them with gaps in multiple markers at random temporal and spatial positions. Reconstruction errors, quantified by the average Euclidean distance between predicted and measured marker positions, were ≤ 3 mm for the well suited dataset, even when there were gaps in up to 70% of all time frames. For the less suited datasets, median reconstruction errors were in the range 5-6 mm. However, a few reconstructions had substantially larger errors (up to 29 mm). Our results suggest that the proposed algorithm is a viable alternative both to conventional gap-filling algorithms and state-of-the-art reconstruction algorithms developed for motion capture systems. The strengths of the proposed algorithm are that it can fill gaps anywhere in the dataset, and that the gaps can be considerably longer than when using conventional interpolation techniques. Limitations are that it does not enforce musculoskeletal constraints, and that the reconstruction accuracy declines if applied to datasets with less predictable movement patterns.
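    The PCA-based idea — exploit marker inter-correlations to fill gaps — can be sketched as an iterative low-rank reconstruction. This is a simplified sketch, not the paper's algorithm: the weighting procedure is omitted, and the synthetic "marker" signals and gap placement are invented for the example.

```python
import numpy as np

def pca_gap_fill(data, n_components=2, iters=200):
    """Iteratively reconstruct NaN gaps in a frames x coordinates
    matrix from its leading principal components."""
    gaps = np.isnan(data)
    X = np.where(gaps, np.nanmean(data, axis=0), data)  # init: column means
    for _ in range(iters):
        mu = X.mean(axis=0)
        U, s, Vt = np.linalg.svd(X - mu, full_matrices=False)
        low_rank = (U[:, :n_components] * s[:n_components]) @ Vt[:n_components] + mu
        X[gaps] = low_rank[gaps]            # refine only the gap entries
    return X

# Four correlated "marker coordinates" driven by two latent signals.
t = np.linspace(0, 4 * np.pi, 200)
truth = np.column_stack([np.sin(t), 0.8 * np.sin(t) + 0.1,
                         np.cos(t), 0.5 * np.cos(t) - 0.2])
corrupt = truth.copy()
corrupt[50:60, 1] = np.nan                  # a 10-frame gap in one marker
filled = pca_gap_fill(corrupt)
```

    Because the gap is reconstructed from the correlated markers rather than interpolated in time, the approach keeps working even for gaps much longer than interpolation could handle, which matches the strength claimed in the abstract.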

  20. The Role of Work-Integrated Learning in Student Preferences of Instructional Methods in an Accounting Curriculum

    Science.gov (United States)

    Abeysekera, Indra

    2015-01-01

    The role of work-integrated learning in student preferences of instructional methods is largely unexplored across the accounting curriculum. This study conducted six experiments to explore student preferences of instructional methods for learning, in six courses of the accounting curriculum that differed in algorithmic rigor, in the context of a…

  1. Design and Demonstration of Automated Data Analysis Algorithms for Ultrasonic Inspection of Complex Composite Panels with Bonds

    Science.gov (United States)

    2016-02-01

    To reduce the data review burden and improve the reliability of the ultrasonic inspection of large composite structures, automated data analysis (ADA) algorithms were designed and demonstrated. The ADA-called indications are classified into three groups: true positives (TP), missed calls (MC) and false calls (FC). Subject terms: automated data analysis (ADA) algorithms; time-of-flight indications; backwall amplitude dropout; thickness and backwall C-scan images.

  2. Algorithmic Mechanism Design of Evolutionary Computation.

    Science.gov (United States)

    Pei, Yan

    2015-01-01

    We consider algorithmic design, enhancement, and improvement of evolutionary computation as a mechanism design problem. All individuals or several groups of individuals can be considered as self-interested agents. The individuals in evolutionary computation can manipulate parameter settings and operations by satisfying their own preferences, which are defined by an evolutionary computation algorithm designer, rather than by following a fixed algorithm rule. Evolutionary computation algorithm designers or self-adaptive methods should construct proper rules and mechanisms for all agents (individuals) to conduct their evolutionary behaviour correctly in order to achieve the desired and preset objective(s). As a case study, we propose a formal framework for parameter setting, strategy selection, and algorithmic design of evolutionary computation by considering the Nash strategy equilibrium of a mechanism design in the search process. The evaluation results demonstrate the efficiency of the framework. This principle can be implemented in any evolutionary computation algorithm that needs to consider strategy selection issues in its optimization process. The final objective of our work is to treat evolutionary computation design as an algorithmic mechanism design problem and establish its fundamental aspects from this perspective. This paper is a first step towards that objective, implementing a strategy equilibrium solution (such as a Nash equilibrium) in an evolutionary computation algorithm.

  3. Algorithmic randomness and physical entropy

    International Nuclear Information System (INIS)

    Zurek, W.H.

    1989-01-01

    Algorithmic randomness provides a rigorous, entropy-like measure of disorder of an individual, microscopic, definite state of a physical system. It is defined by the size (in binary digits) of the shortest message specifying the microstate uniquely up to the assumed resolution. Equivalently, algorithmic randomness can be expressed as the number of bits in the smallest program for a universal computer that can reproduce the state in question (for instance, by plotting it with the assumed accuracy). In contrast to the traditional definitions of entropy, algorithmic randomness can be used to measure disorder without any recourse to probabilities. Algorithmic randomness is typically very difficult to calculate exactly but relatively easy to estimate. In large systems, probabilistic ensemble definitions of entropy (e.g., coarse-grained entropy of Gibbs and Boltzmann's entropy H = ln W, as well as Shannon's information-theoretic entropy) provide accurate estimates of the algorithmic entropy of an individual system or its average value for an ensemble. One is thus able to rederive much of thermodynamics and statistical mechanics in a setting very different from the usual. Physical entropy, I suggest, is a sum of (i) the missing information measured by Shannon's formula and (ii) the algorithmic information content (algorithmic randomness) present in the available data about the system. This definition of entropy is essential in describing the operation of thermodynamic engines from the viewpoint of information gathering and using systems. These Maxwell demon-type entities are capable of acquiring and processing information and can therefore "decide", on the basis of the results of their measurements and computations, the best strategy for extracting energy from their surroundings. From their internal point of view the outcome of each measurement is definite.
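    In symbols, the suggested decomposition can be written as follows. This is a schematic rendering of the abstract's verbal definition; the notation (d for the available data, H_d for the Shannon term, K(d) for the algorithmic term) is assumed here rather than taken from the record.

```latex
S_d = H_d + K(d), \qquad H_d = -\sum_{s} p(s \mid d)\,\log_2 p(s \mid d)
```

    The first term measures the remaining ignorance about the microstate s given the data d; the second measures how many bits the data themselves cost to specify.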

  4. Methods for Mediation Analysis with Missing Data

    Science.gov (United States)

    Zhang, Zhiyong; Wang, Lijuan

    2013-01-01

    Despite wide applications of both mediation models and missing data techniques, formal discussion of mediation analysis with missing data is still rare. We introduce and compare four approaches to dealing with missing data in mediation analysis including list wise deletion, pairwise deletion, multiple imputation (MI), and a two-stage maximum…

  5. Finding the Most Preferred Decision-Making Unit in Data Envelopment Analysis

    Directory of Open Access Journals (Sweden)

    Shirin Mohammadi

    2016-01-01

    Full Text Available Data envelopment analysis (DEA) evaluates the efficiency with which a decision-making unit (DMU) transforms its inputs into its outputs. Finding the benchmarks of a DMU is one of the important purposes of DEA. The benchmarks of a DMU in DEA are obtained by solving certain linear programming models. Conventionally, benchmarks are found using only the input and output data, without considering the decision-maker's preferences. When the preferences of the decision-maker are available, it is very important to obtain the most preferred DMU as a benchmark for the DMU under assessment. In this regard, we present an algorithm to find the most preferred DMU based on the utility function of the decision-maker's preferences, and explore some of its properties. The proposed method is constructed by projecting the gradient of the utility function onto the frontier of the production possibility set.
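    The linear programs mentioned above are standard in DEA. As an illustration of how an efficiency score (and hence a benchmark) is computed, here is the classical input-oriented CCR envelopment model; this is not the paper's utility-guided algorithm, and the three toy DMUs are invented for the example.

```python
import numpy as np
from scipy.optimize import linprog

def ccr_efficiency(X, Y, k):
    """Input-oriented CCR efficiency of DMU k.
    X: inputs (n_dmus x n_inputs); Y: outputs (n_dmus x n_outputs)."""
    n, m = X.shape
    s = Y.shape[1]
    c = np.zeros(1 + n)                 # variables: [theta, lambda_1..lambda_n]
    c[0] = 1.0                          # minimise theta
    A_ub = np.zeros((m + s, 1 + n))
    b_ub = np.zeros(m + s)
    A_ub[:m, 0] = -X[k]                 # sum_j lam_j x_ij <= theta * x_ik
    A_ub[:m, 1:] = X.T
    A_ub[m:, 1:] = -Y.T                 # sum_j lam_j y_rj >= y_rk
    b_ub[m:] = -Y[k]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub,
                  bounds=[(0, None)] * (1 + n), method="highs")
    return res.x[0]

# Three hypothetical DMUs, one input and one output each.
X = np.array([[1.0], [2.0], [4.0]])
Y = np.array([[1.0], [2.0], [2.0]])
effs = [ccr_efficiency(X, Y, k) for k in range(3)]
```

    A score of 1 marks an efficient (frontier) DMU; the optimal lambda weights identify which frontier units serve as its benchmarks. The paper's contribution is to pick among such frontier points the one maximizing the decision-maker's utility.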

  6. Reducing Misses and Near Misses Related to Multitasking on the Electronic Health Record: Observational Study and Qualitative Analysis.

    Science.gov (United States)

    Ratanawongsa, Neda; Matta, George Y; Bohsali, Fuad B; Chisolm, Margaret S

    2018-02-06

    Clinicians' use of electronic health record (EHR) systems while multitasking may increase the risk of making errors, but silent EHR system use may lower patient satisfaction. Delaying EHR system use until after patient visits may increase clinicians' EHR workload, stress, and burnout. We aimed to describe the perspectives of clinicians, educators, administrators, and researchers about misses and near misses that they felt were related to clinician multitasking while using EHR systems. This observational study was a thematic analysis of perspectives elicited from 63 continuing medical education (CME) participants during 2 workshops and 1 interactive lecture about challenges and strategies for relationship-centered communication during clinician EHR system use. The workshop elicited reflection about memorable times when multitasking EHR use was associated with "misses" (errors that were not caught at the time) or "near misses" (mistakes that were caught before leading to errors). We conducted qualitative analysis using an editing analysis style to identify codes and then select representative themes and quotes. All workshop participants shared stories of misses or near misses in EHR system ordering and documentation or patient-clinician communication, wondering about "misses we don't even know about." Risk factors included the computer's position, EHR system usability, note content and style, information overload, problematic workflows, systems issues, and provider and patient communication behaviors and expectations. Strategies to reduce multitasking EHR system misses included clinician transparency when needing silent EHR system use (eg, for prescribing), narrating EHR system use, patient activation during EHR system use, adapting visit organization and workflow, improving EHR system design, and improving team support and systems. CME participants shared numerous stories of errors and near misses in EHR tasks and communication that they felt related to EHR

  7. An interference account of the missing-VP effect

    Directory of Open Access Journals (Sweden)

    Markus eBader

    2015-06-01

    Full Text Available Sentences with doubly center-embedded relative clauses in which a verb phrase (VP) is missing are sometimes perceived as grammatical, thus giving rise to an illusion of grammaticality. In this paper, we provide a new account of why missing-VP sentences, which are both complex and ungrammatical, lead to an illusion of grammaticality, the so-called missing-VP effect. We propose that the missing-VP effect in particular, and processing difficulties with multiply center-embedded clauses more generally, are best understood as resulting from interference during cue-based retrieval. When processing a sentence with double center-embedding, a retrieval error due to interference can cause the verb of an embedded clause to be erroneously attached into a higher clause. This can lead to an illusion of grammaticality in the case of missing-VP sentences and to processing complexity in the case of complete sentences with double center-embedding. Evidence for an interference account of the missing-VP effect comes from experiments that have investigated the missing-VP effect in German using a speeded grammaticality judgment procedure. We review this evidence and then present two new experiments that show that the missing-VP effect can be found in German also with less restrictive procedures. One experiment was a questionnaire study which required grammaticality judgments from participants, but without imposing any time constraints. The second experiment used a self-paced reading procedure and did not require any judgments. Both experiments confirm the prior findings of missing-VP effects in German and also show that the missing-VP effect is subject to a primacy effect, as known from the memory literature. Based on this evidence, we argue that an account of missing-VP effects in terms of interference during cue-based retrieval is superior to accounts in terms of limited memory resources or in terms of experience with embedded structures.

  8. An integrated fuzzy regression algorithm for energy consumption estimation with non-stationary data: A case study of Iran

    Energy Technology Data Exchange (ETDEWEB)

    Azadeh, A; Seraj, O [Department of Industrial Engineering and Research Institute of Energy Management and Planning, Center of Excellence for Intelligent-Based Experimental Mechanics, College of Engineering, University of Tehran, P.O. Box 11365-4563 (Iran); Saberi, M [Department of Industrial Engineering, University of Tafresh (Iran); Institute for Digital Ecosystems and Business Intelligence, Curtin University of Technology, Perth (Australia)

    2010-06-15

    This study presents an integrated fuzzy regression and time series framework to estimate and predict electricity demand for seasonal and monthly changes in electricity consumption, especially in developing countries with non-stationary data such as China and Iran. It is difficult to model the uncertain behavior of energy consumption with conventional fuzzy regression (FR) or time series alone, and the integrated algorithm can be an ideal substitute in such cases. First, the preferred time series model is selected from linear and nonlinear candidates: after the preferred autoregressive moving average (ARMA) model is chosen, the McLeod-Li test is applied to check for nonlinearity. If the nonlinearity condition is satisfied, the preferred nonlinear model is selected and defined as the preferred time series model. Finally, the better of the fuzzy regression and time series models is selected by the Granger-Newbold test. The impact of data preprocessing on fuzzy regression performance is also considered. Monthly electricity consumption of Iran from March 1994 to January 2005 is considered as the case study. The superiority of the proposed algorithm is shown by comparing its results with other intelligent tools such as Genetic Algorithms (GA) and Artificial Neural Networks (ANN). (author)
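    The nonlinearity check in the selection pipeline is the McLeod-Li test: a Ljung-Box Q statistic computed on the squared residuals of the fitted linear model. The following is a self-contained numpy sketch of just that step (the residual series are simulated here; the fuzzy regression framework and the Granger-Newbold comparison are not reproduced).

```python
import numpy as np

def mcleod_li_stat(resid, lags=10):
    """McLeod-Li statistic: Ljung-Box Q on the *squared* residuals.
    Under linearity it is approximately chi-squared with `lags`
    degrees of freedom; large values signal ARCH-type nonlinearity."""
    e2 = resid ** 2
    e2 = e2 - e2.mean()
    n = len(e2)
    denom = (e2 ** 2).sum()
    q = 0.0
    for k in range(1, lags + 1):
        r_k = (e2[k:] * e2[:-k]).sum() / denom   # lag-k autocorrelation
        q += r_k ** 2 / (n - k)
    return n * (n + 2) * q

rng = np.random.default_rng(0)
linear_resid = rng.standard_normal(500)          # white noise: no nonlinearity
z = rng.standard_normal(500)                     # ARCH(1)-type residuals
arch_resid = np.empty(500)
arch_resid[0] = z[0]
for t in range(1, 500):
    arch_resid[t] = z[t] * np.sqrt(1.0 + 0.5 * arch_resid[t - 1] ** 2)

q_lin = mcleod_li_stat(linear_resid)
q_arch = mcleod_li_stat(arch_resid)
```

    With 10 lags, the 5% chi-squared critical value is about 18.3, so the white-noise residuals pass while the ARCH-type residuals fail decisively, triggering the nonlinear branch of the selection procedure.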

  9. Breast carcinomas: why are they missed?

    Science.gov (United States)

    Muttarak, M; Pojchamarnwiputh, S; Chaiwun, B

    2006-10-01

    Mammography has proven to be an effective modality for the detection of early breast carcinoma. However, 4-34 percent of breast cancers may be missed at mammography. Delayed diagnosis of breast carcinoma results in an unfavourable prognosis. The objective of this study was to determine the causes and characteristics of breast carcinomas missed by mammography at our institution, with the aim of reducing the rate of missed carcinoma. We reviewed the reports of 13,191 mammograms performed over a five-year period. The Breast Imaging Reporting and Data System (BI-RADS) was used for the mammographic assessment, and reports were cross-referenced with the histological diagnosis of breast carcinoma. Causes of missed carcinomas were classified. Of 344 patients who had breast carcinoma and had mammograms performed prior to surgery, 18 (5.2 percent) failed to be diagnosed by mammography. Of these, five were caused by dense breast parenchyma obscuring the lesions, 11 were due to perception and interpretation errors, and one each was due to unusual lesion characteristics and poor positioning. Several factors, including dense breast parenchyma obscuring a lesion, perception error, interpretation error, unusual lesion characteristics, and poor technique or positioning, are possible causes of missed breast cancers.

  10. Comparison of missing value imputation methods in time series: the case of Turkish meteorological data

    Science.gov (United States)

    Yozgatligil, Ceylan; Aslan, Sipan; Iyigun, Cem; Batmaz, Inci

    2013-04-01

    This study aims to compare several imputation methods for completing the missing values of spatio-temporal meteorological time series. To this end, six imputation methods are assessed with respect to various criteria, including accuracy, robustness, precision, and efficiency, for artificially created missing data in monthly total precipitation and mean temperature series obtained from the Turkish State Meteorological Service. Of these methods, simple arithmetic average, normal ratio (NR), and NR weighted with correlations comprise the simple ones, whereas a multilayer perceptron type neural network and a multiple imputation strategy based on Markov chain Monte Carlo and expectation-maximization (EM-MCMC) are the computationally intensive ones. In addition, we propose a modification of the EM-MCMC method. Besides using a conventional accuracy measure based on squared errors, we also suggest the correlation dimension (CD) technique of nonlinear dynamic time series analysis, which takes spatio-temporal dependencies into account, for evaluating imputation performance. Based on detailed graphical and quantitative analysis, it can be said that although the computational methods, particularly the EM-MCMC method, are computationally expensive, they seem favorable for imputation of meteorological time series with respect to different missingness periods, considering both measures and both series studied. To conclude, using the EM-MCMC algorithm for imputing missing values before conducting any statistical analyses of meteorological data will decrease the amount of uncertainty and give more robust results. Moreover, the CD measure can be suggested for the performance evaluation of missing data imputation, particularly with computational methods, since it gives more precise results in meteorological time series.
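    The EM idea behind the computational methods can be sketched as follows: missing entries are repeatedly replaced by their conditional mean given the observed entries, with the mean vector and covariance re-estimated on each sweep. This is a simplified EM-flavoured sketch, not the paper's EM-MCMC strategy (which additionally draws imputations from a posterior); the bivariate data are simulated for the example, and the conditional-covariance correction of full EM is omitted.

```python
import numpy as np

def em_style_impute(X, iters=25):
    """Iterative conditional-mean imputation for roughly Gaussian
    data; NaN marks a missing entry."""
    X = X.copy()
    miss = np.isnan(X)
    col_means = np.nanmean(X, axis=0)
    X[miss] = np.take(col_means, np.where(miss)[1])   # crude initialisation
    for _ in range(iters):
        mu = X.mean(axis=0)
        cov = np.cov(X, rowvar=False)
        for i in np.unique(np.where(miss)[0]):        # rows with gaps
            m = miss[i]
            beta = np.linalg.solve(cov[np.ix_(~m, ~m)], X[i, ~m] - mu[~m])
            X[i, m] = mu[m] + cov[np.ix_(m, ~m)] @ beta
    return X

# Two strongly correlated series; hide ten values of the second one.
rng = np.random.default_rng(1)
x = rng.standard_normal(200)
y = 2.0 * x + 0.1 * rng.standard_normal(200)
data = np.column_stack([x, y])
data[:10, 1] = np.nan
filled = em_style_impute(data)
```

    Because the two columns are highly correlated, the conditional-mean imputations land close to the regression line y ≈ 2x, which is exactly the kind of cross-series structure that makes the EM-based methods outperform simple averages on meteorological data.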

  11. Automated backbone assignment of labeled proteins using the threshold accepting algorithm

    International Nuclear Information System (INIS)

    Leutner, Michael; Gschwind, Ruth M.; Liermann, Jens; Schwarz, Christian; Gemmecker, Gerd; Kessler, Horst

    1998-01-01

    The sequential assignment of backbone resonances is the first step in the structure determination of proteins by heteronuclear NMR. For larger proteins, an assignment strategy based on proton side-chain information is no longer suitable for use in an automated procedure. Our program PASTA (Protein ASsignment by Threshold Accepting) is therefore designed to partially or fully automate the sequential assignment of proteins, based on the analysis of NMR backbone resonances plus Cβ information. In order to overcome the problems caused by peak overlap and missing signals in an automated assignment process, PASTA uses threshold accepting, a combinatorial optimization strategy, which is superior to simulated annealing due to generally faster convergence and better solutions. The reliability of this algorithm is shown by reproducing the complete sequential backbone assignment of several proteins from published NMR data. The robustness of the algorithm against misassigned signals, noise, spectral overlap and missing peaks is shown by repeating the assignment with reduced sequential information and increased chemical shift tolerances. The performance of the program on real data is finally demonstrated with automatically picked peak lists of human nonpancreatic synovial phospholipase A2, a protein with 124 residues.
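    Threshold accepting itself is a small, generic optimization loop: unlike simulated annealing, a candidate move is accepted deterministically whenever it worsens the cost by less than the current (shrinking) threshold. The sketch below is generic, not the PASTA assignment code; the 1-D toy cost function and the threshold schedule are illustrative assumptions.

```python
import random

def threshold_accepting(cost, neighbour, x0, thresholds):
    """Threshold accepting: accept any move that is not worse than
    the current solution by more than the current threshold."""
    x, cx = x0, cost(x0)
    best, cbest = x, cx
    for T in thresholds:
        y = neighbour(x)
        cy = cost(y)
        if cy - cx < T:                 # deterministic acceptance rule
            x, cx = y, cy
            if cy < cbest:
                best, cbest = y, cy
    return best, cbest

# Toy demo: minimise a 1-D quadratic with a linearly shrinking threshold.
random.seed(0)
best, cbest = threshold_accepting(
    cost=lambda x: (x - 3.0) ** 2,
    neighbour=lambda x: x + random.uniform(-1, 1),
    x0=0.0,
    thresholds=[1.0 * (1 - i / 2000) for i in range(2000)],
)
```

    Dropping the probabilistic acceptance test of simulated annealing removes one source of randomness per step, which is the usual explanation for the faster convergence claimed in the abstract.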

  12. A Hybrid Neural Network-Genetic Algorithm Technique for Aircraft Engine Performance Diagnostics

    Science.gov (United States)

    Kobayashi, Takahisa; Simon, Donald L.

    2001-01-01

    In this paper, a model-based diagnostic method, which utilizes Neural Networks and Genetic Algorithms, is investigated. Neural networks are applied to estimate the engine internal health, and Genetic Algorithms are applied for sensor bias detection and estimation. This hybrid approach takes advantage of the nonlinear estimation capability provided by neural networks while improving the robustness to measurement uncertainty through the application of Genetic Algorithms. The hybrid diagnostic technique also has the ability to rank multiple potential solutions for a given set of anomalous sensor measurements in order to reduce false alarms and missed detections. The performance of the hybrid diagnostic technique is evaluated through some case studies derived from a turbofan engine simulation. The results show this approach is promising for reliable diagnostics of aircraft engines.

  13. Rapid maximum likelihood ancestral state reconstruction of continuous characters: A rerooting-free algorithm.

    Science.gov (United States)

    Goolsby, Eric W

    2017-04-01

    Ancestral state reconstruction is a method used to study the evolutionary trajectories of quantitative characters on phylogenies. Although efficient methods for univariate ancestral state reconstruction under a Brownian motion model have been described for at least 25 years, to date no generalization has been described to allow more complex evolutionary models, such as multivariate trait evolution, non-Brownian models, missing data, and within-species variation. Furthermore, even for simple univariate Brownian motion models, most phylogenetic comparative R packages compute ancestral states via inefficient tree rerooting and full tree traversals at each tree node, making ancestral state reconstruction extremely time-consuming for large phylogenies. Here, a computationally efficient method for fast maximum likelihood ancestral state reconstruction of continuous characters is described. The algorithm has linear complexity relative to the number of species and outperforms the fastest existing R implementations by several orders of magnitude. The described algorithm is capable of performing ancestral state reconstruction on a 1,000,000-species phylogeny in fewer than 2 s using a standard laptop, whereas the next fastest R implementation would take several days to complete. The method is generalizable to more complex evolutionary models, such as phylogenetic regression, within-species variation, non-Brownian evolutionary models, and multivariate trait evolution. Because this method enables fast repeated computations on phylogenies of virtually any size, implementation of the described algorithm can drastically alleviate the computational burden of many otherwise prohibitively time-consuming tasks requiring reconstruction of ancestral states, such as phylogenetic imputation of missing data, bootstrapping procedures, Expectation-Maximization algorithms, and Bayesian estimation. The described ancestral state reconstruction algorithm is implemented in the Rphylopars
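    The linear-time, rerooting-free idea rests on a single pruning traversal. The sketch below shows one post-order pass that yields the maximum-likelihood root state under Brownian motion; it is a minimal illustration, not the Rphylopars implementation (the full algorithm adds a pre-order pass to obtain states at every internal node), and the example tree and tip values are invented.

```python
def bm_root_estimate(tree):
    """Post-order pruning for Brownian motion: each subtree is
    collapsed into a (mean, variance) pair, children being combined
    by precision-weighted averaging.
    A tree is ("tip", value) or ("node", [(child, branch_length), ...])."""
    kind, payload = tree
    if kind == "tip":
        return payload, 0.0                    # (mean, accumulated variance)
    prec_sum, weighted = 0.0, 0.0
    for child, blen in payload:
        m, v = bm_root_estimate(child)
        w = 1.0 / (v + blen)                   # precision contributed by child
        prec_sum += w
        weighted += w * m
    return weighted / prec_sum, 1.0 / prec_sum

# Hypothetical tree ((A:1, B:1):1, C:2) with tip states A=0, B=2, C=4.
tree = ("node", [
    (("node", [(("tip", 0.0), 1.0), (("tip", 2.0), 1.0)]), 1.0),
    (("tip", 4.0), 2.0),
])
root_state, root_var = bm_root_estimate(tree)
```

    Each node is visited exactly once, which is where the linear complexity in the number of species comes from; repeated rerooting, by contrast, costs a full traversal per node.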

  14. Gender identity rather than sexual orientation impacts on facial preferences.

    Science.gov (United States)

    Ciocca, Giacomo; Limoncin, Erika; Cellerino, Alessandro; Fisher, Alessandra D; Gravina, Giovanni Luca; Carosa, Eleonora; Mollaioli, Daniele; Valenzano, Dario R; Mennucci, Andrea; Bandini, Elisa; Di Stasi, Savino M; Maggi, Mario; Lenzi, Andrea; Jannini, Emmanuele A

    2014-10-01

    Differences in facial preferences between heterosexual men and women are well documented. It is still a matter of debate, however, how variations in sexual identity/sexual orientation may modify facial preferences. This study aims to investigate the facial preferences of male-to-female (MtF) individuals with gender dysphoria (GD) and the influence of short-term/long-term relationships on facial preference, in comparison with healthy subjects. Eighteen untreated MtF subjects, 30 heterosexual males, 64 heterosexual females, and 42 homosexual males were recruited from university students/staff, at gay events, and in gender clinics, and were shown a composite male or female face. The sexual dimorphism of these pictures was stressed or reduced in a continuous fashion through an open-source morphing program (gtkmorph, based on the X-Morph algorithm) with a sequence of 21 pictures of the same face warped from a feminized to a masculinized shape. MtF GD subjects and heterosexual females showed the same pattern of preferences: a clear preference for less dimorphic (more feminized) faces for both short- and long-term relationships. Conversely, both heterosexual and homosexual men selected significantly more dimorphic faces, showing a preference for hyperfeminized and hypermasculinized faces, respectively. These data show that the facial preferences of MtF GD individuals mirror those of the sex congruent with their gender identity. Conversely, heterosexual males trace the facial preferences of homosexual men, indicating that changes in sexual orientation do not substantially affect preference for the most attractive faces. © 2014 International Society for Sexual Medicine.

  15. Different methods for analysing and imputing missing values in wind speed series; La problematica de la calidad de la informacion en series de velocidad del viento-metodologias de analisis y imputacion de datos faltantes

    Energy Technology Data Exchange (ETDEWEB)

    Ferreira, A. M.

    2004-07-01

    This study concerns different methods for analysing and imputing missing values in wind speed series. The EM algorithm and a methodology derived from the sequential hot deck have been utilized. Series with imputed missing values are compared with the original, complete series using several criteria, such as the wind potential, and a significant goodness of fit appears to exist between the estimated and real values. (Author)
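
    The sequential hot-deck idea mentioned in the abstract can be sketched in a few lines: each missing value is filled with the most recent observed value in the ordered series. This is one common variant of the hot-deck family, not the study's exact procedure; the fallback rule for leading gaps is an assumption for illustration.

```python
def sequential_hot_deck(series):
    """Return a copy of `series` with missing entries (None) filled by the
    last observed value; leading missing values fall back to the first
    observed value (an illustrative choice, not from the study)."""
    first_observed = next((x for x in series if x is not None), None)
    imputed, last = [], first_observed
    for x in series:
        if x is None:
            imputed.append(last)      # carry the most recent donor forward
        else:
            imputed.append(x)
            last = x
    return imputed

wind = [5.2, None, 6.1, None, None, 4.8]   # toy wind speed series (m/s)
print(sequential_hot_deck(wind))           # [5.2, 5.2, 6.1, 6.1, 6.1, 4.8]
```

    The EM alternative evaluated in the study instead iterates between estimating distribution parameters and re-imputing, which generally preserves variance better than donor carry-forward.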

  16. Robust digital image inpainting algorithm in the wireless environment

    Science.gov (United States)

    Karapetyan, G.; Sarukhanyan, H. G.; Agaian, S. S.

    2014-05-01

    Image or video inpainting is the process/art of retrieving missing portions of an image without introducing artifacts detectable by an ordinary observer. An image/video can be damaged by a variety of factors, such as deterioration due to scratches, laser dazzling effects, wear and tear, dust spots, loss of data when transmitted through a channel, etc. Applications of inpainting include image restoration (removing laser dazzling effects, dust spots, date, text, time, etc.), image synthesis (texture synthesis), completing panoramas, image coding, wireless transmission (recovery of missing blocks), digital culture protection, image de-noising, fingerprint recognition, and film special effects and production. Most inpainting methods can be classified into two key groups: global and local. Global methods are used for generating large image regions from samples, while local methods are used for filling in small image gaps. Each method has its own advantages and limitations; for example, global inpainting methods perform well on textured image retrieval, whereas the classical local methods perform poorly. In addition, some of the techniques are computationally intensive, exceeding the capabilities of most currently used mobile devices. In general, existing inpainting algorithms are not suitable for the wireless environment. This paper presents a new and efficient scheme that combines the advantages of both local and global methods into a single algorithm. In particular, it introduces a blind inpainting model that solves the above problems by adaptively selecting the support area for the inpainting scheme. The proposed method is applied to various challenging image restoration tasks, including recovering old photos, recovering missing data in real and synthetic images, and recovering specular reflections in endoscopic images. A number of computer simulations demonstrate the effectiveness of the scheme and illustrate its main properties.

  17. Missing Data Imputation of Solar Radiation Data under Different Atmospheric Conditions

    Science.gov (United States)

    Turrado, Concepción Crespo; López, María del Carmen Meizoso; Lasheras, Fernando Sánchez; Gómez, Benigno Antonio Rodríguez; Rollé, José Luis Calvo; de Cos Juez, Francisco Javier

    2014-01-01

    Global solar broadband irradiance on a planar surface is measured at weather stations by pyranometers. In the case of the present research, solar radiation values from nine meteorological stations of the MeteoGalicia real-time observational network, captured and stored every ten minutes, are considered. In this kind of record, the lack of data and/or the presence of wrong values adversely affects any time series study. Consequently, when this occurs, a data imputation process must be performed in order to replace missing data with estimated values. This paper aims to evaluate the multivariate imputation of ten-minute scale data by means of the chained equations method (MICE). This method allows the network itself to impute the missing or wrong data of a solar radiation sensor, by using either all or just a group of the measurements of the remaining sensors. Very good results have been obtained with the MICE method in comparison with other methods employed in this field such as Inverse Distance Weighting (IDW) and Multiple Linear Regression (MLR). The average RMSE value of the predictions for the MICE algorithm was 13.37%, while that for MLR was 28.19% and that for IDW was 31.68%. PMID:25356644
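
    As a rough illustration of the chained-equations principle behind MICE (not the authors' implementation), one can cycle least-squares regressions over the columns of a small numeric matrix, re-predicting each column's missing cells from the others. The mean initialization, the deterministic regression step (real MICE draws from predictive distributions), and the toy matrix are all simplifying assumptions.

```python
import numpy as np

def mice_sketch(X, n_iter=10):
    """Chained-equations-style imputation sketch: initialize missing cells
    with column means, then repeatedly regress each column on the others
    and re-predict its missing entries."""
    X = np.array(X, dtype=float)
    mask = np.isnan(X)                                   # True where missing
    col_means = np.nanmean(X, axis=0)
    X[mask] = np.take(col_means, np.where(mask)[1])      # mean initialization
    for _ in range(n_iter):
        for j in range(X.shape[1]):
            if not mask[:, j].any():
                continue                                 # nothing to impute here
            others = np.delete(X, j, axis=1)
            A = np.hstack([others, np.ones((X.shape[0], 1))])  # add intercept
            obs = ~mask[:, j]
            coef, *_ = np.linalg.lstsq(A[obs], X[obs, j], rcond=None)
            X[mask[:, j], j] = A[mask[:, j]] @ coef      # re-impute missing cells
    return X

# Toy example: the second column is exactly twice the first,
# so the missing cell should be imputed as 2 * 3 = 6.
filled = mice_sketch([[1, 2], [2, 4], [3, float("nan")], [4, 8]])
```

    In the paper's setting, each column would be one station's ten-minute irradiance series, so a faulty sensor is reconstructed from its neighbours.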

  18. Missing Data Imputation of Solar Radiation Data under Different Atmospheric Conditions

    Directory of Open Access Journals (Sweden)

    Concepción Crespo Turrado

    2014-10-01

    Global solar broadband irradiance on a planar surface is measured at weather stations by pyranometers. In the case of the present research, solar radiation values from nine meteorological stations of the MeteoGalicia real-time observational network, captured and stored every ten minutes, are considered. In this kind of record, the lack of data and/or the presence of wrong values adversely affects any time series study. Consequently, when this occurs, a data imputation process must be performed in order to replace missing data with estimated values. This paper aims to evaluate the multivariate imputation of ten-minute scale data by means of the chained equations method (MICE). This method allows the network itself to impute the missing or wrong data of a solar radiation sensor, by using either all or just a group of the measurements of the remaining sensors. Very good results have been obtained with the MICE method in comparison with other methods employed in this field such as Inverse Distance Weighting (IDW) and Multiple Linear Regression (MLR). The average RMSE value of the predictions for the MICE algorithm was 13.37%, while that for MLR was 28.19% and that for IDW was 31.68%.

  19. Missing data imputation of solar radiation data under different atmospheric conditions.

    Science.gov (United States)

    Turrado, Concepción Crespo; López, María Del Carmen Meizoso; Lasheras, Fernando Sánchez; Gómez, Benigno Antonio Rodríguez; Rollé, José Luis Calvo; Juez, Francisco Javier de Cos

    2014-10-29

    Global solar broadband irradiance on a planar surface is measured at weather stations by pyranometers. In the case of the present research, solar radiation values from nine meteorological stations of the MeteoGalicia real-time observational network, captured and stored every ten minutes, are considered. In this kind of record, the lack of data and/or the presence of wrong values adversely affects any time series study. Consequently, when this occurs, a data imputation process must be performed in order to replace missing data with estimated values. This paper aims to evaluate the multivariate imputation of ten-minute scale data by means of the chained equations method (MICE). This method allows the network itself to impute the missing or wrong data of a solar radiation sensor, by using either all or just a group of the measurements of the remaining sensors. Very good results have been obtained with the MICE method in comparison with other methods employed in this field such as Inverse Distance Weighting (IDW) and Multiple Linear Regression (MLR). The average RMSE value of the predictions for the MICE algorithm was 13.37%, while that for MLR was 28.19% and that for IDW was 31.68%.

  20. Planned Missing Data Designs in Educational Psychology Research

    NARCIS (Netherlands)

    Rhemtulla, M.; Hancock, G.R.

    2016-01-01

    Although missing data are often viewed as a challenge for applied researchers, in fact missing data can be highly beneficial. Specifically, when the amount of missing data on specific variables is carefully controlled, a balance can be struck between statistical power and research costs. This

  1. Creating Web Sites The Missing Manual

    CERN Document Server

    MacDonald, Matthew

    2006-01-01

    Think you have to be a technical wizard to build a great web site? Think again. For anyone who wants to create an engaging web site--for either personal or business purposes--Creating Web Sites: The Missing Manual demystifies the process and provides tools, techniques, and expert guidance for developing a professional and reliable web presence. Like every Missing Manual, you can count on Creating Web Sites: The Missing Manual to be entertaining and insightful and complete with all the vital information, clear-headed advice, and detailed instructions you need to master the task at hand. Autho

  2. Scalable Tensor Factorizations with Missing Data

    DEFF Research Database (Denmark)

    Acar, Evrim; Dunlavy, Daniel M.; Kolda, Tamara G.

    2010-01-01

    of missing data, many important data sets will be discarded or improperly analyzed. Therefore, we need a robust and scalable approach for factorizing multi-way arrays (i.e., tensors) in the presence of missing data. We focus on one of the most well-known tensor factorizations, CANDECOMP/PARAFAC (CP...... is shown to successfully factor tensors with noise and up to 70% missing data. Moreover, our approach is significantly faster than the leading alternative and scales to larger problems. To show the real-world usefulness of CP-WOPT, we illustrate its applicability on a novel EEG (electroencephalogram...

  3. Spacecraft intercept guidance using zero effort miss steering

    Science.gov (United States)

    Newman, Brett

    The suitability of proportional navigation, or an equivalent zero effort miss formulation, for spacecraft intercepts during midcourse guidance, followed by a ballistic coast to the endgame, is addressed. The problem is formulated in terms of relative motion in a general 3D framework. The proposed guidance law for the commanded thrust vector orientation consists of the sum of two terms: (1) along the line of sight unit direction and (2) along the zero effort miss component perpendicular to the line of sight and proportional to the miss itself and a guidance gain. If the guidance law is to be suitable for longer range targeting applications with significant ballistic coasting after burnout, determination of the zero effort miss must account for the different gravitational accelerations experienced by each vehicle. The proposed miss determination techniques employ approximations for the true differential gravity effect. Theoretical results are applied to a numerical engagement scenario and the resulting performance is evaluated in terms of the miss distances determined from nonlinear simulation.
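
    The core of the guidance law can be sketched numerically for the simplified case with no differential gravity (the paper's main contribution, accounting for gravity in the miss prediction, is omitted here): the zero effort miss is the predicted miss if neither vehicle maneuvers, and the commanded acceleration lies along its component perpendicular to the line of sight. The gain N, the time-to-go, and the state vectors below are illustrative values, not from the paper.

```python
import numpy as np

def zem_command(r, v, t_go, N=3.0):
    """r, v: relative position and velocity vectors (target - interceptor).
    Returns the commanded acceleration of a zero-effort-miss steering law,
    assuming straight-line (gravity-free) coasting over t_go."""
    r, v = np.asarray(r, float), np.asarray(v, float)
    zem = r + v * t_go                        # predicted miss if no one maneuvers
    los = r / np.linalg.norm(r)               # line-of-sight unit vector
    zem_perp = zem - np.dot(zem, los) * los   # component normal to the LOS
    return N * zem_perp / t_go**2             # commanded lateral acceleration

a_cmd = zem_command(r=[10000.0, 500.0, 0.0], v=[-2000.0, 10.0, 0.0], t_go=5.0)
```

    On a perfect collision course the ZEM, and hence the command, is zero; any lateral drift produces a proportional corrective acceleration, which is the proportional-navigation behaviour the abstract describes.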

  4. Methods to Minimize Zero-Missing Phenomenon

    DEFF Research Database (Denmark)

    da Silva, Filipe Miguel Faria; Bak, Claus Leth; Gudmundsdottir, Unnur Stella

    2010-01-01

    With the increasing use of high-voltage AC cables at transmission levels, phenomena such as current zero-missing start to appear more often in transmission systems. Zero-missing phenomenon can occur when energizing cable lines with shunt reactors. This may considerably delay the opening of the ci...

  5. An Intuitive Dominant Test Algorithm of CP-nets Applied on Wireless Sensor Network

    Directory of Open Access Journals (Sweden)

    Liu Zhaowei

    2014-07-01

    A wireless sensor network consists of spatially distributed autonomous sensors, much like a multi-agent system. Conditional Preference networks (CP-nets) are a qualitative tool for representing ceteris paribus (all other things being equal) preference statements, and they have recently been a research hotspot in artificial intelligence. However, the algorithm and complexity of the strong dominance test with respect to binary-valued CP-nets have not been settled, and few researchers have addressed applications to other domains. In this paper, the strong dominance test and applications of CP-nets are studied in detail. Firstly, by constructing the induced graph of a CP-net and studying its properties, we conclude that the problem of strong dominance testing on binary-valued CP-nets is essentially a single-source shortest-path problem, so it can be solved by an improved Dijkstra's algorithm. Secondly, we apply this algorithm to the completeness of a wireless sensor network and design a completeness-judging algorithm based on the strong dominance test. Thirdly, we apply the algorithm on a wireless sensor network to solve a routing problem. In the end, we point out some interesting directions for future work.
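
    The reduction described above, from dominance testing to single-source shortest paths, can be sketched with a generic Dijkstra over a hypothetical induced graph of a binary CP-net: nodes are outcomes and an edge with unit cost represents a single preference flip. Reachability (and path cost) from one outcome to another then answers the dominance query. The toy graph below is invented for illustration and is not the paper's improved variant.

```python
import heapq

def dijkstra(graph, source):
    """Standard Dijkstra: graph maps node -> list of (neighbor, weight)."""
    dist = {source: 0}
    heap = [(0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue                          # stale queue entry
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist

# Outcomes over two binary variables; each edge is one preference flip.
outcomes = {"ab": [("Ab", 1), ("aB", 1)], "Ab": [("AB", 1)], "aB": [("AB", 1)]}
print(dijkstra(outcomes, "ab"))  # {'ab': 0, 'Ab': 1, 'aB': 1, 'AB': 2}
```

    Here "AB" is reachable from "ab", so "AB" dominates "ab" in this invented preference graph, with the path length bounding the number of flips.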

  6. Massively parallel algorithms for trace-driven cache simulations

    Science.gov (United States)

    Nicol, David M.; Greenberg, Albert G.; Lubachevsky, Boris D.

    1991-01-01

    Trace-driven cache simulation is central to computer design. A trace is a very long sequence of reference lines from main memory. At the t-th instant, reference x_t is hashed into a set of cache locations, the contents of which are then compared with x_t. If at the t-th instant x_t is not present in the cache, it is said to be a miss and is loaded into the cache set, possibly forcing the replacement of some other memory line and making x_t present for the (t+1)-st instant. The problem of parallel simulation of a subtrace of N references directed to a C-line cache set is considered, with the aim of determining which references are misses and related statistics. A simulation method is presented for the Least Recently Used (LRU) policy which, regardless of the set size C, runs in time O(log N) using N processors on the exclusive-read, exclusive-write (EREW) parallel model. A simpler LRU simulation algorithm is given that runs in O(C log N) time using N/log N processors. Timings are presented of the second algorithm's implementation on the MasPar MP-1, a machine with 16384 processors. A broad class of reference-based line replacement policies is considered, which includes LRU as well as the Least Frequently Used and Random replacement policies. A simulation method is presented for any such policy that, on any trace of length N directed to a C-line set, runs in O(C log N) time with high probability using N processors on the EREW model. The algorithms are simple, have very little space overhead, and are well suited for SIMD implementation.
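
    The sequential computation that the parallel algorithms reproduce can be sketched directly: feed a trace through a C-line LRU set and record which references miss. The trace and set size below are illustrative.

```python
from collections import OrderedDict

def lru_misses(trace, C):
    """Simulate a C-line LRU cache set over `trace`; return a per-reference
    list where True marks a miss."""
    cache = OrderedDict()            # keys ordered least- to most-recently used
    misses = []
    for x in trace:
        if x in cache:
            cache.move_to_end(x)     # hit: refresh recency
            misses.append(False)
        else:
            misses.append(True)      # miss: load, evicting the LRU line if full
            cache[x] = None
            if len(cache) > C:
                cache.popitem(last=False)
    return misses

trace = ["a", "b", "c", "a", "d", "b"]
print(lru_misses(trace, C=3))  # [True, True, True, False, True, True]
```

    The parallel methods in the paper compute the same miss sequence without scanning the trace serially, by exploiting the stack property of LRU.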

  7. A Monte Carlo Metropolis-Hastings Algorithm for Sampling from Distributions with Intractable Normalizing Constants

    KAUST Repository

    Liang, Faming; Jin, Ick-Hoon

    2013-01-01

    Simulating from distributions with intractable normalizing constants has been a long-standing problem in machine learning. In this letter, we propose a new algorithm, the Monte Carlo Metropolis-Hastings (MCMH) algorithm, for tackling this problem. The MCMH algorithm is a Monte Carlo version of the Metropolis-Hastings algorithm. It replaces the unknown normalizing constant ratio by a Monte Carlo estimate in simulations, while still converging, as shown in the letter, to the desired target distribution under mild conditions. The MCMH algorithm is illustrated with spatial autologistic models and exponential random graph models. Unlike other auxiliary variable Markov chain Monte Carlo (MCMC) algorithms, such as the Møller and exchange algorithms, the MCMH algorithm avoids the requirement for perfect sampling and thus can be applied to many statistical models for which perfect sampling is not available or very expensive. The MCMH algorithm can also be applied to Bayesian inference for random effect models and missing data problems that involve simulations from a distribution with intractable integrals. © 2013 Massachusetts Institute of Technology.
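
    The key idea, replacing the unknown normalizing-constant ratio with a Monte Carlo estimate inside an otherwise standard Metropolis-Hastings step, can be sketched on a toy model. Here the likelihood f(y; theta) = exp(-theta*y) on y in {0,...,9} has a normalizing constant Z(theta) we pretend is intractable, and Z(theta')/Z(theta) is estimated by averaging f(y_i; theta')/f(y_i; theta) over auxiliary draws y_i from the current model. The toy model, the direct categorical sampling of the auxiliary draws (real applications would use an MCMC run), and all tuning constants are assumptions for illustration.

```python
import math, random

random.seed(0)
YS = list(range(10))

def f(y, theta):                      # unnormalized likelihood
    return math.exp(-theta * y)

def sample_model(theta, m):           # auxiliary draws y_i ~ p(. | theta)
    weights = [f(y, theta) for y in YS]
    return random.choices(YS, weights=weights, k=m)

def mcmh(y_obs, n_steps=2000, m=200, step=0.3):
    """Toy MCMH chain for theta given one observation, flat prior on theta>0."""
    theta, chain = 1.0, []
    for _ in range(n_steps):
        prop = theta + random.uniform(-step, step)
        if prop > 0:
            aux = sample_model(theta, m)
            # Monte Carlo estimate of Z(prop)/Z(theta):
            z_ratio = sum(f(y, prop) / f(y, theta) for y in aux) / m
            accept = f(y_obs, prop) / f(y_obs, theta) / z_ratio
            if random.random() < accept:
                theta = prop
        chain.append(theta)
    return chain
```

    With an exact z_ratio this reduces to ordinary Metropolis-Hastings; the letter's contribution is showing that the chain still converges to the target when the ratio is estimated.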

  8. A Monte Carlo Metropolis-Hastings Algorithm for Sampling from Distributions with Intractable Normalizing Constants

    KAUST Repository

    Liang, Faming

    2013-08-01

    Simulating from distributions with intractable normalizing constants has been a long-standing problem in machine learning. In this letter, we propose a new algorithm, the Monte Carlo Metropolis-Hastings (MCMH) algorithm, for tackling this problem. The MCMH algorithm is a Monte Carlo version of the Metropolis-Hastings algorithm. It replaces the unknown normalizing constant ratio by a Monte Carlo estimate in simulations, while still converging, as shown in the letter, to the desired target distribution under mild conditions. The MCMH algorithm is illustrated with spatial autologistic models and exponential random graph models. Unlike other auxiliary variable Markov chain Monte Carlo (MCMC) algorithms, such as the Møller and exchange algorithms, the MCMH algorithm avoids the requirement for perfect sampling and thus can be applied to many statistical models for which perfect sampling is not available or very expensive. The MCMH algorithm can also be applied to Bayesian inference for random effect models and missing data problems that involve simulations from a distribution with intractable integrals. © 2013 Massachusetts Institute of Technology.

  9. Preferred prenatal counselling at the limits of viability: a survey among Dutch perinatal professionals.

    Science.gov (United States)

    Geurtzen, R; Van Heijst, Arno; Hermens, Rosella; Scheepers, Hubertina; Woiski, Mallory; Draaisma, Jos; Hogeveen, Marije

    2018-01-03

    Since 2010, intensive care can be offered in the Netherlands at 24+0 weeks gestation (with parental consent), but the Dutch guideline lacks recommendations on the organization, content and preferred decision-making of the counselling. Our aim is to explore preferred prenatal counselling at the limits of viability by Dutch perinatal professionals and compare this to current care. An online nationwide survey was conducted as part of the PreCo study (2013) amongst obstetricians and neonatologists in all Dutch level III perinatal care centers (n = 205). The survey regarded prenatal counselling at the limits of viability and focused on the domains of organization, content and decision-making in both current and preferred practice. One hundred twenty-two surveys were returned out of 205 eligible professionals (response rate 60%). Organization-wise, more than 80% of all professionals preferred (but currently missed) having protocols for several aspects of counselling, joint counselling by both neonatologist and obstetrician, and the use of supportive materials. Most professionals preferred using national or local data (70%) on outcome statistics for the counselling content, in contrast to the international statistics currently used (74%). Current decisions on the initiation of care were mostly made by parents and doctor together (99%). This shared decision model was preferred by 95% of the professionals. Dutch perinatal professionals would prefer more protocolized counselling, joint counselling, supportive material and local outcome statistics. Further studies on barriers to adequate counselling, as well as on Dutch outcome statistics and parents' opinions, are needed in order to develop a national framework. Clinicaltrials.gov, NCT02782650, retrospectively registered May 2016.

  10. An Effective Recommender Algorithm for Cold-Start Problem in Academic Social Networks

    Directory of Open Access Journals (Sweden)

    Vala Ali Rohani

    2014-01-01

    The abundance of information in recent years has become a serious challenge for web users. Recommender systems (RSs) have often been utilized to alleviate this issue. RSs prune large information spaces to recommend the most relevant items to users by considering their preferences. Nonetheless, in situations where users or items have few opinions, recommendations cannot be made properly. This notable shortcoming in practical RSs is called the cold-start problem. In the present study, we propose a novel approach to address this problem by incorporating social networking features. Coined the enhanced content-based algorithm using social networking (ECSN), the proposed algorithm considers the submitted ratings of faculty mates and friends besides the user's own preferences. The effectiveness of the ECSN algorithm was evaluated by implementing it in MyExpert, a newly designed academic social network (ASN) for academics in Malaysia. Real feedback from live interactions of MyExpert users with the recommended items was recorded for 12 consecutive weeks, in which four different algorithms, namely random, collaborative, content-based, and ECSN, were applied every three weeks. The empirical results show the significant performance of ECSN in mitigating the cold-start problem as well as improving the prediction accuracy of recommendations when compared with the other studied recommender algorithms.

  11. Improved collaborative filtering recommendation algorithm of similarity measure

    Science.gov (United States)

    Zhang, Baofu; Yuan, Baoping

    2017-05-01

    The collaborative filtering recommendation algorithm is one of the most widely used algorithms in personalized recommender systems. The key is to find the nearest-neighbor set of the active user by using a similarity measure. However, traditional similarity measures mainly focus on the similarity over the items two users have rated in common, but ignore the relationship between those common items and all the items each user has rated. Moreover, because the rating matrix is very sparse, the traditional collaborative filtering recommendation algorithm is not efficient. In order to obtain better accuracy, this paper presents an improved similarity measure that accounts for the common preference between users, differences in rating scale, and the scores of common items, and, based on this measure, proposes an improved collaborative filtering recommendation algorithm. Experimental results show that the algorithm can effectively improve the quality of recommendation and thus alleviate the impact of data sparseness.
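
    The general idea, relating the co-rated items to each user's full rating set, can be sketched as a Pearson correlation over common items damped by a Jaccard-style overlap factor, so that users sharing only a few of their many ratings are not judged highly similar. This is one common formulation of such an improvement, not necessarily the paper's exact formula; the toy rating dictionaries are invented.

```python
import math

def weighted_similarity(ratings_u, ratings_v):
    """Pearson correlation over co-rated items, scaled by the Jaccard
    overlap of the two users' rated-item sets."""
    common = set(ratings_u) & set(ratings_v)
    if len(common) < 2:
        return 0.0                            # too little overlap to compare
    mu = sum(ratings_u[i] for i in common) / len(common)
    mv = sum(ratings_v[i] for i in common) / len(common)
    num = sum((ratings_u[i] - mu) * (ratings_v[i] - mv) for i in common)
    den = (math.sqrt(sum((ratings_u[i] - mu) ** 2 for i in common)) *
           math.sqrt(sum((ratings_v[i] - mv) ** 2 for i in common)))
    pearson = num / den if den else 0.0
    jaccard = len(common) / len(set(ratings_u) | set(ratings_v))
    return pearson * jaccard                  # overlap-damped similarity

u = {"a": 5, "b": 3, "c": 4}
v = {"a": 4, "b": 2, "c": 5, "d": 1}
sim = weighted_similarity(u, v)
```

    Plain Pearson on the three common items would ignore that v has also rated item "d"; the Jaccard factor lowers the score accordingly, which is the kind of correction the abstract argues for.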

  12. Missing data in trauma registries: A systematic review.

    Science.gov (United States)

    Shivasabesan, Gowri; Mitra, Biswadev; O'Reilly, Gerard M

    2018-03-30

    Trauma registries play an integral role in trauma systems, but their valid use hinges on data quality. The aim of this study was to determine, among contemporary publications using trauma registry data, the level of reporting of data completeness and the methods used to deal with missing data. A systematic review was conducted of all trauma registry-based manuscripts published from 01 January 2015 to the date of the search (17 March 2017). Studies were identified by searching MEDLINE, EMBASE, and CINAHL using relevant subject headings and keywords. Included manuscripts were evaluated based on previously published recommendations regarding the reporting and discussion of missing data. Manuscripts were graded on their degree of characterization of such observations. In addition, the methods used to manage missing data were examined. There were 539 manuscripts that met inclusion criteria. Among these, 208 (38.6%) manuscripts did not mention data completeness and 88 (16.3%) mentioned missing data but did not quantify the extent. Only a handful (n = 26; 4.8%) quantified the 'missingness' of all variables. Most articles (n = 477; 88.5%) contained no details such as a comparison between patient characteristics in cohorts with and without missing data. Of the 331 articles which made at least some mention of data completeness, the method of managing missing data was unknown in 34 (10.3%). When method(s) to handle missing data were identified, 234 (78.8%) manuscripts used complete case analysis only, 18 (6.1%) used multiple imputation only and 34 (11.4%) used a combination of these. Most manuscripts using trauma registry data did not quantify the extent of missing data for any variables and contained minimal discussion regarding missingness. Of the studies which identified a method of managing missing data, most used complete case analysis, a method that may bias results. The lack of standardization in the reporting and management of missing data questions the validity of

  13. Interactive genetic algorithm for user-centered design of distributed conservation practices in a watershed: An examination of user preferences in objective space and user behavior

    Science.gov (United States)

    Piemonti, Adriana Debora; Babbar-Sebens, Meghna; Mukhopadhyay, Snehasis; Kleinberg, Austin

    2017-05-01

    Interactive Genetic Algorithms (IGA) are advanced human-in-the-loop optimization methods that enable humans to give feedback, based on their subjective and unquantified preferences and knowledge, during the algorithm's search process. While these methods are gaining popularity in multiple fields, there is a critical lack of data and analyses on (a) the nature of interactions of different humans with interfaces of decision support systems (DSS) that employ IGA in water resources planning problems and on (b) the effect of human feedback on the algorithm's ability to search for design alternatives desirable to end-users. In this paper, we present results and analyses of observational experiments in which different human participants (surrogates and stakeholders) interacted with an IGA-based, watershed DSS called WRESTORE to identify plans of conservation practices in a watershed. The main goal of this paper is to evaluate how the IGA adapts its search process in the objective space to a user's feedback, and identify whether any similarities exist in the objective space of plans found by different participants. Some participants focused on the entire watershed, while others focused only on specific local subbasins. Additionally, two different hydrology models were used to identify any potential differences in interactive search outcomes that could arise from differences in the numerical values of benefits displayed to participants. Results indicate that stakeholders, in comparison to their surrogates, were more likely to use multiple features of the DSS interface to collect information before giving feedback, and dissimilarities existed among participants in the objective space of design alternatives.

  14. Substituting missing data in compositional analysis

    Energy Technology Data Exchange (ETDEWEB)

    Real, Carlos, E-mail: carlos.real@usc.es [Area de Ecologia, Departamento de Biologia Celular y Ecologia, Escuela Politecnica Superior, Universidad de Santiago de Compostela, 27002 Lugo (Spain); Angel Fernandez, J.; Aboal, Jesus R.; Carballeira, Alejo [Area de Ecologia, Departamento de Biologia Celular y Ecologia, Facultad de Biologia, Universidad de Santiago de Compostela, 15782 Santiago de Compostela (Spain)

    2011-10-15

    Multivariate analysis of environmental data sets requires the absence of missing values or their substitution by small values. However, if the data are transformed logarithmically prior to the analysis, this solution cannot be applied because the logarithm of a small value might become an outlier. Several methods for substituting missing values can be found in the literature, although none of them guarantees that no distortion of the structure of the data set is produced. We propose a method for the assessment of these distortions which can be used for deciding whether or not to retain the samples or variables containing missing values, and for investigating the performance of different substitution techniques. The method analyzes the structure of the distances among samples using Mantel tests. We present an application of the method to PCDD/F data measured in samples of terrestrial moss as part of a biomonitoring study. - Highlights: > Missing values in multivariate data sets must be substituted prior to analysis. > The substituted values can modify the structure of the data set. > We developed a method to estimate the magnitude of the alterations. > The method is simple and based on the Mantel test. > The method allowed the identification of problematic variables in a sample data set.
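
    The Mantel-test machinery the method relies on can be sketched briefly: correlate the inter-sample distance matrices computed before and after substitution, and judge significance by permuting the sample labels. A high, significant correlation suggests the substitution has not distorted the data structure. The permutation count, seed, and toy distance matrix are illustrative choices, not from the paper.

```python
import numpy as np

def mantel(d1, d2, n_perm=999, seed=0):
    """Mantel test: correlation between two symmetric distance matrices,
    with a permutation p-value for the null of no association."""
    rng = np.random.default_rng(seed)
    iu = np.triu_indices_from(d1, k=1)            # upper triangle, no diagonal
    r_obs = np.corrcoef(d1[iu], d2[iu])[0, 1]
    n = d1.shape[0]
    count = 0
    for _ in range(n_perm):
        p = rng.permutation(n)                    # relabel the samples
        count += np.corrcoef(d1[iu], d2[p][:, p][iu])[0, 1] >= r_obs
    return r_obs, (count + 1) / (n_perm + 1)      # permutation p-value
```

    For identical matrices (a substitution that changes nothing) the observed correlation is 1 and the p-value is small; a low correlation after substitution would flag a problematic variable or sample.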

  15. Software for handling and replacement of missing data

    Directory of Open Access Journals (Sweden)

    Mayer, Benjamin

    2009-10-01

    In medical research, missing values often arise in the course of a data analysis. This constitutes a problem for several reasons: standard methods for analyzing data require complete data sets and therefore omit incomplete cases, leading to biased estimates and a loss of statistical power. Furthermore, missing values imply a certain loss of information, for which reason the validity of the results of a study with missing values must be rated lower than in a case where all data were available. For years there have been methods for the replacement of missing values (Rubin, Schafer) that tackle these problems and solve them in part. Hence, in this article we present the existing software for handling and replacing missing values, and give an outline of the available options. The methodological aspects of the replacement strategies are delineated only briefly in this article.

  16. A Demosaicking Algorithm with Adaptive Inter-Channel Correlation

    Directory of Open Access Journals (Sweden)

    Joan Duran

    2015-12-01

    Most common cameras use a CCD sensor device measuring a single color per pixel. Demosaicking is the interpolation process by which one can infer a full color image from such a matrix of values, thus interpolating the two missing components per pixel. Most demosaicking methods take advantage of inter-channel correlation by locally selecting the best interpolation direction. The obtained results look convincing except when local geometry cannot be inferred from neighboring pixels or channel correlation is low. In these cases, such algorithms create interpolation artifacts like zipper effect or color aliasing. This paper discusses the implementation details of the algorithm proposed in [J. Duran, A. Buades, "Self-Similarity and Spectral Correlation Adaptive Algorithm for Color Demosaicking", IEEE Transactions on Image Processing, 23(9), pp. 4031-4040, 2014]. The proposed method involves nonlocal image self-similarity in order to reduce interpolation artifacts when local geometry is ambiguous. It further introduces a clear and intuitive manner of balancing how much channel correlation should be taken advantage of.
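
    The local directional-interpolation baseline that the paper improves on can be sketched for the green channel of a Bayer mosaic: at a non-green pixel, average the pair of green neighbors (horizontal or vertical) with the smaller gradient, i.e. interpolate along the likely edge direction. Border handling, the red/blue channels, and the toy mosaic below are omitted or invented for illustration.

```python
import numpy as np

def green_at(mosaic, i, j):
    """Estimate the missing green value at interior pixel (i, j) by
    averaging the green-neighbor pair with the smaller gradient."""
    left, right = mosaic[i, j - 1], mosaic[i, j + 1]
    up, down = mosaic[i - 1, j], mosaic[i + 1, j]
    if abs(left - right) <= abs(up - down):
        return (left + right) / 2.0   # interpolate along the horizontal edge
    return (up + down) / 2.0          # interpolate along the vertical edge

# Toy mosaic with a strong horizontal edge: the vertical green pair (10, 200)
# disagrees, so the stable horizontal pair (100, 100) is chosen.
m = np.array([[0.0, 10.0, 0.0],
              [100.0, 55.0, 100.0],
              [0.0, 200.0, 0.0]])
print(green_at(m, 1, 1))  # 100.0
```

    When neither direction is reliable, which is exactly the ambiguous-geometry case the abstract describes, such local rules fail, motivating the paper's nonlocal self-similarity term.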

  17. Efficient Rectangular Maximal-Volume Algorithm for Rating Elicitation in Collaborative Filtering

    KAUST Repository

    Fonarev, Alexander; Mikhalev, Alexander; Serdyukov, Pavel; Gusev, Gleb; Oseledets, Ivan

    2017-01-01

    preference information for making good recommendations. One of the most successful approaches, called Representative Based Matrix Factorization, is based on Maxvol algorithm. Unfortunately, this approach has one important limitation - a seed set of a

  18. Impact of teamwork on missed care in four Australian hospitals.

    Science.gov (United States)

    Chapman, Rose; Rahman, Asheq; Courtney, Mary; Chalmers, Cheyne

    2017-01-01

    Investigate the effects of teamwork on missed nursing care across a healthcare network in Australia. Missed care is universally used as an indicator of quality nursing care; however, little is known about the mitigating effects of teamwork on these events. A descriptive exploratory study. Missed Care and Teamwork surveys were completed by 334 nurses. Using Stata software, nursing staff demographic information and components of missed care and teamwork were compared across the healthcare network. Statistical tests were performed to identify predicting factors for missed care. The most commonly reported components of missed care were: ambulation three times per day (43·3%), turning the patient every two hours (29%) and mouth care (27·7%). The commonest reasons mentioned for missed care were: inadequate labour resources (range 69·8-52·7%), followed by material resources (range 59·3-33·3%) and communication (range 39·3-27·2%). There were significant differences in missed care scores across units. Using the mean scores in a regression correlation matrix, the negative relationship between missed care and teamwork was supported (r = -0·34); teamwork alone accounted for about 9% of missed nursing care. Similar to previous international research findings, our results showed that nursing teamwork significantly impacted on missed nursing care. Teamwork may be a mitigating factor to address missed care, and future research is needed. These results may provide administrators, educators and clinicians with information to develop practices and policies to improve patient care internationally. © 2016 John Wiley & Sons Ltd.

  19. Maternal Near-Miss: A Multicenter Surveillance in Kathmandu Valley

    Directory of Open Access Journals (Sweden)

    Ashma Rana

    2013-06-01

    Full Text Available Introduction: Multicenter surveillance has been carried out on maternal near-miss in hospitals with sentinel units. Near-miss is recognized as a predictor of the level of care and of maternal death. Reducing the maternal mortality ratio is one of the challenges in achieving the Millennium Development Goals. The objective was to determine the frequency and the nature of near-miss (severe acute maternal morbidity) events and to analyze near-miss morbidities among pregnant women. Methods: Prospective surveillance was done for a year in 2012 in nine hospitals in Kathmandu valley. Cases eligible by definition were recorded as a census based on the WHO near-miss guideline. Similar questionnaires and dummy tables were used to present the results by non-inferential statistics. Results: Out of 157 cases identified, with a near-miss rate of 3.8 per 1000 live births, the main severe complications were PPH (40%) and preeclampsia-eclampsia (17%). Blood transfusion (65%), ICU admission (54%) and surgery (32%) were the common critical interventions. Oxytocin was the main uterotonic used both prophylactically (86%) and therapeutically (76%), and 19% of cases arrived at a health facility after delivery or abortion. MgSO4 was used in all cases of eclampsia. All of the laparotomies were performed within 3 hours of arrival. The near-miss to mortality ratio was 6:1 and the MMR was 62. Conclusions: The study results yield a similar pattern amongst developing countries and the same near-miss conditions as the causes of maternal death reported by national statistics. Process indicators qualify the recommended standard of care. The near-miss event can be used as a surrogate marker of maternal death and a window for system-level intervention. Keywords: abortion, eclampsia, hemorrhage, near-miss, surveillance

  20. Netbooks The Missing Manual

    CERN Document Server

    Biersdorfer, J

    2009-01-01

    Netbooks are the hot new thing in PCs -- small, inexpensive laptops designed for web browsing, email, and working with web-based programs. But chances are you don't know how to choose a netbook, let alone use one. Not to worry: with this Missing Manual, you'll learn which netbook is right for you and how to set it up and use it for everything from spreadsheets for work to hobbies like gaming and photo sharing. Netbooks: The Missing Manual provides easy-to-follow instructions and lots of advice to help you: Learn the basics for using a Windows- or Linux-based netbookConnect speakers, printe

  1. PCs The Missing Manual

    CERN Document Server

    Karp, David

    2005-01-01

    Your vacuum comes with one. Even your blender comes with one. But your PC--something that costs a whole lot more and is likely to be used daily and for tasks of far greater importance and complexity--doesn't come with a printed manual. Thankfully, that's not a problem any longer: PCs: The Missing Manual explains everything you need to know about PCs, both inside and out, and how to keep them running smoothly and working the way you want them to work. A complete PC manual for both beginners and power users, PCs: The Missing Manual has something for everyone. PC novices will appreciate the una

  2. Missing citations due to exact reference matching: Analysis of a random sample from WoS. Are publications from peripheral countries disadvantaged?

    Energy Technology Data Exchange (ETDEWEB)

    Donner, P.

    2016-07-01

    Citation counts of scientific research contributions are fundamental data in scientometrics. Accuracy and completeness of citation links are therefore crucial data quality issues (Moed, 2005, Ch. 13). However, despite the known flaws of reference matching algorithms, usually no attempts are made to incorporate uncertainty about citation counts into indicators. This study is a step towards that goal. Particular attention is paid to the question whether publications from countries not using basic Latin script are differently affected by missed citations. The proprietary reference matching procedure of Web of Science (WoS) is based on (near) exact agreement of cited reference data (normalized during processing) to the target paper's bibliographical data. Consequently, the procedure has near-optimal precision but incomplete recall - it is known to miss some slightly inaccurate reference links (Olensky, 2015). However, there has been no attempt so far to estimate the rate of missed citations by a principled method for a random sample. For this study a simple random sample of WoS source papers was drawn and it was attempted to find all reference strings of WoS indexed documents that refer to them, in particular inexact matches. The objective is to give a statistical estimate of the proportion of missed citations and to describe the relationship of the number of found citations to the number of missed citations, i.e. the conditional error distribution. The empirical error distribution is statistically analyzed and modelled. (Author)
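The recall gap described above can be illustrated with a toy matcher. The sketch below is illustrative only (the WoS procedure is proprietary); the normalization steps and the 0.9 similarity threshold are assumptions, using Python's difflib:

```python
from difflib import SequenceMatcher

def normalize(ref):
    """Cheap normalization: lowercase, strip periods, collapse whitespace."""
    return " ".join(ref.lower().replace(".", "").split())

def exact_match(cited, target):
    """(Near) exact matching on the normalized reference string:
    high precision, but misses slightly inaccurate references."""
    return normalize(cited) == normalize(target)

def fuzzy_match(cited, target, threshold=0.9):
    """Similarity-based matching that recovers small typos and
    abbreviation variants that exact matching misses."""
    ratio = SequenceMatcher(None, normalize(cited), normalize(target)).ratio()
    return ratio >= threshold
```

A reference with a misspelled journal abbreviation fails the exact comparison but still scores well above 0.9 on the similarity ratio, which is exactly the kind of citation link the study sets out to count.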

  3. What's missing in missing data? Omissions in survey responses among parents of children with advanced cancer.

    Science.gov (United States)

    Rosenberg, Abby R; Dussel, Veronica; Orellana, Liliana; Kang, Tammy; Geyer, J Russel; Feudtner, Chris; Wolfe, Joanne

    2014-08-01

    Missing data is a common phenomenon in survey-based research; patterns of missing data may elucidate why participants decline to answer certain questions. To describe patterns of missing data in the Pediatric Quality of Life and Evaluation of Symptoms Technology (PediQUEST) study, and highlight challenges in asking sensitive research questions. Cross-sectional, survey-based study embedded within a randomized controlled trial. Three large children's hospitals: Dana-Farber/Boston Children's Cancer and Blood Disorders Center (DF/BCCDC); Children's Hospital of Philadelphia (CHOP); and Seattle Children's Hospital (SCH). At the time of their child's enrollment, parents completed the Survey about Caring for Children with Cancer (SCCC), including demographics, perceptions of prognosis, treatment goals, quality of life, and psychological distress. Eighty-six of 104 parents completed surveys (83% response). The proportion of missing data varied by question type. While 14 parents (16%) left demographic fields blank, over half (n=48; 56%) declined to answer at least one question about their child's prognosis, especially life expectancy. The presence of missing data was unrelated to the child's diagnosis, time from progression, time to death, or parent distress (p>0.3 for each). Written explanations in survey margins suggested that addressing a child's life expectancy is particularly challenging for parents. Parents of children with cancer commonly refrain from answering questions about their child's prognosis; however, they may be more likely to address general cure likelihood than explicit life expectancy. Understanding the acceptability of sensitive questions in survey-based research will foster higher quality palliative care research.

  4. A novel approach based on preference-based index for interval bilevel linear programming problem.

    Science.gov (United States)

    Ren, Aihong; Wang, Yuping; Xue, Xingsi

    2017-01-01

    This paper proposes a new methodology for solving the interval bilevel linear programming problem in which all coefficients of both objective functions and constraints are considered as interval numbers. In order to keep as much uncertainty of the original constraint region as possible, the original problem is first converted into an interval bilevel programming problem with interval coefficients in both objective functions only, through normal variation of interval number and chance-constrained programming. With the consideration of different preferences of different decision makers, the concept of the preference level that the interval objective function is preferred to a target interval is defined based on the preference-based index. Then a preference-based deterministic bilevel programming problem is constructed in terms of the preference level and the order relation ⪯mw. Furthermore, the concept of a preference δ-optimal solution is given. Subsequently, the constructed deterministic nonlinear bilevel problem is solved with the help of an estimation of distribution algorithm. Finally, several numerical examples are provided to demonstrate the effectiveness of the proposed approach.

  5. A novel approach based on preference-based index for interval bilevel linear programming problem

    Directory of Open Access Journals (Sweden)

    Aihong Ren

    2017-05-01

    Full Text Available This paper proposes a new methodology for solving the interval bilevel linear programming problem in which all coefficients of both objective functions and constraints are considered as interval numbers. In order to keep as much uncertainty of the original constraint region as possible, the original problem is first converted into an interval bilevel programming problem with interval coefficients in both objective functions only, through normal variation of interval number and chance-constrained programming. With the consideration of different preferences of different decision makers, the concept of the preference level that the interval objective function is preferred to a target interval is defined based on the preference-based index. Then a preference-based deterministic bilevel programming problem is constructed in terms of the preference level and the order relation ⪯mw. Furthermore, the concept of a preference δ-optimal solution is given. Subsequently, the constructed deterministic nonlinear bilevel problem is solved with the help of an estimation of distribution algorithm. Finally, several numerical examples are provided to demonstrate the effectiveness of the proposed approach.

  6. Substituting missing data in compositional analysis

    International Nuclear Information System (INIS)

    Real, Carlos; Angel Fernandez, J.; Aboal, Jesus R.; Carballeira, Alejo

    2011-01-01

    Multivariate analysis of environmental data sets requires the absence of missing values or their substitution by small values. However, if the data is transformed logarithmically prior to the analysis, this solution cannot be applied because the logarithm of a small value might become an outlier. Several methods for substituting the missing values can be found in the literature, although none of them guarantees that no distortion of the structure of the data set is produced. We propose a method for the assessment of these distortions which can be used to decide whether or not to retain the samples or variables containing missing values, and to investigate the performance of different substitution techniques. The method analyzes the structure of the distances among samples using Mantel tests. We present an application of the method to PCDD/F data measured in samples of terrestrial moss as part of a biomonitoring study. - Highlights: → Missing values in multivariate data sets must be substituted prior to analysis. → The substituted values can modify the structure of the data set. → We developed a method to estimate the magnitude of the alterations. → The method is simple and based on the Mantel test. → The method allowed the identification of problematic variables in a sample data set. - A method is presented for the assessment of the possible distortions in multivariate analysis caused by the substitution of missing values.
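The core of the proposed assessment, comparing the structure of inter-sample distances before and after substitution, rests on the Mantel test. The sketch below is a generic permutation Mantel test in Python/NumPy, not the authors' implementation; the one-sided test and the permutation count are assumptions:

```python
import numpy as np

def mantel(D1, D2, n_perm=999, seed=0):
    """Permutation Mantel test: Pearson correlation between the upper
    triangles of two distance matrices; significance is assessed by
    jointly permuting rows and columns of the first matrix."""
    rng = np.random.default_rng(seed)
    iu = np.triu_indices_from(D1, k=1)         # upper-triangle entries only
    r_obs = np.corrcoef(D1[iu], D2[iu])[0, 1]
    count = 0
    for _ in range(n_perm):
        p = rng.permutation(D1.shape[0])
        r = np.corrcoef(D1[np.ix_(p, p)][iu], D2[iu])[0, 1]
        if r >= r_obs:                          # one-sided: as large or larger
            count += 1
    return r_obs, (count + 1) / (n_perm + 1)
```

Applied to the distance matrices of the original data and of a data set with substituted missing values, a high correlation and small p-value indicate that the substitution left the distance structure essentially intact.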

  7. Dental patient preferences and choice in clinical decision-making.

    Science.gov (United States)

    Fukai, Kakuhiro; Yoshino, Koichi; Ohyama, Atsushi; Takaesu, Yoshinori

    2012-01-01

    In economics, the concept of utility refers to the strength of customer preference. In health care assessment, the visual analogue scale (VAS), the standard gamble, and the time trade-off are used to measure health state utilities. These utility measurements play a key role in promoting shared decision-making in dental care. Individual preference, however, is complex and dynamic. The purpose of this study was to investigate the relationship between patient preference and educational intervention in the field of dental health. The data were collected by distributing questionnaires to employees of two companies in Japan. Participants were aged 18-65 years and consisted of 111 males and 93 females (204 in total). One company (Group A) had a dental program of annual check-ups and health education in the workplace, while the other company (Group B) had no such program. Statistical analyses were performed with the t-test and Chi-square test. The questionnaire items were designed to determine: (1) oral health-related quality of life, (2) dental health state utilities (using VAS), and (3) time trade-off for regular dental check-ups. The percentage of respondents in both groups who were satisfied with chewing function, appearance of teeth, and social function ranged from 23.1 to 42.4%. There were no significant differences between groups A and B in the VAS of decayed, filled, and missing teeth. The VAS of gum bleeding was 42.8 in Group A and 51.3 in Group B, suggesting that workplace dental health education may influence patient preferences relevant to shared clinical decision-making.

  8. Spectral Regularization Algorithms for Learning Large Incomplete Matrices.

    Science.gov (United States)

    Mazumder, Rahul; Hastie, Trevor; Tibshirani, Robert

    2010-03-01

    We use convex relaxation techniques to provide a sequence of regularized low-rank solutions for large-scale matrix completion problems. Using the nuclear norm as a regularizer, we provide a simple and very efficient convex algorithm for minimizing the reconstruction error subject to a bound on the nuclear norm. Our algorithm Soft-Impute iteratively replaces the missing elements with those obtained from a soft-thresholded SVD. With warm starts this allows us to efficiently compute an entire regularization path of solutions on a grid of values of the regularization parameter. The computationally intensive part of our algorithm is in computing a low-rank SVD of a dense matrix. Exploiting the problem structure, we show that the task can be performed with a complexity linear in the matrix dimensions. Our semidefinite-programming algorithm is readily scalable to large matrices: for example it can obtain a rank-80 approximation of a 10^6 × 10^6 incomplete matrix with 10^5 observed entries in 2.5 hours, and can fit a rank-40 approximation to the full Netflix training set in 6.6 hours. Our methods show very good performance both in training and test error when compared to other competitive state-of-the-art techniques.
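The iterative scheme described in the abstract can be sketched in a few lines. This is a minimal illustration of the soft-thresholded-SVD idea, not the authors' Soft-Impute implementation (no warm starts, dense SVD only); the fixed regularization parameter and the convergence tolerance are assumptions:

```python
import numpy as np

def soft_impute(X, mask, lam=1.0, n_iters=100, tol=1e-7):
    """Repeatedly replace the missing entries of X with values from a
    soft-thresholded SVD, keeping the observed entries fixed.
    `mask` is True where X is observed."""
    Z = np.where(mask, X, 0.0)                 # start with missing entries at zero
    for _ in range(n_iters):
        U, s, Vt = np.linalg.svd(Z, full_matrices=False)
        s = np.maximum(s - lam, 0.0)           # soft-threshold the singular values
        Z_new = np.where(mask, X, (U * s) @ Vt)  # refill only the missing cells
        if np.linalg.norm(Z_new - Z) < tol * max(1.0, np.linalg.norm(Z)):
            return Z_new
        Z = Z_new
    return Z
```

On a matrix with genuine low-rank structure, the imputed entries move from the zero initialization toward the values implied by the dominant singular subspace, which is the mechanism the abstract exploits at scale.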

  9. A DIFFERENTIAL EVOLUTION ALGORITHM DEVELOPED FOR A NURSE SCHEDULING PROBLEM

    Directory of Open Access Journals (Sweden)

    Shahnazari-Shahrezaei, P.

    2012-11-01

    Full Text Available Nurse scheduling is a type of manpower allocation problem that tries to satisfy hospital managers' objectives and nurses' preferences as much as possible by generating fair shift schedules. This paper presents a nurse scheduling problem based on a real case study, and proposes two meta-heuristics, a differential evolution algorithm (DE) and a greedy randomised adaptive search procedure (GRASP), to solve it. To investigate the efficiency of the proposed algorithms, two problems are solved. Furthermore, some comparison metrics are applied to examine the reliability of the proposed algorithms. The computational results in this paper show that the proposed DE outperforms the GRASP.
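The DE meta-heuristic named above has a compact core loop of mutation, crossover, and greedy selection. The sketch below is a generic DE/rand/1/bin minimizer over a continuous domain, not the paper's nurse-scheduling encoding; the population size and the control parameters F and CR are conventional assumptions:

```python
import numpy as np

def differential_evolution(f, bounds, pop_size=20, F=0.8, CR=0.9, n_gen=100, seed=0):
    """Plain DE/rand/1/bin: mutate three random members, apply binomial
    crossover, and greedily replace the parent if the trial is better."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds[:, 0], bounds[:, 1]
    dim = len(lo)
    pop = rng.uniform(lo, hi, size=(pop_size, dim))
    fit = np.array([f(x) for x in pop])
    for _ in range(n_gen):
        for i in range(pop_size):
            idx = [j for j in range(pop_size) if j != i]
            a, b, c = pop[rng.choice(idx, 3, replace=False)]
            mutant = np.clip(a + F * (b - c), lo, hi)   # differential mutation
            cross = rng.random(dim) < CR
            cross[rng.integers(dim)] = True             # at least one gene from mutant
            trial = np.where(cross, mutant, pop[i])
            f_trial = f(trial)
            if f_trial <= fit[i]:                       # greedy selection
                pop[i], fit[i] = trial, f_trial
    best = int(np.argmin(fit))
    return pop[best], fit[best]
```

A scheduling application would replace the continuous vectors with an encoding of shift assignments and f with a penalty function over coverage constraints and nurse preferences.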

  10. Cell Formation in Industrial Engineering : Theory, Algorithms and Experiments

    NARCIS (Netherlands)

    Goldengorin, B.; Krushynskyi, D.; Pardalos, P.M.

    2013-01-01

    This book focuses on a development of optimal, flexible, and efficient models and algorithms for cell formation in group technology. Its main aim is to provide a reliable tool that can be used by managers and engineers to design manufacturing cells based on their own preferences and constraints.

  11. A systematic review of missed opportunities for improving tuberculosis and HIV/AIDS control in Sub-saharan Africa: what is still missed by health experts?

    Science.gov (United States)

    Keugoung, Basile; Fouelifack, Florent Ymele; Fotsing, Richard; Macq, Jean; Meli, Jean; Criel, Bart

    2014-01-01

    In sub-Saharan Africa, HIV/AIDS and tuberculosis are major public health problems. In 2010, 64% of the 34 million people infected with HIV were reported to be living in sub-Saharan Africa. Only 41% of eligible HIV-positive people had access to antiretroviral therapy (ART). Regarding tuberculosis, in 2010, the region had 12% of the world's population but reported 26% of the 8.8 million incident cases and 254,000 tuberculosis-related deaths. This paper aims to review missed opportunities for improving HIV/AIDS and tuberculosis prevention and care. We conducted a systematic review in PubMed using the terms 'missed'(Title) AND 'opportunities'(Title). We included systematic review and original research articles done in sub-Saharan Africa on missed opportunities in HIV/AIDS and/or tuberculosis care. Missed opportunities for improving HIV/AIDS and/or tuberculosis care can be classified into five categories: i) patient and community; ii) health professional; iii) health facility; iv) local health system; and v) vertical programme (HIV/AIDS and/or tuberculosis control programmes). None of the reviewed studies identified any missed opportunities related to health system strengthening. Opportunities that are missed hamper tuberculosis and/or HIV/AIDS care in sub-Saharan Africa, where health systems remain weak. What is still missing in the analysis of health experts is the acknowledgement that opportunities that are missed to strengthen health systems also undermine tuberculosis and HIV/AIDS prevention and care. Studying why these opportunities are missed will help to understand the rationales behind the missed opportunities, and to customize adequate strategies to seize them for effective disease control.

  12. Social Graph Community Differentiated by Node Features with Partly Missing Information

    Directory of Open Access Journals (Sweden)

    V. O. Chesnokov

    2015-01-01

    Full Text Available This paper proposes a new algorithm for community differentiation in social graphs, which uses information both on the graph structure and on the vertices. We consider a user's ego-network, i.e. his friends, excluding the user himself, where each vertex has a set of features such as details on a workplace, institution, etc. The task is to determine missing or unspecified features of the vertices, based on their neighbors' features, and to use these features to differentiate the communities in the social graph. Two vertices are believed to belong to the same community if they have a common feature. A hypothesis has been put forward that if most neighbors of a vertex have a common feature, there is a good probability that the vertex has this feature as well. The proposed algorithm is iterative and updates the features of vertices, based on their neighbors, according to the hypothesis. The share of neighbors that forms a majority is specified by an algorithm parameter. The complexity of a single iteration depends linearly on the number of edges in the graph. To assess the quality of clustering, three normalized metrics were used, namely: expected density, silhouette index, and Hubert's gamma statistic. The paper describes a method for test sampling of 2,000 graphs of users of the social network "VKontakte". Crawling was performed through API requests to "VKontakte" and by parsing HTML pages of user profiles and search results. Information on users' group membership, secondary and higher education, and workplace was used as features. The PostgreSQL DBMS was used to store data, and the gexf format was used for data processing. For the test sample, metrics were estimated for several values of the algorithm parameter: the value of the silhouette index was low (0.14-0.20) but within the normal range; the value of expected density was high, i.e. 1.17-1.52; the value of Hubert's gamma statistic was 0.94-0.95, which is close to the maximum.
    The number of vertices with no features was calculated before
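The majority-vote hypothesis described above can be sketched as a simple iterative procedure. The function below is an illustrative reconstruction, not the authors' code; the data layout (adjacency dict, per-vertex feature sets) and the threshold parameter name are assumptions:

```python
from collections import Counter

def propagate_features(adj, features, threshold=0.5, max_iters=10):
    """Add a feature to a vertex when more than `threshold` of its
    neighbours already carry that feature; repeat until stable.
    `adj` maps vertex -> list of neighbours; `features` maps
    vertex -> set of feature labels."""
    feats = {v: set(fs) for v, fs in features.items()}
    for _ in range(max_iters):
        changed = False
        for v, nbrs in adj.items():
            if not nbrs:
                continue
            counts = Counter(f for u in nbrs for f in feats[u])
            for f, c in counts.items():
                if f not in feats[v] and c / len(nbrs) > threshold:
                    feats[v].add(f)        # a majority of neighbours share f
                    changed = True
        if not changed:
            break
    return feats
```

Vertices sharing a feature after propagation would then be grouped into one community, per the paper's definition.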

  13. All hypertopologies are hit-and-miss

    Directory of Open Access Journals (Sweden)

    Somshekhar Naimpally

    2002-04-01

    Full Text Available We solve a long-standing problem by showing that all known hypertopologies are hit-and-miss. Our solution is not merely of theoretical importance. This representation is useful in the study of comparison of the Hausdorff-Bourbaki or H-B uniform topologies and the Wijsman topologies among themselves and with others. Up to now some of these comparisons needed intricate manipulations. The H-B uniform topologies were the subject of intense activity in the 1960s in connection with the Isbell-Smith problem. We show that they are proximally locally finite topologies, from which the solution to the above problem follows easily. It is known that the Wijsman topology on the hyperspace is the proximal ball (hit-and-miss) topology in "nice" metric spaces including the normed linear spaces. With the introduction of a new far-miss topology we show that the Wijsman topology is hit-and-miss for all metric spaces. From this follows a natural generalization of the Wijsman topology to the hyperspace of any T1 space. Several existing results in the literature are easy consequences of our work.

  14. Benchmarking monthly homogenization algorithms

    Science.gov (United States)

    Venema, V. K. C.; Mestre, O.; Aguilar, E.; Auer, I.; Guijarro, J. A.; Domonkos, P.; Vertacnik, G.; Szentimrey, T.; Stepanek, P.; Zahradnicek, P.; Viarre, J.; Müller-Westermeier, G.; Lakatos, M.; Williams, C. N.; Menne, M.; Lindau, R.; Rasol, D.; Rustemeier, E.; Kolokythas, K.; Marinova, T.; Andresen, L.; Acquaotta, F.; Fratianni, S.; Cheval, S.; Klancar, M.; Brunetti, M.; Gruber, C.; Prohom Duran, M.; Likso, T.; Esteban, P.; Brandsma, T.

    2011-08-01

    The COST (European Cooperation in Science and Technology) Action ES0601: Advances in homogenization methods of climate series: an integrated approach (HOME) has executed a blind intercomparison and validation study for monthly homogenization algorithms. Time series of monthly temperature and precipitation were evaluated because of their importance for climate studies and because they represent two important types of statistics (additive and multiplicative). The algorithms were validated against a realistic benchmark dataset. The benchmark contains real inhomogeneous data as well as simulated data with inserted inhomogeneities. Random break-type inhomogeneities were added to the simulated datasets modeled as a Poisson process with normally distributed breakpoint sizes. To approximate real world conditions, breaks were introduced that occur simultaneously in multiple station series within a simulated network of station data. The simulated time series also contained outliers, missing data periods and local station trends. Further, a stochastic nonlinear global (network-wide) trend was added. Participants provided 25 separate homogenized contributions as part of the blind study as well as 22 additional solutions submitted after the details of the imposed inhomogeneities were revealed. These homogenized datasets were assessed by a number of performance metrics including (i) the centered root mean square error relative to the true homogeneous value at various averaging scales, (ii) the error in linear trend estimates and (iii) traditional contingency skill scores. The metrics were computed using both the individual station series and the network average regional series. The performance of the contributions depends significantly on the error metric considered. Contingency scores by themselves are not very informative. Although relative homogenization algorithms typically improve the homogeneity of temperature data, only the best ones improve precipitation data.
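The first performance metric, the centered root mean square error, removes any constant offset between a homogenized series and the true homogeneous series before measuring disagreement. A minimal sketch, assuming plain NumPy arrays for the two series:

```python
import numpy as np

def centered_rmse(homogenized, truth):
    """Centered RMSE: subtract the mean difference (a constant bias)
    before computing the root mean square error, so only the shape of
    the disagreement is penalized."""
    d = np.asarray(homogenized) - np.asarray(truth)
    return float(np.sqrt(np.mean((d - d.mean()) ** 2)))
```

A homogenized series that differs from the truth only by a constant offset therefore scores zero, while spurious breaks or trend errors remain visible.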

  15. Missing data imputation using statistical and machine learning methods in a real breast cancer problem.

    Science.gov (United States)

    Jerez, José M; Molina, Ignacio; García-Laencina, Pedro J; Alba, Emilio; Ribelles, Nuria; Martín, Miguel; Franco, Leonardo

    2010-10-01

    Missing data imputation is an important task in cases where it is crucial to use all available data and not discard records with missing values. This work evaluates the performance of several statistical and machine learning imputation methods that were used to predict recurrence in patients in an extensive real breast cancer data set. Imputation methods based on statistical techniques, e.g., mean, hot-deck and multiple imputation, and machine learning techniques, e.g., multi-layer perceptron (MLP), self-organising maps (SOM) and k-nearest neighbour (KNN), were applied to data collected through the "El Álamo-I" project, and the results were then compared to those obtained from the listwise deletion (LD) imputation method. The database includes demographic, therapeutic and recurrence-survival information from 3679 women with operable invasive breast cancer diagnosed in 32 different hospitals belonging to the Spanish Breast Cancer Research Group (GEICAM). The accuracies of predictions on early cancer relapse were measured using artificial neural networks (ANNs), in which different ANNs were estimated using the data sets with imputed missing values. The imputation methods based on machine learning algorithms outperformed imputation statistical methods in the prediction of patient outcome. Friedman's test revealed a significant difference (p=0.0091) in the observed area under the ROC curve (AUC) values, and the pairwise comparison test showed that the AUCs for MLP, KNN and SOM were significantly higher (p=0.0053, p=0.0048 and p=0.0071, respectively) than the AUC from the LD-based prognosis model. The methods based on machine learning techniques were the most suited for the imputation of missing values and led to a significant enhancement of prognosis accuracy compared to imputation methods based on statistical procedures. Copyright © 2010 Elsevier B.V. All rights reserved.
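Of the compared methods, KNN imputation is the easiest to sketch. The function below is a generic nearest-neighbour imputer, not the study's implementation; it assumes at least k fully observed rows and uses Euclidean distance over the columns observed in the incomplete row:

```python
import numpy as np

def knn_impute(X, k=3):
    """Fill NaNs in each incomplete row with the column means of the
    k complete rows closest to it on its observed columns."""
    X = X.astype(float).copy()
    complete = X[~np.isnan(X).any(axis=1)]          # fully observed rows
    for i in np.where(np.isnan(X).any(axis=1))[0]:
        row = X[i]
        obs = ~np.isnan(row)
        d = np.sqrt(((complete[:, obs] - row[obs]) ** 2).sum(axis=1))
        nearest = complete[np.argsort(d)[:k]]       # k nearest complete rows
        row[~obs] = nearest[:, ~obs].mean(axis=0)   # impute with their mean
    return X
```

With two well-separated clusters, an incomplete row near one cluster receives imputed values from that cluster rather than from the global mean, which is why KNN tends to beat simple mean imputation.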

  16. Developing an eye-tracking algorithm as a potential tool for early diagnosis of autism spectrum disorder in children.

    Directory of Open Access Journals (Sweden)

    Natalia I Vargas-Cuentas

    Full Text Available Autism spectrum disorder (ASD currently affects nearly 1 in 160 children worldwide. In over two-thirds of evaluations, no validated diagnostics are used and gold standard diagnostic tools are used in less than 5% of evaluations. Currently, the diagnosis of ASD requires lengthy and expensive tests, in addition to clinical confirmation. Therefore, fast, cheap, portable, and easy-to-administer screening instruments for ASD are required. Several studies have shown that children with ASD have a lower preference for social scenes compared with children without ASD. Based on this, eye-tracking and measurement of gaze preference for social scenes has been used as a screening tool for ASD. Currently available eye-tracking software requires intensive calibration, training, or holding of the head to prevent interference with gaze recognition, limiting its use in children with ASD. In this study, we designed a simple eye-tracking algorithm that does not require calibration or head holding, as a platform for future validation of a cost-effective ASD potential screening instrument. This system operates on a portable and inexpensive tablet to measure gaze preference of children for social compared to abstract scenes. A child watches a one-minute stimulus video composed of a social scene projected on the left side and an abstract scene projected on the right side of the tablet's screen. We designed five stimulus videos by changing the social/abstract scenes. Every child observed all the five videos in random order. We developed an eye-tracking algorithm that calculates the child's gaze preference for the social and abstract scenes, estimated as the percentage of the accumulated time that the child observes the left or right side of the screen, respectively. Twenty-three children without a prior history of ASD and 8 children with a clinical diagnosis of ASD were evaluated. The recorded video of the child's eye movement was analyzed both manually by an observer
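The gaze-preference estimate described above, the percentage of accumulated viewing time on each half of the screen, reduces to a tally over per-frame gaze labels. A minimal sketch; the 'L'/'R'/None frame encoding (None for frames where gaze was not detected) is an assumption:

```python
def gaze_preference(samples):
    """samples: one label per video frame, 'L' (social scene, left half),
    'R' (abstract scene, right half), or None (gaze not detected).
    Returns the percentage of detected frames spent on each side."""
    detected = [s for s in samples if s is not None]
    if not detected:
        return {'social_left': 0.0, 'abstract_right': 0.0}
    n = len(detected)
    return {
        'social_left': 100.0 * detected.count('L') / n,
        'abstract_right': 100.0 * detected.count('R') / n,
    }
```

A markedly low social-scene percentage across the five stimulus videos is the signal such a screening instrument would flag for clinical follow-up.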

  17. An Empirical Research on Marketing Strategies of Different Risk Preference Merchant

    Directory of Open Access Journals (Sweden)

    Quan Chen

    2018-01-01

    Full Text Available Holiday merchandise has unique demand characteristics, an unofficial start date, and a limited life cycle. In an intensely competitive market, individual merchants are able to get more sales opportunities if they display their products earlier. In this study, a time-variant variance and time-variant expected market demand model are introduced to investigate the order strategies that are used by risk-averse holiday merchants. Our results show that risk preference, market uncertainty, and market power have a significant effect on the merchant's market strategies. Risk-averse merchants prefer to enhance forecast accuracy rather than use an early-display advantage. They can even give up their early-display advantage if they are faced with increased market uncertainty and small market power. Compared with the fixed purchase cost, the time-sensitive purchase cost can stimulate the merchant to purchase in advance, but this can decrease the merchant's profit. Consequently, risk-averse merchants always display their merchandise later, decrease the order quantity, and, finally, miss the market opportunity.

  18. Maternal near-miss: a multicenter surveillance in Kathmandu Valley.

    Science.gov (United States)

    Rana, Ashma; Baral, Gehanath; Dangal, Ganesh

    2013-01-01

    Multicenter surveillance has been carried out on maternal near-miss in the hospitals with sentinel units. Near-miss is recognized as the predictor of level of care and maternal death. Reducing the Maternal Mortality Ratio is one of the challenges to achieve the Millennium Development Goals. The objective was to determine the frequency and the nature of near-miss events and to analyze the near-miss morbidities among pregnant women. A prospective surveillance was done for a year in 2012 at nine hospitals in Kathmandu valley. Cases eligible by definition were recorded as a census based on the WHO near-miss guideline. Similar questionnaires and dummy tables were used to present the results by non-inferential statistics. Out of 157 cases identified with a near-miss rate of 3.8 per 1000 live births, severe complications were postpartum hemorrhage 62 (40%) and preeclampsia-eclampsia 25 (17%). Blood transfusion 102 (65%), ICU admission 85 (54%) and surgery 53 (32%) were common critical interventions. Oxytocin was the main uterotonic used both prophylactically and therapeutically at health facilities. A total of 30 (19%) cases arrived at a health facility after delivery or abortion. MgSO4 was used in all cases of eclampsia. All laparotomies were performed within three hours of arrival. The near-miss to maternal death ratio was 6:1 and the MMR was 62. The study results yielded a similar pattern amongst developing countries and the same near-miss conditions as the causes of maternal death reported by national statistics. Process indicators qualified the recommended standard of care. The near-miss event could be used as a surrogate marker of maternal death and a window for system-level intervention.

  19. Algorithms for optimization of branching gravity-driven water networks

    Directory of Open Access Journals (Sweden)

    I. Dardani

    2018-05-01

    Full Text Available The design of a water network involves the selection of pipe diameters that satisfy pressure and flow requirements while considering cost. A variety of design approaches can be used to optimize for hydraulic performance or reduce costs. To help designers select an appropriate approach in the context of gravity-driven water networks (GDWNs), this work assesses three cost-minimization algorithms on six moderate-scale GDWN test cases. Two algorithms, a backtracking algorithm and a genetic algorithm, use a set of discrete pipe diameters, while a new calculus-based algorithm produces a continuous-diameter solution which is mapped onto a discrete-diameter set. The backtracking algorithm finds the global optimum for all but the largest of cases tested, for which its long runtime makes it an infeasible option. The calculus-based algorithm's discrete-diameter solution produced slightly higher-cost results but was more scalable to larger network cases. Furthermore, the new calculus-based algorithm's continuous-diameter and mapped solutions provided lower and upper bounds, respectively, on the discrete-diameter global optimum cost, where the mapped solutions were typically within one diameter size of the global optimum. The genetic algorithm produced solutions even closer to the global optimum with consistently short run times, although slightly higher solution costs were seen for the larger network cases tested. The results of this study highlight the advantages and weaknesses of each GDWN design method including closeness to the global optimum, the ability to prune the solution space of infeasible and suboptimal candidates without missing the global optimum, and algorithm run time. We also extend an existing closed-form model of Jones (2011) to include minor losses and a more comprehensive two-part cost model, which realistically applies to pipe sizes that span a broad range typical of GDWNs of interest in this work, and for smooth and commercial steel roughness values.

  20. Algorithms for optimization of branching gravity-driven water networks

    Science.gov (United States)

    Dardani, Ian; Jones, Gerard F.

    2018-05-01

    The design of a water network involves the selection of pipe diameters that satisfy pressure and flow requirements while considering cost. A variety of design approaches can be used to optimize for hydraulic performance or reduce costs. To help designers select an appropriate approach in the context of gravity-driven water networks (GDWNs), this work assesses three cost-minimization algorithms on six moderate-scale GDWN test cases. Two algorithms, a backtracking algorithm and a genetic algorithm, use a set of discrete pipe diameters, while a new calculus-based algorithm produces a continuous-diameter solution which is mapped onto a discrete-diameter set. The backtracking algorithm finds the global optimum for all but the largest of cases tested, for which its long runtime makes it an infeasible option. The calculus-based algorithm's discrete-diameter solution produced slightly higher-cost results but was more scalable to larger network cases. Furthermore, the new calculus-based algorithm's continuous-diameter and mapped solutions provided lower and upper bounds, respectively, on the discrete-diameter global optimum cost, where the mapped solutions were typically within one diameter size of the global optimum. The genetic algorithm produced solutions even closer to the global optimum with consistently short run times, although slightly higher solution costs were seen for the larger network cases tested. The results of this study highlight the advantages and weaknesses of each GDWN design method including closeness to the global optimum, the ability to prune the solution space of infeasible and suboptimal candidates without missing the global optimum, and algorithm run time. We also extend an existing closed-form model of Jones (2011) to include minor losses and a more comprehensive two-part cost model, which realistically applies to pipe sizes that span a broad range typical of GDWNs of interest in this work, and for smooth and commercial steel roughness values.
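
    The pruned enumeration the backtracking method relies on can be sketched in a few lines. The network, candidate diameters, unit costs, and the Hazen-Williams head-loss form below are illustrative assumptions, not the authors' test cases: the search tries each discrete diameter per pipe and abandons any branch that is already costlier than the best feasible design found, or that exceeds the available head.

```python
# Hypothetical candidate diameters (m) and unit costs ($/m); illustrative only.
DIAMETERS = [0.05, 0.075, 0.10, 0.15]
UNIT_COST = {0.05: 4.0, 0.075: 7.5, 0.10: 12.0, 0.15: 24.0}

def head_loss(q, d, length, c=140.0):
    """Hazen-Williams head loss (m) for flow q (m^3/s) in a pipe of diameter d (m)."""
    return 10.67 * length * (q / c) ** 1.852 / d ** 4.87

def size_network(lengths, q, head_budget):
    """Backtracking over discrete diameters: prune any branch whose partial cost
    already exceeds the incumbent or whose head loss exceeds the budget."""
    best = {"cost": float("inf"), "choice": None}

    def recurse(i, cost, loss, choice):
        if cost >= best["cost"] or loss > head_budget:
            return  # prune: cannot beat the incumbent, or infeasible
        if i == len(lengths):
            best["cost"], best["choice"] = cost, choice
            return
        for d in DIAMETERS:  # cheapest first, so good incumbents appear early
            recurse(i + 1, cost + UNIT_COST[d] * lengths[i],
                    loss + head_loss(q, d, lengths[i]), choice + [d])

    recurse(0, 0.0, 0.0, [])
    return best

# Two pipes in series, 10 L/s, 15 m of head available.
result = size_network(lengths=[100.0, 150.0], q=0.01, head_budget=15.0)
```

    For this toy series network the search returns the cheapest feasible pair of diameters; real GDWN cases add branching topology, demand nodes, and pressure constraints at each node.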

  1. A chaos wolf optimization algorithm with self-adaptive variable step-size

    Science.gov (United States)

    Zhu, Yong; Jiang, Wanlu; Kong, Xiangdong; Quan, Lingxiao; Zhang, Yongshun

    2017-10-01

    To explore the problem of parameter optimization for complex nonlinear functions, a chaos wolf optimization algorithm (CWOA) with self-adaptive variable step-size was proposed. The algorithm is based on the swarm intelligence of the wolf pack and fully simulates the predation behavior and prey-distribution strategy of wolves. It incorporates three intelligent behaviors: migration, summons, and siege. The algorithm is further characterized by a "winner-take-all" competition rule and a "survival of the fittest" update mechanism, and it combines the strategies of self-adaptive variable step-size search and chaos optimization. The CWOA was applied to parameter optimization of twelve typical, complex nonlinear functions, and the results were compared with many existing algorithms, including the classical genetic algorithm, the particle swarm optimization algorithm, and the leader wolf pack search algorithm. The investigation indicates that CWOA possesses superior optimization ability, with advantages in optimization accuracy and convergence rate, and demonstrates high robustness and global searching ability.
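
    A minimal sketch of two of the ingredients named above, chaotic initialization and a self-adaptive step size, is given below for a 1-D function. The logistic map, the halving rules, and all constants are assumptions made for illustration; the published CWOA is considerably richer (migration/summons/siege behaviors over a full pack in many dimensions).

```python
import random

def logistic_chaos(n, x0=0.7, r=4.0):
    """Generate n chaotic values in (0, 1) with the logistic map x <- r*x*(1-x),
    a common way to spread an initial population over the search space."""
    xs, x = [], x0
    for _ in range(n):
        x = r * x * (1.0 - x)
        xs.append(x)
    return xs

def cwoa_minimize(f, lo, hi, wolves=20, iters=200, seed=1):
    """Toy chaos-wolf search on a 1-D function: chaotic initialization, then each
    wolf probes randomly and moves toward the current leader, with a step size
    that shrinks (self-adaptive) whenever the leader stops improving."""
    rng = random.Random(seed)
    pack = [lo + c * (hi - lo) for c in logistic_chaos(wolves)]
    leader = min(pack, key=f)  # winner-take-all: best wolf leads
    step = 0.1 * (hi - lo)
    for _ in range(iters):
        improved = False
        for i, w in enumerate(pack):
            # migrate: random probe; summon/siege: move halfway toward the leader
            probe = w + step * (2.0 * rng.random() - 1.0)
            cand = min((probe, w + 0.5 * (leader - w)), key=f)
            if f(cand) < f(pack[i]):  # survival of the fittest
                pack[i] = cand
        new_leader = min(pack, key=f)
        if f(new_leader) < f(leader):
            leader, improved = new_leader, True
        if not improved:
            step *= 0.5  # self-adaptive variable step size
    return leader

best = cwoa_minimize(lambda x: (x - 2.0) ** 2 + 1.0, -10.0, 10.0)
```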

  2. A novel seizure detection algorithm informed by hidden Markov model event states

    Science.gov (United States)

    Baldassano, Steven; Wulsin, Drausin; Ung, Hoameng; Blevins, Tyler; Brown, Mesha-Gay; Fox, Emily; Litt, Brian

    2016-06-01

    Objective. Recently the FDA approved the first responsive, closed-loop intracranial device to treat epilepsy. Because these devices must respond within seconds of seizure onset and not miss events, they are tuned to have high sensitivity, leading to frequent false positive stimulations and decreased battery life. In this work, we propose a more robust seizure detection model. Approach. We use a Bayesian nonparametric Markov switching process to parse intracranial EEG (iEEG) data into distinct dynamic event states. Each event state is then modeled as a multidimensional Gaussian distribution to allow for predictive state assignment. By detecting event states highly specific for seizure onset zones, the method can identify precise regions of iEEG data associated with the transition to seizure activity, reducing false positive detections associated with interictal bursts. The seizure detection algorithm was translated to a real-time application and validated in a small pilot study using 391 days of continuous iEEG data from two dogs with naturally occurring, multifocal epilepsy. A feature-based seizure detector modeled after the NeuroPace RNS System was developed as a control. Main results. Our novel seizure detection method demonstrated an improvement in false negative rate (0/55 seizures missed versus 2/55 seizures missed) as well as a significantly reduced false positive rate (0.0012 h-1 versus 0.058 h-1). All seizures were detected an average of 12.1 ± 6.9 s before the onset of unequivocal epileptic activity (unequivocal epileptic onset, UEO). Significance. This algorithm represents a computationally inexpensive, individualized, real-time detection method suitable for implantable antiepileptic devices that may considerably reduce false positive rate relative to current industry standards.
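
    The predictive state-assignment step described above can be illustrated with a toy 1-D version: each event state is a Gaussian, samples are labeled by maximum log-likelihood, and a detection fires only after a run of onset-state labels (which is what suppresses isolated interictal bursts). The states, feature values, and run-length rule are hypothetical, not the study's fitted model.

```python
import math

# Hypothetical 1-D event states learned elsewhere: (mean, std) per state.
STATES = {"interictal": (0.0, 1.0), "burst": (3.0, 1.0), "seizure_onset": (8.0, 2.0)}

def log_likelihood(x, mu, sigma):
    """Gaussian log-density, used for predictive state assignment."""
    return -0.5 * math.log(2.0 * math.pi * sigma ** 2) - (x - mu) ** 2 / (2.0 * sigma ** 2)

def assign_state(x):
    """Label a feature value with its most likely event state."""
    return max(STATES, key=lambda s: log_likelihood(x, *STATES[s]))

def detect(samples, trigger_state="seizure_onset", run_length=3):
    """Fire a detection only after `run_length` consecutive onset-state samples,
    which suppresses isolated bursts that would otherwise cause false positives."""
    run = 0
    for i, x in enumerate(samples):
        run = run + 1 if assign_state(x) == trigger_state else 0
        if run >= run_length:
            return i  # index at which the detector fires
    return None
```

    With these states, `detect([0.1, 3.2, 0.5, 7.9, 8.4, 9.1])` fires at the last index, while a sequence of isolated onset-like values separated by interictal samples never triggers.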

  3. Missed Opportunities For Immunization In Children And Pregnant Women

    Directory of Open Access Journals (Sweden)

    Benjamin A I

    1990-01-01

    Full Text Available The role of immunization in reducing childhood mortality cannot be over-emphasised, yet many opportunities for immunization are missed when children and pregnant women visit a health facility. Reducing missed opportunities is the cheapest way to increase immunization coverage. The present study discusses the extent of the problem of missed opportunities for immunization in children and pregnant women, and the factors contributing to the problem, in the specialty and community outreach clinics of Christian Medical College & Hospital, Ludhiana. Recommendations are made regarding ways and means of reducing missed opportunities.

  4. An Automatic Web Service Composition Framework Using QoS-Based Web Service Ranking Algorithm.

    Science.gov (United States)

    Mallayya, Deivamani; Ramachandran, Baskaran; Viswanathan, Suganya

    2015-01-01

    Web services have become the technology of choice for service-oriented computing to meet the interoperability demands of web applications. In the Internet era, the exponential growth in the number of web services makes quality of service (QoS) an essential parameter for discriminating among them. In this paper, a user-preference-based web service ranking (UPWSR) algorithm is proposed to rank web services based on user preferences and the QoS aspects of the web service. When the user's request cannot be fulfilled by a single atomic service, several existing services must be composed and delivered as a composition. The proposed framework allows the user to specify local and global constraints for composite web services, which improves flexibility. The UPWSR algorithm identifies best-fit services for each task in the user request and, by limiting the number of candidate services for each task, reduces the time to generate the composition plans. To tackle the problem of web service composition, a QoS-aware automatic web service composition (QAWSC) algorithm is also proposed, based on the QoS aspects of the web services and user preferences. The framework further allows the user to provide feedback about the composite service, which improves the reputation of the services.
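
    The QoS-based ranking idea can be sketched as a weighted score over normalized attributes, with "lower is better" attributes flipped. The attribute names, weights, and services below are invented for illustration and are not the UPWSR algorithm itself:

```python
# Hypothetical QoS records; higher is better for reliability, lower for latency/cost.
services = [
    {"name": "svcA", "latency_ms": 120.0, "reliability": 0.99, "cost": 0.05},
    {"name": "svcB", "latency_ms": 40.0,  "reliability": 0.95, "cost": 0.20},
    {"name": "svcC", "latency_ms": 80.0,  "reliability": 0.97, "cost": 0.10},
]

def rank(services, weights, negative=("latency_ms", "cost")):
    """Min-max normalize each QoS attribute, flip the 'lower is better' ones,
    then order services by the user's weighted score."""
    def norm(attr, value):
        vals = [s[attr] for s in services]
        lo, hi = min(vals), max(vals)
        x = 0.5 if hi == lo else (value - lo) / (hi - lo)
        return 1.0 - x if attr in negative else x

    def score(s):
        return sum(w * norm(a, s[a]) for a, w in weights.items())

    return sorted(services, key=score, reverse=True)

# A latency-sensitive user preference: latency dominates the ranking.
ordered = rank(services, {"latency_ms": 0.6, "reliability": 0.2, "cost": 0.2})
```

    With these weights the fast-but-expensive service wins; shifting weight onto cost or reliability reorders the list, which is the sense in which the ranking is preference-based.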

  5. An Example-Based Super-Resolution Algorithm for Selfie Images

    Directory of Open Access Journals (Sweden)

    Jino Hans William

    2016-01-01

    Full Text Available A selfie is typically a self-portrait captured using the front camera of a smartphone. Most state-of-the-art smartphones are equipped with a high-resolution (HR) rear camera and a low-resolution (LR) front camera. As selfies are captured by the front camera with limited pixel resolution, fine details are inevitably lost. This paper aims to improve the resolution of selfies by exploiting the fine details in HR images captured by the rear camera, using an example-based super-resolution (SR) algorithm. HR images captured by the rear camera carry significant fine detail and are used as exemplars to train an optimal matrix-value regression (MVR) operator. The MVR operator serves as an image-pair prior which learns the correspondence between the LR-HR patch pairs and is effectively used to super-resolve LR selfie images. The proposed MVR algorithm avoids vectorization of image patch pairs and preserves image-level information during both the learning and recovery processes. The proposed algorithm is evaluated for its efficiency and effectiveness, both qualitatively and quantitatively, against other state-of-the-art SR algorithms. The results validate that the proposed algorithm is efficient, requiring less than 3 seconds to super-resolve an LR selfie, and effective, preserving sharp details without introducing counterfeit fine details.
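
    The flavor of example-based SR via a learned regression operator can be shown with a deliberately tiny stand-in: fit a linear operator from 2-pixel LR patches to an HR value by regularized least squares on exemplar pairs, then apply it to a new patch. The data and the scalar-output simplification are assumptions; the paper's MVR operator works on whole image patches as matrices.

```python
def train_operator(lr_patches, hr_values, lam=1e-6):
    """Least-squares fit of w (a 2-vector) minimizing ||X w - y||^2 + lam*||w||^2,
    solved with explicit 2x2 normal equations; a scalar stand-in for the paper's
    matrix-value regression operator."""
    a = sum(p[0] * p[0] for p in lr_patches) + lam
    b = sum(p[0] * p[1] for p in lr_patches)
    d = sum(p[1] * p[1] for p in lr_patches) + lam
    r0 = sum(p[0] * y for p, y in zip(lr_patches, hr_values))
    r1 = sum(p[1] * y for p, y in zip(lr_patches, hr_values))
    det = a * d - b * b
    return ((d * r0 - b * r1) / det, (a * r1 - b * r0) / det)

def super_resolve(lr_patch, w):
    """Apply the learned operator to predict the missing HR detail."""
    return w[0] * lr_patch[0] + w[1] * lr_patch[1]

# Invented exemplar pairs where the HR value happens to be the LR-patch mean,
# so the learned operator should recover weights close to (0.5, 0.5).
pairs = [((2.0, 4.0), 3.0), ((10.0, 6.0), 8.0), ((1.0, 5.0), 3.0), ((7.0, 3.0), 5.0)]
w = train_operator([p for p, _ in pairs], [y for _, y in pairs])
```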

  6. Methods for Handling Missing Secondary Respondent Data

    Science.gov (United States)

    Young, Rebekah; Johnson, David

    2013-01-01

    Secondary respondent data are underutilized because researchers avoid using these data in the presence of substantial missing data. The authors reviewed, evaluated, and tested solutions to this problem. Five strategies of dealing with missing partner data were reviewed: (a) complete case analysis, (b) inverse probability weighting, (c) correction…

  7. Development of a preference-based index from the National Eye Institute Visual Function Questionnaire-25.

    Science.gov (United States)

    Rentz, Anne M; Kowalski, Jonathan W; Walt, John G; Hays, Ron D; Brazier, John E; Yu, Ren; Lee, Paul; Bressler, Neil; Revicki, Dennis A

    2014-03-01

    Understanding how individuals value health states is central to patient-centered care and to health policy decision making. Generic preference-based measures of health may not effectively capture the impact of ocular diseases. Recently, 6 items from the National Eye Institute Visual Function Questionnaire-25 were used to develop the Visual Function Questionnaire-Utility Index health state classification, which defines visual function health states. To describe elicitation of preferences for health states generated from the Visual Function Questionnaire-Utility Index health state classification and development of an algorithm to estimate health preference scores for any health state. Nonintervention, cross-sectional study of the general community in 4 countries (Australia, Canada, United Kingdom, and United States). A total of 607 adult participants were recruited from local newspaper advertisements. In the United Kingdom, an existing database of participants from previous studies was used for recruitment. Eight of 15,625 possible health states from the Visual Function Questionnaire-Utility Index were valued using time trade-off technique. A θ severity score was calculated for Visual Function Questionnaire-Utility Index-defined health states using item response theory analysis. Regression models were then used to develop an algorithm to assign health state preference values for all potential health states defined by the Visual Function Questionnaire-Utility Index. Health state preference values for the 8 states ranged from a mean (SD) of 0.343 (0.395) to 0.956 (0.124). As expected, preference values declined with worsening visual function. Results indicate that the Visual Function Questionnaire-Utility Index describes states that participants view as spanning most of the continuum from full health to dead. Visual Function Questionnaire-Utility Index health state classification produces health preference scores that can be estimated in vision-related studies that

  8. Reducing Competitive Cache Misses in Modern Processor Architectures

    OpenAIRE

    Prisagjanec, Milcho; Mitrevski, Pece

    2017-01-01

    The increasing number of threads inside the cores of a multicore processor, and competitive access to the shared cache memory, become the main reasons for an increased number of competitive cache misses and performance decline. Inevitably, the development of modern processor architectures leads to an increased number of cache misses. In this paper, we make an attempt to implement a technique for decreasing the number of competitive cache misses in the first level of cache memory. This tec...

  9. Missed opportunities in crystallography.

    Science.gov (United States)

    Dauter, Zbigniew; Jaskolski, Mariusz

    2014-09-01

    Scrutinized from the perspective of time, the giants in the history of crystallography more than once missed a nearly obvious chance to make another great discovery, or went in the wrong direction. This review analyzes such missed opportunities focusing on macromolecular crystallographers (using Perutz, Pauling, Franklin as examples), although cases of particular historical (Kepler), methodological (Laue, Patterson) or structural (Pauling, Ramachandran) relevance are also described. Linus Pauling, in particular, is presented several times in different circumstances, as a man of vision, oversight, or even blindness. His example underscores the simple truth that also in science incessant creativity is inevitably connected with some probability of fault. Published 2014. This article is a U.S. Government work and is in the public domain in the USA.

  10. Bayesian Sensitivity Analysis of Statistical Models with Missing Data.

    Science.gov (United States)

    Zhu, Hongtu; Ibrahim, Joseph G; Tang, Niansheng

    2014-04-01

    Methods for handling missing data depend strongly on the mechanism that generated the missing values, such as missing completely at random (MCAR) or missing at random (MAR), as well as other distributional and modeling assumptions at various stages. It is well known that the resulting estimates and tests may be sensitive to these assumptions as well as to outlying observations. In this paper, we introduce various perturbations to modeling assumptions and individual observations, and then develop a formal sensitivity analysis to assess these perturbations in the Bayesian analysis of statistical models with missing data. We develop a geometric framework, called the Bayesian perturbation manifold, to characterize the intrinsic structure of these perturbations. We propose several intrinsic influence measures to perform sensitivity analysis and quantify the effect of various perturbations to statistical models. We use the proposed sensitivity analysis procedure to systematically investigate the tenability of the non-ignorable, not missing at random (NMAR) assumption. Simulation studies are conducted to evaluate our methods, and a dataset is analyzed to illustrate the use of our diagnostic measures.

  11. Performance Evaluation of New Joint EDF-RM Scheduling Algorithm for Real Time Distributed System

    Directory of Open Access Journals (Sweden)

    Rashmi Sharma

    2014-01-01

    Full Text Available In real-time systems, meeting deadlines is the main target of every scheduling algorithm. Earliest Deadline First (EDF), Rate Monotonic (RM), and Least Laxity First are some renowned algorithms that work well in their own contexts. EDF suffers from the well-known domino effect generated under overload conditions (EDF does not perform well when the system is overloaded), while the performance of RM degrades under underload conditions. In this sense, the two algorithms complement each other. Deadlines are missed in both cases because of the algorithms' utilization-bounding strategies. Therefore, in this paper we propose a new scheduling algorithm that overcomes the drawbacks of both existing algorithms. The joint EDF-RM scheduling algorithm is implemented in a global scheduler that permits task migration between processors in the system. In order to verify the improved behavior of the proposed algorithm, we performed simulations. Results are evaluated in terms of Success Ratio (SR), Effective CPU Utilization (ECU), Failure Ratio (FR), and Maximum Tardiness, and are compared with the existing EDF, RM, and D_R_EDF algorithms. It is shown that the proposed algorithm performs better under overload as well as underload conditions.
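
    The core policy switch, EDF while the ready set is schedulable and RM-style selection under overload, can be sketched as follows. The task-set format and the plain utilization test are simplifying assumptions, not the paper's full global scheduler with migration:

```python
def utilization(tasks):
    """Total requested CPU utilization: sum of worst-case execution time / period."""
    return sum(t["wcet"] / t["period"] for t in tasks)

def pick_next(ready):
    """Joint policy sketch: EDF (earliest absolute deadline) while the ready set
    is schedulable; fall back to RM (shortest period first) when overloaded,
    which avoids EDF's domino effect."""
    if utilization(ready) <= 1.0:
        return min(ready, key=lambda t: t["deadline"])   # EDF
    return min(ready, key=lambda t: t["period"])         # RM

tasks = [
    {"name": "t1", "wcet": 2.0, "period": 10.0, "deadline": 12.0},
    {"name": "t2", "wcet": 3.0, "period": 5.0,  "deadline": 9.0},
]
```

    At utilization 0.8 the dispatcher behaves as EDF; adding a heavy task pushes utilization past 1.0 and the same call switches to period-based (RM) selection.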

  12. Decision process in MCDM with large number of criteria and heterogeneous risk preferences

    Directory of Open Access Journals (Sweden)

    Jian Liu

    Full Text Available A new decision process is proposed to address multi-criteria decision making (MCDM) problems that involve a large number of criteria and decision makers with heterogeneous risk preferences. First, from the perspective of objective data, the effective criteria are extracted based on the similarity relations between criterion values, and the criteria are weighted accordingly. Second, the corresponding theoretical models of risk-preference expectations are built, based on the possibility and similarity between criterion values, to resolve the problem of different interval numbers sharing the same expectation. The risk preferences (risk-seeking, risk-neutral, and risk-averse) are then embedded in the decision process, and the optimal decision object is selected according to the decision makers' risk preferences using the corresponding theoretical model. Finally, a new information-aggregation algorithm is proposed based on fairness maximization of decision results for group decisions, considering the coexistence of decision makers with heterogeneous risk preferences. The scientific rationality of the new method is verified through the analysis of a real case. Keywords: Heterogeneous, Risk preferences, Fairness, Decision process, Group decision

  13. Studies into tau reconstruction, missing transverse energy and photon induced processes with the ATLAS detector at the LHC

    International Nuclear Information System (INIS)

    Prabhu, Robindra P.

    2011-09-01

    The ATLAS experiment is currently recording data from proton-proton collisions delivered by CERN's Large Hadron Collider. As more data is amassed, studies of both Standard Model processes and searches for new physics beyond will intensify. This dissertation presents a three-part study providing new methods to help facilitate these efforts. The first part presents a novel τ-reconstruction algorithm for ATLAS inspired by the ideas of particle flow calorimetry. The algorithm is distinguished from traditional τ-reconstruction approaches in ATLAS, insofar that it seeks to recognize decay topologies consistent with a (hadronically) decaying τ-lepton using resolved energy flow objects in the calorimeters. This procedure allows for an early classification of τ-candidates according to their decay mode and the use of decay mode specific discrimination against fakes. A detailed discussion of the algorithm is provided along with early performance results derived from simulated data. The second part presents a Monte Carlo simulation tool which by way of a pseudorapidity-dependent parametrization of the jet energy resolution, provides a probabilistic estimate for the magnitude of instrumental contributions to missing transverse energy arising from jet fluctuations. The principles of the method are outlined and it is shown how the method can be used to populate tails of simulated missing transverse energy distributions suffering from low statistics. The third part explores the prospect of detecting photon-induced leptonic final states in early data. Such processes are distinguished from the more copious hadronic interactions at the LHC by cleaner final states void of hadronic debris, however the soft character of the final state leptons poses challenges to both trigger and offline selections. New trigger items enabling the online selection of such final states are presented, along with a study into the feasibility of detecting the two-photon exchange process pp(γγ →

  14. Twisted trees and inconsistency of tree estimation when gaps are treated as missing data - The impact of model mis-specification in distance corrections.

    Science.gov (United States)

    McTavish, Emily Jane; Steel, Mike; Holder, Mark T

    2015-12-01

    Statistically consistent estimation of phylogenetic trees or gene trees is possible if pairwise sequence dissimilarities can be converted to a set of distances that are proportional to the true evolutionary distances. Susko et al. (2004) reported some strikingly broad results about the forms of inconsistency in tree estimation that can arise if corrected distances are not proportional to the true distances. They showed that if the corrected distance is a concave function of the true distance, then inconsistency due to long branch attraction will occur. If these functions are convex, then two "long branch repulsion" trees will be preferred over the true tree - though these two incorrect trees are expected to be tied as the preferred tree. Here we extend their results, and demonstrate the existence of a tree shape (which we refer to as a "twisted Farris-zone" tree) for which a single incorrect tree topology will be guaranteed to be preferred if the corrected distance function is convex. We also report that the standard practice of treating gaps in sequence alignments as missing data is sufficient to produce non-linear corrected distance functions if the substitution process is not independent of the insertion/deletion process. Taken together, these results imply inconsistent tree inference under mild conditions. For example, if some positions in a sequence are constrained to be free of substitutions and insertion/deletion events while the remaining sites evolve with independent substitutions and insertion/deletion events, then the distances obtained by treating gaps as missing data can support an incorrect tree topology even given an unlimited amount of data. Copyright © 2015 Elsevier Inc. All rights reserved.
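
    The distance-correction machinery discussed above can be made concrete with the familiar Jukes-Cantor correction combined with the "gaps as missing data" (pairwise deletion) convention. The toy sequences are invented; note that the JC correction is a convex function of the raw mismatch proportion, which is exactly the regime the convexity results concern:

```python
import math

def p_distance(seq1, seq2, gap="-"):
    """Proportion of differing sites, ignoring (pairwise-deleting) gap positions,
    i.e. the standard 'gaps as missing data' treatment."""
    pairs = [(a, b) for a, b in zip(seq1, seq2) if gap not in (a, b)]
    return sum(a != b for a, b in pairs) / len(pairs)

def jc69(p):
    """Jukes-Cantor corrected distance; defined for p < 0.75 and convex in p."""
    return -0.75 * math.log(1.0 - 4.0 * p / 3.0)

# 1 mismatch over 8 comparable (gap-free) sites -> p = 0.125.
d = jc69(p_distance("ACGTAC-TAC", "ACGTTCGT-C"))
```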

  15. Algorithm for Extracting Digital Terrain Models under Forest Canopy from Airborne LiDAR Data

    Directory of Open Access Journals (Sweden)

    Almasi S. Maguya

    2014-07-01

    Full Text Available Extracting digital terrain models (DTMs) from LiDAR data under forest canopy is a challenging task. This is because the forest canopy tends to block a portion of the LiDAR pulses from reaching the ground, introducing gaps in the data. This paper presents an algorithm for DTM extraction from LiDAR data under forest canopy. The algorithm copes with the challenge of low data density by generating a series of coarse DTMs from the few available ground points and using trend surfaces to interpolate missing elevation values in the vicinity of those points. This process generates a cloud of ground points from which the final DTM is generated. The algorithm has been compared with two other algorithms proposed in the literature at three test sites with varying degrees of difficulty. Results show that the algorithm presented in this paper is more tolerant of low data density than the other two. The results further show that, with decreasing point density, the differences between the three algorithms increased dramatically from about 0.5 m to over 10 m.
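
    The trend-surface interpolation step can be illustrated in one dimension: fit a least-squares trend to the few available ground returns along a transect and use it to fill the gaps. The transect values are invented, and a real DTM would fit 2-D surfaces over a point cloud rather than a line:

```python
def fit_trend(points):
    """Least-squares line z = a + b*x through the available ground returns."""
    n = len(points)
    sx = sum(x for x, _ in points)
    sz = sum(z for _, z in points)
    sxx = sum(x * x for x, _ in points)
    sxz = sum(x * z for x, z in points)
    b = (n * sxz - sx * sz) / (n * sxx - sx * sx)
    a = (sz - b * sx) / n
    return a, b

def fill_gaps(profile):
    """Replace None elevations with trend estimates: a 1-D stand-in for the
    paper's coarse-DTM interpolation step."""
    known = [(x, z) for x, z in enumerate(profile) if z is not None]
    a, b = fit_trend(known)
    return [z if z is not None else a + b * x for x, z in enumerate(profile)]

# Sparse ground returns under canopy: indices 2 and 4 are missing.
dtm = fill_gaps([100.0, 101.0, None, 103.0, None, 105.0])
```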

  16. Student versus Faculty Perceptions of Missing Class.

    Science.gov (United States)

    Sleigh, Merry J.; Ritzer, Darren R.; Casey, Michael B.

    2002-01-01

    Examines and compares student and faculty attitudes towards students missing classes and class attendance. Surveys undergraduate students (n=231) in lower and upper level psychology courses and psychology faculty. Reports that students found more reasons acceptable for missing classes and that the amount of in-class material on the examinations…

  17. Gender preference and awareness regarding sex determination among antenatal mothers attending a medical college of eastern India.

    Science.gov (United States)

    Yasmin, Shamima; Mukherjee, Anindya; Manna, Nirmalya; Baur, Baijayanti; Datta, Mousumi; Sau, Manabendra; Roy, Manidipa; Dasgupta, Samir

    2013-06-01

    There are many women "missing" due to an unfavourable sex ratio in India, which has strong patriarchal norms and a preference for sons. Female gender discrimination has been reported in health care, nutrition, education, and resource allocation due to man-made norms, religious beliefs, and recently by ultrasonography resulting in lowered sex ratio. The present study attempts to find out the level of awareness regarding sex determination and to explore preference of gender and factors associated among antenatal mothers attending a medical college in eastern India. Interviews were done by predesigned pretested proforma over 6 months. The data were analysed by SPSS 16.0 software for proportions with chi-squared tests and binary logistic regression analysis. Most women who were multigravida did not know about contraceptives; 1.8% of mothers knew the sex of the fetus in present pregnancy while another 34.7% expressed willingness; 13.6% knew of a place which could tell sex of the fetus beforehand; 55.6% expressed their preference of sex of the baby for present pregnancy while 50.6% of their husbands had gender preference. Gender preference was significantly high in subjects with: lower socioeconomic status (p=0.011); lower level of education of mother (p=0.047) and husband (p=0.0001); multigravida (p=0.002); presence of living children (p=0.0001); and husband having preference of sex of baby (p=0.0001). Parental education, socioeconomic background, and number of living issues were the main predictors for gender preference. Awareness regarding gender preference and related law and parental counselling to avoid gender preference with adoption of small family norm is recommended.

  18. Structural Group-based Auditing of Missing Hierarchical Relationships in UMLS

    Science.gov (United States)

    Chen, Yan; Gu, Huanying(Helen); Perl, Yehoshua; Geller, James

    2009-01-01

    The Metathesaurus of the UMLS was created by integrating various source terminologies. The inter-concept relationships were either integrated into the UMLS from the source terminologies or specially generated. Due to the extensive size and inherent complexity of the Metathesaurus, the accidental omission of some hierarchical relationships was inevitable. We present a recursive procedure which allows a human expert, with the support of an algorithm, to locate missing hierarchical relationships. The procedure starts with a group of concepts with exactly the same (correct) semantic type assignments. It then partitions the concepts, based on child-of hierarchical relationships, into smaller, singly rooted, hierarchically connected subgroups. The auditor only needs to focus on the subgroups with very few concepts and their concepts with semantic type reassignments. The procedure was evaluated by comparing it with a comprehensive manual audit and it exhibits a perfect error recall. PMID:18824248
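
    The partitioning step, splitting a same-semantic-type concept group into singly rooted, hierarchically connected subgroups via child-of links, is straightforward to sketch. The concept names and edges below are hypothetical; the real procedure operates on UMLS concepts with identical semantic-type assignments:

```python
from collections import defaultdict

def partition(concepts, child_of):
    """Roots are concepts with no parent inside the group; each root's subgroup
    is everything reachable from it via child-of links. Auditors then focus on
    the small subgroups, where missing relationships tend to hide."""
    children = defaultdict(list)
    has_parent = set()
    for child, parent in child_of:
        if child in concepts and parent in concepts:
            children[parent].append(child)
            has_parent.add(child)
    groups = {}
    for root in sorted(c for c in concepts if c not in has_parent):
        seen, stack = [], [root]
        while stack:  # depth-first walk of the root's descendants
            c = stack.pop()
            if c not in seen:
                seen.append(c)
                stack.extend(children[c])
        groups[root] = sorted(seen)
    return groups

edges = [("B", "A"), ("C", "A"), ("E", "D")]  # (child, parent) pairs
groups = partition({"A", "B", "C", "D", "E", "F"}, edges)
```

    Here the isolated concept "F" forms a singleton subgroup, the kind of small fragment a human auditor would inspect for an accidentally omitted hierarchical relationship.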

  19. Farmers' preferences for automatic lameness-detection systems in dairy cattle.

    Science.gov (United States)

    Van De Gucht, T; Saeys, W; Van Nuffel, A; Pluym, L; Piccart, K; Lauwers, L; Vangeyte, J; Van Weyenberg, S

    2017-07-01

    As lameness is a major health problem in dairy herds, a lot of attention goes to the development of automated lameness-detection systems. Few systems have made it to the market, as most are currently still in development. To get these systems ready for practice, developers need to define which system characteristics are important for the farmers as end users. In this study, farmers' preferences for the different characteristics of proposed lameness-detection systems were investigated. In addition, the influence of sociodemographic and farm characteristics on farmers' preferences was assessed. The third aim was to find out if preferences change after the farmer receives extra information on lameness and its consequences. Therefore, a discrete choice experiment was designed with 3 alternative lameness-detection systems: a system attached to the cow, a walkover system, and a camera system. Each system was defined by 4 characteristics: the percentage missed lame cows, the percentage false alarms, the system cost, and the ability to indicate which leg is lame. The choice experiment was embedded in an online survey. After answering general questions and choosing their preferred option in 4 choice sets, extra information on lameness was provided. Consecutively, farmers were shown a second block of 4 choice sets. Results from 135 responses showed that farmers' preferences were influenced by the 4 system characteristics. The importance a farmer attaches to lameness, the interval between calving and first insemination, and the presence of an estrus-detection system contributed significantly to the value a farmer attaches to lameness-detection systems. Farmers who already use an estrus detection system were more willing to use automatic detection systems instead of visual lameness detection. Similarly, farmers who achieve shorter intervals between calving and first insemination and farmers who find lameness highly important had a higher tendency to choose for automatic

  20. The Impact of the Nursing Practice Environment on Missed Nursing Care.

    Science.gov (United States)

    Hessels, Amanda J; Flynn, Linda; Cimiotti, Jeannie P; Cadmus, Edna; Gershon, Robyn R M

    2015-12-01

    Missed nursing care is an emerging problem negatively impacting patient outcomes. There are gaps in our knowledge of factors associated with missed nursing care. The aim of this study was to determine the relationship between the nursing practice environment and missed nursing care in acute care hospitals. This is a secondary analysis of cross-sectional data from a survey of over 7,000 nurses from 70 hospitals on workplace and process of care. Ordinary least squares and multiple regression models were constructed to examine the relationship between the nursing practice environment and missed nursing care while controlling for characteristics of nurses and hospitals. Nurses missed delivering a significant amount of necessary patient care (10-27%). Inadequate staffing and inadequate resources were the practice-environment factors most strongly associated with missed nursing care events. This multi-site study examined the risk and risk factors associated with missed nursing care. Improvements targeting modifiable risk factors may reduce the risk of missed nursing care.

  1. Incidence and Correlates of Maternal Near Miss in Southeast Iran

    Directory of Open Access Journals (Sweden)

    Tayebeh Naderi

    2015-01-01

    Full Text Available This prospective study aimed to estimate the incidence and associated factors of severe maternal morbidity in southeast Iran. During a 9-month period in 2013, all women referring to eight hospitals for termination of pregnancy, as well as women admitted during the 42 days after the termination of pregnancy, were enrolled into the study. Maternal near miss conditions were defined based on Say et al.'s recommendations. Five hundred and one cases of maternal near miss and 19,908 live births occurred in the study period, yielding a maternal near miss ratio of 25.2 per 1000 live births. This rate was 7.5 and 105 per 1000 in private and tertiary care settings, respectively. The rate of maternal death in near miss cases was 0.40%, with a case-to-fatality ratio of 250:1. The most prevalent causes of near miss were severe preeclampsia (27.3%), ectopic pregnancy (18.4%), and abruptio placentae (16.2%). Higher age, higher education, and being primiparous were associated with a higher risk of near miss. Considering the high rate of maternal near miss in referral hospitals, a maternal near miss surveillance system should be set up in these hospitals to identify cases of severe maternal morbidity as soon as possible.

  2. Droid X The Missing Manual

    CERN Document Server

    Gralla, Preston

    2011-01-01

    Get the most from your Droid X right away with this entertaining Missing Manual. Veteran tech author Preston Gralla offers a guided tour of every feature, with lots of expert tips and tricks along the way. You'll learn how to use calling and texting features, take and share photos, enjoy streaming music and video, and much more. Packed with full-color illustrations, this engaging book covers everything from getting started to advanced features and troubleshooting. Unleash the power of Motorola's hot new device with Droid X: The Missing Manual. Get organized. Import your contacts and sync wit

  3. Reduction of artefacts due to missing projections using OSEM

    International Nuclear Information System (INIS)

    Hutton, B.F.; Kyme, A.; Choong, K.

    2002-01-01

    Full text: It is well recognised that missing or corrupted projections can result in artefacts. This occasionally occurs due to errors in data transfer from acquisition memory to disk. A possible approach for reducing these artefacts was investigated. Using ordered subsets expectation maximization (OSEM), the iterative reconstruction proceeds by progressively including additional projections until a single iteration is complete. Clinically useful results can be obtained using a small subset size in a single iteration. Stopping prior to the complete iteration, so as to avoid inclusion of missing or corrupted data, should provide a 'partial' reconstruction with minimal artefacts. To test this hypothesis, projections were selectively removed from a complete data set (2, 4, 8, 12 adjacent projections) and reconstructions were performed using both filtered back projection (FBP) and OSEM. To maintain a constant number of sub-iterations in OSEM, an equal number of duplicate projections were substituted for the missing projections. Both 180- and 360-degree reconstructions with missing data were compared with reconstructions of the complete data using the sum of absolute differences. Results indicate that missing data cause artefacts for both FBP and OSEM; however, the severity of the artefacts is significantly reduced using OSEM. The effect of missing data is generally greater for 180-degree acquisition. OSEM is recommended for minimising reconstruction artefacts due to missing projections. Copyright (2002) The Australian and New Zealand Society of Nuclear Medicine Inc
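
As a rough illustration of the idea of stopping an ordered-subsets pass before the corrupted projections are reached, here is a minimal toy sketch; the system matrix, subset layout and missing-projection indices are all invented for illustration and are not taken from the study:

```python
# Toy OSEM on a 4-pixel image and 8 projections; A, y and the subset
# layout are invented. The single iteration is stopped early so that
# subsets containing missing projections are never used, as the abstract
# proposes.
A = [[1.0 if (i + j) % 2 == 0 else 0.5 for j in range(4)] for i in range(8)]
x_true = [2.0, 1.0, 3.0, 1.5]
y = [sum(A[i][j] * x_true[j] for j in range(4)) for i in range(8)]

missing = {6, 7}                      # projections lost in data transfer
subsets = [[0, 1], [2, 3], [4, 5], [6, 7]]

x = [1.0] * 4                         # uniform initial estimate
for subset in subsets:
    if any(i in missing for i in subset):
        break                         # stop before the corrupted data
    # forward-project the current estimate for this subset only
    fwd = {i: sum(A[i][k] * x[k] for k in range(4)) for i in subset}
    # multiplicative OSEM update using only this subset's projections
    x = [x[j] * sum(A[i][j] * y[i] / fwd[i] for i in subset)
              / sum(A[i][j] for i in subset)
         for j in range(4)]

print(x)
```

Because the update is multiplicative, the estimate stays positive, and only the clean subsets contribute to the 'partial' reconstruction.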

  4. New approach for measuring 3D space by using Advanced SURF Algorithm

    Energy Technology Data Exchange (ETDEWEB)

    Youm, Minkyo; Min, Byungil; Suh, Kyungsuk [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of); Lee, Backgeun [Sungkyunkwan Univ., Suwon (Korea, Republic of)

    2013-05-15

    Nuclear disasters, compared to natural disasters, create more extreme conditions for analysis and evaluation. In this paper, the measurement and modelling of 3D space from simple photographs was studied for the case of a small sand dune. The suggested method can be used for the acquisition of spatial information by a robot in a disaster area. As a result, these data help identify the damaged parts, the degree of damage, and the sequence of recovery operations. In this study, we improve a computer vision algorithm for 3D geospatial information measurement and confirm it by testing. First, we obtained a noticeable improvement in the 3D geospatial information results by combining the SURF algorithm with photogrammetric surveying. Second, we confirmed not only a decrease in algorithm running time but also an increase in matching points through epipolar line filtering. In this study, we extract a 3D model using an open source algorithm and remove mismatched points by the filtering method. However, owing to the characteristics of the SURF algorithm, it cannot find matching points if the structure lacks strong features, so further study is needed on feature detection for such structures.

  5. A Modified Spatiotemporal Fusion Algorithm Using Phenological Information for Predicting Reflectance of Paddy Rice in Southern China

    Directory of Open Access Journals (Sweden)

    Mengxue Liu

    2018-05-01

    Full Text Available Satellite data for studying surface dynamics in heterogeneous landscapes are often missing due to frequent cloud contamination, low temporal resolution, and technological difficulties in developing satellites. A modified spatiotemporal fusion algorithm for predicting the reflectance of paddy rice is presented in this paper. The algorithm uses phenological information extracted from a moderate-resolution imaging spectroradiometer enhanced vegetation index time series to improve the enhanced spatial and temporal adaptive reflectance fusion model (ESTARFM). The algorithm is tested with satellite data for Yueyang City, China. The main contribution of the modified algorithm is the selection of similar neighborhood pixels by using phenological information to improve accuracy. Results show that the modified algorithm performs better than ESTARFM in visual inspection and quantitative metrics, especially for paddy rice. This modified algorithm provides not only new ideas for the improvement of spatiotemporal data fusion methods, but also technical support for the generation of remote sensing data with high spatial and temporal resolution.
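
The phenology-based selection of similar neighbour pixels can be sketched roughly as follows; the EVI profiles and RMSE threshold are invented for illustration, and ESTARFM's actual similarity test involves more than this:

```python
# Hedged sketch: pick neighbour pixels whose EVI time series resembles the
# centre pixel's, as a proxy for shared phenology. Toy values throughout.
def similar_pixels(center_evi, neighbours, max_rmse=0.05):
    """neighbours: dict pixel_id -> EVI time series (same length)."""
    def rmse(a, b):
        return (sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)) ** 0.5
    return [pid for pid, evi in neighbours.items()
            if rmse(center_evi, evi) <= max_rmse]

center = [0.20, 0.45, 0.70, 0.40]          # paddy-rice-like EVI profile
nbrs = {
    "p1": [0.21, 0.44, 0.69, 0.41],        # similar phenology
    "p2": [0.30, 0.31, 0.33, 0.32],        # evergreen-like, dissimilar
    "p3": [0.19, 0.47, 0.72, 0.38],        # similar
}
print(similar_pixels(center, nbrs))        # -> ['p1', 'p3']
```

Only pixels with a matching seasonal profile would then contribute to the fusion weights for the centre pixel.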

  6. Robust Floor Determination Algorithm for Indoor Wireless Localization Systems under Reference Node Failure

    Directory of Open Access Journals (Sweden)

    Kriangkrai Maneerat

    2016-01-01

    Full Text Available One of the challenging problems for indoor wireless multifloor positioning systems is the presence of reference node (RN) failures, which cause values of the received signal strength (RSS) to be missing during the online positioning phase of the location fingerprinting technique. This leads to performance degradation in terms of floor accuracy, which in turn affects other localization procedures. This paper presents a robust floor determination algorithm called Robust Mean of Sum-RSS (RMoS), which can accurately determine the floor on which mobile objects are located and can work under either fault-free or RN-failure scenarios. The proposed fault-tolerant floor determination algorithm is based on the mean of the summation of the strongest RSSs obtained from IEEE 802.15.4 Wireless Sensor Networks (WSNs) during the online phase. The performance of the proposed algorithm is compared with those of different floor determination algorithms in the literature. The experimental results show that the proposed robust floor determination algorithm outperformed the other floor algorithms and achieved the highest percentage of floor determination accuracy in all scenarios tested. Specifically, the proposed algorithm achieved greater than 95% correct floor determination under the scenario in which 40% of RNs failed.
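
A minimal sketch of the underlying idea, choosing the floor whose strongest received signals have the highest mean; the readings, floor layout and k below are invented, and the actual RMoS algorithm is more involved:

```python
# Hedged sketch of the "mean of the strongest RSSs per floor" idea. A
# failed reference node simply contributes no reading, so the score
# degrades gracefully under RN failures.
def determine_floor(rss_by_floor, k=3):
    """Pick the floor whose k strongest RSS readings (dBm) have the
    highest mean."""
    best_floor, best_score = None, float("-inf")
    for floor, readings in rss_by_floor.items():
        if not readings:
            continue                       # all RNs on this floor failed
        strongest = sorted(readings, reverse=True)[:k]
        score = sum(strongest) / len(strongest)
        if score > best_score:
            best_floor, best_score = floor, score
    return best_floor

rss = {
    1: [-78, -81, -90],          # weak: mobile is not on this floor
    2: [-55, -60, -58, -72],     # strongest readings -> likely floor
    3: [-70, -88],               # one RN failed, two readings remain
}
print(determine_floor(rss))      # -> 2
```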

  7. Search for Supersymmetry in final states with jets, missing transverse momentum and at least one lepton with the ATLAS Experiment

    CERN Document Server

    Janus, Michel

    A search for Supersymmetry (SUSY) in final states with at least one τ-lepton and large missing transverse energy, based on proton-proton collisions recorded with the ATLAS detector at the LHC, is presented. The τ-leptons are reconstructed in the hadronic decay mode. Final states with τ-leptons offer a good sensitivity for SUSY models where the coupling to the third generation of fermions is enhanced, e.g. gauge-mediated SUSY breaking (GMSB) models, for which the supersymmetric partner of the τ-lepton is the next-to-lightest SUSY particle (NLSP) and its decay to electrons or muons is strongly suppressed. To search for new physics in final states with hadronic τ-lepton decays, a reliable and efficient reconstruction algorithm for hadronic τ decays is needed to separate real τ decays from backgrounds of quark- or gluon-initiated jets and electrons. As part of this thesis, two existing τ-reconstruction algorithms were further developed and integrated into a single algorithm that has by now become the standard algorithm for recon...

  8. Maternal near-miss in a rural hospital in Sudan

    Directory of Open Access Journals (Sweden)

    Adam Gamal K

    2011-06-01

    Full Text Available Abstract Background Investigation of maternal near-miss is a useful complement to the investigation of maternal mortality with the aim of meeting the United Nations' fifth Millennium Development Goal. The present study was conducted to investigate the frequency of near-miss events, to calculate the mortality index for each event and to compare the socio-demographic and obstetrical data (age, parity, gestational age, education and antenatal care) of the near-miss cases with maternal deaths. Methods Near-miss cases and events (hemorrhage, infection, hypertensive disorders, anemia and dystocia), maternal deaths and their causes were retrospectively reviewed, and the mortality index for each event was calculated in Kassala Hospital, eastern Sudan over a 2-year period, from January 2008 to December 2010. Disease-specific criteria were applied for these events. Results There were 9578 deliveries, 205 near-miss cases, 228 near-miss events and 40 maternal deaths. The maternal near-miss and maternal mortality ratios were 22.1/1000 live births and 432/100,000 live births, respectively. Hemorrhage accounted for the most common event (40.8%), followed by infection (21.5%), hypertensive disorders (18.0%), anemia (11.8%) and dystocia (7.9%). The mortality indices were 22.2%, 10.0%, 10.0%, 8.8% and 2.4% for infection, dystocia, anemia, hemorrhage and hypertensive disorders, respectively. Conclusion There is a high frequency of maternal morbidity and mortality at this facility. Maternal health policy therefore needs to be concerned not only with averting loss of life, but also with preventing or ameliorating maternal near-miss events (hemorrhage, infection, hypertension and anemia) at all care levels, including the primary level.

  9. PRS: PERSONNEL RECOMMENDATION SYSTEM FOR HUGE DATA ANALYSIS USING PORTER STEMMER

    Directory of Open Access Journals (Sweden)

    T N Chiranjeevi

    2016-04-01

    Full Text Available A personal recommendation system gives preferential recommendations to users to satisfy their personalized requirements; practical applications such as webpage preferences, sports video preferences, stock selection based on price, TV preferences, hotels, books, mobile phones, CDs and various other products now use recommender systems. The existing Pearson Correlation Coefficient (PCC) based approach and the item-based algorithm using PCC are called UPCC and IPCC, respectively. These systems are based only on rating data and do not consider personal user preferences; they simply produce results based on the ratings. As the size of the data increases, they give recommendations based on the top-rated services and miss out on most user preferences. These are the main drawbacks of the existing systems: they give the same results to all users based on rankings or ratings, neglecting user preferences and needs. To address this problem we propose a new approach, called the Personnel Recommendation System (PRS) for huge data analysis using the Porter stemmer, to solve the above challenges. The proposed system provides a personalized service recommendation list and recommends the most useful services to users, which increases the accuracy and efficiency of finding better services. In particular, a set of suggestions or keywords is provided to indicate user preferences, and we use collaborative filtering together with the Porter stemmer algorithm to give suitable recommendations to users. Broad experiments were conducted on a large real-world database, and the outcomes show that our proposed personal recommender method significantly improves the precision and efficiency of service recommendation over the KASR method. Our approach mainly considers user preferences, so it does not miss out on any of the data.
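
The rating-only baselines mentioned above (UPCC/IPCC) are built on the Pearson correlation coefficient over co-rated items; a minimal user-based (UPCC-style) similarity sketch, with invented ratings:

```python
from math import sqrt

# Hedged sketch of the UPCC building block: Pearson correlation between
# two users' ratings over the items both have rated. Ratings are invented.
def pearson(u, v):
    common = [i for i in u if i in v]
    if len(common) < 2:
        return 0.0
    mu = sum(u[i] for i in common) / len(common)
    mv = sum(v[i] for i in common) / len(common)
    num = sum((u[i] - mu) * (v[i] - mv) for i in common)
    den = sqrt(sum((u[i] - mu) ** 2 for i in common)) * \
          sqrt(sum((v[i] - mv) ** 2 for i in common))
    return num / den if den else 0.0

alice = {"hotel_a": 5, "hotel_b": 3, "hotel_c": 4}
bob   = {"hotel_a": 4, "hotel_b": 2, "hotel_c": 5}
print(pearson(alice, bob))
```

A purely rating-based recommender would weight Bob's unseen items by this similarity, which is exactly where keyword/preference signals (as in PRS) can add information the ratings alone miss.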

  10. WE-EF-207-04: An Inter-Projection Sensor Fusion (IPSF) Approach to Estimate Missing Projection Signal in Synchronized Moving Grid (SMOG) System

    International Nuclear Information System (INIS)

    Zhang, H; Kong, V; Jin, J; Ren, L; Zhang, Y; Giles, W

    2015-01-01

    Purpose: A synchronized moving grid (SMOG) has been proposed to reduce scatter and lag artifacts in cone beam computed tomography (CBCT). However, information is missing in each projection because certain areas are blocked by the grid. A previous solution to this issue is acquiring 2 complementary projections at each position, which increases scanning time. This study reports our first result using an inter-projection sensor fusion (IPSF) method to estimate the missing projection signal in our prototype SMOG-based CBCT system. Methods: An in-house SMOG assembly with a 1:1 grid of 3-mm gap has been installed in a CBCT benchtop. The grid moves back and forth with a 3-mm amplitude at up to 20-Hz frequency. A control program in LabView synchronizes the grid motion with the platform rotation and x-ray firing so that the grid patterns for any two neighboring projections are complementary. A Catphan was scanned with 360 projections. After scatter correction, the IPSF algorithm was applied to estimate the missing signal for each projection using the information from the 2 neighboring projections. The Feldkamp-Davis-Kress (FDK) algorithm was applied to reconstruct CBCT images. The CBCTs were compared to those reconstructed using normal projections without applying the SMOG system. Results: The SMOG-IPSF method may reduce image dose by half due to the radiation blocked by the grid. The method almost completely removed scatter-related artifacts, such as cupping artifacts. The evaluation of line pair patterns in the CatPhan suggested that the spatial resolution degradation was minimal. Conclusion: The SMOG-IPSF is promising in reducing scatter artifacts and improving image quality while reducing radiation dose

  11. WE-EF-207-04: An Inter-Projection Sensor Fusion (IPSF) Approach to Estimate Missing Projection Signal in Synchronized Moving Grid (SMOG) System

    Energy Technology Data Exchange (ETDEWEB)

    Zhang, H; Kong, V; Jin, J [Georgia Regents University Cancer Center, Augusta, GA (Georgia); Ren, L; Zhang, Y; Giles, W [Duke University Medical Center, Durham, NC (United States)

    2015-06-15

    Purpose: A synchronized moving grid (SMOG) has been proposed to reduce scatter and lag artifacts in cone beam computed tomography (CBCT). However, information is missing in each projection because certain areas are blocked by the grid. A previous solution to this issue is acquiring 2 complementary projections at each position, which increases scanning time. This study reports our first result using an inter-projection sensor fusion (IPSF) method to estimate the missing projection signal in our prototype SMOG-based CBCT system. Methods: An in-house SMOG assembly with a 1:1 grid of 3-mm gap has been installed in a CBCT benchtop. The grid moves back and forth with a 3-mm amplitude at up to 20-Hz frequency. A control program in LabView synchronizes the grid motion with the platform rotation and x-ray firing so that the grid patterns for any two neighboring projections are complementary. A Catphan was scanned with 360 projections. After scatter correction, the IPSF algorithm was applied to estimate the missing signal for each projection using the information from the 2 neighboring projections. The Feldkamp-Davis-Kress (FDK) algorithm was applied to reconstruct CBCT images. The CBCTs were compared to those reconstructed using normal projections without applying the SMOG system. Results: The SMOG-IPSF method may reduce image dose by half due to the radiation blocked by the grid. The method almost completely removed scatter-related artifacts, such as cupping artifacts. The evaluation of line pair patterns in the CatPhan suggested that the spatial resolution degradation was minimal. Conclusion: The SMOG-IPSF is promising in reducing scatter artifacts and improving image quality while reducing radiation dose.

  12. Missing transverse momentum in ATLAS: current and future performance

    CERN Document Server

    Schramm, S; The ATLAS collaboration

    2014-01-01

    During the Run-I data taking period, ATLAS developed and refined several approaches for measuring missing transverse momentum in proton-proton collisions. Standard calorimeter-based $\mathrm{E}_\mathrm{T}^\mathrm{miss}$ reconstruction techniques have been improved to obtain new levels of precision, while new track-based $\mathrm{p}_\mathrm{T}^\mathrm{miss}$ methods provide a second, independent measurement of the momentum lost to particles which do not leave tracks in the inner detectors. While both procedures are individually useful, preliminary studies have shown that combining information from both techniques leads to an improved understanding of missing transverse momentum. Data taking conditions during Run-I varied extensively, especially with respect to the amount of pileup activity present in each event, which poses unique challenges to calorimeter-based $\mathrm{E}_\mathrm{T}^\mathrm{miss}$. Multiple solutions have been demonstrated, including methods which exploit both cal...

  13. A nonparametric multiple imputation approach for missing categorical data

    Directory of Open Access Journals (Sweden)

    Muhan Zhou

    2017-06-01

    Full Text Available Abstract Background Incomplete categorical variables with more than two categories are common in public health data. However, most of the existing missing-data methods do not use the information from nonresponse (missingness) probabilities. Methods We propose a nearest-neighbour multiple imputation approach to impute a missing-at-random categorical outcome and to estimate the proportion of each category. The donor set for imputation is formed by measuring distances between each missing value and the non-missing values. The distance function is calculated based on a predictive score, which is derived from two working models: one fits a multinomial logistic regression for predicting the missing categorical outcome (the outcome model) and the other fits a logistic regression for predicting missingness probabilities (the missingness model). A weighting scheme is used to accommodate contributions from the two working models when generating the predictive score. A missing value is imputed by randomly selecting one of the non-missing values with the smallest distances. We conduct a simulation to evaluate the performance of the proposed method and compare it with several alternative methods. A real-data application is also presented. Results The simulation study suggests that the proposed method performs well when missingness probabilities are not extreme under some misspecifications of the working models. However, the calibration estimator, which is also based on two working models, can be highly unstable when missingness probabilities for some observations are extremely high. In this scenario, the proposed method produces more stable and better estimates. In addition, proper weights need to be chosen to balance the contributions from the two working models and achieve optimal results for the proposed method.
Conclusions We conclude that the proposed multiple imputation method is a reasonable approach to dealing with missing categorical outcome data with
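
The donor-selection step can be sketched in a deliberately simplified form; here the two working-model scores and the weight w are invented placeholders standing in for the fitted multinomial and logistic regressions described above:

```python
import random

# Hedged sketch of nearest-neighbour imputation on a predictive score that
# combines two working models. Scores and weight are invented, not fitted.
def impute(cases, k=3, w=0.5, seed=0):
    """cases: list of dicts with 'outcome' (category or None),
    'outcome_score' (stand-in for the outcome model) and 'miss_score'
    (stand-in for the missingness model)."""
    rng = random.Random(seed)
    donors = [c for c in cases if c["outcome"] is not None]

    def score(c):
        # weighted combination of the two working-model scores
        return w * c["outcome_score"] + (1 - w) * c["miss_score"]

    for c in cases:
        if c["outcome"] is not None:
            continue
        # donor set: the k donors closest in predictive score
        nearest = sorted(donors, key=lambda d: abs(score(d) - score(c)))[:k]
        # impute by a random draw from the donor set
        c["outcome"] = rng.choice(nearest)["outcome"]
    return cases

cases = [
    {"outcome": "A", "outcome_score": 0.9, "miss_score": 0.2},
    {"outcome": "B", "outcome_score": 0.1, "miss_score": 0.8},
    {"outcome": "A", "outcome_score": 0.8, "miss_score": 0.3},
    {"outcome": None, "outcome_score": 0.85, "miss_score": 0.25},
]
result = impute(cases, k=2)
print(result[3]["outcome"])
```

Repeating the random draw M times with different seeds would give the multiple imputations that the method's variance estimates rely on.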

  14. Missing and Spurious Level Corrections for Nuclear Resonances

    International Nuclear Information System (INIS)

    Mitchell, G E; Agvaanluvsan, U; Pato, M P; Shriner, J F

    2005-01-01

    Neutron and proton resonances provide detailed level density information. However, due to experimental limitations, some levels are missed and some are assigned incorrect quantum numbers. The standard method to correct for missing levels uses the experimental widths and the Porter-Thomas distribution. Analysis of the spacing distribution provides an independent determination of the fraction of missing levels. We have derived a general expression for such an imperfect spacing distribution using the maximum entropy principle and applied it to a variety of nuclear resonance data. The problem of spurious levels has not been extensively addressed

  15. Random forest predictive modeling of mineral prospectivity with small number of prospects and data with missing values in Abra (Philippines)

    Science.gov (United States)

    Carranza, Emmanuel John M.; Laborte, Alice G.

    2015-01-01

    Machine learning methods that have been used in data-driven predictive modeling of mineral prospectivity (e.g., artificial neural networks) invariably require a large number of training prospects/locations and are unable to handle missing values in certain evidential data. The Random Forests (RF) algorithm, a machine learning method, has recently been applied to data-driven predictive mapping of mineral prospectivity, and so it is instructive to further study its efficacy in this particular field. This case study, carried out using data from Abra (Philippines), examines (a) if RF modeling can be used for data-driven modeling of mineral prospectivity in areas with a few (i.e., individual layers of evidential data. Furthermore, RF modeling can handle missing values in evidential data through an RF-based imputation technique, whereas in WofE modeling missing values are simply represented by zero weights. Therefore, the RF algorithm is potentially more useful than existing methods that are currently used for data-driven predictive mapping of mineral prospectivity. In particular, it is not a purely black-box method like artificial neural networks in the context of data-driven predictive modeling of mineral prospectivity. However, further testing of the method in other areas with few mineral occurrences is needed to fully investigate its usefulness in data-driven predictive modeling of mineral prospectivity.

  16. Enhancement of tracking performance in electro-optical system based on servo control algorithm

    Science.gov (United States)

    Choi, WooJin; Kim, SungSu; Jung, DaeYoon; Seo, HyoungKyu

    2017-10-01

    Modern electro-optical surveillance and reconnaissance systems require tracking capability to obtain exact images of a target, or to accurately direct the line of sight to a target, whether moving or still. This leads to a tracking system composed of an image-based tracking algorithm and a servo control algorithm. In this study, we focus on the servo control function, to minimize overshoot in the tracking motion and avoid missing the target. The scheme is to limit the acceleration and velocity parameters in the tracking controller, depending on the target state information in the image. We implement the proposed techniques in a system model of a DIRCM, simulate the corresponding environment, and validate the performance on the actual equipment.
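
The limiting scheme can be illustrated with a one-axis toy controller; the gain, limits and time step below are invented for illustration and are not the paper's parameters:

```python
# Hedged sketch: clip the commanded velocity and the per-step velocity
# change (acceleration) so the line of sight settles on the target
# without overshoot.
def limited_track(position, velocity, target, dt, v_max, a_max, gain=4.0):
    v_cmd = gain * (target - position)               # proportional command
    v_cmd = max(-v_max, min(v_max, v_cmd))           # velocity limit
    dv = v_cmd - velocity
    dv = max(-a_max * dt, min(a_max * dt, dv))       # acceleration limit
    velocity += dv
    return position + velocity * dt, velocity

pos, vel = 0.0, 0.0
for _ in range(500):                                 # 5 s of 10 ms steps
    pos, vel = limited_track(pos, vel, 10.0, dt=0.01, v_max=5.0, a_max=20.0)
print(round(pos, 4))
```

With these limits the axis ramps up, cruises at v_max, then decays exponentially onto the target instead of overshooting it.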

  17. Route planning algorithms: Planific@ Project

    Directory of Open Access Journals (Sweden)

    Gonzalo Martín Ortega

    2009-12-01

    Full Text Available Planific@ is a route planning project for the city of Madrid (Spain). Its main aim is to develop an intelligent system capable of routing people from one place in the city to any other using public transport. In order to do this, it is necessary to take into account such things as time, traffic, user preferences, etc. Before beginning to design the project, it is necessary to make a comprehensive study of the main known route planning algorithms suitable for use in this project.
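
The classic starting point for such a study is Dijkstra's shortest-path algorithm; a minimal sketch on an invented toy network of stops with travel times in minutes (the stop names and times are illustrative only):

```python
import heapq

# Minimal Dijkstra sketch of the shortest-path routine a route planner
# builds on. Toy network; edge weights are travel times in minutes.
def shortest_time(graph, start, goal):
    dist = {start: 0}
    queue = [(0, start)]
    while queue:
        d, node = heapq.heappop(queue)
        if node == goal:
            return d
        if d > dist.get(node, float("inf")):
            continue                      # stale queue entry
        for nxt, t in graph.get(node, []):
            nd = d + t
            if nd < dist.get(nxt, float("inf")):
                dist[nxt] = nd
                heapq.heappush(queue, (nd, nxt))
    return None

network = {
    "Sol":      [("Gran Via", 2), ("Atocha", 5)],
    "Gran Via": [("Atocha", 2), ("Chamartin", 9)],
    "Atocha":   [("Chamartin", 6)],
}
print(shortest_time(network, "Sol", "Chamartin"))   # -> 10
```

Real planners layer timetables, transfers and user preferences on top of this core, e.g. by making edge weights time-dependent.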

  18. Improved Degree Search Algorithms in Unstructured P2P Networks

    Directory of Open Access Journals (Sweden)

    Guole Liu

    2012-01-01

    Full Text Available Searching for and retrieving the demanded correct information is an important problem in networks; in particular, designing an efficient search algorithm is a key challenge in unstructured peer-to-peer (P2P) networks. Breadth-first search (BFS) and depth-first search (DFS) are the two typical current search methods. BFS-based algorithms show excellent performance in terms of the search success rate for network resources, while generating a huge number of search messages. On the contrary, DFS-based algorithms reduce the search message quantity but also cause a drop in the search success rate. To address the problem that only one of these performance measures is excellent at a time, we propose two memory-function degree search algorithms: the memory-function maximum degree algorithm (MD) and the memory-function preference degree algorithm (PD). We study their performance, including the search success rate and the search message quantity, in different networks: scale-free networks, random graph networks, and small-world networks. Simulations show that both performance measures are excellent at the same time, improving by at least a factor of 10.
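
The MD idea can be sketched as a single walker that always forwards the query to the highest-degree neighbour not yet visited, with the visited set acting as the "memory function"; the toy topology below is invented:

```python
# Hedged sketch of a memory-function maximum-degree (MD-style) walk.
# Returns the number of hops until a neighbour holding the resource is
# found, or None if the walk gets stuck or the TTL expires.
def md_search(graph, start, wanted, ttl=10):
    visited = {start}                     # the "memory function"
    node, hops = start, 0
    while hops < ttl:
        if wanted in graph[node]:         # a neighbour holds the resource
            return hops + 1
        candidates = [n for n in graph[node] if n not in visited]
        if not candidates:
            return None                   # walk is stuck
        node = max(candidates, key=lambda n: len(graph[n]))
        visited.add(node)
        hops += 1
    return None

peers = {
    "A": ["B", "C"],
    "B": ["A", "C", "D", "E"],
    "C": ["A", "B"],
    "D": ["B", "E"],
    "E": ["B", "D", "F"],
    "F": ["E"],
}
print(md_search(peers, "A", "F"))
```

Steering toward hubs keeps the message count of a single walker while recovering much of BFS's success rate on scale-free topologies.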

  19. Recurrent Neural Networks for Multivariate Time Series with Missing Values.

    Science.gov (United States)

    Che, Zhengping; Purushotham, Sanjay; Cho, Kyunghyun; Sontag, David; Liu, Yan

    2018-04-17

    Multivariate time series data in practical applications, such as health care, geoscience, and biology, are characterized by a variety of missing values. In time series prediction and other related tasks, it has been noted that missing values and their missing patterns are often correlated with the target labels, a.k.a., informative missingness. There is very limited work on exploiting the missing patterns for effective imputation and improving prediction performance. In this paper, we develop novel deep learning models, namely GRU-D, as one of the early attempts. GRU-D is based on Gated Recurrent Unit (GRU), a state-of-the-art recurrent neural network. It takes two representations of missing patterns, i.e., masking and time interval, and effectively incorporates them into a deep model architecture so that it not only captures the long-term temporal dependencies in time series, but also utilizes the missing patterns to achieve better prediction results. Experiments of time series classification tasks on real-world clinical datasets (MIMIC-III, PhysioNet) and synthetic datasets demonstrate that our models achieve state-of-the-art performance and provide useful insights for better understanding and utilization of missing values in time series analysis.
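
GRU-D's input-decay idea, which uses exactly the masking and time-interval representations described above, can be sketched in isolation; the decay weight w is an invented constant here, whereas in GRU-D it is learned:

```python
from math import exp

# Hedged sketch of GRU-D-style input decay: a missing value is replaced by
# a mix of the last observation and the empirical mean, with the mix
# decaying toward the mean as the time gap delta grows.
def decayed_input(x, mask, deltas, empirical_mean, w=0.5):
    filled, x_last = [], empirical_mean
    for xt, m, d in zip(x, mask, deltas):
        if m:                                   # observed: use it directly
            x_last = xt
            filled.append(xt)
        else:                                   # missing: decayed estimate
            gamma = exp(-max(0.0, w * d))
            filled.append(gamma * x_last + (1 - gamma) * empirical_mean)
    return filled

x      = [7.0, 0.0, 0.0, 6.0]     # 0.0 is a placeholder where mask == 0
mask   = [1, 0, 0, 1]
deltas = [0.0, 1.0, 2.0, 1.0]     # time since the last observation
print(decayed_input(x, mask, deltas, empirical_mean=5.0))
```

Feeding the mask and delta into the recurrent cell as extra inputs is what lets the model exploit informative missingness rather than just smoothing over it.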

  20. Definition and Analysis of a System for the Automated Comparison of Curriculum Sequencing Algorithms in Adaptive Distance Learning

    Science.gov (United States)

    Limongelli, Carla; Sciarrone, Filippo; Temperini, Marco; Vaste, Giulia

    2011-01-01

    LS-Lab provides automatic support to comparison/evaluation of the Learning Object Sequences produced by different Curriculum Sequencing Algorithms. Through this framework a teacher can verify the correspondence between the behaviour of different sequencing algorithms and her pedagogical preferences. In fact the teacher can compare algorithms…

  1. Association of the Nurse Work Environment, Collective Efficacy, and Missed Care.

    Science.gov (United States)

    Smith, Jessica G; Morin, Karen H; Wallace, Leigh E; Lake, Eileen T

    2018-06-01

    Missed nursing care is a significant threat to quality patient care. Promoting collective efficacy within nurse work environments could decrease missed care. The purpose was to understand how missed care is associated with nurse work environments and collective efficacy of hospital staff nurses. A cross-sectional, convenience sample was obtained through online surveys from registered nurses working at five southwestern U.S. hospitals. Descriptive, correlational, regression, and path analyses were conducted (N = 233). The percentage of nurses who reported that at least one care activity was missed frequently or always was 94%. Mouth care (36.0% of nurses) and ambulation (35.3%) were missed frequently or always. Nurse work environments and collective efficacy were moderately, positively correlated. Nurse work environments and collective efficacy were associated with less missed care (χ² = 10.714, p = .0054). Fostering collective efficacy in the nurse work environment could reduce missed care and improve patient outcomes.

  2. Motorola Xoom The Missing Manual

    CERN Document Server

    Gralla, Preston

    2011-01-01

    Motorola Xoom is the first tablet to rival the iPad, and no wonder with all of the great features packed into this device. But learning how to use everything can be tricky-and Xoom doesn't come with a printed guide. That's where this Missing Manual comes in. Gadget expert Preston Gralla helps you master your Xoom with step-by-step instructions and clear explanations. As with all Missing Manuals, this book offers refreshing, jargon-free prose and informative illustrations. Use your Xoom as an e-book reader, music player, camcorder, and phone. Keep in touch with email, video and text chat, and so

  3. Evaluation of algorithms used to order markers on genetic maps.

    Science.gov (United States)

    Mollinari, M; Margarido, G R A; Vencovsky, R; Garcia, A A F

    2009-12-01

    When building genetic maps, it is necessary to choose from several marker ordering algorithms and criteria, and the choice is not always simple. In this study, we evaluate the efficiency of algorithms try (TRY), seriation (SER), rapid chain delineation (RCD), recombination counting and ordering (RECORD) and unidirectional growth (UG), as well as the criteria PARF (product of adjacent recombination fractions), SARF (sum of adjacent recombination fractions), SALOD (sum of adjacent LOD scores) and LHMC (likelihood through hidden Markov chains), used with the RIPPLE algorithm for error verification, in the construction of genetic linkage maps. A linkage map of a hypothetical diploid and monoecious plant species was simulated containing one linkage group and 21 markers with fixed distance of 3 cM between them. In all, 700 F(2) populations were randomly simulated with 100 and 400 individuals with different combinations of dominant and co-dominant markers, as well as 10 and 20% of missing data. The simulations showed that, in the presence of co-dominant markers only, any combination of algorithm and criteria may be used, even for a reduced population size. In the case of a smaller proportion of dominant markers, any of the algorithms and criteria (except SALOD) investigated may be used. In the presence of high proportions of dominant markers and smaller samples (around 100), the probability of repulsion linkage increases between them and, in this case, use of the algorithms TRY and SER associated to RIPPLE with criterion LHMC would provide better results.
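
One of the ordering criteria named above, SARF, is simple enough to sketch directly: among candidate marker orders, prefer the one minimizing the summed recombination fraction between adjacent markers. The pairwise fractions below are invented toy values for four markers:

```python
from itertools import permutations

# Hedged sketch of the SARF criterion (sum of adjacent recombination
# fractions) with invented pairwise fractions for 4 markers.
rf = {
    frozenset(p): v for p, v in {
        ("M1", "M2"): 0.03, ("M1", "M3"): 0.06, ("M1", "M4"): 0.09,
        ("M2", "M3"): 0.03, ("M2", "M4"): 0.06, ("M3", "M4"): 0.03,
    }.items()
}

def sarf(order):
    return sum(rf[frozenset((a, b))] for a, b in zip(order, order[1:]))

# Brute force is fine for 4 markers; the TRY/SER/RCD/RECORD/UG algorithms
# exist precisely because this search explodes for realistic marker counts.
best = min(permutations(["M1", "M2", "M3", "M4"]), key=sarf)
print(best, round(sarf(best), 4))
```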

  4. A chaos wolf optimization algorithm with self-adaptive variable step-size

    Directory of Open Access Journals (Sweden)

    Yong Zhu

    2017-10-01

    Full Text Available To explore the problem of parameter optimization for complex nonlinear functions, a chaos wolf optimization algorithm (CWOA) with self-adaptive variable step-size was proposed. The algorithm is based on the swarm intelligence of a wolf pack, fully simulating the predation behavior and prey distribution of wolves. It possesses three intelligent behaviors: migration, summoning and siege. The 'winner-take-all' competition rule and the 'survival-of-the-fittest' update mechanism are also characteristic of the algorithm. Moreover, it combines the strategies of self-adaptive variable step-size search and chaos optimization. The CWOA was applied to parameter optimization of twelve typical, complex nonlinear functions, and the obtained results were compared with many existing algorithms, including the classical genetic algorithm, the particle swarm optimization algorithm and the leader wolf pack search algorithm. The investigation results indicate that CWOA possesses superior optimization ability, with advantages in optimization accuracy and convergence rate; furthermore, it demonstrates high robustness and global searching ability.
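
Two of the ingredients named above can be sketched in isolation: a logistic chaos map for candidate generation and a self-adaptive step size that shrinks when a candidate fails to improve. The constants and schedule here are invented; this is not the CWOA's wolf-pack machinery:

```python
# Hedged sketch: chaos-driven 1-D minimization with a self-adaptive step.
def chaos_minimize(f, lo, hi, iters=2000, u=0.7):
    step = (hi - lo) / 2.0
    best_x = (lo + hi) / 2.0
    best_f = f(best_x)
    for _ in range(iters):
        u = 4.0 * u * (1.0 - u)          # logistic map, chaotic at r = 4
        cand = min(hi, max(lo, best_x + step * (2.0 * u - 1.0)))
        fc = f(cand)
        if fc < best_f:
            best_x, best_f = cand, fc
        else:
            step *= 0.995                # self-adaptive: shrink on failure
    return best_x, best_f

x, fx = chaos_minimize(lambda t: (t - 1.234) ** 2, -10.0, 10.0)
print(x, fx)
```

The chaotic sequence gives broad, non-repeating exploration early on, while the shrinking step concentrates the search around the incumbent, mirroring the migration-then-siege pattern the abstract describes.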

  5. Missing data imputation: focusing on single imputation.

    Science.gov (United States)

    Zhang, Zhongheng

    2016-01-01

    Complete case analysis is widely used for handling missing data, and it is the default method in many statistical packages. However, this method may introduce bias, and useful information will be omitted from the analysis. Therefore, many imputation methods have been developed to fill the gap. The present article focuses on single imputation. Imputation with the mean, median or mode is simple but, like complete case analysis, can bias the mean and the standard deviation. Furthermore, these methods ignore relationships with other variables. Regression imputation can preserve the relationship between the variable with missing values and the other variables. Many more sophisticated methods exist for handling missing values in longitudinal data. This article focuses primarily on how to implement R code to perform single imputation, while avoiding complex mathematical calculations.
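The article's examples are in R; as a language-neutral illustration, here is a dependency-free Python sketch of two of the single-imputation strategies discussed, mean imputation and simple linear-regression imputation:

```python
def mean_impute(xs):
    """Replace None entries with the mean of the observed values."""
    obs = [x for x in xs if x is not None]
    m = sum(obs) / len(obs)
    return [m if x is None else x for x in xs]

def regression_impute(x, y):
    """Fill missing y values from a least-squares line fitted on the
    complete (x, y) pairs."""
    pairs = [(a, b) for a, b in zip(x, y) if b is not None]
    n = len(pairs)
    sx = sum(a for a, _ in pairs)
    sy = sum(b for _, b in pairs)
    sxx = sum(a * a for a, _ in pairs)
    sxy = sum(a * b for a, b in pairs)
    slope = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    intercept = (sy - slope * sx) / n
    return [slope * a + intercept if b is None else b for a, b in zip(x, y)]

print(mean_impute([1.0, None, 3.0]))  # [1.0, 2.0, 3.0]
print(regression_impute([1, 2, 3, 4], [2.0, 4.0, None, 8.0]))  # [2.0, 4.0, 6.0, 8.0]
```

Mean imputation shrinks the variance of the imputed variable, while regression imputation preserves its linear relationship with the predictor, which is exactly the trade-off described above.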

  6. A combination-weighted Feldkamp-based reconstruction algorithm for cone-beam CT

    International Nuclear Information System (INIS)

    Mori, Shinichiro; Endo, Masahiro; Komatsu, Shuhei; Kandatsu, Susumu; Yashiro, Tomoyasu; Baba, Masayuki

    2006-01-01

    The combination-weighted Feldkamp algorithm (CW-FDK) was developed and tested in a phantom in order to reduce cone-beam artefacts and enhance cranio-caudal reconstruction coverage, in an attempt to improve image quality when utilizing cone-beam computed tomography (CBCT). Using a 256-slice cone-beam CT (256CBCT), image quality (CT-number uniformity and geometrical accuracy) was quantitatively evaluated in phantom and clinical studies, and the results were compared to those obtained with the original Feldkamp algorithm. A clinical study was done in lung cancer patients under breath-hold and free-breathing conditions. Image quality for the original Feldkamp algorithm is degraded at the edge of the scan region due to the missing volume, commensurate with the cranio-caudal distance between the reconstruction and central planes. The CW-FDK extended the reconstruction coverage to equal the scan coverage and improved reconstruction accuracy, unaffected by the cranio-caudal distance. The extended reconstruction coverage with good image quality provided by the CW-FDK will be clinically investigated for improving diagnostic and radiotherapy applications. In addition, this algorithm can also be adapted for use in relatively wide cone-angle CBCT, such as flat-panel detector CBCT.

  7. Time Series Forecasting with Missing Values

    Directory of Open Access Journals (Sweden)

    Shin-Fu Wu

    2015-11-01

    Full Text Available Time series prediction has become more popular in various kinds of applications, such as weather prediction, control engineering, financial analysis and industrial monitoring. To deal with real-world problems, we are often faced with missing values in the data due to sensor malfunctions or human errors. Traditionally, the missing values are simply omitted or replaced by means of imputation methods. However, omitting those missing values may cause temporal discontinuity, while imputation methods may alter the original time series. In this study, we propose a novel forecasting method based on the least squares support vector machine (LSSVM). We augment the input patterns with temporal information, defined as the local time index (LTI). Time series data as well as local time indexes are fed to the LSSVM for forecasting without imputation. We compare the forecasting performance of our method with that of imputation methods. Experimental results show that the proposed method is promising and warrants further investigation.
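The paper's exact LTI definition is not reproduced here, but the general idea of feeding the learner relative time offsets instead of imputed values can be sketched as follows (the window size and offset encoding are illustrative assumptions):

```python
def lti_patterns(series, window=3):
    """Build (features, target) training pairs from a series with gaps.
    Observed points keep their original time stamps; missing points are
    skipped, not imputed, so the learner sees the true spacing."""
    obs = [(t, v) for t, v in enumerate(series) if v is not None]
    patterns = []
    for i in range(window, len(obs)):
        t0 = obs[i][0]
        # local time index: offset of each lagged point from the target time
        feats = [(t - t0, v) for t, v in obs[i - window:i]]
        patterns.append((feats, obs[i][1]))
    return patterns

series = [1.0, 2.0, None, 4.0, 5.0, None, 7.0]
for feats, target in lti_patterns(series):
    print(feats, '->', target)
```

Each pattern carries both the lagged values and their distances from the target time, so a gap shows up to the regressor (an LSSVM in the paper) as an unusually large offset rather than as a fabricated value.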

  8. Missing School Matters

    Science.gov (United States)

    Balfanz, Robert

    2016-01-01

    Results of a survey conducted by the Office for Civil Rights show that 6 million public school students (13%) are not attending school regularly. Chronic absenteeism--defined as missing more than 10% of school for any reason--has been negatively linked to many key academic outcomes. Evidence shows that students who exit chronic absentee status can…

  9. Miss World going deshi

    DEFF Research Database (Denmark)

    Wildermuth, Norbert

      In November 1996 the South Indian metropolis Bangalore hosted the annual Miss World show. The live event and its televisualisation became a prominent symbol of India's economic liberalisation and of the immanent globalizing dimensions of this development. As such, the highly prestigious......, the pageant's contestation, which gave rise to a series of vehement protests and a broad public debate about the country's cultural alienation, marked a crucial point in the trend towards the (re)localisation of the Indian television landscape. In consequence, the 1996 Miss World show and its...... their vision and politics of gender, nation and modernity on the larger Indian public, over the last two decades. Engaging the Indian population increasingly by way of the new electronic, c&s distributed media, competing discourses of gender and sexuality were projected, basically as a necessary, effective...

  10. An optimal algorithm for configuring delivery options of a one-dimensional intensity-modulated beam

    International Nuclear Information System (INIS)

    Luan Shuang; Chen, Danny Z; Zhang, Li; Wu Xiaodong; Yu, Cedric X

    2003-01-01

    The problem of generating delivery options for one-dimensional intensity-modulated beams (1D IMBs) arises in intensity-modulated radiation therapy. In this paper, we present an algorithm with the optimal running time, based on the 'rightmost-preference' method, for generating all distinct delivery options for an arbitrary 1D IMB. The previously best known method for generating delivery options for a 1D IMB with N left leaf positions and N right leaf positions is a 'brute-force' solution, which first generates all N! possible combinations of the left and right leaf positions and then removes combinations that are not physically allowed delivery options. Compared with the brute-force method, our algorithm has several advantages: (1) Our algorithm runs in an optimal time that is linearly proportional to the total number of distinct delivery options that it actually produces. Note that for a 1D IMB with multiple peaks, the total number of distinct delivery options in general tends to be considerably smaller than the worst case N!. (2) Our algorithm can be adapted to generating delivery options subject to additional constraints such as the 'minimum leaf separation' constraint. (3) Our algorithm can also be used to generate random subsets of delivery options; this feature is especially useful when the 1D IMBs in question have too many delivery options for a computer to store and process. The key idea of our method is that we impose an order on how left leaf positions should be paired with right leaf positions. Experiments indicated that our rightmost-preference algorithm runs dramatically faster than the brute-force algorithm. This implies that our algorithm can handle 1D IMBs whose sizes are substantially larger than those handled by the brute-force method. Applications of our algorithm in therapeutic techniques such as intensity-modulated arc therapy and 2D modulations are also discussed.
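The brute-force baseline that the paper improves on is easy to state in code. This hedged sketch enumerates all N! pairings and keeps the physically allowed ones (each left leaf strictly to the left of its paired right leaf); it is for intuition only, not the authors' optimal rightmost-preference algorithm:

```python
from itertools import permutations

def delivery_options(lefts, rights):
    """Brute-force baseline: try all N! pairings of left and right leaf
    positions, keeping only physically deliverable ones (left < right)."""
    opts = set()
    for perm in permutations(rights):
        if all(l < r for l, r in zip(lefts, perm)):
            opts.add(tuple(sorted(zip(lefts, perm))))
    return sorted(opts)

# Toy 1D IMB: left boundaries 0 and 2, right boundaries 4 and 6.
for opt in delivery_options([0, 2], [4, 6]):
    print(opt)
```

On this toy input two pairings survive, a staggered and a nested decomposition of the same profile; imposing an order on how lefts are paired with rights lets the paper's algorithm emit exactly these without ever visiting the invalid permutations.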

  11. On the relationship between Gaussian stochastic blockmodels and label propagation algorithms

    International Nuclear Information System (INIS)

    Zhang, Junhao; Hu, Junfeng; Chen, Tongfei

    2015-01-01

    The problem of community detection has received great attention in recent years. Many methods have been proposed to discover communities in networks. In this paper, we propose a Gaussian stochastic blockmodel that uses Gaussian distributions to fit the edge weights in networks for non-overlapping community detection. The maximum likelihood estimation of this model has the same objective function as general label propagation with node preference. The node preference of a specific vertex turns out to be a value proportional to the intra-community eigenvector centrality (the corresponding entry in the principal eigenvector of the adjacency matrix of the subgraph inside that vertex's community) under maximum likelihood estimation. Additionally, the maximum likelihood estimation of a constrained version of our model is highly related to another extension of the label propagation algorithm, namely, the label propagation algorithm under constraint. Experiments show that the proposed Gaussian stochastic blockmodel performs well on various benchmark networks. (paper)
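The connection to label propagation can be made concrete with a generic synchronous variant. The sketch below uses a uniform node-preference vector `pref`; in the model above the preference would instead be proportional to intra-community eigenvector centrality. This is an illustration of label propagation with node preference, not the paper's estimation procedure:

```python
def label_propagation(adj, pref, iters=20):
    """Synchronous label propagation with node preference: every node adopts
    the label with the largest preference-weighted support among its
    neighbours (ties broken towards the smallest label id)."""
    labels = {v: v for v in adj}          # each node starts with its own label
    for _ in range(iters):
        new = {}
        for v in adj:
            score = {}
            for u in adj[v]:
                score[labels[u]] = score.get(labels[u], 0.0) + pref[u]
            new[v] = max(sorted(score), key=lambda l: score[l])
        if new == labels:                 # converged
            break
        labels = new
    return labels

# Two triangles joined by a single bridge edge; uniform node preference.
adj = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2, 4, 5], 4: [3, 5], 5: [3, 4]}
pref = {v: 1.0 for v in adj}
print(label_propagation(adj, pref))  # the two triangles end up with distinct labels
```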

  12. Research on Modified Root-MUSIC Algorithm of DOA Estimation Based on Covariance Matrix Reconstruction

    Directory of Open Access Journals (Sweden)

    Changgan SHU

    2014-09-01

    Full Text Available In the standard root multiple signal classification (root-MUSIC) algorithm, the performance of direction-of-arrival estimation degrades, and may even fail, under a low signal-to-noise ratio and small angular separation between signals. By reconstructing and weighting the covariance matrix of the received signal, the modified algorithm can provide more accurate estimation results. The computer simulation and performance analysis given next show that, under a low signal-to-noise ratio and strong correlation between signals, the proposed modified algorithm provides preferable azimuth estimation performance compared with the standard method.

  13. Using Deep Learning Algorithm to Enhance Image-review Software for Surveillance Cameras

    Energy Technology Data Exchange (ETDEWEB)

    Cui, Yonggang

    2018-05-07

    We propose the development of proven deep learning algorithms to flag objects and events of interest in Next Generation Surveillance System (NGSS) surveillance to make IAEA image review more efficient. Video surveillance is one of the core monitoring technologies used by the IAEA Department of Safeguards when implementing safeguards at nuclear facilities worldwide. The current image review software GARS has limited automated functions, such as scene-change detection, black image detection and missing scene analysis, but struggles with highly cluttered backgrounds. A cutting-edge algorithm to be developed in this project will enable efficient and effective searches in images and video streams by identifying and tracking safeguards relevant objects and detect anomalies in their vicinity. In this project, we will develop the algorithm, test it with the IAEA surveillance cameras and data sets collected at simulated nuclear facilities at BNL and SNL, and implement it in a software program for potential integration into the IAEA’s IRAP (Integrated Review and Analysis Program).

  14. An improved algorithm for personalized recommendation on MOOCs

    Directory of Open Access Journals (Sweden)

    Yuqin Wang

    2017-09-01

    Full Text Available Purpose – In the past few years, millions of people have started to acquire knowledge from Massive Open Online Courses (MOOCs). MOOCs contain massive video courses produced by instructors, and learners all over the world can get access to these courses via the internet. However, faced with massive courses, learners often waste much time finding courses they like. This paper aims to explore how to make accurate personalized recommendations for MOOC users. Design/methodology/approach – This paper proposes a multi-attribute weight algorithm based on collaborative filtering (CF) to select a recommendation set of courses for target MOOC users. Findings – The recall of the proposed algorithm is higher than that of both traditional CF and a CF-based algorithm, the uncertain neighbors’ collaborative filtering recommendation algorithm. The higher the recall, the more accurate the recommendation result. Originality/value – This paper captures the target users’ preferences for the first time by separately calculating the weight of each course attribute and the weight of each attribute value.
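A minimal sketch of attribute-weighted scoring conveys the flavor of the approach. The course catalogue, attribute names and weights below are hypothetical, and the scoring rule is a deliberate simplification rather than the paper's formula:

```python
def recommend(user_history, courses, weights, top_n=2):
    """Score unseen courses by how often their attribute values appear in
    the user's history, weighting each attribute separately."""
    profile = {}
    for cid in user_history:
        for attr, val in courses[cid].items():
            profile[(attr, val)] = profile.get((attr, val), 0) + 1

    def score(cid):
        return sum(weights[attr] * profile.get((attr, val), 0)
                   for attr, val in courses[cid].items())

    unseen = [c for c in courses if c not in user_history]
    return sorted(unseen, key=score, reverse=True)[:top_n]

# Hypothetical catalogue: attribute weights favour topic over level.
courses = {
    'c1': {'topic': 'ml', 'level': 'intro'},
    'c2': {'topic': 'ml', 'level': 'advanced'},
    'c3': {'topic': 'stats', 'level': 'intro'},
    'c4': {'topic': 'history', 'level': 'intro'},
}
print(recommend(['c1'], courses, {'topic': 2.0, 'level': 1.0}))  # 'c2' ranks first
```

Weighting attributes and attribute values separately is what lets the same history rank a same-topic course above a same-level one, mirroring the idea described in the abstract.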

  15. Recovering missing mesothelioma deaths in death certificates using hospital records.

    Science.gov (United States)

    Santana, Vilma S; Algranti, Eduardo; Campos, Felipe; Cavalcante, Franciana; Salvi, Leonardo; Santos, Simone A; Inamine, Rosemeire N; Souza, William; Consonni, Dario

    2018-04-02

    In Brazil, underreporting of mesothelioma and cancer of the pleura (MCP) is suspected to be high. Records from death certificates (SIM) and hospital registers (SIH-SUS) can be combined to recover missing data, but only anonymous databases are available. This study shows how the data fields common to both systems can be used for linkage, and assesses the accuracy of the matching. Mesothelioma (all sites, ICD-10 codes C45.0-C45.9) and cancer of the pleura (C38.4) were retrieved from both information systems and combined using a linkage algorithm. Accuracy was examined with non-anonymous databases, limited to the state of São Paulo. We found 775 cases in death certificates and 283 in hospital registers. The linkage matched 57 cases, all accurately paired. Three cases, 0.4% in SIM and 1.3% in SIH-SUS, could not be matched because of data inconsistencies. A computer linkage can recover MCP cases from hospital records not found in death certificates in Brazil. © 2018 Wiley Periodicals, Inc.
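Deterministic linkage on shared fields can be sketched in a few lines. The field names below are hypothetical stand-ins for the variables common to SIM and SIH-SUS; the study's actual algorithm is more elaborate:

```python
def link_records(deaths, hospital, keys):
    """Deterministic linkage: two anonymous records match when every
    shared key field agrees (a simplification of the study's algorithm)."""
    index = {}
    for rec in hospital:
        index.setdefault(tuple(rec[k] for k in keys), []).append(rec)
    matches, unmatched = [], []
    for rec in deaths:
        cands = index.get(tuple(rec[k] for k in keys), [])
        (matches if cands else unmatched).append(rec)
    return matches, unmatched

deaths = [
    {'id': 1, 'birth_year': 1950, 'sex': 'M', 'municipality': 'SP'},
    {'id': 2, 'birth_year': 1962, 'sex': 'F', 'municipality': 'RJ'},
]
hospital = [{'birth_year': 1950, 'sex': 'M', 'municipality': 'SP'}]
m, u = link_records(deaths, hospital, ['birth_year', 'sex', 'municipality'])
print(len(m), len(u))  # 1 1
```

Exact agreement on every key is what makes the matches verifiable; the data inconsistencies mentioned above are precisely the cases where one key field disagrees.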

  16. Topology-Based Estimation of Missing Smart Meter Readings

    Directory of Open Access Journals (Sweden)

    Daisuke Kodaira

    2018-01-01

    Full Text Available Smart meters often fail to measure or transmit the data they record when measuring energy consumption, known as meter readings, owing to faulty measuring equipment or unreliable communication modules. Existing studies do not address successive and non-periodical missing meter readings. This paper proposes a method whereby missing readings observed at a node are estimated by using circuit theory principles that leverage the voltage and current data from adjacent nodes. A case study is used to demonstrate the ability of the proposed method to successfully estimate the missing readings over an entire day during which outages and unpredictable perturbations occurred.
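A minimal sketch of the circuit-theory idea, assuming a radial feeder where the upstream current and the neighbouring branch currents are metered: by Kirchhoff's current law, whatever current is unaccounted for must flow into the unmetered node.

```python
def estimate_missing_load(upstream_current, metered_currents, voltage):
    """Kirchhoff's current law at a radial feeder node: the unmetered branch
    carries whatever the upstream feeder supplies beyond the metered branches.
    Returns apparent power in volt-amperes (single-phase simplification)."""
    missing_current = upstream_current - sum(metered_currents)
    return voltage * missing_current

# Feeder carries 30 A; metered laterals draw 12 A and 8 A at 230 V.
print(estimate_missing_load(30.0, [12.0, 8.0], 230.0))  # 2300.0
```

The paper's method generalizes this balance over the network topology so that successive, non-periodic gaps can be filled throughout a day of outages and perturbations.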

  17. The Vocational Preference Inventory Scores and Environmental Preferences

    Science.gov (United States)

    Kunce, Joseph T.; Kappes, Bruno Maurice

    1976-01-01

    This study investigated the relationship between vocational interest measured by the Vocational Preference Inventory (VPI) and preferences of 175 undergraduates for structured or unstructured environments. Males having clear-cut preferences for structured situations had significantly higher Realistic-Conventional scores than those without…

  18. Missed nursing care and its relationship with confidence in delegation among hospital nurses.

    Science.gov (United States)

    Saqer, Tahani J; AbuAlRub, Raeda F

    2018-04-06

    To (i) identify the types of and reasons for missed nursing care among Jordanian hospital nurses; (ii) identify predictors of missed nursing care based on study variables; and (iii) examine the relationship between nurses' confidence in delegation and missed nursing care. Missed nursing care is a global concern for nurses and nurse administrators. Investigating the relationship between confidence in delegation and missed nursing care might help in designing strategies that enable nurses to minimise missed care and enhance quality of services. A correlational research design was used for this study. A convenience sample of 362 hospital nurses completed the missed nursing care survey and the confidence and intent to delegate scale. The results of the study revealed that ambulating and feeding patients on time, doing mouth care and attending interdisciplinary care conferences were the most frequently missed types of care. The mean score for missed nursing care was 2.78 on a scale from 1-5. The most prevalent reasons for missed care were labour resources, followed by material resources and then communication. Around 45% of the variation in the perceived level of missed nursing care was explained by background variables and perceived reasons for missed nursing care. However, the relationship between confidence in delegation and missed care was not significant. The results of this study add to the body of international literature on the most prevalent types of and reasons for missed nursing care in a different cultural context. Highlighting the most prevalent reasons for missed nursing care could help nurse administrators in designing responsive strategies to eliminate or reduce such reasons. © 2018 John Wiley & Sons Ltd.

  19. MISSE PEACE Polymers Atomic Oxygen Erosion Results

    Science.gov (United States)

    deGroh, Kim, K.; Banks, Bruce A.; McCarthy, Catherine E.; Rucker, Rochelle N.; Roberts, Lily M.; Berger, Lauren A.

    2006-01-01

    Forty-one different polymer samples, collectively called the Polymer Erosion and Contamination Experiment (PEACE) Polymers, have been exposed to the low Earth orbit (LEO) environment on the exterior of the International Space Station (ISS) for nearly 4 years as part of Materials International Space Station Experiment 2 (MISSE 2). The objective of the PEACE Polymers experiment was to determine the atomic oxygen erosion yield of a wide variety of polymeric materials after long term exposure to the space environment. The polymers range from those commonly used for spacecraft applications, such as Teflon (DuPont) FEP, to more recently developed polymers, such as high temperature polyimide PMR (polymerization of monomer reactants). Additional polymers were included to explore erosion yield dependence upon chemical composition. The MISSE PEACE Polymers experiment was flown in MISSE Passive Experiment Carrier 2 (PEC 2), tray 1, on the exterior of the ISS Quest Airlock and was exposed to atomic oxygen along with solar and charged particle radiation. MISSE 2 was successfully retrieved during a space walk on July 30, 2005, during Discovery's STS-114 Return to Flight mission. Details on the specific polymers flown, flight sample fabrication, pre-flight and post-flight characterization techniques, and atomic oxygen fluence calculations are discussed along with a summary of the atomic oxygen erosion yield results. The MISSE 2 PEACE Polymers experiment is unique because it has the widest variety of polymers flown in LEO for a long duration and provides extremely valuable erosion yield data for spacecraft design purposes.

  20. Missed retinal breaks in rhegmatogenous retinal detachment

    Directory of Open Access Journals (Sweden)

    Brijesh Takkar

    2016-12-01

    Full Text Available AIM: To evaluate the causes and associations of missed retinal breaks (MRBs) and posterior vitreous detachment (PVD) in patients with rhegmatogenous retinal detachment (RRD). METHODS: Case sheets of patients undergoing vitreoretinal surgery for RRD at a tertiary eye care centre were evaluated retrospectively. Out of the 378 records screened, 253 were included for analysis of MRBs and 191 patients were included for analysis of PVD, depending on the inclusion criteria. Features of RRD and retinal breaks noted on examination were compared to the status of MRBs and PVD detected during surgery for possible associations. RESULTS: Overall, 27% of patients had MRBs. Retinal holes were commonly missed in patients with lattice degeneration, while missed retinal tears were associated with the presence of complete PVD. Prior cataract surgery was significantly associated with MRBs (P=0.033), with the odds of missing a retinal break being 1.91 compared to patients with a natural lens. Advanced proliferative vitreoretinopathy (PVR) and retinal bullae were the most common reasons for missing a retinal break during examination. PVD was present in 52% of the cases and was wrongly assessed in 16%. Retinal bullae, pseudophakia/aphakia, myopia, and horseshoe retinal tears were strongly associated with the presence of PVD. Traumatic RRDs were rarely associated with PVD. CONCLUSION: Pseudophakic patients, and patients with retinal bullae or advanced PVR, should be carefully screened for MRBs. Though the Weiss ring is a good indicator of PVD, it may still be overdiagnosed in some cases. PVD is associated with retinal bullae and pseudophakia, and inversely with traumatic RRD.

  1. The tradition algorithm approach underestimates the prevalence of serodiagnosis of syphilis in HIV-infected individuals.

    Science.gov (United States)

    Chen, Bin; Peng, Xiuming; Xie, Tiansheng; Jin, Changzhong; Liu, Fumin; Wu, Nanping

    2017-07-01

    Currently, there are three algorithms for screening of syphilis: the traditional algorithm, the reverse algorithm and the European Centre for Disease Prevention and Control (ECDC) algorithm. To date, there is no generally recognized diagnostic algorithm. When syphilis meets HIV, the situation is even more complex. To evaluate their screening performance and impact on the seroprevalence of syphilis in HIV-infected individuals, we conducted a cross-sectional study that included 865 serum samples from HIV-infected patients in a tertiary hospital. Every sample (one per patient) was tested with the toluidine red unheated serum test (TRUST), the T. pallidum particle agglutination assay (TPPA), and a Treponema pallidum enzyme immunoassay (TP-EIA) according to the manufacturer's instructions. The results of syphilis serological testing were interpreted following the different algorithms respectively. We directly compared the traditional syphilis screening algorithm with the reverse syphilis screening algorithm in this unique population. The reverse algorithm achieved a remarkably higher seroprevalence of syphilis than the traditional algorithm (24.9% vs. 14.2%). Compared with the reverse algorithm, the traditional algorithm also had a missed serodiagnosis rate of 42.8%. The total percentages of agreement and corresponding kappa values of the traditional and ECDC algorithms compared with those of the reverse algorithm were as follows: 89.4%, 0.668; 99.8%, 0.994. There was a very good strength of agreement between the reverse and the ECDC algorithm. Our results support the reverse (or ECDC) algorithm in screening of syphilis in HIV-infected populations. In addition, our study demonstrated that screening of HIV populations using different algorithms may result in a statistically different seroprevalence of syphilis.
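The decision flow of the two competing algorithms can be written out as code. This is a simplified sketch based on the abstract's description (real interpretive criteria involve titres and retesting), but it shows why the traditional algorithm misses TRUST-negative, treponemal-positive cases:

```python
def traditional(trust, tppa=None):
    """Traditional algorithm: non-treponemal screen (TRUST) first;
    only reactive samples are confirmed with a treponemal test (TPPA)."""
    if not trust:
        return 'negative'     # TPPA is never ordered for TRUST-negatives
    return 'positive' if tppa else 'negative'

def reverse(tp_eia, trust=None, tppa=None):
    """Reverse algorithm: treponemal screen (TP-EIA) first; reactive samples
    get TRUST, and discordant results are resolved with TPPA."""
    if not tp_eia:
        return 'negative'
    if trust:
        return 'positive'
    return 'positive' if tppa else 'negative'

# Latent or treated syphilis is often TRUST-negative but treponemal-positive:
print(traditional(trust=False))                      # negative (missed)
print(reverse(tp_eia=True, trust=False, tppa=True))  # positive
```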

  2. Miss Julie: A Psychoanalytic Study

    Directory of Open Access Journals (Sweden)

    Sonali Jain

    2015-10-01

    Full Text Available Sigmund Freud theorized that ‘the hero of the tragedy must suffer…to bear the burden of tragic guilt…(that lay in rebellion against some divine or human authority).’ August Strindberg, the Swedish poet, playwright, author and visual artist, like Shakespeare before him, portrayed insanity as the ultimate tragic conflict. In this paper I seek to explore and reiterate the dynamics of human relationships that are as relevant today as they were in Strindberg’s time. I propose to examine Strindberg’s Miss Julie, a play set in nineteenth-century Sweden, through a psychoanalytic lens. The play deals with bold themes of class and sexual identity politics. Notwithstanding the progress made in breaking down gender barriers, the inequalities inherent in a patriarchal system persist in modern society. Miss Julie highlights these imbalances. My analysis of the play deals with issues of culture and psyche, and draws on Freud, Melanie Klein, Lacan, Luce Irigaray and other contemporary feminists. Miss Julie is a discourse on hysteria, which is still pivotal to psychoanalysis. Prominent philosophers like Hegel and the psychoanalyst Jacques Lacan have written about the dialectic of the master and the slave – a relationship that is characterized by dependence, demand and cruelty. The history of human civilization shows beyond any doubt that there is an intimate connection between cruelty and the sexual instinct. An analysis of the text is carried out using the sado-masochistic dynamic as well as the slave-master discourse. I argue that Miss Julie subverts the slave-master relationship. The struggle for dominance and power is closely linked with the theme of sexuality in the unconscious. To quote the English actor and director Alan Rickman, ‘Watching or working on the plays of Strindberg is like seeing the skin, flesh and bones of life separated from each other. Challenging and timeless.’

  3. Network-based recommendation algorithms: A review

    Science.gov (United States)

    Yu, Fei; Zeng, An; Gillard, Sébastien; Medo, Matúš

    2016-06-01

    Recommender systems are a vital tool that helps us to overcome the information overload problem. They are being used by most e-commerce web sites and attract the interest of a broad scientific community. A recommender system uses data on users' past preferences to choose new items that might be appreciated by a given individual user. While many approaches to recommendation exist, the approach based on a network representation of the input data has gained considerable attention in the past. We review here a broad range of network-based recommendation algorithms and, for the first time, compare their performance on three distinct real datasets. We present recommendation topics that go beyond the mere question of which algorithm to use, such as the possible influence of recommendation on the evolution of systems that use it, and finally discuss open research directions and challenges.

  4. Comprehensive preference optimization of an irreversible thermal engine using pareto based mutable smart bee algorithm and generalized regression neural network

    DEFF Research Database (Denmark)

    Mozaffari, Ahmad; Gorji-Bandpy, Mofid; Samadian, Pendar

    2013-01-01

    Optimization and control of complex engineering systems is a phenomenon that has attracted the interest of numerous scientists. Until now, a variety of intelligent optimizing and controlling techniques such as neural networks, fuzzy logic, game theory, support vector machines...... and stochastic algorithms have been proposed to facilitate control of engineering systems. In this study, an extended version of the mutable smart bee algorithm (MSBA) called Pareto based mutable smart bee (PBMSB) is proposed to cope with multi-objective problems. Besides, a set of benchmark problems and four...... well-known Pareto based optimizing algorithms, i.e. the multi-objective bee algorithm (MOBA), multi-objective particle swarm optimization (MOPSO) algorithm, non-dominated sorting genetic algorithm (NSGA-II), and strength Pareto evolutionary algorithm (SPEA 2), are utilized to confirm the acceptable...

  5. Managed ventricular pacing vs. conventional dual-chamber pacing for elective replacements: the PreFER MVP study: clinical background, rationale, and design.

    Science.gov (United States)

    Quesada, Aurelio; Botto, Gianluca; Erdogan, Ali; Kozak, Milan; Lercher, Peter; Nielsen, Jens Cosedis; Piot, Olivier; Ricci, Renato; Weiss, Christian; Becker, Daniel; Wetzels, Gwenn; De Roy, Luc

    2008-03-01

    Several clinical studies have shown that, in patients with intact atrioventricular (AV) conduction, unnecessary chronic right ventricular (RV) pacing can be detrimental. The managed ventricular pacing (MVP) algorithm is designed to give preference to spontaneous AV conduction, thus minimizing RV pacing. The clinical outcomes of MVP are being studied in several ongoing trials in patients undergoing a first device implantation, but it is unknown to what extent MVP is beneficial in patients with a history of ventricular pacing. The purpose of the Prefer for Elective Replacement MVP (PreFER MVP) study is to assess the superiority of the MVP algorithm to conventional pacemaker and implantable cardioverter-defibrillator programming in terms of freedom from hospitalization for cardiovascular causes in a population of patients exposed to long periods of ventricular pacing. PreFER MVP is a prospective, 1:1 parallel, randomized (MVP ON/MVP OFF), single-blinded multi-centre trial. The study population consists of patients with more than 40% ventricular pacing documented with their previous device. Approximately, 600 patients will be randomized and followed for at least 24 months. The primary endpoint comprises cardiovascular hospitalization. The PreFER MVP trial is the first large prospective randomized clinical trial evaluating the effect of MVP in patients with a history of RV pacing.

  6. Application of third order stochastic dominance algorithm in investments ranking

    Directory of Open Access Journals (Sweden)

    Lončar Sanja

    2012-01-01

    Full Text Available The paper presents the use of third order stochastic dominance in ranking investment alternatives, using TSD algorithms (Levy, 2006) for testing third order stochastic dominance. The main goal of using the TSD rule is minimization of the efficient investment set for an investor with risk aversion, who prefers more money and likes positive skewness.
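For empirical (equal-weight) return samples, TSD can be checked numerically: A dominates B if A's mean is at least B's and the twice-integrated CDF of A never exceeds that of B. The grid-based sketch below illustrates this textbook condition; it is not the TSD algorithm of Levy (2006):

```python
def tsd_dominates(a, b, lo, hi, steps=1000):
    """Numerically check third-order stochastic dominance of sample a over b
    (equal-weight empirical distributions, left-Riemann sums on [lo, hi])."""
    if sum(a) / len(a) < sum(b) / len(b):
        return False          # TSD requires E[a] >= E[b]
    h = (hi - lo) / steps

    def cdf(sample, x):
        return sum(1 for s in sample if s <= x) / len(sample)

    fa2 = fb2 = 0.0           # running first integrals of the CDFs
    fa3 = fb3 = 0.0           # running second integrals (TSD comparands)
    for i in range(steps + 1):
        x = lo + i * h
        fa2 += cdf(a, x) * h
        fb2 += cdf(b, x) * h
        fa3 += fa2 * h
        fb3 += fb2 * h
        if fa3 > fb3 + 1e-9:  # a's curve rose above b's: no dominance
            return False
    return True

# b is a mean-preserving spread of a, so a risk-averse investor prefers a.
a = [2.0, 3.0, 4.0]
b = [0.0, 4.0, 5.0]
print(tsd_dominates(a, b, lo=-1.0, hi=6.0))  # True
print(tsd_dominates(b, a, lo=-1.0, hi=6.0))  # False
```

Screening every pair of alternatives this way and discarding the dominated ones is exactly the "minimization of the efficient investment set" the abstract describes.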

  7. The Missing Entrepreneurs 2014

    DEFF Research Database (Denmark)

    Halabisky, David; Potter, Jonathan; Thompson, Stuart

    OECD's LEED Programme and the European Commission's DG on Employment, Social Affairs and Inclusion recently published the second book as part of their programme of work on inclusive entrepreneurship. The Missing Entrepreneurs 2014 examines how public policies at national and local levels can...

  8. 9 CFR 2.128 - Inspection for missing animals.

    Science.gov (United States)

    2010-01-01

    ... 9 CFR § 2.128 (Animal and Plant Health Inspection Service, Department of Agriculture; Animal Welfare Regulations, Miscellaneous), 2010-01-01: Inspection for missing animals. Each dealer...

  9. VIGAN: Missing View Imputation with Generative Adversarial Networks.

    Science.gov (United States)

    Shang, Chao; Palmer, Aaron; Sun, Jiangwen; Chen, Ko-Shin; Lu, Jin; Bi, Jinbo

    2017-01-01

    In an era when big data are becoming the norm, there is less concern with the quantity of data but more with their quality and completeness. In many disciplines, data are collected from heterogeneous sources, resulting in multi-view or multi-modal datasets. The missing data problem has been challenging to address in multi-view data analysis. In particular, when certain samples miss an entire view of data, the missing view problem arises. Classic multiple imputation or matrix completion methods are hardly effective here, because there is no information in the missing view on which to base the imputation for such samples. The commonly used simple method of removing samples with a missing view can dramatically reduce sample size, thus diminishing the statistical power of a subsequent analysis. In this paper, we propose a novel approach for view imputation via generative adversarial networks (GANs), which we name VIGAN. This approach first treats each view as a separate domain and identifies domain-to-domain mappings via a GAN using randomly-sampled data from each view, and then employs a multi-modal denoising autoencoder (DAE) to reconstruct the missing view from the GAN outputs based on paired data across the views. Then, by optimizing the GAN and DAE jointly, our model enables knowledge integration for domain mappings and view correspondences to effectively recover the missing view. Empirical results on benchmark datasets validate the VIGAN approach by comparison against the state of the art. The evaluation of VIGAN in a genetic study of substance use disorders further proves the effectiveness and usability of this approach in life science.

  10. Ultrasonographic findings of hydatidiform mole and missed abortion

    International Nuclear Information System (INIS)

    Yun, Kwang Myeong; Lee, Yeong Hwan; Chung, Hye Kyeong; Chung, Duck Soo; Kim, Ok Dong

    1990-01-01

    To establish the sonographic characteristics of the hydatidiform mole and of missed abortion with placental degeneration, we retrospectively analyzed 12 cases of complete mole, 10 cases of partial mole, and 10 cases of missed abortion with placental hydropic degeneration, collected at Taegu Catholic General Hospital from January 1986 to December 1989. The results were as follows: 1. All 12 cases of complete mole demonstrated a diffuse intrauterine vesicular pattern of internal echo without a gestational sac. Two cases recurred after D and E. 2. The partial mole was characterized by focal (70%) or diffuse (20%) distribution of hydatidiform placental change and a gestational sac (100%) with or without a macerated fetus. However, the striking hydatidiform placental change was absent in one case of partial mole. 3. The uterus was large for dates in 9 cases (90%) of complete mole, but small for dates in 7 cases (70%) of partial mole. 4. Missed abortion with placental hydropic degeneration was indistinguishable from a partial mole because of their similar sonographic appearance: focal or diffuse cystic change of the placenta, a distorted gestational sac with or without a fetus, and a uterus small for dates. In conclusion, the complete mole could easily be distinguished from a partial mole or a missed abortion by sonography, as neither a gestational sac nor an area of noncystic placenta was identified in a complete mole. The partial mole was indistinguishable from a missed abortion, but where there are signs suggesting trophoblastic proliferation, such as a convex placental surface or a uterus large for dates, the diagnosis is probably a partial mole rather than a missed abortion

  11. Parameter optimization of differential evolution algorithm for automatic playlist generation problem

    Science.gov (United States)

    Alamag, Kaye Melina Natividad B.; Addawe, Joel M.

    2017-11-01

    With the digitalization of music, collections have grown so large that there is a need to create lists of music filtered according to user preferences, giving rise to the Automatic Playlist Generation Problem (APGP). Previous attempts to solve this problem include the use of search and optimization algorithms. If a music database is very large, the algorithm used must be able to search the lists thoroughly while taking into account the quality of the playlist given a set of user constraints. In this paper we apply an evolutionary meta-heuristic optimization algorithm, Differential Evolution (DE), using different combinations of parameter values, and select the best-performing set when used to solve four standard test functions. The performance of the proposed algorithm is then compared with a standard Genetic Algorithm (GA) and a hybrid GA with Tabu Search. Numerical simulations are carried out to show that the Differential Evolution approach with the optimized parameter values gives better results.
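
    The DE variant used in such studies can be sketched as follows: a generic DE/rand/1/bin minimizer run on the sphere benchmark, with typical default parameters (F = 0.8, CR = 0.9) rather than the tuned values selected in the paper:

```python
import numpy as np

def differential_evolution(f, bounds, pop_size=20, F=0.8, CR=0.9,
                           generations=200, seed=0):
    """Minimal DE/rand/1/bin minimizer (illustrative sketch; F and CR are
    common defaults, not the paper's tuned parameter values)."""
    rng = np.random.default_rng(seed)
    dim = len(bounds)
    lo = np.array([b[0] for b in bounds], float)
    hi = np.array([b[1] for b in bounds], float)
    pop = rng.uniform(lo, hi, size=(pop_size, dim))
    fitness = np.array([f(x) for x in pop])
    for _ in range(generations):
        for i in range(pop_size):
            # Mutation: combine three distinct individuals other than i.
            a, b, c = pop[rng.choice([j for j in range(pop_size) if j != i],
                                     size=3, replace=False)]
            mutant = np.clip(a + F * (b - c), lo, hi)
            # Binomial crossover, guaranteeing at least one mutant gene.
            cross = rng.random(dim) < CR
            cross[rng.integers(dim)] = True
            trial = np.where(cross, mutant, pop[i])
            # Greedy selection.
            ft = f(trial)
            if ft <= fitness[i]:
                pop[i], fitness[i] = trial, ft
    best = int(np.argmin(fitness))
    return pop[best], fitness[best]

# Sphere function, one of the standard benchmarks used for parameter selection.
x_best, f_best = differential_evolution(lambda x: float(np.sum(x * x)),
                                        bounds=[(-5, 5)] * 3)
```

    For the APGP itself, the fitness function would instead score a candidate playlist against the user constraints, but the mutation/crossover/selection loop is unchanged.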

  12. Factors influencing the missed nursing care in patients from a private hospital

    Directory of Open Access Journals (Sweden)

    Raúl Hernández-Cruz

    Full Text Available ABSTRACT Objective: to determine the factors that influence missed nursing care in hospitalized patients. Methods: descriptive correlational study developed at a private hospital in Mexico. To identify the missed nursing care and related factors, the MISSCARE survey was used, which measures the care missed and associated factors. The care missed and the factors were grouped into global and dimension rates. For the analysis, descriptive statistics, Spearman’s correlation and simple linear regression were used. Approval for the study was obtained from the ethics committee. Results: the participants were 71 nurses from emergency, intensive care and inpatient services. The global missed care index was M=7.45 (SD=10.74); the highest missed care index was found in the dimension of basic care interventions (M=13.02, SD=17.60). The main factor contributing to the care missed was human resources (M=56.13, SD=21.38). The factors related to the care missed were human resources (rs=0.408, p<0.001) and communication (rs=0.418, p<0.001). Conclusions: missed nursing care is mainly due to the human resource factor; these study findings will permit the strengthening of nursing care continuity.

  13. Investigation of the three-dimensional lattice HP protein folding model using a genetic algorithm

    Directory of Open Access Journals (Sweden)

    Fábio L. Custódio

    2004-01-01

    Full Text Available An approach to the hydrophobic-polar (HP) protein folding model was developed using a genetic algorithm (GA) to find optimal structures on a 3D cubic lattice. A modification was introduced to the scoring system of the original model to improve its capacity to generate more natural-like structures. The modification was based on the assumption that it may be preferable for a hydrophobic monomer to have a polar neighbor than to be in direct contact with the polar solvent. The compactness and segregation criteria were used to compare structures created by the original HP model and by the modified one. An island algorithm, a new selection scheme and multiple-point crossover were used to improve the performance of the algorithm. Ten sequences, seven of length 27 and three of length 64, were analyzed. Our results suggest that the modified model has a greater tendency to form globular structures. This might be preferable, since the original HP model does not take into account the positioning of long polar segments. The algorithm was implemented as a program with a graphical user interface that may have didactical potential in the study of GAs and in the understanding of hydrophobic core formation.
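
    The modified scoring idea can be sketched as a lattice energy function; the weights below (a strong reward for H-H contacts and a weaker reward for H-P contacts, standing in for "polar neighbor rather than solvent") are hypothetical illustrations of the described modification, not the paper's actual values:

```python
def hp_score(coords, sequence, hh=-2.0, hp=-1.0):
    """Score a 3D cubic-lattice conformation under a modified HP model:
    non-bonded H-H contacts get `hh`, and non-bonded H-P contacts get the
    weaker `hp` (weights are illustrative, not the paper's)."""
    assert len(coords) == len(sequence)
    pos = {tuple(c): i for i, c in enumerate(coords)}
    neighbours = [(1, 0, 0), (-1, 0, 0), (0, 1, 0),
                  (0, -1, 0), (0, 0, 1), (0, 0, -1)]
    score = 0.0
    for i, c in enumerate(coords):
        for d in neighbours:
            j = pos.get((c[0] + d[0], c[1] + d[1], c[2] + d[2]))
            # Skip empty sites, already-counted pairs, and chain-bonded pairs.
            if j is None or j <= i or abs(i - j) == 1:
                continue
            pair = {sequence[i], sequence[j]}
            if pair == {"H"}:
                score += hh
            elif pair == {"H", "P"}:
                score += hp
    return score

# A 4-mer folded into a square: residues 0 and 3 form one non-bonded contact.
conf = [(0, 0, 0), (1, 0, 0), (1, 1, 0), (0, 1, 0)]
```

    A GA for this model would use `hp_score` (negated, if fitness is maximized) as the fitness of each lattice conformation in the population.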

  14. HARDWARE REALIZATION OF CANNY EDGE DETECTION ALGORITHM FOR UNDERWATER IMAGE SEGMENTATION USING FIELD PROGRAMMABLE GATE ARRAYS

    Directory of Open Access Journals (Sweden)

    ALEX RAJ S. M.

    2017-09-01

    Full Text Available Underwater images have raised new challenges in digital image processing in recent years because of their widespread applications. Many tangled issues must be considered when processing images collected in a water medium, owing to the adverse effects imposed by the environment itself. Image segmentation is a preferred first stage of many digital image processing techniques; it distinguishes multiple segments in an image and reveals the hidden, crucial information required for a particular application. Many general-purpose algorithms and techniques have been developed for image segmentation. Discontinuity-based segmentation is among the most promising approaches, within which Canny edge detection is preferred for its high noise immunity and its ability to handle the underwater environment. Since a real-time underwater image segmentation algorithm is computationally complex, an efficient hardware implementation must be considered. An FPGA-based realization of the segmentation algorithm is presented in this paper.
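
    For reference, the software counterpart of the pipeline's front end can be sketched in a few lines. This simplified version (Gaussian smoothing, Sobel gradients, single magnitude threshold) omits the non-maximum suppression and hysteresis stages that a full Canny implementation, hardware or software, would include:

```python
import numpy as np

def conv2d(img, k):
    """Naive 'same' 2-D filtering with zero padding (reference version;
    an FPGA design would pipeline this with line buffers)."""
    kh, kw = k.shape
    ph, pw = kh // 2, kw // 2
    padded = np.pad(img, ((ph, ph), (pw, pw)))
    out = np.zeros_like(img, dtype=float)
    for y in range(img.shape[0]):
        for x in range(img.shape[1]):
            out[y, x] = np.sum(padded[y:y + kh, x:x + kw] * k)
    return out

def canny_like(img, thresh=1.0):
    """Simplified Canny front end: Gaussian smoothing, Sobel gradients,
    single-threshold magnitude test (no NMS or hysteresis)."""
    gauss = np.array([[1, 2, 1], [2, 4, 2], [1, 2, 1]], float) / 16.0
    sx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], float)
    sy = sx.T
    smooth = conv2d(img, gauss)
    gx, gy = conv2d(smooth, sx), conv2d(smooth, sy)
    return np.hypot(gx, gy) > thresh

# A dark/bright vertical step: edge responses should appear near column 4.
step = np.zeros((8, 8))
step[:, 4:] = 10.0
edges = canny_like(step)
```

    The separable 3x3 kernels and the thresholded magnitude are exactly the operations that map well onto FPGA line-buffer architectures, which is one reason Canny-style pipelines are popular hardware targets.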

  15. The effect of tertiary surveys on missed injuries in trauma: a systematic review

    Directory of Open Access Journals (Sweden)

    Keijzers Gerben B

    2012-11-01

    Full Text Available Abstract Background Trauma tertiary surveys (TTS) are advocated to reduce the rate of missed injuries in hospitalized trauma patients. Moreover, the missed injury rate can be a quality indicator of trauma care performance. Current variation in the definition of missed injury restricts interpretation of the effect of the TTS and limits the use of missed injury for benchmarking. Only a few studies have specifically assessed the effect of the TTS on missed injury. We aimed to systematically appraise these studies using outcomes of two common definitions of missed injury rates and long-term health outcomes. Methods A systematic review was performed. An electronic search (without language or publication restrictions) of the Cochrane Library, Medline and Ovid was used to identify studies assessing TTS with short-term measures of missed injuries and long-term health outcomes. ‘Missed injury’ was defined as either: Type I, any injury missed at primary and secondary survey and detected by the TTS; or Type II, any injury missed at primary and secondary survey and missed by the TTS, detected during hospital stay. Two authors independently selected studies. Risk of bias for observational studies was assessed using the Newcastle-Ottawa scale. Results Ten observational studies met our inclusion criteria. None was randomized and none reported long-term health outcomes. Their risk of bias varied considerably. Nine studies assessed Type I missed injury and found an overall rate of 4.3%. A single study reported Type II missed injury with a rate of 1.5%. Three studies reported outcome data on missed injuries for both control and intervention cohorts, with two reporting an increase in Type I missed injuries (3% vs. 7%, p = 0.01). Conclusions Overall Type I and Type II missed injury rates were 4.3% and 1.5%. Routine TTS performance increased Type I and reduced Type II missed injuries. However, the evidence is sub-optimal: few observational studies, non-uniform outcome

  16. A Method Based on Dial's Algorithm for Multi-time Dynamic Traffic Assignment

    Directory of Open Access Journals (Sweden)

    Rongjie Kuang

    2014-03-01

    Full Text Available Because static traffic assignment reflects the actual situation poorly and dynamic traffic assignment may incur excessive computational cost, multi-time dynamic traffic assignment, which combines static and dynamic traffic assignment, balances precision and cost effectively. A method based on Dial's logit algorithm is proposed in this article to solve the dynamic stochastic user equilibrium problem in dynamic traffic assignment. Before that, a fitting function that approximately reflects the overloaded traffic condition of a link is proposed and used to build the corresponding model. A numerical example is given to illustrate the heuristic procedure of the method and to compare its results with those of the same example solved by another algorithm from the literature. The results show that the method based on Dial's algorithm is preferable.
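
    The logit split underlying Dial's algorithm can be sketched at the path level. This toy version enumerates paths explicitly, whereas Dial's STOCH procedure obtains the same weighting link by link without path enumeration:

```python
import math

def logit_split(path_costs, theta=1.0):
    """Share of demand assigned to each path under a logit model: cheaper
    paths receive exponentially more flow. (Toy path-enumeration version;
    Dial's STOCH algorithm achieves this link by link, so it never has to
    enumerate paths explicitly.)"""
    weights = [math.exp(-theta * c) for c in path_costs]
    total = sum(weights)
    return [w / total for w in weights]

# Three routes between an origin-destination pair, with travel costs
# (hypothetical numbers); theta controls how strongly cost is penalized.
shares = logit_split([10.0, 12.0, 15.0], theta=0.5)
```

    In a stochastic user equilibrium loop, these shares would be recomputed each iteration as link costs (e.g. from the fitting function for overloaded links) are updated.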

  17. Prevalence and Correlates of Missing Meals Among High School Students-United States, 2010.

    Science.gov (United States)

    Demissie, Zewditu; Eaton, Danice K; Lowry, Richard; Nihiser, Allison J; Foltz, Jennifer L

    2018-01-01

    To determine the prevalence and correlates of missing meals among adolescents. The 2010 National Youth Physical Activity and Nutrition Study, a cross-sectional study. School based. A nationally representative sample of 11 429 high school students. Breakfast, lunch, and dinner consumption; demographics; measured and perceived weight status; physical activity and sedentary behaviors; and fruit, vegetable, milk, sugar-sweetened beverage, and fast-food intake. Prevalence estimates for missing breakfast, lunch, or dinner on ≥1 day during the past 7 days were calculated. Associations between demographics and missing meals were tested. Associations of lifestyle and dietary behaviors with missing meals were examined using logistic regression controlling for sex, race/ethnicity, and grade. In 2010, 63.1% of students missed breakfast, 38.2% missed lunch, and 23.3% missed dinner; the prevalence was highest among female and non-Hispanic black students. Being overweight/obese, perceiving oneself to be overweight, and video game/computer use were associated with increased risk of missing meals. Physical activity behaviors were associated with reduced risk of missing meals. Students who missed breakfast were less likely to eat fruits and vegetables and more likely to consume sugar-sweetened beverages and fast food. Breakfast was the most frequently missed meal, and missing breakfast was associated with the greatest number of less healthy dietary practices. Intervention and education efforts might prioritize breakfast consumption.

  18. 78 FR 2256 - Extension of the Extended Missing Parts Pilot Program

    Science.gov (United States)

    2013-01-10

    ...] Extension of the Extended Missing Parts Pilot Program AGENCY: United States Patent and Trademark Office... pilot program (Extended Missing Parts Pilot Program) in which an applicant, under certain conditions... nonprovisional application. The Extended Missing Parts Pilot Program benefits applicants by permitting additional...

  19. Independent preferences

    DEFF Research Database (Denmark)

    Vind, Karl

    1991-01-01

    A simple mathematical result characterizing a subset of a product set is proved and used to obtain additive representations of preferences. The additivity consequences of independence assumptions are obtained for preferences which are not total or transitive. This means that most of the economic theory based on additive preferences - expected utility, discounted utility - has been generalized to preferences which are not total or transitive. Other economic applications of the theorem are given...

  20. Missed Opportunities for Hepatitis A Vaccination, National Immunization Survey-Child, 2013.

    Science.gov (United States)

    Casillas, Shannon M; Bednarczyk, Robert A

    2017-08-01

    To quantify the number of missed opportunities for vaccination with hepatitis A vaccine in children and assess the association of missed opportunities for hepatitis A vaccination with covariates of interest. Weighted data from the 2013 National Immunization Survey of US children aged 19-35 months were used. Analysis was restricted to children with provider-verified vaccination history (n = 13 460). Missed opportunities for vaccination were quantified by determining the number of medical visits a child made when another vaccine was administered during eligibility for hepatitis A vaccine, but hepatitis A vaccine was not administered. Cross-sectional bivariate and multivariate polytomous logistic regression were used to assess the association of missed opportunities for vaccination with child and maternal demographic, socioeconomic, and geographic covariates. In 2013, 85% of children in our study population had initiated the hepatitis A vaccine series, and 60% received 2 or more doses. Children who received zero doses of hepatitis A vaccine had an average of 1.77 missed opportunities for vaccination compared with 0.43 missed opportunities for vaccination in those receiving 2 doses. Children with 2 or more missed opportunities for vaccination initiated the vaccine series 6 months later than children without missed opportunities. In the fully adjusted multivariate model, children who were younger, had ever received WIC benefits, or lived in a state with childcare entry mandates were at reduced odds for 2 or more missed opportunities for vaccination; children living in the Northeast census region were at increased odds. Missed opportunities for vaccination likely contribute to the poor coverage for hepatitis A vaccination in children; it is important to understand why children are not receiving the vaccine when eligible. Copyright © 2017 Elsevier Inc. All rights reserved.
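
    The counting rule described above (a visit at which another vaccine was administered while the child was hepatitis A-eligible, but no hepatitis A dose was given) can be sketched as a hypothetical helper; the visit structure and vaccine labels here are illustrative assumptions, not the survey's actual data layout:

```python
from datetime import date

def missed_opportunities(visits, eligible_from):
    """Count visits at which some vaccine was administered while the child
    was eligible for a hepatitis A dose but did not receive one. An
    illustrative reconstruction of the counting rule, not the study's code.

    visits: list of (visit_date, set_of_vaccines_given) tuples
    eligible_from: date from which the child was hepatitis A-eligible
    """
    return sum(1 for when, given in visits
               if when >= eligible_from and given and "HepA" not in given)

visits = [(date(2013, 1, 10), {"DTaP"}),   # eligible, HepA not given -> missed
          (date(2013, 3, 5), {"MMR"}),     # eligible, HepA not given -> missed
          (date(2013, 6, 1), {"HepA"})]    # HepA given -> not missed
n_missed = missed_opportunities(visits, date(2013, 1, 1))
```

    A faithful implementation would also stop counting once the two-dose series is complete and respect minimum dose-spacing rules, which this sketch omits.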

  1. Contouring algorithm for two dimensional data- an application to airborne surveys

    International Nuclear Information System (INIS)

    Suryakumar, N.V.; Rohatgi, Savita; Raghuwanshi, S.S.

    1994-01-01

    The paper describes a general contouring algorithm for two-dimensional projection of aeroradiometric data, considering not only irregularly spaced flight lines but also the other problems related to the voluminous data acquired during airborne surveys. Several simple logics are described for drawing the contours using a scan method, taking care of annotations, identification marking, geographical locations, map size, contour density for visual distinctness, and other problems which may arise during contouring. The paper also discusses the various possible contour line segments in the mini-grid, and the criterion for selecting suitable segments is described in detail. A novel approach to avoid crossing contours or missing data is also briefly discussed. The simplicity of the algorithm allows its ready implementation on any computer/plotter. (author). 8 refs., 8 figs
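
    The per-cell analysis of possible contour line segments is the same bookkeeping performed by marching-squares-style contouring. A sketch of the cell classification step follows (an illustrative stand-in, not the authors' scan method):

```python
def cell_segments(z, level):
    """Classify one grid cell for a contour at `level`, marching-squares
    style. Corner values z = [bl, br, tr, tl] in counter-clockwise order;
    edges are numbered 0=bottom, 1=right, 2=top, 3=left. Returns the 4-bit
    case index and the edges the contour crosses. (Illustrative stand-in
    for the mini-grid segment selection described above.)"""
    # Case index: which corners lie at or above the contour level.
    case = sum(1 << i for i, v in enumerate(z) if v >= level)
    # The contour crosses an edge exactly when its endpoints straddle level.
    corner_pairs = [(0, 1), (1, 2), (2, 3), (3, 0)]   # edges 0..3
    crossed = [e for e, (a, b) in enumerate(corner_pairs)
               if (z[a] >= level) != (z[b] >= level)]
    return case, crossed

# One corner above the level: the contour clips that corner, crossing the
# two edges adjacent to it (bottom and left).
case, crossed = cell_segments([5.0, 1.0, 1.0, 1.0], level=3.0)
```

    Enumerating the 16 cases this way is what lets a scan-method contourer pick consistent segments per cell and avoid crossing contour lines.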

  2. Missing data in trial-based cost-effectiveness analysis: An incomplete journey.

    Science.gov (United States)

    Leurent, Baptiste; Gomes, Manuel; Carpenter, James R

    2018-06-01

    Cost-effectiveness analyses (CEA) conducted alongside randomised trials provide key evidence for informing healthcare decision making, but missing data pose substantive challenges. Recently, there have been a number of developments in methods and guidelines addressing missing data in trials. However, it is unclear whether these developments have permeated CEA practice. This paper critically reviews the extent of and methods used to address missing data in recently published trial-based CEA. Issues of the Health Technology Assessment journal from 2013 to 2015 were searched. Fifty-two eligible studies were identified. Missing data were very common; the median proportion of trial participants with complete cost-effectiveness data was 63% (interquartile range: 47%-81%). The most common approach for the primary analysis was to restrict analysis to those with complete data (43%), followed by multiple imputation (30%). Half of the studies conducted some sort of sensitivity analyses, but only 2 (4%) considered possible departures from the missing-at-random assumption. Further improvements are needed to address missing data in cost-effectiveness analyses conducted alongside randomised trials. These should focus on limiting the extent of missing data, choosing an appropriate method for the primary analysis that is valid under contextually plausible assumptions, and conducting sensitivity analyses to departures from the missing-at-random assumption. © 2018 The Authors Health Economics published by John Wiley & Sons Ltd.
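
    The review's central concern, complete-case analysis versus imputation under a missing-at-random mechanism, can be illustrated with a toy simulation. This is a generic sketch, not the review's methodology, and it uses a single regression imputation as a simplified stand-in for multiple imputation:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy trial: sicker patients (higher severity) cost more AND are more
# likely to have missing cost data, i.e. missing at random given severity.
n = 5000
severity = rng.normal(size=n)
cost = 1000 + 500 * severity + rng.normal(0, 100, size=n)
missing = rng.random(n) < 1 / (1 + np.exp(-2 * severity))   # MAR mechanism

true_mean = cost.mean()
complete_case = cost[~missing].mean()   # biased: observed cases skew healthy

# Regression imputation fitted on the observed cases (multiple imputation
# would repeat this several times with added residual noise and pool).
X = np.column_stack([np.ones(n), severity])
beta, *_ = np.linalg.lstsq(X[~missing], cost[~missing], rcond=None)
imputed = cost.copy()
imputed[missing] = X[missing] @ beta
imputed_mean = imputed.mean()
```

    Because missingness depends on an observed covariate, the complete-case mean underestimates the true mean cost, while the covariate-based imputation largely removes the bias, which is why restricting analysis to complete cases is a poor default for the primary analysis.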

  3. Semiparametric approach for non-monotone missing covariates in a parametric regression model

    KAUST Repository

    Sinha, Samiran

    2014-02-26

    Missing covariate data often arise in biomedical studies, and analysis of such data that ignores subjects with incomplete information may lead to inefficient and possibly biased estimates. A great deal of attention has been paid to handling a single missing covariate or a monotone pattern of missing data when the missingness mechanism is missing at random. In this article, we propose a semiparametric method for handling non-monotone patterns of missing data. The proposed method relies on the assumption that the missingness mechanism of a variable does not depend on the missing variable itself but may depend on the other missing variables. This mechanism is somewhat less general than the completely non-ignorable mechanism but is sometimes more flexible than the missing at random mechanism where the missingness mechanism is allowed to depend only on the completely observed variables. The proposed approach is robust to misspecification of the distribution of the missing covariates, and the proposed mechanism helps to nullify (or reduce) the problems due to non-identifiability that result from the non-ignorable missingness mechanism. The asymptotic properties of the proposed estimator are derived. Finite sample performance is assessed through simulation studies. Finally, for the purpose of illustration we analyze an endometrial cancer dataset and a hip fracture dataset.

  4. Analyzing time-ordered event data with missed observations.

    Science.gov (United States)

    Dokter, Adriaan M; van Loon, E Emiel; Fokkema, Wimke; Lameris, Thomas K; Nolet, Bart A; van der Jeugd, Henk P

    2017-09-01

    A common problem with observational datasets is that not all events of interest may be detected. For example, observing animals in the wild can be difficult when animals move, hide, or cannot be closely approached. We consider time series of events recorded in conditions where events are occasionally missed by observers or observational devices. These time series are not restricted to behavioral protocols, but can be any cyclic or recurring process where discrete outcomes are observed. Undetected events cause biased inferences on the process of interest, and statistical analyses are needed that can identify and correct the compromised detection processes. Missed observations in time series lead to observed time intervals between events at multiples of the true inter-event time, which conveys information on their detection probability. We derive the theoretical probability density function for observed intervals between events that includes a probability of missed detection. Methodology and software tools are provided for the analysis of event data with potential observation bias and its removal. The methodology was applied to simulated data and a case study of defecation rate estimation in geese, which is commonly used to estimate their digestive throughput and energetic uptake, or to calculate goose usage of a feeding site from dropping density. Simulations indicate that at a moderate chance of missing arrival events (p = 0.3), uncorrected arrival intervals were biased upward by up to a factor of 3, while parameter values corrected for missed observations were within 1% of their true simulated values. A field case study shows that not accounting for missed observations leads to substantial underestimates of the true defecation rate in geese, and to spurious rate differences between sites introduced by differences in observational conditions. These results show that the derived methodology can be used to effectively remove observational biases in time-ordered event
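
    The core fact exploited here, that missed detections shift observed intervals onto integer multiples of the true inter-event time with geometric weights, can be checked by simulation. A minimal sketch assuming a perfectly regular event process and a known detection probability q:

```python
import numpy as np

rng = np.random.default_rng(42)

# Regular process: one event every tau time units; each event is detected
# independently with probability q (so a fraction 1 - q are missed).
tau, q, n_events = 1.0, 0.7, 100_000
detected = rng.random(n_events) < q
times = np.arange(n_events) * tau
intervals = np.diff(times[detected])

# An observed interval spans k true intervals when the k - 1 intermediate
# events were all missed: P(k) = q * (1 - q)**(k - 1), a geometric law.
# Hence the naive mean interval overestimates tau by a factor of 1/q.
naive_tau = intervals.mean()
corrected_tau = naive_tau * q        # correction when q is known
# q itself can be estimated from the fraction of single-step intervals.
q_hat = float(np.mean(np.isclose(intervals, tau)))
```

    With irregular (e.g. exponential) inter-event times, the same geometric mixing applies to the interval distribution, which is what the paper's derived probability density function formalizes.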

  5. Empirical Likelihood in Nonignorable Covariate-Missing Data Problems.

    Science.gov (United States)

    Xie, Yanmei; Zhang, Biao

    2017-04-20

    Missing covariate data occurs often in regression analysis, which frequently arises in the health and social sciences as well as in survey sampling. We study methods for the analysis of a nonignorable covariate-missing data problem in an assumed conditional mean function when some covariates are completely observed but other covariates are missing for some subjects. We adopt the semiparametric perspective of Bartlett et al. (Improving upon the efficiency of complete case analysis when covariates are MNAR. Biostatistics 2014;15:719-30) on regression analyses with nonignorable missing covariates, in which they have introduced the use of two working models, the working probability model of missingness and the working conditional score model. In this paper, we study an empirical likelihood approach to nonignorable covariate-missing data problems with the objective of effectively utilizing the two working models in the analysis of covariate-missing data. We propose a unified approach to constructing a system of unbiased estimating equations, where there are more equations than unknown parameters of interest. One useful feature of these unbiased estimating equations is that they naturally incorporate the incomplete data into the data analysis, making it possible to seek efficient estimation of the parameter of interest even when the working regression function is not specified to be the optimal regression function. We apply the general methodology of empirical likelihood to optimally combine these unbiased estimating equations. We propose three maximum empirical likelihood estimators of the underlying regression parameters and compare their efficiencies with other existing competitors. We present a simulation study to compare the finite-sample performance of various methods with respect to bias, efficiency, and robustness to model misspecification. The proposed empirical likelihood method is also illustrated by an analysis of a data set from the US National Health and

  6. Between the Cup and the Lip: Missed Dental Appointments.

    Science.gov (United States)

    Tandon, Sandeep; Duhan, Reena; Sharma, Meenakshi; Vasudeva, Suraj

    2016-05-01

    Missed appointments are an issue which has been commonly noticed but overlooked in Indian dental society. Almost every dentist, general or specialized, private or public, has faced this problem in routine practice, but very little research has been conducted on this issue in Asian countries. The aim of this study was to determine the frequency and distribution of missed dental appointments among children, and the reasons behind non-attendance, in a department of paediatric and preventive dentistry. Patients under 15 years of age who reported during the period March through August 2014 were included in this study. Attendance and demographic data for patients were obtained from patient records and the hospital database. The type of treatment patients were to receive was gathered from the appointment diaries of staff, postgraduate students and undergraduates. A structured questionnaire regarding the most frequent reasons given by patients for not attending the scheduled appointment was also prepared. The data were analysed using descriptive analysis. Of the total 2294 patients, 886 failed to come to their scheduled appointment. The percentage of patients who missed their appointments was 38.6%, of whom 38.2% required primary teeth pulp therapy. No significant difference was found between genders regarding the prevalence of missed dental appointments. Only 40% of dentists stated that the most common reason for their patients to miss a dental appointment was "no leave from school". Illness was the second most frequent excuse heard by dentists (5/20 = 25%) from their patients and attendants. Missed dental appointments were found to be a common issue in the paediatric age group. Counseling and motivation at the first dental visit are required to reduce the chance of missed appointments.

  7. Dentistry to the rescue of missing children: A review

    Science.gov (United States)

    Vij, Nitika; Kochhar, Gulsheen Kaur; Chachra, Sanjay; Kaur, Taranjot

    2016-01-01

    Today's society is becoming increasingly unsafe for children: we frequently hear about new incidents of missing children, which lead to emotional trauma for the loved ones and expose systemic failures of law and order. Parents can take extra precautions to ensure the safety of their children by educating them about ways to protect themselves and keep important records of the child such as updated color photographs, fingerprints, deoxyribonucleic acid (DNA) samples, etc., handy. However, in spite of all efforts, the problem of missing children still remains. Developments in the field of dentistry have empowered dentists with various tools and techniques to play a pivotal role in tracing a missing child. One such tool is Toothprints, a patented arch-shaped thermoplastic dental impression wafer developed by Dr. David Tesini, a paediatric dentist from Massachusetts. Toothprints enables a unique identification of the missing children not only through the bite impression but also through salivary DNA. Besides the use of Toothprints, a dentist can assist investigating agencies in identifying the missing children in multiple ways, including postmortem dental profiling, labeled dental fixtures, DNA extraction from teeth, and serial number engraving on the children's teeth. More importantly, all these tools cause minimal inconvenience to the individual, making a dentist's role in tracking a missing child even more significant. Thus, the simple discipline of maintaining timely dental records with the help of their dentists can save potential hassles for the parents in the future. PMID:27051216

  9. Ultrasonographic findings of Myoma, H-mole and Missed abortion

    International Nuclear Information System (INIS)

    Huh, Nam Yoon; You, H. S.; Seong, K. J.; Park, C. Y.

    1982-01-01

    Ultrasonography is very important in the diagnosis of various diseases in obstetrics and gynecology. It has high diagnostic accuracy for pelvic masses and is widely used for the detection of normal or pathologic pregnancy. But it is still difficult to differentiate degenerated myoma, H-mole and missed abortion by ultrasonography. The authors therefore analyzed the ultrasonographic findings of 81 patients with myoma (29 cases), H-mole (23 cases), and missed abortion (29 cases), with the following results: 1. Diagnostic accuracy was 8.6% in myoma, 87% in H-mole and 89% in missed abortion. 2. The most typical ultrasonographic finding of myoma was a lobulated mass contour with nonhomogeneous internal echo. 3. The most characteristic finding of H-mole was a fine vesicular pattern of internal echo with globular enlargement of the uterus. 4. The most frequent finding of missed abortion was a deformed gestational sac with or without a remaining fetal echo. 5. Clinical correlation was very important for accurate diagnosis, especially when differential diagnosis was difficult between myoma with marked cystic degeneration, missed abortion with a large distorted gestational sac, and H-mole with severe degeneration

  10. Predicting personal preferences in subjective video quality assessment

    DEFF Research Database (Denmark)

    Korhonen, Jari

    2017-01-01

    In this paper, we study the problem of predicting the visual quality of a specific test sample (e.g. a video clip) experienced by a specific user, based on the ratings by other users for the same sample and the same user for other samples. A simple linear model and algorithm is presented, where the characteristics of each test sample are represented by a set of parameters, and the individual preferences are represented by weights for the parameters. According to the validation experiment performed on public visual quality databases annotated with raw individual scores, the proposed model can predict...
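
    A minimal sketch in the spirit of the linear model described, with the sample parameters and user weights simulated rather than estimated from a rating matrix as in the paper:

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy setup: each test sample j has a parameter vector x_j, each user i a
# preference-weight vector w_i, and the rating is w_i . x_j plus noise.
# (Here X and W are simulated; in the paper the sample parameters are
# themselves estimated from the observed rating matrix.)
n_users, n_samples, d = 30, 40, 4
X = rng.normal(size=(n_samples, d))    # per-sample characteristics
W = rng.normal(size=(n_users, d))      # per-user preference weights
R = W @ X.T + 0.1 * rng.normal(size=(n_users, n_samples))

# Predict user 0's rating of sample 0 from that user's other ratings:
# fit the user's weights by least squares on the remaining samples.
train = np.arange(1, n_samples)
w0, *_ = np.linalg.lstsq(X[train], R[0, train], rcond=None)
prediction = float(X[0] @ w0)
```

    The same fit, repeated per user, yields personalized predictions instead of the mean-opinion-score average that conventional subjective quality analysis would report.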

  11. iLife '05 The Missing Manual

    CERN Document Server

    Pogue, David

    2005-01-01

    The incomparable iLife '05 is the must-have multimedia suite for everyone who owns a Mac--and the envy of everyone who doesn't. iLife '05: The Missing Manual is the definitive iLife '05 book--and what should have come with the suite. There's no better guide to your iLife experience than the #1 bestselling Macintosh author and expert--and Missing Manual series creator--David Pogue. Totally objective and utterly in-the-know, Pogue highlights the newest features, changes, and improvements of iLife '05, covers the capabilities and limitations of each program within the suite, and delivers count

  12. The role of proxy information in missing data analysis.

    Science.gov (United States)

    Huang, Rong; Liang, Yuanyuan; Carrière, K C

    2005-10-01

    This article investigates the role of proxy data in dealing with the common problem of missing data in clinical trials using repeated measures designs. In an effort to avoid the missing data situation, some proxy information can be gathered. The question is how to treat proxy information, that is, is it always better to utilize proxy information when there are missing data? A model for repeated measures data with missing values is considered and a strategy for utilizing proxy information is developed. Then, simulations are used to compare the power of a test using proxy information with that of simply utilizing all available data. It is concluded that using proxy information can be a useful alternative when such information is available. The implications for various clinical designs are also considered, and a data collection strategy for efficiently estimating parameters is suggested.

  13. The traditional algorithm approach underestimates the prevalence of serodiagnosis of syphilis in HIV-infected individuals.

    Directory of Open Access Journals (Sweden)

    Bin Chen

    2017-07-01

    Full Text Available Currently, there are three algorithms for screening of syphilis: the traditional algorithm, the reverse algorithm and the European Centre for Disease Prevention and Control (ECDC) algorithm. To date, there is no generally recognized diagnostic algorithm. When syphilis meets HIV, the situation is even more complex. To evaluate their screening performance and impact on the seroprevalence of syphilis in HIV-infected individuals, we conducted a cross-sectional study including 865 serum samples from HIV-infected patients in a tertiary hospital. Every sample (one per patient) was tested with the toluidine red unheated serum test (TRUST), the T. pallidum particle agglutination assay (TPPA), and a Treponema pallidum enzyme immunoassay (TP-EIA) according to the manufacturer's instructions. The results of syphilis serological testing were interpreted following the different algorithms respectively. We directly compared the traditional syphilis screening algorithm with the reverse syphilis screening algorithm in this unique population. The reverse algorithm yielded a remarkably higher seroprevalence of syphilis than the traditional algorithm (24.9% vs. 14.2%, p < 0.0001). Compared to the reverse algorithm, the traditional algorithm also had a missed serodiagnosis rate of 42.8%. The total percentages of agreement and corresponding kappa values of the traditional and ECDC algorithms compared with those of the reverse algorithm were as follows: 89.4%, 0.668; 99.8%, 0.994. There was a very good strength of agreement between the reverse and the ECDC algorithm. Our results supported the reverse (or ECDC) algorithm for screening of syphilis in HIV-infected populations. In addition, our study demonstrated that screening of HIV-infected populations using different algorithms may result in a statistically different seroprevalence of syphilis.
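
The agreement statistics quoted above (percent agreement and Cohen's kappa) can be computed with a short routine. This is a minimal sketch using made-up binary result vectors, not the study's data:

```python
def agreement_and_kappa(a, b):
    """Percent agreement and Cohen's kappa for two lists of 0/1 results
    (1 = seropositive under that screening algorithm)."""
    n = len(a)
    po = sum(x == y for x, y in zip(a, b)) / n   # observed agreement
    pa, pb = sum(a) / n, sum(b) / n              # positive rates of each test
    pe = pa * pb + (1 - pa) * (1 - pb)           # agreement expected by chance
    return po, (po - pe) / (1 - pe)

# Toy data: traditional vs. reverse algorithm calls on 8 samples
traditional = [1, 1, 0, 0, 1, 0, 0, 0]
reverse     = [1, 0, 0, 0, 1, 0, 0, 1]
po, kappa = agreement_and_kappa(traditional, reverse)
```

On these toy vectors the observed agreement is 0.75 but kappa is considerably lower, since kappa discounts the agreement expected by chance alone.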

  14. Studies into tau reconstruction, missing transverse energy and photon induced processes with the ATLAS detector at the LHC

    Energy Technology Data Exchange (ETDEWEB)

    Prabhu, Robindra P.

    2011-09-15

    The ATLAS experiment is currently recording data from proton-proton collisions delivered by CERN's Large Hadron Collider. As more data are amassed, studies of both Standard Model processes and searches for new physics beyond will intensify. This dissertation presents a three-part study providing new methods to help facilitate these efforts. The first part presents a novel τ-reconstruction algorithm for ATLAS inspired by the ideas of particle flow calorimetry. The algorithm is distinguished from traditional τ-reconstruction approaches in ATLAS insofar as it seeks to recognize decay topologies consistent with a (hadronically) decaying τ-lepton using resolved energy flow objects in the calorimeters. This procedure allows for an early classification of τ-candidates according to their decay mode and the use of decay-mode-specific discrimination against fakes. A detailed discussion of the algorithm is provided along with early performance results derived from simulated data. The second part presents a Monte Carlo simulation tool which, by way of a pseudorapidity-dependent parametrization of the jet energy resolution, provides a probabilistic estimate for the magnitude of instrumental contributions to missing transverse energy arising from jet fluctuations. The principles of the method are outlined and it is shown how the method can be used to populate tails of simulated missing transverse energy distributions suffering from low statistics. The third part explores the prospect of detecting photon-induced leptonic final states in early data. Such processes are distinguished from the more copious hadronic interactions at the LHC by cleaner final states devoid of hadronic debris; however, the soft character of the final-state leptons poses challenges to both trigger and offline selections. New trigger items enabling the online selection of such final states are presented, along with a study into the feasibility of detecting the two-photon exchange process

  15. Some 'Near Miss ' Experiences

    African Journals Online (AJOL)

    perienced health Workers, especially at lower level units, poor referral ... in the wards or operating theatre, and inability to access the busy health .... clinics, costs incurred and by who, who decided on hospitalisation, who .... pected pregnancy as I had missed my period the previous month. .... the patient received attention.

  16. MISSE in the Materials and Processes Technical Information System (MAPTIS )

    Science.gov (United States)

    Burns, DeWitt; Finckenor, Miria; Henrie, Ben

    2013-01-01

    Materials International Space Station Experiment (MISSE) data is now being collected and distributed through the Materials and Processes Technical Information System (MAPTIS) at Marshall Space Flight Center in Huntsville, Alabama. MISSE data has been instrumental in many programs and continues to be an important source of data for the space community. To facilitate greater access to the MISSE data, the International Space Station (ISS) program office and MAPTIS are working to gather this data into a central location. The MISSE database contains information about materials, samples, and flights along with pictures, PDFs, Excel files, Word documents, and other file types. Major capabilities of the system are: access control, browsing, searching, reports, and record comparison. The search capabilities search within any searchable files, so data can still be retrieved even if the desired metadata has not been associated. Other functionality will continue to be added to the MISSE database as the Athena Platform is expanded

  17. An Adaptive Tradeoff Algorithm for Multi-issue SLA Negotiation

    Science.gov (United States)

    Son, Seokho; Sim, Kwang Mong

    Since participants in a Cloud may be independent bodies, mechanisms are necessary for resolving different preferences in leasing Cloud services. Whereas there are currently mechanisms that support service-level agreement negotiation, there is little or no negotiation support for concurrent price and timeslot negotiation for Cloud service reservations. For concurrent price and timeslot negotiation, a tradeoff algorithm to generate and evaluate proposals that each consist of a price and a timeslot is necessary. The contribution of this work is thus the design of an adaptive tradeoff algorithm for a multi-issue negotiation mechanism. The tradeoff algorithm, referred to as "adaptive burst mode", is especially designed to increase negotiation speed and total utility and to reduce computational load by adaptively generating a concurrent set of proposals. The empirical results obtained from simulations carried out using a testbed suggest that, with the concurrent price and timeslot negotiation mechanism and the adaptive tradeoff algorithm: 1) both agents achieve the best performance in terms of negotiation speed and utility; and 2) the number of evaluations of each proposal is comparatively lower than in the previous scheme (burst-N).
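
The "burst" idea of issuing several equal-utility (price, timeslot) proposals at once can be sketched as follows. The additive utility function, its weights and ranges are illustrative assumptions, not the authors' exact formulation:

```python
P_MIN, P_MAX, N_SLOTS = 10.0, 100.0, 10   # assumed price range and slot count
W_PRICE, W_SLOT = 0.6, 0.4                # assumed issue weights

def utility(price, slot):
    """Provider-side utility: higher price and earlier timeslot are better."""
    u_price = (price - P_MIN) / (P_MAX - P_MIN)
    u_slot = 1 - slot / N_SLOTS
    return W_PRICE * u_price + W_SLOT * u_slot

def iso_utility_burst(target):
    """One 'burst': for every feasible timeslot, solve for the price that
    yields the target utility, giving a concurrent set of proposals."""
    proposals = []
    for slot in range(N_SLOTS):
        u_slot = 1 - slot / N_SLOTS
        u_price = (target - W_SLOT * u_slot) / W_PRICE
        if 0 <= u_price <= 1:             # price must stay inside its range
            proposals.append((P_MIN + u_price * (P_MAX - P_MIN), slot))
    return proposals

burst = iso_utility_burst(0.5)
```

Every proposal in the burst is equally acceptable to the proposer, so the opponent can pick whichever timeslot suits it best, which is how concurrent proposals speed up negotiation.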

  18. Moderation analysis with missing data in the predictors.

    Science.gov (United States)

    Zhang, Qian; Wang, Lijuan

    2017-12-01

    The most widely used statistical model for conducting moderation analysis is the moderated multiple regression (MMR) model. In MMR modeling, missing data can pose a challenge, mainly because the interaction term is a product of two or more variables and thus is a nonlinear function of the involved variables. In this study, we consider a simple MMR model, where the effect of the focal predictor X on the outcome Y is moderated by a moderator U. The primary interest is to find ways of estimating and testing the moderation effect in the presence of missing data in X. We mainly focus on cases when X is missing completely at random (MCAR) and missing at random (MAR). Three methods are compared: (a) normal-distribution-based maximum likelihood estimation (NML); (b) normal-distribution-based multiple imputation (NMI); and (c) Bayesian estimation (BE). Via simulations, we found that NML and NMI could lead to biased estimates of moderation effects under the MAR missingness mechanism. The BE method outperformed NMI and NML for MMR modeling with missing data in the focal predictor, missingness depending on the moderator and/or auxiliary variables, and correctly specified distributions for the focal predictor. In addition, BE methods that are more robust to mis-specification of the focal predictor's distribution are needed. An empirical example was used to illustrate the applications of the methods with a simple sensitivity analysis. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
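
The two missingness mechanisms compared in the study can be illustrated with a toy generator. This is a sketch only, with arbitrary missingness rates (0.3 for MCAR; 0.6 or 0.1 depending on the moderator for MAR):

```python
import random

def make_missing(x, u, mechanism, rng):
    """Return a copy of x with entries set to None. Under MCAR the rate is
    constant; under MAR it depends on the observed moderator u."""
    out = []
    for xi, ui in zip(x, u):
        p = 0.3 if mechanism == "MCAR" else (0.6 if ui > 0 else 0.1)
        out.append(None if rng.random() < p else xi)
    return out

rng = random.Random(0)
n = 2000
u = [1 if i % 2 else -1 for i in range(n)]   # moderator, half high / half low
x = [float(i) for i in range(n)]             # focal predictor
x_mar = make_missing(x, u, "MAR", rng)
rate_hi = sum(v is None for v, ui in zip(x_mar, u) if ui > 0) / (n // 2)
rate_lo = sum(v is None for v, ui in zip(x_mar, u) if ui <= 0) / (n // 2)
```

Under MAR the complete cases are no longer a random subsample (here, high-moderator cases are heavily over-deleted), which is exactly the situation in which naive treatments of missing X can bias the estimated interaction.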

  19. Algorithmic, LOCS and HOCS (chemistry) exam questions: performance and attitudes of college students

    Science.gov (United States)

    Zoller, Uri

    2002-02-01

    The performance of freshman biology and physics-mathematics majors and chemistry majors, as well as pre- and in-service chemistry teachers, in two Israeli universities on algorithmic (ALG), lower-order cognitive skills (LOCS), and higher-order cognitive skills (HOCS) chemistry exam questions was studied. The driving force for the study was an interest in moving science and chemistry instruction from an algorithmic and factual recall orientation dominated by LOCS to a decision-making, problem-solving and critical system thinking approach dominated by HOCS. College students' responses to the specially designed ALG, LOCS and HOCS chemistry exam questions were scored and analysed for differences and correlation between the performance means within and across universities by question category. This was followed by a combined student interview and 'speaking aloud' problem-solving session for assessing the thinking processes involved in solving these types of questions and the students' attitudes towards them. The main findings were: (1) students in both universities performed consistently in each of the three categories in the order ALG > LOCS > HOCS; their 'ideological' preference was HOCS > algorithmic/LOCS (the latter referred to as 'computational questions'), but their pragmatic preference was the reverse; (2) success on algorithmic/LOCS questions does not imply success on HOCS questions; algorithmic questions constitute a category of their own as far as students' success in solving them is concerned. Our study and its results support the effort being made, worldwide, to integrate HOCS-fostering teaching and assessment strategies and to develop HOCS-oriented science-technology-environment-society (STES)-type curricula within science and chemistry education.

  20. A Multiresolution Image Completion Algorithm for Compressing Digital Color Images

    Directory of Open Access Journals (Sweden)

    R. Gomathi

    2014-01-01

    Full Text Available This paper introduces a new framework for image coding that uses an image inpainting method. In the proposed algorithm, the input image is subjected to image analysis to remove some of the portions purposefully. At the same time, edges are extracted from the input image and passed to the decoder in compressed form. The edges transmitted to the decoder act as assistant information and help the inpainting process fill in the missing regions at the decoder. Texture synthesis and a new shearlet inpainting scheme based on the theory of the p-Laplacian operator are proposed for image restoration at the decoder. Shearlets have been mathematically proven to represent distributed discontinuities such as edges better than traditional wavelets and are a suitable tool for edge characterization. This novel shearlet p-Laplacian inpainting model can effectively reduce the staircase effect of the Total Variation (TV) inpainting model while still keeping edges as well as the TV model does. In the proposed scheme, a neural network is employed to enhance the compression ratio for image coding. Test results are compared with JPEG 2000 and H.264 intra-coding algorithms. The results show that the proposed algorithm works well.

  1. Statistical analysis of longitudinal quality of life data with missing measurements

    NARCIS (Netherlands)

    Zwinderman, A. H.

    1992-01-01

    The statistical analysis of longitudinal quality of life data in the presence of missing data is discussed. In cancer trials missing data are generated due to the fact that patients die, drop out, or are censored. These missing data are problematic in the monitoring of the quality of life during the

  2. An administrative data merging solution for dealing with missing data in a clinical registry: adaptation from ICD-9 to ICD-10

    Directory of Open Access Journals (Sweden)

    Galbraith P Diane

    2008-01-01

    Full Text Available Abstract Background We have previously described a method for dealing with missing data in a prospective cardiac registry initiative. The method involves merging registry data with corresponding ICD-9-CM administrative data to fill in missing data 'holes'. Here, we describe the process of translating our data merging solution to ICD-10, and then validating its performance. Methods A multi-step translation process was undertaken to produce an ICD-10 algorithm, and merging was then implemented to produce complete datasets for 1995–2001 based on the ICD-9-CM coding algorithm, and for 2002–2005 based on the ICD-10 algorithm. We used cardiac registry data for patients undergoing cardiac catheterization in fiscal years 1995–2005. The corresponding administrative data records were coded in ICD-9-CM for 1995–2001 and in ICD-10 for 2002–2005. The resulting datasets were then evaluated for their ability to predict death at one year. Results The prevalence of the individual clinical risk factors increased gradually across years. There was, however, no evidence of either an abrupt drop or rise in prevalence of any of the risk factors. The performance of the new data merging model was comparable to that of our previously reported methodology: c-statistic = 0.788 (95% CI 0.775, 0.802) for the ICD-10 model versus c-statistic = 0.784 (95% CI 0.780, 0.790) for the ICD-9-CM model. The two models also exhibited similar goodness-of-fit. Conclusion The ICD-10 implementation of our data merging method performs as well as the previously-validated ICD-9-CM method. Such methodological research is an essential prerequisite for research with administrative data now that most health systems are transitioning to ICD-10.
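
The merging step itself, filling registry "holes" from administrative records matched on a patient identifier, reduces to a simple lookup. A schematic sketch with hypothetical field names:

```python
def merge_fill(registry, admin):
    """Fill None fields in registry records from the administrative record
    with the same patient id, leaving observed registry values untouched."""
    admin_by_id = {rec["id"]: rec for rec in admin}
    merged = []
    for rec in registry:
        filled = dict(rec)
        source = admin_by_id.get(rec["id"], {})
        for key, value in rec.items():
            if value is None and source.get(key) is not None:
                filled[key] = source[key]
        merged.append(filled)
    return merged

# Hypothetical comorbidity flags: 1 = present, 0 = absent, None = missing
registry = [{"id": 1, "diabetes": None, "chf": 1},
            {"id": 2, "diabetes": 0,    "chf": None}]
admin    = [{"id": 1, "diabetes": 1, "chf": 0},
            {"id": 2, "diabetes": 1, "chf": 0}]
merged = merge_fill(registry, admin)
```

Note that registry values take precedence: the administrative source is consulted only where the registry field is missing, which is the design choice described in the abstract.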

  3. Exoatmospheric intercepts using zero effort miss steering for midcourse guidance

    Science.gov (United States)

    Newman, Brett

    The suitability of proportional navigation, or an equivalent zero effort miss formulation, for exoatmospheric intercepts during midcourse guidance, followed by a ballistic coast to the endgame, is addressed. The problem is formulated in terms of relative motion in a general, three dimensional framework. The proposed guidance law for the commanded thrust vector orientation consists of the sum of two terms: (1) along the line of sight unit direction and (2) along the zero effort miss component perpendicular to the line of sight and proportional to the miss itself and a guidance gain. If the guidance law is to be suitable for longer range targeting applications with significant ballistic coasting after burnout, determination of the zero effort miss must account for the different gravitational accelerations experienced by each vehicle. The proposed miss determination techniques employ approximations for the true differential gravity effect and thus, are less accurate than a direct numerical propagation of the governing equations, but more accurate than a baseline determination, which assumes equal accelerations for both vehicles. Approximations considered are constant, linear, quadratic, and linearized inverse square models. Theoretical results are applied to a numerical engagement scenario and the resulting performance is evaluated in terms of the miss distances determined from nonlinear simulation.
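
Under the baseline equal-acceleration assumption mentioned above (i.e. ignoring the differential-gravity corrections), the zero effort miss and the perpendicular steering term of the guidance law can be sketched as:

```python
def zem(r, v, t_go):
    """Zero-effort miss: predicted relative position after coasting for t_go,
    assuming both vehicles experience equal gravitational acceleration."""
    return [ri + vi * t_go for ri, vi in zip(r, v)]

def zem_perp(r, v, t_go):
    """Component of the ZEM perpendicular to the line of sight (along r)."""
    m = zem(r, v, t_go)
    coef = sum(mi * ri for mi, ri in zip(m, r)) / sum(ri * ri for ri in r)
    return [mi - coef * ri for mi, ri in zip(m, r)]

def commanded_accel(r, v, t_go, gain=3.0):
    """Acceleration proportional to the perpendicular miss over t_go squared;
    the along-LOS thrust term of the full law is omitted in this sketch."""
    return [gain * mi / t_go**2 for mi in zem_perp(r, v, t_go)]

# Relative state: 1 km down-range, closing at 100 m/s with 10 m/s cross-track
a_cmd = commanded_accel([1000.0, 0.0, 0.0], [-100.0, 10.0, 0.0], 10.0)
```

For this head-on geometry the predicted miss is purely cross-track, so the commanded acceleration acts only perpendicular to the line of sight.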

  4. 577 Missed opportunities for immunisation in Natal health facilities

    African Journals Online (AJOL)

    Wkly Epidemiol Rec 1984; 59: 117-119. 5. Expanded Programme on Immunization. Missed immunization opportunities and acceptability of immunization. Wkly Epidemiol Rec. 1989; 64: 181-184. 6. Loevinsohn BP. Missed opportunities for immunization during visits for curative care: practical reasons for their occurrence.

  5. 78 FR 37598 - Missing Participants in Individual Account Plans

    Science.gov (United States)

    2013-06-21

    ... information from the public to assist it in making decisions about implementing a new program to deal with... allocated to such participant's account.'' Before making decisions about implementing a missing participants... PENSION BENEFIT GUARANTY CORPORATION Missing Participants in Individual Account Plans AGENCY...

  6. Benchmarking homogenization algorithms for monthly data

    Directory of Open Access Journals (Sweden)

    V. K. C. Venema

    2012-01-01

    Full Text Available The COST (European Cooperation in Science and Technology) Action ES0601: advances in homogenization methods of climate series: an integrated approach (HOME) has executed a blind intercomparison and validation study for monthly homogenization algorithms. Time series of monthly temperature and precipitation were evaluated because of their importance for climate studies and because they represent two important types of statistics (additive and multiplicative). The algorithms were validated against a realistic benchmark dataset. The benchmark contains real inhomogeneous data as well as simulated data with inserted inhomogeneities. Random independent break-type inhomogeneities with normally distributed breakpoint sizes were added to the simulated datasets. To approximate real-world conditions, breaks were introduced that occur simultaneously in multiple station series within a simulated network of station data. The simulated time series also contained outliers, missing data periods and local station trends. Further, a stochastic nonlinear global (network-wide) trend was added.

    Participants provided 25 separate homogenized contributions as part of the blind study. After the deadline at which details of the imposed inhomogeneities were revealed, 22 additional solutions were submitted. These homogenized datasets were assessed by a number of performance metrics including (i) the centered root mean square error relative to the true homogeneous value at various averaging scales, (ii) the error in linear trend estimates and (iii) traditional contingency skill scores. The metrics were computed using both the individual station series and the network average regional series. The performance of the contributions depends significantly on the error metric considered. Contingency scores by themselves are not very informative. Although relative homogenization algorithms typically improve the homogeneity of temperature data, only the best ones improve
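
Two of the performance metrics named above have compact definitions. A sketch of the centered root mean square error (which removes each series' own mean, so a constant offset scores zero) and a least-squares linear trend whose error can be compared across contributions:

```python
import math

def centered_rmse(est, truth):
    """RMSE between two series after removing each series' own mean."""
    me, mt = sum(est) / len(est), sum(truth) / len(truth)
    sq = sum(((e - me) - (t - mt)) ** 2 for e, t in zip(est, truth))
    return math.sqrt(sq / len(est))

def linear_trend(y):
    """Least-squares slope of y against its (0-based) time index."""
    n = len(y)
    mt, my = (n - 1) / 2, sum(y) / n
    num = sum((t - mt) * (yi - my) for t, yi in enumerate(y))
    den = sum((t - mt) ** 2 for t in range(n))
    return num / den
```

A homogenized series that is merely shifted by a constant scores a centered RMSE of zero, so the metric isolates errors in variability and break corrections rather than in absolute level.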

  7. Cyclists’ Anger As Determinant of Near Misses Involving Different Road Users

    Directory of Open Access Journals (Sweden)

    Víctor Marín Puchades

    2017-12-01

    Full Text Available Road anger constitutes one of the determinant factors related to safety outcomes (e.g., accidents, near misses). Although cyclists are considered vulnerable road users due to their relatively high rate of fatalities in traffic, previous research has focused solely on car drivers, and no study has yet investigated the effect of anger on cyclists' safety outcomes. The present research aims to investigate, for the first time, the effects of cycling anger toward different types of road users on near misses involving such road users and near misses in general. Using a daily diary web-based questionnaire, we collected data about daily trips, bicycle use, near misses experienced, cyclist's anger and demographic information from 254 Spanish cyclists. Poisson regression was used to assess the association of cycling anger with near misses, which is a count variable. No relationship was found between general cycling anger and near-miss occurrence. Anger toward specific road users had different effects on the probability of near misses with different road users. Anger toward interactions with car drivers increased the probability of near misses involving cyclists and pedestrians. Anger toward interactions with pedestrians was associated with a higher probability of near misses with pedestrians. Anger toward cyclists exerted no effect on the probability of near misses with any road user (i.e., car drivers, cyclists or pedestrians), whereas anger toward interactions with the police had a diminishing effect on the occurrence of near misses involving all types of road users. The present study demonstrated that the effect of road anger on safety outcomes among cyclists is different from that of motorists. Moreover, the target of anger played an important role in safety both for the cyclist and the specific road users. Possible explanations for these differences are based on the difference in status and power with motorists, as well as on the potential
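
Poisson regression of a count outcome on a predictor, as used in the study, can be fit by maximizing the log-likelihood directly. A bare-bones sketch with toy data (one anger predictor, fit by gradient ascent), not the authors' model specification:

```python
import math

def fit_poisson(x, y, lr=0.002, steps=20000):
    """Univariate Poisson regression log(lambda_i) = b0 + b1*x_i,
    fit by plain gradient ascent on the log-likelihood."""
    b0 = b1 = 0.0
    for _ in range(steps):
        g0 = g1 = 0.0
        for xi, yi in zip(x, y):
            resid = yi - math.exp(b0 + b1 * xi)   # score contributions
            g0 += resid
            g1 += resid * xi
        b0 += lr * g0
        b1 += lr * g1
    return b0, b1

# Toy counts that double per unit of the predictor: MLE is b0 = 0, b1 = ln 2
anger = [0.0, 1.0, 2.0, 3.0]
near_misses = [1, 2, 4, 8]
b0, b1 = fit_poisson(anger, near_misses)
```

A positive `b1` means the expected near-miss count multiplies by exp(b1) per unit increase in the anger score, which is how such coefficients are interpreted in the study.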

  8. Application of particle swarm optimization algorithm in the heating system planning problem.

    Science.gov (United States)

    Ma, Rong-Jiang; Yu, Nan-Yang; Hu, Jun-Yi

    2013-01-01

    Based on the life cycle cost (LCC) approach, this paper presents an integral mathematical model and a particle swarm optimization (PSO) algorithm for the heating system planning (HSP) problem. The proposed mathematical model minimizes the cost of the heating system over a given life cycle time as its objective. Given the particularities of the HSP problem, the general particle swarm optimization algorithm was improved. An actual case study was calculated to check its feasibility in practical use. The results show that the improved particle swarm optimization (IPSO) algorithm solves the HSP problem better than the standard PSO algorithm. Moreover, the results also show the potential to provide useful information for decisions in the practical planning process. Therefore, it is believed that if this approach is applied correctly and in combination with other elements, it can become a powerful and effective optimization tool for the HSP problem.
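
A minimal particle swarm optimizer illustrates the general PSO scheme that the paper starts from, before its problem-specific improvements. Standard inertia/cognitive/social parameters and a velocity clamp are assumed:

```python
import random

def pso(f, dim, n=20, iters=200, lo=-5.0, hi=5.0,
        w=0.7, c1=1.5, c2=1.5, seed=1):
    """Plain global-best PSO minimizing f over the box [lo, hi]^dim."""
    rng = random.Random(seed)
    v_max = 0.2 * (hi - lo)
    xs = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n)]
    vs = [[0.0] * dim for _ in range(n)]
    pbest = [x[:] for x in xs]                 # per-particle best positions
    pbest_f = [f(x) for x in xs]
    g = min(range(n), key=lambda i: pbest_f[i])
    gbest, gbest_f = pbest[g][:], pbest_f[g]   # swarm-wide best
    for _ in range(iters):
        for i in range(n):
            for d in range(dim):
                vs[i][d] = (w * vs[i][d]
                            + c1 * rng.random() * (pbest[i][d] - xs[i][d])
                            + c2 * rng.random() * (gbest[d] - xs[i][d]))
                vs[i][d] = max(-v_max, min(v_max, vs[i][d]))  # clamp velocity
                xs[i][d] += vs[i][d]
            fi = f(xs[i])
            if fi < pbest_f[i]:
                pbest[i], pbest_f[i] = xs[i][:], fi
                if fi < gbest_f:
                    gbest, gbest_f = xs[i][:], fi
    return gbest, gbest_f

# Minimize a simple sphere function as a stand-in objective
best, best_f = pso(lambda x: sum(xi * xi for xi in x), dim=2)
```

For the HSP problem the objective would instead be the life cycle cost of a candidate heating-system plan, with the authors' improvements layered on top of this basic loop.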

  9. Results of Database Studies in Spine Surgery Can Be Influenced by Missing Data.

    Science.gov (United States)

    Basques, Bryce A; McLynn, Ryan P; Fice, Michael P; Samuel, Andre M; Lukasiewicz, Adam M; Bohl, Daniel D; Ahn, Junyoung; Singh, Kern; Grauer, Jonathan N

    2017-12-01

    National databases are increasingly being used for research in spine surgery; however, one limitation of such databases that has received sparse mention is the frequency of missing data. Studies using these databases often do not emphasize the percentage of missing data for each variable used and do not specify how patients with missing data are incorporated into analyses. This study uses the American College of Surgeons National Surgical Quality Improvement Program (ACS-NSQIP) database to examine whether different treatments of missing data can influence the results of spine studies. (1) What is the frequency of missing data fields for demographics, medical comorbidities, preoperative laboratory values, operating room times, and length of stay recorded in ACS-NSQIP? (2) Using three common approaches to handling missing data, how frequently do those approaches agree in terms of finding particular variables to be associated with adverse events? (3) Do different approaches to handling missing data influence the outcomes and effect sizes of an analysis testing for an association with these variables with occurrence of adverse events? Patients who underwent spine surgery between 2005 and 2013 were identified from the ACS-NSQIP database. A total of 88,471 patients undergoing spine surgery were identified. The most common procedures were anterior cervical discectomy and fusion, lumbar decompression, and lumbar fusion. Demographics, comorbidities, and perioperative laboratory values were tabulated for each patient, and the percent of missing data was noted for each variable. These variables were tested for an association with "any adverse event" using three separate multivariate regressions that used the most common treatments for missing data. In the first regression, patients with any missing data were excluded. In the second regression, missing data were treated as a negative or "reference" value; for continuous variables, the mean of each variable's reference range
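
The two simplest treatments compared in such studies, exclusion of incomplete cases and substitution of a fixed "reference" value, can change even a basic summary statistic. A toy sketch, with mean imputation added for contrast:

```python
def complete_case(values):
    """Drop observations with missing (None) entries."""
    return [v for v in values if v is not None]

def fill_reference(values, ref):
    """Replace missing entries with a fixed reference value."""
    return [ref if v is None else v for v in values]

def mean_impute(values):
    """Replace missing entries with the mean of the observed entries."""
    observed = complete_case(values)
    return fill_reference(values, sum(observed) / len(observed))

# Hypothetical preoperative lab values with one missing entry
labs = [1.0, 2.0, None, 3.0]
m_cc = sum(complete_case(labs)) / len(complete_case(labs))   # 2.0
m_ref = sum(fill_reference(labs, 0.0)) / len(labs)           # 1.5
m_mi = sum(mean_impute(labs)) / len(labs)                    # 2.0
```

The same divergence propagates into regression coefficients, which is the abstract's point: results can hinge on which treatment of missing data a database study silently adopts.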

  10. Patient understanding of oral contraceptive pill instructions related to missed pills: a systematic review.

    Science.gov (United States)

    Zapata, Lauren B; Steenland, Maria W; Brahmi, Dalia; Marchbanks, Polly A; Curtis, Kathryn M

    2013-05-01

    Instructions on what to do after pills are missed are critical to reducing unintended pregnancies resulting from patient non-adherence to oral contraceptive (OC) regimens. Missed pill instructions have previously been criticized for being too complex, lacking a definition of what is meant by "missed pills," and for being confusing to women who may not know the estrogen content of their formulation. To help inform the development of missed pill guidance to be included in the forthcoming US Selected Practice Recommendations, the objective of this systematic review was to evaluate the evidence on patient understanding of missed pill instructions. We searched the PubMed database for peer-reviewed articles that examined patient understanding of OC pill instructions that were published in any language from inception of the database through March 2012. We included studies that examined women's knowledge and understanding of missed pill instructions after exposure to some written material (e.g., patient package insert, brochure), as well as studies that compared different types of missed pill instructions on women's comprehension. We used standard abstract forms and grading systems to summarize and assess the quality of the evidence. From 1620 articles, nine studies met our inclusion criteria. Evidence from one randomized controlled trial (RCT) and two descriptive studies found that more women knew what to do after missing 1 pill than after missing 2 or 3 pills (Level I, good, to Level II-3, poor), and two descriptive studies found that more women knew what to do after missing 2 pills than after missing 3 pills (Level II-3, fair). Data from two descriptive studies documented the difficulty women have understanding missed pill instructions contained in patient package inserts (Level II-3, poor), and evidence from two RCTs found that providing written brochures with information on missed pill instructions in addition to contraceptive counseling significantly improved

  11. Uncovering missing links with cold ends

    Science.gov (United States)

    Zhu, Yu-Xiao; Lü, Linyuan; Zhang, Qian-Ming; Zhou, Tao

    2012-11-01

    To evaluate the performance of prediction of missing links, the known data are randomly divided into two parts, the training set and the probe set. We argue that this straightforward and standard method may lead to terrible bias, since in real biological and information networks, missing links are more likely to be links connecting low-degree nodes. We therefore study how to uncover missing links with low-degree nodes, namely links in the probe set are of lower degree products than a random sampling. Experimental analysis on ten local similarity indices and four disparate real networks reveals a surprising result that the Leicht-Holme-Newman index [E.A. Leicht, P. Holme, M.E.J. Newman, Vertex similarity in networks, Phys. Rev. E 73 (2006) 026120] performs the best, although it was known to be one of the worst indices if the probe set is a random sampling of all links. We further propose a parameter-dependent index, which considerably improves the prediction accuracy. Finally, we show the relevance of the proposed index to three real sampling methods: acquaintance sampling, random-walk sampling and path-based sampling.
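
The Leicht-Holme-Newman index discussed above has a one-line definition: the number of common neighbours of x and y divided by the product of their degrees. A sketch on a toy graph:

```python
def lhn_index(adj, x, y):
    """Leicht-Holme-Newman similarity: |common neighbours| / (k_x * k_y)."""
    nx, ny = adj[x], adj[y]
    if not nx or not ny:
        return 0.0
    return len(nx & ny) / (len(nx) * len(ny))

# Toy graph as an adjacency dict of neighbour sets
adj = {"a": {"b", "c"}, "b": {"a", "c"}, "c": {"a", "b", "d"}, "d": {"c"}}
s_ab = lhn_index(adj, "a", "b")   # 1 common neighbour, degrees 2 and 2
s_ad = lhn_index(adj, "a", "d")   # 1 common neighbour, degrees 2 and 1
```

Because the degree product sits in the denominator, the low-degree pair (a, d) scores higher than (a, b); this sensitivity to "cold ends" is consistent with the index's strength on low-degree probe links reported above.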

  12. Food Culture, Preferences and Ethics in Dysphagia Management.

    Science.gov (United States)

    Kenny, Belinda

    2015-11-01

    Adults with dysphagia experience difficulties swallowing food and fluids, with potentially harmful health and psychosocial consequences. Speech pathologists who manage patients with dysphagia are frequently required to address ethical issues when patients' food culture and/or preferences are inconsistent with recommended diets. These issues incorporate complex links between food, identity and social participation. A composite case has been developed to reflect ethical issues identified by practising speech pathologists for the purposes of illustrating ethical concerns in dysphagia management. The case examines a speech pathologist's role in supporting patient autonomy when patients and carers express different goals and values. The case presents a 68-year-old man of Australian/Italian heritage with severe swallowing impairment and strong values attached to food preferences. The case is examined through application of the dysphagia algorithm, a tool for shared decision-making when patients refuse dietary modifications. Case analysis revealed the benefits and challenges of shared decision-making processes in dysphagia management. Four health professional skills and attributes were identified as synonymous with shared decision making: communication, imagination, courage and reflection. © 2015 John Wiley & Sons Ltd.

  13. Transitivity of Preferences

    Science.gov (United States)

    Regenwetter, Michel; Dana, Jason; Davis-Stober, Clintin P.

    2011-01-01

    Transitivity of preferences is a fundamental principle shared by most major contemporary rational, prescriptive, and descriptive models of decision making. To have transitive preferences, a person, group, or society that prefers choice option "x" to "y" and "y" to "z" must prefer "x" to…

  14. Mani, Miss Anna Modayil

    Indian Academy of Sciences (India)

    Home; Fellowship. Fellow Profile. Elected: 1960 Section: Earth & Planetary Sciences. Mani, Miss Anna Modayil A.I.I.Sc., FNA 1971-79; Secretary 1977-79. Date of birth: 23 August 1918. Date of death: 16 August 2001. Specialization: Atmospheric Physics and Instrumentation Last known address: c/o Mr K.T. Chandy, 14, ...

  15. Do attendees at sexual health and HIV clinics prefer to be called in by name or number?

    Science.gov (United States)

    Dabis, R; Nightingale, P; Kumar, V; Jaffer, K; Radcliffe, K

    2014-06-01

    Calling patients in from the waiting area is an important aspect of the initial medical encounter. According to national and international guidelines, clinics should decide on an appropriate way of calling patients in from the waiting room for consultations; however, no preference is actually recommended. A survey was carried out to see if patients were happy to be called in by number, first name, surname, full name, or title (Mr/Mrs/Miss/Ms) followed by surname. One hundred unselected patients were drawn from each of the following clinics: a genito-urinary medicine (GUM) clinic, a co-located GUM (cGUM) and co-located reproductive health (cRH) clinic, an HIV clinic, and a reproductive health (RH) clinic. Patients from the GUM, cGUM, cRH and RH clinics preferred to be called in by number rather than full name or title. Patients from the cRH clinic also preferred number to first name. In contrast, patients from the HIV clinics preferred to be called in by first name rather than number, surname, full name or title. Following this survey it would appear that number is the most popular method of calling patients in sexual and reproductive health clinics, while first name is the choice in HIV clinics. © The Author(s) 2013.

  16. Erythropoiesis stimulating agent (ESA) use is increased following missed dialysis sessions

    Directory of Open Access Journals (Sweden)

    T. Christopher Bond

    2012-06-01

    Missed session episodes result in significant increases in ESA utilization in the post-miss period, and also in total monthly ESA use. Such increases should be considered in any assessment of impact of missed sessions: both clinical and economic.

  17. Is STAPLE algorithm confident to assess segmentation methods in PET imaging?

    International Nuclear Information System (INIS)

    Dewalle-Vignion, Anne-Sophie; Betrouni, Nacim; Vermandel, Maximilien; Baillet, Clio

    2015-01-01

    Accurate tumor segmentation in [18F]-fluorodeoxyglucose positron emission tomography is crucial for tumor response assessment and target volume definition in radiation therapy. Evaluation of segmentation methods from clinical data without ground truth is usually based on physicians' manual delineations. In this context, the simultaneous truth and performance level estimation (STAPLE) algorithm could be useful to manage the multi-observer variability. In this paper, we evaluated how accurately this algorithm can estimate the ground truth in PET imaging. A complete evaluation study using different criteria was performed on simulated data. The STAPLE algorithm was applied to manual and automatic segmentation results. A specific configuration of the implementation provided by the Computational Radiology Laboratory was used. The consensus obtained by the STAPLE algorithm from manual delineations appeared to be more accurate than the manual delineations themselves (80% overlap). An improvement in accuracy was also observed when applying the STAPLE algorithm to automatic segmentation results. The STAPLE algorithm, with the configuration used in this paper, is more appropriate than manual delineations alone or automatic segmentation results alone to estimate the ground truth in PET imaging. Therefore, it might be preferred to assess the accuracy of tumor segmentation methods in PET imaging. (paper)

  18. Is STAPLE algorithm confident to assess segmentation methods in PET imaging?

    Science.gov (United States)

    Dewalle-Vignion, Anne-Sophie; Betrouni, Nacim; Baillet, Clio; Vermandel, Maximilien

    2015-12-01

    Accurate tumor segmentation in [18F]-fluorodeoxyglucose positron emission tomography is crucial for tumor response assessment and target volume definition in radiation therapy. Evaluation of segmentation methods from clinical data without ground truth is usually based on physicians' manual delineations. In this context, the simultaneous truth and performance level estimation (STAPLE) algorithm could be useful to manage the multi-observer variability. In this paper, we evaluated how accurately this algorithm can estimate the ground truth in PET imaging. A complete evaluation study using different criteria was performed on simulated data. The STAPLE algorithm was applied to manual and automatic segmentation results. A specific configuration of the implementation provided by the Computational Radiology Laboratory was used. The consensus obtained by the STAPLE algorithm from manual delineations appeared to be more accurate than the manual delineations themselves (80% overlap). An improvement in accuracy was also observed when applying the STAPLE algorithm to automatic segmentation results. The STAPLE algorithm, with the configuration used in this paper, is more appropriate than manual delineations alone or automatic segmentation results alone to estimate the ground truth in PET imaging. Therefore, it might be preferred to assess the accuracy of tumor segmentation methods in PET imaging.
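    The core of STAPLE is an EM iteration that alternates between a voxel-wise consensus estimate and per-rater sensitivity/specificity estimates. The following is a much-simplified sketch of that idea for binary segmentations, with a fixed global prior; it is not the Computational Radiology Laboratory implementation used in the paper:

```python
import numpy as np

def staple(D, n_iter=50):
    """EM sketch of STAPLE for binary segmentations.

    D: (n_raters, n_voxels) array of 0/1 decisions.
    Returns the voxel-wise consensus probability W and each rater's
    estimated sensitivity p and specificity q.
    """
    R, N = D.shape
    p = np.full(R, 0.9)            # initial sensitivities
    q = np.full(R, 0.9)            # initial specificities
    prior = D.mean()               # global foreground prior, kept fixed here
    W = np.full(N, prior)
    for _ in range(n_iter):
        # E-step: posterior probability that each voxel is foreground
        a = prior * np.prod(np.where(D == 1, p[:, None], 1 - p[:, None]), axis=0)
        b = (1 - prior) * np.prod(np.where(D == 0, q[:, None], 1 - q[:, None]), axis=0)
        W = a / (a + b)
        # M-step: re-estimate each rater's performance parameters
        p = (D * W).sum(axis=1) / W.sum()
        q = ((1 - D) * (1 - W)).sum(axis=1) / (1 - W).sum()
    return W, p, q

# Three raters on six voxels; raters 0 and 1 agree, rater 2 is noisy.
D = np.array([[1, 1, 1, 0, 0, 0],
              [1, 1, 1, 0, 0, 0],
              [1, 1, 0, 0, 0, 1]])
W, p, q = staple(D)
print((W > 0.5).astype(int))  # consensus follows the two reliable raters
```

The consensus weights each rater by its estimated reliability, which is why the STAPLE estimate can beat any single delineation.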

  19. Posttraumatic stress disorder in women with war missing family members.

    Science.gov (United States)

    Baraković, Devla; Avdibegović, Esmina; Sinanović, Osman

    2014-12-01

    Research in crisis areas indicates that survivors' responses to the forced disappearance of family members are similar to reactions to other traumatic events. The aim of this study was to determine the presence of symptoms of posttraumatic stress disorder (PTSD) in women with war missing family members in Bosnia and Herzegovina 18 years after the war in this region (1992-1995). The study included 160 women aged 47.1±14.0 from three regions of Bosnia and Herzegovina. It was carried out in the period from April 2010 to May 2011. Of the 160 participants, 120 women had a war missing family member and 40 women had no war missing family members. The Harvard Trauma Questionnaire (HTQ), the Beck Depression Inventory (BDI) and the Hamilton Anxiety Rating Scale (HAMA) were used for data collection. Basic socio-demographic data and data concerning the missing family members were also collected. Women with war missing family members experienced significantly more traumatic war experiences (18.43±5.27 vs 6.57±4.34) than women without war missing family members, and showed significantly more severe PTSD symptoms. Based on the results of this study, it was determined that the forced disappearance of a family member is an ambiguous situation that can be characterized as a traumatic experience.

  20. Why Patients Miss Follow-Up Appointments: A Prospective Control ...

    African Journals Online (AJOL)

    Reasons include: transport (19 responses), ill-health (6) and financial constraints (5). State transport was unavailable to almost two-thirds of the responders who cited transport as a problem. Conclusions: The 17% missed appointment rate is largely due to transport constraints. The commonest time for patients to miss ...

  1. 40 CFR 98.405 - Procedures for estimating missing data.

    Science.gov (United States)

    2010-07-01

    ... meter malfunctions), a substitute data value for the missing quantity measurement must be used in the... period for any reason, the reporter shall use either its delivering pipeline measurements or the default... § 98.405 Procedures for estimating missing data. (a) Whenever a quality-assured value of the quantity...

  2. Enhanced Seamless Handover Algorithm for WiMAX and LTE Roaming

    Directory of Open Access Journals (Sweden)

    HINDIA, M. N.

    2014-11-01

    With ever-evolving mobile communication technology, achieving high-quality seamless mobility across mobile networks is a pressing challenge for research and development engineers. Existing algorithms are used to make handovers while a mobile station roams between cells, but they exhibit some handover instability due to the way the handover decision is made. This paper proposes an enhanced handover algorithm that substantially reduces handover redundancy in vertical and horizontal handovers. It also enables users to select the most appropriate target network technology based on their preferences, even in the worst case where the mobile station roams between cell boundaries, and it performs efficiently in critical areas with heavy interference. The proposed algorithm uses additional quality-of-service criteria, such as cost, delay, available bandwidth and network condition, with two handover thresholds to achieve a better seamless handover process. After developing and testing this algorithm, the simulation results show a major reduction in redundant handovers, yielding high accuracy for horizontal and vertical handovers. Moreover, the signal strength is kept above the threshold during the whole simulation period, while maintaining lower delay and connection cost than the other two algorithms in both scenarios.
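    The multi-criteria decision step described above can be illustrated with a toy scoring rule plus a hysteresis margin that suppresses ping-pong handovers. The criteria names, weights, and normalization below are hypothetical, not the paper's algorithm:

```python
def score(net, weights):
    """Weighted score of a network; criteria are assumed pre-normalized to
    [0, 1], with delay and cost already inverted so that higher is better."""
    return sum(weights[k] * net[k] for k in weights)

def handover_decision(serving, candidate, weights, hysteresis=0.1):
    """Hand over only when the candidate beats the serving network by more
    than the hysteresis margin, suppressing redundant handovers."""
    return score(candidate, weights) > score(serving, weights) + hysteresis

w = {"signal": 0.4, "bandwidth": 0.3, "delay": 0.2, "cost": 0.1}
serving = {k: 0.5 for k in w}
print(handover_decision(serving, {k: 0.55 for k in w}, w))  # small gain: stay
print(handover_decision(serving, {k: 0.9 for k in w}, w))   # large gain: hand over
```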

  3. Automated measurement of spatial preference in the open field test with transmitted lighting.

    Science.gov (United States)

    Kulikov, Alexander V; Tikhonova, Maria A; Kulikov, Victor A

    2008-05-30

    A new modification of the open field test was designed to improve automation of the test. The main innovations were: (1) transmitted lighting and (2) estimation of the probability of finding pixels associated with an animal in a selected region of the arena as an objective index of spatial preference. Transmitted (inverted) lighting markedly improved the contrast between animal and arena and made it possible to track white animals as effectively as colored ones. The use of probability as a measure of preference for a selected region was mathematically justified and experimentally verified. A good correlation between probability and classic indices of spatial preference (number of region entries and time spent therein) was shown. The algorithm for calculating the probability of finding pixels associated with an animal in the selected region was implemented in the EthoStudio software. Significant interstrain differences in locomotion and central zone preference (an index of anxiety) were shown using the inverted lighting and the EthoStudio software in mice of six inbred strains. The effects of arena shape (circle or square) and the presence of a novel object in the center of the arena on open field behavior in mice were also studied.

  4. Missing data and the accuracy of magnetic-observatory hour means

    Directory of Open Access Journals (Sweden)

    J. J. Love

    2009-09-01

    Analysis is made of the accuracy of magnetic-observatory hourly means constructed from definitive minute data having missing values (gaps). Bootstrap sampling from different data-gap distributions is used to estimate average errors on hourly means as a function of the number of missing data. Absolute and relative error results are calculated for horizontal-intensity, declination, and vertical-component data collected at high, medium, and low magnetic latitudes. For 90% complete coverage (10% missing data), average (RMS) absolute errors on hourly means are generally less than the errors permitted by Intermagnet for minute data. As a rule of thumb, the average relative error for hourly means with 10% missing minute data is approximately equal to 10% of the hourly standard deviation of the source minute data.
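    The bootstrap idea, estimating the error of an hourly mean as a function of how many minute values are missing, can be sketched as follows (synthetic minute data; the paper uses real observatory data and realistic gap distributions):

```python
import random
import statistics

def hourly_mean_error(minutes, n_missing, n_boot=2000, seed=1):
    """Average absolute error of the hourly mean when n_missing of the
    minute values are dropped at random (bootstrap over gap positions)."""
    rng = random.Random(seed)
    true_mean = statistics.fmean(minutes)
    errors = []
    for _ in range(n_boot):
        kept = rng.sample(range(len(minutes)), len(minutes) - n_missing)
        errors.append(abs(statistics.fmean(minutes[i] for i in kept) - true_mean))
    return statistics.fmean(errors)

# Synthetic hour of minute data (large offset plus small variation)
minutes = [20000 + (i * 37) % 11 for i in range(60)]
print(hourly_mean_error(minutes, 6))   # error with 10% of the minutes missing
```

As expected, the average error grows with the number of missing minutes and vanishes when the hour is complete.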

  5. Missing Value Imputation Based on Gaussian Mixture Model for the Internet of Things

    Directory of Open Access Journals (Sweden)

    Xiaobo Yan

    2015-01-01

    This paper addresses missing value imputation for the Internet of Things (IoT). Nowadays, the IoT is used widely across a variety of domains, such as transportation, logistics, and healthcare. However, missing values are very common in the IoT for a variety of reasons, leaving experimental data incomplete. As a result, some work related to IoT data cannot be carried out normally, which reduces the accuracy and reliability of data analysis results. Based on the characteristics of the data itself and the features of missing data in the IoT, this paper divides the missing data into three types and defines three corresponding missing value imputation problems. We then propose three new models to solve the corresponding problems: a model of missing value imputation based on context and linear mean (MCL), a model of missing value imputation based on binary search (MBS), and a model of missing value imputation based on Gaussian mixture model (MGI). Experimental results showed that the three models can greatly and effectively improve the accuracy, reliability, and stability of missing value imputation.
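    As a rough illustration of the "context and linear mean" flavour of imputation (a drastically simplified stand-in for the paper's MCL model, not its actual algorithm), gaps in a series can be filled by interpolating between the nearest observed neighbours:

```python
def impute_linear(series):
    """Fill None gaps by linear interpolation between the nearest observed
    neighbours, falling back to the nearest value at the series ends."""
    vals = list(series)
    n = len(vals)
    for i in range(n):
        if series[i] is None:
            left = next((j for j in range(i - 1, -1, -1) if series[j] is not None), None)
            right = next((j for j in range(i + 1, n) if series[j] is not None), None)
            if left is not None and right is not None:
                t = (i - left) / (right - left)
                vals[i] = series[left] + t * (series[right] - series[left])
            elif left is not None:
                vals[i] = series[left]
            elif right is not None:
                vals[i] = series[right]
    return vals

print(impute_linear([1, None, None, None, 5]))  # [1, 2.0, 3.0, 4.0, 5]
```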

  6. Nurses' personal and ward accountability and missed nursing care: A cross-sectional study.

    Science.gov (United States)

    Srulovici, Einav; Drach-Zahavy, Anat

    2017-10-01

    Missed nursing care is considered an act of omission with potentially detrimental consequences for patients, nurses, and organizations. Although the theoretical conceptualization of missed nursing care specifies nurses' values, attitudes, and perceptions of their work environment as its core antecedents, empirical studies have mainly focused on nurses' socio-demographic and professional attributes. Furthermore, assessment of missed nursing care has been mainly based on same-source methods. This study aimed to test the joint effects of personal and ward accountability on missed nursing care, by using both focal (the nurse whose missed nursing care is examined) and incoming (the nurse responsible for the same patients at the subsequent shift) nurses' assessments of missed nursing care. A cross-sectional design was used, with nurses nested in wards. A total of 172 focal and 123 incoming nurses from 32 nursing wards in eight hospitals participated. Missed nursing care was assessed with the 22-item MISSCARE survey using two sources: focal and incoming nurses. Personal and ward accountability were assessed by the focal nurse with two 19-item scales. Nurses' socio-demographics and ward and shift characteristics were also collected. Mixed linear models were used as the analysis strategy. Focal and incoming nurses reported occasional missed nursing care of the focal nurse (Mean=1.87, SD=0.71 and Mean=2.09, SD=0.84, respectively; r=0.55). Controlling for personal socio-demographic characteristics, higher personal accountability was significantly associated with decreased missed care (β=-0.29), and the interaction effect was significant (β=-0.31), with ward accountability moderating the relationship between personal accountability and missed nursing care. Similar patterns were obtained for the incoming nurses' assessment of the focal nurse's missed care. Use of focal and incoming nurses' missed nursing care assessments limited the common source bias and strengthened our findings. Personal and ward accountability are significant values, which are associated with

  7. Intelligence in Ecology: How Internet of Things Expands Insights into the Missing CO2 Sink

    Directory of Open Access Journals (Sweden)

    Wenfeng Wang

    2016-01-01

    Arid regions characterize more than 30% of the Earth's total land surface area, and the area is still increasing due to desertification trends, yet the extent to which they modulate the global C balance has been inadequately studied. As an emerging technology, IoT monitoring can connect researchers, instruments, and field sites and generate archival data for a better understanding of soil abiotic CO2 uptake in arid regions. Image-similarity analyses based on IoT monitoring can help ecologists find sites where abiotic uptake can temporally dominate and understand how negative soil respiration fluxes are produced, while IoT monitoring with a set of intelligent video recognition algorithms enables ecologists to revisit these sites and the experimental details through the videos. Therefore, IoT monitoring of geospatial images, videos, and associated optimization and control algorithms should be a research priority towards expanding insights into soil abiotic CO2 uptake and a better understanding of how the uptake happens in arid regions. Nevertheless, there are still considerable uncertainties and difficulties in determining the overall perspective of IoT monitoring for insights into the missing CO2 sink.

  8. Missing level corrections using neutron spacings

    International Nuclear Information System (INIS)

    Mitchell, G.E.; Shriner, J.F. Jr.

    2009-11-01

    Nuclear level densities are very important for a wide variety of pure and applied neutron physics. Most of the relevant information is obtained from neutron resonance data. The key correction to the raw experimental data is for missing levels. All of the standard correction methods assume that the neutron resonances obey the predictions of the Gaussian Orthogonal Ensemble version of Random Matrix Theory (RMT) and utilize comparison with the Porter-Thomas distribution of reduced widths in order to determine the fraction of missing levels. Here we adopt an alternate approach, comparing the neutron data with the predictions of RMT for eigenvalue statistics. Since in RMT the widths and eigenvalues are independent, analysis of the eigenvalues provides an independent analysis of the same data set. We summarize recent work in this area using the nearest neighbour spacing distribution, and we also develop tests that utilize several other eigenvalue statistics to provide additional estimates of the missing fraction of levels. These additional statistics include the key test for long range order, the Dyson-Mehta Δ3 statistic, as well as the thermodynamic energy (that arises from Dyson's Circular Orthogonal Ensemble), the linear correlation coefficient of adjacent spacings (a measure of short range anti-correlation), and a statistic related to the Q statistic defined by Dyson and Mehta in the early 1960s. The developed FORTRAN code is available at http://www-nds.iaea.org/missing-levels/. These tests are applied to the s-wave neutron resonances in n + 238U and n + 232Th. The results for 238U are consistent with each other and raise some issues concerning data purity. For the 232Th data, all of the tests are in excellent agreement. (author)
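    The eigenvalue statistics mentioned above operate on sequences of unfolded level spacings. The sketch below samples independent spacings from the Wigner surmise and computes the linear correlation coefficient of adjacent spacings; independent draws reproduce the GOE spacing distribution but not its level-level correlations, so the coefficient here sits near zero, whereas a true GOE sequence shows short-range anti-correlation (departures from the expected statistics are what signal missing levels):

```python
import math
import random

def wigner_spacing(rng):
    """One spacing drawn from the Wigner surmise
    P(s) = (pi/2) s exp(-pi s^2 / 4), via inverse-CDF sampling."""
    u = rng.random()
    return math.sqrt(-4.0 * math.log(1.0 - u) / math.pi)

def corr_adjacent(spacings):
    """Pearson correlation coefficient of adjacent spacings (s_i, s_{i+1})."""
    x, y = spacings[:-1], spacings[1:]
    mx, my = sum(x) / len(x), sum(y) / len(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

rng = random.Random(42)
s = [wigner_spacing(rng) for _ in range(20000)]
print(sum(s) / len(s))    # mean spacing is 1 after unfolding
print(corr_adjacent(s))   # near 0 for independent draws
```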

  9. 40 CFR 75.37 - Missing data procedures for moisture.

    Science.gov (United States)

    2010-07-01

    ... 40 Protection of Environment 16 2010-07-01 2010-07-01 false Missing data procedures for moisture... data procedures for moisture. (a) The owner or operator of a unit with a continuous moisture monitoring system shall substitute for missing moisture data using the procedures of this section. (b) Where no...

  10. Medical surgical nurses describe missed nursing care tasks-Evaluating our work environment.

    Science.gov (United States)

    Winsett, Rebecca P; Rottet, Kendra; Schmitt, Abby; Wathen, Ellen; Wilson, Debra

    2016-11-01

    The purpose of the study was to explore the nurse work environment by evaluating the self-report of missed nursing care and the reasons for the missed care. A convenience sample of medical surgical nurses from four hospitals was invited to complete the survey for this descriptive study. The sample included 168 nurses. The MISSCARE survey assessed the frequency and reason of 24 routine nursing care elements. The most frequently reported missed care was ambulation as ordered, medications given within a 30 minute window, and mouth care. Moderate or significant reasons reported for the missed care were: unexpected rise in volume/acuity, heavy admissions/discharges, inadequate assistants, inadequate staff, meds not available when needed, and urgent situations. Identifying missed nursing care and reasons for missed care provides an opportunity for exploring strategies to reduce interruptions, develop unit cohesiveness, improve the nurse work environment, and ultimately leading to improved patient outcomes. Copyright © 2016 Elsevier Inc. All rights reserved.

  11. A regressive methodology for estimating missing data in rainfall daily time series

    Science.gov (United States)

    Barca, E.; Passarella, G.

    2009-04-01

    The "presence" of gaps in environmental data time series represents a very common but extremely critical problem, since it can produce biased results (Rubin, 1976). Missing data plague almost all surveys. The problem is how to deal with missing data once it has been deemed impossible to recover the actual missing values. Apart from the amount of missing data, another issue which plays an important role in the choice of a recovery approach is the evaluation of the "missingness" mechanism. When missingness is conditioned by some other variable observed in the data set (Schafer, 1997), the mechanism is called MAR (Missing At Random). Otherwise, when the missingness mechanism depends on the actual value of the missing data, it is called NMAR (Not Missing At Random). The latter is the most difficult condition to model. In the last decade, interest arose in the estimation of missing data by using regression (single imputation). More recently, multiple imputation has also become available, which returns a distribution of estimated values (Scheffer, 2002). In this paper an automatic methodology for estimating missing data is presented. In practice, given a gauging station affected by missing data (the target station), the methodology checks the randomness of the missing data and classifies the "similarity" between the target station and the other gauging stations spread over the study area. Among the different methods useful for defining the degree of similarity, whose effectiveness strongly depends on the data distribution, the Spearman correlation coefficient was chosen. Once the similarity matrix was defined, a suitable, nonparametric, univariate, regressive method was applied to estimate missing data in the target station: the Theil method (Theil, 1950). Even though the methodology proved rather reliable, the missing data estimation can be improved by a generalization.
A first possible improvement consists in extending the univariate technique to
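    The Theil method referenced above fits a robust line whose slope is the median of all pairwise slopes; a missing value at the target station can then be read off from the fitted line. A sketch with invented station data:

```python
import statistics

def theil_sen(x, y):
    """Theil's nonparametric line: slope = median of all pairwise slopes,
    intercept = median of the residuals y - slope*x."""
    slopes = [
        (y[j] - y[i]) / (x[j] - x[i])
        for i in range(len(x))
        for j in range(i + 1, len(x))
        if x[j] != x[i]
    ]
    slope = statistics.median(slopes)
    intercept = statistics.median(yi - slope * xi for xi, yi in zip(x, y))
    return intercept, slope

# Neighbour-station vs target-station values; the last pair is an outlier.
a, b = theil_sen([0, 1, 2, 3, 4], [1, 3, 5, 7, 100])
print(a, b)       # robust fit recovers intercept 1, slope 2 despite the outlier
print(a + b * 6)  # estimate for a missing target value when the neighbour reads 6
```

Because medians ignore extreme pairwise slopes, a single outlying observation does not distort the estimate, which matters when the donor station itself has suspect readings.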

  12. Efficiently Hiding Sensitive Itemsets with Transaction Deletion Based on Genetic Algorithms

    Directory of Open Access Journals (Sweden)

    Chun-Wei Lin

    2014-01-01

    Data mining is used to extract meaningful and useful information or knowledge from very large databases. Secure or private information can be discovered by data mining techniques, resulting in an inherent risk of threats to privacy. Privacy-preserving data mining (PPDM) has thus arisen in recent years to sanitize the original database for hiding sensitive information; the sanitization process can be regarded as an NP-hard problem. In this paper, a compact prelarge GA-based (cpGA2DT) algorithm to delete transactions for hiding sensitive itemsets is proposed. It overcomes the limitations of the evolutionary process by adopting both the compact GA-based (cGA) mechanism and the prelarge concept. A flexible fitness function with three adjustable weights is designed to find the appropriate transactions to be deleted in order to hide sensitive itemsets with minimal side effects of hiding failure, missing cost, and artificial cost. Experiments are conducted to show the performance of the proposed cpGA2DT algorithm compared to the simple GA-based (sGA2DT) algorithm and the greedy approach in terms of execution time and the three side effects.
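    Two of the side effects weighted in such fitness functions can be illustrated for transaction deletion with an absolute support threshold (under which deletion only lowers supports, so the artificial-cost term is omitted here; the toy database, weights, and function names are illustrative, not the paper's cpGA2DT):

```python
def support(db, itemset):
    """Number of transactions that contain the itemset."""
    return sum(1 for t in db if itemset <= t)

def side_effects(db, delete_ids, sensitive, large, min_sup):
    """Side effects of deleting the transactions in delete_ids:
    hiding failure = sensitive itemsets still frequent afterwards,
    missing cost   = non-sensitive large itemsets turned infrequent."""
    kept = [t for i, t in enumerate(db) if i not in delete_ids]
    hiding_failure = sum(1 for s in sensitive if support(kept, s) >= min_sup)
    missing_cost = sum(1 for l in large if support(kept, l) < min_sup)
    return hiding_failure, missing_cost

def fitness(hf, mc, w1=0.7, w2=0.3):
    """Weighted sum a GA would minimize (weights are tunable)."""
    return w1 * hf + w2 * mc

db = [{"a", "b"}, {"a", "b", "c"}, {"b", "c"}, {"a", "c"}]
sensitive = [{"a", "b"}]
large = [{"b", "c"}, {"a", "c"}]
print(side_effects(db, set(), sensitive, large, 2))  # (1, 0): not hidden yet
print(side_effects(db, {0}, sensitive, large, 2))    # (0, 0): hidden, no damage
```

A GA would evolve candidate `delete_ids` sets and rank them by this fitness.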

  13. Use of multiple objective evolutionary algorithms in optimizing surveillance requirements

    International Nuclear Information System (INIS)

    Martorell, S.; Carlos, S.; Villanueva, J.F.; Sanchez, A.I; Galvan, B.; Salazar, D.; Cepin, M.

    2006-01-01

    This paper presents the development and application of a double-loop Multiple Objective Evolutionary Algorithm that uses a Multiple Objective Genetic Algorithm to perform the simultaneous optimization of periodic Test Intervals (TI) and Test Planning (TP). It takes into account the time-dependent effect of TP performed on stand-by safety-related equipment. TI and TP are part of the Surveillance Requirements within Technical Specifications at Nuclear Power Plants. The paper addresses the problem of multi-objective optimization in the space of decision variables, i.e. TI and TP, using a novel flexible structure of the optimization algorithm. Lessons learnt from the cases of application of the methodology to optimize TI and TP for the High-Pressure Injection System are given. The results show that the double-loop Multiple Objective Evolutionary Algorithm is able to find the Pareto set of solutions, a surface of non-dominated solutions that satisfy all the constraints imposed on the objective functions and decision variables. Decision makers can then adopt the best solution found depending on their particular preference, e.g. minimum cost or minimum unavailability.
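    The Pareto set of non-dominated solutions mentioned in the results can be extracted with a simple filter (a generic minimization sketch, not the paper's double-loop algorithm):

```python
def dominates(a, b):
    """a dominates b (minimization): no worse everywhere, better somewhere."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(points):
    """Non-dominated subset of a list of objective vectors."""
    return [p for p in points if not any(dominates(q, p) for q in points)]

# Toy (cost, unavailability) pairs
pts = [(1, 5), (2, 2), (5, 1), (3, 3), (4, 4)]
print(pareto_front(pts))  # [(1, 5), (2, 2), (5, 1)]
```

Each surviving point represents a different cost/unavailability trade-off, which is exactly the menu a decision maker chooses from according to preference.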

  14. Clusterflock: a flocking algorithm for isolating congruent phylogenomic datasets.

    Science.gov (United States)

    Narechania, Apurva; Baker, Richard; DeSalle, Rob; Mathema, Barun; Kolokotronis, Sergios-Orestis; Kreiswirth, Barry; Planet, Paul J

    2016-10-24

    Collective animal behavior, such as the flocking of birds or the shoaling of fish, has inspired a class of algorithms designed to optimize distance-based clusters in various applications, including document analysis and DNA microarrays. In a flocking model, individual agents respond only to their immediate environment and move according to a few simple rules. After several iterations the agents self-organize, and clusters emerge without the need for partitional seeds. In addition to its unsupervised nature, flocking offers several computational advantages, including the potential to reduce the number of required comparisons. In the tool presented here, Clusterflock, we have implemented a flocking algorithm designed to locate groups (flocks) of orthologous gene families (OGFs) that share an evolutionary history. Pairwise distances that measure phylogenetic incongruence between OGFs guide flock formation. We tested this approach on several simulated datasets by varying the number of underlying topologies, the proportion of missing data, and evolutionary rates, and show that in datasets containing high levels of missing data and rate heterogeneity, Clusterflock outperforms other well-established clustering techniques. We also verified its utility on a known, large-scale recombination event in Staphylococcus aureus. By isolating sets of OGFs with divergent phylogenetic signals, we were able to pinpoint the recombined region without forcing a pre-determined number of groupings or defining a pre-determined incongruence threshold. Clusterflock is an open-source tool that can be used to discover horizontally transferred genes, recombined areas of chromosomes, and the phylogenetic 'core' of a genome. Although we used it here in an evolutionary context, it is generalizable to any clustering problem. Users can write extensions to calculate any distance metric on the unit interval, and can use these distances to 'flock' any type of data.

  15. Energetic integration of discontinuous processes by means of genetic algorithms, GABSOBHIN; Integration energetique de procedes discontinus a l'aide d'algorithmes genetiques, GABSOBHIN

    Energy Technology Data Exchange (ETDEWEB)

    Krummenacher, P.; Renaud, B.; Marechal, F.; Favrat, D.

    2001-07-01

    This report presents a new methodological approach for the optimal design of energy-integrated batch processes. The main emphasis is put on indirect and, to some extent, on direct heat exchange networks with the possibility of introducing closed or open storage systems. The study demonstrates the feasibility of optimising with genetic algorithms while highlighting the pros and cons of this type of approach. The study shows that the resolution of such problems should preferably be done in several steps to better target the expected solutions. It is demonstrated that, in spite of relatively large computation times (on PCs), the use of genetic algorithms allows the consideration of both continuous decision variables (size, operational rating of equipment, etc.) and integer variables (related to the structure at design and during operation). A comparison of two optimisation strategies is shown, with a preference for a two-step optimisation scheme. One of the strengths of genetic algorithms is the capacity to accommodate heuristic rules, which can be introduced in the model. However, a rigorous modelling strategy is advocated to improve robustness and adequate coding of the decision variables. The practical aspects of the research work are converted into software developed with MATLAB to solve the energy integration of batch processes with a reasonable number of closed or open stores. This software includes the model of superstructures, including the heat exchangers and the storage alternatives, as well as the link to the Struggle algorithm developed at MIT via a dedicated new interface. The package also includes user-friendly pre-processing in EXCEL, intended to facilitate application to other similar industrial problems. These software developments have been validated on both academic and industrial problems. (author)

  16. Factors Influencing Parents' Preferences and Parents' Perceptions of Child Preferences of Picturebooks

    Science.gov (United States)

    Wagner, Laura

    2017-01-01

    This study examined factors influencing parents' preferences and their perceptions of their children's preferences for picturebooks. First, a content analysis was conducted on a set of picturebooks (N = 87) drawn from the sample described in Wagner (2013); then, parents (N = 149) rated the books, and several content properties were examined for their ability to predict parents' preferences and their perception of their children's preferences. The initial content analysis found correlated clusters of disparate measures of complexity (linguistic, cognitive, narrative) and identified a distinctive sub-genre of modern books featuring female protagonists. The experimental preference analysis found that parents' own preferences were most influenced by the books' age and status; parents' perceptions of their children's preferences were influenced by gender, with parents perceiving their sons (but not daughters) as dis-preferring books with female protagonists. In addition, influences of the child's reading ability and the linguistic complexity of the book on preferences suggested a sensitivity to the cultural practice of joint book-reading. PMID:28919869

  17. Genetic Algorithm Design of a 3D Printed Heat Sink

    Energy Technology Data Exchange (ETDEWEB)

    Wu, Tong [ORNL; Ozpineci, Burak [ORNL; Ayers, Curtis William [ORNL

    2016-01-01

    In this paper, a genetic algorithm- (GA-) based approach is discussed for designing heat sinks based on total heat generation and dissipation for a pre-specified size and shape. This approach combines random iteration processes and genetic algorithms with finite element analysis (FEA) to design the optimized heat sink. With an approach that prefers "survival of the fittest", a more powerful heat sink can be designed which can cool power electronics more efficiently. Some of the resulting designs can only be 3D printed due to their complexity. In addition to describing the methodology, this paper also includes comparisons of different cases to evaluate the performance of the newly designed heat sink compared to commercially available heat sinks.

  18. Estimation of Missing Observations in Two-Level Split-Plot Designs

    DEFF Research Database (Denmark)

    Almimi, Ashraf A.; Kulahci, Murat; Montgomery, Douglas C.

    2008-01-01

    Inserting estimates for the missing observations from split-plot designs restores their balanced or orthogonal structure and alleviates the difficulties in the statistical analysis. In this article, we extend a method due to Draper and Stoneman to estimate the missing observations from unreplicated two-level factorial and fractional factorial split-plot (FSP and FFSP) designs. The missing observations, which can either be from the same whole plot, from different whole plots, or comprise entire whole plots, are estimated by equating to zero a number of specific contrast columns equal to the number of the missing observations. These estimates are inserted into the design table and the estimates for the remaining effects (or alias chains of effects as the case with FFSP designs) are plotted on two half-normal plots: one for the whole-plot effects and the other for the subplot effects...

  19. Optimal Selection of Clustering Algorithm via Multi-Criteria Decision Analysis (MCDA) for Load Profiling Applications

    Directory of Open Access Journals (Sweden)

    Ioannis P. Panapakidis

    2018-02-01

    Full Text Available Due to high implementation rates of smart meter systems, a considerable amount of research is devoted to machine learning tools for data handling and information retrieval. A key tool in load data processing is clustering. In recent years, a number of studies have proposed different clustering algorithms in the load profiling field. The present paper provides a methodology for addressing the aforementioned problem through Multi-Criteria Decision Analysis (MCDA), namely the Technique for Order of Preference by Similarity to Ideal Solution (TOPSIS). A comparison of the algorithms is carried out. Next, a single test case on the selection of an algorithm is examined. User-specific weights are applied and, based on these weight values, the optimal algorithm is drawn.
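    The TOPSIS ranking step described above can be sketched generically. The sketch below is a textbook implementation, not the paper's code; the alternative scores, criterion weights, and benefit/cost directions are hypothetical:

```python
import numpy as np

def topsis(scores, weights, benefit):
    """Rank alternatives with TOPSIS.

    scores:  (m, n) matrix, one row per alternative, one column per criterion
    weights: length-n criterion weights
    benefit: length-n booleans, True if larger is better for that criterion
    """
    X = np.asarray(scores, dtype=float)
    # Vector-normalize each criterion column, then apply the weights.
    V = X / np.linalg.norm(X, axis=0) * np.asarray(weights, dtype=float)
    benefit = np.asarray(benefit)
    # Ideal and anti-ideal solutions per criterion.
    ideal = np.where(benefit, V.max(axis=0), V.min(axis=0))
    anti = np.where(benefit, V.min(axis=0), V.max(axis=0))
    # Euclidean distances and the closeness coefficient in [0, 1].
    d_pos = np.linalg.norm(V - ideal, axis=1)
    d_neg = np.linalg.norm(V - anti, axis=1)
    return d_neg / (d_pos + d_neg)

# Hypothetical example: three clustering algorithms scored on two
# validity indices (higher is better) and runtime (lower is better).
scores = [[0.70, 0.65, 12.0],
          [0.82, 0.71, 30.0],
          [0.55, 0.50,  5.0]]
closeness = topsis(scores, weights=[0.4, 0.4, 0.2],
                   benefit=[True, True, False])
best = int(np.argmax(closeness))
```

    The alternative with the largest closeness coefficient is nearest the ideal solution and farthest from the anti-ideal one; varying the user-specified weights changes which algorithm wins, which is exactly the sensitivity the paper exploits.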

  20. A spectral algorithm for the seriation problem

    Energy Technology Data Exchange (ETDEWEB)

    Atkins, J.E. [Michigan Univ., Ann Arbor, MI (United States). Dept. of Mathematics; Boman, E.G. [Stanford Univ., CA (United States). Dept. of Computer Science; Hendrickson, B. [Sandia National Labs., Albuquerque, NM (United States)

    1994-11-01

    Given a set of objects and a correlation function f reflecting the desire for two items to be near each other, find all sequences π of the items so that correlation preferences are preserved; that is, if π(i) < π(j) < π(k) then f(i,j) ≥ f(i,k) and f(j,k) ≥ f(i,k). This seriation problem has numerous applications; for instance, solving it yields a solution to the consecutive ones problem. We present a spectral algorithm for this problem that has a number of interesting features. Whereas most previous applications of spectral techniques provided bounds or heuristics, our result is an algorithm for a nontrivial combinatorial problem. Our analysis introduces powerful tools from matrix theory to the theoretical computer science community. Also, spectral methods are being applied as heuristics for a variety of sequencing problems, and our result helps explain and justify these applications. Although the worst-case running time for our approach is not competitive with that of existing methods for well-posed problem instances, unlike combinatorial approaches our algorithm remains a credible heuristic for the important cases where there are errors in the data.
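    The core of the spectral approach, ordering items by the Fiedler vector of the similarity graph's Laplacian, can be sketched as follows. This is a minimal illustration on a noise-free hypothetical similarity matrix, not the authors' full algorithm:

```python
import numpy as np

def spectral_seriation(F):
    """Order items so that pairwise similarity F decays with distance.

    F: symmetric (n, n) nonnegative similarity ("correlation") matrix.
    Returns a permutation of 0..n-1 obtained by sorting the Fiedler
    vector (eigenvector of the 2nd-smallest Laplacian eigenvalue).
    """
    F = np.asarray(F, dtype=float)
    L = np.diag(F.sum(axis=1)) - F        # graph Laplacian
    eigvals, eigvecs = np.linalg.eigh(L)  # eigenvalues in ascending order
    fiedler = eigvecs[:, 1]
    return np.argsort(fiedler)

# Hypothetical example: similarity decays with distance along a hidden
# line, but the items are presented in scrambled order.
true_pos = np.array([3, 0, 4, 1, 2])      # hidden 1-D coordinates
F = 1.0 / (1.0 + np.abs(true_pos[:, None] - true_pos[None, :]))
order = spectral_seriation(F)
# Sorting by the Fiedler vector recovers the line, up to reversal.
```

    For a noise-free R-matrix like this one the recovered sequence is exact (up to reversal); with noisy data the same procedure degrades gracefully, which is the heuristic behaviour the abstract highlights.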

  1. The effect of maternal near miss on adverse infant nutritional outcomes

    Directory of Open Access Journals (Sweden)

    Dulce M. Zanardi

    Full Text Available OBJECTIVES: To evaluate the association between self-reported maternal near miss and adverse nutritional status in children under one year of age. METHODS: This study is a secondary analysis of a study in which women who took their children under one year of age to the national vaccine campaign were interviewed. The self-reported condition of maternal near miss used the criteria of Intensive Care Unit admission; eclampsia; blood transfusion and hysterectomy; and their potential associations with any type of nutritional disorder in children, including deficits in weight-for-age, deficits in height-for-age, obesity and breastfeeding. The rates of near miss for the country, regions and states were initially estimated. The relative risks of infant adverse nutritional status according to near miss and maternal/childbirth characteristics were estimated with their 95% CIs using bivariate and multiple analyses. RESULTS: The overall prevalence of near miss was 2.9% and was slightly higher for the Legal Amazon than for other regions. No significant associations were found with nutritional disorders in children. Only a 12% decrease in overall maternal breastfeeding was associated with near miss. Living in the countryside and child over 6 months of age increased the risk of altered nutritional status by approximately 15%, while female child gender decreased this risk by 30%. Maternal near miss was not associated with an increased risk of any alteration in infant nutritional status. CONCLUSIONS: There was no association between maternal near miss and altered nutritional status in children up to one year of age. The risk of infant adverse nutritional status was greater in women living in the countryside, for children over 6 months of age and for male gender.

  2. Proposed Fuzzy-NN Algorithm with LoRa Communication Protocol for Clustered Irrigation Systems

    Directory of Open Access Journals (Sweden)

    Sotirios Kontogiannis

    2017-11-01

    Full Text Available Modern irrigation systems utilize sensors and actuators, interconnected as a single entity. In such entities, A.I. algorithms are implemented, which are responsible for the irrigation process. In this paper, the authors present an irrigation Open Watering System (OWS) architecture that spatially clusters the irrigation process into autonomous irrigation sections. The authors' OWS implementation includes a Neuro-Fuzzy decision algorithm called FITRA, which originates from the Greek word for seed. In this paper, the FITRA algorithm is described in detail, as are experimental results that indicate significant water conservation from the use of the FITRA algorithm. Furthermore, the authors propose a new communication protocol over LoRa radio as an alternative low-energy and long-range OWS cluster communication mechanism. The experimental scenarios confirm that the FITRA algorithm provides more efficient irrigation on clustered areas than existing non-clustered, time-scheduled or threshold-adaptive algorithms. This is due to the FITRA algorithm's frequent monitoring of environmental conditions, fuzzy and neural network adaptation, as well as adherence to past irrigation preferences.

  3. "Near-miss" obstetric events and maternal deaths in Sagamu, Nigeria: a retrospective study

    Directory of Open Access Journals (Sweden)

    Daniel Olusoji J

    2005-11-01

    Full Text Available Abstract Aim To determine the frequency of near-miss (severe acute maternal morbidity) and the nature of near-miss events, and to comparatively analyse near-miss morbidities and maternal deaths among pregnant women managed over a 3-year period in a Nigerian tertiary centre. Methods Retrospective facility-based review of cases of near-miss and maternal death which occurred between 1 January 2002 and 31 December 2004. Near-miss case definition was based on validated disease-specific criteria, comprising five diagnostic categories: haemorrhage, hypertensive disorders in pregnancy, dystocia, infection and anaemia. The near-miss morbidities were compared with maternal deaths with respect to demographic features and disease profiles. Mortality indices were determined for various disease processes to appreciate the standard of care provided for life-threatening obstetric conditions. The maternal death to near-miss ratios for the three years were compared to assess the trend in the quality of obstetric care. Results There were 1501 deliveries, 211 near-miss cases and 44 maternal deaths. The total near-miss events were 242, with a decreasing trend from 2002 to 2004. Demographic features of cases of near-miss and maternal death were comparable. Besides infectious morbidity, the categories of complications responsible for near-misses and maternal deaths followed the same order of decreasing frequency. Hypertensive disorders in pregnancy and haemorrhage were responsible for 61.1% of near-miss cases and 50.0% of maternal deaths. More women died after developing severe morbidity due to uterine rupture and infection, with mortality indices of 37.5% and 28.6%, respectively. Early pregnancy complications and antepartum haemorrhage had the lowest mortality indices. The majority of the cases of near-miss (82.5% and maternal death (88.6% were unbooked for antenatal care and delivery in this hospital. Maternal mortality ratio for the period was 2931.4 per 100

  4. Comparison of 3D Maximum A Posteriori and Filtered Backprojection algorithms for high resolution animal imaging in microPET

    International Nuclear Information System (INIS)

    Chatziioannou, A.; Qi, J.; Moore, A.; Annala, A.; Nguyen, K.; Leahy, R.M.; Cherry, S.R.

    2000-01-01

    We have evaluated the performance of two three-dimensional reconstruction algorithms with data acquired from microPET, a high-resolution tomograph dedicated to small-animal imaging. The first was a linear filtered-backprojection algorithm (FBP) with reprojection of the missing data, and the second was a statistical maximum a posteriori probability algorithm (MAP). The two algorithms were evaluated in terms of their resolution performance, both in phantoms and in vivo. Sixty independent realizations of a phantom simulating the brain of a baby monkey were acquired, each containing 3 million counts. Each of these realizations was reconstructed independently with both algorithms. The ensemble of the sixty reconstructed realizations was used to estimate the standard deviation as a measure of the noise for each reconstruction algorithm. More detail was recovered in the MAP reconstruction without an increase in noise relative to FBP. Studies in a simple cylindrical compartment phantom demonstrated improved recovery of known activity ratios with MAP. Finally, in vivo studies also demonstrated a clear improvement in spatial resolution using the MAP algorithm. The quantitative accuracy of the MAP reconstruction was also evaluated by comparison with autoradiography and direct well counting of tissue samples and was shown to be superior.

  5. Statistical analysis with missing exposure data measured by proxy respondents: a misclassification problem within a missing-data problem.

    Science.gov (United States)

    Shardell, Michelle; Hicks, Gregory E

    2014-11-10

    In studies of older adults, researchers often recruit proxy respondents, such as relatives or caregivers, when study participants cannot provide self-reports (e.g., because of illness). Proxies are usually only sought to report on behalf of participants with missing self-reports; thus, either a participant self-report or proxy report, but not both, is available for each participant. Furthermore, the missing-data mechanism for participant self-reports is not identifiable and may be nonignorable. When exposures are binary and participant self-reports are conceptualized as the gold standard, substituting error-prone proxy reports for missing participant self-reports may produce biased estimates of outcome means. Researchers can handle this data structure by treating the problem as one of misclassification within the stratum of participants with missing self-reports. Most methods for addressing exposure misclassification require validation data, replicate data, or an assumption of nondifferential misclassification; other methods may result in an exposure misclassification model that is incompatible with the analysis model. We propose a model that makes none of the aforementioned requirements and still preserves model compatibility. Two user-specified tuning parameters encode the exposure misclassification model. Two proposed approaches estimate outcome means standardized for (potentially) high-dimensional covariates using multiple imputation followed by propensity score methods. The first method is parametric and uses maximum likelihood to estimate the exposure misclassification model (i.e., the imputation model) and the propensity score model (i.e., the analysis model); the second method is nonparametric and uses boosted classification and regression trees to estimate both models. We apply both methods to a study of elderly hip fracture patients. Copyright © 2014 John Wiley & Sons, Ltd.

  6. Missed losses loom larger than missed gains: Electrodermal reactivity to decision choices and outcomes in a gambling task.

    Science.gov (United States)

    Wu, Yin; Van Dijk, Eric; Aitken, Mike; Clark, Luke

    2016-04-01

    Loss aversion is a defining characteristic of prospect theory, whereby responses are stronger to losses than to equivalently sized gains (Kahneman & Tversky Econometrica, 47, 263-291, 1979). By monitoring electrodermal activity (EDA) during a gambling task, in this study we examined physiological activity during risky decisions, as well as to both obtained (e.g., gains and losses) and counterfactual (e.g., narrowly missed gains and losses) outcomes. During the bet selection phase, EDA increased linearly with bet size, highlighting the role of somatic signals in decision-making under uncertainty in a task without any learning requirement. Outcome-related EDA scaled with the magnitudes of monetary wins and losses, and losses had a stronger impact on EDA than did equivalently sized wins. Narrowly missed wins (i.e., near-wins) and narrowly missed losses (i.e., near-losses) also evoked EDA responses, and the change of EDA as a function of the size of the missed outcome was modestly greater for near-losses than for near-wins, suggesting that near-losses have more impact on subjective value than do near-wins. Across individuals, the slope for choice-related EDA (as a function of bet size) correlated with the slope for outcome-related EDA as a function of both the obtained and counterfactual outcome magnitudes, and these correlations were stronger for loss and near-loss conditions than for win and near-win conditions. Taken together, these asymmetrical EDA patterns to objective wins and losses, as well as to near-wins and near-losses, provide a psychophysiological instantiation of the value function curve in prospect theory, which is steeper in the negative than in the positive domain.

  7. [Missed lessons, missed opportunities: a role for public health services in medical absenteeism in young people].

    Science.gov (United States)

    Vanneste, Y T M; van de Goor, L A M; Feron, F J M

    2016-01-01

    Young people who often miss school for health reasons are not only missing education, but also the daily routine of school, and social intercourse with their classmates. Medical absenteeism among students merits greater attention. For a number of years, in various regions in the Netherlands, students with extensive medical absenteeism have been invited to see a youth healthcare specialist. The MASS intervention (Medical Advice of Students reported Sick; in Dutch: Medische Advisering van de Ziekgemelde Leerling, abbreviated as M@ZL) has been developed by the West Brabant Regional Public Health Service together with secondary schools to address school absenteeism due to reporting sick. In this paper we discuss the MASS intervention and explain why attention should be paid by public health services to the problem of school absenteeism, especially absenteeism on health grounds.

  8. Investigating country-specific music preferences and music recommendation algorithms with the LFM-1b dataset.

    Science.gov (United States)

    Schedl, Markus

    2017-01-01

    Recently, the LFM-1b dataset has been proposed to foster research and evaluation in music retrieval and music recommender systems, Schedl (Proceedings of the ACM International Conference on Multimedia Retrieval (ICMR). New York, 2016). It contains more than one billion music listening events created by more than 120,000 users of Last.fm. Each listening event is characterized by artist, album, and track name, and further includes a timestamp. Basic demographic information and a selection of more elaborate listener-specific descriptors are included as well, for anonymized users. In this article, we reveal information about LFM-1b's acquisition and content and we compare it to existing datasets. We furthermore provide an extensive statistical analysis of the dataset, including basic properties of the item sets, demographic coverage, distribution of listening events (e.g., over artists and users), and aspects related to music preference and consumption behavior (e.g., temporal features and mainstreaminess of listeners). Exploiting country information of users and genre tags of artists, we also create taste profiles for populations and determine similar and dissimilar countries in terms of their populations' music preferences. Finally, we illustrate the dataset's usage in a simple artist recommendation task, whose results are intended to serve as baseline against which more elaborate techniques can be assessed.

  9. Patient preference compared with random allocation in short-term psychodynamic supportive psychotherapy with indicated addition of pharmacotherapy for depression.

    NARCIS (Netherlands)

    Van, H.L.; Dekker, J.J.M.; Koelen, J.; Kool, S.; van Aalst, G.; Hendriksen, I.J.M.; Peen, J.; Schoevers, R.A.

    2009-01-01

    Depressed patients randomized to psychotherapy were compared with those who had been chosen for psychotherapy in a treatment algorithm, including addition of an antidepressant in case of early nonresponse. There were no differences between randomized and by-preference patients at baseline in

  10. Estimating range of influence in case of missing spatial data

    DEFF Research Database (Denmark)

    Bihrmann, Kristine; Ersbøll, Annette Kjær

    2015-01-01

    BACKGROUND: The range of influence refers to the average distance between locations at which the observed outcome is no longer correlated. In many studies, missing data occur and a popular tool for handling missing data is multiple imputation. The objective of this study was to investigate how the estimated range of influence is affected when 1) the outcome is only observed at some of a given set of locations, and 2) multiple imputation is used to impute the outcome at the non-observed locations. METHODS: The study was based on the simulation of missing outcomes in a complete data set. The range of influence was estimated from a logistic regression model with a spatially structured random effect, modelled by a Gaussian field. Results were evaluated by comparing estimates obtained from complete, missing, and imputed data. RESULTS: In most simulation scenarios, the range estimates were consistent...

  11. Simulated annealing algorithm for solving chambering student-case assignment problem

    Science.gov (United States)

    Ghazali, Saadiah; Abdul-Rahman, Syariza

    2015-12-01

    The project assignment problem is a popular practical problem that appears in many settings nowadays. The challenge of solving it grows as the complexity of preferences, the number of real-world constraints, and the problem size increase. This study focuses on solving a chambering student-case assignment problem, which is classified as a project assignment problem, using a simulated annealing algorithm. The project assignment problem is considered a hard combinatorial optimization problem, and solving it with a metaheuristic approach is advantageous because it can return a good solution in a reasonable time. The problem of assigning chambering students to cases has never been addressed in the literature before. Chambering is essential for law graduates, who must complete it before they are qualified to become legal counsel. Thus, assigning the chambering students to cases is critically needed, especially when many preferences are involved. Hence, this study presents a preliminary study of the proposed project assignment problem. The objective of the study is to minimize the total completion time for all students in solving the given cases. This study employed a minimum-cost greedy heuristic to construct a feasible initial solution. The search then proceeds with a simulated annealing algorithm for further improvement of solution quality. The analysis of the obtained results shows that the proposed simulated annealing algorithm greatly improved the solution constructed by the minimum-cost greedy heuristic. Hence, this research demonstrates the advantages of solving the project assignment problem using metaheuristic techniques.
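    A minimal version of the described pipeline, a greedy initial assignment improved by simulated annealing, can be sketched as follows. The case workloads, the objective (latest completion time), the cooling schedule, and the move operator are illustrative assumptions, not the study's actual data or parameters:

```python
import math
import random

def anneal_assignment(case_times, n_students, t0=10.0, cooling=0.995,
                      steps=20000, seed=0):
    """Assign cases to students, minimising the latest completion time."""
    rng = random.Random(seed)

    def makespan(assign):
        load = [0.0] * n_students
        for case, student in zip(case_times, assign):
            load[student] += case
        return max(load)

    # Minimum-cost greedy construction: each case to the least-loaded student.
    assign, load = [], [0.0] * n_students
    for t in case_times:
        s = min(range(n_students), key=load.__getitem__)
        assign.append(s)
        load[s] += t

    best, best_cost = list(assign), makespan(assign)
    cost, temp = best_cost, t0
    for _ in range(steps):
        i = rng.randrange(len(case_times))     # move one case to a new student
        old, assign[i] = assign[i], rng.randrange(n_students)
        new_cost = makespan(assign)
        # Accept improvements always; worse moves with Boltzmann probability.
        if new_cost <= cost or rng.random() < math.exp((cost - new_cost) / temp):
            cost = new_cost
            if cost < best_cost:
                best, best_cost = list(assign), cost
        else:
            assign[i] = old                    # undo the rejected move
        temp *= cooling
    return best, best_cost

times = [7, 3, 5, 4, 6, 2, 8, 5]               # hypothetical case workloads
assignment, span = anneal_assignment(times, n_students=4)
```

    On this toy instance the greedy construction alone reaches a makespan of 13, and the annealing phase then searches for balanced assignments closer to the lower bound of 10.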

  12. Agent assisted interactive algorithm for computationally demanding multiobjective optimization problems

    OpenAIRE

    Ojalehto, Vesa; Podkopaev, Dmitry; Miettinen, Kaisa

    2015-01-01

    We generalize the applicability of interactive methods for solving computationally demanding, that is, time-consuming, multiobjective optimization problems. For this purpose we propose a new agent assisted interactive algorithm. It employs a computationally inexpensive surrogate problem and four different agents that intelligently update the surrogate based on the preferences specified by a decision maker. In this way, we decrease the waiting times imposed on the decision maker du...

  13. Neural modelling of ranking data with an application to stated preference data

    Directory of Open Access Journals (Sweden)

    Catherine Krier

    2013-05-01

    Full Text Available Although neural networks are commonly encountered to solve classification problems, ranking data present specificities which require adapting the model. Based on a latent utility function defined on the characteristics of the objects to be ranked, the approach suggested in this paper leads to a perceptron-based algorithm for a highly nonlinear model. Data on stated preferences obtained through a survey by face-to-face interviews in the field of freight transport are used to illustrate the method. Numerical difficulties are pinpointed, and a Pocket-type algorithm is shown to provide an efficient heuristic to minimize the discrete error criterion. A substantial merit of this approach is to provide a workable estimation of contextually interpretable parameters along with a statistical evaluation of the goodness of fit.

  14. Firefly Algorithm for Polynomial Bézier Surface Parameterization

    Directory of Open Access Journals (Sweden)

    Akemi Gálvez

    2013-01-01

    reality, medical imaging, computer graphics, computer animation, and many others. Very often, the preferred approximating surface is polynomial, usually described in parametric form. This leads to the problem of determining suitable parametric values for the data points, the so-called surface parameterization. In real-world settings, data points are generally irregularly sampled and subjected to measurement noise, leading to a very difficult nonlinear continuous optimization problem, unsolvable with standard optimization techniques. This paper solves the parameterization problem for polynomial Bézier surfaces by applying the firefly algorithm, a powerful nature-inspired metaheuristic algorithm introduced recently to address difficult optimization problems. The method has been successfully applied to some illustrative examples of open and closed surfaces, including shapes with singularities. Our results show that the method performs very well, being able to yield the best approximating surface with a high degree of accuracy.
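    The underlying metaheuristic can be illustrated on a toy problem. The sketch below is a generic firefly algorithm minimising a simple sphere function with hypothetical swarm parameters; the paper applies the same idea to the much harder Bézier surface parameterization objective:

```python
import math
import random

def firefly_minimize(f, dim, n=15, iters=100, beta0=1.0, gamma=0.05,
                     alpha=0.2, bounds=(-5.0, 5.0), seed=1):
    """Minimise f over a box with a basic firefly algorithm.

    Brighter (lower-f) fireflies attract dimmer ones with strength
    beta0 * exp(-gamma * r^2); alpha adds a shrinking random walk.
    """
    rng = random.Random(seed)
    lo, hi = bounds
    X = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n)]
    vals = [f(x) for x in X]
    for _ in range(iters):
        for i in range(n):
            for j in range(n):
                if vals[j] < vals[i]:          # j is brighter: move i toward j
                    r2 = sum((a - b) ** 2 for a, b in zip(X[i], X[j]))
                    beta = beta0 * math.exp(-gamma * r2)
                    X[i] = [min(hi, max(lo, a + beta * (b - a)
                                        + alpha * rng.uniform(-0.5, 0.5)))
                            for a, b in zip(X[i], X[j])]
                    vals[i] = f(X[i])
        alpha *= 0.97                          # shrink the random-walk term
    k = min(range(n), key=vals.__getitem__)
    return X[k], vals[k]

sphere = lambda x: sum(v * v for v in x)       # global minimum 0 at the origin
best_x, best_f = firefly_minimize(sphere, dim=2)
```

    For surface parameterization, the same loop would be run with each firefly encoding a candidate vector of parameter values and f measuring the least-squares fitting error of the resulting Bézier surface.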

  15. The Strengths and Limitations of South Africa's Search for Apartheid-Era Missing Persons.

    Science.gov (United States)

    Aronson, Jay D

    2011-07-01

    This article examines efforts to account for missing persons from the apartheid era in South Africa by family members, civil society organizations and the current government's Missing Persons Task Team, which emerged out of the Truth and Reconciliation Commission process. It focuses on how missing persons have been officially defined in the South African context and the extent to which the South African government is able to address the current needs and desires of relatives of the missing. I make two main arguments: that family members ought to have an active role in shaping the initiatives and institutions that seek to resolve the fate of missing people, and that the South African government ought to take a more holistic 'grave-to-grave' approach to the process of identifying, returning and reburying the remains of the missing.

  16. A hospital-level cost-effectiveness analysis model for toxigenic Clostridium difficile detection algorithms.

    Science.gov (United States)

    Verhoye, E; Vandecandelaere, P; De Beenhouwer, H; Coppens, G; Cartuyvels, R; Van den Abeele, A; Frans, J; Laffut, W

    2015-10-01

    Despite thorough analyses of the analytical performance of Clostridium difficile tests and test algorithms, the financial impact at hospital level has not been well described. Such a model should take institution-specific variables into account, such as incidence, request behaviour and infection control policies. To calculate the total hospital costs of different test algorithms, accounting for days on which infected patients with toxigenic strains were not isolated and therefore posed an infectious risk for new/secondary nosocomial infections. A mathematical algorithm was developed to gather the above parameters using data from seven Flemish hospital laboratories (Bilulu Microbiology Study Group) (number of tests, local prevalence and hospital hygiene measures). Measures of sensitivity and specificity for the evaluated tests were taken from the literature. List prices and costs of assays were provided by the manufacturer or the institutions. The calculated cost included reagent costs, personnel costs and the financial burden following due and undue isolations and antibiotic therapies. Five different test algorithms were compared. A dynamic calculation model was constructed to evaluate the cost:benefit ratio of each algorithm for a set of institution- and time-dependent inputted variables (prevalence, cost fluctuations and test performances), making it possible to choose the most advantageous algorithm for its setting. A two-step test algorithm with concomitant glutamate dehydrogenase and toxin testing, followed by a rapid molecular assay was found to be the most cost-effective algorithm. This enabled resolution of almost all cases on the day of arrival, minimizing the number of unnecessary or missing isolations. Copyright © 2015 The Healthcare Infection Society. Published by Elsevier Ltd. All rights reserved.

  17. Barriers to obstetric care among maternal near-misses | Soma-Pillay ...

    African Journals Online (AJOL)

    One hundred maternal near-misses were prospectively identified using the World Health Organization criteria. ... The above causes were also the most important factors causing delays for the leading causes of maternal near-misses – obstetric haemorrhage, hypertension/pre-eclampsia, and medical and surgical conditions.

  18. TITRATION: A Randomized Study to Assess 2 Treatment Algorithms with New Insulin Glargine 300 units/mL.

    Science.gov (United States)

    Yale, Jean-François; Berard, Lori; Groleau, Mélanie; Javadi, Pasha; Stewart, John; Harris, Stewart B

    2017-10-01

    It was uncertain whether an algorithm that involves increasing insulin dosages by 1 unit/day may cause more hypoglycemia with the longer-acting insulin glargine 300 units/mL (GLA-300). The objective of this study was to compare safety and efficacy of 2 titration algorithms, INSIGHT and EDITION, for GLA-300 in people with uncontrolled type 2 diabetes mellitus, mainly in a primary care setting. This was a 12-week, open-label, randomized, multicentre pilot study. Participants were randomly assigned to 1 of 2 algorithms: they either increased their dosage by 1 unit/day (INSIGHT, n=108) or the dose was adjusted by the investigator at least once weekly, but no more often than every 3 days (EDITION, n=104). The target fasting self-monitored blood glucose was in the range of 4.4 to 5.6 mmol/L. The percentages of participants reaching the primary endpoint of fasting self-monitored blood glucose ≤5.6 mmol/L without nocturnal hypoglycemia were 19.4% (INSIGHT) and 18.3% (EDITION). At week 12, 26.9% (INSIGHT) and 28.8% (EDITION) of participants achieved a glycated hemoglobin value of ≤7%. No differences in the incidence of hypoglycemia of any category were noted between algorithms. Participants in both arms of the study were much more satisfied with their new treatment as assessed by the Diabetes Treatment Satisfaction Questionnaire. Most health-care professionals (86%) preferred the INSIGHT over the EDITION algorithm. The frequency of adverse events was similar between algorithms. A patient-driven titration algorithm of 1 unit/day with GLA-300 is effective and comparable to the previously tested EDITION algorithm and is preferred by health-care professionals. Copyright © 2017 Diabetes Canada. Published by Elsevier Inc. All rights reserved.

  19. Missed Radiation Therapy and Cancer Recurrence

    Science.gov (United States)

    Patients who miss radiation therapy sessions during cancer treatment have an increased risk of their disease returning, even if they eventually complete their course of radiation treatment, according to a new study.

  20. Diabetes or hypertension as risk indicators for missing teeth ...

    African Journals Online (AJOL)

    A negative binomial regression (NBR) model was generated. Results: Mean age was 50.7 ± 16.2 and 50.0% were women. Mean number of missing teeth was 4.98 ± 4.17. In the multivariate NBR model, we observed that individuals with T2DM had higher risk of more missing teeth (incidence rate ratios [IRRs] = 3.13; 95% ...

  1. Optimal platform design using non-dominated sorting genetic algorithm II and technique for order of preference by similarity to ideal solution; application to automotive suspension system

    Science.gov (United States)

    Shojaeefard, Mohammad Hassan; Khalkhali, Abolfazl; Faghihian, Hamed; Dahmardeh, Masoud

    2018-03-01

    Unlike conventional approaches, where optimization is performed on a unique component of a specific product, optimum design of a set of components for use in a product family can significantly reduce costs. Increasing the commonality and performance of the product platform simultaneously is a multi-objective optimization problem (MOP). Several optimization methods have been reported to solve such MOPs. However, what is less discussed is how to find the trade-off points among the obtained non-dominated optimum points. This article investigates the optimal design of a product family using the non-dominated sorting genetic algorithm II (NSGA-II) and proposes employing the technique for order of preference by similarity to ideal solution (TOPSIS) to find the trade-off points among the obtained non-dominated results while compromising all objective functions together. A case study for a family of suspension systems is presented, considering performance and commonality. The results indicate the effectiveness of the proposed method in obtaining the trade-off points with the best possible performance while maximizing the common parts.
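    The non-dominated (Pareto) filtering that NSGA-II builds on can be sketched with a brute-force check; the objective pairs below are hypothetical, and both objectives are taken as minimised:

```python
def pareto_front(points):
    """Return indices of non-dominated points (all objectives minimised).

    A point is dominated if another point is no worse in every objective
    and strictly better in at least one.
    """
    front = []
    for i, p in enumerate(points):
        dominated = any(
            all(q[k] <= p[k] for k in range(len(p))) and
            any(q[k] < p[k] for k in range(len(p)))
            for j, q in enumerate(points) if j != i)
        if not dominated:
            front.append(i)
    return front

# Hypothetical (commonality-cost, performance-loss) pairs for five designs.
pts = [(1.0, 5.0), (2.0, 3.0), (3.0, 4.0), (4.0, 1.0), (2.5, 2.5)]
front = pareto_front(pts)
```

    A TOPSIS-style step, such as the one sketched under record 19 above, would then pick a single trade-off design from the surviving front.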

  2. Missing Boxes in Central Europe

    DEFF Research Database (Denmark)

    Prockl, Günter; Weibrecht Kristensen, Kirsten

    2015-01-01

    The Chinese New Year is an event that happens every year. Every year, however, it also causes severe problems for the companies involved in the industry, in the form of missing containers throughout the chain, particularly in the European hinterland. Illustrated on the symptoms of the Chin...

  3. Exploratory Factor Analysis With Small Samples and Missing Data.

    Science.gov (United States)

    McNeish, Daniel

    2017-01-01

    Exploratory factor analysis (EFA) is an extremely popular method for determining the underlying factor structure for a set of variables. Due to its exploratory nature, EFA is notorious for being conducted with small sample sizes, and recent reviews of psychological research have reported that between 40% and 60% of applied studies have 200 or fewer observations. Recent methodological studies have addressed small size requirements for EFA models; however, these models have only considered complete data, which are the exception rather than the rule in psychology. Furthermore, the extant literature on missing data techniques with small samples is scant, and nearly all existing studies focus on topics that are not of primary interest to EFA models. Therefore, this article presents a simulation to assess the performance of various missing data techniques for EFA models with both small samples and missing data. Results show that deletion methods do not extract the proper number of factors and estimate the factor loadings with severe bias, even when data are missing completely at random. Predictive mean matching is the best method overall when considering extracting the correct number of factors and estimating factor loadings without bias, although 2-stage estimation was a close second.
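
    Predictive mean matching, which the simulation found best overall, can be illustrated with a minimal single-predictor sketch. The covariate x, the regression model, and the 20% missing-completely-at-random pattern below are assumptions for illustration; real EFA applications iterate this over many variables.

```python
import numpy as np

def pmm_impute(x, y, k=5, rng=None):
    """Predictive mean matching: fit y ~ x on complete cases, then fill each
    missing y with the observed value of a random donor among the k cases
    whose predicted values are closest."""
    rng = rng if rng is not None else np.random.default_rng()
    y = y.copy()
    obs = ~np.isnan(y)
    slope, intercept = np.polyfit(x[obs], y[obs], 1)
    pred = slope * x + intercept
    obs_idx = np.flatnonzero(obs)
    for i in np.flatnonzero(~obs):
        # k donors with the closest predicted means; copy one donor's real value
        donors = obs_idx[np.argsort(np.abs(pred[obs_idx] - pred[i]))[:k]]
        y[i] = y[rng.choice(donors)]
    return y

rng = np.random.default_rng(4)
x = rng.normal(0, 1, 400)
y = 2.0 * x + rng.normal(0, 0.5, 400)
y_miss = y.copy()
y_miss[rng.random(400) < 0.2] = np.nan  # 20% missing completely at random

y_imp = pmm_impute(x, y_miss, rng=rng)
```

    Because imputed values are always actually observed values, PMM preserves the variable's distribution better than filling in model predictions directly.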

  4. Implicit Valuation of the Near-Miss is Dependent on Outcome Context.

    Science.gov (United States)

    Banks, Parker J; Tata, Matthew S; Bennett, Patrick J; Sekuler, Allison B; Gruber, Aaron J

    2018-03-01

    Gambling studies have described a "near-miss effect" wherein the experience of almost winning increases gambling persistence. The near-miss has been proposed to inflate the value of preceding actions through its perceptual similarity to wins. We demonstrate here, however, that it acts as a conditioned stimulus to positively or negatively influence valuation, dependent on reward expectation and cognitive engagement. When subjects are asked to choose between two simulated slot machines, near-misses increase valuation of machines with a low payout rate, whereas they decrease valuation of high payout machines. This contextual effect impairs decisions and persists regardless of manipulations to outcome feedback or financial incentive provided for good performance. It is consistent with proposals that near-misses cause frustration when wins are expected, and we propose that it increases choice stochasticity and overrides avoidance of low-valued options. Intriguingly, the near-miss effect disappears when subjects are required to explicitly value machines by placing bets, rather than choosing between them. We propose that this task increases cognitive engagement and recruits participation of brain regions involved in cognitive processing, causing inhibition of otherwise dominant systems of decision-making. Our results reveal that only implicit, rather than explicit strategies of decision-making are affected by near-misses, and that the brain can fluidly shift between these strategies according to task demands.

  5. Airline Safety Improvement Through Experience with Near-Misses: A Cautionary Tale.

    Science.gov (United States)

    Madsen, Peter; Dillon, Robin L; Tinsley, Catherine H

    2016-05-01

    In recent years, the U.S. commercial airline industry has achieved unprecedented levels of safety, with the statistical risk associated with U.S. commercial aviation falling to 0.003 fatalities per 100 million passengers. But decades of research on organizational learning show that success often breeds complacency and failure inspires improvement. With accidents as rare events, can the airline industry continue safety advancements? This question is complicated by the complex system in which the industry operates where chance combinations of multiple factors contribute to what are largely probabilistic (rather than deterministic) outcomes. Thus, some apparent successes are realized because of good fortune rather than good processes, and this research intends to bring attention to these events, the near-misses. The processes that create these near-misses could pose a threat if multiple contributing factors combine in adverse ways without the intervention of good fortune. Yet, near-misses (if recognized as such) can, theoretically, offer a mechanism for continuing safety improvements, above and beyond learning gleaned from observable failure. We test whether or not this learning is apparent in the airline industry. Using data from 1990 to 2007, fixed effects Poisson regressions show that airlines learn from accidents (their own and others), and from one category of near-misses: those where the possible dangers are salient. Unfortunately, airlines do not improve following near-miss incidents when the focal event has no clear warnings of significant danger. Therefore, while airlines need to and can learn from certain near-misses, we conclude with recommendations for improving airline learning from all near-misses. © 2015 Society for Risk Analysis.

  6. Use of Monte Carlo computation in benchmarking radiotherapy treatment planning system algorithms

    International Nuclear Information System (INIS)

    Lewis, R.D.; Ryde, S.J.S.; Seaby, A.W.; Hancock, D.A.; Evans, C.J.

    2000-01-01

    Radiotherapy treatments are becoming more complex, often requiring the dose to be calculated in three dimensions and sometimes involving the application of non-coplanar beams. The ability of treatment planning systems to accurately calculate dose under a range of these and other irradiation conditions requires evaluation. Practical assessment of such arrangements can be problematical, especially when a heterogeneous medium is used. This work describes the use of Monte Carlo computation as a benchmarking tool to assess the dose distribution of external photon beam plans obtained in a simple heterogeneous phantom by several commercially available 3D and 2D treatment planning system algorithms. For comparison, practical measurements were undertaken using film dosimetry. The dose distributions were calculated for a variety of irradiation conditions designed to show the effects of surface obliquity, inhomogeneities and missing tissue above tangential beams. The results show maximum dose differences of 47% between some planning algorithms and film at a point 1 mm below a tangentially irradiated surface. Overall, the dose distribution obtained from film was most faithfully reproduced by the Monte Carlo N-Particle results illustrating the potential of Monte Carlo computation in evaluating treatment planning system algorithms. (author)

  7. Happy faces are preferred regardless of familiarity--sad faces are preferred only when familiar.

    Science.gov (United States)

    Liao, Hsin-I; Shimojo, Shinsuke; Yeh, Su-Ling

    2013-06-01

    Familiarity leads to preference (e.g., the mere exposure effect), yet it remains unknown whether it is objective familiarity, that is, repetitive exposure, or subjective familiarity that contributes to preference. In addition, it is unexplored whether and how different emotions influence familiarity-related preference. The authors investigated whether happy or sad faces are preferred or perceived as more familiar and whether this subjective familiarity judgment correlates with preference for different emotional faces. An emotional face--happy or sad--was paired with a neutral face, and participants rated the relative preference and familiarity of each of the paired faces. For preference judgment, happy faces were preferred and sad faces were less preferred, compared with neutral faces. For familiarity judgment, happy faces did not show any bias, but sad faces were perceived as less familiar than neutral faces. Item-by-item correlational analyses show preference for sad faces--but not happy faces--positively correlate with familiarity. These results suggest a direct link between positive emotion and preference, and argue at least partly against a common cause for familiarity and preference. Instead, facial expression of different emotional valence modulates the link between familiarity and preference.

  8. Restoring method for missing data of spatial structural stress monitoring based on correlation

    Science.gov (United States)

    Zhang, Zeyu; Luo, Yaozhi

    2017-07-01

    Long-term monitoring of spatial structures is of great importance for a full understanding of their performance and safety. Missing parts of the monitoring data record will affect the data analysis and safety assessment of the structure. Based on the long-term monitoring data of the steel structure of the Hangzhou Olympic Center Stadium, the correlation between the stress changes at the measuring points is studied, and an interpolation method for the missing stress data is proposed. Stress data from correlated measuring points are selected over the 3 months of the season in which the data are missing to fit the correlation. Daytime and nighttime data are fitted separately for interpolation. For simple linear regression, when a single point's correlation coefficient is 0.9 or more, the average interpolation error is about 5%. For multiple linear regression, the interpolation accuracy does not increase significantly once more than 6 correlated points are used. The stress baseline value of the construction step should be calculated before interpolating missing data from the construction stage, and the average error is within 10%. The interpolation error for continuous missing data is slightly larger than that for discrete missing data. The data missing rate for this method should not exceed 30%. Finally, one measuring point's missing monitoring data is restored to verify the validity of the method.
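
    The single-point case of such a correlation-based method, fitting a simple linear regression between a strongly correlated point and the point with the gap, can be sketched as follows. The stress histories here are synthetic stand-ins, not the Hangzhou monitoring data.

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic stress histories (MPa) at two correlated measuring points;
# point B has a block of continuous missing data restored from point A
a = np.linspace(10.0, 30.0, 200) + rng.normal(0, 0.5, 200)
b_true = 1.8 * a + 4.0 + rng.normal(0, 0.5, 200)

b = b_true.copy()
gap = slice(120, 150)  # a continuous stretch of missing readings
b[gap] = np.nan

ok = ~np.isnan(b)
r = np.corrcoef(a[ok], b[ok])[0, 1]
assert r >= 0.9  # the paper applies simple regression only to strongly correlated points

slope, intercept = np.polyfit(a[ok], b[ok], 1)  # least-squares fit on observed pairs
b[gap] = slope * a[gap] + intercept             # interpolate the gap from point A

rel_err = np.mean(np.abs(b[gap] - b_true[gap]) / np.abs(b_true[gap]))
```
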

  9. The handling of missing binary data in language research

    Directory of Open Access Journals (Sweden)

    François Pichette

    2015-01-01

    Researchers are frequently confronted with unanswered questions or items on their questionnaires and tests, due to factors such as item difficulty, lack of testing time, or participant distraction. This paper first presents results from a poll confirming previous claims (Rietveld & van Hout, 2006; Schafer & Graham, 2002) that data replacement and deletion methods are common in research. Language researchers declared that when faced with missing answers of the yes/no type (that translate into zero or one in data tables), the three most common solutions they adopt are to exclude the participant's data from the analyses, to leave the cell empty, or to fill in a zero, as for an incorrect answer. This study then examines the impact on Cronbach's α of five types of data insertion, using simulated and actual data with various numbers of participants and missing percentages. Our analyses indicate that the three most common methods we identified among language researchers are the ones with the greatest impact on Cronbach's α coefficients; in other words, they are the least desirable solutions to the missing data problem. On the basis of our results, we make recommendations for language researchers concerning the best way to deal with missing data. Given that none of the most common simple methods works properly, we suggest that missing data be replaced either by the item's mean or by the participants' overall mean to provide a better, more accurate image of the instrument's internal consistency.
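
    The effect the study measures can be reproduced in miniature: compute Cronbach's α on a complete simulated yes/no dataset, then compare zero-filling (scoring missing as incorrect) against item-mean replacement. The data-generating model below is an assumption for illustration only, not the paper's data.

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_participants, n_items) score matrix."""
    k = items.shape[1]
    item_var = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_var / total_var)

rng = np.random.default_rng(1)
n = 500
ability = rng.normal(0, 1, n)
# 10 hypothetical yes/no items driven by a common ability, so the scale coheres
data = (ability[:, None] + rng.normal(0, 1, (n, 10)) > 0).astype(float)
full_alpha = cronbach_alpha(data)

# Remove 20% of answers at random, then compare two common treatments
mask = rng.random(data.shape) < 0.20
zero_filled = np.where(mask, 0.0, data)               # score missing as incorrect
item_means = np.nanmean(np.where(mask, np.nan, data), axis=0)
mean_filled = np.where(mask, item_means, data)        # replace by the item's mean

alpha_zero = cronbach_alpha(zero_filled)
alpha_mean = cronbach_alpha(mean_filled)
```

    Zero-filling inflates item variances without adding covariance, which distorts α more than mean replacement does, matching the paper's recommendation.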

  10. Missed injuries in the era of the trauma scan.

    Science.gov (United States)

    Lawson, Christy M; Daley, Brian J; Ormsby, Christine B; Enderson, Blaine

    2011-02-01

    A rapid computed tomography technique or "trauma scan" (TS) provides high-resolution studies of the head, cervical spine, chest, abdomen, and pelvis. We sought to determine whether TS has decreased missed injuries. A previous study of TS found a 3% missed rate. After institutional review board approval, trauma patients from January 2001 through December 2008 were reviewed for delayed diagnosis (DD) of injury to the head, cervical spine, chest, abdomen, or pelvis. Missed extremity injuries were excluded. Injury Severity Score, length of stay, type of injury, outcomes, and days to detection were captured. Of 26,264 patients reviewed, 90 patients had DD, with an incidence of 0.34%. DD most commonly presented on day 2. Injuries included 16 bowel/mesentery, 12 spine, 11 pelvic, 8 spleen, 6 diaphragm, 5 clavicle, 4 scapula, 4 cervical spine, 4 intracranial, 4 sternum, 3 maxillofacial, 3 liver, 2 heart/aorta, 2 vascular, 2 urethra/bladder, 2 pneumothorax, and 2 pancreas/common bile duct. DD resulted in 1 death, 6 prolonged intensive care unit stays, 19 operative interventions, and 38 additional interventions. TS is an effective way of evaluating trauma patients for intracranial, cervical spine, chest, abdomen, and pelvic injuries that have the potential to impact morbidity and mortality. The incidence of injuries missed in these crucial areas has been reduced at our institution by the use of this radiographic modality. The most common missed injury remains bowel, and so a high index of suspicion and the tertiary survey must remain a mainstay of therapy.

  11. HyRA: A Hybrid Recommendation Algorithm Focused on Smart POI. Ceutí as a Study Scenario

    Science.gov (United States)

    Gómez-Oliva, Andrea; Molina, Germán

    2018-01-01

    Nowadays, Physical Web together with the increase in the use of mobile devices, Global Positioning System (GPS), and Social Networking Sites (SNS) have caused users to share enriched information on the Web such as their tourist experiences. Therefore, an area that has been significantly improved by using the contextual information provided by these technologies is tourism. In this way, the main goals of this work are to propose and develop an algorithm focused on the recommendation of Smart Point of Interaction (Smart POI) for a specific user according to his/her preferences and the Smart POIs’ context. Hence, a novel Hybrid Recommendation Algorithm (HyRA) is presented by incorporating an aggregation operator into the user-based Collaborative Filtering (CF) algorithm as well as including the Smart POIs’ categories and geographical information. For the experimental phase, two real-world datasets have been collected and preprocessed. In addition, one Smart POIs’ categories dataset was built. As a result, a dataset composed of 16 Smart POIs, another constituted by the explicit preferences of 200 respondents, and the last dataset integrated by 13 Smart POIs’ categories are provided. The experimental results show that the recommendations suggested by HyRA are promising. PMID:29562590

  12. HyRA: A Hybrid Recommendation Algorithm Focused on Smart POI. Ceutí as a Study Scenario.

    Science.gov (United States)

    Alvarado-Uribe, Joanna; Gómez-Oliva, Andrea; Barrera-Animas, Ari Yair; Molina, Germán; Gonzalez-Mendoza, Miguel; Parra-Meroño, María Concepción; Jara, Antonio J

    2018-03-17

    Nowadays, Physical Web together with the increase in the use of mobile devices, Global Positioning System (GPS), and Social Networking Sites (SNS) have caused users to share enriched information on the Web such as their tourist experiences. Therefore, an area that has been significantly improved by using the contextual information provided by these technologies is tourism. In this way, the main goals of this work are to propose and develop an algorithm focused on the recommendation of Smart Point of Interaction (Smart POI) for a specific user according to his/her preferences and the Smart POIs' context. Hence, a novel Hybrid Recommendation Algorithm (HyRA) is presented by incorporating an aggregation operator into the user-based Collaborative Filtering (CF) algorithm as well as including the Smart POIs' categories and geographical information. For the experimental phase, two real-world datasets have been collected and preprocessed. In addition, one Smart POIs' categories dataset was built. As a result, a dataset composed of 16 Smart POIs, another constituted by the explicit preferences of 200 respondents, and the last dataset integrated by 13 Smart POIs' categories are provided. The experimental results show that the recommendations suggested by HyRA are promising.

  13. Crystal structure prediction of flexible molecules using parallel genetic algorithms with a standard force field.

    Science.gov (United States)

    Kim, Seonah; Orendt, Anita M; Ferraro, Marta B; Facelli, Julio C

    2009-10-01

    This article describes the application of our distributed computing framework for crystal structure prediction (CSP) the modified genetic algorithms for crystal and cluster prediction (MGAC), to predict the crystal structure of flexible molecules using the general Amber force field (GAFF) and the CHARMM program. The MGAC distributed computing framework includes a series of tightly integrated computer programs for generating the molecule's force field, sampling crystal structures using a distributed parallel genetic algorithm and local energy minimization of the structures followed by the classifying, sorting, and archiving of the most relevant structures. Our results indicate that the method can consistently find the experimentally known crystal structures of flexible molecules, but the number of missing structures and poor ranking observed in some crystals show the need for further improvement of the potential. Copyright 2009 Wiley Periodicals, Inc.

  14. Investigator's Guide to Missing Child Cases. For Law-Enforcement Officers Locating Missing Children. Second Edition.

    Science.gov (United States)

    Patterson, John C.

    This booklet provides guidance to law enforcement officers investigating missing children cases, whether through parental kidnappings, abductions by strangers, runaway or "throwaway" cases, and those in which the circumstances are unknown. The guide describes, step-by-step, the investigative process required for each of the four types of missing…

  15. A dynamic programming approach to missing data estimation using neural networks

    CSIR Research Space (South Africa)

    Nelwamondo, FV

    2013-01-01

    … a method where dynamic programming is not used. This paper also suggests a different way of formulating a missing data problem such that dynamic programming is applicable to estimating the missing data.

  16. Establishing a threshold for the number of missing days using 7 d pedometer data.

    Science.gov (United States)

    Kang, Minsoo; Hart, Peter D; Kim, Youngdeok

    2012-11-01

    The purpose of this study was to examine the threshold of the number of missing days of recovery using the individual information (II)-centered approach. Data for this study came from 86 participants, aged from 17 to 79 years old, who had 7 consecutive days of complete pedometer (Yamax SW 200) wear. Missing datasets (1 d through 5 d missing) were created by a SAS random process 10,000 times each. All missing values were replaced using the II-centered approach. A 7 d average was calculated for each dataset, including the complete dataset. Repeated measure ANOVA was used to determine the differences between 1 d through 5 d missing datasets and the complete dataset. Mean absolute percentage error (MAPE) was also computed. Mean (SD) daily step count for the complete 7 d dataset was 7979 (3084). Mean (SD) values for the 1 d through 5 d missing datasets were 8072 (3218), 8066 (3109), 7968 (3273), 7741 (3050) and 8314 (3529), respectively (p > 0.05). The lower MAPEs were estimated for 1 d missing (5.2%, 95% confidence interval (CI) 4.4-6.0) and 2 d missing (8.4%, 95% CI 7.0-9.8), while all others were greater than 10%. The results of this study show that the 1 d through 5 d missing datasets, with replaced values, were not significantly different from the complete dataset. Based on the MAPE results, it is not recommended to replace more than two days of missing step counts.
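
    A minimal sketch of the replacement idea, filling each missing day with the individual's own mean over the remaining days and scoring the weekly estimate with MAPE, using invented step counts rather than the study's pedometer data:

```python
import numpy as np

rng = np.random.default_rng(2)
# 86 hypothetical participants x 7 d of pedometer counts (invented numbers)
steps = rng.normal(8000, 3000, (86, 7)).clip(1000)
true_week_mean = steps.mean(axis=1)

# Create 2 d of missing data per person, then replace each gap with that
# individual's own mean over the remaining days
week = steps.copy()
for row in week:
    gone = rng.choice(7, size=2, replace=False)
    row[gone] = np.nan
filled = np.where(np.isnan(week), np.nanmean(week, axis=1, keepdims=True), week)

est_week_mean = filled.mean(axis=1)
mape = 100 * np.mean(np.abs(est_week_mean - true_week_mean) / true_week_mean)
```

    With two missing days the error stays in the single digits here, consistent with the study's finding that up to two replaced days keep MAPE under about 10%.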

  17. The algorithmic level is the bridge between computation and brain.

    Science.gov (United States)

    Love, Bradley C

    2015-04-01

    Every scientist chooses a preferred level of analysis and this choice shapes the research program, even determining what counts as evidence. This contribution revisits Marr's (1982) three levels of analysis (implementation, algorithmic, and computational) and evaluates the prospect of making progress at each individual level. After reviewing limitations of theorizing within a level, two strategies for integration across levels are considered. One is top-down in that it attempts to build a bridge from the computational to algorithmic level. Limitations of this approach include insufficient theoretical constraint at the computation level to provide a foundation for integration, and that people are suboptimal for reasons other than capacity limitations. Instead, an inside-out approach is forwarded in which all three levels of analysis are integrated via the algorithmic level. This approach maximally leverages mutual data constraints at all levels. For example, algorithmic models can be used to interpret brain imaging data, and brain imaging data can be used to select among competing models. Examples of this approach to integration are provided. This merging of levels raises questions about the relevance of Marr's tripartite view. Copyright © 2015 Cognitive Science Society, Inc.

  18. Maternal death and near miss measurement

    African Journals Online (AJOL)

    ABEOLUGBENGAS

    2008-05-26

    May 26, 2008 ... Maternal health services need to be accountable more than ever ... of maternal death and near miss audit, surveillance and review is ..... (d) A fundamental principle of these ..... quality assurance in obstetrics in Nigeria - a.

  19. Preferences over Social Risk

    DEFF Research Database (Denmark)

    Harrison, Glenn W.; Lau, Morten; Rutström, E. Elisabet

    2013-01-01

    We elicit individual preferences over social risk. We identify the extent to which these preferences are correlated with preferences over individual risk and the well-being of others. We examine these preferences in the context of laboratory experiments over small, anonymous groups, although the methodological issues extend to larger groups that form endogenously (e.g., families, committees, communities). Preferences over social risk can be closely approximated by individual risk attitudes when subjects have no information about the risk preferences of other group members. We find no evidence that subjects systematically reveal different risk attitudes in a social setting with no prior knowledge about the risk preferences of others compared to when they solely bear the consequences of the decision. However, we also find that subjects are significantly more risk averse when they know the risk…

  20. BE-FAST (Balance, Eyes, Face, Arm, Speech, Time): Reducing the Proportion of Strokes Missed Using the FAST Mnemonic.

    Science.gov (United States)

    Aroor, Sushanth; Singh, Rajpreet; Goldstein, Larry B

    2017-02-01

    The FAST algorithm (Face, Arm, Speech, Time) helps identify persons having an acute stroke. We determined the proportion of patients with acute ischemic stroke not captured by FAST and evaluated a revised mnemonic. Records of all patients admitted to the University of Kentucky Stroke Center between January and December 2014 with a discharge International Classification of Diseases, Ninth Revision, Clinical Modification code for acute ischemic stroke were reviewed. Patients who were misclassified, had missing National Institutes of Health Stroke Scale data, or were comatose or intubated were excluded. Presenting symptoms, demographics, and examination findings based on the National Institutes of Health Stroke Scale data were abstracted. Of 858 consecutive records identified, 736 met inclusion criteria; 14.1% did not have any FAST symptoms at presentation. Of these, 42% had gait imbalance or leg weakness, 40% visual symptoms, and 70% either symptom. With their addition, the proportion of stroke patients not identified was reduced to 4.4% (P<0.0001). When face weakness, arm weakness, or speech impairment on admission examination were considered in addition to a history of FAST symptoms, the proportion missed was reduced to 9.9% (P=0.0010). The proportion of stroke patients not identified was also reduced (2.6%) with the addition of a history of gait imbalance/leg weakness or visual symptoms (P<0.0001). Of patients with ischemic stroke with deficits potentially amenable to acute intervention, 14% are not identified using FAST. The inclusion of gait/leg and visual symptoms leads to a reduction in missed strokes. If validated in a prospective study, a revision of public educational programs may be warranted. © 2017 American Heart Association, Inc.

  1. FHSA-SED: Two-Locus Model Detection for Genome-Wide Association Study with Harmony Search Algorithm.

    Directory of Open Access Journals (Sweden)

    Shouheng Tuo

    Two-locus models are typical significant disease models to be identified in genome-wide association studies (GWAS). Due to the intensive computational burden and the diversity of disease models, existing methods suffer from low detection power, high computation cost, and a preference for some types of disease models. In this study, two scoring functions (the Bayesian-network-based K2-score and the Gini-score) are used to characterize a two-SNP locus as a candidate model; the two criteria are adopted simultaneously to improve identification power and tackle the preference problem. The harmony search algorithm (HSA) is improved to quickly find the most likely candidate models among all two-locus models, in which a local search algorithm with a two-dimensional tabu table is presented to avoid repeatedly evaluating disease models that have strong marginal effects. Finally, the G-test statistic is used to further test the candidate models. We investigate our method, named FHSA-SED, on 82 simulated datasets and a real AMD dataset, and compare it with two typical methods (MACOED and CSE) that have been developed recently based on swarm intelligence search algorithms. The results of the simulation experiments indicate that our method outperforms the two compared algorithms in terms of detection power, computation time, evaluation times, sensitivity (TPR), specificity (SPC), positive predictive value (PPV) and accuracy (ACC). Our method has identified two SNPs (rs3775652 and rs10511467) that may also be associated with disease in the AMD dataset.
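
    As a rough illustration of one of the two scoring functions, the sketch below computes a Gini-style impurity score over the nine genotype combinations of a SNP pair; the simulated epistatic disease model and penetrances are assumptions, not the paper's benchmark.

```python
import numpy as np

def gini_score(geno_a, geno_b, case):
    """Weighted Gini impurity of case/control status within each of the nine
    two-locus genotype cells (lower = the SNP pair separates cases better)."""
    n = len(case)
    score = 0.0
    for ga in (0, 1, 2):
        for gb in (0, 1, 2):
            cell = case[(geno_a == ga) & (geno_b == gb)]
            if cell.size == 0:
                continue
            p = cell.mean()                      # fraction of cases in this cell
            score += cell.size / n * 2 * p * (1 - p)
    return score

rng = np.random.default_rng(3)
n = 2000
a = rng.integers(0, 3, n)        # genotypes coded 0/1/2
b = rng.integers(0, 3, n)
unrelated = rng.integers(0, 3, n)

# Assumed epistatic model: risk is high only when both SNPs carry a minor allele
risk = (a > 0) & (b > 0)
case = np.where(risk, rng.random(n) < 0.8, rng.random(n) < 0.1).astype(float)

assoc_score = gini_score(a, b, case)           # the interacting pair
null_score = gini_score(a, unrelated, case)    # one SNP swapped for noise
```

    The interacting pair yields purer cells and hence a lower score; a search such as HSA would rank candidate pairs by scores like this before the final G-test.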

  2. A new collaborative recommendation approach based on users clustering using artificial bee colony algorithm.

    Science.gov (United States)

    Ju, Chunhua; Xu, Chonghuan

    2013-01-01

    Although there are many good collaborative recommendation methods, it is still a challenge to increase the accuracy and diversity of these methods to fulfill users' preferences. In this paper, we propose a novel collaborative filtering recommendation approach based on K-means clustering algorithm. In the process of clustering, we use artificial bee colony (ABC) algorithm to overcome the local optimal problem caused by K-means. After that we adopt the modified cosine similarity to compute the similarity between users in the same clusters. Finally, we generate recommendation results for the corresponding target users. Detailed numerical analysis on a benchmark dataset MovieLens and a real-world dataset indicates that our new collaborative filtering approach based on users clustering algorithm outperforms many other recommendation methods.
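
    The user-based collaborative filtering core of such an approach (leaving aside the ABC-assisted clustering step) can be sketched with plain cosine similarity on a toy rating matrix; all the numbers below are invented.

```python
import numpy as np

def cosine_sim(a, b):
    """Cosine similarity between two rating vectors (0 means unrated)."""
    num = a @ b
    den = np.linalg.norm(a) * np.linalg.norm(b)
    return num / den if den else 0.0

# Tiny hypothetical user-item rating matrix (rows: users, cols: items; 0 = unrated)
R = np.array([[5.0, 3.0, 0.0, 1.0],
              [4.0, 0.0, 0.0, 1.0],
              [1.0, 1.0, 5.0, 4.0],
              [0.0, 1.0, 5.0, 4.0]])

def predict(R, user, item, k=2):
    """Predict a rating as the similarity-weighted mean over the k most
    similar users who actually rated the item."""
    sims = np.array([cosine_sim(R[user], R[v]) if v != user else -1.0
                     for v in range(R.shape[0])])
    raters = [v for v in np.argsort(sims)[::-1] if R[v, item] > 0][:k]
    w = sims[raters]
    return float(w @ R[[*raters], item] / w.sum())

p = predict(R, user=1, item=1)  # user 1's missing rating for item 1
```

    User 1's nearest neighbor (user 0) rated the item 3, so the prediction lands between that and the dissimilar users' low ratings; clustering first simply restricts the neighbor search to the user's own cluster.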

  3. A New Collaborative Recommendation Approach Based on Users Clustering Using Artificial Bee Colony Algorithm

    Directory of Open Access Journals (Sweden)

    Chunhua Ju

    2013-01-01

    Although there are many good collaborative recommendation methods, it is still a challenge to increase the accuracy and diversity of these methods to fulfill users’ preferences. In this paper, we propose a novel collaborative filtering recommendation approach based on the K-means clustering algorithm. In the process of clustering, we use the artificial bee colony (ABC) algorithm to overcome the local optimum problem caused by K-means. After that we adopt the modified cosine similarity to compute the similarity between users in the same clusters. Finally, we generate recommendation results for the corresponding target users. Detailed numerical analysis on a benchmark dataset MovieLens and a real-world dataset indicates that our new collaborative filtering approach based on users clustering algorithm outperforms many other recommendation methods.

  4. Interpolation of Missing Precipitation Data Using Kernel Estimations for Hydrologic Modeling

    Directory of Open Access Journals (Sweden)

    Hyojin Lee

    2015-01-01

    Precipitation is the main factor that drives hydrologic modeling; therefore, missing precipitation data can cause malfunctions in hydrologic modeling. Although interpolation of missing precipitation data is recognized as an important research topic, only a few methods follow a regression approach. In this study, daily precipitation data were interpolated using five different kernel functions, namely, Epanechnikov, Quartic, Triweight, Tricube, and Cosine, to estimate missing precipitation data. This study also presents an assessment that compares estimation of missing precipitation data through Kth nearest neighborhood (KNN) regression to the five different kernel estimations and their performance in simulating streamflow using the Soil and Water Assessment Tool (SWAT) hydrologic model. The results show that the kernel approaches provide higher-quality interpolation of precipitation data compared with the KNN regression approach, in terms of both statistical data assessment and hydrologic modeling performance.
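
    A minimal Nadaraya-Watson-style sketch with the Epanechnikov kernel, one of the five kernels listed, estimating a missing gauge value from neighboring gauges along a one-dimensional transect; the positions and amounts are invented.

```python
import numpy as np

def epanechnikov(u):
    """Epanechnikov kernel: 0.75 * (1 - u^2) on |u| <= 1, else 0."""
    return np.where(np.abs(u) <= 1, 0.75 * (1 - u ** 2), 0.0)

def kernel_estimate(x_obs, y_obs, x0, bandwidth):
    """Kernel-weighted (Nadaraya-Watson) estimate of y at location x0."""
    w = epanechnikov((x_obs - x0) / bandwidth)
    if w.sum() == 0:
        return np.nan  # no gauge within one bandwidth of x0
    return float(w @ y_obs / w.sum())

# Hypothetical setup: estimate a missing daily total at km 12 along a transect
# from gauges at other locations (distances in km, precipitation in mm)
gauge_pos = np.array([2.0, 6.0, 10.0, 15.0, 20.0])
gauge_mm = np.array([4.1, 5.0, 6.2, 6.9, 8.3])
estimate = kernel_estimate(gauge_pos, gauge_mm, x0=12.0, bandwidth=8.0)
```

    Unlike KNN, which weights the chosen neighbors equally, the kernel downweights gauges smoothly with distance and ignores any gauge beyond one bandwidth.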

  5. MISSE 6 Polymer Film Tensile Experiment

    Science.gov (United States)

    Miller, Sharon K. R.; Dever, Joyce A.; Banks, Bruce A.; Waters, Deborah L.; Sechkar, Edward; Kline, Sara

    2010-01-01

    The Polymer Film Tensile Experiment (PFTE) was flown as part of Materials International Space Station Experiment 6 (MISSE 6). The purpose of the experiment was to expose a variety of polymer films to the low Earth orbital environment under both relaxed and tension conditions. The polymers selected are those commonly used for spacecraft thermal control and those under consideration for use in spacecraft applications such as sunshields, solar sails, and inflatable and deployable structures. The dog-bone shaped samples of polymers that were flown were exposed on both the side of the MISSE 6 Passive Experiment Container (PEC) that was facing into the ram direction (receiving atomic oxygen, ultraviolet (UV) radiation, ionizing radiation, and thermal cycling) and the wake facing side (which was supposed to have experienced predominantly the same environmental effects except for atomic oxygen which was present due to reorientation of the International Space Station). A few of the tensile samples were coated with vapor deposited aluminum on the back and wired to determine the point in the flight when the tensile sample broke as recorded by a change in voltage that was stored on battery powered data loggers for post flight retrieval and analysis. The data returned on the data loggers was not usable. However, post retrieval observation and analysis of the samples was performed. This paper describes the preliminary analysis and observations of the polymers exposed on the MISSE 6 PFTE.

  6. Comparative study between ultrahigh spatial frequency algorithm and high spatial frequency algorithm in high-resolution CT of the lungs

    International Nuclear Information System (INIS)

    Oh, Yu Whan; Kim, Jung Kyuk; Suh, Won Hyuck

    1994-01-01

    To date, the high spatial frequency algorithm (HSFA), which reduces image smoothing and increases spatial resolution, has been used for the evaluation of parenchymal lung diseases in thin-section high-resolution CT. In this study, we compared the ultrahigh spatial frequency algorithm (UHSFA) with the high spatial frequency algorithm in the assessment of thin-section images of the lung parenchyma. Three radiologists compared UHSFA and HSFA on identical CT images of a line-pair resolution phantom, one lung specimen, 2 patients with normal lungs and 18 patients with abnormal lung parenchyma. Scanning of the line-pair resolution phantom demonstrated no difference in resolution between the two techniques, but the outer lines of the line pairs at maximal resolution looked thicker with UHSFA than with HSFA. Lung parenchymal detail with UHSFA was judged equal or superior to HSFA in 95% of images. Lung parenchymal sharpness was improved with UHSFA in all images. Although UHSFA resulted in an increase in visible noise, observers did not find that image noise interfered with image interpretation. The visual CT attenuation of normal lung parenchyma was minimally increased in images with HSFA. The overall visual preference for images reconstructed with UHSFA was judged equal to or greater than that for images reconstructed with HSFA in 78% of cases. The ultrahigh spatial frequency algorithm improved the overall visual quality of the images in pulmonary parenchymal high-resolution CT.

  7. Preferred sets of states, predictability, classicality, and the environment-induced decoherence

    International Nuclear Information System (INIS)

    Zurek, W.H.

    1992-01-01

    Selection of the preferred classical set of states in the process of decoherence -- so important for cosmological considerations -- is discussed with an emphasis on the role of information loss and entropy. Persistence of correlations between the observables of two systems (for instance, a record and the state of a system evolved from the initial conditions described by that record) in the presence of the environment is used to define classical behavior. From the viewpoint of an observer (or any system capable of maintaining records), predictability is a measure of such persistence. A predictability sieve -- a procedure which employs both the statistical and algorithmic entropies to systematically explore the whole Hilbert space of an open system in order to eliminate the majority of the unpredictable and non-classical states and to locate the islands of predictability, including the preferred pointer basis -- is proposed. Predictably evolving states of decohering systems, along with the time-ordered sequences of records of their evolution, define the effectively classical branches of the universal wavefunction in the context of the ''Many Worlds Interpretation''. The relation between the consistent histories approach and the preferred basis is considered. It is demonstrated that histories of sequences of events corresponding to projections onto the states of the pointer basis are consistent.

  8. A study of dentists’ preferences for the restoration of shortened dental arches with partial dentures

    Science.gov (United States)

    Nassani, Mohammad Zakaria; Ibraheem, Shukran; Al-Hallak, Khaled Rateb; Ali El Khalifa, Mohammed Othman; Baroudi, Kusai

    2015-01-01

    Objective: This study aimed to use a utility method to assess dentists’ preferences for the restoration of shortened dental arches (SDAs) with partial dentures. The impact of patient age and length of the SDA on dentists’ preferences for the partial dentures was also investigated. Materials and Methods: In total, 104 subjects holding a basic degree in dentistry and working as staff members in a private dental college in Saudi Arabia were interviewed and presented with 12 scenarios for patients of different ages and mandibular SDAs of varying length. Participants were asked to indicate on a standardized visual analog scale how they would value the health of the patient's mouth if the mandibular SDAs were restored with cobalt-chromium removable partial dentures (RPDs). Results: With a utility value of 0.0 representing the worst possible health state for a mouth and 1.0 representing the best, dentists’ average utility value of the RPD for the SDAs was 0.49 (SD = 0.15). Mean utility scores of the RPDs across the 12 SDA scenarios ranged between 0.35 and 0.61. RPDs that restored the extreme SDAs attracted the highest utility values, and dentists’ utility of the RPD increased significantly with the number of missing posterior teeth. No significant differences in dentists’ mean utility values for the RPD were identified among SDA scenarios for patients of different ages. Conclusion: Restoration of the mandibular SDAs by RPDs is not a highly preferred treatment option among the surveyed group of dentists. Length of the SDA affects dentists’ preferences for the RPD, but patient age does not. PMID:26038647

  9. Diagnostic Laparoscopy for Trauma: How Not to Miss Injuries.

    Science.gov (United States)

    Koto, Modise Z; Matsevych, Oleh Y; Aldous, Colleen

    2018-05-01

    Diagnostic laparoscopy (DL) is a well-accepted approach for penetrating abdominal trauma (PAT). However, the steps of the procedure and the systematic laparoscopic examination are not clearly defined in the literature. The aim of this study was to clarify the definition of DL in trauma surgery by auditing DLs performed for PAT at our institution, and to describe strategies for avoiding missed injuries. The data of patients managed with laparoscopy for PAT from January 2012 to December 2015 were retrospectively analyzed. The details of the operative technique and strategies for avoiding missed injuries were discussed. Of 250 patients managed with laparoscopy for PAT, 113 (45%) underwent DL. Stab wounds were sustained by 94 (83%) patients. Penetration of the peritoneal cavity or retroperitoneum was documented in 67 (59%) patients. Organ evisceration was present in 21 (19%) patients. Multiple injuries were present in 22% of cases; the chest was the most common associated injury. Two (1.8%) iatrogenic injuries were recorded. The conversion rate was 1.7% (2/115). The mean length of hospital stay was 4 days. There were no missed injuries. In the therapeutic laparoscopy (TL) group, DL was performed as the initial part of the procedure and identified all injuries; there were no missed injuries in the TL group. The predetermined sequential steps of DL and the standard systematic examination of intraabdominal organs are described. DL is a feasible and safe procedure that accurately identifies intraabdominal injuries. Selective use of preoperative imaging, adherence to the predetermined steps of the procedure, and the standard systematic laparoscopic examination will minimize the rate of missed injuries.

  10. Research on Large-Scale Road Network Partition and Route Search Method Combined with Traveler Preferences

    Directory of Open Access Journals (Sweden)

    De-Xin Yu

    2013-01-01

    Combined with an improved Pallottino parallel algorithm, this paper proposes a large-scale route search method that considers travelers’ route choice preferences, in which the urban road network is decomposed effectively into multiple layers. Using generalized travel time as the road impedance function, the method builds a new multilayer, multitasking road network data storage structure with object-oriented class definitions. The proposed path search algorithm is then verified using the real road network of Guangzhou city as an example. Through sensitivity experiments, we compare the proposed path search method with current advanced optimal path algorithms. The results demonstrate that the proposed method can increase road network search efficiency by more than 16% under different search proportion requests, node numbers, and computing process numbers, respectively. This method is therefore a significant advance in the field of urban road network guidance.
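
    Pallottino's algorithm is a label-correcting shortest-path variant; as an illustrative stand-in, the sketch below uses plain Dijkstra with a generalized impedance (a weighted sum of travel time and a toll term, both hypothetical) to show how route choice preferences can enter the cost function:

```python
import heapq

def shortest_path(graph, src, dst, alpha=1.0, beta=0.0):
    """Dijkstra over a generalized impedance alpha*time + beta*toll.
    graph: {node: [(neighbor, travel_time, toll), ...]} -- the weights
    alpha/beta stand in for a traveler's preferences (illustrative)."""
    dist, prev = {src: 0.0}, {}
    pq = [(0.0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == dst:
            break
        if d > dist.get(u, float("inf")):
            continue  # stale queue entry
        for v, t, toll in graph.get(u, []):
            nd = d + alpha * t + beta * toll
            if nd < dist.get(v, float("inf")):
                dist[v], prev[v] = nd, u
                heapq.heappush(pq, (nd, v))
    path, node = [dst], dst
    while node != src:
        node = prev[node]  # KeyError here means dst is unreachable
        path.append(node)
    return path[::-1], dist[dst]

network = {'A': [('B', 5, 0), ('C', 2, 1)], 'B': [('D', 1, 0)], 'C': [('D', 6, 0)]}
print(shortest_path(network, 'A', 'D'))  # (['A', 'B', 'D'], 6.0)
```

    Raising beta would penalize the tolled A-C link further; varying such weights per traveler class is one simple way to encode route choice preferences.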

  11. Maternal Near-Miss Due to Unsafe Abortion and Associated Short ...

    African Journals Online (AJOL)

    Abstract. Little is known about maternal near-miss (MNM) due to unsafe abortion in Nigeria. We used the WHO criteria to identify near-miss events and the proportion due to unsafe abortion among women of childbearing age in eight large secondary and tertiary hospitals across the six geo-political zones. We also explored ...

  12. Predictors of missed appointments in patients referred for congenital or pediatric cardiac magnetic resonance

    Energy Technology Data Exchange (ETDEWEB)

    Lu, Jimmy C.; Dorfman, Adam L. [C.S. Mott Children's Hospital, Department of Pediatrics and Communicable Diseases, Division of Pediatric Cardiology, University of Michigan Health System, University of Michigan Congenital Heart Center, Ann Arbor, MI (United States); C.S. Mott Children's Hospital, Department of Radiology, University of Michigan Health System, Section of Pediatric Radiology, Ann Arbor, MI (United States); Lowery, Ray; Yu, Sunkyung [C.S. Mott Children's Hospital, Department of Pediatrics and Communicable Diseases, Division of Pediatric Cardiology, University of Michigan Health System, University of Michigan Congenital Heart Center, Ann Arbor, MI (United States); Ghadimi Mahani, Maryam [C.S. Mott Children's Hospital, Department of Radiology, University of Michigan Health System, Section of Pediatric Radiology, Ann Arbor, MI (United States); Agarwal, Prachi P. [University of Michigan Health System, Department of Radiology, Division of Cardiothoracic Radiology, Ann Arbor, MI (United States)

    2017-07-15

    Congenital cardiac magnetic resonance is a limited resource because of scanner and physician availability. Missed appointments decrease scheduling efficiency, have financial implications and represent missed care opportunities. To characterize the rate of missed appointments and identify modifiable predictors. This single-center retrospective study included all patients with outpatient congenital or pediatric cardiac MR appointments from Jan. 1, 2014, through Dec. 31, 2015. We identified missed appointments (no-shows or same-day cancellations) from the electronic medical record. We obtained demographic and clinical factors from the medical record and assessed socioeconomic factors by U.S. Census block data by patient ZIP code. Statistically significant variables (P<0.05) were included into a multivariable analysis. Of 795 outpatients (median age 18.5 years, interquartile range 13.4-27.1 years) referred for congenital cardiac MR, a total of 91 patients (11.4%) missed appointments; 28 (3.5%) missed multiple appointments. Reason for missed appointment could be identified in only 38 patients (42%), but of these, 28 (74%) were preventable or could have been identified prior to the appointment. In multivariable analysis, independent predictors of missed appointments were referral by a non-cardiologist (adjusted odds ratio [AOR] 5.8, P=0.0002), referral for research (AOR 3.6, P=0.01), having public insurance (AOR 2.1, P=0.004), and having scheduled cardiac MR from November to April (AOR 1.8, P=0.01). Demographic factors can identify patients at higher risk for missing appointments. These data may inform initiatives to limit missed appointments, such as targeted education of referring providers and patients. Further data are needed to evaluate the efficacy of potential interventions. (orig.)

  13. Protannotator: a semiautomated pipeline for chromosome-wise functional annotation of the "missing" human proteome.

    Science.gov (United States)

    Islam, Mohammad T; Garg, Gagan; Hancock, William S; Risk, Brian A; Baker, Mark S; Ranganathan, Shoba

    2014-01-03

    The chromosome-centric human proteome project (C-HPP) aims to define the complete set of proteins encoded in each human chromosome. The neXtProt database (September 2013) lists 20,128 proteins for the human proteome, of which 3831 human proteins (∼19%) are considered "missing" according to the standard metrics table (released September 27, 2013). In support of the C-HPP initiative, we have extended the annotation strategy developed for human chromosome 7 "missing" proteins into a semiautomated pipeline to functionally annotate the "missing" human proteome. This pipeline integrates a suite of bioinformatics analysis and annotation software tools to identify homologues and map putative functional signatures, gene ontology, and biochemical pathways. From sequential BLAST searches, we have primarily identified homologues from reviewed nonhuman mammalian proteins with protein evidence for 1271 (33.2%) "missing" proteins, followed by 703 (18.4%) homologues from reviewed nonhuman mammalian proteins and subsequently 564 (14.7%) homologues from reviewed human proteins. Functional annotations for 1945 (50.8%) "missing" proteins were also determined. To accelerate the identification of "missing" proteins from proteomics studies, we generated proteotypic peptides in silico. Matching these proteotypic peptides to ENCODE proteogenomic data resulted in proteomic evidence for 107 (2.8%) of the 3831 "missing" proteins, while evidence from a recent membrane proteomic study supported the existence of another 15 "missing" proteins. The chromosome-wise functional annotation of all "missing" proteins is freely available to the scientific community through our web server (http://biolinfo.org/protannotator).
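
    Generating proteotypic peptides in silico typically starts from a tryptic digest. A minimal sketch, assuming the standard rule of cleaving after K or R except before P and an illustrative length filter (the pipeline's actual rules are not given in the abstract):

```python
import re

def tryptic_peptides(sequence, min_len=6, max_len=30):
    """In silico tryptic digest: cleave after K/R unless followed by P,
    then keep peptides in a plausible detectable-length window."""
    fragments = re.split(r'(?<=[KR])(?!P)', sequence)
    return [p for p in fragments if min_len <= len(p) <= max_len]

print(tryptic_peptides("MKTAYIAKQRQISFVK"))  # ['TAYIAK', 'QISFVK']
```

    Peptides surviving the filter would then be checked for uniqueness against the proteome before being treated as proteotypic.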

  14. Do country-specific preference weights matter in the choice of mapping algorithms? The case of mapping the Diabetes-39 onto eight country-specific EQ-5D-5L value sets.

    Science.gov (United States)

    Lamu, Admassu N; Chen, Gang; Gamst-Klaussen, Thor; Olsen, Jan Abel

    2018-03-22

    To develop mapping algorithms that transform Diabetes-39 (D-39) scores onto EQ-5D-5L utility values for each of eight recently published country-specific EQ-5D-5L value sets, and to compare mapping functions across the EQ-5D-5L value sets. Data include 924 individuals with self-reported diabetes from six countries. The D-39 dimensions, age and gender were used as potential predictors of EQ-5D-5L utilities, which were scored using value sets from eight countries (England, the Netherlands, Spain, Canada, Uruguay, China, Japan and Korea). Ordinary least squares, generalised linear models, beta binomial regression, fractional regression, MM estimation and censored least absolute deviation were used to estimate the mapping algorithms. The optimal algorithm for each country-specific value set was selected primarily on the basis of normalised root mean square error (NRMSE), normalised mean absolute error (NMAE) and adjusted R². Cross-validation with a fivefold approach was conducted to test the generalizability of each model. The fractional regression model with a loglog link function consistently performed best across all country-specific value sets. For instance, the NRMSE (0.1282) and NMAE (0.0914) were the lowest, and adjusted R² the highest (52.5%), when the English value set was considered. Among the D-39 dimensions, energy and mobility was the only one that was consistently significant in all models. The D-39 can be mapped onto EQ-5D-5L utilities with good predictive accuracy. The fractional regression model, which is appropriate for handling bounded outcomes, outperformed the other candidate methods for all country-specific value sets. However, the regression coefficients differed, reflecting preference heterogeneity across countries.
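
    The NRMSE and NMAE used for model selection are simple to compute; a minimal sketch, normalising by the observed range (one common convention — the paper's exact normalisation is not stated in the abstract):

```python
import numpy as np

def nrmse(y_true, y_pred):
    # root mean square error, normalised by the range of observed values
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    rmse = np.sqrt(np.mean((y_true - y_pred) ** 2))
    return rmse / (y_true.max() - y_true.min())

def nmae(y_true, y_pred):
    # mean absolute error, normalised the same way
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    return np.mean(np.abs(y_true - y_pred)) / (y_true.max() - y_true.min())

# toy EQ-5D-5L utilities vs. mapped predictions (invented numbers)
print(nrmse([0.2, 0.5, 0.8, 1.0], [0.25, 0.45, 0.85, 0.95]))  # 0.0625
```

    Lower values of either measure indicate a better mapping; normalisation makes the errors comparable across value sets with different utility ranges.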

  15. A Machine Learning Recommender System to Tailor Preference Assessments to Enhance Person-Centered Care Among Nursing Home Residents.

    Science.gov (United States)

    Gannod, Gerald C; Abbott, Katherine M; Van Haitsma, Kimberly; Martindale, Nathan; Heppner, Alexandra

    2018-05-21

    Nursing homes (NHs) using the Preferences for Everyday Living Inventory (PELI-NH) to assess important preferences and provide person-centered care find the number of items (72) to be a barrier to using the assessment. Using a sample of n = 255 NH resident responses to the PELI-NH, we used the 16 preference items from MDS 3.0 Section F to develop a machine learning recommender system that identifies additional PELI-NH items that may be important to specific residents. Much like the Netflix recommender system, our system is based on collaborative filtering, whereby insights and predictions (i.e., filters) are created using the interests and preferences of many users. The algorithm identifies multiple sets of "you might also like" patterns, called association rules, based upon responses to the 16 MDS preferences, and recommends an additional set of preferences with a high likelihood of being important to a specific resident. In the evaluation of the combined apriori and logistic regression approach, we obtained high recall (the proportion of truly important preferences that were correctly predicted) of 80.2% and high precision (the proportion of predicted preferences that were in fact important) of 79.2%. The recommender system successfully provides guidance on how best to tailor the preference items asked of residents and can support preference capture in busy clinical environments, contributing to the feasibility of delivering person-centered care.
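
    The "you might also like" association rules can be sketched with a tiny apriori-style miner over preference responses (item names and thresholds below are invented; the published system combines apriori with logistic regression over the real MDS/PELI-NH items):

```python
from itertools import combinations

def mine_rules(transactions, min_support=0.5, min_conf=0.8):
    """Find single-antecedent rules A -> B among frequent item pairs."""
    n = len(transactions)
    items = sorted({i for t in transactions for i in t})
    support = {}
    for k in (1, 2):  # itemsets of size 1 and 2 only, for brevity
        for combo in combinations(items, k):
            s = sum(1 for t in transactions if set(combo) <= t) / n
            if s >= min_support:
                support[combo] = s
    rules = []
    for (a, b), s in ((c, v) for c, v in support.items() if len(c) == 2):
        for ant, cons in (((a,), b), ((b,), a)):
            if ant in support and s / support[ant] >= min_conf:
                rules.append((ant[0], cons, round(s / support[ant], 2)))
    return rules

# each set = preferences one resident endorsed (hypothetical items)
responses = [{'music', 'outdoors'}, {'music', 'outdoors', 'pets'},
             {'music', 'pets'}, {'music', 'outdoors'}]
print(mine_rules(responses))  # [('outdoors', 'music', 1.0), ('pets', 'music', 1.0)]
```

    Each rule reads "residents who endorsed A also endorsed B with this confidence", which is the basis for recommending untested PELI-NH items.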

  16. Bone scan as a screening test for missed fractures in severely injured patients.

    Science.gov (United States)

    Lee, K-J; Jung, K; Kim, J; Kwon, J

    2014-12-01

    In many cases, patients with severe blunt trauma have multiple fractures throughout the body. These fractures are often not detectable by history or physical examination, and their diagnosis can be delayed or even missed. A whole-body screening test for fractures is therefore required after initial management. We performed this study to evaluate the reliability of bone scans for detecting missed fractures in patients with multiple severe traumas, and we analyzed the causes of the fractures detected by bone scan. A bone scan is useful as a whole-body screening test for fractures in severe trauma patients who have passed the acute phase. We reviewed the electronic medical records of severe trauma patients who underwent a bone scan from September 2009 to December 2010. Demographic and medical data were compared and statistically analyzed to determine whether missed fractures were detected after bone scan in the two groups. A total of 382 patients with multiple traumas and an injury severity score (ISS) greater than 16 points visited the emergency room. One hundred and thirty-one patients underwent bone scan, and 81 patients were identified with missed fractures by bone scan. The most frequent location for missed fractures was the rib area (55 cases, 41.98%), followed by the extremities (42 cases, 32.06%). The missed fractures that required surgery or a splint were most common in the extremities (11 cases). In univariate analysis, higher ISS scores and mechanism of injury were related to the probability that missed fractures would be found with a bone scan; the ISS score was statistically significant in multivariate analysis. Bone scan is an effective method of detecting missed fractures among patients with multiple severe traumas. Level IV, retrospective study. Copyright © 2014 Elsevier Masson SAS. All rights reserved.

  17. Neighborhood Hypergraph Based Classification Algorithm for Incomplete Information System

    Directory of Open Access Journals (Sweden)

    Feng Hu

    2015-01-01

    The problem of classification in incomplete information systems is a hot issue in intelligent information processing. The hypergraph is a new intelligent method for machine learning. However, it is hard to process an incomplete information system with the traditional hypergraph, for two reasons: (1) the hyperedges are generated randomly in the traditional hypergraph model; (2) owing to the missing values, the existing methods are unsuitable for incomplete information systems. In this paper, we propose a novel classification algorithm for incomplete information systems based on the hypergraph model and rough set theory. First, we initialize the hypergraph. Second, we classify the training set by the neighborhood hypergraph. Third, under the guidance of rough sets, we replace the poor hyperedges. After that, we obtain a good classifier. The proposed approach is tested on 15 data sets from the UCI machine learning repository and compared with existing methods such as C4.5, SVM, Naive Bayes, and KNN. The experimental results show that the proposed algorithm has better performance in terms of Precision, Recall, AUC, and F-measure.
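
    The evaluation measures cited (Precision, Recall, F-measure) have standard definitions; a minimal sketch for a binary labelling (labels below are illustrative only):

```python
def precision_recall_f1(y_true, y_pred, positive=1):
    """Standard binary classification metrics."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p == positive)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t != positive and p == positive)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p != positive)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

print(precision_recall_f1([1, 1, 1, 0, 0], [1, 1, 0, 1, 0]))
```

    F-measure is the harmonic mean of precision and recall, so it rewards classifiers that balance the two rather than maximizing one at the expense of the other.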

  18. Missing mass from low-luminosity stars

    International Nuclear Information System (INIS)

    Hawkins, M.R.S.

    1986-01-01

    Results from a deep photometric survey for low-luminosity stars show a turnup to the luminosity function at faint magnitudes, and reopen the possibility that the missing mass in the solar neighbourhood is made up of stars after all. (author)

  19. Preferred and actual relative height among homosexual male partners vary with preferred dominance and sex role.

    Science.gov (United States)

    Valentova, Jaroslava Varella; Stulp, Gert; Třebický, Vít; Havlíček, Jan

    2014-01-01

    Previous research has shown repeatedly that human stature influences mate preferences and mate choice in heterosexuals. In general, it has been shown that tall men and average height women are most preferred by the opposite sex, and that both sexes prefer to be in a relationship where the man is taller than the woman. However, little is known about such partner preferences in homosexual individuals. Based on an online survey of a large sample of non-heterosexual men (N = 541), we found that the majority of men prefer a partner slightly taller than themselves. However, these preferences were dependent on the participant's own height, such that taller men preferred shorter partners, whereas shorter men preferred taller partners. We also examined whether height preferences predicted the preference for dominance and the adoption of particular sexual roles within a couple. Although a large proportion of men preferred to be in an egalitarian relationship with respect to preferred dominance (although not with respect to preferred sexual role), men that preferred a more dominant and more "active" sexual role preferred shorter partners, whereas those that preferred a more submissive and more "passive" sexual role preferred taller partners. Our results indicate that preferences for relative height in homosexual men are modulated by own height, preferred dominance and sex role, and do not simply resemble those of heterosexual women or men.

  2. Costs and consequences of automated algorithms versus manual grading for the detection of referable diabetic retinopathy.

    Science.gov (United States)

    Scotland, G S; McNamee, P; Fleming, A D; Goatman, K A; Philip, S; Prescott, G J; Sharp, P F; Williams, G J; Wykes, W; Leese, G P; Olson, J A

    2010-06-01

    To assess the cost-effectiveness of an improved automated grading algorithm for diabetic retinopathy against a previously described algorithm, and in comparison with manual grading. Efficacy of the alternative algorithms was assessed using a reference graded set of images from three screening centres in Scotland (1253 cases with observable/referable retinopathy and 6333 individuals with mild or no retinopathy). Screening outcomes and grading and diagnosis costs were modelled for a cohort of 180 000 people, with prevalence of referable retinopathy at 4%. Algorithm (b), which combines image quality assessment with detection algorithms for microaneurysms (MA), blot haemorrhages and exudates, was compared with a simpler algorithm (a) (using image quality assessment and MA/dot haemorrhage (DH) detection), and the current practice of manual grading. Compared with algorithm (a), algorithm (b) would identify an additional 113 cases of referable retinopathy for an incremental cost of £68 per additional case. Compared with manual grading, automated grading would be expected to identify between 54 and 123 fewer referable cases, for a grading cost saving between £3834 and £1727 per case missed. Extrapolation modelling over a 20-year time horizon suggests manual grading would cost between £25,676 and £267,115 per additional quality adjusted life year gained. Algorithm (b) is more cost-effective than the algorithm based on quality assessment and MA/DH detection. With respect to the value of introducing automated detection systems into screening programmes, automated grading operates within the recommended national standards in Scotland and is likely to be considered a cost-effective alternative to manual disease/no disease grading.
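
    The per-case figures above follow from a simple incremental cost-effectiveness ratio: the difference in cost divided by the difference in cases detected. A sketch with hypothetical programme totals chosen to reproduce the abstract's £68-per-additional-case figure (113 extra cases):

```python
def incremental_cost_per_case(cost_a, cost_b, cases_a, cases_b):
    """Incremental cost-effectiveness ratio: extra cost per extra case detected."""
    return (cost_b - cost_a) / (cases_b - cases_a)

# hypothetical totals: algorithm (b) costs 7,684 GBP more than (a)
# and detects 113 more referable cases -> 68 GBP per additional case
print(incremental_cost_per_case(100_000, 107_684, 1_000, 1_113))  # 68.0
```

    The same ratio, with quality-adjusted life years in the denominator, underlies the £25,676 to £267,115 per-QALY range quoted for manual grading.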

  3. Exact and approximate Fourier rebinning algorithms for the solution of the data truncation problem in 3-D PET.

    Science.gov (United States)

    Bouallègue, Fayçal Ben; Crouzet, Jean-François; Comtat, Claude; Fourcade, Marjolaine; Mohammadi, Bijan; Mariano-Goulart, Denis

    2007-07-01

    This paper presents an extended 3-D exact rebinning formula in Fourier space that leads to an iterative reprojection algorithm (iterative FOREPROJ), which enables the estimation of unmeasured oblique projection data on the basis of the whole set of measured data. To a first approximation, this analytical formula also leads to an extended Fourier rebinning equation that is the basis for an approximate reprojection algorithm (extended FORE). These algorithms were evaluated on numerically simulated 3-D positron emission tomography (PET) data for the solution of the truncation problem, i.e., the estimation of the missing portions of the oblique projection data, before the application of algorithms that require complete projection data, such as some rebinning methods (FOREX) or 3-D reconstruction algorithms (3DRP or direct Fourier methods). By taking advantage of the full 3-D data statistics, the iterative FOREPROJ reprojection provides a reliable alternative to the classical FOREPROJ method, which exploits only the low-statistics nonoblique data. It significantly improves the quality of the external reconstructed slices without loss of spatial resolution. As for the approximate extended FORE algorithm, it clearly exhibits limitations due to axial interpolations, but will require clinical studies with more realistic measured data in order to decide on its pertinence.

  4. Did you miss the eclipse?

    CERN Multimedia

    CERN Bulletin

    2015-01-01

    Not wanting to miss a moment of the beautiful celestial dance that played out on Friday, 20 March, Jens Roder of CERN’s PH group took to the Jura mountains, where he got several shots of the event. Here is a selection of his photos, which he was kind enough to share with the Bulletin and its readers.

  5. Patterns and Outcome of Missed Injuries in Egyptians Polytrauma Patients

    Directory of Open Access Journals (Sweden)

    Adel Hamed Elbaih

    2018-01-01

    Introduction: "Polytrauma" patients are at higher risk of complications and death than the sum of the expected mortality and morbidity of their individual injuries. The ideal goal in trauma resuscitation care is to identify and treat all injuries. Despite the advanced clinical and imaging technology available for the diagnosis and treatment of trauma patients, missed injuries still significantly affect modern trauma services and their outcomes. Aim: to improve outcome and determine the incidence and nature of missed injuries in polytrauma patients. Methods: this cross-sectional, prospective study included 600 polytraumatized patients admitted to Suez Canal University Hospital. Patients were first assessed and treated according to Advanced Trauma Life Support (ATLS) guidelines, with life-threatening conditions treated when present, and short-term outcome was followed up for 28 days. Results: the most common precipitating factor for missed injuries was clinical evaluation error due to inadequate diagnostic workup (42.9%), followed by deficiency in physical examination (35.7%), incomplete assessment due to patient instability (10.7%), and incorrect interpretation of imaging (10.7%). Rates of missed injuries were lower among patients arriving during the day (40.8%) than among night arrivals (59.2%). Conclusion: the incidence of missed injuries in this study was 9.0%, which is still high compared with many trauma centers; missed injuries mostly prolonged hospital stay and worsened the outcome of polytrauma patients.

  6. Genomic Variants Revealed by Invariably Missing Genotypes in Nelore Cattle.

    Directory of Open Access Journals (Sweden)

    Joaquim Manoel da Silva

    Full Text Available High density genotyping panels have been used in a wide range of applications. From population genetics to genome-wide association studies, this technology still offers the lowest cost and the most consistent solution for generating SNP data. However, regardless of the application, part of the generated data is always discarded from final datasets based on quality control criteria used to remove unreliable markers. Some discarded data consist of markers that failed to generate genotypes, labeled as missing genotypes. A subset of missing genotypes that occur in the whole population under study may be caused by technical issues but can also be explained by the presence of genomic variations that are in the vicinity of the assayed SNP and that prevent genotyping probes from annealing. The latter case may contain relevant information because these missing genotypes might be used to identify population-specific genomic variants. In order to assess which case is more prevalent, we used Illumina HD Bovine chip genotypes from 1,709 Nelore (Bos indicus) samples. We found 3,200 missing genotypes shared by the whole population. NGS re-sequencing data from 8 sires were used to verify the presence of genomic variations within the flanking regions of 81.56% of these missing genotypes. Furthermore, we discovered 3,300 novel SNPs/indels, 31% of which are located in genes that may affect traits of importance for the genetic improvement of cattle production.
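
The distinction the study draws, between sporadically missing calls and markers that are invariably missing across all samples, can be sketched in a few lines. The genotype matrix and the -1 missing-call code below are hypothetical illustrations, not the study's data:

```python
import numpy as np

# Hypothetical genotype matrix: rows = samples, columns = SNP markers.
# -1 marks a failed (missing) genotype call; 0/1/2 are allele counts.
calls = np.array([
    [0, -1, 2, 1],
    [1, -1, 2, -1],
    [2, -1, 0, 1],
])

# A marker missing in *every* sample is a candidate for harboring a
# population-specific variant under the probe, rather than a random
# technical failure (which would hit samples sporadically).
invariably_missing = np.all(calls == -1, axis=0)
candidate_markers = np.flatnonzero(invariably_missing)  # marker index 1 only
```

Markers flagged this way would then be followed up with re-sequencing of their flanking regions, as the abstract describes.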

  7. VIERS- User Preference Service

    Data.gov (United States)

    Department of Veterans Affairs — The Preferences service provides a means to store, retrieve, and manage user preferences. The service supports definition of enterprise wide preferences, as well as...

  8. Past and future implications of near-misses and their emotional consequences.

    Science.gov (United States)

    Zhang, Qiyuan; Covey, Judith

    2014-01-01

    The Reflection and Evaluation Model (REM) of comparative thinking predicts that temporal perspective could moderate people's emotional reactions to close counterfactuals following near-misses (Markman & McMullen, 2003). The experiments reported in this paper tested predictions derived from this theory by examining how people's emotional reactions to a near-miss at goal during a football match (Experiment 1) or a close score in a TV game show (Experiment 2) depended on the level of perceived future possibility. In support of the theory it was found that the presence of future possibility enhanced affective assimilation (e.g., if the near-miss occurred at the beginning of the game the players who had nearly scored were hopeful of future success) whereas the absence of future possibility enhanced affective contrast (e.g., if the near-miss occurred at the end of the game the players who had nearly scored were disappointed about missing an opportunity). Furthermore the experiments built upon our theoretical understanding by exploring the mechanisms which produce assimilation and contrast effects. In Experiment 1 we examined the incidence of present-oriented or future-oriented thinking, and in Experiment 2 we examined the mediating role of counterfactual thinking in the observed effect of proximity on emotions by testing whether stronger counterfactuals (measured using counterfactual probability estimates) produce bigger contrast and assimilation effects. While the results of these investigations generally support the REM, they also highlight the necessity to consider other psychological mechanisms (e.g., social comparison), in addition to counterfactual thinking, that might contribute to the emotional consequences of near-miss outcomes.

  9. A new honey bee mating optimization algorithm for non-smooth economic dispatch

    International Nuclear Information System (INIS)

    Niknam, Taher; Mojarrad, Hasan Doagou; Meymand, Hamed Zeinoddini; Firouzi, Bahman Bahmani

    2011-01-01

    The non-storage characteristics of electricity and increasing fuel costs worldwide call for the need to operate power systems more economically. Economic dispatch (ED) is one of the most important optimization problems in power systems. ED has the objective of dividing the power demand among the online generators economically while satisfying various constraints, so as to obtain the maximum usable power from minimum resources. To solve the static ED problem, the honey bee mating optimization (HBMO) algorithm can be used. The basic disadvantage of the original HBMO algorithm is that it may miss the optimum and provide a near-optimum solution in a limited runtime period. In order to avoid this shortcoming, we propose a new method that improves the mating process of HBMO and also combines the improved HBMO with a Chaotic Local Search (CLS), called Chaotic Improved Honey Bee Mating Optimization (CIHBMO). The proposed algorithm is used to solve ED problems taking into account nonlinear generator characteristics such as prohibited operating zones, multiple fuels, and valve-point loading effects. The CIHBMO algorithm is tested on three test systems and compared with other methods in the literature. Results show that the proposed method is efficient and fast for ED problems with non-smooth and non-continuous fuel cost functions. Moreover, the optimal power dispatch obtained by the algorithm is superior to previously reported results. -- Research highlights: →Economic dispatch. →Reducing electrical energy loss. →Saving electrical energy. →Optimal operation.
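
The Chaotic Local Search idea can be illustrated with a logistic-map perturbation around a candidate dispatch point. The single-unit cost function (with a rectified-sine valve-point ripple), bounds, and parameters below are illustrative assumptions, not the paper's actual CIHBMO formulation:

```python
import numpy as np

def fuel_cost(p):
    # Illustrative single-unit cost: quadratic term plus a valve-point
    # ripple (the absolute sine term makes the function non-smooth).
    return 0.01 * p**2 + 2.0 * p + 50.0 + abs(30.0 * np.sin(0.1 * (p - 10.0)))

def chaotic_local_search(p0, lo=10.0, hi=100.0, iters=200):
    """Perturb a candidate with a logistic-map chaotic sequence, keep improvements."""
    best_p, best_c = p0, fuel_cost(p0)
    z = 0.37  # chaotic variable; avoid the map's fixed points 0, 0.25, 0.5, 0.75
    for k in range(iters):
        z = 4.0 * z * (1.0 - z)                        # logistic map at r = 4
        radius = (hi - lo) * 0.05 * (1.0 - k / iters)  # shrinking search radius
        cand = np.clip(best_p + radius * (2.0 * z - 1.0), lo, hi)
        if fuel_cost(cand) < best_c:
            best_p, best_c = cand, fuel_cost(cand)
    return best_p, best_c

p, c = chaotic_local_search(55.0)
```

The chaotic sequence visits the neighborhood non-repetitively, which is the property such hybrids exploit to refine a near-optimum returned by the global search.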

  10. Diagnostic miss rate for colorectal cancer: an audit.

    Science.gov (United States)

    Than, Mary; Witherspoon, Jolene; Shami, Javed; Patil, Prachi; Saklani, Avanish

    2015-01-01

    Colorectal cancer (CRC) is a common cancer worldwide. While screening improves survival, avoiding delayed diagnosis in symptomatic patients is crucial. Computed tomographic colonography (CTC) or colonoscopy is recommended as the first-line investigation, and most societies recommend counseling patients undergoing colonoscopy about a miss rate of 5%. This audit evaluates the "miss rate" of colorectal investigations that led to diagnostic delay in symptomatic cases in a district general hospital in the United Kingdom. This is a retrospective review of 150 consecutive CRC cases presenting between August 2010 and July 2011. Evidence of bowel investigations done in the 3 years prior to diagnosis was obtained from computerized health records. Data regarding previous bowel investigations such as colonoscopy, CTC, double contrast barium enema (DCBE), and CT abdomen/pelvis were collected. 6.7% of cases were identified via the screening pathway while 93% were identified through the symptomatic pathway. 17% (26/150) of newly diagnosed CRC cases had been investigated in the preceding 3 years. Of these, 8% (12/150) had false negative results. The false negative rate for CRC diagnosis was 3.5% for colonoscopy (3/85), 6.7% for CTC (1/17), 9.4% for CT (5/53), and 26.7% for DCBE (4/15). Some patients had a missed diagnosis despite more than one diagnostic test. Time delay to diagnosis ranged from 21 to 456 days. 17% of patients diagnosed with CRC had been investigated in the previous 3 years. The higher miss rate of barium enema should preclude its use as a first-line modality to investigate CRC.

  11. On the meaningfulness of testing preference axioms in stated preference discrete choice experiments

    DEFF Research Database (Denmark)

    Hougaard, Jens Leth; Tjur, Carl Tue; Østerdal, Lars Peter Raahave

    2012-01-01

    A stream of studies on the evaluation of health care services and public goods has developed tests of the preference axioms of completeness and transitivity, and methods for detecting other preference phenomena such as instability, learning and tiredness effects, and random error, in stated preference discrete choice experiments. This methodological paper tries to identify the role of the preference axioms and other preference phenomena in the context of such experiments and discusses whether or how such axioms and phenomena can be subject to meaningful (statistical) tests.
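
The kind of transitivity check such tests build on can be sketched directly from stated pairwise choices: find ordered triples where a is chosen over b and b over c, yet a is not chosen over c. The choice data and item names below are hypothetical:

```python
from itertools import permutations

# Hypothetical stated pairwise choices: prefers[(a, b)] = True means a chosen over b.
prefers = {
    ("A", "B"): True,
    ("B", "C"): True,
    ("C", "A"): True,   # this choice closes an intransitive cycle
}

def chosen(a, b):
    # Look up a choice in either stored orientation.
    if (a, b) in prefers:
        return prefers[(a, b)]
    return not prefers[(b, a)]

def intransitive_triples(items):
    """List ordered triples (a, b, c) with a>b and b>c but not a>c."""
    bad = []
    for a, b, c in permutations(items, 3):
        if chosen(a, b) and chosen(b, c) and not chosen(a, c):
            bad.append((a, b, c))
    return bad

viol = intransitive_triples(["A", "B", "C"])
```

The paper's point is precisely that observing such violations in a discrete choice experiment may reflect random error or learning effects rather than genuinely intransitive preferences, which is what makes statistical testing of the axioms delicate.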

  12. Miss Estonia makes a video film / Evelyn Mikomägi ; interviewed by Valdo Jahilo

    Index Scriptorium Estoniae

    Mikomägi, Evelyn

    2001-01-01

    Miss Estonia 2000 Evelyn Mikomägi, together with her friend Sven-Olof "Svenne" Englund, made a documentary film about Miss Universe 2000, the world's largest beauty contest, held in Cyprus. They jointly own the company "Living Frames" in Sweden, whose main business is the production of 16 mm and 35 mm films.

  13. Network compensation for missing sensors

    Science.gov (United States)

    Ahumada, Albert J., Jr.; Mulligan, Jeffrey B.

    1991-01-01

    A network learning translation invariance algorithm to compute interpolation functions is presented. This algorithm with one fixed receptive field can construct a linear transformation compensating for gain changes, sensor position jitter, and sensor loss when there are enough remaining sensors to adequately sample the input images. However, when the images are undersampled and complete compensation is not possible, the algorithm needs to be modified. For moderate sensor losses, the algorithm works if the transformation weight adjustment is restricted to the weights to output units affected by the loss.

  14. What determines the preference for future living arrangements of middle-aged and older people in urban China?

    Science.gov (United States)

    Meng, Dijuan; Xu, Guihua; He, Ling; Zhang, Min; Lin, Dan

    2017-01-01

    Living arrangements are important to the elderly. However, it is common for elderly parents in urban China to not have a living situation that they consider ideal. An understanding of their preferences assists us in responding to the needs of the elderly as well as in anticipating future long-term care demands. The aim of this study is to provide a clear understanding of preferences for future living arrangements and their associated factors among middle-aged and older people in urban China. Data were extracted from the CHARLS 2011-2012 national baseline survey of middle-aged and elderly people. In the 2011 wave of the CHARLS, a total of 17,708 individual participants (10,069 main respondents and 7,638 spouses) were interviewed; 2,509 of the main respondents lived in urban areas. In this group, 41 people who were younger than 45 years old and 162 who had missing data in the variable "living arrangement preference" were excluded. Additionally, 42 people were excluded because they chose "other" for the variable "living arrangement preference" (which was a choice with no specific answer). Finally, a total of 2,264 participants were included in our study. The most popular preference for future living arrangements was living close to their children in the same community/neighborhoods, followed by living with adult children. The degree of community handicapped access, number of surviving children, age, marital status, access to community-based elderly care centers and number of years lived in the same community were significantly associated with the preferences for future living arrangements among the respondents. There is a trend towards preference for living near adult children in urban China. Additionally, age has a positive effect on preference for living close to their children. Considerations should be made in housing design and urban community development plans to fulfill older adults' expectations. In addition, increasing the accessibility of public facilities in

  15. Learning from near misses: from quick fixes to closing off the Swiss-cheese holes.

    Science.gov (United States)

    Jeffs, Lianne; Berta, Whitney; Lingard, Lorelei; Baker, G Ross

    2012-04-01

    The extent to which individuals in healthcare use near misses as learning opportunities remains poorly understood. Thus, an exploratory study was conducted to gain insight into the nature of, and contributing factors to, organisational learning from near misses in clinical practice. A constructivist grounded theory approach was employed which included semi-structured interviews with 24 participants (16 clinicians and 8 administrators) from a large teaching hospital in Canada. This study revealed three scenarios for the responses to near misses, the most common involved 'doing a quick fix' where clinicians recognised and corrected an error with no further action. The second scenario consisted of reporting near misses but not hearing back from management, which some participants characterised as 'going into a black hole'. The third scenario was 'closing off the Swiss-cheese holes', in which a reported near miss generated corrective action at an organisational level. Explanations for 'doing a quick fix' included the pervasiveness of near misses that cause no harm and fear associated with reporting the near miss. 'Going into a black hole' reflected managers' focus on operational duties and events that harmed patients. 'Closing off the Swiss-cheese holes' occurred when managers perceived substantial potential for harm and preventability. Where learning was perceived to occur, leaders played a pivotal role in encouraging near-miss reporting. To optimise learning, organisations will need to determine which near misses are appropriate to be responded to as 'quick fixes' and which ones require further action at the unit and corporate levels.

  16. Using Dynamic Multi-Task Non-Negative Matrix Factorization to Detect the Evolution of User Preferences in Collaborative Filtering.

    Directory of Open Access Journals (Sweden)

    Bin Ju

    Full Text Available Predicting what items will be selected by a target user in the future is an important function for recommendation systems. Matrix factorization techniques have been shown to achieve good performance on temporal rating-type data, but little is known about temporal item selection data. In this paper, we developed a unified model that combines Multi-task Non-negative Matrix Factorization and Linear Dynamical Systems to capture the evolution of user preferences. Specifically, user and item features are projected into latent factor space by factoring co-occurrence matrices into a common basis item-factor matrix and multiple factor-user matrices. Moreover, we represented both within and between relationships of multiple factor-user matrices using a state transition matrix to capture the changes in user preferences over time. The experiments show that our proposed algorithm outperforms the other algorithms on two real datasets, which were extracted from Netflix movies and Last.fm music. Furthermore, our model provides a novel dynamic topic model for tracking the evolution of the behavior of a user over time.

  17. Using Dynamic Multi-Task Non-Negative Matrix Factorization to Detect the Evolution of User Preferences in Collaborative Filtering.

    Science.gov (United States)

    Ju, Bin; Qian, Yuntao; Ye, Minchao; Ni, Rong; Zhu, Chenxi

    2015-01-01

    Predicting what items will be selected by a target user in the future is an important function for recommendation systems. Matrix factorization techniques have been shown to achieve good performance on temporal rating-type data, but little is known about temporal item selection data. In this paper, we developed a unified model that combines Multi-task Non-negative Matrix Factorization and Linear Dynamical Systems to capture the evolution of user preferences. Specifically, user and item features are projected into latent factor space by factoring co-occurrence matrices into a common basis item-factor matrix and multiple factor-user matrices. Moreover, we represented both within and between relationships of multiple factor-user matrices using a state transition matrix to capture the changes in user preferences over time. The experiments show that our proposed algorithm outperforms the other algorithms on two real datasets, which were extracted from Netflix movies and Last.fm music. Furthermore, our model provides a novel dynamic topic model for tracking the evolution of the behavior of a user over time.
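
The non-negative factorization at the core of this model can be sketched with plain Lee–Seung multiplicative updates. The dynamic state-transition component is omitted for brevity, and the item-by-user selection matrix below is random illustrative data, not the Netflix or Last.fm datasets:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical item-by-user selection counts.
V = rng.integers(0, 5, size=(6, 4)).astype(float)

def nmf(V, k=2, iters=500, eps=1e-9):
    """Plain NMF via Lee–Seung multiplicative updates: V ≈ W @ H, where
    W is the item-factor basis and H the factor-user weights (non-negative)."""
    n, m = V.shape
    W = rng.random((n, k)) + 0.1
    H = rng.random((k, m)) + 0.1
    for _ in range(iters):
        H *= (W.T @ V) / (W.T @ W @ H + eps)   # update factor-user weights
        W *= (V @ H.T) / (W @ H @ H.T + eps)   # update item-factor basis
    return W, H

W, H = nmf(V)
err = np.linalg.norm(V - W @ H) / np.linalg.norm(V)
```

In the paper's multi-task setting, several factor-user matrices (one per time slice) would share the common basis W, and a state-transition matrix would link the H matrices over time.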

  18. Absence of missing mass in clusters

    Energy Technology Data Exchange (ETDEWEB)

    Fesenko, B.I.

    1981-01-01

    The evaluation of the radial-velocity dispersion in clusters of galaxies is considered by using order statistics to reduce the distortions introduced by foreground and background galaxies and by large errors in velocity measurement. For four nearby clusters of galaxies, including the Coma cluster, velocity dispersions are obtained which are approximately four times lower than previously reported values. It is found that more remote galaxies exhibit the same tendency to a reduction in missing mass. A detailed examination of the statistical properties of galaxies in the Virgo cluster reveals that the cluster might not actually exist, but may be just a large excess in the number density of bright galaxies, possibly the result of an increase in the visibility of objects in the appropriate directions. It is concluded that the presence of a large amount of missing mass in clusters of galaxies is yet to be proven.

  19. Power calculations for likelihood ratio tests for offspring genotype risks, maternal effects, and parent-of-origin (POO) effects in the presence of missing parental genotypes when unaffected siblings are available.

    Science.gov (United States)

    Rampersaud, E; Morris, R W; Weinberg, C R; Speer, M C; Martin, E R

    2007-01-01

    Genotype-based likelihood-ratio tests (LRT) of association that examine maternal and parent-of-origin effects have been previously developed in the framework of log-linear and conditional logistic regression models. In the situation where parental genotypes are missing, the expectation-maximization (EM) algorithm has been incorporated in the log-linear approach to allow incomplete triads to contribute to the LRT. We present an extension to this model which we call the Combined_LRT that incorporates additional information from the genotypes of unaffected siblings to improve assignment of incompletely typed families to mating type categories, thereby improving inference of missing parental data. Using simulations involving a realistic array of family structures, we demonstrate the validity of the Combined_LRT under the null hypothesis of no association and provide power comparisons under varying levels of missing data and using sibling genotype data. We demonstrate the improved power of the Combined_LRT compared with the family-based association test (FBAT), another widely used association test. Lastly, we apply the Combined_LRT to a candidate gene analysis in Autism families, some of which have missing parental genotypes. We conclude that the proposed log-linear model will be an important tool for future candidate gene studies, for many complex diseases where unaffected siblings can often be ascertained and where epigenetic factors such as imprinting may play a role in disease etiology.
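
The way EM recovers information from incompletely typed data can be illustrated with the classic gene-counting EM for an ambiguous genotype class. This is a deliberately simplified stand-in, not the Combined_LRT itself, and the counts below are hypothetical:

```python
def gene_counting_em(n_dom, n_rec, iters=100):
    """Classic gene-counting EM: estimate the recessive-allele frequency q
    when AA and Aa phenotypes are indistinguishable (a missing-genotype
    problem), assuming Hardy-Weinberg proportions."""
    n = n_dom + n_rec
    q = 0.5  # initial guess
    for _ in range(iters):
        p = 1.0 - q
        # E-step: expected number of heterozygotes among the dominants.
        e_het = n_dom * (2 * p * q) / (p * p + 2 * p * q)
        # M-step: each aa individual carries two a alleles, each Aa one.
        q = (2 * n_rec + e_het) / (2 * n)
    return q

q_hat = gene_counting_em(n_dom=84, n_rec=16)  # converges to sqrt(16/100) = 0.4
```

The Combined_LRT works analogously at the level of parental mating types: unaffected-sibling genotypes sharpen the E-step's assignment of incompletely typed families to mating-type categories.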

  20. Missed medical appointment among hypertensive and diabetic ...

    African Journals Online (AJOL)

    Keywords: Missed medical appointments, Hypertensive, Diabetic outpatients, Medication adherence, ... 12 weeks, at 95 % confidence level and 5 % error margin, 300 hypertensive ... monthly income and health insurance status of respondents ...

  1. Applying modern error theory to the problem of missed injuries in trauma.

    Science.gov (United States)

    Clarke, D L; Gouveia, J; Thomson, S R; Muckart, D J J

    2008-06-01

    Modern theory of human error has helped reduce the incidence of adverse events in commercial aviation. It remains unclear whether these lessons are applicable to adverse events in trauma surgery. Missed injuries in a large metropolitan surgical service were prospectively audited and analyzed using a modern error taxonomy to define its applicability to trauma. A prospective database of all patients who experienced a missed injury during a 6-month period in a busy surgical service was maintained from July 2006. A missed injury was defined as one that escaped detection from primary assessment to operative exploration. Each missed injury was recorded and categorized. The clinical significance of the error and the level of physician responsible was documented. Errors were divided into planning or execution errors, acts of omission or commission, or violations, slips, and lapses. A total of 1,024 trauma patients were treated by the surgical services over the 6-month period from July to December 2006 in Pietermaritzburg. Thirty-four patients (2.5%) with missed injuries were identified during this period. There were 29 men and 5 women with an average age of 29 years (range: 21-67 years). In 14 patients, errors were related to inadequate clinical assessment. In 11 patients errors involved the misinterpretation of, or failure to respond to radiological imaging. There were 9 cases in which an injury was missed during surgical exploration. Overall mortality was 27% (9 patients). In 5 cases death was directly attributable to the missed injury. The level of the physicians making the error was consultant surgeon (4 cases), resident in training (15 cases), career medical officer (2 cases), referring doctor (6 cases). Missed injuries are uncommon and are made by all grades of staff. They are associated with increased morbidity and mortality. Understanding the pattern of these errors may help develop error-reduction strategies. Current taxonomies help in understanding the error

  2. Multiobjective optimization with a modified simulated annealing algorithm for external beam radiotherapy treatment planning

    International Nuclear Information System (INIS)

    Aubry, Jean-Francois; Beaulieu, Frederic; Sevigny, Caroline; Beaulieu, Luc; Tremblay, Daniel

    2006-01-01

    Inverse planning in external beam radiotherapy often requires a scalar objective function that incorporates importance factors to mimic the planner's preferences between conflicting objectives. Defining those importance factors is not straightforward, and frequently leads to an iterative process in which the importance factors become variables of the optimization problem. In order to avoid this drawback of inverse planning, optimization using algorithms more suited to multiobjective optimization, such as evolutionary algorithms, has been suggested. However, much inverse planning software, including one based on simulated annealing developed at our institution, does not include multiobjective-oriented algorithms. This work investigates the performance of a modified simulated annealing algorithm used to drive aperture-based intensity-modulated radiotherapy inverse planning software in a multiobjective optimization framework. For a few test cases involving gastric cancer patients, the use of this new algorithm leads to an increase in optimization speed of a little more than a factor of 2 over a conventional simulated annealing algorithm, while giving a close approximation of the solutions produced by a standard simulated annealing. A simple graphical user interface designed to facilitate the decision-making process that follows an optimization is also presented
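
The acceptance rule that lets simulated annealing escape local minima can be sketched generically. The objective function, cooling schedule, and parameters below are illustrative assumptions, not the modified aperture-based algorithm of the paper:

```python
import math, random

random.seed(42)

def simulated_annealing(cost, x0, step=0.5, t0=1.0, cooling=0.995, iters=2000):
    """Generic simulated annealing sketch: accept uphill moves with
    probability exp(-delta/T) so the search can escape local minima."""
    x, c = x0, cost(x0)
    best_x, best_c = x, c
    t = t0
    for _ in range(iters):
        cand = x + random.uniform(-step, step)
        delta = cost(cand) - c
        if delta < 0 or random.random() < math.exp(-delta / t):
            x, c = cand, cost(cand)
            if c < best_c:
                best_x, best_c = x, c
        t *= cooling  # geometric cooling schedule
    return best_x, best_c

# Multimodal test function with global minimum 0 at x = 0.
f = lambda x: x * x + 3.0 * math.sin(5.0 * x) ** 2
x, c = simulated_annealing(f, x0=4.0)
```

A multiobjective variant replaces the scalar `cost` with a rule over several objectives (for example, Pareto-dominance-based acceptance), which is what removes the need for hand-tuned importance factors.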

  3. Open critical area model and extraction algorithm based on the net flow-axis

    International Nuclear Information System (INIS)

    Wang Le; Wang Jun-Ping; Gao Yan-Hong; Xu Dan; Li Bo-Bo; Liu Shi-Gang

    2013-01-01

    In the integrated circuit manufacturing process, critical area extraction is a bottleneck for layout optimization and integrated circuit yield estimation. In this paper, we study the problem that missing-material defects may result in open-circuit faults. Combining mathematical morphology theory, we present a new computation model and a novel extraction algorithm for the open critical area based on the net flow-axis. Firstly, we find the net flow-axis for the different nets. Then, the net flow-edges based on the net flow-axis are obtained. Finally, we can extract the open critical area by mathematical morphology. Compared with existing methods, the nets need not be divided into horizontal and vertical nets, and the experimental results show that our model and algorithm can accurately extract the size of the open critical area and obtain the location information of the open-circuit critical area. (interdisciplinary physics and related areas of science and technology)
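
The morphology step can be illustrated with a basic binary dilation, which expands a rasterized net by the defect radius to obtain the band where a defect centre would break the wire. The grid, wire layout, and square structuring element below are hypothetical simplifications of the paper's flow-axis construction:

```python
import numpy as np

def dilate(mask, r):
    """Binary dilation with a (2r+1) x (2r+1) square structuring element,
    implemented by shifting and OR-ing (a basic morphology building block)."""
    out = np.zeros_like(mask)
    n, m = mask.shape
    for dy in range(-r, r + 1):
        for dx in range(-r, r + 1):
            ys = slice(max(0, dy), min(n, n + dy))
            xs = slice(max(0, dx), min(m, m + dx))
            ys_src = slice(max(0, -dy), min(n, n - dy))
            xs_src = slice(max(0, -dx), min(m, m - dx))
            out[ys, xs] |= mask[ys_src, xs_src]
    return out

# Hypothetical net raster: a missing-material defect of radius r whose
# centre falls inside the dilated band severs the wire (open circuit).
net = np.zeros((7, 7), dtype=bool)
net[3, 1:6] = True          # a horizontal wire segment
critical = dilate(net, 1)   # open critical area for defect radius 1
area = int(critical.sum())  # critical-area size in grid cells
```

Summing this area over a range of defect radii, weighted by the defect-size distribution, is how critical-area models feed into yield estimation.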

  4. HyRA: A Hybrid Recommendation Algorithm Focused on Smart POI. Ceutí as a Study Scenario

    Directory of Open Access Journals (Sweden)

    Joanna Alvarado-Uribe

    2018-03-01

    Full Text Available Nowadays, the Physical Web, together with the increased use of mobile devices, the Global Positioning System (GPS), and Social Networking Sites (SNS), has led users to share enriched information on the Web, such as their tourist experiences. Therefore, an area that has been significantly improved by using the contextual information provided by these technologies is tourism. In this way, the main goals of this work are to propose and develop an algorithm focused on the recommendation of a Smart Point of Interaction (Smart POI) for a specific user according to his/her preferences and the Smart POIs’ context. Hence, a novel Hybrid Recommendation Algorithm (HyRA) is presented, incorporating an aggregation operator into the user-based Collaborative Filtering (CF) algorithm as well as including the Smart POIs’ categories and geographical information. For the experimental phase, two real-world datasets were collected and preprocessed. In addition, one Smart POIs’ categories dataset was built. As a result, a dataset composed of 16 Smart POIs, another constituted by the explicit preferences of 200 respondents, and a last dataset integrating 13 Smart POIs’ categories are provided. The experimental results show that the recommendations suggested by HyRA are promising.
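
The user-based CF backbone that HyRA builds on can be sketched as a similarity-weighted average over the most similar users. The ratings matrix and the plain weighted aggregation below are illustrative assumptions, not HyRA's actual aggregation operator or datasets:

```python
import numpy as np

# Hypothetical ratings matrix (users x Smart POIs); 0 = not rated.
R = np.array([
    [5.0, 3.0, 0.0, 1.0],
    [4.0, 0.0, 0.0, 1.0],
    [1.0, 1.0, 0.0, 5.0],
    [1.0, 0.0, 0.0, 4.0],
])

def predict(R, user, item, k=2):
    """User-based CF: score an unrated item as the cosine-similarity-weighted
    average of the k most similar users' ratings of that item."""
    norms = np.linalg.norm(R, axis=1)
    sims = (R @ R[user]) / (norms * norms[user] + 1e-12)
    sims[user] = -1.0  # exclude the target user from the neighbourhood
    rated = np.flatnonzero(R[:, item] > 0)
    top = rated[np.argsort(sims[rated])[::-1][:k]]
    w = sims[top]
    return float(w @ R[top, item] / (w.sum() + 1e-12))

score = predict(R, user=1, item=1)  # users 0 and 2 have rated item 1
```

A hybrid such as HyRA would then combine this CF score with category and geographical signals through its aggregation operator.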

  5. Modelling non-ignorable missing data mechanisms with item response theory models

    NARCIS (Netherlands)

    Holman, Rebecca; Glas, Cornelis A.W.

    2005-01-01

    A model-based procedure for assessing the extent to which missing data can be ignored and handling non-ignorable missing data is presented. The procedure is based on item response theory modelling. As an example, the approach is worked out in detail in conjunction with item response data modelled

  6. Modelling non-ignorable missing-data mechanisms with item response theory models

    NARCIS (Netherlands)

    Holman, Rebecca; Glas, Cees A. W.

    2005-01-01

    A model-based procedure for assessing the extent to which missing data can be ignored and handling non-ignorable missing data is presented. The procedure is based on item response theory modelling. As an example, the approach is worked out in detail in conjunction with item response data modelled

  7. A Comparison of Missing-Data Procedures for Arima Time-Series Analysis

    Science.gov (United States)

    Velicer, Wayne F.; Colby, Suzanne M.

    2005-01-01

    Missing data are a common practical problem for longitudinal designs. Time-series analysis is a longitudinal method that involves a large number of observations on a single unit. Four different missing-data methods (deletion, mean substitution, mean of adjacent observations, and maximum likelihood estimation) were evaluated. Computer-generated…
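
Three of the four evaluated methods can be sketched in a few lines (maximum likelihood estimation is omitted for brevity); the short series below is hypothetical:

```python
import numpy as np

# Hypothetical time series with one missing observation (NaN).
y = np.array([2.0, 4.0, np.nan, 8.0, 6.0])
miss = np.isnan(y)

# Deletion: drop the missing time point entirely (distorts the spacing
# assumption of time-series models).
deleted = y[~miss]

# Mean substitution: replace with the mean of the observed values.
mean_sub = np.where(miss, np.nanmean(y), y)

# Mean of adjacent observations: average the two neighbours.
adj = y.copy()
i = int(np.flatnonzero(miss)[0])
adj[i] = (y[i - 1] + y[i + 1]) / 2.0
```

The comparison in the record concerns how these choices bias the estimated ARIMA parameters, which is why the simplest option (deletion) is often the worst for autocorrelated data.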

  8. Fuzzy Tracking and Control Algorithm for an SSVEP-Based BCI System

    Directory of Open Access Journals (Sweden)

    Yeou-Jiunn Chen

    2016-09-01

    Full Text Available Subjects with amyotrophic lateral sclerosis (ALS) consistently experience a decreasing quality of life because of this distinctive disease. Thus, a practical brain-computer interface (BCI) application can effectively help subjects with ALS to participate in communication or entertainment. In this study, a fuzzy tracking and control algorithm is proposed for developing a BCI remote control system. To represent the characteristics of the measured electroencephalography (EEG) signals after visual stimulation, a fast Fourier transform is applied to extract the EEG features. A self-developed fuzzy tracking algorithm quickly traces the changes of EEG signals. The accuracy and stability of a BCI system can be greatly improved by using a fuzzy control algorithm. Fifteen subjects were asked to attend a performance test of this BCI system. Canonical correlation analysis (CCA) was adopted for comparison with the proposed approach, and the average recognition rates are 96.97% and 94.49% for the proposed approach and CCA, respectively. The experimental results showed that the proposed approach is preferable to CCA. Overall, the proposed fuzzy tracking and control algorithm applied in the BCI system can profoundly help subjects with ALS to control air swimmer drone vehicles for entertainment purposes.
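
The FFT feature-extraction step for an SSVEP-based BCI can be sketched as picking the candidate stimulus frequency with the strongest spectral response. The sampling rate, epoch length, flicker frequencies, and synthetic signal below are illustrative assumptions, not the study's recording setup:

```python
import numpy as np

fs = 250.0                       # assumed sampling rate in Hz
t = np.arange(0, 2.0, 1.0 / fs)  # 2 s epoch
# Synthetic EEG-like signal: a 12 Hz SSVEP component buried in noise.
rng = np.random.default_rng(0)
x = np.sin(2 * np.pi * 12.0 * t) + 0.5 * rng.standard_normal(t.size)

# FFT feature extraction: magnitude spectrum, then pick the candidate
# flicker frequency with the largest response.
spectrum = np.abs(np.fft.rfft(x))
freqs = np.fft.rfftfreq(t.size, 1.0 / fs)
targets = [8.0, 10.0, 12.0, 15.0]   # hypothetical stimulus frequencies
power = [spectrum[np.argmin(np.abs(freqs - f))] for f in targets]
detected = targets[int(np.argmax(power))]
```

CCA, the comparison method in the record, instead correlates the multichannel EEG against sine-cosine reference templates at each candidate frequency.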

  9. On the Counterfactual Nature of Gambling Near‐misses: An Experimental Study

    Science.gov (United States)

    van Dijk, Eric; Li, Hong; Aitken, Michael; Clark, Luke

    2017-01-01

    Abstract Research on gambling near‐misses has shown that objectively equivalent outcomes can yield divergent emotional and motivational responses. The subjective processing of gambling outcomes is affected substantially by close but non‐obtained outcomes (i.e. counterfactuals). In the current paper, we investigate how different types of near‐misses influence self‐perceived luck and subsequent betting behavior in a wheel‐of‐fortune task. We investigate the counterfactual mechanism of these effects by testing the relationship with a second task measuring regret/relief processing. Across two experiments (Experiment 1, n = 51; Experiment 2, n = 104), we demonstrate that near‐wins (neutral outcomes that are close to a jackpot) decreased self‐perceived luck, whereas near‐losses (neutral outcomes that are close to a major penalty) increased luck ratings. The effects of near‐misses varied by near‐miss position (i.e. whether the spinner stopped just short of, or passed through, the counterfactual outcome), consistent with established distinctions between upward versus downward, and additive versus subtractive, counterfactual thinking. In Experiment 1, individuals who showed stronger counterfactual processing on the regret/relief task were more responsive to near‐wins and near‐losses on the wheel‐of‐fortune task. The effect of near‐miss position was attenuated when the anticipatory phase (i.e. the spin and deceleration) was removed in Experiment 2. Further differences were observed within the objective gains and losses, between “clear” and “narrow” outcomes. Taken together, these results help substantiate the counterfactual mechanism of near‐misses. © 2017 The Authors Journal of Behavioral Decision Making Published by John Wiley & Sons Ltd. PMID:29081596

  10. Towards Real-Time Detection of Gait Events on Different Terrains Using Time-Frequency Analysis and Peak Heuristics Algorithm.

    Science.gov (United States)

    Zhou, Hui; Ji, Ning; Samuel, Oluwarotimi Williams; Cao, Yafei; Zhao, Zheyi; Chen, Shixiong; Li, Guanglin

    2016-10-01

    Real-time detection of gait events can be applied as a reliable input to control drop-foot correction devices and lower-limb prostheses. Among the different sensors used to acquire the signals associated with walking for gait event detection, the accelerometer is considered a preferable sensor due to its convenience of use, small size, low cost, reliability, and low power consumption. Based on the acceleration signals, different algorithms have been proposed to detect toe off (TO) and heel strike (HS) gait events in previous studies. While these algorithms could achieve a relatively reasonable performance in gait event detection, they suffer from limitations such as poor real-time performance and reduced reliability on stair-ascent and stair-descent terrains. In this study, a new algorithm is proposed to detect the gait events on three walking terrains in real time, based on the analysis of acceleration jerk signals with a time-frequency method to obtain gait parameters, followed by the determination of the peaks of the jerk signals using peak heuristics. The performance of the newly proposed algorithm was evaluated with eight healthy subjects walking on level ground, up stairs, and down stairs. Our experimental results showed that the mean F1 scores of the proposed algorithm were above 0.98 for HS event detection and 0.95 for TO event detection on the three terrains. This indicates that the current algorithm is robust and accurate for gait event detection on different terrains. Findings from the current study suggest that the proposed method may be a preferable option in some applications such as drop-foot correction devices and leg prostheses.
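
The jerk-plus-peak-heuristics idea can be sketched as differentiating the acceleration and thresholding local maxima. The synthetic signal, sampling rate, and threshold below are illustrative assumptions, not the paper's tuned algorithm:

```python
import numpy as np

fs = 100.0  # assumed sampling rate in Hz
t = np.arange(0, 2.0, 1.0 / fs)
# Synthetic shank acceleration: one sharp bump per stride, standing in
# for the impact transient around heel strike.
acc = np.exp(-((t % 1.0) - 0.3) ** 2 / 0.001)

# Jerk = time derivative of acceleration; events show up as jerk peaks.
jerk = np.gradient(acc, 1.0 / fs)

def find_peaks(sig, thresh):
    """Simple peak heuristic: local maxima above a threshold."""
    idx = np.flatnonzero((sig[1:-1] > sig[:-2]) &
                         (sig[1:-1] >= sig[2:]) &
                         (sig[1:-1] > thresh)) + 1
    return idx

peaks = find_peaks(jerk, thresh=0.5 * jerk.max())  # one peak per bump
```

The paper's contribution is choosing the threshold and search windows adaptively per terrain from time-frequency gait parameters, rather than fixing them as done here.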

  11. What We’re Missing

    OpenAIRE

    Camille R. Whitney; Jing Liu

    2017-01-01

    For schools and teachers to help students develop knowledge and skills, students need to show up to class. Yet absenteeism is prevalent, especially in secondary schools. This study uses a rich data set tracking class attendance by day for over 50,000 middle and high school students from an urban district in academic years 2007–2008 through 2012–2013. Our results extend and modify the extant findings on absenteeism that have been based almost exclusively on full-day absenteeism, missing class-...

  12. 基于ISM的动态优先级调度算法%Dynamic Priority Schedule Algorithm Based on ISM

    Institute of Scientific and Technical Information of China (English)

    余祖峰; 蔡启先; 刘明

    2011-01-01

The Earliest Deadline First (EDF) algorithm, one of the main real-time scheduling algorithms of the embedded Linux operating system, cannot handle scheduling under overload. To address this, the paper draws on the SLAD and BACKSLASH algorithms, which perform well under heavy system load. Following the idea of the ISM algorithm, it proposes a dynamic priority scheduling algorithm that, according to the degree of overload within a time window, switches flexibly between the EDF and SLAD algorithms, thereby improving the system's scheduling efficiency under both normal load and overload. Test results on the real-time task Deadline Miss Ratio (DMR) demonstrate the improvement.
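The switching idea can be illustrated with a minimal sketch. The task fields, the value-density rule standing in for SLAD, and the overload threshold are all hypothetical; the point is only the DMR-driven switch between policies.

```python
def edf_pick(tasks):
    """Earliest Deadline First: run the ready task with the nearest deadline."""
    return min(tasks, key=lambda t: t["deadline"])

def slad_pick(tasks):
    """Stand-in for a load-aware policy: here, favour the highest value density."""
    return max(tasks, key=lambda t: t["value"] / t["exec_time"])

def dynamic_pick(tasks, recent_dmr, overload_threshold=0.1):
    """Use EDF under normal load; fall back to the load-aware policy when the
    observed deadline miss ratio signals overload."""
    if recent_dmr > overload_threshold:
        return slad_pick(tasks)
    return edf_pick(tasks)

tasks = [
    {"name": "A", "deadline": 10, "exec_time": 4, "value": 2},
    {"name": "B", "deadline": 20, "exec_time": 2, "value": 6},
]
print(dynamic_pick(tasks, recent_dmr=0.0)["name"])  # EDF chooses A
print(dynamic_pick(tasks, recent_dmr=0.5)["name"])  # overload: value density chooses B
```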

  13. Cognitive distortions and gambling near-misses in Internet Gaming Disorder: A preliminary study.

    Directory of Open Access Journals (Sweden)

    Yin Wu

Full Text Available Increased cognitive distortions (i.e. biased processing of chance, probability and skill) are a key psychopathological process in disordered gambling. The present study investigated state and trait aspects of cognitive distortions in 22 individuals with Internet Gaming Disorder (IGD) and 22 healthy controls. Participants completed the Gambling Related Cognitions Scale as a trait measure of cognitive distortions, and played a slot machine task delivering wins, near-misses and full-misses. Ratings of pleasure ("liking") and motivation to play ("wanting") were taken following the different outcomes, and gambling persistence was measured after a mandatory phase. IGD was associated with elevated trait cognitive distortions, in particular skill-oriented cognitions. On the slot machine task, the IGD group showed increased "wanting" ratings compared with control participants, while the two groups did not differ regarding their "liking" of the game. The IGD group displayed increased persistence on the slot machine task. Near-miss outcomes did not elicit stronger motivation to play compared to full-miss outcomes overall, and there was no group difference on this measure. However, a near-miss position effect was observed, such that near-misses stopping before the payline were rated as more motivating than near-misses that stopped after the payline, and this differentiation was attenuated in the IGD group, suggesting possible counterfactual thinking deficits in this group. These data provide preliminary evidence for increased incentive motivation and cognitive distortions in IGD, at least in the context of a chance-based gambling environment.

  14. Cognitive distortions and gambling near-misses in Internet Gaming Disorder: A preliminary study.

    Science.gov (United States)

    Wu, Yin; Sescousse, Guillaume; Yu, Hongbo; Clark, Luke; Li, Hong

    2018-01-01

    Increased cognitive distortions (i.e. biased processing of chance, probability and skill) are a key psychopathological process in disordered gambling. The present study investigated state and trait aspects of cognitive distortions in 22 individuals with Internet Gaming Disorder (IGD) and 22 healthy controls. Participants completed the Gambling Related Cognitions Scale as a trait measure of cognitive distortions, and played a slot machine task delivering wins, near-misses and full-misses. Ratings of pleasure ("liking") and motivation to play ("wanting") were taken following the different outcomes, and gambling persistence was measured after a mandatory phase. IGD was associated with elevated trait cognitive distortions, in particular skill-oriented cognitions. On the slot machine task, the IGD group showed increased "wanting" ratings compared with control participants, while the two groups did not differ regarding their "liking" of the game. The IGD group displayed increased persistence on the slot machine task. Near-miss outcomes did not elicit stronger motivation to play compared to full-miss outcomes overall, and there was no group difference on this measure. However, a near-miss position effect was observed, such that near-misses stopping before the payline were rated as more motivating than near-misses that stopped after the payline, and this differentiation was attenuated in the IGD group, suggesting possible counterfactual thinking deficits in this group. These data provide preliminary evidence for increased incentive motivation and cognitive distortions in IGD, at least in the context of a chance-based gambling environment.

  15. What's Missing? Anti-Racist Sex Education!

    Science.gov (United States)

    Whitten, Amanda; Sethna, Christabelle

    2014-01-01

    Contemporary sexual health curricula in Canada include information about sexual diversity and queer identities, but what remains missing is any explicit discussion of anti-racist sex education. Although there exists federal and provincial support for multiculturalism and anti-racism in schools, contemporary Canadian sex education omits crucial…

  16. Missing Strands? Dealing with Hair Loss

    Science.gov (United States)

… 2017. Hair loss is often associated with men and aging, but … or their treatments, and many other things cause hair loss. The most common type of hair loss is …

  17. The case of the missing third.

    Science.gov (United States)

    Robertson, Robin

    2005-01-01

    How is it that form arises out of chaos? In attempting to deal with this primary question, time and again a "Missing Third" is posited that lies between extremes. The problem of the "Missing Third" can be traced through nearly the entire history of thought. The form it takes, the problems that arise from it, the solutions suggested for resolving it, are each representative of an age. This paper traces the issue from Plato and Parmenides in the 4th--5th centuries, B.C.; to Neoplatonism in the 3rd--5th centuries; to Locke and Descartes in the 17th century; on to Berkeley and Kant in the 18th century; Fechner and Wundt in the 19th century; to behaviorism and Gestalt psychology, Jung, early in the 20th century, ethology and cybernetics later in the 20th century, then culminates late in the 20th century, with chaos theory.

  18. Color preferences are not universal.

    Science.gov (United States)

    Taylor, Chloe; Clifford, Alexandra; Franklin, Anna

    2013-11-01

    Claims of universality pervade color preference research. It has been argued that there are universal preferences for some colors over others (e.g., Eysenck, 1941), universal sex differences (e.g., Hurlbert & Ling, 2007), and universal mechanisms or dimensions that govern these preferences (e.g., Palmer & Schloss, 2010). However, there have been surprisingly few cross-cultural investigations of color preference and none from nonindustrialized societies that are relatively free from the common influence of global consumer culture. Here, we compare the color preferences of British adults to those of Himba adults who belong to a nonindustrialized culture in rural Namibia. British and Himba color preferences are found to share few characteristics, and Himba color preferences display none of the so-called "universal" patterns or sex differences. Several significant predictors of color preference are identified, such as cone-contrast between stimulus and background (Hurlbert & Ling, 2007), the valence of color-associated objects (Palmer & Schloss, 2010), and the colorfulness of the color. However, the relationship of these predictors to color preference was strikingly different for the two cultures. No one model of color preference is able to account for both British and Himba color preferences. We suggest that not only do patterns of color preference vary across individuals and groups but the underlying mechanisms and dimensions of color preference vary as well. The findings have implications for broader debate on the extent to which our perception and experience of color is culturally relative or universally constrained. PsycINFO Database Record (c) 2013 APA, all rights reserved.

  19. Missing mass of the Universe

    International Nuclear Information System (INIS)

    Mazure, A.

    1989-01-01

The first evidence for missing mass, or dark matter, dates from the 1930s. On one hand, Oort noted that in the solar neighbourhood the mass of the stars (inferred from star counts) cannot account for their observed velocities. On the other hand, observations of various galaxy condensations, such as the Coma cluster, suggested that they are genuinely bound systems and not merely statistical fluctuations. Under this assumption, however, Zwicky concluded that the velocity dispersion of galaxies in Coma required 100 times more mass than is contained in the galaxies. Since then, refined observations and analyses and a re-evaluation of the cosmic distance scale have reduced this factor, but the problem remains. It is particularly striking for spiral galaxies, where systematic observations of rotation curves lead to the inference of massive spherical halos. This dynamical evidence constitutes the first missing-mass problem. The second appeared with the development of Grand Unified Theories, for which the natural laboratory is the very early Universe. A consequence of these theories is that our Universe could be closed by exotic particles that interact only gravitationally.

  20. Known and missing left ventricular ejection fraction and survival in patients with heart failure

    DEFF Research Database (Denmark)

    Poppe, Katrina K; Squire, Iain B; Whalley, Gillian A

    2013-01-01

Treatment of patients with heart failure (HF) relies on measurement of LVEF. However, the extent to which EF is recorded varies markedly. We sought to characterize the patient group that is missing a measure of EF, and to explore the association between missing EF and outcome.

  1. Predicting appointment misses in hospitals using data analytics

    Science.gov (United States)

    Karpagam, Sylvia; Ma, Nang Laik

    2017-01-01

Background There has been growing attention over the last few years to non-attendance in hospitals and its clinical and economic consequences, and several studies have documented its various aspects. Project Predicting Appointment Misses (PAM) was started with the intention of predicting which patients would not come for appointments after making bookings. Methods Historic hospital appointment data, merged with a "distance from hospital" variable, were used to run logistic regression, support vector machine and recursive partitioning models to identify the variables contributing to missed appointments. Results Class-, time- and demographics-related variables have an effect on the target variable; however, prediction models may not perform effectively because this influence is very subtle. Previously assumed major contributors such as "age" and "distance" did not have a major effect on the target variable. Conclusions With the given data it is very difficult to make any moderately strong prediction of appointment misses. That said, with the help of the cut-off we are able to capture all of the appointment misses, in addition to capturing the actualized appointments. PMID:28567409

  2. Research on the time optimization model algorithm of Customer Collaborative Product Innovation

    Directory of Open Access Journals (Sweden)

    Guodong Yu

    2014-01-01

Full Text Available Purpose: To improve the efficiency of information sharing among the innovation agents of customer collaborative product innovation and to shorten the product design cycle, an improved genetic annealing algorithm for time optimization is presented. Design/methodology/approach: Based on an analysis of the objective relationships between the design tasks, the paper treats the schedule as a job-shop problem and proposes an improved genetic algorithm, based on niche technology, to solve it; a better collaborative innovation design schedule is thus obtained, improving efficiency. Finally, the proposed model and method were verified to be correct and effective through the collaborative innovation design of a certain type of mobile phone. Findings and Originality/value: An algorithm with clear advantages in searching capability and optimization efficiency for customer collaborative product innovation is proposed. To address the defects of the traditional genetic annealing algorithm, a niche genetic annealing algorithm is presented. First, it avoids the deletion of effective genes at the early search stage and guarantees the diversity of solutions. Second, adaptive two-point crossover and swap mutation strategies are introduced to overcome the long solving process and the tendency to converge to local minima caused by fixed crossover and mutation probabilities. Third, an elitist preservation strategy is adopted so that loss of the optimal solution is avoided and evolution is accelerated. Originality/value: The improved genetic simulated annealing algorithm overcomes defects such as the loss of effective genes early in the search, helping to shorten the calculation process and improve the accuracy of the convergence value. Moreover, it speeds up evolution and ensures the reliability of the optimal solution. Meanwhile, it has obvious advantages in efficiency of …
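The combination of elitist preservation with a simulated-annealing acceptance rule can be sketched on a toy one-dimensional problem. Everything here (the objective, population size, and cooling schedule) is illustrative and unrelated to the paper's scheduling model; niching and adaptive crossover are omitted for brevity.

```python
import math
import random

random.seed(1)

def fitness(x):
    return (x - 3.0) ** 2  # toy objective: minimise distance to 3

def anneal_ga(pop_size=20, generations=200, temp0=1.0, cooling=0.97):
    pop = [random.uniform(-10, 10) for _ in range(pop_size)]
    temp = temp0
    for _ in range(generations):
        pop.sort(key=fitness)
        elite = pop[0]           # elitist preservation: best survives unchanged
        children = [elite]
        while len(children) < pop_size:
            parent = random.choice(pop[: pop_size // 2])
            child = parent + random.gauss(0, 0.5)  # mutation
            delta = fitness(child) - fitness(parent)
            # Annealing acceptance: always take improvements, occasionally take
            # worse children to maintain diversity early in the search.
            if delta < 0 or random.random() < math.exp(-delta / temp):
                children.append(child)
            else:
                children.append(parent)
        pop = children
        temp *= cooling          # cooling schedule shrinks the acceptance window
    return min(pop, key=fitness)

best = anneal_ga()
```

Because the elite is copied forward each generation, the best solution can never be lost, which is exactly the failure mode the elitist strategy above is meant to prevent.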

  3. A preference for migration

    OpenAIRE

    Stark, Oded

    2007-01-01

    At least to some extent migration behavior is the outcome of a preference for migration. The pattern of migration as an outcome of a preference for migration depends on two key factors: imitation technology and migration feasibility. We show that these factors jointly determine the outcome of a preference for migration and we provide examples that illustrate how the prevalence and transmission of a migration-forming preference yield distinct migration patterns. In particular, the imitation of...

  4. The Home Care Crew Scheduling Problem: Preference-based visit clustering and temporal dependencies

    DEFF Research Database (Denmark)

    Rasmussen, Matias Sevel; Justesen, Tor Fog; Dohn, Anders Høeg

    2012-01-01

In the Home Care Crew Scheduling Problem a staff of home carers has to be assigned a number of visits to patients’ homes, such that the overall service level is maximised. The problem is a generalisation of the vehicle routing problem with time windows. Required travel time between visits and time windows of the visits must be respected. The challenge when assigning visits to home carers lies in the existence of soft preference constraints and in temporal dependencies between the start times of visits. We model the problem as a set partitioning problem with side constraints and develop an exact solution method that incorporates the soft preference constraints. The algorithm is tested both on real-life problem instances and on generated test instances inspired by realistic settings. The use of the specialised branching scheme on real-life problems is novel. The visit clustering decreases run times significantly, and only gives a loss …

  5. Assessing Women's Preferences and Preference Modeling for Breast Reconstruction Decision-Making.

    Science.gov (United States)

    Sun, Clement S; Cantor, Scott B; Reece, Gregory P; Crosby, Melissa A; Fingeret, Michelle C; Markey, Mia K

    2014-03-01

Women considering breast reconstruction must make challenging trade-offs among issues that often conflict. It may be useful to quantify possible outcomes using a single summary measure to aid a breast cancer patient in choosing a form of breast reconstruction. In this study, we used multiattribute utility theory to combine multiple objectives into a summary value using nine different preference models. We elicited the preferences of 36 women, aged 32 or older with no history of breast cancer, for the patient-reported outcome measures of breast satisfaction, psychosocial well-being, chest well-being, abdominal well-being, and sexual well-being as measured by the BREAST-Q, in addition to time lost to reconstruction and out-of-pocket cost. Participants ranked hypothetical breast reconstruction outcomes. We examined each multiattribute utility preference model and assessed how often each model agreed with participants' rankings. The median time required to assess preferences was 34 minutes. Agreement of the nine preference models with the participants ranged from 75.9% to 78.9%. None of the preference models performed significantly worse than the best-performing risk-averse multiplicative model, for which we hypothesize an average theoretical agreement of 94.6% once participant error is included. Agreement was significantly positively correlated with a more unequal distribution of weight given to the seven attributes. We recommend the risk-averse multiplicative model for modeling the preferences of patients considering different forms of breast reconstruction because it agreed most often with the participants in this study.
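A multiplicative multiattribute model of the kind evaluated above combines single-attribute utilities u_i and scaling constants k_i through a master constant K satisfying 1 + K = prod(1 + K*k_i). The sketch below is generic multiattribute utility theory, not the study's elicited model, and the three weights are hypothetical.

```python
def solve_master_k(k_weights):
    """Find K > 0 with 1 + K = prod(1 + K*k_i) by bisection.
    A positive root exists when the k_i sum to less than 1 (used here)."""
    def f(K):
        prod = 1.0
        for k in k_weights:
            prod *= 1.0 + K * k
        return prod - (1.0 + K)
    lo, hi = 1e-9, 1e6  # f(lo) < 0 and f(hi) > 0 when sum(k_weights) < 1
    for _ in range(200):  # bisection down to machine precision
        mid = (lo + hi) / 2
        if f(mid) > 0:
            hi = mid
        else:
            lo = mid
    return (lo + hi) / 2

def multiplicative_utility(k_weights, K, utilities):
    """Combine single-attribute utilities into one summary value in [0, 1]."""
    prod = 1.0
    for k, u in zip(k_weights, utilities):
        prod *= 1.0 + K * k * u
    return (prod - 1.0) / K

# Hypothetical scaling constants for three attributes (sum < 1)
k = [0.3, 0.2, 0.1]
K = solve_master_k(k)
best = multiplicative_utility(k, K, [1.0, 1.0, 1.0])   # all attributes at best level
worst = multiplicative_utility(k, K, [0.0, 0.0, 0.0])  # all attributes at worst level
```

By construction the summary value is 1 when every attribute is at its best level and 0 when every attribute is at its worst, which makes the K-equation a useful self-check.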

  6. Orthodontic Management of Congenitally Missing Maxillary Lateral Incisors: A Case Report

    OpenAIRE

    Paduano, Sergio; Cioffi, Iacopo; Rongo, Roberto; Cupo, Antonello; Bucci, Rosaria; Valletta, Rosa

    2014-01-01

    This case report describes the orthodontic treatment of a woman, aged 15 years, with permanent dentition, brachyfacial typology, with congenitally missing maxillary lateral incisors. Multibracket straightwire fixed appliance was used to open the space for dental implant placement, and treat the impaired occlusion. The missing lateral incisors were substituted with oral implants.

  7. Satisfaction in motion: Subsequent search misses are more likely in moving search displays.

    Science.gov (United States)

    Stothart, Cary; Clement, Andrew; Brockmole, James R

    2018-02-01

    People often conduct visual searches in which multiple targets are possible (e.g., medical X-rays can contain multiple abnormalities). In this type of search, observers are more likely to miss a second target after having found a first one (a subsequent search miss). Recent evidence has suggested that this effect may be due to a depletion of cognitive resources from tracking the identities and locations of found targets. Given that tracking moving objects is resource-demanding, would finding a moving target further increase the chances of missing a subsequent one? To address this question, we had participants search for one or more targets hidden among distractors. Subsequent search misses were more likely when the targets and distractors moved throughout the display than when they remained stationary. However, when the found targets were highlighted in a unique color, subsequent search misses were no more likely in moving displays. Together, these results suggest that the effect of movement is likely due to the increased cognitive demands of tracking moving targets. Overall, our findings reveal that activities that involve searching for moving targets (e.g., driving) are more susceptible to subsequent search misses than are those that involve searching for stationary targets (e.g., baggage screening).

  8. Experimental Evaluation of a Braille-Reading-Inspired Finger Motion Adaptive Algorithm.

    Science.gov (United States)

    Ulusoy, Melda; Sipahi, Rifat

    2016-01-01

Braille reading is a complex process involving intricate finger-motion patterns and finger-rubbing actions across Braille letters for the stimulation of appropriate nerves. Although Braille reading is performed by smoothly moving the finger from left to right, research shows that even fluent reading requires right-to-left movements of the finger, known as "reversals". Reversals are crucial as they not only enhance stimulation of the nerves for correctly reading the letters, but also allow one to re-read letters that were missed in the first pass. Moreover, it is known that reversals can be performed as often as every sentence and can start at any location in a sentence. Here, we report experimental results on the feasibility of an algorithm that can render a machine able to automatically adapt to the reversal gestures of one's finger. Through Braille-reading-analogous tasks, the algorithm was tested with thirty sighted subjects who volunteered in the study. We find that the finger motion adaptive algorithm (FMAA) is useful in achieving cooperation between the human finger and the machine. In the presence of FMAA, subjects' performance metrics associated with the tasks improved significantly, as supported by statistical analysis. In light of these encouraging results, preliminary experiments were carried out with five blind subjects with the aim of putting the algorithm to the test. Results obtained from carefully designed experiments showed that subjects' Braille reading accuracy in the presence of FMAA was more favorable than when FMAA was turned off. Utilization of FMAA in future-generation Braille reading devices thus holds strong promise.

  9. A Network-Based Algorithm for Clustering Multivariate Repeated Measures Data

    Science.gov (United States)

    Koslovsky, Matthew; Arellano, John; Schaefer, Caroline; Feiveson, Alan; Young, Millennia; Lee, Stuart

    2017-01-01

    The National Aeronautics and Space Administration (NASA) Astronaut Corps is a unique occupational cohort for which vast amounts of measures data have been collected repeatedly in research or operational studies pre-, in-, and post-flight, as well as during multiple clinical care visits. In exploratory analyses aimed at generating hypotheses regarding physiological changes associated with spaceflight exposure, such as impaired vision, it is of interest to identify anomalies and trends across these expansive datasets. Multivariate clustering algorithms for repeated measures data may help parse the data to identify homogeneous groups of astronauts that have higher risks for a particular physiological change. However, available clustering methods may not be able to accommodate the complex data structures found in NASA data, since the methods often rely on strict model assumptions, require equally-spaced and balanced assessment times, cannot accommodate missing data or differing time scales across variables, and cannot process continuous and discrete data simultaneously. To fill this gap, we propose a network-based, multivariate clustering algorithm for repeated measures data that can be tailored to fit various research settings. Using simulated data, we demonstrate how our method can be used to identify patterns in complex data structures found in practice.

  10. Algebraic dynamics algorithm: Numerical comparison with Runge-Kutta algorithm and symplectic geometric algorithm

    Institute of Scientific and Technical Information of China (English)

    WANG ShunJin; ZHANG Hua

    2007-01-01

Based on the exact analytical solution of ordinary differential equations, a truncation of the Taylor series of the exact solution to the Nth order leads to the Nth order algebraic dynamics algorithm. A detailed numerical comparison is presented with Runge-Kutta algorithm and symplectic geometric algorithm for 12 test models. The results show that the algebraic dynamics algorithm can better preserve both geometrical and dynamical fidelity of a dynamical system at a controllable precision, and it can solve the problem of algorithm-induced dissipation for the Runge-Kutta algorithm and the problem of algorithm-induced phase shift for the symplectic geometric algorithm.
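The idea of truncating the Taylor series of the exact solution can be illustrated on dx/dt = -x, whose derivatives at any point are known in closed form (x^(j) = (-1)^j x). This toy sketch shows only the truncation idea, not the paper's implementation, and the step size and orders are arbitrary.

```python
import math

def taylor_step(x, h, order):
    """One step of an Nth-order truncated-Taylor integrator for dx/dt = -x:
    x(t + h) = x(t) * sum_{j=0}^{N} (-h)^j / j!  (truncation of x(t)*exp(-h))."""
    return x * sum((-h) ** j / math.factorial(j) for j in range(order + 1))

def integrate(x0, h, steps, order):
    x = x0
    for _ in range(steps):
        x = taylor_step(x, h, order)
    return x

exact = math.exp(-1.0)                        # exact x(1) for x0 = 1
approx4 = integrate(1.0, 0.1, 10, order=4)    # 4th-order truncation
approx8 = integrate(1.0, 0.1, 10, order=8)    # higher order: smaller truncation error
```

Raising the truncation order shrinks the per-step error (of order h^(N+1)), which is the "controllable precision" referred to above.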

  11. Algebraic dynamics algorithm:Numerical comparison with Runge-Kutta algorithm and symplectic geometric algorithm

    Institute of Scientific and Technical Information of China (English)

    2007-01-01

    Based on the exact analytical solution of ordinary differential equations, a truncation of the Taylor series of the exact solution to the Nth order leads to the Nth order algebraic dynamics algorithm. A detailed numerical comparison is presented with Runge-Kutta algorithm and symplectic geometric algorithm for 12 test models. The results show that the algebraic dynamics algorithm can better preserve both geometrical and dynamical fidelity of a dynamical system at a controllable precision, and it can solve the problem of algorithm-induced dissipation for the Runge-Kutta algorithm and the problem of algorithm-induced phase shift for the symplectic geometric algorithm.

  12. Professional experience and traffic accidents/near-miss accidents among truck drivers.

    Science.gov (United States)

    Girotto, Edmarlon; Andrade, Selma Maffei de; González, Alberto Durán; Mesas, Arthur Eumann

    2016-10-01

    To investigate the relationship between the time working as a truck driver and the report of involvement in traffic accidents or near-miss accidents. A cross-sectional study was performed with truck drivers transporting products from the Brazilian grain harvest to the Port of Paranaguá, Paraná, Brazil. The drivers were interviewed regarding sociodemographic characteristics, working conditions, behavior in traffic and involvement in accidents or near-miss accidents in the previous 12 months. Subsequently, the participants answered a self-applied questionnaire on substance use. The time of professional experience as drivers was categorized in tertiles. Statistical analyses were performed through the construction of models adjusted by multinomial regression to assess the relationship between the length of experience as a truck driver and the involvement in accidents or near-miss accidents. This study included 665 male drivers with an average age of 42.2 (±11.1) years. Among them, 7.2% and 41.7% of the drivers reported involvement in accidents and near-miss accidents, respectively. In fully adjusted analysis, the 3rd tertile of professional experience (>22years) was shown to be inversely associated with involvement in accidents (odds ratio [OR] 0.29; 95% confidence interval [CI] 0.16-0.52) and near-miss accidents (OR 0.17; 95% CI 0.05-0.53). The 2nd tertile of professional experience (11-22 years) was inversely associated with involvement in accidents (OR 0.63; 95% CI 0.40-0.98). An evident relationship was observed between longer professional experience and a reduction in reporting involvement in accidents and near-miss accidents, regardless of age, substance use, working conditions and behavior in traffic. Copyright © 2016 Elsevier Ltd. All rights reserved.

  13. Ma_MISS on ExoMars: Mineralogical Characterization of the Martian Subsurface

    Science.gov (United States)

    De Sanctis, Maria Cristina; Altieri, Francesca; Ammannito, Eleonora; Biondi, David; De Angelis, Simone; Meini, Marco; Mondello, Giuseppe; Novi, Samuele; Paolinetti, Riccardo; Soldani, Massimo; Mugnuolo, Raffaele; Pirrotta, Simone; Vago, Jorge L.; Ma_MISS Team

    2017-07-01

    The Ma_MISS (Mars Multispectral Imager for Subsurface Studies) experiment is the visible and near infrared (VNIR) miniaturized spectrometer hosted by the drill system of the ExoMars 2020 rover. Ma_MISS will perform IR spectral reflectance investigations in the 0.4-2.2 μm range to characterize the mineralogy of excavated borehole walls at different depths (between 0 and 2 m). The spectral sampling is about 20 nm, whereas the spatial resolution over the target is 120 μm. Making use of the drill's movement, the instrument slit can scan a ring and build up hyperspectral images of a borehole. The main goal of the Ma_MISS instrument is to study the martian subsurface environment. Access to the martian subsurface is crucial to our ability to constrain the nature, timing, and duration of alteration and sedimentation processes on Mars, as well as habitability conditions. Subsurface deposits likely host and preserve H2O ice and hydrated materials that will contribute to our understanding of the H2O geochemical environment (both in the liquid and in the solid state) at the ExoMars 2020 landing site. The Ma_MISS spectral range and sampling capabilities have been carefully selected to allow the study of minerals and ices in situ before the collection of samples. Ma_MISS will be implemented to accomplish the following scientific objectives: (1) determine the composition of subsurface materials, (2) map the distribution of subsurface H2O and volatiles, (3) characterize important optical and physical properties of materials (e.g., grain size), and (4) produce a stratigraphic column that will inform with regard to subsurface geological processes. The Ma_MISS findings will help to refine essential criteria that will aid in our selection of the most interesting subsurface formations from which to collect samples.

  14. Orthodontic Management of Congenitally Missing Maxillary Lateral Incisors: A Case Report

    Directory of Open Access Journals (Sweden)

    Sergio Paduano

    2014-01-01

    Full Text Available This case report describes the orthodontic treatment of a woman, aged 15 years, with permanent dentition, brachyfacial typology, with congenitally missing maxillary lateral incisors. Multibracket straightwire fixed appliance was used to open the space for dental implant placement, and treat the impaired occlusion. The missing lateral incisors were substituted with oral implants.

  15. Social Media Use and the Fear of Missing out (FoMO) While Studying Abroad

    Science.gov (United States)

    Hetz, Patricia R.; Dawson, Christi L.; Cullen, Theresa A.

    2015-01-01

    Fear of Missing Out (FoMO) is a social construct that examines whether students are concerned that they are missing out on experiences that others are having, and we examined this relation to their concerns over missing activities in their home culture. This mixed-methods pilot study sought to determine how social media affects the study abroad…

  16. Dealing with missing data in a multi-question depression scale: a comparison of imputation methods

    Directory of Open Access Journals (Sweden)

    Stuart Heather

    2006-12-01

    Full Text Available Abstract Background Missing data present a challenge to many research projects. The problem is often pronounced in studies utilizing self-report scales, and literature addressing different strategies for dealing with missing data in such circumstances is scarce. The objective of this study was to compare six different imputation techniques for dealing with missing data in the Zung Self-reported Depression Scale (SDS). Methods 1580 participants from a surgical outcomes study completed the SDS. The SDS is a 20-question scale that respondents complete by circling a value of 1 to 4 for each question. The sum of the responses is calculated and respondents are classified as exhibiting depressive symptoms when their total score is over 40. Missing values were simulated by randomly selecting questions whose values were then deleted (a missing completely at random simulation). Additionally, a missing at random and a missing not at random simulation were completed. Six imputation methods were then considered: 1) multiple imputation (MI), 2) single regression, 3) individual mean, 4) overall mean, 5) participant's preceding response, and 6) random selection of a value from 1 to 4. For each method, the imputed mean SDS score and standard deviation were compared to the population statistics. The Spearman correlation coefficient, percent misclassified and the Kappa statistic were also calculated. Results When 10% of values are missing, all the imputation methods except random selection produce Kappa statistics greater than 0.80, indicating 'near perfect' agreement. MI produces the most valid imputed values with a high Kappa statistic (0.89), although both single regression and individual mean imputation also produced favorable results. As the percent of missing information increased to 30%, or when unbalanced missing data were introduced, MI maintained a high Kappa statistic. The individual mean and single regression methods produced Kappas in the 'substantial agreement' range.
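    The mean-based strategies compared in this record are simple enough to sketch directly. Below is a minimal illustration (with invented data: a hypothetical 200-respondent sample rather than the study's 1580) of individual-mean and overall-mean imputation under a missing-completely-at-random simulation on a 20-item, 1-to-4 scale with the score-over-40 cutoff described above:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Hypothetical illustration: a 20-item scale with responses 1-4, scores
    # summed, and respondents with total > 40 flagged as symptomatic
    # (mirroring the SDS cutoff described in the abstract).
    n_subjects, n_items = 200, 20
    data = rng.integers(1, 5, size=(n_subjects, n_items)).astype(float)
    true_flag = data.sum(axis=1) > 40

    # Missing-completely-at-random simulation: delete ~10% of values.
    mask = rng.random(data.shape) < 0.10
    observed = data.copy()
    observed[mask] = np.nan

    def impute_individual_mean(x):
        """Replace each respondent's missing items with their own item mean."""
        out = x.copy()
        row_means = np.nanmean(out, axis=1)
        rows, cols = np.where(np.isnan(out))
        out[rows, cols] = row_means[rows]
        return out

    def impute_overall_mean(x):
        """Replace missing items with the grand mean of all observed values."""
        out = x.copy()
        out[np.isnan(out)] = np.nanmean(out)
        return out

    for name, fn in [("individual mean", impute_individual_mean),
                     ("overall mean", impute_overall_mean)]:
        flag = fn(observed).sum(axis=1) > 40
        agreement = (flag == true_flag).mean()
        print(f"{name}: classification agreement = {agreement:.2f}")
    ```

    Multiple imputation and single regression would additionally require a model of inter-item dependence; the two mean-based rules shown here are the ones whose mechanics the abstract specifies fully.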

  17. Causal inference with missing exposure information: Methods and applications to an obstetric study.

    Science.gov (United States)

    Zhang, Zhiwei; Liu, Wei; Zhang, Bo; Tang, Li; Zhang, Jun

    2016-10-01

    Causal inference in observational studies is frequently challenged by the occurrence of missing data, in addition to confounding. Motivated by the Consortium on Safe Labor, a large observational study of obstetric labor practice and birth outcomes, this article focuses on the problem of missing exposure information in a causal analysis of observational data. This problem can be approached from different angles (i.e. missing covariates and causal inference), and useful methods can be obtained by drawing upon the available techniques and insights in both areas. In this article, we describe and compare a collection of methods based on different modeling assumptions, under standard assumptions for missing data (i.e. missing-at-random and positivity) and for causal inference with complete data (i.e. no unmeasured confounding and another positivity assumption). These methods involve three models: one for treatment assignment, one for the dependence of outcome on treatment and covariates, and one for the missing data mechanism. In general, consistent estimation of causal quantities requires correct specification of at least two of the three models, although there may be some flexibility as to which two models need to be correct. Such flexibility is afforded by doubly robust estimators adapted from the missing covariates literature and the literature on causal inference with complete data, and by a newly developed triply robust estimator that is consistent if any two of the three models are correct. The methods are applied to the Consortium on Safe Labor data and compared in a simulation study mimicking the Consortium on Safe Labor. © The Author(s) 2013.
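    The "consistent if either of two models is correct" property can be sketched with the standard doubly robust (AIPW) estimator for complete data that this record builds on. This is only an illustration of that baseline estimator, not the paper's triply robust extension; the data-generating process, the single confounder, and the true effect of 2.0 are all invented for the example:

    ```python
    import numpy as np
    from sklearn.linear_model import LinearRegression, LogisticRegression

    rng = np.random.default_rng(1)

    # Synthetic complete-data example (not the Consortium on Safe Labor data):
    # one confounder X, binary treatment A, outcome Y with true effect 2.0.
    n = 5000
    X = rng.normal(size=(n, 1))
    p_true = 1 / (1 + np.exp(-X[:, 0]))
    A = rng.binomial(1, p_true)
    Y = 2.0 * A + X[:, 0] + rng.normal(size=n)

    # Model 1: treatment assignment (propensity score).
    ps = LogisticRegression().fit(X, A).predict_proba(X)[:, 1]

    # Model 2: outcome regression, fit separately under treatment and control.
    mu1 = LinearRegression().fit(X[A == 1], Y[A == 1]).predict(X)
    mu0 = LinearRegression().fit(X[A == 0], Y[A == 0]).predict(X)

    # AIPW estimator: consistent if either the propensity model or the
    # outcome model is correctly specified (the doubly robust property).
    ate = (np.mean(A * (Y - mu1) / ps + mu1)
           - np.mean((1 - A) * (Y - mu0) / (1 - ps) + mu0))
    print(f"estimated average treatment effect: {ate:.2f}")  # close to 2.0
    ```

    The triply robust estimator described in the abstract adds a third model, for the missingness of the exposure, and retains consistency when any two of the three models are correct.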

  18. Obstetric near-miss and maternal mortality in maternity university hospital, Damascus, Syria: a retrospective study

    Directory of Open Access Journals (Sweden)

    Al Chamat Ahmad

    2010-10-01

    Full Text Available Abstract Background Investigating severe maternal morbidity (near-miss) is a newly recognised tool that identifies women at highest risk of maternal death and helps allocate resources, especially in low-income countries. This study aims to i. document the frequency and nature of maternal near-miss at hospital level in Damascus, capital of Syria, ii. evaluate the level of care at maternal life-saving emergency services by comparatively analysing near-misses and maternal mortalities. Methods Retrospective facility-based review of cases of near-miss and maternal mortality that took place in the years 2006-2007 at Damascus Maternity University Hospital, Syria. Near-miss cases were defined based on disease-specific criteria (Filippi 2005) including: haemorrhage, hypertensive disorders in pregnancy, dystocia, infection and anaemia. Main outcomes included maternal mortality ratio (MMR), maternal near-miss ratio (MNMR), mortality indices and proportion of near-miss cases and mortality cases to hospital admissions. Results There were 28,025 deliveries, 15 maternal deaths and 901 near-miss cases. The study showed a MNMR of 32.9/1000 live births, a MMR of 54.8/100,000 live births and a relatively low mortality index of 1.7%. Hypertensive disorders (52%) and haemorrhage (34%) were the top causes of near-misses. Late pregnancy haemorrhage was the leading cause of maternal mortality (60%), while sepsis had the highest mortality index (7.4%). Most cases (93%) were referred in critical conditions from other facilities, namely traditional birth attendants' homes (67%), primary (5%) and secondary (10%) healthcare units and private practices (11%). 26% of near-miss cases were admitted to the Intensive Care Unit (ICU). Conclusion Near-miss analyses provide valuable information on obstetric care. The study highlights the need to improve antenatal care, which would help early identification of high-risk pregnancies. It also emphasises the importance of both: developing protocols to
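    The study's summary indicators can be recomputed from the counts it reports. Note one assumption: the abstract gives 28,025 deliveries but states its ratios per live births, so the live-birth denominator below is inferred from the reported ratios rather than taken from the abstract:

    ```python
    # Counts reported in the abstract.
    maternal_deaths = 15
    near_misses = 901

    # Inferred, not reported: the live-birth denominator implied by
    # MMR = 54.8 per 100,000 live births (15 / 0.000548 ≈ 27,372).
    live_births = 27_372

    mnmr = near_misses / live_births * 1_000        # near-misses per 1000 live births
    mmr = maternal_deaths / live_births * 100_000   # deaths per 100,000 live births
    # Mortality index: deaths as a share of deaths plus near-misses. This
    # gives ~1.6%; the abstract's 1.7% likely reflects rounding or the
    # deaths/near-misses variant of the definition (15/901 ≈ 1.7%).
    mortality_index = maternal_deaths / (maternal_deaths + near_misses) * 100

    print(f"MNMR: {mnmr:.1f}/1000 live births")        # 32.9, as reported
    print(f"MMR: {mmr:.1f}/100,000 live births")       # 54.8, as reported
    print(f"mortality index: {mortality_index:.1f}%")
    ```

    A low mortality index means most women who reached a life-threatening condition survived, which is why the abstract reads it as a marker of the quality of emergency obstetric care.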

  19. Obstetric near-miss and maternal mortality in maternity university hospital, Damascus, Syria: a retrospective study.

    Science.gov (United States)

    Almerie, Yara; Almerie, Muhammad Q; Matar, Hosam E; Shahrour, Yasser; Al Chamat, Ahmad Abo; Abdulsalam, Asmaa

    2010-10-19

    Investigating severe maternal morbidity (near-miss) is a newly recognised tool that identifies women at highest risk of maternal death and helps allocate resources, especially in low-income countries. This study aims to i. document the frequency and nature of maternal near-miss at hospital level in Damascus, capital of Syria, ii. evaluate the level of care at maternal life-saving emergency services by comparatively analysing near-misses and maternal mortalities. Retrospective facility-based review of cases of near-miss and maternal mortality that took place in the years 2006-2007 at Damascus Maternity University Hospital, Syria. Near-miss cases were defined based on disease-specific criteria (Filippi 2005) including: haemorrhage, hypertensive disorders in pregnancy, dystocia, infection and anaemia. Main outcomes included maternal mortality ratio (MMR), maternal near-miss ratio (MNMR), mortality indices and proportion of near-miss cases and mortality cases to hospital admissions. There were 28,025 deliveries, 15 maternal deaths and 901 near-miss cases. The study showed a MNMR of 32.9/1000 live births, a MMR of 54.8/100,000 live births and a relatively low mortality index of 1.7%. Hypertensive disorders (52%) and haemorrhage (34%) were the top causes of near-misses. Late pregnancy haemorrhage was the leading cause of maternal mortality (60%), while sepsis had the highest mortality index (7.4%). Most cases (93%) were referred in critical conditions from other facilities, namely traditional birth attendants' homes (67%), primary (5%) and secondary (10%) healthcare units and private practices (11%). 26% of near-miss cases were admitted to the Intensive Care Unit (ICU). Near-miss analyses provide valuable information on obstetric care. The study highlights the need to improve antenatal care, which would help early identification of high-risk pregnancies. 
    It also emphasises the importance of both: developing protocols to prevent/manage post-partum haemorrhage and training health

  20. The Algorithm for Algorithms: An Evolutionary Algorithm Based on Automatic Designing of Genetic Operators

    Directory of Open Access Journals (Sweden)

    Dazhi Jiang

    2015-01-01

    Full Text Available At present there is a wide range of evolutionary algorithms available to researchers and practitioners. Despite the great diversity of these algorithms, virtually all of them share one feature: they have been manually designed. A fundamental question is "are there any algorithms that can design evolutionary algorithms automatically?" A more complete formulation of the question is "can a computer construct an algorithm which will generate algorithms according to the requirements of a problem?" In this paper, a novel evolutionary algorithm based on automatic designing of genetic operators is presented to address these questions. The resulting algorithm not only explores solutions in the problem space, as most traditional evolutionary algorithms do, but also automatically generates genetic operators in the operator space. To verify the performance of the proposed algorithm, comprehensive experiments on 23 well-known benchmark optimization problems were conducted. The results show that the proposed algorithm can outperform the standard differential evolution algorithm in terms of convergence speed and solution accuracy, which suggests that algorithms designed automatically by computers can compete with those designed by human beings.
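    The idea of searching an operator space alongside the problem space can be illustrated with a deliberately tiny sketch. This is not the paper's algorithm: here the "operator" is reduced to a single self-adapted mutation step size carried by each individual, a classic self-adaptation scheme, and the benchmark is the sphere function rather than the paper's 23-problem suite:

    ```python
    import random

    def sphere(x):
        """Benchmark objective: minimize the sum of squares (optimum at 0)."""
        return sum(v * v for v in x)

    def evolve(dim=5, pop_size=20, generations=200, seed=0):
        rng = random.Random(seed)
        # Each individual carries its own operator parameter (a step size),
        # so selection acts on operators as well as on solutions.
        pop = [([rng.uniform(-5, 5) for _ in range(dim)], 1.0)
               for _ in range(pop_size)]
        for _ in range(generations):
            offspring = []
            for sol, step in pop:
                # Mutate the operator itself, then apply it to the solution.
                new_step = max(1e-6, step * rng.choice([0.8, 1.0, 1.25]))
                child = [v + rng.gauss(0, new_step) for v in sol]
                offspring.append((child, new_step))
            # Elitist (mu + lambda) selection on the combined population.
            pop = sorted(pop + offspring, key=lambda ind: sphere(ind[0]))[:pop_size]
        return pop[0]

    best, best_step = evolve()
    print(f"best fitness: {sphere(best):.4g}, final step size: {best_step:.4g}")
    ```

    Step sizes that produce improving offspring survive selection along with those offspring, so the search adapts its own operator as it converges; the paper generalizes this idea from a single numeric parameter to automatically constructed genetic operators.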