WorldWideScience

Sample records for method outperforms previously

  1. Reciprocity Outperforms Conformity to Promote Cooperation.

    Science.gov (United States)

    Romano, Angelo; Balliet, Daniel

    2017-10-01

    Evolutionary psychologists have proposed two processes that could give rise to the pervasiveness of human cooperation observed among individuals who are not genetically related: reciprocity and conformity. We tested whether reciprocity outperformed conformity in promoting cooperation, especially when these psychological processes would promote a different cooperative or noncooperative response. To do so, across three studies, we observed participants' cooperation with a partner after learning (a) that their partner had behaved cooperatively (or not) on several previous trials and (b) that their group members had behaved cooperatively (or not) on several previous trials with that same partner. Although we found that people both reciprocate and conform, reciprocity has a stronger influence on cooperation. Moreover, we found that conformity can be partly explained by a concern about one's reputation-a finding that supports a reciprocity framework.

  2. Bayesian methods outperform parsimony but at the expense of precision in the estimation of phylogeny from discrete morphological data.

    Science.gov (United States)

    O'Reilly, Joseph E; Puttick, Mark N; Parry, Luke; Tanner, Alastair R; Tarver, James E; Fleming, James; Pisani, Davide; Donoghue, Philip C J

    2016-04-01

    Different analytical methods can yield competing interpretations of evolutionary history and, currently, there is no definitive method for phylogenetic reconstruction using morphological data. Parsimony has been the primary method for analysing morphological data, but there has been a resurgence of interest in the likelihood-based Mk-model. Here, we test the performance of the Bayesian implementation of the Mk-model relative to both equal and implied-weight implementations of parsimony. Using simulated morphological data, we demonstrate that the Mk-model outperforms equal-weights parsimony in terms of topological accuracy, and implied-weights performs the most poorly. However, the Mk-model produces phylogenies that have less resolution than parsimony methods. This difference in the accuracy and precision of parsimony and Bayesian approaches to topology estimation needs to be considered when selecting a method for phylogeny reconstruction. © 2016 The Authors.

  3. Stochastic gradient ascent outperforms gamers in the Quantum Moves game

    Science.gov (United States)

    Sels, Dries

    2018-04-01

    In a recent work on quantum state preparation, Sørensen and co-workers [Nature (London) 532, 210 (2016), 10.1038/nature17620] explore the possibility of using video games to help design quantum control protocols. The authors present a game called "Quantum Moves" (https://www.scienceathome.org/games/quantum-moves/) in which gamers have to move an atom from A to B by means of optical tweezers. They report that, "players succeed where purely numerical optimization fails." Moreover, by harnessing the player strategies, they can "outperform the most prominent established numerical methods." The aim of this Rapid Communication is to analyze the problem in detail and show that those claims are untenable. In fact, without any prior knowledge and starting from a random initial seed, a simple stochastic local optimization method finds near-optimal solutions which outperform all players. Counterdiabatic driving can even be used to generate protocols without resorting to numeric optimization. The analysis results in an accurate analytic estimate of the quantum speed limit which, apart from zero-point motion, is shown to be entirely classical in nature. The latter might explain why gamers are reasonably good at the game. A simple modification of the BringHomeWater challenge is proposed to test this hypothesis.
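
    The "simple stochastic local optimization method" mentioned above is, at its core, a random-perturbation hill climb. Below is a minimal, self-contained sketch of that generic idea; the `fidelity` objective and all parameters are illustrative stand-ins, not the paper's actual cost function.

```python
import numpy as np

# Minimal sketch (not the paper's code): stochastic local optimization of a
# discretized control protocol u[0..T-1]. `fidelity` is a made-up objective;
# in the paper it would be the state-transfer fidelity of the tweezer protocol.
def fidelity(u):
    target = np.sin(np.linspace(0, np.pi, u.size))  # hypothetical optimum
    return -np.sum((u - target) ** 2)

def stochastic_hill_climb(n_steps=51, n_iter=20000, sigma=0.1, seed=0):
    rng = np.random.default_rng(seed)
    u = rng.uniform(-1, 1, n_steps)                       # random initial seed
    best = fidelity(u)
    for _ in range(n_iter):
        trial = u + sigma * rng.standard_normal(n_steps)  # local perturbation
        f = fidelity(trial)
        if f > best:                                      # keep improving moves only
            u, best = trial, f
    return u, best

u_opt, f_opt = stochastic_hill_climb()
print(f"best objective: {f_opt:.4f}")
```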

  4. Smiling on the Inside: The Social Benefits of Suppressing Positive Emotions in Outperformance Situations.

    Science.gov (United States)

    Schall, Marina; Martiny, Sarah E; Goetz, Thomas; Hall, Nathan C

    2016-05-01

    Although expressing positive emotions is typically socially rewarded, in the present work, we predicted that people suppress positive emotions and thereby experience social benefits when outperformed others are present. We tested our predictions in three experimental studies with high school students. In Studies 1 and 2, we manipulated the type of social situation (outperformance vs. non-outperformance) and assessed suppression of positive emotions. In both studies, individuals reported suppressing positive emotions more in outperformance situations than in non-outperformance situations. In Study 3, we manipulated the social situation (outperformance vs. non-outperformance) as well as the videotaped person's expression of positive emotions (suppression vs. expression). The findings showed that when outperforming others, individuals were indeed evaluated more positively when they suppressed rather than expressed their positive emotions, and demonstrate the importance of the specific social situation with respect to the effects of suppression. © 2016 by the Society for Personality and Social Psychology, Inc.

  5. Female Chess Players Outperform Expectations When Playing Men.

    Science.gov (United States)

    Stafford, Tom

    2018-03-01

    Stereotype threat has been offered as a potential explanation of differential performance between men and women in some cognitive domains. Questions remain about the reliability and generality of the phenomenon. Previous studies have found that stereotype threat is activated in female chess players when they are matched against male players. I used data from over 5.5 million games of international tournament chess and found no evidence of a stereotype-threat effect. In fact, female players outperform expectations when playing men. Further analysis showed no influence of degree of challenge, player age, or prevalence of female role models in national chess leagues on differences in performance when women play men versus when they play women. Though this analysis contradicts one specific mechanism of influence of gender stereotypes, the persistent differences between male and female players suggest that systematic factors do exist and remain to be uncovered.
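
    The "expectations" in this record are the expected scores implied by players' Elo ratings, the standard rating system in international chess. For reference, a minimal computation of the Elo expected score (the ratings below are made up, not from the study):

```python
# Standard Elo expected score: the baseline against which "outperforming
# expectations" is measured. Ratings are illustrative only.
def elo_expected(rating_a: float, rating_b: float) -> float:
    return 1.0 / (1.0 + 10 ** ((rating_b - rating_a) / 400.0))

print(f"{elo_expected(2000, 2100):.3f}")  # ~0.36 expected score vs a stronger player
```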

  6. Do Private Firms Outperform SOE Firms after Going Public in China Given their Different Governance Characteristics?

    Directory of Open Access Journals (Sweden)

    Shenghui Tong

    2013-06-01

    This study examines the characteristics of board structure that affect Chinese public firms' financial performance. Using a sample of 871 firms with 699 observations of previously private firms and 1,914 observations of previously state-owned enterprise (SOE) firms, we investigate the differences in corporate governance between publicly listed firms that used to be pure private firms before going public and listed firms that used to be SOEs before their initial public offerings (IPOs). Our main finding is that previously private firms outperform previously SOE firms in China after IPOs. In the wake of becoming listed firms, previously SOE firms might be faced with difficulties adjusting to professional business practices to build and extend competitive advantages. In addition, favorable policies and assistance from the government might have triggered complacency in SOE firms, especially in the early years after listing. On the other hand, professional savvy and acumen, combined with efficiency and a favorable business climate created by the government, have probably led the previously private firms to improve their value more strongly and quickly.

  7. Weak-value measurements can outperform conventional measurements

    International Nuclear Information System (INIS)

    Magaña-Loaiza, Omar S; Boyd, Robert W; Harris, Jérémie; Lundeen, Jeff S

    2017-01-01

    In this paper we provide a simple, straightforward example of a specific situation in which weak-value amplification (WVA) clearly outperforms conventional measurement in determining the angular orientation of an optical component. We also offer a perspective reconciling the views of some theorists, who claim WVA to be inherently sub-optimal for parameter estimation, with the perspective of the many experimentalists and theorists who have used the procedure to successfully access otherwise elusive phenomena. (invited comment)

  8. Do bilinguals outperform monolinguals?

    Directory of Open Access Journals (Sweden)

    Sejdi Sejdiu

    2016-11-01

    The relationship between second language acquisition and the psychological capacity of the learner is still a divisive topic that generates a lot of debate. Some researchers contend that the acquisition of a second language tends to improve cognitive abilities in some individuals, but at the same time it could hinder the same abilities in others. Currently, immersion is a common occurrence in some countries. In the recent past, it has significantly increased in popularity, which has caused parents, professionals, and researchers to question whether second language acquisition has a positive impact on cognitive development, encompassing psychological ability. In summary, such work seeks to understand the effects of using a second language on the literacy aptitudes connected with the native language. Until recently, bilingualism was seen as a disadvantage, on the assumption that the presence of two languages would hinder or delay language development. However, recent studies have shown that bilinguals outperform monolinguals in tasks which require more attention.

  9. Complementary Variety: When Can Cooperation in Uncertain Environments Outperform Competitive Selection?

    Directory of Open Access Journals (Sweden)

    Martin Hilbert

    2017-01-01

    Evolving biological and socioeconomic populations can sometimes increase their growth rate by cooperatively redistributing resources among their members. In unchanging environments, this simply comes down to reallocating resources to fitter types. In uncertain and fluctuating environments, cooperation cannot always outperform blind competitive selection. When can it? The conditions depend on the particular shape of the fitness landscape. The article derives a single measure that quantifies by how much an intervention in stochastic environments can possibly outperform the blind forces of natural selection. It is a multivariate and multilevel measure that essentially quantifies the amount of complementary variety between different population types and environmental states. The more complementary the fitness of types in different environmental states, the proportionally larger the potential benefit of strategic cooperation over competitive selection. With complementary variety, holding population shares constant will always outperform natural and market selection (including bet-hedging, portfolio management, and stochastic switching). The result can be used both to determine the acceptable cost of learning the details of a fitness landscape and to design multilevel classification systems of population types and environmental states that maximize population growth. Two empirical cases are explored, one from the evolving economy and the other from migrating birds.

  10. Sex Differences in Spatial Memory in Brown-Headed Cowbirds: Males Outperform Females on a Touchscreen Task.

    Directory of Open Access Journals (Sweden)

    Mélanie F Guigueno

    Spatial cognition in females and males can differ in species in which there are sex-specific patterns in the use of space. Brown-headed cowbirds are brood parasites that show a reversal of the sex-typical space use often seen in mammals. Female cowbirds search for, revisit, and parasitize host nests; they have a larger hippocampus than males and better memory than males for a rewarded location in an open spatial environment. In the current study, we tested female and male cowbirds in breeding and non-breeding conditions on a touchscreen delayed-match-to-sample task using both spatial and colour stimuli. Our goal was to determine whether sex differences in spatial memory in cowbirds generalize to all spatial tasks or are task-dependent. Both sexes performed better on the spatial than on the colour touchscreen task. On the spatial task, breeding males outperformed breeding females. On the colour task, females and males did not differ, but females performed better in breeding condition than in non-breeding condition. Although female cowbirds were observed to outperform males on a previous larger-scale spatial task, males performed better than females on a task testing spatial memory in the cowbirds' immediate visual field. Spatial abilities in cowbirds can favour males or females depending on the type of spatial task, as has been observed in mammals, including humans.

  11. Sex Differences in Spatial Memory in Brown-Headed Cowbirds: Males Outperform Females on a Touchscreen Task

    Science.gov (United States)

    Guigueno, Mélanie F.; MacDougall-Shackleton, Scott A.; Sherry, David F.

    2015-01-01

    Spatial cognition in females and males can differ in species in which there are sex-specific patterns in the use of space. Brown-headed cowbirds are brood parasites that show a reversal of the sex-typical space use often seen in mammals. Female cowbirds search for, revisit, and parasitize host nests; they have a larger hippocampus than males and better memory than males for a rewarded location in an open spatial environment. In the current study, we tested female and male cowbirds in breeding and non-breeding conditions on a touchscreen delayed-match-to-sample task using both spatial and colour stimuli. Our goal was to determine whether sex differences in spatial memory in cowbirds generalize to all spatial tasks or are task-dependent. Both sexes performed better on the spatial than on the colour touchscreen task. On the spatial task, breeding males outperformed breeding females. On the colour task, females and males did not differ, but females performed better in breeding condition than in non-breeding condition. Although female cowbirds were observed to outperform males on a previous larger-scale spatial task, males performed better than females on a task testing spatial memory in the cowbirds’ immediate visual field. Spatial abilities in cowbirds can favour males or females depending on the type of spatial task, as has been observed in mammals, including humans. PMID:26083573

  12. Self-directed learning can outperform direct instruction in the course of a modern German medical curriculum - results of a mixed methods trial.

    Science.gov (United States)

    Peine, Arne; Kabino, Klaus; Spreckelsen, Cord

    2016-06-03

    Modernised medical curricula in Germany (so-called "reformed study programs") rely increasingly on alternative self-instructed learning forms such as e-learning and curriculum-guided self-study. However, there is a lack of evidence that these methods can outperform conventional teaching methods such as lectures and seminars. This study was conducted in order to compare extant traditional teaching methods with new instruction forms in terms of learning effect and student satisfaction. In a randomised trial, 244 students of medicine in their third academic year were assigned to one of four study branches representing self-instructed learning forms (e-learning and curriculum-based self-study) and instructed learning forms (lectures and seminars). All groups participated in their respective learning module with standardised materials and instructions. Learning effect was measured with pre-test and post-test multiple-choice questionnaires. Student satisfaction and learning style were examined via self-assessment. Of 244 initial participants, 223 completed the respective module and were included in the study. In the pre-test, the groups showed relatively homogenous scores. All students showed notable improvements compared with the pre-test results. Participants in the non-self-instructed learning groups reached scores of 14.71 (seminar) and 14.37 (lecture), while the groups of self-instructed learners reached higher scores with 17.23 (e-learning) and 15.81 (self-study). All groups improved significantly, most of all the e-learning group, whose self-assessment improved by 2.36. The study shows that students in modern study curricula learn better through modern self-instructed methods than through conventional methods. These methods should be used more, as they also show good levels of student acceptance and higher scores in personal self-assessment of knowledge.

  13. Using Outperformance Pay to Motivate Academics: Insiders' Accounts of Promises and Problems

    Science.gov (United States)

    Field, Laurie

    2015-01-01

    Many researchers have investigated the appropriateness of pay for outperformance (also called "merit-based pay" and "performance-based pay") for academics, but a review of this body of work shows that the voice of academics themselves is largely absent. This article is a contribution to addressing this gap, summarising the…

  14. Do new wipe materials outperform traditional lead dust cleaning methods?

    Science.gov (United States)

    Lewis, Roger D; Ong, Kee Hean; Emo, Brett; Kennedy, Jason; Brown, Christopher A; Condoor, Sridhar; Thummalakunta, Laxmi

    2012-01-01

    Government guidelines have traditionally recommended the use of wet mopping, sponging, or vacuuming for removal of lead-contaminated dust from hard surfaces in homes. The emergence of new technologies, such as the electrostatic dry cloth and wet disposable cloths used on mopheads, provides an opportunity to evaluate their ability to remove lead compared with more established methods. The purpose of this study was to determine if relative differences exist between two new and two older methods for removal of lead-contaminated dust (LCD) from three wood surfaces characterized by different roughness or texture. Standard leaded dust was applied to the surfaces, and the coefficient of friction was measured for each wipe material. Analysis of variance was used to evaluate the surface and cleaning methods. There were significant interactions between cleaning method and surface type, p = 0.007, and cleaning method was found to be a significant factor in removal of lead. The coefficient of friction, which differed significantly among the three wipes, is likely to influence the cleaning action. Cleaning method appears to be more important than texture in LCD removal from hard surfaces. There are some small but important factors in cleaning LCD from hard surfaces, including the limited ability of a Swiffer mop to conform to curved surfaces and the efficiency of the wetted shop towel and vacuuming for cleaning all surface textures. The mean percentage reduction in lead dust achieved by the traditional methods (vacuuming and wet wiping) was greater and more consistent than that of the new methods (electrostatic dry cloth and wet Swiffer mop): vacuuming and wet wiping achieved lead reductions of 92 ± 4% and 91 ± 4%, respectively, while the electrostatic dry cloth and wet Swiffer mop achieved reductions of only 89 ± 8% and 81 ± 17%, respectively.

  15. Can We Train Machine Learning Methods to Outperform the High-dimensional Propensity Score Algorithm?

    Science.gov (United States)

    Karim, Mohammad Ehsanul; Pang, Menglan; Platt, Robert W

    2018-03-01

    The use of retrospective health care claims datasets is frequently criticized for the lack of complete information on potential confounders. By utilizing patients' health status-related information from claims datasets as surrogates or proxies for mismeasured and unobserved confounders, the high-dimensional propensity score algorithm enables us to reduce bias. Using a previously published cohort study of postmyocardial infarction statin use (1998-2012), we compare the performance of the algorithm with a number of popular machine learning approaches for confounder selection in high-dimensional covariate spaces: random forest, least absolute shrinkage and selection operator (lasso), and elastic net. Our results suggest that, when the data analysis is done with epidemiologic principles in mind, machine learning methods perform as well as the high-dimensional propensity score algorithm. Using a plasmode framework that mimicked the empirical data, we also showed that a hybrid of machine learning and high-dimensional propensity score algorithms generally performs slightly better than either alone in terms of mean squared error, when a bias-based analysis is used.
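
    For illustration only, here is a minimal sketch of one of the machine learning approaches named above, lasso-penalized logistic regression, used to estimate propensity scores and select covariates. The data are synthetic and the settings are not from the study.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Minimal sketch (not the study's pipeline): lasso logistic regression as a
# combined confounder-selection / propensity-score step on synthetic data.
rng = np.random.default_rng(0)
n, p = 1000, 200                               # many candidate proxy covariates
X = rng.standard_normal((n, p))
true_beta = np.zeros(p)
true_beta[:5] = 0.5                            # only a few true confounders
treatment = rng.binomial(1, 1 / (1 + np.exp(-X @ true_beta)))

ps_model = LogisticRegression(penalty="l1", solver="liblinear", C=0.1)
ps_model.fit(X, treatment)
propensity = ps_model.predict_proba(X)[:, 1]   # estimated propensity scores
selected = np.flatnonzero(ps_model.coef_[0])   # covariates retained by the lasso
print(f"{selected.size} covariates selected")
```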

  16. De novo clustering methods outperform reference-based methods for assigning 16S rRNA gene sequences to operational taxonomic units

    Directory of Open Access Journals (Sweden)

    Sarah L. Westcott

    2015-12-01

    Background. 16S rRNA gene sequences are routinely assigned to operational taxonomic units (OTUs) that are then used to analyze complex microbial communities. A number of methods have been employed to carry out the assignment of 16S rRNA gene sequences to OTUs, leading to confusion over which method is optimal. A recent study suggested that a clustering method should be selected based on its ability to generate stable OTU assignments that do not change as additional sequences are added to the dataset. In contrast, we contend that the quality of the OTU assignments, the ability of the method to properly represent the distances between the sequences, is more important. Methods. Our analysis implemented six de novo clustering algorithms (single linkage, complete linkage, average linkage, abundance-based greedy clustering, distance-based greedy clustering, and Swarm) as well as the open- and closed-reference methods. Using two previously published datasets, we used the Matthews correlation coefficient (MCC) to assess the stability and quality of OTU assignments. Results. The stability of OTU assignments did not reflect the quality of the assignments. Depending on the dataset being analyzed, the average linkage and the distance- and abundance-based greedy clustering methods generated OTUs that were more likely to represent the actual distances between sequences than the open- and closed-reference methods. We also demonstrated that for the greedy algorithms VSEARCH produced assignments that were comparable to those produced by USEARCH, making VSEARCH a viable free and open source alternative to USEARCH. Further interrogation of the reference-based methods indicated that when USEARCH or VSEARCH were used to identify the closest reference, the OTU assignments were sensitive to the order of the reference sequences because the reference sequences can be identical over the region being considered. More troubling was the observation that while both USEARCH and…
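
    The Matthews correlation coefficient used above to score OTU quality is easy to compute from pairwise confusion counts. A minimal sketch with made-up counts:

```python
import math

# Minimal sketch of the Matthews correlation coefficient (MCC) used to score
# OTU assignments: counts are over pairs of sequences (clustered together or
# apart vs. within or beyond the distance threshold). Numbers are invented.
def mcc(tp, tn, fp, fn):
    denom = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return (tp * tn - fp * fn) / denom if denom else 0.0

print(f"{mcc(tp=9000, tn=88000, fp=1500, fn=1500):.2f}")  # ~0.84
```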

  17. Parametric methods outperformed non-parametric methods in comparisons of discrete numerical variables

    Directory of Open Access Journals (Sweden)

    Sandvik Leiv

    2011-04-01

    Background. The number of events per individual is a widely reported variable in medical research papers. Such variables are the most common representation of the general variable type called discrete numerical. There is currently no consensus on how to compare and present such variables, and recommendations are lacking. The objective of this paper is to present recommendations for analysis and presentation of results for discrete numerical variables. Methods. Two simulation studies were used to investigate the performance of hypothesis tests and confidence interval methods for variables with outcomes {0, 1, 2}, {0, 1, 2, 3}, {0, 1, 2, 3, 4}, and {0, 1, 2, 3, 4, 5}, using the difference between the means as an effect measure. Results. The Welch U test (the T test with adjustment for unequal variances) and its associated confidence interval performed well for almost all situations considered. The Brunner-Munzel test also performed well, except for small sample sizes (10 in each group). The ordinary T test, the Wilcoxon-Mann-Whitney test, the percentile bootstrap interval, and the bootstrap-t interval did not perform satisfactorily. Conclusions. The difference between the means is an appropriate effect measure for comparing two independent discrete numerical variables that have both lower and upper bounds. To analyze this problem, we encourage more frequent use of parametric hypothesis tests and confidence intervals.
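
    A minimal sketch of the kind of comparison the paper describes, assuming synthetic data: two groups of a discrete numerical outcome with values {0, ..., 5}, analyzed with the variance-adjusted (Welch) T test and the Wilcoxon-Mann-Whitney test via SciPy. The outcome probabilities below are invented.

```python
import numpy as np
from scipy import stats

# Synthetic discrete numerical data: event counts in {0,...,5} for two groups.
rng = np.random.default_rng(1)
a = rng.choice(6, size=50, p=[0.35, 0.25, 0.15, 0.10, 0.10, 0.05])
b = rng.choice(6, size=50, p=[0.20, 0.20, 0.20, 0.15, 0.15, 0.10])

t, p_welch = stats.ttest_ind(a, b, equal_var=False)  # Welch adjustment
u, p_mw = stats.mannwhitneyu(a, b)                   # Wilcoxon-Mann-Whitney
print(f"difference in means: {a.mean() - b.mean():.2f}")
print(f"Welch p = {p_welch:.3f}, Mann-Whitney p = {p_mw:.3f}")
```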

  18. Backtracking-Based Iterative Regularization Method for Image Compressive Sensing Recovery

    Directory of Open Access Journals (Sweden)

    Lingjun Liu

    2017-01-01

    This paper presents a variant of the iterative shrinkage-thresholding (IST) algorithm, called backtracking-based adaptive IST (BAIST), for image compressive sensing (CS) reconstruction. For increasing iterations, IST usually yields a smoothing of the solution and runs into prematurity. To add back more details, the BAIST method backtracks to the previous noisy image using L2 norm minimization, i.e., minimizing the Euclidean distance between the current solution and the previous ones. Through this modification, the BAIST method achieves superior performance while maintaining the low complexity of IST-type methods. Also, BAIST adopts a nonlocal regularization with an adaptive regularizer to automatically detect the sparsity level of an image. Experimental results show that our algorithm outperforms the original IST method and several excellent CS techniques.
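
    For context, here is a minimal sketch of the plain IST baseline that BAIST modifies (not the BAIST algorithm itself): soft-thresholded gradient steps recovering a sparse signal from y = Ax. All sizes and the threshold are toy choices.

```python
import numpy as np

# Plain iterative shrinkage-thresholding (IST) on a toy sparse recovery problem.
def soft(v, lam):
    return np.sign(v) * np.maximum(np.abs(v) - lam, 0.0)

rng = np.random.default_rng(0)
m, n, k = 64, 256, 8                           # measurements, signal size, sparsity
A = rng.standard_normal((m, n)) / np.sqrt(m)   # random sensing matrix
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
y = A @ x_true

x = np.zeros(n)
step = 1.0 / np.linalg.norm(A, 2) ** 2         # step size from the spectral norm
for _ in range(300):
    x = soft(x + step * A.T @ (y - A @ x), 0.01)  # gradient step + shrinkage
print(f"relative error: {np.linalg.norm(x - x_true) / np.linalg.norm(x_true):.3f}")
```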

  19. Proteome Profiling Outperforms Transcriptome Profiling for Coexpression Based Gene Function Prediction

    Energy Technology Data Exchange (ETDEWEB)

    Wang, Jing; Ma, Zihao; Carr, Steven A.; Mertins, Philipp; Zhang, Hui; Zhang, Zhen; Chan, Daniel W.; Ellis, Matthew J. C.; Townsend, R. Reid; Smith, Richard D.; McDermott, Jason E.; Chen, Xian; Paulovich, Amanda G.; Boja, Emily S.; Mesri, Mehdi; Kinsinger, Christopher R.; Rodriguez, Henry; Rodland, Karin D.; Liebler, Daniel C.; Zhang, Bing

    2016-11-11

    Coexpression of mRNAs under multiple conditions is commonly used to infer cofunctionality of their gene products despite well-known limitations of this “guilt-by-association” (GBA) approach. Recent advancements in mass spectrometry-based proteomic technologies have enabled global expression profiling at the protein level; however, whether proteome profiling data can outperform transcriptome profiling data for coexpression based gene function prediction has not been systematically investigated. Here, we address this question by constructing and analyzing mRNA and protein coexpression networks for three cancer types with matched mRNA and protein profiling data from The Cancer Genome Atlas (TCGA) and the Clinical Proteomic Tumor Analysis Consortium (CPTAC). Our analyses revealed a marked difference in wiring between the mRNA and protein coexpression networks. Whereas protein coexpression was driven primarily by functional similarity between coexpressed genes, mRNA coexpression was driven by both cofunction and chromosomal colocalization of the genes. Functionally coherent mRNA modules were more likely to have their edges preserved in corresponding protein networks than functionally incoherent mRNA modules. Proteomic data strengthened the link between gene expression and function for at least 75% of Gene Ontology (GO) biological processes and 90% of KEGG pathways. A web application, Gene2Net (http://cptac.gene2net.org), developed based on the three protein coexpression networks, revealed novel gene-function relationships, such as linking ERBB2 (HER2) to lipid biosynthetic process in breast cancer, identifying PLG as a new gene involved in complement activation, and identifying AEBP1 as a new epithelial-mesenchymal transition (EMT) marker. Our results demonstrate that proteome profiling outperforms transcriptome profiling for coexpression based gene function prediction. Proteomics should be integrated, if not preferred, in gene function and human disease studies.

  20. HINTS outperforms ABCD2 to screen for stroke in acute continuous vertigo and dizziness.

    Science.gov (United States)

    Newman-Toker, David E; Kerber, Kevin A; Hsieh, Yu-Hsiang; Pula, John H; Omron, Rodney; Saber Tehrani, Ali S; Mantokoudis, Georgios; Hanley, Daniel F; Zee, David S; Kattah, Jorge C

    2013-10-01

    younger than 60 years old (28.9%). HINTS stroke sensitivity was 96.5%, specificity was 84.4%, LR+ was 6.19, and LR- was 0.04 and did not vary by age. For any central lesion, sensitivity was 96.8%, specificity was 98.5%, LR+ was 63.9, and LR- was 0.03 for HINTS, and sensitivity was 99.2%, specificity was 97.0%, LR+ was 32.7, and LR- was 0.01 for HINTS "plus" (any new hearing loss added to HINTS). Initial MRIs were falsely negative in 15 of 105 (14.3%) infarctions; all but one was obtained before 48 hours after onset, and all were confirmed by delayed MRI. HINTS substantially outperforms ABCD2 for stroke diagnosis in ED patients with AVS. It also outperforms MRI obtained within the first 2 days after symptom onset. While HINTS testing has traditionally been performed by specialists, methods for empowering emergency physicians (EPs) to leverage this approach for stroke screening in dizziness should be investigated. © 2013 by the Society for Academic Emergency Medicine.
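
    As a quick arithmetic check of the reported HINTS figures, the likelihood ratios follow directly from sensitivity and specificity (LR+ = sens/(1 − spec); LR− = (1 − sens)/spec):

```python
# Verify the reported likelihood ratios from the quoted HINTS stroke
# sensitivity (96.5%) and specificity (84.4%).
sens, spec = 0.965, 0.844
print(f"LR+ = {sens / (1 - spec):.2f}")   # ~6.19, matching the record
print(f"LR- = {(1 - sens) / spec:.2f}")   # ~0.04, matching the record
```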

  1. Sensitivity of monthly streamflow forecasts to the quality of rainfall forcing: When do dynamical climate forecasts outperform the Ensemble Streamflow Prediction (ESP) method?

    Science.gov (United States)

    Tanguy, M.; Prudhomme, C.; Harrigan, S.; Smith, K. A.; Parry, S.

    2017-12-01

    Forecasting hydrological extremes is challenging, especially at lead times over 1 month for catchments with limited hydrological memory and variable climates. One simple way to derive monthly or seasonal hydrological forecasts is to use historical climate data to drive hydrological models using the Ensemble Streamflow Prediction (ESP) method. This gives a range of possible future streamflow given known initial hydrologic conditions alone. The degree of skill of ESP depends highly on the forecast initialisation month and catchment type. Using dynamic rainfall forecasts as driving data instead of historical data could potentially improve streamflow predictions. A lot of effort is being invested within the meteorological community to improve these forecasts. However, while recent progress shows promise (e.g. NAO in winter), the skill of these forecasts at monthly to seasonal timescales is generally still limited, and the extent to which they might lead to improved hydrological forecasts is an area of active research. Additionally, these meteorological forecasts are currently being produced at 1 month or seasonal time-steps in the UK, whereas hydrological models require forcings at daily or sub-daily time-steps. Keeping in mind these limitations of available rainfall forecasts, the objectives of this study are to find out (i) how accurate monthly dynamical rainfall forecasts need to be to outperform ESP, and (ii) how the method used to disaggregate monthly rainfall forecasts into daily rainfall time series affects results. For the first objective, synthetic rainfall time series were created by increasingly degrading observed data (a proxy for a 'perfect forecast') from 0% to ±50% error. For the second objective, three different methods were used to disaggregate monthly rainfall data into daily time series. These were used to force a simple lumped hydrological model (GR4J) to generate streamflow predictions at a one-month lead time for over 300 catchments.
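
    A minimal sketch of the degradation experiment described for objective (i), with a synthetic rainfall series standing in for the observations:

```python
import numpy as np

# Perturb an observed monthly rainfall series (a proxy for a perfect forecast)
# with increasing error levels, as in the degradation experiment above.
# `observed` is synthetic stand-in data, not the study's catchment records.
rng = np.random.default_rng(42)
observed = rng.gamma(shape=2.0, scale=40.0, size=120)   # 10 years, monthly (mm)

for error in (0.0, 0.1, 0.3, 0.5):                      # 0% to +/-50% error
    noise = rng.uniform(-error, error, observed.size)
    degraded = observed * (1.0 + noise)                 # synthetic "forecast"
    rmse = np.sqrt(np.mean((degraded - observed) ** 2))
    print(f"error +/-{error:.0%}: RMSE = {rmse:6.1f} mm")
```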

  2. A feature point identification method for positron emission particle tracking with multiple tracers

    Energy Technology Data Exchange (ETDEWEB)

    Wiggins, Cody, E-mail: cwiggin2@vols.utk.edu [University of Tennessee-Knoxville, Department of Physics and Astronomy, 1408 Circle Drive, Knoxville, TN 37996 (United States); Santos, Roque [University of Tennessee-Knoxville, Department of Nuclear Engineering (United States); Escuela Politécnica Nacional, Departamento de Ciencias Nucleares (Ecuador); Ruggles, Arthur [University of Tennessee-Knoxville, Department of Nuclear Engineering (United States)

    2017-01-21

    A novel detection algorithm for Positron Emission Particle Tracking (PEPT) with multiple tracers, based on optical feature point identification (FPI) methods, is presented. This new method, the FPI method, is compared to a previous multiple-tracer PEPT method via analyses of experimental and simulated data. The FPI method outperforms the older method in cases of large particle numbers and fine time resolution. Simulated data show the FPI method to be capable of identifying 100 particles at 0.5 mm average spatial error. Detection error is seen to vary with the inverse square root of the number of lines of response (LORs) used for detection and increases as particle separation decreases. Highlights: • A new approach to positron emission particle tracking is presented. • Using optical feature point identification analogs, multiple particle tracking is achieved. • The method is compared to a previous multiple-particle method. • The accuracy and applicability of the method are explored.

  3. A method for acetylcholinesterase staining of brain sections previously processed for receptor autoradiography.

    Science.gov (United States)

    Lim, M M; Hammock, E A D; Young, L J

    2004-02-01

    Receptor autoradiography using selective radiolabeled ligands allows visualization of brain receptor distribution and density on film. The resolution of specific brain regions on the film often can be difficult to discern owing to the general spread of the radioactive label and the lack of neuroanatomical landmarks on film. Receptor binding is a chemically harsh protocol that can render the tissue virtually unstainable by Nissl and other conventional stains used to delineate neuroanatomical boundaries of brain regions. We describe a method for acetylcholinesterase (AChE) staining of slides previously processed for receptor binding. AChE staining is a useful tool for delineating major brain nuclei and tracts. AChE staining on sections that have been processed for receptor autoradiography provides a direct comparison of brain regions for more precise neuroanatomical description. We report a detailed thiocholine protocol that is a modification of the Koelle-Friedenwald method to amplify the AChE signal in brain sections previously processed for autoradiography. We also describe several temporal and experimental factors that can affect the density and clarity of the AChE signal when using this protocol.

  4. Generalized Truncated Methods for an Efficient Solution of Retrial Systems

    Directory of Open Access Journals (Sweden)

    Ma Jose Domenech-Benlloch

    2008-01-01

    We are concerned with the analytic solution of multiserver retrial queues including the impatience phenomenon. As there are no closed-form solutions to these systems, approximate methods are required. We propose two different generalized truncated methods to effectively solve this type of system. The methods proposed are based on the homogenization of the state space beyond a given number of users in the retrial orbit. We compare the proposed methods with the best-known methods that have appeared in the literature across a wide range of scenarios. We conclude that the proposed methods generally outperform previous proposals in terms of accuracy for the most common performance parameters used in retrial systems, with moderate growth in computational cost.

  5. Comparison of DNA preservation methods for environmental bacterial community samples.

    Science.gov (United States)

    Gray, Michael A; Pratte, Zoe A; Kellogg, Christina A

    2013-02-01

    Field collections of environmental samples, for example corals, for molecular microbial analyses present distinct challenges. The lack of laboratory facilities in remote locations is common, and preservation of microbial community DNA for later study is critical. A particular challenge is keeping samples frozen in transit. Five nucleic acid preservation methods that do not require cold storage were compared for effectiveness over time and ease of use. Mixed microbial communities of known composition were created and preserved by DNAgard™, RNAlater®, DMSO-EDTA-salt (DESS), FTA® cards, and FTA Elute® cards. Automated ribosomal intergenic spacer analysis and clone libraries were used to detect specific changes in the faux communities over weeks and months of storage. A previously known bias in FTA® cards that results in lower recovery of pure cultures of Gram-positive bacteria was also detected in mixed community samples. There appears to be a uniform bias across all five preservation methods against microorganisms with high G + C DNA. Overall, the liquid-based preservatives (DNAgard™, RNAlater®, and DESS) outperformed the card-based methods. No single liquid method clearly outperformed the others, leaving method choice to be based on experimental design, field facilities, shipping constraints, and allowable cost. © 2012 Federation of European Microbiological Societies. Published by Blackwell Publishing Ltd. All rights reserved.

  6. Multifunctional Cellulolytic Enzymes Outperform Processive Fungal Cellulases for Coproduction of Nanocellulose and Biofuels.

    Science.gov (United States)

    Yarbrough, John M; Zhang, Ruoran; Mittal, Ashutosh; Vander Wall, Todd; Bomble, Yannick J; Decker, Stephen R; Himmel, Michael E; Ciesielski, Peter N

    2017-03-28

    Producing fuels, chemicals, and materials from renewable resources to meet societal demands remains an important step in the transition to a sustainable, clean energy economy. The use of cellulolytic enzymes for the production of nanocellulose enables the coproduction of sugars for biofuels production in a format that is largely compatible with the process design employed by modern lignocellulosic (second generation) biorefineries. However, yields of enzymatically produced nanocellulose are typically much lower than those achieved by mineral acid production methods. In this study, we compare the capacity for coproduction of nanocellulose and fermentable sugars using two vastly different cellulase systems: the classical "free enzyme" system of the saprophytic fungus, Trichoderma reesei (T. reesei) and the complexed, multifunctional enzymes produced by the hot springs resident, Caldicellulosiruptor bescii (C. bescii). We demonstrate by comparative digestions that the C. bescii system outperforms the fungal enzyme system in terms of total cellulose conversion, sugar production, and nanocellulose production. In addition, we show by multimodal imaging and dynamic light scattering that the nanocellulose produced by the C. bescii cellulase system is substantially more uniform than that produced by the T. reesei system. These disparities in the yields and characteristics of the nanocellulose produced by these disparate systems can be attributed to the dramatic differences in the mechanisms of action of the dominant enzymes in each system.

  7. Deep Convolutional Neural Networks Outperform Feature-Based But Not Categorical Models in Explaining Object Similarity Judgments

    Science.gov (United States)

    Jozwik, Kamila M.; Kriegeskorte, Nikolaus; Storrs, Katherine R.; Mur, Marieke

    2017-01-01

    Recent advances in Deep convolutional Neural Networks (DNNs) have enabled unprecedentedly accurate computational models of brain representations, and present an exciting opportunity to model diverse cognitive functions. State-of-the-art DNNs achieve human-level performance on object categorisation, but it is unclear how well they capture human behavior on complex cognitive tasks. Recent reports suggest that DNNs can explain significant variance in one such task, judging object similarity. Here, we extend these findings by replicating them for a rich set of object images, comparing performance across layers within two DNNs of different depths, and examining how the DNNs’ performance compares to that of non-computational “conceptual” models. Human observers performed similarity judgments for a set of 92 images of real-world objects. Representations of the same images were obtained in each of the layers of two DNNs of different depths (8-layer AlexNet and 16-layer VGG-16). To create conceptual models, other human observers generated visual-feature labels (e.g., “eye”) and category labels (e.g., “animal”) for the same image set. Feature labels were divided into parts, colors, textures and contours, while category labels were divided into subordinate, basic, and superordinate categories. We fitted models derived from the features, categories, and from each layer of each DNN to the similarity judgments, using representational similarity analysis to evaluate model performance. In both DNNs, similarity within the last layer explains most of the explainable variance in human similarity judgments. The last layer outperforms almost all feature-based models. Late and mid-level layers outperform some but not all feature-based models. Importantly, categorical models predict similarity judgments significantly better than any DNN layer. Our results provide further evidence for commonalities between DNNs and brain representations. Models derived from visual features
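
    The model evaluation described above uses representational similarity analysis (RSA). A minimal sketch, with random stand-ins for the DNN features and the human judgments:

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

# Minimal RSA sketch: build representational dissimilarity matrices (RDMs)
# from a model's image representations and from human similarity judgments,
# then correlate their upper triangles. All data are random stand-ins; with
# real data, a higher correlation means the model better explains behavior.
rng = np.random.default_rng(0)
n_images = 92
model_features = rng.standard_normal((n_images, 4096))       # e.g., one DNN layer
human_rdm_flat = rng.random(n_images * (n_images - 1) // 2)  # judged dissimilarities

model_rdm_flat = pdist(model_features, metric="correlation")  # 1 - Pearson r
rho, _ = spearmanr(model_rdm_flat, human_rdm_flat)
print(f"model-human RDM correlation: rho = {rho:.3f}")
```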

  8. A New Fast Vertical Method for Mining Frequent Patterns

    Directory of Open Access Journals (Sweden)

    Zhihong Deng

    2010-12-01

    Vertical mining methods are very effective for mining frequent patterns and usually outperform horizontal mining methods. However, vertical methods become ineffective when the cardinality of the tidset (tid-list or diffset) is very large or there are a very large number of transactions, because the intersection time becomes costly. In this paper, we propose a novel vertical algorithm called PPV for fast frequent pattern discovery. PPV works based on a data structure called Node-lists, which is obtained from a coding prefix-tree called a PPC-tree. The efficiency of PPV is achieved with three techniques. First, the Node-list is much more compact compared with previously proposed vertical structures (such as tid-lists or diffsets) since transactions with common prefixes share the same nodes of the PPC-tree. Second, the counting of support is transformed into the intersection of Node-lists, and the complexity of intersecting two Node-lists can be reduced to O(m+n) by an efficient strategy, where m and n are the cardinalities of the two Node-lists, respectively. Third, the ancestor-descendant relationship of two nodes, which is the basic step of intersecting Node-lists, can be very efficiently verified by the Pre-Post codes of the nodes. We experimentally compare our algorithm with FP-growth and two prominent vertical algorithms (Eclat and dEclat) on a number of databases. The experimental results show that PPV is an efficient algorithm that outperforms FP-growth, Eclat, and dEclat.
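
    The Pre-Post code trick behind the third technique fits in a few lines: in a prefix tree, node a is an ancestor of node b exactly when pre(a) < pre(b) and post(a) > post(b). The tuples below are hypothetical PPC-tree nodes, not from the paper.

```python
# Minimal sketch of the Pre-Post ancestor check used when intersecting
# Node-lists. Nodes are (pre_order, post_order, support_count) tuples.
def is_ancestor(a, b):
    return a[0] < b[0] and a[1] > b[1]

root = (0, 6, 10)   # visited first in pre-order, last in post-order
child = (2, 3, 4)
print(is_ancestor(root, child))   # True
print(is_ancestor(child, root))   # False
```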

  9. Using a Forensic Research Method for Establishing an Alternative Method for Audience Measurement in Print Advertising

    DEFF Research Database (Denmark)

    Schmidt, Marcus; Krause, Niels; Solgaard, Hans Stubbe

    2012-01-01

    Advantages and disadvantages of the survey approach are discussed. It is hypothesized that observational methods sometimes constitute a reasonable and powerful substitute for traditional survey methods. Under certain circumstances, unobtrusive methods may even outperform traditional techniques. Non… amount of pages, the method appears applicable for flyers with multiple pages.

  10. Atomic-Layer-Deposited AZO Outperforms ITO in High-Efficiency Polymer Solar Cells

    KAUST Repository

    Kan, Zhipeng

    2018-05-11

    Tin-doped indium oxide (ITO) transparent conducting electrodes are widely used across the display industry, and are currently the cornerstone of photovoltaic device developments, taking a substantial share in the manufacturing cost of large-area modules. However, cost and supply considerations are set to limit the extensive use of indium for optoelectronic device applications and, in turn, alternative transparent conducting oxide (TCO) materials are required. In this report, we show that aluminum-doped zinc oxide (AZO) thin films grown by atomic layer deposition (ALD) are sufficiently conductive and transparent to outperform ITO as the cathode in inverted polymer solar cells. Reference polymer solar cells made with atomic-layer-deposited AZO cathodes, PCE10 as the polymer donor and PC71BM as the fullerene acceptor (model systems), reach power conversion efficiencies of ca. 10% (compared to ca. 9% with ITO-coated glass), without compromising other figures of merit. These ALD-grown AZO electrodes are promising for a wide range of optoelectronic device applications relying on TCOs.

  11. Atomic-Layer-Deposited AZO Outperforms ITO in High-Efficiency Polymer Solar Cells

    KAUST Repository

    Kan, Zhipeng; Wang, Zhenwei; Firdaus, Yuliar; Babics, Maxime; Alshareef, Husam N.; Beaujuge, Pierre

    2018-01-01

    Tin-doped indium oxide (ITO) transparent conducting electrodes are widely used across the display industry, and are currently the cornerstone of photovoltaic device developments, taking a substantial share in the manufacturing cost of large-area modules. However, cost and supply considerations are set to limit the extensive use of indium for optoelectronic device applications and, in turn, alternative transparent conducting oxide (TCO) materials are required. In this report, we show that aluminum-doped zinc oxide (AZO) thin films grown by atomic layer deposition (ALD) are sufficiently conductive and transparent to outperform ITO as the cathode in inverted polymer solar cells. Reference polymer solar cells made with atomic-layer-deposited AZO cathodes, PCE10 as the polymer donor and PC71BM as the fullerene acceptor (model systems), reach power conversion efficiencies of ca. 10% (compared to ca. 9% with ITO-coated glass), without compromising other figures of merit. These ALD-grown AZO electrodes are promising for a wide range of optoelectronic device applications relying on TCOs.

  12. Solving Eigenvalue response matrix equations with Jacobian-Free Newton-Krylov methods

    International Nuclear Information System (INIS)

    Roberts, Jeremy A.; Forget, Benoit

    2011-01-01

    The response matrix method for reactor eigenvalue problems is motivated as a technique for solving coarse mesh transport equations, and the classical approach of power iteration (PI) for solution is described. The method is then reformulated as a nonlinear system of equations, and the associated Jacobian is derived. A Jacobian-Free Newton-Krylov (JFNK) method is employed to solve the system, using an approximate Jacobian coupled with incomplete factorization as a preconditioner. The unpreconditioned JFNK slightly outperforms PI, and preconditioned JFNK outperforms both PI and Steffensen-accelerated PI significantly. (author)
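
    A minimal sketch of the Jacobian-Free Newton-Krylov idea on a toy nonlinear system (not the response matrix equations): the Jacobian-vector product is approximated by a finite difference, so the Krylov solver never forms the Jacobian.

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, gmres

# Toy nonlinear system F(u) = 0 with root (1, 2); stand-in for the eigenvalue
# response matrix equations in the record.
def F(u):
    return np.array([u[0]**2 + u[1] - 3.0, u[0] + u[1]**2 - 5.0])

def jfnk_solve(u, n_newton=20, eps=1e-7):
    for _ in range(n_newton):
        r = F(u)
        if np.linalg.norm(r) < 1e-10:
            break
        # Matrix-free Jacobian: J v ~ (F(u + eps*v) - F(u)) / eps
        J = LinearOperator((2, 2), matvec=lambda v: (F(u + eps * v) - F(u)) / eps)
        du, _ = gmres(J, -r, atol=1e-12)   # Krylov solve of the Newton step
        u = u + du
    return u

print(jfnk_solve(np.array([1.0, 1.0])))    # converges to [1. 2.]
```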

  13. Efficient Method to Approximately Solve Retrial Systems with Impatience

    Directory of Open Access Journals (Sweden)

    Jose Manuel Gimenez-Guzman

    2012-01-01

    We present a novel technique to solve multiserver retrial systems with impatience. Unfortunately, these systems do not admit an exact analytic solution, so it is mandatory to resort to approximate techniques. This novel technique does not rely on the numerical solution of the steady-state Kolmogorov equations of the Continuous Time Markov Chain, as is common for this kind of system, but instead considers the system in its Markov Decision Process setting. This technique, known as value extrapolation, truncates the infinite state space and uses a polynomial extrapolation method to approximate the states outside the truncated state space. A numerical evaluation is carried out to evaluate this technique and to compare its performance with previous techniques. The obtained results show that value extrapolation greatly outperforms the previous approaches that have appeared in the literature, not only in terms of accuracy but also in terms of computational cost.

  14. Change in end-tidal carbon dioxide outperforms other surrogates for change in cardiac output during fluid challenge.

    Science.gov (United States)

    Lakhal, K; Nay, M A; Kamel, T; Lortat-Jacob, B; Ehrmann, S; Rozec, B; Boulain, T

    2017-03-01

    During fluid challenge, the volume expansion (VE)-induced increase in cardiac output (ΔVECO) is seldom measured. In patients with shock undergoing strictly controlled mechanical ventilation and receiving VE, we assessed minimally invasive surrogates for ΔVECO (measured by transthoracic echocardiography): fluid-induced increases in end-tidal carbon dioxide (ΔVEE′CO2); pulse (ΔVEPP), systolic (ΔVESBP), and mean systemic blood pressure (ΔVEMBP); and femoral artery Doppler flow (ΔVEFemFlow). In the absence of arrhythmia, the fluid-induced decrease in heart rate (ΔVEHR) and in pulse pressure respiratory variation (ΔVEPPV) were also evaluated. Areas under the receiver operating characteristic curves (AUCROCs) reflect the ability to identify a response to VE (ΔVECO ≥15%). In 86 patients, ΔVEE′CO2 had an AUCROC of 0.82 [interquartile range 0.73-0.90], significantly higher than the AUCROCs for ΔVEPP, ΔVESBP, ΔVEMBP, and ΔVEFemFlow (AUCROC=0.61-0.65). A ΔVEE′CO2 >1 mm Hg (>0.13 kPa) had good positive (5.0 [2.6-9.8]) and fair negative (0.29 [0.2-0.5]) likelihood ratios. The 16 patients with arrhythmia had relationships between ΔVEE′CO2 and ΔVECO similar to those of patients with regular rhythm (r²=0.23 in both subgroups). In 60 patients with no arrhythmia, ΔVEE′CO2 (AUCROC=0.84 [0.72-0.92]) outperformed ΔVEHR (AUCROC=0.52 [0.39-0.66]) but not ΔVEPPV (AUCROC=0.73 [0.60-0.84], P=0.21). In the 45 patients with no arrhythmia who were receiving ventilation with low tidal volume, ΔVEE′CO2 outperformed ΔVEPPV (AUCROC=0.86 [0.72-0.95] vs 0.66 [0.49-0.80], P=0.02). ΔVEE′CO2 outperformed ΔVEPP, ΔVESBP, ΔVEMBP, ΔVEFemFlow, and ΔVEHR and, during protective ventilation, arrhythmia, or both, it also outperformed ΔVEPPV. A value of ΔVEE′CO2 >1 mm Hg (>0.13 kPa) indicated a likely response to VE. © The Author 2017. Published by Oxford University Press on behalf of the British Journal of Anaesthesia. All rights reserved. For Permissions, please email: journals.permissions@oup.com

  15. Microscopy outperformed in a comparison of five methods for detecting Trichomonas vaginalis in symptomatic women.

    Science.gov (United States)

    Nathan, B; Appiah, J; Saunders, P; Heron, D; Nichols, T; Brum, R; Alexander, S; Baraitser, P; Ison, C

    2015-03-01

    In the UK, despite its low sensitivity, wet mount microscopy is often the only method of detecting Trichomonas vaginalis infection. A study was conducted in symptomatic women to compare the performance of five methods for detecting T. vaginalis: an in-house polymerase chain reaction (PCR); the Aptima T. vaginalis kit; the OSOM® Trichomonas Rapid Test; culture; and microscopy. Symptomatic women underwent routine testing; microscopy and further swabs were taken for molecular testing, OSOM and culture. A true positive was defined as a sample that was positive for T. vaginalis by two or more different methods. Two hundred and forty-six women were recruited: 24 patients were positive for T. vaginalis by two or more different methods. Of these 24 patients, 21 were detected by real-time PCR (sensitivity 88%); 22 by the Aptima T. vaginalis kit (sensitivity 92%); 22 by OSOM (sensitivity 92%); nine by wet mount microscopy (sensitivity 38%); and 21 by culture (sensitivity 88%). Two patients were positive by just one method and were not considered true positives. All the other detection methods had a sensitivity to detect T. vaginalis that was significantly greater than wet mount microscopy, highlighting the number of cases that are routinely missed even in symptomatic women if microscopy is the only diagnostic method available. © The Author(s) 2014. Reprints and permissions: sagepub.co.uk/journalsPermissions.nav.

  16. THE INFLUENCE OF THE ASSESSMENT MODEL AND METHOD TOWARD THE SCIENCE LEARNING ACHIEVEMENT BY CONTROLLING THE STUDENTS' PREVIOUS KNOWLEDGE OF MATHEMATICS.

    OpenAIRE

    Adam Rumbalifar; I. G. N. Agung; Burhanuddin Tola.

    2018-01-01

    This research aims to study the influence of the assessment model and method toward the science learning achievement by controlling the students' previous knowledge of mathematics. This study was conducted at SMP East Seram district with the population of 295 students. This study applied a quasi-experimental method with 2 X 2 factorial design using the ANCOVA model. The findings after controlling the students' previous knowledge of mathematics show that the science learning achievement of th...

  17. A paclitaxel-loaded recombinant polypeptide nanoparticle outperforms Abraxane in multiple murine cancer models

    Science.gov (United States)

    Bhattacharyya, Jayanta; Bellucci, Joseph J.; Weitzhandler, Isaac; McDaniel, Jonathan R.; Spasojevic, Ivan; Li, Xinghai; Lin, Chao-Chieh; Chi, Jen-Tsan Ashley; Chilkoti, Ashutosh

    2015-08-01

    Packaging clinically relevant hydrophobic drugs into a self-assembled nanoparticle can improve their aqueous solubility, plasma half-life, tumour-specific uptake and therapeutic potential. To this end, here we conjugated paclitaxel (PTX) to recombinant chimeric polypeptides (CPs) that spontaneously self-assemble into ~60 nm near-monodisperse nanoparticles that increased the systemic exposure of PTX by sevenfold compared with free drug and twofold compared with the Food and Drug Administration-approved taxane nanoformulation (Abraxane). The tumour uptake of the CP-PTX nanoparticle was fivefold greater than free drug and twofold greater than Abraxane. In a murine cancer model of human triple-negative breast cancer and prostate cancer, CP-PTX induced near-complete tumour regression after a single dose in both tumour models, whereas at the same dose, no mice treated with Abraxane survived for >80 days (breast) and 60 days (prostate), respectively. These results show that a molecularly engineered nanoparticle with precisely engineered design features outperforms Abraxane, the current gold standard for PTX delivery.

  18. Evaluation of common methods for sampling invertebrate pollinator assemblages: net sampling out-perform pan traps.

    Science.gov (United States)

    Popic, Tony J; Davila, Yvonne C; Wardle, Glenda M

    2013-01-01

    Methods for sampling ecological assemblages strive to be efficient, repeatable, and representative. Unknowingly, common methods may be limited in terms of revealing species function and so of less value for comparative studies. The global decline in pollination services has stimulated surveys of flower-visiting invertebrates, using pan traps and net sampling. We explore the relative merits of these two methods in terms of species discovery, quantifying abundance, function, and composition, and responses of species to changing floral resources. Using a spatially-nested design we sampled across a 5000 km² area of arid grasslands, including 432 hours of net sampling and 1296 pan trap-days, between June 2010 and July 2011. Net sampling yielded 22% more species and 30% higher abundance than pan traps, and better reflected the spatio-temporal variation of floral resources. Species composition differed significantly between methods; from 436 total species, 25% were sampled by both methods, 50% only by nets, and the remaining 25% only by pans. Apart from being less comprehensive, if pan traps do not sample flower-visitors, the link to pollination is questionable. By contrast, net sampling functionally linked species to pollination through behavioural observations of flower-visitation interaction frequency. Netted specimens are also necessary for evidence of pollen transport. Benefits of net-based sampling outweighed minor differences in overall sampling effort. As pan traps and net sampling methods are not equivalent for sampling invertebrate-flower interactions, we recommend net sampling of invertebrate pollinator assemblages, especially if datasets are intended to document declines in pollination and guide measures to retain this important ecosystem service.

  19. Automated Facial Coding Software Outperforms People in Recognizing Neutral Faces as Neutral from Standardized Datasets

    Directory of Open Access Journals (Sweden)

    Peter Lewinski

    2015-09-01

    Little is known about people’s accuracy of recognizing neutral faces as neutral. In this paper, I demonstrate the importance of knowing how well people recognize neutral faces. I contrasted human recognition scores of 100 typical, neutral front-up facial images with scores of an arguably objective judge – automated facial coding (AFC) software. I hypothesized that the software would outperform humans in recognizing neutral faces because of the inherently objective nature of computer algorithms. Results confirmed this hypothesis. I provided the first-ever evidence that computer software (90%) was more accurate in recognizing neutral faces than people were (59%). I posited two theoretical mechanisms, i.e., smile-as-a-baseline and false recognition of emotion, as possible explanations for my findings.

  20. Evaluation of common methods for sampling invertebrate pollinator assemblages: net sampling out-perform pan traps.

    Directory of Open Access Journals (Sweden)

    Tony J Popic

    Methods for sampling ecological assemblages strive to be efficient, repeatable, and representative. Unknowingly, common methods may be limited in terms of revealing species function and so of less value for comparative studies. The global decline in pollination services has stimulated surveys of flower-visiting invertebrates, using pan traps and net sampling. We explore the relative merits of these two methods in terms of species discovery, quantifying abundance, function, and composition, and responses of species to changing floral resources. Using a spatially-nested design we sampled across a 5000 km² area of arid grasslands, including 432 hours of net sampling and 1296 pan trap-days, between June 2010 and July 2011. Net sampling yielded 22% more species and 30% higher abundance than pan traps, and better reflected the spatio-temporal variation of floral resources. Species composition differed significantly between methods; from 436 total species, 25% were sampled by both methods, 50% only by nets, and the remaining 25% only by pans. Apart from being less comprehensive, if pan traps do not sample flower-visitors, the link to pollination is questionable. By contrast, net sampling functionally linked species to pollination through behavioural observations of flower-visitation interaction frequency. Netted specimens are also necessary for evidence of pollen transport. Benefits of net-based sampling outweighed minor differences in overall sampling effort. As pan traps and net sampling methods are not equivalent for sampling invertebrate-flower interactions, we recommend net sampling of invertebrate pollinator assemblages, especially if datasets are intended to document declines in pollination and guide measures to retain this important ecosystem service.

  1. Maximum Simulated Likelihood and Expectation-Maximization Methods to Estimate Random Coefficients Logit with Panel Data

    DEFF Research Database (Denmark)

    Cherchi, Elisabetta; Guevara, Cristian

    2012-01-01

    The random coefficients logit model allows a more realistic representation of agents' behavior. However, the estimation of that model may involve simulation, which may become impractical with many random coefficients because of the curse of dimensionality. In this paper, the traditional maximum simulated likelihood (MSL) method is compared with the alternative expectation-maximization (EM) method, which does not require simulation. Previous literature had shown that for cross-sectional data, MSL outperforms the EM method in the ability to recover the true parameters and estimation time … with cross-sectional or with panel data, and (d) EM systematically attained more efficient estimators than the MSL method. The results imply that if the purpose of the estimation is only to determine the ratios of the model parameters (e.g., the value of time), the EM method should be preferred. For all …

  2. Gender differences in primary and secondary education: Are girls really outperforming boys?

    Science.gov (United States)

    Driessen, Geert; van Langen, Annemarie

    2013-06-01

    A moral panic has broken out in several countries after recent studies showed that girls were outperforming boys in education. Commissioned by the Dutch Ministry of Education, the present study examines the position of boys and girls in Dutch primary education and in the first phase of secondary education over the past ten to fifteen years. On the basis of several national and international large-scale databases, the authors examined whether one can indeed speak of a gender gap, at the expense of boys. Three domains were investigated, namely cognitive competencies, non-cognitive competencies, and school career features. The results as expressed in effect sizes show that there are hardly any differences with regard to language and mathematics proficiency. However, the position of boys in terms of educational level and attitudes and behaviour is much more unfavourable than that of girls. Girls, on the other hand, score more unfavourably with regard to sector and subject choice. While the present situation in general does not differ very much from that of a decade ago, it is difficult to predict in what way the balances might shift in the years to come.

  3. Connecting clinical and actuarial prediction with rule-based methods

    NARCIS (Netherlands)

    Fokkema, M.; Smits, N.; Kelderman, H.; Penninx, B.W.J.H.

    2015-01-01

    Meta-analyses comparing the accuracy of clinical versus actuarial prediction have shown actuarial methods to outperform clinical methods, on average. However, actuarial methods are still not widely used in clinical practice, and there has been a call for the development of actuarial prediction methods.

  4. The Variance-covariance Method using IOWGA Operator for Tourism Forecast Combination

    Directory of Open Access Journals (Sweden)

    Liangping Wu

    2014-08-01

    Three combination methods commonly used in tourism forecasting are the simple average method, the variance-covariance method and the discounted MSFE method. These methods assign each individual forecasting model a weight that cannot change over time. In this study, we introduce into tourism forecasting the IOWGA operator combination method, which overcomes this defect of the previous three combination methods. Moreover, we further investigate the performance of the four combination methods through a theoretical evaluation and a forecasting evaluation. The results of the theoretical evaluation show that the IOWGA operator combination method performs extremely well and outperforms the other forecast combination methods. Furthermore, in the forecasting evaluation the IOWGA operator combination method achieves good forecast performance and performs almost the same as the variance-covariance combination method. The IOWGA operator combination method mainly reflects the maximization of forecasting accuracy, and the variance-covariance combination method mainly reflects the reduction of forecast error. For future research, it may be worthwhile to introduce and examine other new combination methods that may improve forecasting accuracy, or to employ other techniques to control the timing of weight updates in combined forecasts.
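
    For context on the variance-covariance method named above, the sketch below computes the classical two-model combination weight from past forecast errors. The function name and the two-forecast restriction are my own simplifications, and the IOWGA operator itself (which reorders weights by each model's accuracy at every period) is not reproduced here.

```python
import numpy as np

def variance_covariance_weight(e1, e2):
    """Classical variance-covariance forecast combination for two models:
    the weight on forecast 1 that minimizes the combined error variance,
    estimated from past forecast errors e1 and e2."""
    s11, s22 = np.var(e1, ddof=1), np.var(e2, ddof=1)
    s12 = np.cov(e1, e2, ddof=1)[0, 1]
    return (s22 - s12) / (s11 + s22 - 2.0 * s12)

# Combined forecast: w * f1 + (1 - w) * f2, with w = variance_covariance_weight(e1, e2)
```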

  5. Laparoscopy After Previous Laparotomy

    Directory of Open Access Journals (Sweden)

    Zulfo Godinjak

    2006-11-01

    Following abdominal surgery, extensive adhesions often occur and can cause difficulties during laparoscopic operations. However, previous laparotomy is not considered a contraindication for laparoscopy. The aim of this study is to show that insertion of a Veress needle in the region of the umbilicus is a safe method for creating a pneumoperitoneum for laparoscopic operations after previous laparotomy. In the last three years, we have performed 144 laparoscopic operations in patients who had previously undergone one or two laparotomies. Pathology of the digestive system, genital organs, Cesarean section or abdominal war injuries were the most common causes of previous laparotomy. During those operations, and while entering the abdominal cavity, we experienced no complications, while in 7 patients we converted to laparotomy following diagnostic laparoscopy. In all patients, insertion of the Veress needle and trocar was performed in the umbilical region, namely the technique of closed laparoscopy. In no patient were adhesions found in the region of the umbilicus, and no abdominal organs were injured.

  6. The review and results of different methods for facial recognition

    Science.gov (United States)

    Le, Yifan

    2017-09-01

    In recent years, facial recognition has drawn much attention due to its wide potential applications. As a unique technology in biometric identification, facial recognition represents a significant improvement since it can operate without the cooperation of the people under detection. Hence, facial recognition is being taken up in defense systems, medical detection, human-behavior understanding, and other areas. Several theories and methods have been established to make progress in facial recognition: (1) a novel two-stage facial landmark localization method has been proposed which has a more accurate localization effect on a specific database; (2) a statistical face frontalization method has been proposed which outperforms state-of-the-art methods for face landmark localization; (3) a general facial landmark detection algorithm has been proposed to handle images with severe occlusion and images with large head poses; (4) three methods have been proposed for face alignment, including a shape-augmented regression method, a pose-indexed multi-view method, and a learning-based method that regresses local binary features. The aim of this paper is to analyze previous work on different aspects of facial recognition, focusing on concrete methods and their performance on various databases. In addition, some improvement measures and suggestions for potential applications are put forward.

  7. Beyond the hype: deep neural networks outperform established methods using a ChEMBL bioactivity benchmark set.

    Science.gov (United States)

    Lenselink, Eelke B; Ten Dijke, Niels; Bongers, Brandon; Papadatos, George; van Vlijmen, Herman W T; Kowalczyk, Wojtek; IJzerman, Adriaan P; van Westen, Gerard J P

    2017-08-14

    The increase of publicly available bioactivity data in recent years has fueled and catalyzed research in chemogenomics, data mining, and modeling approaches. As a direct result, over the past few years a multitude of different methods have been reported and evaluated, such as target fishing, nearest neighbor similarity-based methods, and Quantitative Structure Activity Relationship (QSAR)-based protocols. However, such studies are typically conducted on different datasets, using different validation strategies, and different metrics. In this study, different methods were compared using one single standardized dataset obtained from ChEMBL, which is made available to the public, using standardized metrics (BEDROC and Matthews Correlation Coefficient). Specifically, the performance of Naïve Bayes, Random Forests, Support Vector Machines, Logistic Regression, and Deep Neural Networks was assessed using QSAR and proteochemometric (PCM) methods. All methods were validated using both a random split validation and a temporal validation, with the latter being a more realistic benchmark of expected prospective execution. Deep Neural Networks are the top-performing classifiers, highlighting the added value of Deep Neural Networks over other more conventional methods. Moreover, the best method ('DNN_PCM') performed significantly better, at almost one standard deviation above the mean performance. Furthermore, multi-task and PCM implementations were shown to improve performance over single-task Deep Neural Networks. Conversely, target prediction performed almost two standard deviations below the mean performance. Random Forests, Support Vector Machines, and Logistic Regression performed around mean performance. Finally, using an ensemble of DNNs, alongside additional tuning, enhanced the relative performance by another 27% (compared with unoptimized 'DNN_PCM'). Here, a standardized set to test and evaluate different machine learning algorithms in the context of multi…
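
    As a quick reference for one of the standardized metrics used above, the Matthews Correlation Coefficient can be computed directly from confusion-matrix counts; this is a generic textbook implementation, not code from the study.

```python
import math

def matthews_corrcoef(tp, tn, fp, fn):
    """Matthews Correlation Coefficient from confusion-matrix counts;
    ranges from -1 (total disagreement) to +1 (perfect prediction)."""
    denom = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return (tp * tn - fp * fn) / denom if denom else 0.0

print(matthews_corrcoef(tp=90, tn=80, fp=20, fn=10))  # ~0.70
```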

  8. A Learning-Based Steganalytic Method against LSB Matching Steganography

    Directory of Open Access Journals (Sweden)

    Z. Xia

    2011-04-01

    This paper considers the detection of spatial-domain least significant bit (LSB) matching steganography in gray images. Natural images hold some inherent properties, such as the histogram, dependence between neighboring pixels, and dependence among pixels that are not adjacent to each other. These properties are likely to be disturbed by LSB matching. First, the histogram becomes smoother after LSB matching. Second, the two kinds of dependence are weakened by the message embedding. Accordingly, three features, based respectively on the image histogram, the neighborhood degree histogram and the run-length histogram, are extracted first. Then, a support vector machine is utilized to learn and discriminate the difference in features between cover and stego images. Experimental results prove that the proposed method possesses reliable detection ability and outperforms two previous state-of-the-art methods. Furthermore, conclusions are drawn by analyzing the individual performance of the three features and their fused feature.
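
    To make the first of the three features concrete, here is a toy version of a histogram-smoothness cue for an 8-bit grayscale image; the actual feature set, the neighborhood degree and run-length histograms, and the SVM training are described in the paper and are not reproduced.

```python
import numpy as np

def histogram_smoothness(img):
    """Mean absolute difference between adjacent grey-level histogram bins.
    LSB matching smooths the histogram, so this value tends to be smaller
    for stego images than for the corresponding cover images."""
    hist = np.bincount(img.ravel(), minlength=256).astype(float)
    hist /= hist.sum()
    return float(np.abs(np.diff(hist)).mean())
```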

  9. [A brief history of resuscitation - the influence of previous experience on modern techniques and methods].

    Science.gov (United States)

    Kucmin, Tomasz; Płowaś-Goral, Małgorzata; Nogalski, Adam

    2015-02-01

    Cardiopulmonary resuscitation (CPR) is a relatively novel branch of medical science; however, the first descriptions of mouth-to-mouth ventilation are to be found in the Bible, and the literature is full of descriptions of different resuscitation methods - from flagellation and ventilation with bellows, through hanging victims upside down and compressing the chest in order to stimulate ventilation, to rectal fumigation with tobacco smoke. The modern history of CPR starts with Kouwenhoven et al., who in 1960 published a paper regarding heart massage through chest compressions. Shortly after that, in 1961, Peter Safar presented a paradigm promoting opening the airway, performing rescue breaths and chest compressions. The first CPR guidelines were published in 1966. Since that time, the guidelines have been modified and improved numerous times by the two leading world expert organizations, the ERC (European Resuscitation Council) and the AHA (American Heart Association), and published in a new version every 5 years; at the time of writing, the 2010 guidelines apply. In this paper, the authors attempt to present the history of the development of resuscitation techniques and methods and to assess the influence of previous lifesaving methods on today's technologies, equipment and guidelines, which help save those whose lives are in danger due to sudden cardiac arrest. © 2015 MEDPRESS.

  10. Reference Values for Spirometry Derived Using Lambda, Mu, Sigma (LMS) Method in Korean Adults: in Comparison with Previous References.

    Science.gov (United States)

    Jo, Bum Seak; Myong, Jun Pyo; Rhee, Chin Kook; Yoon, Hyoung Kyu; Koo, Jung Wan; Kim, Hyoung Ryoul

    2018-01-15

    The present study aimed to update the prediction equations for spirometry and their lower limits of normal (LLN) by using the lambda, mu, sigma (LMS) method and to compare the outcomes with the values of previous spirometric reference equations. Spirometric data of 10,249 healthy non-smokers (8,776 females) were extracted from the fourth and fifth versions of the Korea National Health and Nutrition Examination Survey (KNHANES IV, 2007-2009; V, 2010-2012). Reference equations were derived using the LMS method which allows modeling skewness (lambda [L]), mean (mu [M]), and coefficient of variation (sigma [S]). The outcome equations were compared with previous reference values. Prediction equations were presented in the following form: predicted value = exp(a + b × ln(height) + c × ln(age) + M-spline). The new predicted values for spirometry and their LLN derived using the LMS method were shown to more accurately reflect transitions in pulmonary function in young adults than previous prediction equations derived using conventional regression analysis in 2013. There were partial discrepancies between the new reference values and the reference values from the Global Lung Function Initiative in 2012. The results should be interpreted with caution for young adults and elderly males, particularly in terms of the LLN for forced expiratory volume in one second/forced vital capacity in elderly males. Serial spirometry follow-up, together with correlations with other clinical findings, should be emphasized in evaluating the pulmonary function of individuals. Future studies are needed to improve the accuracy of reference data and to develop continuous reference values for spirometry across all ages. © 2018 The Korean Academy of Medical Sciences.
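
    The reported equation form is easy to evaluate once the fitted constants are known. In the sketch below the coefficients and the spline contribution are placeholders, not the published Korean reference values; the second function uses the standard LMS percentile formula to obtain the LLN.

```python
import math

def lms_predicted(height_cm, age_yr, a, b, c, m_spline):
    """Predicted spirometric value in the reported form:
    predicted = exp(a + b*ln(height) + c*ln(age) + M-spline)."""
    return math.exp(a + b * math.log(height_cm) + c * math.log(age_yr) + m_spline)

def lms_centile(m, s, lam, z=-1.645):
    """LMS percentile; z = -1.645 gives the lower limit of normal (the 5th
    percentile): M*(1 + L*S*z)**(1/L), or M*exp(S*z) when L == 0."""
    return m * math.exp(s * z) if lam == 0 else m * (1.0 + lam * s * z) ** (1.0 / lam)
```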

  11. CP Methods for Scheduling and Routing with Time-Dependent Task Costs

    DEFF Research Database (Denmark)

    Tierney, Kevin; Kelareva, Elena; Kilby, Philip

    2013-01-01

    … a cost function, and Mixed Integer Programming (MIP) are often used for solving such problems. However, Constraint Programming (CP), particularly with Lazy Clause Generation (LCG), has been found to be faster than MIP for some scheduling problems with time-varying action costs. In this paper, we compare CP and LCG against a solve-and-improve approach for two recently introduced problems in maritime logistics with time-varying action costs: the Liner Shipping Fleet Repositioning Problem (LSFRP) and the Bulk Port Cargo Throughput Optimisation Problem (BPCTOP). We present a novel CP model for the LSFRP, which is faster than all previous methods and outperforms a simplified automated planning model without time-varying costs. We show that an LCG solver is faster for solving the BPCTOP than a standard finite-domain CP solver with a simplified model. We find that CP and LCG are effective methods …

  12. Comparing methods of targeting obesity interventions in populations: An agent-based simulation.

    Science.gov (United States)

    Beheshti, Rahmatollah; Jalalpour, Mehdi; Glass, Thomas A

    2017-12-01

    Social networks as well as neighborhood environments have been shown to affect obesity-related behaviors, including energy intake and physical activity. Accordingly, harnessing social networks to improve the targeting of obesity interventions may be promising to the extent this leads to social multiplier effects and wider diffusion of intervention impact on populations. However, the literature evaluating network-based interventions has been inconsistent. Computational methods like agent-based models (ABM) provide researchers with tools to experiment in a simulated environment. We develop an ABM to compare conventional targeting methods (random selection, based on individual obesity risk, and vulnerable areas) with network-based targeting methods. We adapt a previously published and validated model of network diffusion of obesity-related behavior. We then build social networks among agents using a more realistic approach. We calibrate our model first against national-level data. Our results show that network-based targeting may lead to greater population impact. We also present a new targeting method that outperforms other methods in terms of intervention effectiveness at the population level.

  13. Physiological outperformance at the morphologically-transformed edge of the cyanobacteriosponge Terpios hoshinota (Suberitidae: Hadromerida) when confronting opponent corals.

    Directory of Open Access Journals (Sweden)

    Jih-Terng Wang

    Terpios hoshinota, an encrusting cyanosponge, is known as a strong substrate competitor of reef-building corals that kills encountered coral by overgrowth. Terpios outbreaks cause significant declines in living coral cover in Indo-Pacific coral reefs, with the damage usually lasting for decades. Recent studies show that there are morphological transformations at a sponge's growth front when confronting corals. Whether these morphological transformations at coral contacts are involved with physiological outperformance (e.g., higher metabolic activity or nutritional status) over other portions of Terpios remains equivocal. In this study, we compared indicators of photosynthetic capability and nitrogen status of the sponge-cyanobacteria association at portions proximal, middle, and distal to opponent corals. Terpios tissues in contact with corals displayed significant increases in photosynthetic oxygen production (ca. 61%), the δ13C value (ca. 4%), free proteinogenic amino acid content (ca. 85%), and the Gln/Glu ratio (ca. 115%) compared to middle and distal parts of the sponge. In contrast, the maximum quantum yield (Fv/Fm), which is the indicator usually used to represent the integrity of photosystem II, of cyanobacteria photosynthesis was low (0.256-0.319) and showed an inverse trend of higher values in the distal portion of the sponge, which might be due to high and variable levels of cyanobacterial phycocyanin. The inconsistent results between photosynthetic oxygen production and Fv/Fm values indicated that maximum quantum yield might not be a suitable indicator of the photosynthetic function of the Terpios-cyanobacteria association. Our data conclusively suggest that Terpios hoshinota competes with opponent corals not only by the morphological transformation of the sponge-cyanobacteria association but also by physiological outperformance in accumulating resources for the battle.

  14. Learning Algorithm of Boltzmann Machine Based on Spatial Monte Carlo Integration Method

    Directory of Open Access Journals (Sweden)

    Muneki Yasuda

    2018-04-01

    The machine learning techniques for Markov random fields are fundamental in various fields involving pattern recognition, image processing, sparse modeling, and earth science, and the Boltzmann machine is one of the most important models of a Markov random field. However, the inference and learning problems in the Boltzmann machine are NP-hard. The investigation of an effective learning algorithm for the Boltzmann machine is one of the most important challenges in the field of statistical machine learning. In this paper, we study Boltzmann machine learning based on the (first-order) spatial Monte Carlo integration method, referred to as the 1-SMCI learning method, which was proposed in the author's previous paper. In the first part of this paper, we compare the method with the maximum pseudo-likelihood estimation (MPLE) method using theoretical and numerical approaches, and show that the 1-SMCI learning method is more effective than MPLE. In the latter part, we compare the 1-SMCI learning method with other effective methods, ratio matching and minimum probability flow, using a numerical experiment, and show that the 1-SMCI learning method outperforms them.
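
    For readers who want the MPLE baseline in concrete form, the gradient-ascent sketch below fits a fully visible ±1 Boltzmann machine by maximum pseudo-likelihood; it is my own minimal rendering of that standard estimator, not the authors' 1-SMCI code.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def mple_fit(X, lr=0.05, epochs=200):
    """Maximum pseudo-likelihood estimation for a fully visible Boltzmann
    machine with +/-1 units; X has shape (samples, units)."""
    n, d = X.shape
    W = np.zeros((d, d))                  # symmetric couplings, zero diagonal
    b = np.zeros(d)
    for _ in range(epochs):
        H = X @ W + b                     # local fields h_i for every sample
        G = X * sigmoid(-2.0 * X * H)     # gradient kernel of log sigma(2*x_i*h_i)
        b += lr * 2.0 * G.mean(axis=0)
        gW = 2.0 * (X.T @ G + G.T @ X) / n
        np.fill_diagonal(gW, 0.0)
        W += lr * gW
    return W, b
```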

  15. Deviation-based spam-filtering method via stochastic approach

    Science.gov (United States)

    Lee, Daekyung; Lee, Mi Jin; Kim, Beom Jun

    2018-03-01

    In the presence of a huge number of possible purchase choices, ranks or ratings of items by others often play very important roles for a buyer making a final purchase decision. Perfectly objective rating is an impossible task to achieve, and we often use an average rating built on how previous buyers estimated the quality of the product. The problem with using a simple average rating is that it can easily be polluted by careless users whose evaluation of products cannot be trusted, and by malicious spammers who try to bias the rating result on purpose. In this letter we suggest how the trustworthiness of individual users can be systematically and quantitatively reflected to build a more reliable rating system. We compute the suitably defined reliability of each user based on the user's rating pattern for all products she evaluated. We call our proposed method the deviation-based ranking, since the statistical significance of each user's rating pattern with respect to the average rating pattern is the key ingredient. We find that our deviation-based ranking method outperforms existing methods in filtering out careless random evaluators as well as malicious spammers.
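
    A minimal sketch of the deviation idea follows; the inverse weighting used here is my own simplification, whereas the letter weights users by the statistical significance of their deviation from the average rating pattern.

```python
import numpy as np

def reliability_weights(R):
    """R[u, p] is user u's rating of product p (NaN where unrated). Users
    whose ratings deviate strongly from the per-item means get low weight."""
    item_mean = np.nanmean(R, axis=0)
    dev = np.nanmean((R - item_mean) ** 2, axis=1)  # per-user mean squared deviation
    return 1.0 / (1.0 + dev)

def weighted_item_ratings(R, w):
    """Reliability-weighted average rating of each item."""
    mask = ~np.isnan(R)
    filled = np.where(mask, R, 0.0)
    return (w[:, None] * filled).sum(axis=0) / (w[:, None] * mask).sum(axis=0)
```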

  16. The MIMIC Method with Scale Purification for Detecting Differential Item Functioning

    Science.gov (United States)

    Wang, Wen-Chung; Shih, Ching-Lin; Yang, Chih-Chien

    2009-01-01

    This study implements a scale purification procedure onto the standard MIMIC method for differential item functioning (DIF) detection and assesses its performance through a series of simulations. It is found that the MIMIC method with scale purification (denoted as M-SP) outperforms the standard MIMIC method (denoted as M-ST) in controlling…

  17. Advanced GF(32) nonbinary LDPC coded modulation with non-uniform 9-QAM outperforming star 8-QAM.

    Science.gov (United States)

    Liu, Tao; Lin, Changyu; Djordjevic, Ivan B

    2016-06-27

    In this paper, we first describe a 9-symbol non-uniform signaling scheme based on a Huffman code, in which different symbols are transmitted with different probabilities. By using the Huffman procedure, a prefix code is designed to approach the optimal performance. Then, we introduce an algorithm to determine the optimal signal constellation sets for our proposed non-uniform scheme with the criterion of maximizing the constellation figure of merit (CFM). The proposed non-uniform polarization-multiplexed 9-QAM signaling scheme has the same spectral efficiency as conventional 8-QAM. Additionally, we propose a specially designed GF(32) nonbinary quasi-cyclic LDPC code for the coded modulation system based on the 9-QAM non-uniform scheme. Further, we study the efficiency of our proposed non-uniform 9-QAM combined with nonbinary LDPC coding, and demonstrate by Monte Carlo simulation that the proposed GF(32) nonbinary LDPC coded 9-QAM scheme outperforms nonbinary LDPC coded uniform 8-QAM by at least 0.8 dB.
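
    The Huffman step is standard and easy to reproduce; the sketch below builds a prefix code for nine symbols under illustrative dyadic probabilities (the paper's optimized constellation and symbol probabilities are not reproduced here).

```python
import heapq

def huffman_code(probs):
    """Build a Huffman prefix code: repeatedly merge the two least likely
    subtrees, prefixing their codewords with 0 and 1."""
    heap = [(p, i, {i: ""}) for i, p in enumerate(probs)]
    heapq.heapify(heap)
    next_id = len(probs)                 # unique tie-breaker for merged nodes
    while len(heap) > 1:
        p1, _, c1 = heapq.heappop(heap)
        p2, _, c2 = heapq.heappop(heap)
        merged = {s: "0" + w for s, w in c1.items()}
        merged.update({s: "1" + w for s, w in c2.items()})
        heapq.heappush(heap, (p1 + p2, next_id, merged))
        next_id += 1
    return heap[0][2]

# Nine symbols with dyadic probabilities summing to 1:
print(huffman_code([0.25, 0.125, 0.125, 0.125, 0.125, 0.0625, 0.0625, 0.0625, 0.0625]))
```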

  18. A Mozart is not a Pavarotti: singers outperform instrumentalists on foreign accent imitation.

    Science.gov (United States)

    Christiner, Markus; Reiterer, Susanne Maria

    2015-01-01

    Recent findings have shown that people with higher musical aptitude are also better at oral language imitation tasks. However, whether singing capacity and instrument playing contribute differently to the imitation of speech has been ignored so far. Research has only recently started to understand that instrumentalists develop quite distinct skills when compared to vocalists. In the same vein, the role of the vocal motor system in language acquisition processes has been poorly investigated, as most investigations (neurobiological and behavioral) favor examining speech perception. We set out to test whether the vocal motor system can influence the ability to learn, produce and perceive new languages by contrasting instrumentalists and vocalists. We investigated 96 participants: 27 instrumentalists, 33 vocalists and 36 non-musicians/non-singers. They were tested for their abilities to imitate foreign speech - an unknown language (Hindi) and a second language (English) - and for their musical aptitude. Results revealed that both instrumentalists and vocalists have a higher ability to imitate unintelligible speech and foreign accents than non-musicians/non-singers. Within the musician group, vocalists outperformed instrumentalists significantly. First, adaptive plasticity for speech imitation is not reliant on audition alone but also on vocal-motor-induced processes. Second, the vocal flexibility of singers goes together with higher speech imitation aptitude. Third, vocal motor training, as in singers, may speed up foreign language acquisition processes.

  19. Human dental age estimation using third molar developmental stages: does a Bayesian approach outperform regression models to discriminate between juveniles and adults?

    Science.gov (United States)

    Thevissen, P W; Fieuws, S; Willems, G

    2010-01-01

    Dental age estimation methods based on radiologically detected third molar developmental stages are implemented in forensic age assessments to discriminate between juveniles and adults when judging young unaccompanied asylum seekers. Accurate and unbiased age estimates combined with appropriately quantified uncertainties are the required properties for accurate forensic reporting. In this study, a subset of 910 individuals uniformly distributed in age between 16 and 22 years was selected from an existing dataset collected by Gunst et al. containing 2,513 panoramic radiographs with known third molar developmental stages of Belgian Caucasian men and women. This subset was randomly split into a training set, used to develop a classical regression analysis and a Bayesian model for the multivariate distribution of third molar developmental stages conditional on age, and a test set, used to assess the performance of both models. The aim of this study was to verify whether the Bayesian approach differentiates the age of maturity more precisely and removes the bias that disadvantages systematically overestimated young individuals. The Bayesian model discriminates subjects older than 18 years more appropriately and produces more meaningful prediction intervals, but does not strongly outperform the classical approaches.

  20. Hip fracture risk assessment: artificial neural network outperforms conditional logistic regression in an age- and sex-matched case control study.

    Science.gov (United States)

    Tseng, Wo-Jan; Hung, Li-Wei; Shieh, Jiann-Shing; Abbod, Maysam F; Lin, Jinn

    2013-07-15

    Osteoporotic hip fractures, with significant morbidity and excess mortality among the elderly, have imposed huge health and economic burdens on societies worldwide. In this age- and sex-matched case control study, we examined the risk factors of hip fractures and assessed the fracture risk by conditional logistic regression (CLR) and an ensemble artificial neural network (ANN). The performances of these two classifiers were compared. The study population consisted of 217 pairs (149 women and 68 men) of fractures and controls with an age older than 60 years. All the participants were interviewed with the same standardized questionnaire including questions on 66 risk factors in 12 categories. Univariate CLR analysis was initially conducted to examine the unadjusted odds ratio of all potential risk factors. The significant risk factors were then tested by multivariate analyses. For fracture risk assessment, the participants were randomly divided into modeling and testing datasets for 10-fold cross-validation analyses. The predicting models built by CLR and ANN in the modeling datasets were applied to the testing datasets for generalization study. The performances, including discrimination and calibration, were compared with non-parametric Wilcoxon tests. In univariate CLR analyses, 16 variables reached significance, and six of them remained significant in multivariate analyses: low T score, low BMI, low MMSE score, milk intake, walking difficulty, and a significant fall at home. For discrimination, ANN outperformed CLR in both 16- and 6-variable analyses in the modeling and testing datasets. The risk factors of hip fracture are more personal than environmental. With adequate model construction, ANN may outperform CLR in both discrimination and calibration. ANN seems to have not been developed to its full potential and efforts should be made to improve its performance.

  1. An Improved Ensemble Learning Method for Classifying High-Dimensional and Imbalanced Biomedicine Data.

    Science.gov (United States)

    Yu, Hualong; Ni, Jun

    2014-01-01

    Training classifiers on skewed data is a technically challenging task, and it becomes more difficult if the data is simultaneously high-dimensional. Skewed data often appear in the biomedical field. In this study, we deal with this problem by combining the asymmetric bagging ensemble classifier (asBagging), presented in previous work, with an improved random subspace (RS) generation strategy called feature subspace (FSS). Specifically, FSS is a novel method to promote the balance between accuracy and diversity of the base classifiers in asBagging. In view of the strong generalization capability of the support vector machine (SVM), we adopt it as the base classifier. Extensive experiments on four benchmark biomedicine data sets indicate that the proposed ensemble learning method outperforms many baseline approaches in terms of the Accuracy, F-measure, G-mean and AUC evaluation criteria; thus it can be regarded as an effective and efficient tool for dealing with high-dimensional and imbalanced biomedical data.
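
    A compact sketch of the asBagging-plus-subspace idea follows; the class labels, subspace fraction and SVM settings are my own illustrative choices, and the paper's FSS strategy for balancing accuracy and diversity is more refined than the plain random subspaces used here.

```python
import numpy as np
from sklearn.svm import SVC

def asbagging_rs_predict(X, y, X_test, n_estimators=25, subspace=0.5, seed=0):
    """Asymmetric bagging with random feature subspaces: each SVM sees all
    minority samples (label 1), an equally sized bootstrap of the majority
    class (label 0), and a random subset of features; predictions are a
    majority vote over the ensemble."""
    rng = np.random.default_rng(seed)
    Xi, Xj = X[y == 1], X[y == 0]
    votes = np.zeros(len(X_test))
    for _ in range(n_estimators):
        idx = rng.integers(0, len(Xj), size=len(Xi))        # bootstrap majority
        n_feats = max(1, int(subspace * X.shape[1]))
        feats = rng.choice(X.shape[1], size=n_feats, replace=False)
        Xb = np.vstack([Xi[:, feats], Xj[idx][:, feats]])
        yb = np.r_[np.ones(len(Xi)), np.zeros(len(Xi))]
        votes += SVC(kernel="rbf").fit(Xb, yb).predict(X_test[:, feats])
    return (votes / n_estimators >= 0.5).astype(int)
```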

  2. Improving local clustering based top-L link prediction methods via asymmetric link clustering information

    Science.gov (United States)

    Wu, Zhihao; Lin, Youfang; Zhao, Yiji; Yan, Hongyan

    2018-02-01

    Networks can represent a wide range of complex systems, such as social, biological and technological systems. Link prediction is one of the most important problems in network analysis, and has attracted much research interest recently. Many link prediction methods have been proposed to solve this problem with various techniques. We note that clustering information plays an important role in solving the link prediction problem. In the previous literature, the node clustering coefficient appears frequently in many link prediction methods. However, the node clustering coefficient is limited in describing the role of a common neighbor in different local networks, because it cannot distinguish the different clustering abilities of a node towards different node pairs. In this paper, we shift our focus from nodes to links, and propose the concept of the asymmetric link clustering (ALC) coefficient. Further, we improve three node-clustering-based link prediction methods via the concept of ALC. The experimental results demonstrate that ALC-based methods outperform node-clustering-based methods, especially achieving remarkable improvements on the food web, hamster friendship and Internet networks. Besides, compared with other methods, the performance of ALC-based methods is very stable in both globalized and personalized top-L link prediction tasks.
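
    The node-clustering baseline that the ALC coefficient improves upon can be written in a few lines with networkx: the CCLP-style score below sums the clustering coefficients of the common neighbours of a candidate pair. The ALC coefficient's own definition is given in the paper and is not reproduced here.

```python
import networkx as nx

def cclp_scores(G, candidate_pairs):
    """Node-clustering-based link prediction:
    score(x, y) = sum of clustering coefficients of the common neighbours."""
    cc = nx.clustering(G)
    return {(x, y): sum(cc[z] for z in nx.common_neighbors(G, x, y))
            for (x, y) in candidate_pairs}

G = nx.karate_club_graph()
print(cclp_scores(G, [(0, 9), (5, 16)]))
```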

  3. A Novel Activated-Charcoal-Doped Multiwalled Carbon Nanotube Hybrid for Quasi-Solid-State Dye-Sensitized Solar Cell Outperforming Pt Electrode.

    Science.gov (United States)

    Arbab, Alvira Ayoub; Sun, Kyung Chul; Sahito, Iftikhar Ali; Qadir, Muhammad Bilal; Choi, Yun Seon; Jeong, Sung Hoon

    2016-03-23

    Highly conductive mesoporous carbon structures based on multiwalled carbon nanotubes (MWCNTs) and activated charcoal (AC) were synthesized by an enzymatic dispersion method. The synthesized carbon configuration consists of synchronized structures of highly conductive MWCNTs and a porous activated charcoal morphology. The proposed carbon structure was used as a counter electrode (CE) for quasi-solid-state dye-sensitized solar cells (DSSCs). The AC-doped MWCNT hybrid showed much enhanced electrocatalytic activity (ECA) toward the polymer gel electrolyte and revealed a charge transfer resistance (RCT) of 0.60 Ω, demonstrating a fast electron transport mechanism. The exceptional electrocatalytic activity and high conductivity of the AC-doped MWCNT hybrid CE are associated with its synchronized features of high surface area and electronic conductivity, which produce a higher interfacial reaction with the quasi-solid electrolyte. Morphological studies confirm an amorphous, conductive 3D carbon structure with a high density of CNT colloid. The excessive oxygen surface groups and defect-rich structure can entrap a large volume of quasi-solid electrolyte and provide multiple sites for the iodide/triiodide catalytic reaction. The resultant D719 DSSC composed of this novel hybrid CE fabricated with a polymer gel electrolyte demonstrated an efficiency of 10.05% with a high fill factor (83%), outperforming the Pt electrode. Such facile synthesis of the CE, together with low cost and sustainability, supports the proposed DSSC structure standing out as an efficient next-generation photovoltaic device.

  4. Adjusted Empirical Likelihood Method in the Presence of Nuisance Parameters with Application to the Sharpe Ratio

    Directory of Open Access Journals (Sweden)

    Yuejiao Fu

    2018-04-01

    The Sharpe ratio is a widely used risk-adjusted performance measurement in economics and finance. Most of the known statistical inferential methods devoted to the Sharpe ratio are based on the assumption that the data are normally distributed. In this article, without making any distributional assumption on the data, we develop the adjusted empirical likelihood method to obtain inference for a parameter of interest in the presence of nuisance parameters. We show that the log adjusted empirical likelihood ratio statistic is asymptotically distributed as the chi-square distribution. The proposed method is applied to obtain inference for the Sharpe ratio. Simulation results illustrate that the proposed method is comparable to Jobson and Korkie's method (1981) and outperforms the empirical likelihood method when the data are from a symmetric distribution. In addition, when the data are from a skewed distribution, the proposed method significantly outperforms all other existing methods. A real-data example is analyzed to exemplify the application of the proposed method.
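
    For reference, the parameter of interest itself is one line; the adjusted empirical likelihood machinery that supplies the distribution-free inference is developed in the article and is not sketched here.

```python
import numpy as np

def sharpe_ratio(returns, risk_free=0.0):
    """Sample Sharpe ratio: mean excess return over its standard deviation."""
    excess = np.asarray(returns, dtype=float) - risk_free
    return excess.mean() / excess.std(ddof=1)

print(sharpe_ratio([0.02, -0.01, 0.03, 0.01], risk_free=0.005))
```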

  5. Methods of Reverberation Mapping. I. Time-lag Determination by Measures of Randomness

    Energy Technology Data Exchange (ETDEWEB)

    Chelouche, Doron; Pozo-Nuñez, Francisco [Department of Physics, Faculty of Natural Sciences, University of Haifa, Haifa 3498838 (Israel); Zucker, Shay, E-mail: doron@sci.haifa.ac.il, E-mail: francisco.pozon@gmail.com, E-mail: shayz@post.tau.ac.il [Department of Geosciences, Raymond and Beverly Sackler Faculty of Exact Sciences, Tel Aviv University, Tel Aviv 6997801 (Israel)

    2017-08-01

    A class of methods for measuring time delays between astronomical time series is introduced in the context of quasar reverberation mapping, which is based on measures of randomness or complexity of the data. Several distinct statistical estimators are considered that do not rely on polynomial interpolations of the light curves nor on their stochastic modeling, and do not require binning in correlation space. Methods based on von Neumann’s mean-square successive-difference estimator are found to be superior to those using other estimators. An optimized von Neumann scheme is formulated, which better handles sparsely sampled data and outperforms current implementations of discrete correlation function methods. This scheme is applied to existing reverberation data of varying quality, and consistency with previously reported time delays is found. In particular, the size–luminosity relation of the broad-line region in quasars is recovered with a scatter comparable to that obtained by other works, yet with fewer assumptions made concerning the process underlying the variability. The proposed method for time-lag determination is particularly relevant for irregularly sampled time series, and in cases where the process underlying the variability cannot be adequately modeled.
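
    The von Neumann scheme lends itself to a very small implementation: shift one light curve by a trial lag, merge the two into a single time-ordered series, and keep the lag that minimizes the mean-square successive difference. The sketch below assumes both curves are brought to a common scale by standardization and ignores measurement errors and the optimizations described in the paper.

```python
import numpy as np

def von_neumann_lag(t1, f1, t2, f2, lags):
    """Time-delay estimate: for each trial lag tau, shift the second light
    curve, merge both into one time-ordered series, and compute von Neumann's
    mean-square successive difference; the minimizing tau is the estimate."""
    z1 = (f1 - f1.mean()) / f1.std()
    z2 = (f2 - f2.mean()) / f2.std()
    scores = []
    for tau in lags:
        t = np.concatenate([t1, t2 - tau])
        f = np.concatenate([z1, z2])[np.argsort(t)]
        scores.append(np.mean(np.diff(f) ** 2))
    return lags[int(np.argmin(scores))]
```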

  6. Comparison of Bayesian clustering and edge detection methods for inferring boundaries in landscape genetics

    Science.gov (United States)

    Safner, T.; Miller, M.P.; McRae, B.H.; Fortin, M.-J.; Manel, S.

    2011-01-01

    Recently, techniques available for identifying clusters of individuals or boundaries between clusters using genetic data from natural populations have expanded rapidly. Consequently, there is a need to evaluate these different techniques. We used spatially-explicit simulation models to compare three spatial Bayesian clustering programs and two edge detection methods. Spatially-structured populations were simulated where a continuous population was subdivided by barriers. We evaluated the ability of each method to correctly identify boundary locations while varying: (i) time after divergence, (ii) strength of isolation by distance, (iii) level of genetic diversity, and (iv) amount of gene flow across barriers. To further evaluate the methods' effectiveness in detecting genetic clusters in natural populations, we used previously published data on North American pumas and a European shrub. Our results show that with simulated and empirical data, the Bayesian spatial clustering algorithms outperformed direct edge detection methods. All methods incorrectly detected boundaries in the presence of strong patterns of isolation by distance. Based on this finding, we support the application of Bayesian spatial clustering algorithms for boundary detection in empirical datasets, with necessary tests for the influence of isolation by distance. © 2011 by the authors; licensee MDPI, Basel, Switzerland.

  7. RELIC: a novel dye-bias correction method for Illumina Methylation BeadChip.

    Science.gov (United States)

    Xu, Zongli; Langie, Sabine A S; De Boever, Patrick; Taylor, Jack A; Niu, Liang

    2017-01-03

    The Illumina Infinium HumanMethylation450 BeadChip and its successor, the Infinium MethylationEPIC BeadChip, have been extensively utilized in epigenome-wide association studies. Both arrays use two fluorescent dyes (Cy3-green/Cy5-red) to measure methylation level at CpG sites. However, performance differences between dyes can result in biased estimates of methylation levels. Here we describe a novel method, called REgression on Logarithm of Internal Control probes (RELIC), to correct for dye bias on the whole array by utilizing the intensity values of paired internal control probes that monitor the two color channels. We evaluate the method on several datasets against other widely used dye-bias correction methods. Results on data quality improvement showed that RELIC correction statistically significantly outperforms alternative dye-bias correction methods. We incorporated the method into the R package ENmix, which is freely available from the Bioconductor website ( https://www.bioconductor.org/packages/release/bioc/html/ENmix.html ). RELIC is an efficient and robust method to correct for dye bias in Illumina Methylation BeadChip data. It outperforms other alternative methods and is conveniently implemented in the R package ENmix to facilitate DNA methylation studies.
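
    Schematically, the regression idea can be sketched as below: fit log green-channel control intensities on log red-channel control intensities across the paired internal controls, then use the fitted line to bring red-channel measurements onto the green scale. This is a simplification for illustration only; per-array fits, probe pairing and the exact model are handled by the ENmix implementation.

```python
import numpy as np

def dye_bias_correct(red, ctrl_red, ctrl_green):
    """Regress log(green control) on log(red control), then use the fitted
    line to map red-channel intensities onto the green-channel scale."""
    slope, intercept = np.polyfit(np.log(ctrl_red), np.log(ctrl_green), 1)
    return np.exp(intercept + slope * np.log(red))
```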

  8. The influence of previous subject experience on interactions during peer instruction in an introductory physics course: A mixed methods analysis

    Science.gov (United States)

    Vondruska, Judy A.

    Over the past decade, peer instruction and the introduction of student response systems has provided a means of improving student engagement and achievement in large-lecture settings. While the nature of the student discourse occurring during peer instruction is less understood, existing studies have shown student ideas about the subject, extraneous cues, and confidence level appear to matter in the student-student discourse. Using a mixed methods research design, this study examined the influence of previous subject experience on peer instruction in an introductory, one-semester Survey of Physics course. Quantitative results indicated students in discussion pairs where both had previous subject experience were more likely to answer clicker question correctly both before and after peer discussion compared to student groups where neither partner had previous subject experience. Students in mixed discussion pairs were not statistically different in correct response rates from the other pairings. There was no statistically significant difference between the experience pairs on unit exam scores or the Peer Instruction Partner Survey. Although there was a statistically significant difference between the pre-MPEX and post-MPEX scores, there was no difference between the members of the various subject experience peer discussion pairs. The qualitative study, conducted after the quantitative study, helped to inform the quantitative results by exploring the nature of the peer interactions through survey questions and a series of focus groups discussions. While the majority of participants described a benefit to the use of clickers in the lecture, their experience with their discussion partners varied. Students with previous subject experience tended to describe peer instruction more positively than students who did not have previous subject experience, regardless of the experience level of their partner. They were also more likely to report favorable levels of comfort with

  9. Importance of a species' socioecology: Wolves outperform dogs in a conspecific cooperation task.

    Science.gov (United States)

    Marshall-Pescini, Sarah; Schwarz, Jonas F L; Kostelnik, Inga; Virányi, Zsófia; Range, Friederike

    2017-10-31

    A number of domestication hypotheses suggest that dogs have acquired a more tolerant temperament than wolves, promoting cooperative interactions with humans and conspecifics. This selection process has been proposed to resemble the one responsible for our own greater cooperative inclinations in comparison with our closest living relatives. However, the socioecology of wolves and dogs, with the former relying more heavily on cooperative activities, predicts that at least with conspecifics, wolves should cooperate better than dogs. Here we tested similarly raised wolves and dogs in a cooperative string-pulling task with conspecifics and found that wolves outperformed dogs, despite comparable levels of interest in the task. Whereas wolves coordinated their actions so as to simultaneously pull the rope ends, leading to success, dogs pulled the ropes in alternate moments, thereby never succeeding. Indeed in dog dyads it was also less likely that both members simultaneously engaged in other manipulative behaviors on the apparatus. Different conflict-management strategies are likely responsible for these results, with dogs' avoidance of potential competition over the apparatus constraining their capacity to coordinate actions. Wolves, in contrast, did not hesitate to manipulate the ropes simultaneously, and once cooperation was initiated, rapidly learned to coordinate in more complex conditions as well. Social dynamics (rank and affiliation) played a key role in success rates. Results call those domestication hypotheses that suggest dogs evolved greater cooperative inclinations into question, and rather support the idea that dogs' and wolves' different social ecologies played a role in affecting their capacity for conspecific cooperation and communication. Published under the PNAS license.

  10. An automated patient recognition method based on an image-matching technique using previous chest radiographs in the picture archiving and communication system environment

    International Nuclear Information System (INIS)

    Morishita, Junji; Katsuragawa, Shigehiko; Kondo, Keisuke; Doi, Kunio

    2001-01-01

    An automated patient recognition method for correcting 'wrong' chest radiographs being stored in a picture archiving and communication system (PACS) environment has been developed. The method is based on an image-matching technique that uses previous chest radiographs. For identification of a 'wrong' patient, the correlation value was determined for a previous image of a patient and a new, current image of the presumed corresponding patient. The current image was shifted horizontally and vertically and rotated, so that we could determine the best match between the two images. The results indicated that the correlation values between the current and previous images for the same, 'correct' patients were generally greater than those for different, 'wrong' patients. Although the two histograms for the same patient and for different patients overlapped at correlation values greater than 0.80, most parts of the histograms were separated. The correlation value was compared with a threshold value that was determined based on an analysis of the histograms of correlation values obtained for the same patient and for different patients. If the current image is considered potentially to belong to a 'wrong' patient, then a warning sign with the probability for a 'wrong' patient is provided to alert radiology personnel. Our results indicate that at least half of the 'wrong' images in our database can be identified correctly with the method described in this study. The overall performance in terms of a receiver operating characteristic curve showed a high performance of the system. The results also indicate that some readings of 'wrong' images for a given patient in the PACS environment can be prevented by use of the method we developed. Therefore an automated warning system for patient recognition would be useful in correcting 'wrong' images being stored in the PACS environment
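
    The matching step can be sketched as a search over small shifts for the maximum normalized cross-correlation between the previous and current radiographs; rotation, resizing and the warning threshold (around 0.80, per the histograms above) are omitted, and same-sized grayscale arrays with non-constant content are assumed.

```python
import numpy as np

def best_match_correlation(prev, curr, max_shift=8):
    """Maximum normalized cross-correlation between two same-sized images
    over horizontal/vertical shifts up to max_shift pixels."""
    best = -1.0
    h, w = prev.shape
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            a = prev[max(0, dy):h + min(0, dy), max(0, dx):w + min(0, dx)]
            b = curr[max(0, -dy):h - max(0, dy), max(0, -dx):w - max(0, dx)]
            a = a - a.mean()
            b = b - b.mean()
            r = (a * b).sum() / np.sqrt((a * a).sum() * (b * b).sum())
            best = max(best, r)
    return best  # compare against a threshold (e.g., ~0.80) to flag 'wrong' patients
```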

  11. Predicting respiratory motion signals for image-guided radiotherapy using multi-step linear methods (MULIN)

    International Nuclear Information System (INIS)

    Ernst, Floris; Schweikard, Achim

    2008-01-01

    Forecasting of respiration motion in image-guided radiotherapy requires algorithms that can accurately and efficiently predict target location. Improved methods for respiratory motion forecasting were developed and tested. MULIN, a new family of prediction algorithms based on linear expansions of the prediction error, was developed and tested. Computer-generated data with a prediction horizon of 150 ms was used for testing in simulation experiments. MULIN was compared to Least Mean Squares-based predictors (LMS; normalized LMS, nLMS; wavelet-based multiscale autoregression, wLMS) and a multi-frequency Extended Kalman Filter (EKF) approach. The in vivo performance of the algorithms was tested on data sets of patients who underwent radiotherapy. The new MULIN methods are highly competitive, outperforming the LMS and the EKF prediction algorithms in real-world settings and performing similarly to optimized nLMS and wLMS prediction algorithms. On simulated, periodic data the MULIN algorithms are outperformed only by the EKF approach due to its inherent advantage in predicting periodic signals. In the presence of noise, the MULIN methods significantly outperform all other algorithms. The MULIN family of algorithms is a feasible tool for the prediction of respiratory motion, performing as well as or better than conventional algorithms while requiring significantly lower computational complexity. The MULIN algorithms are of special importance wherever high-speed prediction is required. (orig.)
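
    The MULIN updates themselves are specified in the paper; to make the setting concrete, here is the standard normalized LMS comparator named above, written as a one-step-ahead predictor (the clinical horizon of 150 ms corresponds to several samples and would need a multi-step variant).

```python
import numpy as np

def nlms_predict(signal, order=4, mu=0.5, eps=1e-6):
    """One-step-ahead normalized LMS prediction of a 1-D signal."""
    s = np.asarray(signal, dtype=float)
    w = np.zeros(order)
    preds = np.zeros_like(s)
    for n in range(order, len(s)):
        x = s[n - order:n][::-1]          # most recent samples first
        preds[n] = w @ x
        e = s[n] - preds[n]               # prediction error
        w += mu * e * x / (eps + x @ x)   # normalized gradient step
    return preds
```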

  12. Predicting respiratory motion signals for image-guided radiotherapy using multi-step linear methods (MULIN)

    Energy Technology Data Exchange (ETDEWEB)

    Ernst, Floris; Schweikard, Achim [University of Luebeck, Institute for Robotics and Cognitive Systems, Luebeck (Germany)

    2008-06-15

    Forecasting of respiration motion in image-guided radiotherapy requires algorithms that can accurately and efficiently predict target location. Improved methods for respiratory motion forecasting were developed and tested. MULIN, a new family of prediction algorithms based on linear expansions of the prediction error, was developed and tested. Computer-generated data with a prediction horizon of 150 ms was used for testing in simulation experiments. MULIN was compared to Least Mean Squares-based predictors (LMS; normalized LMS, nLMS; wavelet-based multiscale autoregression, wLMS) and a multi-frequency Extended Kalman Filter (EKF) approach. The in vivo performance of the algorithms was tested on data sets of patients who underwent radiotherapy. The new MULIN methods are highly competitive, outperforming the LMS and the EKF prediction algorithms in real-world settings and performing similarly to optimized nLMS and wLMS prediction algorithms. On simulated, periodic data the MULIN algorithms are outperformed only by the EKF approach due to its inherent advantage in predicting periodic signals. In the presence of noise, the MULIN methods significantly outperform all other algorithms. The MULIN family of algorithms is a feasible tool for the prediction of respiratory motion, performing as well as or better than conventional algorithms while requiring significantly lower computational complexity. The MULIN algorithms are of special importance wherever high-speed prediction is required. (orig.)

  13. Outperforming markets

    DEFF Research Database (Denmark)

    Nielsen, Christian; Rimmel, Gunnar; Yosano, Tadanori

    2015-01-01

    This article studies the effects of disclosure practices in Japanese IPO prospectuses on long-term stock performance and on the bid-ask spread, as a proxy for the cost of capital, after a company is admitted to the stock exchange. A disclosure index methodology is applied to 120 IPO prospectuses from 2003 … Intellectual capital information leads to significantly better long-term performance against a reference portfolio, and is thus important to the capital market. Further, superior disclosure of IC reduces the bid-ask spread in the long term, indicating that such disclosures are important in an IPO setting. Analysts …

  14. Semi-automated extraction of longitudinal subglacial bedforms from digital terrain models - Two new methods

    Science.gov (United States)

    Jorge, Marco G.; Brennand, Tracy A.

    2017-07-01

    Relict drumlin and mega-scale glacial lineation (positive relief, longitudinal subglacial bedforms - LSBs) morphometry has been used as a proxy for paleo ice-sheet dynamics. LSB morphometric inventories have relied on manual mapping, which is slow and subjective and thus potentially difficult to reproduce. Automated methods are faster and reproducible, but previous methods for LSB semi-automated mapping have not been highly successful. Here, two new object-based methods for the semi-automated extraction of LSBs (footprints) from digital terrain models are compared in a test area in the Puget Lowland, Washington, USA. As segmentation procedures to create LSB-candidate objects, the normalized closed contour method relies on the contouring of a normalized local relief model addressing LSBs on slopes, and the landform elements mask method relies on the classification of landform elements derived from the digital terrain model. For identifying which LSB-candidate objects correspond to LSBs, both methods use the same LSB operational definition: a ruleset encapsulating expert knowledge, published morphometric data, and the morphometric range of LSBs in the study area. The normalized closed contour method was separately applied to four different local relief models, two computed in moving windows and two hydrology-based. Overall, the normalized closed contour method outperformed the landform elements mask method, and it performed best on a hydrology-based relief model derived from a multiple-direction flow-routing algorithm. For an assessment of its transferability, the normalized closed contour method was evaluated on a second area, the Chautauqua drumlin field, Pennsylvania and New York, USA, where it performed better than in the Puget Lowland. A broad comparison to previous methods suggests that the normalized closed contour method may be the most capable method to date, but more development is required.

  15. High Spatial Resolution Visual Band Imagery Outperforms Medium Resolution Spectral Imagery for Ecosystem Assessment in the Semi-Arid Brazilian Sertão

    Directory of Open Access Journals (Sweden)

    Ran Goldblatt

    2017-12-01

    Semi-arid ecosystems play a key role in global agricultural production, seasonal carbon cycle dynamics, and longer-run climate change. Because semi-arid landscapes are heterogeneous and often sparsely vegetated, repeated and large-scale ecosystem assessments of these regions have to date been impossible. Here, we assess the potential of high-spatial-resolution visible band imagery for semi-arid ecosystem mapping. We use WorldView satellite imagery at 0.3–0.5 m resolution to develop a reference data set of nearly 10,000 labeled examples of three classes - trees, shrubs/grasses, and bare land - across 1000 km² of the semi-arid Sertão region of northeast Brazil. Using Google Earth Engine, we show that classification with low-spectral but high-spatial resolution input (WorldView) outperforms classification with the full spectral information available from Landsat 30 m resolution imagery as input. Classification with high spatial resolution input improves detection of sparse vegetation and distinction between trees and seasonal shrubs and grasses, two features which are lost at coarser spatial (but higher spectral) resolution input. Our total tree cover estimates for the study area disagree with recent estimates using other methods that may underestimate tree cover because they confuse trees with seasonal vegetation (shrubs and grasses). This distinction is important for monitoring the seasonal and long-run carbon cycle and ecosystem health. Our results suggest that newer remote sensing products that promise high-frequency global coverage at high spatial but lower spectral resolution may offer new possibilities for direct monitoring of the world's semi-arid ecosystems, and we provide methods that could be scaled to do so.

  16. Charged-particle thermonuclear reaction rates: IV. Comparison to previous work

    International Nuclear Information System (INIS)

    Iliadis, C.; Longland, R.; Champagne, A.E.; Coc, A.

    2010-01-01

    We compare our Monte Carlo reaction rates (see Paper II of this issue) to previous results that were obtained by using the classical method of computing thermonuclear reaction rates. For each reaction, the comparison is presented using two types of graphs: the first shows the change in reaction rate uncertainties, while the second displays our new results normalized to the previously recommended reaction rate. We find that the rates have changed significantly for almost all reactions considered here. The changes are caused by (i) our new Monte Carlo method of computing reaction rates (see Paper I of this issue), and (ii) newly available nuclear physics information (see Paper III of this issue).

  17. Doubly stochastic radial basis function methods

    Science.gov (United States)

    Yang, Fenglian; Yan, Liang; Ling, Leevan

    2018-06-01

    We propose a doubly stochastic radial basis function (DSRBF) method for function recoveries. Instead of a constant, we treat the RBF shape parameters as stochastic variables whose distribution was determined by a stochastic leave-one-out cross validation (LOOCV) estimation. A careful operation count is provided in order to determine the ranges of all the parameters in our methods. The overhead cost for setting up the proposed DSRBF method is O(n^2) for function recovery problems with n basis functions. Numerical experiments confirm that the proposed method outperforms not only the constant shape parameter formulation (in terms of accuracy with comparable computational cost) but also the optimal LOOCV formulation (in terms of both accuracy and computational cost).
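
    A minimal sketch of the idea, assuming Gaussian RBFs in 1D: candidate shape parameters are drawn at random, scored by leave-one-out cross validation (here via Rippa's identity e_i = c_i / (A^{-1})_{ii}), and the resulting interpolants are averaged. The sampling range and the weighting rule below are illustrative, not the paper's.

```python
import numpy as np

def rbf_matrix(x, centers, eps):
    # Gaussian RBF kernel matrix
    r = x[:, None] - centers[None, :]
    return np.exp(-(eps * r) ** 2)

def loocv_error(x, y, eps):
    # Rippa's leave-one-out identity: e_i = c_i / (A^{-1})_{ii}
    A_inv = np.linalg.inv(rbf_matrix(x, x, eps))
    c = A_inv @ y
    return np.linalg.norm(c / np.diag(A_inv))

rng = np.random.default_rng(0)
x = np.linspace(0, 1, 30)
y = np.sin(2 * np.pi * x)

# stochastic shape parameters: sample candidates, weight them by LOOCV fit
candidates = rng.uniform(1.0, 20.0, size=50)
scores = np.array([loocv_error(x, y, e) for e in candidates])
weights = np.exp(-scores / scores.min())       # heuristic weighting (assumption)
weights /= weights.sum()

# ensemble interpolant evaluated on a fine grid
xe = np.linspace(0, 1, 200)
pred = np.zeros_like(xe)
for eps, w in zip(candidates, weights):
    coef = np.linalg.solve(rbf_matrix(x, x, eps), y)
    pred += w * (rbf_matrix(xe, x, eps) @ coef)
print(np.abs(pred - np.sin(2 * np.pi * xe)).max())
```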

  18. New Method of Calculating a Multiplication by using the Generalized Bernstein-Vazirani Algorithm

    Science.gov (United States)

    Nagata, Koji; Nakamura, Tadao; Geurdes, Han; Batle, Josep; Abdalla, Soliman; Farouk, Ahmed

    2018-06-01

    We present a new method of calculating a multiplication more speedily by using the generalized Bernstein-Vazirani algorithm and many parallel quantum systems. Given the set of real values a_1, a_2, a_3, ..., a_N and a function g: R → {0,1}, we determine the values g(a_1), g(a_2), g(a_3), ..., g(a_N) simultaneously. The speed of determining the values is shown to outperform the classical case by a factor of N. Next, we consider the resulting bit string as a number in binary representation: M_1 = (g(a_1), g(a_2), g(a_3), ..., g(a_N)). By using M parallel quantum systems, we obtain M such numbers in binary representation simultaneously. The speed of obtaining the M numbers is shown to outperform the classical case by a factor of M. Finally, we calculate the product M_1 × M_2 × ... × M_M. The speed of obtaining the product is shown to outperform the classical case by a factor of N × M.

  19. Using the Nearest Neighbour method to substitute missing daily solar radiation data

    International Nuclear Information System (INIS)

    Bezuidenhout, C.N.

    2002-01-01

    Ground level solar radiation reflects the amount of energy that reaches the earth's surface and is utilised by people, animals and plants. Biological models often require such radiation records for long periods of time; however, a lack of radiation data is common in many countries. Consequently, various methods have been developed to estimate daily radiation from other meteorological measurements. These methods normally need site-specific calibration, require a fixed set of input variables and do not account for uncertainties introduced by global climate change. In this paper an attempt was made to develop a station-independent substitution method without sacrificing accuracy. Meteorological data from different climatic regions in South Africa were used to assess the suitability of the Nearest Neighbour (NN) method. This method is based on the recurrence of events similar to those in the past. Different statistical approaches were assessed to calibrate the distance equation. An attempt was made to find suitable weights, calibrated universally, for the distance equation that would still produce good estimates for individual stations. The universally calibrated NN method outperformed previously developed equations (locally calibrated) by as much as 20% and overcomes various shortcomings identified in these equations. More detailed analyses also confirmed that the NN approach generates more representative statistical distributions and estimates extreme instances of solar radiation more accurately. (author)
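
    A minimal sketch of the substitution step, assuming a weighted Euclidean distance over daily meteorological variables; the variables and weights below are hypothetical stand-ins for the universally calibrated ones.

```python
import numpy as np

def nn_substitute(target, history, history_radiation, weights):
    """Substitute missing radiation with that of the most similar past day.

    target:            (k,) meteorological variables on the day missing radiation
    history:           (n, k) same variables on days with measured radiation
    weights:           (k,) relative importance of each variable in the distance
    """
    d = np.sqrt((weights * (history - target) ** 2).sum(axis=1))
    return history_radiation[np.argmin(d)]

# hypothetical daily records: [Tmax (degC), Tmin (degC), rainfall (mm)]
hist_X = np.array([[30.1, 18.2, 0.0],
                   [24.5, 15.0, 12.3],
                   [29.8, 17.9, 0.2]])
hist_rad = np.array([27.4, 11.2, 26.9])   # MJ m^-2 day^-1
w = np.array([1.0, 0.5, 2.0])             # hypothetical universal weights
print(nn_substitute(np.array([29.5, 18.0, 0.0]), hist_X, hist_rad, w))
```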

  20. Human Detection System by Fusing Depth Map-Based Method and Convolutional Neural Network-Based Method

    Directory of Open Access Journals (Sweden)

    Anh Vu Le

    2017-01-01

    Full Text Available In this paper, the depth images and the colour images provided by Kinect sensors are used to enhance the accuracy of human detection. The depth-based human detection method is fast but less accurate. On the other hand, the faster region-based convolutional neural network (Faster R-CNN) human detection method is accurate but requires a rather complex hardware configuration. To simultaneously leverage the advantages and relieve the drawbacks of each method, a system with one master and one client is proposed. The final goal is to build a novel Robot Operation System (ROS)-based Perception Sensor Network (PSN) system, which is more accurate and ready for real-time applications. The experimental results demonstrate that the proposed method outperforms other conventional methods in challenging scenarios.

  1. Yet Another Method for Image Segmentation based on Histograms and Heuristics

    Directory of Open Access Journals (Sweden)

    Horia-Nicolai L. Teodorescu

    2012-07-01

    Full Text Available We introduce a method for image segmentation that requires little computation, yet provides results comparable to those of other methods. While the proposed method resembles known histogram-based ones, it differs in its use of the gray-level distribution. When several heuristic rules are added to the basic procedure, the method produces results that, in some cases, may outperform those of the known methods. The paper reports preliminary results. More details on the method, improvements, and results will be presented in a future paper.
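
    The heuristics themselves are deferred to the future paper, so the following is only a generic histogram-based sketch: smooth the gray-level histogram, then threshold at its deepest interior valley.

```python
import numpy as np

def histogram_threshold(gray):
    """Threshold at the deepest valley of a smoothed gray-level histogram."""
    hist, _ = np.histogram(gray, bins=256, range=(0, 256))
    smooth = np.convolve(hist, np.ones(9) / 9.0, mode="same")  # crude smoothing
    valleys = [i for i in range(1, 255)
               if smooth[i] <= smooth[i - 1] and smooth[i] <= smooth[i + 1]]
    return min(valleys, key=lambda i: smooth[i]) if valleys else 128

# synthetic bimodal "image": two gray-level populations
rng = np.random.default_rng(1)
img = np.clip(np.concatenate([rng.normal(60, 10, 5000),
                              rng.normal(180, 12, 5000)]), 0, 255).reshape(100, 100)
t = histogram_threshold(img)
segmented = img > t   # two-class segmentation
```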

  2. Native Honey Bees Outperform Adventive Honey Bees in Increasing Pyrus bretschneideri (Rosales: Rosaceae) Pollination.

    Science.gov (United States)

    Gemeda, Tolera Kumsa; Shao, Youquan; Wu, Wenqin; Yang, Huipeng; Huang, Jiaxing; Wu, Jie

    2017-12-05

    The foraging behavior of different bee species is a key factor influencing the pollination efficiency of different crops. Most pear species exhibit full self-incompatibility and thus depend entirely on cross-pollination. However, little is known about the pear visitation preferences of native Apis cerana (Fabricius; Hymenoptera: Apidae) and adventive Apis mellifera (L.; Hymenoptera: Apidae) in China. A comparative analysis was performed to explore the pear-foraging differences of these species under the natural conditions of pear growing areas. The results show significant variability in the pollen-gathering tendency of these honey bees. Compared to A. mellifera, A. cerana begins foraging at an earlier time of day and gathers a larger amount of pollen in the morning. Based on pollen collection data, A. mellifera shows variable preferences: vigorously foraging on pear on the first day of observation but collecting pollen from non-target floral resources on other experimental days. Conversely, A. cerana persists in pear pollen collection, without shifting preference to other competitive flowers. Therefore, A. cerana outperforms adventive A. mellifera with regard to pear pollen collection under natural conditions, which may lead to increased pear pollination. This study supports arguments in favor of further multiplication and maintenance of A. cerana for pear and other native crop pollination. Moreover, it is essential to develop alternative pollination management techniques to utilize A. mellifera for pear pollination. © The Author(s) 2017. Published by Oxford University Press on behalf of Entomological Society of America. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.

  3. Data-driven execution of fast multipole methods

    KAUST Repository

    Ltaief, Hatem

    2013-09-17

    Fast multipole methods (FMMs) have O(N) complexity, are compute bound, and require very little synchronization, which makes them a favorable algorithm on next-generation supercomputers. Their most common application is to accelerate N-body problems, but they can also be used to solve boundary integral equations. When the particle distribution is irregular and the tree structure is adaptive, load balancing becomes a non-trivial question. A common strategy for load balancing FMMs is to use the work load from the previous step as weights to statically repartition the next step. The authors discuss another approach based on data-driven execution to efficiently tackle this challenging load balancing problem. The core idea consists of breaking the most time-consuming stages of the FMMs into smaller tasks. The algorithm can then be represented as a directed acyclic graph where nodes represent tasks and edges represent dependencies among them. The execution of the algorithm is performed by asynchronously scheduling the tasks using the QUARK (QUeueing And Runtime for Kernels) runtime environment, such that data dependencies are not violated for numerical correctness. This asynchronous scheduling results in an out-of-order execution. The performance results of the data-driven FMM execution outperform the previous strategy and show linear speedup on a quad-socket quad-core Intel Xeon system. Copyright © 2013 John Wiley & Sons, Ltd.

  4. Efficient spectral computation of the stationary states of rotating Bose-Einstein condensates by preconditioned nonlinear conjugate gradient methods

    Science.gov (United States)

    Antoine, Xavier; Levitt, Antoine; Tang, Qinglin

    2017-08-01

    We propose a preconditioned nonlinear conjugate gradient method coupled with a spectral spatial discretization scheme for computing the ground states (GS) of rotating Bose-Einstein condensates (BEC), modeled by the Gross-Pitaevskii Equation (GPE). We start by reviewing the classical gradient flow (also known as imaginary time (IMT)) method, which considers the problem from the PDE standpoint and leads to numerically solving a dissipative equation. Based on this IMT equation, we analyze the forward Euler (FE), Crank-Nicolson (CN) and the classical backward Euler (BE) schemes for linear problems and recognize classical power iterations, allowing us to derive convergence rates. By considering the alternative point of view of minimization problems, we propose the preconditioned steepest descent (PSD) and conjugate gradient (PCG) methods for the GS computation of the GPE. We investigate the choice of the preconditioner, which plays a key role in the acceleration of the convergence process. The performance of the new algorithms is tested in 1D, 2D and 3D. We conclude that the PCG method outperforms all the previous methods, particularly for 2D and 3D fast rotating BECs, while being simple to implement.
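
    For orientation, a toy 1D version of the baseline the authors start from, the IMT gradient flow with a spectral Laplacian and renormalization after each step; this is not the paper's preconditioned CG, and the trap, nonlinearity, and step size are illustrative.

```python
import numpy as np

# 1D non-rotating GPE ground state by normalized gradient flow (imaginary time).
n, L, beta, dt = 256, 16.0, 100.0, 5e-4
x = np.linspace(-L / 2, L / 2, n, endpoint=False)
dx = x[1] - x[0]
k = 2 * np.pi * np.fft.fftfreq(n, d=dx)
V = 0.5 * x ** 2                              # harmonic trap
psi = np.exp(-x ** 2)                         # initial guess
psi /= np.sqrt((psi ** 2).sum() * dx)

for _ in range(4000):
    # energy gradient: (-0.5 d^2/dx^2 + V + beta |psi|^2) psi, Laplacian in Fourier space
    lap = np.fft.ifft(-(k ** 2) * np.fft.fft(psi)).real
    grad = -0.5 * lap + (V + beta * psi ** 2) * psi
    psi -= dt * grad                          # forward-Euler step of the IMT flow
    psi /= np.sqrt((psi ** 2).sum() * dx)     # project back onto the unit sphere

# GPE energy of the converged state
dpsi = np.fft.ifft(1j * k * np.fft.fft(psi))
energy = ((0.5 * np.abs(dpsi) ** 2 + V * psi ** 2 + 0.5 * beta * psi ** 4).sum() * dx)
print(energy)
```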

  5. Evaluation of different methods to model near-surface turbulent fluxes for a mountain glacier in the Cariboo Mountains, BC, Canada

    Science.gov (United States)

    Radić, Valentina; Menounos, Brian; Shea, Joseph; Fitzpatrick, Noel; Tessema, Mekdes A.; Déry, Stephen J.

    2017-12-01

    As part of surface energy balance models used to simulate glacier melting, choosing parameterizations to adequately estimate turbulent heat fluxes is extremely challenging. This study aims to evaluate a set of four aerodynamic bulk methods (labeled as C methods), commonly used to estimate turbulent heat fluxes for a sloped glacier surface, and two less commonly used bulk methods developed from katabatic flow models. The C methods differ in their parameterizations of the bulk exchange coefficient that relates the fluxes to the near-surface measurements of mean wind speed, air temperature, and humidity. The methods' performance in simulating 30 min sensible- and latent-heat fluxes is evaluated against the measured fluxes from an open-path eddy-covariance (OPEC) method. The evaluation is performed at a point scale of a mountain glacier, using one-level meteorological and OPEC observations from multi-day periods in the 2010 and 2012 summer seasons. The analysis of the two independent seasons yielded the same key findings, which include the following: first, the bulk method, with or without the commonly used Monin-Obukhov (M-O) stability functions, overestimates the turbulent heat fluxes over the observational period, mainly due to a substantial overestimation of the friction velocity. This overestimation is most pronounced during the katabatic flow conditions, corroborating the previous findings that the M-O theory works poorly in the presence of a low wind speed maximum. Second, the method based on a katabatic flow model (labeled as the KInt method) outperforms any C method in simulating the friction velocity; however, the C methods outperform the KInt method in simulating the sensible-heat fluxes. Third, the best overall performance is given by a hybrid method, which combines the KInt approach with the C method; i.e., it parameterizes eddy viscosity differently than eddy diffusivity. An error analysis reveals that the uncertainties in the measured meteorological

  6. Evaluation of different methods to model near-surface turbulent fluxes for a mountain glacier in the Cariboo Mountains, BC, Canada

    Directory of Open Access Journals (Sweden)

    V. Radić

    2017-12-01

    Full Text Available As part of surface energy balance models used to simulate glacier melting, choosing parameterizations to adequately estimate turbulent heat fluxes is extremely challenging. This study aims to evaluate a set of four aerodynamic bulk methods (labeled as C methods), commonly used to estimate turbulent heat fluxes for a sloped glacier surface, and two less commonly used bulk methods developed from katabatic flow models. The C methods differ in their parameterizations of the bulk exchange coefficient that relates the fluxes to the near-surface measurements of mean wind speed, air temperature, and humidity. The methods' performance in simulating 30 min sensible- and latent-heat fluxes is evaluated against the measured fluxes from an open-path eddy-covariance (OPEC) method. The evaluation is performed at a point scale of a mountain glacier, using one-level meteorological and OPEC observations from multi-day periods in the 2010 and 2012 summer seasons. The analysis of the two independent seasons yielded the same key findings, which include the following: first, the bulk method, with or without the commonly used Monin–Obukhov (M–O) stability functions, overestimates the turbulent heat fluxes over the observational period, mainly due to a substantial overestimation of the friction velocity. This overestimation is most pronounced during the katabatic flow conditions, corroborating the previous findings that the M–O theory works poorly in the presence of a low wind speed maximum. Second, the method based on a katabatic flow model (labeled as the KInt method) outperforms any C method in simulating the friction velocity; however, the C methods outperform the KInt method in simulating the sensible-heat fluxes. Third, the best overall performance is given by a hybrid method, which combines the KInt approach with the C method; i.e., it parameterizes eddy viscosity differently than eddy diffusivity. An error analysis reveals that the uncertainties in
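
    The C methods share a common shape for the sensible-heat flux, H = rho * c_p * C_H * u * (T_air - T_surf), differing in how the exchange coefficient C_H is parameterized. A neutral-stability sketch with illustrative constants and roughness lengths (the paper's parameterizations, including stability corrections, are more elaborate):

```python
import numpy as np

KAPPA = 0.4             # von Karman constant
RHO, CP = 1.29, 1005.0  # air density (kg m^-3), specific heat (J kg^-1 K^-1); assumed

def sensible_heat_flux(u, t_air, t_surf, z=2.0, z0=1e-3, z0t=1e-4):
    """Neutral-stability bulk flux from one measurement level z (m).

    z0, z0t: roughness lengths for momentum and temperature (site-specific guesses).
    """
    c_h = KAPPA ** 2 / (np.log(z / z0) * np.log(z / z0t))  # bulk exchange coefficient
    return RHO * CP * c_h * u * (t_air - t_surf)

print(sensible_heat_flux(u=4.0, t_air=5.0, t_surf=0.0))   # W m^-2 toward the surface
```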

  7. PAFit: A Statistical Method for Measuring Preferential Attachment in Temporal Complex Networks.

    Directory of Open Access Journals (Sweden)

    Thong Pham

    Full Text Available Preferential attachment is a stochastic process that has been proposed to explain certain topological features characteristic of complex networks from diverse domains. The systematic investigation of preferential attachment is an important area of research in network science, not only for the theoretical matter of verifying whether this hypothesized process is operative in real-world networks, but also for the practical insights that follow from knowledge of its functional form. Here we describe a maximum likelihood based estimation method for the measurement of preferential attachment in temporal complex networks. We call the method PAFit, and implement it in an R package of the same name. PAFit constitutes an advance over previous methods primarily because we based it on a nonparametric statistical framework that enables attachment kernel estimation free of any assumptions about its functional form. We show this results in PAFit outperforming the popular methods of Jeong and Newman in Monte Carlo simulations. What is more, we found that the application of PAFit to a publicly available Flickr social network dataset yielded clear evidence for a deviation of the attachment kernel from the popularly assumed log-linear form. Independent of our main work, we provide a correction to a consequential error in Newman's original method which had evidently gone unnoticed since its publication over a decade ago.
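
    PAFit itself is an R package built on a nonparametric maximum-likelihood framework; as a rough illustration of what measuring an attachment kernel involves, here is a simple counting estimate in the spirit of the Jeong method (the snapshots are toy data):

```python
from collections import Counter

def attachment_kernel(snapshots):
    """Empirical attachment rate A_k: degree gained by degree-k nodes,
    normalized by how often degree k was present to attach to.

    snapshots: list of {node: degree} dicts at successive times.
    """
    gained, exposure = Counter(), Counter()
    for before, after in zip(snapshots, snapshots[1:]):
        for node, k in before.items():
            exposure[k] += 1
            gained[k] += max(after.get(node, k) - k, 0)
    return {k: gained[k] / exposure[k] for k in exposure}

snaps = [{1: 1, 2: 2}, {1: 1, 2: 4, 3: 1}, {1: 2, 2: 5, 3: 1}]  # toy degrees
print(attachment_kernel(snaps))   # under linear preferential attachment, A_k ~ k
```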

  8. SuBSENSE: a universal change detection method with local adaptive sensitivity.

    Science.gov (United States)

    St-Charles, Pierre-Luc; Bilodeau, Guillaume-Alexandre; Bergevin, Robert

    2015-01-01

    Foreground/background segmentation via change detection in video sequences is often used as a stepping stone in high-level analytics and applications. Despite the wide variety of methods that have been proposed for this problem, none has been able to fully address the complex nature of dynamic scenes in real surveillance tasks. In this paper, we present a universal pixel-level segmentation method that relies on spatiotemporal binary features as well as color information to detect changes. This allows camouflaged foreground objects to be detected more easily while most illumination variations are ignored. Moreover, instead of using manually set, frame-wide constants to dictate model sensitivity and adaptation speed, we use pixel-level feedback loops to dynamically adjust our method's internal parameters without user intervention. These adjustments are based on the continuous monitoring of model fidelity and local segmentation noise levels. This new approach enables us to outperform all 32 previously tested state-of-the-art methods on the 2012 and 2014 versions of the ChangeDetection.net dataset in terms of overall F-Measure. The use of local binary image descriptors for pixel-level modeling also facilitates high-speed parallel implementations: our own version, which used no low-level or architecture-specific instructions, reached real-time processing speed on a mid-level desktop CPU. A complete C++ implementation based on OpenCV is available online.
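
    SuBSENSE's spatiotemporal binary features and sample-based model are beyond a short sketch, but the pixel-level feedback idea can be conveyed with a toy model whose per-pixel thresholds grow on dynamic pixels and relax on stable ones (all constants are made up):

```python
import numpy as np

def segment_with_feedback(frames, alpha=0.05, r0=20.0):
    """Toy per-pixel change detection with feedback (not SuBSENSE itself)."""
    B = frames[0].astype(float)              # running background estimate
    R = np.full(B.shape, r0)                 # per-pixel decision threshold
    masks = []
    for f in frames[1:]:
        fg = np.abs(f - B) > R               # foreground where change exceeds R
        masks.append(fg)
        # feedback: raise thresholds on dynamic pixels, relax them elsewhere
        R = np.where(fg, R * 1.05, np.maximum(R * 0.98, 5.0))
        B = np.where(fg, B, (1 - alpha) * B + alpha * f)  # update background only
    return masks

rng = np.random.default_rng(2)
seq = [rng.normal(100, 3, (48, 64)) for _ in range(10)]
seq[5][20:30, 30:40] += 60                   # synthetic foreground blob
masks = segment_with_feedback(seq)
print(masks[4].sum())                        # pixels flagged in the blob frame
```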

  9. A new family of Polak-Ribiere-Polyak conjugate gradient method with the strong-Wolfe line search

    Science.gov (United States)

    Ghani, Nur Hamizah Abdul; Mamat, Mustafa; Rivaie, Mohd

    2017-08-01

    The conjugate gradient (CG) method is an important technique in unconstrained optimization due to its effectiveness and low memory requirements. The focus of this paper is to introduce a new CG method for solving large-scale unconstrained optimization problems. Theoretical proofs show that the new method fulfills the sufficient descent condition if the strong Wolfe-Powell inexact line search is used. Besides, computational results show that our proposed method outperforms other existing CG methods.
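
    The new family itself is not given in the abstract; for reference, here is the classical PRP+ update that such methods build on, with a strong-Wolfe line search from scipy:

```python
import numpy as np
from scipy.optimize import line_search

def prp_cg(f, grad, x0, tol=1e-6, max_iter=500):
    """Classical PRP+ conjugate gradient with a strong-Wolfe line search."""
    x, g = x0.copy(), grad(x0)
    d = -g
    for _ in range(max_iter):
        alpha = line_search(f, grad, x, d, gfk=g)[0]
        if alpha is None:                    # line search failed: steepest-descent restart
            d, alpha = -g, 1e-4
        x_new = x + alpha * d
        g_new = grad(x_new)
        if np.linalg.norm(g_new) < tol:
            return x_new
        beta = max(g_new @ (g_new - g) / (g @ g), 0.0)   # PRP+ coefficient
        d = -g_new + beta * d
        x, g = x_new, g_new
    return x

rosen = lambda x: (1 - x[0]) ** 2 + 100 * (x[1] - x[0] ** 2) ** 2
rosen_g = lambda x: np.array([-2 * (1 - x[0]) - 400 * x[0] * (x[1] - x[0] ** 2),
                              200 * (x[1] - x[0] ** 2)])
print(prp_cg(rosen, rosen_g, np.array([-1.2, 1.0])))     # converges to (1, 1)
```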

  10. A new method for species identification via protein-coding and non-coding DNA barcodes by combining machine learning with bioinformatic methods.

    Science.gov (United States)

    Zhang, Ai-bing; Feng, Jie; Ward, Robert D; Wan, Ping; Gao, Qiang; Wu, Jun; Zhao, Wei-zhong

    2012-01-01

    Species identification via DNA barcodes is contributing greatly to current bioinventory efforts. The initial, and widely accepted, proposal was to use the protein-coding cytochrome c oxidase subunit I (COI) region as the standard barcode for animals, but recently non-coding internal transcribed spacer (ITS) genes have been proposed as candidate barcodes for both animals and plants. However, achieving a robust alignment for non-coding regions can be problematic. Here we propose two new methods (DV-RBF and FJ-RBF) to address this issue for species assignment by both coding and non-coding sequences that take advantage of the power of machine learning and bioinformatics. We demonstrate the value of the new methods with four empirical datasets, two representing typical protein-coding COI barcode datasets (neotropical bats and marine fish) and two representing non-coding ITS barcodes (rust fungi and brown algae). Using two random sub-sampling approaches, we demonstrate that the new methods significantly outperformed existing Neighbor-joining (NJ) and Maximum likelihood (ML) methods for both coding and non-coding barcodes when there was complete species coverage in the reference dataset. The new methods also outperformed NJ and ML methods for non-coding sequences in circumstances of potentially incomplete species coverage, although then the NJ and ML methods performed slightly better than the new methods for protein-coding barcodes. A 100% success rate of species identification was achieved with the two new methods for 4,122 bat queries and 5,134 fish queries using COI barcodes, with 95% confidence intervals (CI) of 99.75-100%. The new methods also obtained a 96.29% success rate (95% CI: 91.62-98.40%) for 484 rust fungi queries and a 98.50% success rate (95% CI: 96.60-99.37%) for 1094 brown algae queries, both using ITS barcodes.

  11. A new method for species identification via protein-coding and non-coding DNA barcodes by combining machine learning with bioinformatic methods.

    Directory of Open Access Journals (Sweden)

    Ai-bing Zhang

    Full Text Available Species identification via DNA barcodes is contributing greatly to current bioinventory efforts. The initial, and widely accepted, proposal was to use the protein-coding cytochrome c oxidase subunit I (COI) region as the standard barcode for animals, but recently non-coding internal transcribed spacer (ITS) genes have been proposed as candidate barcodes for both animals and plants. However, achieving a robust alignment for non-coding regions can be problematic. Here we propose two new methods (DV-RBF and FJ-RBF) to address this issue for species assignment by both coding and non-coding sequences that take advantage of the power of machine learning and bioinformatics. We demonstrate the value of the new methods with four empirical datasets, two representing typical protein-coding COI barcode datasets (neotropical bats and marine fish) and two representing non-coding ITS barcodes (rust fungi and brown algae). Using two random sub-sampling approaches, we demonstrate that the new methods significantly outperformed existing Neighbor-joining (NJ) and Maximum likelihood (ML) methods for both coding and non-coding barcodes when there was complete species coverage in the reference dataset. The new methods also outperformed NJ and ML methods for non-coding sequences in circumstances of potentially incomplete species coverage, although then the NJ and ML methods performed slightly better than the new methods for protein-coding barcodes. A 100% success rate of species identification was achieved with the two new methods for 4,122 bat queries and 5,134 fish queries using COI barcodes, with 95% confidence intervals (CI) of 99.75-100%. The new methods also obtained a 96.29% success rate (95% CI: 91.62-98.40%) for 484 rust fungi queries and a 98.50% success rate (95% CI: 96.60-99.37%) for 1094 brown algae queries, both using ITS barcodes.

  12. Gradient method for blind chaotic signal separation based on proliferation exponent

    International Nuclear Information System (INIS)

    Lü Shan-Xiang; Wang Zhao-Shan; Hu Zhi-Hui; Feng Jiu-Chao

    2014-01-01

    A new method to perform blind separation of chaotic signals is articulated in this paper, which takes advantage of the underlying features in the phase space for identifying various chaotic sources. Without incorporating any prior information about the source equations, the proposed algorithm can not only separate the mixed signals in just a few iterations, but also outperform the fast independent component analysis (FastICA) method when noise contamination is considerable. (general)

  13. Optical Method for Estimating the Chlorophyll Contents in Plant Leaves.

    Science.gov (United States)

    Pérez-Patricio, Madaín; Camas-Anzueto, Jorge Luis; Sanchez-Alegría, Avisaí; Aguilar-González, Abiel; Gutiérrez-Miceli, Federico; Escobar-Gómez, Elías; Voisin, Yvon; Rios-Rojas, Carlos; Grajales-Coutiño, Ruben

    2018-02-22

    This work introduces a new vision-based approach for estimating chlorophyll contents in a plant leaf using reflectance and transmittance as base parameters. Images of the top and underside of the leaf are captured. To estimate the base parameters (reflectance/transmittance), a novel optical arrangement is proposed. The chlorophyll content is then estimated by using linear regression where the inputs are the reflectance and transmittance of the leaf. Performance of the proposed method for chlorophyll content estimation was compared with a spectrophotometer and a Soil Plant Analysis Development (SPAD) meter. Chlorophyll content estimation was realized for Lactuca sativa L., Azadirachta indica, Canavalia ensiforme, and Lycopersicon esculentum. Experimental results showed that, in terms of accuracy and processing speed, the proposed algorithm outperformed many previous vision-based methods that have used SPAD as a reference device. On the other hand, the accuracy reached is 91% for crops such as Azadirachta indica, where the chlorophyll value was obtained using the spectrophotometer. Additionally, it was possible to achieve an estimation of the chlorophyll content in the leaf every 200 ms with a low-cost camera and a simple optical arrangement. This non-destructive method increased accuracy in the chlorophyll content estimation by using an optical arrangement that yielded both the reflectance and transmittance information, while the required hardware is cheap.
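
    A minimal sketch of the estimation step as described, ordinary least squares from the two base parameters to chlorophyll content; the numbers are fabricated, whereas the paper calibrates against spectrophotometer and SPAD readings.

```python
import numpy as np

# hypothetical calibration data
reflectance   = np.array([0.12, 0.18, 0.25, 0.31, 0.40])
transmittance = np.array([0.30, 0.27, 0.22, 0.18, 0.12])
chlorophyll   = np.array([42.0, 36.5, 28.0, 21.3, 12.8])   # e.g., ug/cm^2

# fit chlorophyll ~ a*R + b*T + c by least squares
X = np.column_stack([reflectance, transmittance, np.ones_like(reflectance)])
coef, *_ = np.linalg.lstsq(X, chlorophyll, rcond=None)

def estimate_chlorophyll(r, t):
    return coef[0] * r + coef[1] * t + coef[2]

print(estimate_chlorophyll(0.20, 0.25))
```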

  14. Optical Method for Estimating the Chlorophyll Contents in Plant Leaves

    Directory of Open Access Journals (Sweden)

    Madaín Pérez-Patricio

    2018-02-01

    Full Text Available This work introduces a new vision-based approach for estimating chlorophyll contents in a plant leaf using reflectance and transmittance as base parameters. Images of the top and underside of the leaf are captured. To estimate the base parameters (reflectance/transmittance), a novel optical arrangement is proposed. The chlorophyll content is then estimated by using linear regression where the inputs are the reflectance and transmittance of the leaf. Performance of the proposed method for chlorophyll content estimation was compared with a spectrophotometer and a Soil Plant Analysis Development (SPAD) meter. Chlorophyll content estimation was realized for Lactuca sativa L., Azadirachta indica, Canavalia ensiforme, and Lycopersicon esculentum. Experimental results showed that—in terms of accuracy and processing speed—the proposed algorithm outperformed many of the previous vision-based approach methods that have used SPAD as a reference device. On the other hand, the accuracy reached is 91% for crops such as Azadirachta indica, where the chlorophyll value was obtained using the spectrophotometer. Additionally, it was possible to achieve an estimation of the chlorophyll content in the leaf every 200 ms with a low-cost camera and a simple optical arrangement. This non-destructive method increased accuracy in the chlorophyll content estimation by using an optical arrangement that yielded both the reflectance and transmittance information, while the required hardware is cheap.

  15. SONOGRAPHIC PREDICTION OF SCAR DEHISCENCE IN WOMEN WITH PREVIOUS CAESAREAN SECTION

    Directory of Open Access Journals (Sweden)

    Shubhada Suhas Jajoo

    2018-01-01

    Full Text Available BACKGROUND Caesarean section (Sectio Caesarea) is a surgical method for the completion of delivery. After various historical modifications of operative techniques, the modern approach consists in the transverse dissection of the anterior wall of the uterus. The rate of vaginal birth after caesarean section has been significantly reduced from year to year, and the rate of repeated caesarean section has increased during the past 10 years. Evaluation of scar thickness is done by ultrasound, but the thickness that would serve as a guiding “cut-off value” for choosing the method of delivery is still debated. To better assess the risk of uterine rupture, some authors have proposed sonographic measurement of lower uterine segment thickness near term, assuming that there is an inverse correlation between LUS thickness and the risk of uterine scar defect. Therefore, this assessment for the management of women with prior CS may increase safety during labour by selecting women with the lowest risk of uterine rupture. The aim of the study is to assess the diagnostic accuracy of sonographic measurements of the Lower Uterine Segment (LUS) thickness near term in predicting uterine scar defects in women with prior Caesarean Section (CS), and to ascertain the best cut-off values for predicting uterine rupture. MATERIALS AND METHODS 100 antenatal women with a history of previous one LSCS who came to attend the antenatal clinic were assessed for scar thickness by transabdominal ultrasonography and its correlation with intraoperative findings. This prospective longitudinal study was conducted for 1 year after IEC approval. Inclusion criterion: previous one LSCS. Exclusion criteria: (1) previous myomectomy scar; (2) previous two LSCS; (3) previous hysterotomy scar. RESULTS Our findings indicate that there is a strong association between the degree of LUS thinning measured near term and the risk of uterine scar defect at birth. In our study, the optimal cut-off value for predicting

  16. Method for restoring contaminants to base levels in previously leached formations

    International Nuclear Information System (INIS)

    Strom, E.T.; Espencheid, W.F.

    1983-01-01

    The present invention relates to a method for restoring to environmentally acceptable levels the soluble contaminants in a subterranean formation that has been subjected to oxidative leaching. The contaminants are defined as those ionic species that, when subjected to calcium ions, form precipitates which are insoluble in the formation fluids. In accordance with the present invention, soluble calcium values are introduced into the formation. The level of contaminants is monitored, and when it reaches the desired level, the introduction of soluble calcium values is stopped. The introduction of calcium values may be achieved in several ways, one of which is to inject into the formation an aqueous solution containing solubilized calcium values. Another is to inject into the formation an aqueous solution containing carbon dioxide to solubilize calcium values, such as calcium carbonates, found in the formation.

  17. A novel anisotropic fast marching method and its application to blood flow computation in phase-contrast MRI.

    Science.gov (United States)

    Schwenke, M; Hennemuth, A; Fischer, B; Friman, O

    2012-01-01

    Phase-contrast MRI (PC MRI) can be used to assess blood flow dynamics noninvasively inside the human body. The acquired images can be reconstructed into flow vector fields. Traditionally, streamlines can be computed based on the vector fields to visualize flow patterns and particle trajectories. The traditional methods may give a false impression of precision, as they do not consider the measurement uncertainty in the PC MRI images. In our prior work, we incorporated the uncertainty of the measurement into the computation of particle trajectories. As a major part of the contribution, a novel numerical scheme for solving the anisotropic Fast Marching problem is presented. A computing time comparison to state-of-the-art methods is conducted on artificial tensor fields. A visual comparison of healthy to pathological blood flow patterns is given. The comparison shows that the novel anisotropic Fast Marching solver outperforms previous schemes in terms of computing time. The visual comparison of flow patterns directly visualizes large deviations of pathological flow from healthy flow. The novel anisotropic Fast Marching solver efficiently resolves even strongly anisotropic path costs. The visualization method enables the user to assess the uncertainty of particle trajectories derived from PC MRI images.
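
    True anisotropic Fast Marching solves an eikonal-type PDE with direction-dependent costs; as a rough stand-in, here is the Dijkstra-style analogue on a 4-connected grid where the price of a step depends on its direction (the costs are toy values):

```python
import heapq
import numpy as np

def anisotropic_dijkstra(cost_dir, start):
    """First-arrival times on a grid with direction-dependent step costs.

    cost_dir: dict mapping step (di, dj) -> (H, W) array of per-step costs.
    """
    shape = next(iter(cost_dir.values())).shape
    dist = np.full(shape, np.inf)
    dist[start] = 0.0
    heap = [(0.0, start)]
    while heap:
        d, (i, j) = heapq.heappop(heap)
        if d > dist[i, j]:
            continue                         # stale heap entry
        for (di, dj), c in cost_dir.items():
            ni, nj = i + di, j + dj
            if 0 <= ni < shape[0] and 0 <= nj < shape[1] and d + c[i, j] < dist[ni, nj]:
                dist[ni, nj] = d + c[i, j]
                heapq.heappush(heap, (dist[ni, nj], (ni, nj)))
    return dist

# toy anisotropy: moving "down" (with the flow) is cheap, "up" is expensive
H, W = 32, 32
costs = {(1, 0): np.full((H, W), 0.5), (-1, 0): np.full((H, W), 2.0),
         (0, 1): np.ones((H, W)), (0, -1): np.ones((H, W))}
arrival = anisotropic_dijkstra(costs, (0, 0))
```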

  18. Comparison of time-series registration methods in breast dynamic infrared imaging

    Science.gov (United States)

    Riyahi-Alam, S.; Agostini, V.; Molinari, F.; Knaflitz, M.

    2015-03-01

    Automated motion reduction in dynamic infrared imaging is in demand in clinical applications, since movement disarranges the time-temperature series of each pixel, thus creating thermal artifacts that might bias the clinical decision. All previously proposed registration methods are feature-based algorithms requiring manual intervention. The aim of this work is to optimize the registration strategy specifically for Breast Dynamic Infrared Imaging and to make it user-independent. We implemented and evaluated 3 different 3D time-series registration methods: 1. linear affine, 2. non-linear B-spline, 3. Demons, applied to 12 datasets of healthy breast thermal images. The results are evaluated through normalized mutual information, with average values of 0.70 ±0.03, 0.74 ±0.03 and 0.81 ±0.09 (out of 1) for affine, B-spline and Demons registration, respectively, as well as breast boundary overlap and the Jacobian determinant of the deformation field. The statistical analysis of the results showed that the symmetric diffeomorphic Demons registration method performs best, with the best breast alignment and non-negative Jacobian values which guarantee image similarity and anatomical consistency of the transformation, due to homologous forces enforcing the pixel geometric disparities to be shortened on all the frames. We propose Demons registration as an effective technique for dynamic infrared time-series registration, to stabilize the local temperature oscillation.
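
    A sketch of the evaluation metric, assuming the common symmetric normalization 2 I(A;B) / (H(A) + H(B)), which is bounded by 1; the paper's exact variant may differ.

```python
import numpy as np

def nmi(a, b, bins=64):
    """Normalized mutual information between two images, 2*I(A;B)/(H(A)+H(B))."""
    hist, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    p = hist / hist.sum()
    px, py = p.sum(axis=1), p.sum(axis=0)
    h = lambda q: -(q[q > 0] * np.log(q[q > 0])).sum()   # Shannon entropy
    h_a, h_b, h_ab = h(px), h(py), h(p)
    return 2 * (h_a + h_b - h_ab) / (h_a + h_b)

rng = np.random.default_rng(3)
fixed = rng.random((128, 128))
moving = np.roll(fixed, 3, axis=0) + 0.05 * rng.random((128, 128))
print(nmi(fixed, fixed), nmi(fixed, moving))   # 1.0 for a perfect registration
```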

  19. The extended reciprocity: Strong belief outperforms persistence.

    Science.gov (United States)

    Kurokawa, Shun

    2017-05-21

    The existence of cooperation is a mysterious phenomenon and demands explanation, and direct reciprocity is one key potential explanation for the evolution of cooperation. Direct reciprocity allows cooperation to evolve for cooperators who switch their behavior on the basis of information about the opponent's behavior. A key issue for direct reciprocity is information deficiency: when the opponent's last move is unknown, how should players behave? One possibility is to choose cooperation with some default probability without using any further information. In fact, our previous paper (Kurokawa, 2016a) examined this strategy. However, there might be beneficial information other than the opponent's last move. A subsequent study of ours (Kurokawa, 2017) examined the strategy which uses the player's own last move when the opponent's last move is unknown, and revealed that referring to one's own move and trying to imitate it when information is absent is beneficial. Is there any other beneficial information? How about strong belief (i.e., having infinite memory and believing that the opponent's behavior is unchanged)? Here, we examine the evolution of strategies with strong belief. Analyzing the repeated prisoner's dilemma game and using evolutionarily stable strategy (ESS) analysis against an invasion by unconditional defectors, we find that the strategy with strong belief is more likely to evolve than the strategy which does not use information other than the opponent's last move, and more likely to evolve than the strategy which uses not only the opponent's last move but also the player's own last move. Strong belief produces the extended reciprocity and facilitates the evolution of cooperation. Additionally, we consider the two-strategies game between strategies with strong belief and any strategy, and we consider the four-strategies game in which unconditional cooperators, unconditional defectors, pessimistic reciprocators with strong belief, and optimistic reciprocators with

  20. Real-Time PCR Typing of Escherichia coli Based on Multiple Single Nucleotide Polymorphisms--a Convenient and Rapid Method.

    Science.gov (United States)

    Lager, Malin; Mernelius, Sara; Löfgren, Sture; Söderman, Jan

    2016-01-01

    Healthcare-associated infections caused by Escherichia coli and antibiotic resistance due to extended-spectrum beta-lactamase (ESBL) production constitute a threat to patient safety. To identify, track, and control outbreaks and to detect emerging virulent clones, typing tools of sufficient discriminatory power that generate reproducible and unambiguous data are needed. A probe-based real-time PCR method targeting multiple single nucleotide polymorphisms (SNPs) was developed. The method was based on the multi-locus sequence typing scheme of Institut Pasteur and on the adaptation of previously described typing assays. An 8-SNP panel that reached a Simpson's diversity index of 0.95 was established, based on analysis of sporadic E. coli cases (ESBL n = 27 and non-ESBL n = 53). This multi-SNP assay was used to identify the sequence type 131 (ST131) complex according to Achtman's multi-locus sequence typing scheme. However, it did not fully discriminate within the complex but provided a diagnostic signature that outperformed a previously described detection assay. Pulsed-field gel electrophoresis typing of isolates from a presumed outbreak (n = 22) identified two outbreaks (ST127 and ST131) and three different non-outbreak-related isolates. Multi-SNP typing generated congruent data except for one non-outbreak-related ST131 isolate. We consider multi-SNP real-time PCR typing an accessible primary generic E. coli typing tool for rapid and uniform type identification.
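
    The discriminatory power quoted (0.95) is conventionally computed with Simpson's diversity index in the Hunter-Gaston form, D = 1 - sum n_j (n_j - 1) / (N (N - 1)); a self-contained sketch with made-up SNP profiles:

```python
from collections import Counter

def simpsons_diversity(labels):
    """Simpson's (Hunter-Gaston) diversity index of a typing scheme."""
    counts = Counter(labels).values()
    n = sum(counts)
    return 1 - sum(c * (c - 1) for c in counts) / (n * (n - 1))

# hypothetical 8-SNP profiles for ten isolates
profiles = ["AGTCAGTC", "AGTCAGTC", "TGTCAGCC", "AGACAGTC", "TGTCAGCC",
            "AGTTAGTC", "CGTCAGTC", "AGTCAATC", "AGTCAGCC", "TGACAGTC"]
print(round(simpsons_diversity(profiles), 3))   # 0.956 for this toy set
```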

  1. Improving cerebellar segmentation with statistical fusion

    Science.gov (United States)

    Plassard, Andrew J.; Yang, Zhen; Prince, Jerry L.; Claassen, Daniel O.; Landman, Bennett A.

    2016-03-01

    The cerebellum is a somatotopically organized central component of the central nervous system, well known to be involved in motor coordination and with increasingly recognized roles in cognition and planning. Recent work in multi-atlas labeling has created methods that offer the potential for fully automated 3-D parcellation of the cerebellar lobules and vermis (which are organizationally equivalent to cortical gray matter areas). This work explores the trade-offs of using different statistical fusion techniques and post hoc optimizations in two datasets with distinct imaging protocols. We offer a novel fusion technique by extending the ideas of the Selective and Iterative Method for Performance Level Estimation (SIMPLE) to a patch-based performance model. We demonstrate the effectiveness of our algorithm, Non-Local SIMPLE, for segmentation of a mixed population of healthy subjects and patients with severe cerebellar atrophy. Under the first imaging protocol, we show that Non-Local SIMPLE outperforms previous gold-standard segmentation techniques. In the second imaging protocol, we show that Non-Local SIMPLE outperforms previous gold-standard techniques but is outperformed by a non-locally weighted vote with the deeper population of atlases available. This work advances the state of the art in open source cerebellar segmentation algorithms and offers the opportunity for routinely including cerebellar segmentation in magnetic resonance imaging studies that acquire whole brain T1-weighted volumes with approximately 1 mm isotropic resolution.

  2. A comparison of cosegregation analysis methods for the clinical setting.

    Science.gov (United States)

    Rañola, John Michael O; Liu, Quanhui; Rosenthal, Elisabeth A; Shirts, Brian H

    2018-04-01

    Quantitative cosegregation analysis can help evaluate the pathogenicity of genetic variants. However, genetics professionals without statistical training often use simple methods, reporting only qualitative findings. We evaluate the potential utility of quantitative cosegregation in the clinical setting by comparing three methods. One thousand pedigrees each were simulated for benign and pathogenic variants in BRCA1 and MLH1 using United States historical demographic data to produce pedigrees similar to those seen in the clinic. These pedigrees were analyzed using two robust methods, full likelihood Bayes factors (FLB) and cosegregation likelihood ratios (CSLR), and a simpler method, counting meioses. Both FLB and CSLR outperform counting meioses when dealing with pathogenic variants, though counting meioses is not far behind. For benign variants, FLB and CSLR greatly outperform it, as counting meioses is unable to generate evidence for benign variants. Comparing FLB and CSLR, we find that the two methods perform similarly, indicating that quantitative results from either of these methods could be combined in multifactorial calculations. Combining quantitative information will be important as isolated use of cosegregation in single families will yield classification for less than 1% of variants. To encourage wider use of robust cosegregation analysis, we present a website (http://www.analyze.myvariant.org) which implements the CSLR, FLB, and counting meioses methods for ATM, BRCA1, BRCA2, CHEK2, MEN1, MLH1, MSH2, MSH6, and PMS2. We also present an R package, CoSeg, which performs the CSLR analysis on any gene with user-supplied parameters. Future variant classification guidelines should allow nuanced inclusion of cosegregation evidence against pathogenicity.
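
    The simplest of the three methods is easy to state: under the null hypothesis, a variant tracks with disease through m informative meioses by chance with probability (1/2)^m, so the implied likelihood ratio for cosegregation is 2^m. A sketch (the FLB and CSLR computations are substantially more involved):

```python
def counting_meioses(m):
    """Chance probability and implied likelihood ratio for m informative meioses."""
    p_chance = 0.5 ** m
    return p_chance, 1.0 / p_chance

for m in (3, 5, 10):
    p, lr = counting_meioses(m)
    print(f"{m} meioses: chance p = {p:.4f}, likelihood ratio = {lr:.0f}")
```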

  3. Vitis phylogenomics: hybridization intensities from a SNP array outperform genotype calls.

    Directory of Open Access Journals (Sweden)

    Allison J Miller

    Full Text Available Understanding relationships among species is a fundamental goal of evolutionary biology. Single nucleotide polymorphisms (SNPs) identified through next generation sequencing and related technologies enable phylogeny reconstruction by providing unprecedented numbers of characters for analysis. One approach to SNP-based phylogeny reconstruction is to identify SNPs in a subset of individuals, and then to compile SNPs on an array that can be used to genotype additional samples at hundreds or thousands of sites simultaneously. Although powerful and efficient, this method is subject to ascertainment bias because applying variation discovered in a representative subset to a larger sample favors identification of SNPs with high minor allele frequencies and introduces bias against rare alleles. Here, we demonstrate that the use of hybridization intensity data, rather than genotype calls, reduces the effects of ascertainment bias. Whereas traditional SNP calls assess known variants based on diversity housed in the discovery panel, hybridization intensity data survey variation in the broader sample pool, regardless of whether those variants are present in the initial SNP discovery process. We apply SNP genotype and hybridization intensity data derived from the Vitis9kSNP array developed for grape to show the effects of ascertainment bias and to reconstruct evolutionary relationships among Vitis species. We demonstrate that phylogenies constructed using hybridization intensities suffer less from the distorting effects of ascertainment bias, and are thus more accurate than phylogenies based on genotype calls. Moreover, we reconstruct the phylogeny of the genus Vitis using hybridization data, show that North American subgenus Vitis species are monophyletic, and resolve several previously poorly known relationships among North American species. This study builds on earlier work that applied the Vitis9kSNP array to evolutionary questions within Vitis vinifera

  4. Method of predicting Splice Sites based on signal interactions

    Directory of Open Access Journals (Sweden)

    Deogun Jitender S

    2006-04-01

    Full Text Available Abstract Background: Predicting and properly ranking canonical splice sites (SSs) is a challenging problem in the bioinformatics and machine learning communities. Any progress in SS recognition will lead to a better understanding of the splicing mechanism. We introduce several new approaches for combining a priori knowledge for improved SS detection. First, we design a new Bayesian SS sensor based on oligonucleotide counting. To further enhance prediction quality, we applied our new de novo motif detection tool MHMMotif to intronic ends and exons. We combine the elements found with sensor information using a Naive Bayesian Network, as implemented in our new tool SpliceScan. Results: According to our tests, the Bayesian sensor outperforms the contemporary Maximum Entropy sensor for 5' SS detection. We report a number of putative Exonic (ESE) and Intronic (ISE) Splicing Enhancers found by the MHMMotif tool. T-test statistics on mouse/rat intronic alignments indicate that the detected elements are on average more conserved than other oligos, which supports our assumption of their functional importance. The tool has been shown to outperform the SpliceView, GeneSplicer, NNSplice, Genio and NetUTR tools on a test set of human genes. SpliceScan outperforms all contemporary ab initio gene structure prediction tools on the set of 5' UTR gene fragments. Conclusion: The designed methods have many attractive properties compared to existing approaches. The Bayesian sensor, MHMMotif program and SpliceScan tools are freely available on our web site. Reviewers: This article was reviewed by Manyuan Long, Arcady Mushegian and Mikhail Gelfand.
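
    The abstract describes the Bayesian sensor only as "oligonucleotide counting"; a minimal position-specific log-odds sensor trained on aligned donor-site windows conveys the flavor (toy sequences, order-0 model, uniform background):

```python
import numpy as np

BASES = "ACGT"

def train_sensor(sites, background):
    """Position-specific log-odds scores from aligned true splice-site windows."""
    counts = np.ones((len(sites[0]), 4))             # +1 pseudocount
    for s in sites:
        for i, b in enumerate(s):
            counts[i, BASES.index(b)] += 1
    probs = counts / counts.sum(axis=1, keepdims=True)
    return np.log(probs / np.asarray(background))

def score(seq, lod):
    return sum(lod[i, BASES.index(b)] for i, b in enumerate(seq))

donor_sites = ["CAGGTAAGT", "AAGGTGAGT", "CAGGTAAGG", "TAGGTAAGT"]  # toy 5' SSs
lod = train_sensor(donor_sites, [0.25] * 4)
print(score("CAGGTAAGT", lod), score("ACGTACGTA", lod))  # real site scores higher
```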

  5. Deep neural network for traffic sign recognition systems: An analysis of spatial transformers and stochastic optimisation methods.

    Science.gov (United States)

    Arcos-García, Álvaro; Álvarez-García, Juan A; Soria-Morillo, Luis M

    2018-03-01

    This paper presents a Deep Learning approach for traffic sign recognition systems. Several classification experiments are conducted over publicly available traffic sign datasets from Germany and Belgium using a Deep Neural Network which comprises Convolutional layers and Spatial Transformer Networks. Such trials are built to measure the impact of diverse factors with the end goal of designing a Convolutional Neural Network that can improve the state-of-the-art of traffic sign classification task. First, different adaptive and non-adaptive stochastic gradient descent optimisation algorithms such as SGD, SGD-Nesterov, RMSprop and Adam are evaluated. Subsequently, multiple combinations of Spatial Transformer Networks placed at distinct positions within the main neural network are analysed. The recognition rate of the proposed Convolutional Neural Network reports an accuracy of 99.71% in the German Traffic Sign Recognition Benchmark, outperforming previous state-of-the-art methods and also being more efficient in terms of memory requirements. Copyright © 2018 Elsevier Ltd. All rights reserved.

  6. Relative accuracy of three common methods of parentage analysis in natural populations

    KAUST Repository

    Harrison, Hugo B.; Saenz Agudelo, Pablo; Planes, Serge; Jones, Geoffrey P.; Berumen, Michael L.

    2012-01-01

    Parentage studies and family reconstructions have become increasingly popular for investigating a range of evolutionary, ecological and behavioural processes in natural populations. However, a number of different assignment methods have emerged in common use and the accuracy of each may differ in relation to the number of loci examined, allelic diversity, incomplete sampling of all candidate parents and the presence of genotyping errors. Here, we examine how these factors affect the accuracy of three popular parentage inference methods (colony, famoz and an exclusion-Bayes' theorem approach by Christie (Molecular Ecology Resources, 2010a, 10, 115)) to resolve true parent-offspring pairs using simulated data. Our findings demonstrate that accuracy increases with the number and diversity of loci. These were clearly the most important factors in obtaining accurate assignments explaining 75-90% of variance in overall accuracy across 60 simulated scenarios. Furthermore, the proportion of candidate parents sampled had a small but significant impact on the susceptibility of each method to either false-positive or false-negative assignments. Within the range of values simulated, colony outperformed FaMoz, which outperformed the exclusion-Bayes' theorem method. However, with 20 or more highly polymorphic loci, all methods could be applied with confidence. Our results show that for parentage inference in natural populations, careful consideration of the number and quality of markers will increase the accuracy of assignments and mitigate the effects of incomplete sampling of parental populations. © 2012 Blackwell Publishing Ltd.

  7. Relative accuracy of three common methods of parentage analysis in natural populations

    KAUST Repository

    Harrison, Hugo B.

    2012-12-27

    Parentage studies and family reconstructions have become increasingly popular for investigating a range of evolutionary, ecological and behavioural processes in natural populations. However, a number of different assignment methods have emerged in common use and the accuracy of each may differ in relation to the number of loci examined, allelic diversity, incomplete sampling of all candidate parents and the presence of genotyping errors. Here, we examine how these factors affect the accuracy of three popular parentage inference methods (colony, famoz and an exclusion-Bayes' theorem approach by Christie (Molecular Ecology Resources, 2010a, 10, 115)) to resolve true parent-offspring pairs using simulated data. Our findings demonstrate that accuracy increases with the number and diversity of loci. These were clearly the most important factors in obtaining accurate assignments explaining 75-90% of variance in overall accuracy across 60 simulated scenarios. Furthermore, the proportion of candidate parents sampled had a small but significant impact on the susceptibility of each method to either false-positive or false-negative assignments. Within the range of values simulated, colony outperformed FaMoz, which outperformed the exclusion-Bayes' theorem method. However, with 20 or more highly polymorphic loci, all methods could be applied with confidence. Our results show that for parentage inference in natural populations, careful consideration of the number and quality of markers will increase the accuracy of assignments and mitigate the effects of incomplete sampling of parental populations. © 2012 Blackwell Publishing Ltd.

  8. Previously unknown species of Aspergillus.

    Science.gov (United States)

    Gautier, M; Normand, A-C; Ranque, S

    2016-08-01

    The use of multi-locus DNA sequence analysis has led to the description of previously unknown 'cryptic' Aspergillus species, whereas classical morphology-based identification of Aspergillus remains limited to the section or species-complex level. The current literature highlights two main features concerning these 'cryptic' Aspergillus species. First, the prevalence of such species in clinical samples is relatively high compared with emergent filamentous fungal taxa such as Mucorales, Scedosporium or Fusarium. Second, it is clearly important to identify these species in the clinical laboratory because of the high frequency of antifungal drug-resistant isolates of such Aspergillus species. Matrix-assisted laser desorption/ionization-time of flight mass spectrometry (MALDI-TOF MS) has recently been shown to enable the identification of filamentous fungi with an accuracy similar to that of DNA sequence-based methods. As MALDI-TOF MS is well suited to the routine clinical laboratory workflow, it facilitates the identification of these 'cryptic' Aspergillus species at the routine mycology bench. The rapid establishment of enhanced filamentous fungi identification facilities will lead to a better understanding of the epidemiology and clinical importance of these emerging Aspergillus species. Based on routine MALDI-TOF MS-based identification results, we provide original insights into the key interpretation issues of a positive Aspergillus culture from a clinical sample. Which ubiquitous species that are frequently isolated from air samples are rarely involved in human invasive disease? Can both the species and the type of biological sample indicate Aspergillus carriage, colonization or infection in a patient? Highly accurate routine filamentous fungi identification is central to enhance the understanding of these previously unknown Aspergillus species, with a vital impact on further improved patient care. Copyright © 2016 European Society of Clinical Microbiology and

  9. Calculation of 0-0 excitation energies of organic molecules by CIS(D) quantum chemical methods

    International Nuclear Information System (INIS)

    Grimme, Stefan; Izgorodina, Ekaterina I.

    2004-01-01

    The accuracy and reliability of the CIS(D) quantum chemical method and a spin-component scaled variant (SCS-CIS(D)) are tested for calculating 0-0 excitation energies of organic molecules. The ground and excited state geometries and the vibrational zero-point corrections are taken from (TD)DFT-B3LYP calculations. In total 32 valence excited states of different character are studied: π → π* states of polycyclic aromatic compounds/polyenes and n → π* states of carbonyl, thiocarbonyl and aza(azo)-aromatic compounds. This set is augmented by two systems of special interest, i.e., indole and the TICT state of dimethylaminobenzonitrile (DMABN). Both methods predict excitation energies that are on average higher than experiment by about 0.2 eV. The errors are found to be quite systematic (with a standard deviation of about 0.15 eV) and especially SCS-CIS(D) provides a more balanced treatment of π → π* vs. n → π* states. For the test suite of states, both methods clearly outperform the (TD)DFT-B3LYP approach. Contrary to previous conclusions about the performance of CIS(D), these methods can be recommended as reliable and efficient tools for computational studies of excited state problems in organic chemistry. In order to obtain conclusive results, however, the use of optimized excited state geometries and comparison with observables (0-0 excitation energies) are necessary.

  10. Kernel reconstruction methods for Doppler broadening — Temperature interpolation by linear combination of reference cross sections at optimally chosen temperatures

    International Nuclear Information System (INIS)

    Ducru, Pablo; Josey, Colin; Dibert, Karia; Sobes, Vladimir; Forget, Benoit; Smith, Kord

    2017-01-01

    This paper establishes a new family of methods to perform temperature interpolation of nuclear interaction cross sections, reaction rates, or cross sections times the energy. One of these quantities at temperature T is approximated as a linear combination of quantities at reference temperatures (T_j). The problem is formalized in a cross section independent fashion by considering the kernels of the different operators that convert cross section related quantities from a temperature T_0 to a higher temperature T, namely the Doppler broadening operation. Doppler broadening interpolation of nuclear cross sections is thus here performed by reconstructing the kernel of the operation at a given temperature T by means of a linear combination of kernels at reference temperatures (T_j). The choice of the L2 metric yields optimal linear interpolation coefficients in the form of the solutions of a linear algebraic system inversion. The optimization of the choice of reference temperatures (T_j) is then undertaken so as to best reconstruct, in the L∞ sense, the kernels over a given temperature range [T_min, T_max]. The performance of these kernel reconstruction methods is then assessed in light of previous temperature interpolation methods by testing them on the isotope 238U. Temperature-optimized free Doppler kernel reconstruction significantly outperforms all previous interpolation-based methods, achieving 0.1% relative error on temperature interpolation of the 238U total cross section over the temperature range [300 K, 3000 K] with only 9 reference temperatures.
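
    The linear-algebra core of the approach is compact: the L2-optimal coefficients c_j solve the Gram system (K K^T) c = K k_T built from the sampled reference kernels. A sketch with a stand-in Gaussian kernel in place of the true Doppler broadening kernel:

```python
import numpy as np

def interpolation_coefficients(ref_kernels, target_kernel):
    """L2-optimal weights c so that sum_j c_j k_j approximates k_T on a grid."""
    K = np.asarray(ref_kernels)          # shape (m, n_grid)
    return np.linalg.solve(K @ K.T, K @ target_kernel)

x = np.linspace(-5, 5, 2001)
kernel = lambda T: np.exp(-x ** 2 / (2 * T)) / np.sqrt(2 * np.pi * T)  # stand-in
refs = [kernel(T) for T in (0.5, 1.0, 2.0, 4.0)]     # reference "temperatures"
c = interpolation_coefficients(refs, kernel(1.5))
approx = sum(ci * ki for ci, ki in zip(c, refs))
print(c, np.abs(approx - kernel(1.5)).max())         # small reconstruction error
```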

  11. Deep-learning: investigating deep neural networks hyper-parameters and comparison of performance to shallow methods for modeling bioactivity data.

    Science.gov (United States)

    Koutsoukas, Alexios; Monaghan, Keith J; Li, Xiaoli; Huan, Jun

    2017-06-28

    In recent years, research in artificial neural networks has resurged, now under the deep-learning umbrella, and grown extremely popular. The recently reported success of DL techniques in crowd-sourced QSAR and predictive toxicology competitions has showcased these methods as powerful tools in drug-discovery and toxicology research. The aim of this work was twofold: first, a large number of hyper-parameter configurations was explored to investigate how they affect the performance of DNNs and to provide starting points when tuning DNNs; second, DNN performance was compared to that of popular methods widely employed in the field of cheminformatics, namely Naïve Bayes, k-nearest neighbor, random forest and support vector machines. Moreover, the robustness of the machine learning methods to different levels of artificially introduced noise was assessed. The open-source Caffe deep-learning framework and modern NVidia GPU units were utilized to carry out this study, allowing a large number of DNN configurations to be explored. We show that feed-forward deep neural networks are capable of achieving strong classification performance and outperform shallow methods across diverse activity classes when optimized. Hyper-parameters that were found to play a critical role are the activation function, dropout regularization, the number of hidden layers and the number of neurons. When compared to the other methods, tuned DNNs were found to statistically outperform them, with p value <0.01 based on the Wilcoxon statistical test. DNNs achieved on average MCC units 0.149 higher than NB, 0.092 higher than kNN, 0.052 higher than SVM with a linear kernel, 0.021 higher than RF and, finally, 0.009 higher than SVM with a radial basis function kernel. When exploring robustness to noise, non-linear methods were found to perform well when dealing with low levels of noise (lower than or equal to 20%); however, at higher levels of noise (above 30%), the Naïve Bayes method was found to perform well and even outperform the other methods at the highest level of noise.
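
    As a rough illustration of the comparison protocol (not the paper's Caffe setup), the following scikit-learn sketch scores a small feed-forward network against shallow baselines by MCC on synthetic data standing in for bioactivity fingerprints:

      # Compare a feed-forward neural network with shallow baselines by
      # Matthews correlation coefficient (MCC) on synthetic binary data.
      from sklearn.datasets import make_classification
      from sklearn.model_selection import train_test_split
      from sklearn.metrics import matthews_corrcoef
      from sklearn.naive_bayes import GaussianNB
      from sklearn.neighbors import KNeighborsClassifier
      from sklearn.ensemble import RandomForestClassifier
      from sklearn.svm import SVC
      from sklearn.neural_network import MLPClassifier

      X, y = make_classification(n_samples=2000, n_features=100, random_state=0)
      Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.3, random_state=0)

      models = {
          "NB": GaussianNB(),
          "kNN": KNeighborsClassifier(),
          "RF": RandomForestClassifier(n_estimators=200, random_state=0),
          "SVM-RBF": SVC(kernel="rbf"),
          # Key hyper-parameters per the study: activation, hidden layers,
          # neurons (dropout is not exposed by this simple estimator).
          "DNN": MLPClassifier(hidden_layer_sizes=(256, 128), activation="relu",
                               max_iter=500, random_state=0),
      }
      for name, m in models.items():
          m.fit(Xtr, ytr)
          print(name, round(matthews_corrcoef(yte, m.predict(Xte)), 3))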

  12. Optimization Methods in Emotion Recognition System

    Directory of Open Access Journals (Sweden)

    L. Povoda

    2016-09-01

    Full Text Available Emotions play a big role in our everyday communication and contain important information. This work describes a novel method of automatic emotion recognition from textual data. The method is based on well-known data mining techniques and a novel approach built on a parallel run of SVM (Support Vector Machine) classifiers, text preprocessing, and 3 optimization methods: sequential elimination of attributes, parameter optimization based on token groups, and a method of extending training data sets during practical testing and production-release final tuning. We outperformed current state-of-the-art methods, and the results were validated on bigger data sets (3346 manually labelled samples), which is less prone to overfitting when compared to related works. The accuracy achieved in this work is 86.89% for recognition of 5 emotional classes. The experiments were performed in a real-world helpdesk environment, processing the Czech language, but the proposed methodology is general and can be applied to many different languages.
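
    A minimal sketch of the central construction, several binary SVM classifiers run in parallel over TF-IDF features, is given below; the toy English sentences and the one-vs-rest reduction are illustrative assumptions, not the Czech helpdesk corpus or the authors' exact scheme:

      # Text emotion classification with SVMs run in parallel as
      # one-vs-rest binary classifiers over TF-IDF features.
      from sklearn.feature_extraction.text import TfidfVectorizer
      from sklearn.multiclass import OneVsRestClassifier
      from sklearn.pipeline import make_pipeline
      from sklearn.svm import LinearSVC

      texts = ["thanks, that solved my problem", "this is the third time it broke",
               "I am not sure what you mean", "great support, very happy",
               "nothing works and nobody answers"]
      labels = ["joy", "anger", "neutral", "joy", "anger"]

      clf = make_pipeline(
          TfidfVectorizer(ngram_range=(1, 2)),          # simple text preprocessing
          OneVsRestClassifier(LinearSVC(), n_jobs=-1),  # one SVM per emotion class
      )
      clf.fit(texts, labels)
      print(clf.predict(["why does it keep crashing"]))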

  13. Prevalent musculoskeletal pain as a correlate of previous exposure to torture

    DEFF Research Database (Denmark)

    Olsen, Dorte Reff; Montgomery, Edith; Bojholm, S

    2006-01-01

    AIM: To research possible associations between previous exposure to specific torture techniques and prevalent pain in the head and face, back, and feet. METHODS: 221 refugees, 193 males and 28 females, previously exposed to torture in their home country, were subjected to a clinical interview...... was general abuse of the whole body (OR 5.64, 95% CI 1.93-16.45). CONCLUSION: In spite of many factors being potentially co-responsible for prevalent pain, years after the torture took place it presents itself as strongly associated with specific loci of pain, with generalized effects, and with somatizing....

  14. Preoperative screening: value of previous tests.

    Science.gov (United States)

    Macpherson, D S; Snow, R; Lofgren, R P

    1990-12-15

    To determine the frequency of tests done in the year before elective surgery that might substitute for preoperative screening tests, and to determine the frequency of test results that change from a normal value to a value likely to alter perioperative management. Retrospective cohort analysis of computerized laboratory data (complete blood count; sodium, potassium, and creatinine levels; prothrombin time; and partial thromboplastin time). Urban tertiary care Veterans Affairs Hospital. Consecutive sample of 1109 patients who had elective surgery in 1988. At admission, 7549 preoperative tests were done, 47% of which duplicated tests performed in the previous year. Of 3096 previous results that were normal as defined by the hospital reference range and done closest to but before admission (median interval, 2 months), 13 (0.4%; 95% CI, 0.2% to 0.7%) repeat values were outside a range considered acceptable for surgery. Most of the abnormalities were predictable from the patient's history, and most were not noted in the medical record. Of 461 previous tests that were abnormal, 78 (17%; CI, 13% to 20%) repeat values at admission were outside a range considered acceptable for surgery (P less than 0.001 for the comparison of the frequency of clinically important abnormalities between patients with normal and with abnormal previous results). Physicians evaluating patients preoperatively could safely substitute the previous test results analyzed in this study for preoperative screening tests if the previous tests are normal and no obvious indication for retesting is present.

  15. Outcomes With Edoxaban Versus Warfarin in Patients With Previous Cerebrovascular Events

    DEFF Research Database (Denmark)

    Rost, Natalia S; Giugliano, Robert P; Ruff, Christian T

    2016-01-01

    BACKGROUND AND PURPOSE: Patients with atrial fibrillation and previous ischemic stroke (IS)/transient ischemic attack (TIA) are at high risk of recurrent cerebrovascular events despite anticoagulation. In this prespecified subgroup analysis, we compared warfarin with edoxaban in patients with versus without previous IS/TIA. METHODS: ENGAGE AF-TIMI 48 (Effective Anticoagulation With Factor Xa Next Generation in Atrial Fibrillation-Thrombolysis in Myocardial Infarction 48) was a double-blind trial of 21 105 patients with atrial fibrillation randomized to warfarin (international normalized ratio......). Because only HDER is approved, we focused on the comparison of HDER versus warfarin. RESULTS: Of 5973 (28.3%) patients with previous IS/TIA, 67% had CHADS2 (congestive heart failure, hypertension, age, diabetes, prior stroke/transient ischemic attack) >3 and 36% were ≥75 years. Compared with 15 132...

  16. Identifying Hierarchical and Overlapping Protein Complexes Based on Essential Protein-Protein Interactions and “Seed-Expanding” Method

    Directory of Open Access Journals (Sweden)

    Jun Ren

    2014-01-01

    Full Text Available Much evidence has demonstrated that protein complexes are overlapping and hierarchically organized in PPI networks. Meanwhile, the large size of PPI networks requires complex detection methods to have low time complexity. Up to now, few methods can identify overlapping and hierarchical protein complexes in a PPI network quickly. In this paper, a novel method, called MCSE, is proposed based on λ-module and “seed-expanding.” First, it chooses seeds as essential PPIs or edges with high edge clustering values. Then, it identifies protein complexes by expanding each seed to a λ-module. MCSE is suitable for large PPI networks because of its low time complexity. MCSE can identify overlapping protein complexes naturally because a protein can be visited by different seeds. MCSE uses the parameter λ_th to control the range of seed expanding and can detect a hierarchical organization of protein complexes by tuning the value of λ_th. Experimental results on S. cerevisiae show that this hierarchical organization is similar to that of known complexes in the MIPS database. The experimental results also show that MCSE outperforms other previous competing algorithms, such as CPM, CMC, Core-Attachment, Dpclus, HC-PIN, MCL, and NFC, in terms of functional enrichment and matching with known protein complexes.
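
    The greedy pattern behind “seed-expanding” can be sketched as follows; plain subgraph density stands in for the λ-module criterion, with the threshold playing the role of λ_th (a simplified illustration, not the MCSE algorithm itself):

      # Grow a seed edge by repeatedly adding the neighbour that keeps the
      # subgraph density above a threshold (a stand-in for lambda_th).
      import networkx as nx

      def expand_seed(G, seed_edge, density_th=0.5):
          module = set(seed_edge)
          while True:
              candidates = {n for m in module for n in G[m]} - module
              best = max(candidates,
                         key=lambda n: nx.density(G.subgraph(module | {n})),
                         default=None)
              if best is None or nx.density(G.subgraph(module | {best})) < density_th:
                  return module
              module.add(best)

      G = nx.karate_club_graph()   # stand-in for a PPI network
      print(expand_seed(G, (0, 1), density_th=0.5))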

  17. A SOM clustering pattern sequence-based next symbol prediction method for day-ahead direct electricity load and price forecasting

    International Nuclear Information System (INIS)

    Jin, Cheng Hao; Pok, Gouchol; Lee, Yongmi; Park, Hyun-Woo; Kim, Kwang Deuk; Yun, Unil; Ryu, Keun Ho

    2015-01-01

    Highlights: • A novel pattern sequence-based direct time series forecasting method was proposed. • Due to the use of SOM’s topology preserving property, only SOM can be applied. • SCPSNSP only deals with the cluster patterns, not each specific time series value. • SCPSNSP performs better than recently developed forecasting algorithms. - Abstract: In this paper, we propose a new day-ahead direct time series forecasting method for competitive electricity markets based on clustering and next symbol prediction. In the clustering step, the pattern sequence and its topology relations are obtained from self-organizing map time series clustering. In the next symbol prediction step, with each cluster label in the pattern sequence represented as a pair of its topologically identical coordinates, an artificial neural network is used to predict the topological coordinates of the next day by training on the relationship between the previous daily pattern sequence and its next-day pattern. According to the obtained topology relations, the nearest nonzero-hits pattern is assigned to the next day so that the whole time series values can be directly forecasted from the assigned cluster pattern. The proposed method was evaluated on the Spanish, Australian and New York electricity markets and compared with PSF and some of the most recently published forecasting methods. Experimental results show that the proposed method outperforms the best forecasting methods by at least 3.64%.

  18. Low-dose computed tomography image restoration using previous normal-dose scan

    International Nuclear Information System (INIS)

    Ma, Jianhua; Huang, Jing; Feng, Qianjin; Zhang, Hua; Lu, Hongbing; Liang, Zhengrong; Chen, Wufan

    2011-01-01

    Purpose: In current computed tomography (CT) examinations, the associated x-ray radiation dose is of significant concern to patients and operators. A simple and cost-effective means to perform the examinations is to lower the milliampere-seconds (mAs) or kVp parameter (or deliver less x-ray energy to the body) as low as reasonably achievable in data acquisition. However, lowering the mAs parameter will unavoidably increase data noise, and the noise would propagate into the CT image if no adequate noise control is applied during image reconstruction. Since a previously scanned normal-dose diagnostic CT image may be available in some clinical applications, such as CT perfusion imaging and CT angiography (CTA), this paper presents an innovative way to utilize the normal-dose scan as a priori information to induce signal restoration of the current low-dose CT image series. Methods: Unlike conventional local operations on neighboring image voxels, the nonlocal means (NLM) algorithm utilizes the redundancy of information across the whole image. This paper adapts the NLM to utilize the redundancy of information in the previous normal-dose scan and further exploits ways to optimize the nonlocal weights for low-dose image restoration in the NLM framework. The resulting algorithm is called the previous normal-dose scan induced nonlocal means (ndiNLM). Because of the optimized nature of the nonlocal weights calculation, the ndiNLM algorithm does not depend heavily on image registration between the current low-dose and the previous normal-dose CT scans. Furthermore, the smoothing parameter involved in the ndiNLM algorithm can be adaptively estimated based on the image noise relationship between the current low-dose and the previous normal-dose scanning protocols. Results: Qualitative and quantitative evaluations were carried out on a physical phantom as well as clinical abdominal and brain perfusion CT scans in terms of accuracy and resolution properties. The gain by the use
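
    A minimal sketch of the core idea, computing nonlocal-means weights against a registered previous normal-dose image, follows; the patch sizes, filtering parameter, and toy images are assumptions rather than the paper's optimized weighting or adaptive parameter estimation:

      # Nonlocal means where patch similarities against a registered prior
      # (normal-dose) image drive the restoration of a low-dose image.
      import numpy as np

      def ndi_nlm(low, prior, f=1, t=3, h=0.1):
          # f: patch radius, t: search-window radius, h: filtering parameter
          pad = f + t
          L = np.pad(low, pad, mode="reflect")
          P = np.pad(prior, pad, mode="reflect")
          out = np.zeros_like(low, dtype=float)
          H, W = low.shape
          for i in range(H):
              for j in range(W):
                  ic, jc = i + pad, j + pad
                  ref = L[ic - f:ic + f + 1, jc - f:jc + f + 1]
                  num = den = 0.0
                  for di in range(-t, t + 1):
                      for dj in range(-t, t + 1):
                          cand = P[ic + di - f:ic + di + f + 1,
                                   jc + dj - f:jc + dj + f + 1]
                          w = np.exp(-np.mean((ref - cand) ** 2) / h ** 2)
                          num += w * P[ic + di, jc + dj]
                          den += w
                  out[i, j] = num / den
          return out

      rng = np.random.default_rng(0)
      prior = np.kron(rng.random((8, 8)), np.ones((4, 4)))   # "normal-dose" image
      low = prior + 0.2 * rng.standard_normal(prior.shape)   # noisy "low-dose"
      restored = ndi_nlm(low, prior)
      print(float(np.mean((low - prior) ** 2)), float(np.mean((restored - prior) ** 2)))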

  19. A Matrix Splitting Method for Composite Function Minimization

    KAUST Repository

    Yuan, Ganzhao

    2016-12-07

    Composite function minimization captures a wide spectrum of applications in both computer vision and machine learning. It includes bound constrained optimization and cardinality regularized optimization as special cases. This paper proposes and analyzes a new Matrix Splitting Method (MSM) for minimizing composite functions. It can be viewed as a generalization of the classical Gauss-Seidel method and the Successive Over-Relaxation method for solving linear systems in the literature. Incorporating a new Gaussian elimination procedure, the matrix splitting method achieves state-of-the-art performance. For convex problems, we establish the global convergence, convergence rate, and iteration complexity of MSM, while for non-convex problems, we prove its global convergence. Finally, we validate the performance of our matrix splitting method on two particular applications: nonnegative matrix factorization and cardinality regularized sparse coding. Extensive experiments show that our method outperforms existing composite function minimization techniques in terms of both efficiency and efficacy.

  20. A Matrix Splitting Method for Composite Function Minimization

    KAUST Repository

    Yuan, Ganzhao; Zheng, Wei-Shi; Ghanem, Bernard

    2016-01-01

    Composite function minimization captures a wide spectrum of applications in both computer vision and machine learning. It includes bound constrained optimization and cardinality regularized optimization as special cases. This paper proposes and analyzes a new Matrix Splitting Method (MSM) for minimizing composite functions. It can be viewed as a generalization of the classical Gauss-Seidel method and the Successive Over-Relaxation method for solving linear systems in the literature. Incorporating a new Gaussian elimination procedure, the matrix splitting method achieves state-of-the-art performance. For convex problems, we establish the global convergence, convergence rate, and iteration complexity of MSM, while for non-convex problems, we prove its global convergence. Finally, we validate the performance of our matrix splitting method on two particular applications: nonnegative matrix factorization and cardinality regularized sparse coding. Extensive experiments show that our method outperforms existing composite function minimization techniques in terms of both efficiency and efficacy.
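
    For intuition, the classical Gauss-Seidel splitting that MSM generalizes can be written in a few lines; this is the textbook special case for linear systems, not the paper's MSM with its Gaussian elimination procedure:

      # Gauss-Seidel as a matrix splitting: for Ax = b, split A = M - N with
      # M the lower triangle (incl. diagonal); each sweep solves M x_new = N x + b.
      import numpy as np
      from scipy.linalg import solve_triangular

      def gauss_seidel(A, b, iters=50):
          M = np.tril(A)          # lower-triangular part, including the diagonal
          N = M - A               # so that A = M - N
          x = np.zeros_like(b)
          for _ in range(iters):
              x = solve_triangular(M, N @ x + b, lower=True)
          return x

      A = np.array([[4.0, 1.0, 0.0],
                    [1.0, 3.0, 1.0],
                    [0.0, 1.0, 2.0]])   # symmetric positive definite => converges
      b = np.array([1.0, 2.0, 3.0])
      print(gauss_seidel(A, b), np.linalg.solve(A, b))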

  1. Efficient Banknote Recognition Based on Selection of Discriminative Regions with One-Dimensional Visible-Light Line Sensor.

    Science.gov (United States)

    Pham, Tuyen Danh; Park, Young Ho; Kwon, Seung Yong; Park, Kang Ryoung; Jeong, Dae Sik; Yoon, Sungsoo

    2016-03-04

    Banknote papers are automatically recognized and classified in various machines, such as vending machines, automatic teller machines (ATM), and banknote-counting machines. Previous studies on automatic classification of banknotes have been based on the optical characteristics of banknote papers. On each banknote image, there are regions more distinguishable than others in terms of banknote types, sides, and directions. However, there has been little previous research on banknote recognition that has addressed the selection of distinguishable areas. To overcome this problem, we propose a method for recognizing banknotes by selecting more discriminative regions based on similarity mapping, using images captured by a one-dimensional visible light line sensor. Experimental results with various types of banknote databases show that our proposed method outperforms previous methods.

  2. Efficient Banknote Recognition Based on Selection of Discriminative Regions with One-Dimensional Visible-Light Line Sensor

    Directory of Open Access Journals (Sweden)

    Tuyen Danh Pham

    2016-03-01

    Full Text Available Banknote papers are automatically recognized and classified in various machines, such as vending machines, automatic teller machines (ATM), and banknote-counting machines. Previous studies on automatic classification of banknotes have been based on the optical characteristics of banknote papers. On each banknote image, there are regions more distinguishable than others in terms of banknote types, sides, and directions. However, there has been little previous research on banknote recognition that has addressed the selection of distinguishable areas. To overcome this problem, we propose a method for recognizing banknotes by selecting more discriminative regions based on similarity mapping, using images captured by a one-dimensional visible light line sensor. Experimental results with various types of banknote databases show that our proposed method outperforms previous methods.

  3. Hybrid recommendation methods in complex networks.

    Science.gov (United States)

    Fiasconaro, A; Tumminello, M; Nicosia, V; Latora, V; Mantegna, R N

    2015-07-01

    We propose two recommendation methods, based on the appropriate normalization of already existing similarity measures and on the convex combination of the recommendation scores derived from similarity between users and between objects. We validate the proposed measures on three data sets, and we compare the performance of our methods to that of other recommendation systems recently proposed in the literature. We show that the proposed similarity measures allow us to attain an improvement in performance of up to 20% with respect to existing nonparametric methods, and that the accuracy of a recommendation can vary widely from one specific bipartite network to another, which suggests that a careful choice of the most suitable method is highly relevant for an effective recommendation on a given system. Finally, we study how an increasing presence of random links in the network affects the recommendation scores, finding that one of the two recommendation algorithms introduced here can systematically outperform the others in noisy data sets.
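
    The convex-combination step can be sketched directly; the z-score normalization below is an illustrative assumption standing in for the paper's normalized similarity measures:

      # Convex combination of normalized user-based and item-based scores.
      import numpy as np

      def hybrid_scores(user_scores, item_scores, lam=0.5):
          # lam in [0, 1] interpolates between the two normalized score sets.
          z = lambda s: (s - s.mean()) / s.std()
          return lam * z(user_scores) + (1 - lam) * z(item_scores)

      user_based = np.array([0.9, 0.1, 0.4, 0.7])   # similarity-to-users scores
      item_based = np.array([0.2, 0.8, 0.5, 0.6])   # similarity-to-objects scores
      print(np.argsort(-hybrid_scores(user_based, item_based, lam=0.3)))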

  4. DASPfind: new efficient method to predict drug–target interactions

    KAUST Repository

    Ba Alawi, Wail

    2016-03-16

    Background Identification of novel drug–target interactions (DTIs) is important for drug discovery. Experimental determination of such DTIs is costly and time consuming, hence it necessitates the development of efficient computational methods for the accurate prediction of potential DTIs. To date, many computational methods have been proposed for this purpose, but they suffer from a high rate of false positive predictions. Results Here, we developed a novel computational DTI prediction method, DASPfind. DASPfind uses simple paths of particular lengths inferred from a graph that describes DTIs, similarities between drugs, and similarities between the protein targets of drugs. We show that on average, over the four gold standard DTI datasets, DASPfind significantly outperforms other existing methods when the single top-ranked predictions are considered, resulting in 46.17 % of these predictions being correct, and it achieves 49.22 % correct single top ranked predictions when the set of all DTIs for a single drug is tested. Furthermore, we demonstrate that our method is best suited for predicting DTIs in cases of drugs with no known targets or with few known targets. We also show the practical use of DASPfind by generating novel predictions for the Ion Channel dataset and validating them manually. Conclusions DASPfind is a computational method for finding reliable new interactions between drugs and proteins. We show over six different DTI datasets that DASPfind outperforms other state-of-the-art methods when the single top-ranked predictions are considered, or when a drug with no known targets or with few known targets is considered. We illustrate the usefulness and practicality of DASPfind by predicting novel DTIs for the Ion Channel dataset. The validated predictions suggest that DASPfind can be used as an efficient method to identify correct DTIs, thus reducing the cost of necessary experimental verifications in the process of drug discovery. DASPfind can be accessed online at: http://www.cbrc.kaust.edu.sa/daspfind.
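
    A toy sketch of path-based scoring in the spirit of DASPfind follows; the tiny graph, edge weights, and length damping are illustrative assumptions rather than the published scoring function:

      # Score a candidate drug-target pair by simple paths in a heterogeneous
      # graph of drug-drug similarities, target-target similarities, and DTIs.
      import networkx as nx

      G = nx.Graph()
      G.add_edge("drugA", "drugB", w=0.8)    # drug-drug similarity
      G.add_edge("drugB", "tgt1", w=1.0)     # known interaction
      G.add_edge("tgt1", "tgt2", w=0.6)      # target-target similarity
      G.add_edge("drugA", "tgt2", w=1.0)

      def score(G, drug, target, cutoff=3):
          total = 0.0
          for path in nx.all_simple_paths(G, drug, target, cutoff=cutoff):
              prod = 1.0
              for u, v in zip(path, path[1:]):
                  prod *= G[u][v]["w"]
              total += prod / len(path)      # damp longer paths (assumption)
          return total

      print(score(G, "drugA", "tgt1"))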

  5. The prevalence of previous self-harm amongst self-poisoning patients in Sri Lanka

    DEFF Research Database (Denmark)

    Mohamed, Fahim; Perera, Aravinda; Wijayaweera, Kusal

    2011-01-01

    BACKGROUND: One of the most important components of suicide prevention strategies is to target people who repeat self-harm as they are a high risk group. However, there is some evidence that the incidence of repeat self-harm is lower in Asia than in the West. The objective of this study...... was to investigate the prevalence of previous self-harm among a consecutive series of self-harm patients presenting to hospitals in rural Sri Lanka. METHOD: Six hundred and ninety-eight self-poisoning patients presenting to medical wards at two hospitals in Sri Lanka were interviewed about their previous episodes...... of self-harm. RESULTS: Sixty-one (8.7%, 95% CI 6.7-11%) patients reported at least one previous episode of self-harm [37 (10.7%) male, 24 (6.8%) female]; only 19 (2.7%, 95% CI 1.6-4.2%) patients had made more than one previous attempt. CONCLUSION: The low prevalence of previous self-harm is consistent...

  6. The pathogenicity of genetic variants previously associated with left ventricular non-compaction

    DEFF Research Database (Denmark)

    Abbasi, Yeganeh; Jabbari, Javad; Jabbari, Reza

    2016-01-01

    BACKGROUND: Left ventricular non-compaction (LVNC) is a rare cardiomyopathy. Many genetic variants have been associated with LVNC. However, the number of the previous LVNC-associated variants that are common in the background population remains unknown. The aim of this study was to provide...... an updated list of previously reported LVNC-associated variants with biologic description and to investigate the prevalence of LVNC variants in a healthy general population to find false-positive LVNC-associated variants. METHODS AND RESULTS: The Human Gene Mutation Database and PubMed were systematically...... searched to identify all previously reported LVNC-associated variants. Thereafter, the Exome Sequencing Project (ESP) and the Exome Aggregation Consortium (ExAC), which both represent the background population, were searched for all variants. Four in silico prediction tools were assessed to determine...

  7. Comparison of Classification Methods for Detecting Emotion from Mandarin Speech

    Science.gov (United States)

    Pao, Tsang-Long; Chen, Yu-Te; Yeh, Jun-Heng

    It is said that technology comes out of humanity. What is humanity? The very definition of humanity is emotion. Emotion is the basis for all human expression and the underlying theme behind everything that is done, said, thought or imagined. If computers are made able to perceive and respond to human emotion, human-computer interaction will become more natural. Several classifiers are adopted for automatically assigning an emotion category, such as anger, happiness or sadness, to a speech utterance. These classifiers were designed independently and tested on various emotional speech corpora, making it difficult to compare and evaluate their performance. In this paper, we first compared several popular classification methods and evaluated their performance by applying them to a Mandarin speech corpus consisting of five basic emotions, including anger, happiness, boredom, sadness and neutral. The extracted feature streams contain MFCC, LPCC, and LPC. The experimental results show that the proposed WD-MKNN classifier achieves an accuracy of 81.4% for the 5-class emotion recognition and outperforms other classification techniques, including KNN, MKNN, DW-KNN, LDA, QDA, GMM, HMM, SVM, and BPNN. Then, to verify the advantage of the proposed method, we compared these classifiers by applying them to another Mandarin expressive speech corpus consisting of two emotions. The experimental results still show that the proposed WD-MKNN outperforms the others.

  8. Experiments on Classification of Electroencephalography (EEG) Signals in Imagination of Direction using Stacked Autoencoder

    Directory of Open Access Journals (Sweden)

    Kenta Tomonaga

    2017-08-01

    Full Text Available This paper presents classification methods for electroencephalography (EEG) signals in imagination of direction, measured by a portable EEG headset. In the authors' previous studies, principal component analysis extracted significant features from EEG signals to construct neural network classifiers. To improve the performance, the authors have implemented a Stacked Autoencoder (SAE) for the classification. The SAE carries out feature extraction and classification in the form of a multi-layered neural network. Experimental results showed that the SAE outperformed the previous classifiers.

  9. DASPfind: new efficient method to predict drug–target interactions

    KAUST Repository

    Ba Alawi, Wail; Soufan, Othman; Essack, Magbubah; Kalnis, Panos; Bajic, Vladimir B.

    2016-01-01

    DASPfind is a computational method for finding reliable new interactions between drugs and proteins. We show over six different DTI datasets that DASPfind outperforms other state-of-the-art methods when the single top-ranked predictions are considered, or when a drug with no known targets or with few known targets is considered. We illustrate the usefulness and practicality of DASPfind by predicting novel DTIs for the Ion Channel dataset. The validated predictions suggest that DASPfind can be used as an efficient method to identify correct DTIs, thus reducing the cost of necessary experimental verifications in the process of drug discovery. DASPfind can be accessed online at: http://www.cbrc.kaust.edu.sa/daspfind.

  10. On jet substructure methods for signal jets

    Energy Technology Data Exchange (ETDEWEB)

    Dasgupta, Mrinal [Consortium for Fundamental Physics, School of Physics & Astronomy, University of Manchester,Oxford Road, Manchester M13 9PL (United Kingdom); Powling, Alexander [School of Physics & Astronomy, University of Manchester,Oxford Road, Manchester M13 9PL (United Kingdom); Siodmok, Andrzej [Institute of Nuclear Physics, Polish Academy of Sciences,ul. Radzikowskiego 152, 31-342 Kraków (Poland); CERN, PH-TH,CH-1211 Geneva 23 (Switzerland)

    2015-08-17

    We carry out simple analytical calculations and Monte Carlo studies to better understand the impact of QCD radiation on some well-known jet substructure methods for jets arising from the decay of boosted Higgs bosons. Understanding differences between taggers for these signal jets assumes particular significance in situations where they perform similarly on QCD background jets. As an explicit example of this we compare the Y-splitter method to the more recently proposed Y-pruning technique. We demonstrate how the insight we gain can be used to significantly improve the performance of Y-splitter by combining it with trimming and show that this combination outperforms the other taggers studied here, at high p_T. We also make analytical estimates for optimal parameter values, for a range of methods and compare to results from Monte Carlo studies.

  11. PROMIS PF CAT Outperforms the ODI and SF-36 Physical Function Domain in Spine Patients.

    Science.gov (United States)

    Brodke, Darrel S; Goz, Vadim; Voss, Maren W; Lawrence, Brandon D; Spiker, William Ryan; Hung, Man

    2017-06-15

    The Oswestry Disability Index v2.0 (ODI), SF36 Physical Function Domain (SF-36 PFD), and PROMIS Physical Function CAT v1.2 (PF CAT) questionnaires were prospectively collected from 1607 patients complaining of back or leg pain, visiting a university-based spine clinic. All questionnaires were collected electronically, using a tablet computer. The aim of this study was to compare the psychometric properties of the PROMIS PF CAT with those of the ODI and SF36 Physical Function Domain in the same patient population. Evidence-based decision-making is improved by using high-quality patient-reported outcomes measures. Prior studies have revealed the shortcomings of the ODI and SF36, commonly used in spine patients. The PROMIS Network has developed measures with excellent psychometric properties. The Physical Function domain, delivered by Computerized Adaptive Testing (PF CAT), performs well in the spine patient population, though to date direct comparisons with common measures have not been performed. Standard Rasch analysis was performed to directly compare the psychometrics of the PF CAT, ODI, and SF36 PFD. Spearman correlations were computed to examine the correlations of the three instruments. Time required for administration was also recorded. One thousand six hundred seven patients were administered all assessments. The time required to answer all items in the PF CAT, ODI, and SF-36 PFD was 44, 169, and 99 seconds, respectively. The ceiling and floor effects were excellent for the PF CAT (0.81%, 3.86%), while the ceiling effects were marginal and floor effects quite poor for the ODI (6.91% and 44.24%) and SF-36 PFD (5.97% and 23.65%). All instruments significantly correlated with each other. The PROMIS PF CAT outperforms the ODI and SF-36 PFD in the spine patient population and is highly correlated with them. It has better coverage, while taking less time to administer with fewer questions to answer. Level of Evidence: 2.

  12. Detection of previously undiagnosed cases of COPD in a high-risk population identified in general practice

    DEFF Research Database (Denmark)

    Løkke, Anders; Ulrik, Charlotte Suppli; Dahl, Ronald

    2012-01-01

    Background and Aim: Under-diagnosis of COPD is a widespread problem. This study aimed to identify previously undiagnosed cases of COPD in a high-risk population identified through general practice. Methods: Participating GPs (n = 241) recruited subjects with no previous diagnosis of lung disease,...

  13. Micro-droplet based directed evolution outperforms conventional laboratory evolution

    DEFF Research Database (Denmark)

    Sjostrom, Staffan L.; Huang, Mingtao; Nielsen, Jens

    2014-01-01

    We present droplet adaptive laboratory evolution (DrALE), a directed evolution method used to improve industrial enzyme-producing microorganisms for, e.g., feedstock digestion. DrALE is based on linking a desired phenotype to growth rate, allowing only desired cells to proliferate. Single cells are con...... a whole-genome mutated library of yeast cells for α-amylase activity.

  14. Alternating direction methods for classical and ptychographic phase retrieval

    International Nuclear Information System (INIS)

    Wen, Zaiwen; Yang, Chao; Liu, Xin; Marchesini, Stefano

    2012-01-01

    In this paper, we show how the augmented Lagrangian alternating direction method (ADM) can be used to solve both the classical and ptychographic phase retrieval problems. We point out the connection between ADM and projection algorithms such as the hybrid input–output algorithm, and compare its performance against standard algorithms for phase retrieval on a number of test images. Our computational experiments show that ADM appears to be less sensitive to the choice of relaxation parameters, and it usually outperforms the existing techniques for both the classical and ptychographic phase retrieval problems.

  15. Kidnapping Detection and Recognition in Previous Unknown Environment

    Directory of Open Access Journals (Sweden)

    Yang Tian

    2017-01-01

    Full Text Available An unaware event referred to as kidnapping makes the localization estimate incorrect. In a previously unknown environment, an incorrect localization result causes an incorrect mapping result in Simultaneous Localization and Mapping (SLAM) under kidnapping. In this situation, the explored and unexplored areas are divided in a way that makes kidnapping recovery difficult. To provide sufficient information on kidnapping, a framework to judge whether kidnapping has occurred and to identify the type of kidnapping with filter-based SLAM is proposed. The framework is called double kidnapping detection and recognition (DKDR) and performs two checks before and after the “update” process with different metrics in real time. To explain one of the principles of DKDR, we describe a property of filter-based SLAM that corrects the mapping result of the environment using the current observations after the “update” process. Two classical filter-based SLAM algorithms, Extended Kalman Filter (EKF) SLAM and Particle Filter (PF) SLAM, are modified to show that DKDR can be simply and widely applied in existing filter-based SLAM algorithms. Furthermore, a technique to determine the adapted thresholds of the metrics in real time without previous data is presented. Both simulated and experimental results demonstrate the validity and accuracy of the proposed method.

  16. A New Shape Description Method Using Angular Radial Transform

    Science.gov (United States)

    Lee, Jong-Min; Kim, Whoi-Yul

    Shape is one of the primary low-level image features in content-based image retrieval. In this paper we propose a new shape description method that consists of a rotationally invariant angular radial transform descriptor (IARTD). The IARTD is a feature vector that combines the magnitude and aligned phases of the angular radial transform (ART) coefficients. A phase correction scheme is employed to produce the aligned phase so that the IARTD is invariant to rotation. The distance between two IARTDs is defined by combining differences in the magnitudes and aligned phases. In an experiment using the MPEG-7 shape dataset, the proposed method outperforms existing methods; the average BEP of the proposed method is 57.69%, while the average BEPs of the invariant Zernike moments descriptor and the traditional ART are 41.64% and 36.51%, respectively.
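
    The phase-correction idea can be checked numerically: under a rotation by θ, an ART coefficient of angular order m acquires a factor e^(imθ), so subtracting m times a reference phase cancels θ. The sketch below uses random complex numbers in place of actual ART coefficients and takes the order-1 phase as the reference (an assumption; the paper defines its own alignment scheme):

      # Rotation invariance of magnitudes plus phase-aligned ART coefficients.
      import numpy as np

      rng = np.random.default_rng(1)
      orders = np.arange(4)                        # angular orders m = 0..3
      F = rng.standard_normal(4) + 1j * rng.standard_normal(4)

      theta = 0.7                                  # unknown rotation angle
      F_rot = F * np.exp(1j * orders * theta)      # coefficients after rotation

      def aligned_phases(F, orders, ref_order=1):
          # Subtract m times the reference phase; assumes F[ref_order] != 0.
          ref_phase = np.angle(F[ref_order])
          return np.angle(F * np.exp(-1j * orders * ref_phase))

      # Magnitudes and aligned phases agree before and after rotation.
      print(np.allclose(np.abs(F), np.abs(F_rot)),
            np.allclose(aligned_phases(F, orders), aligned_phases(F_rot, orders)))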

  17. How to prevent type 2 diabetes in women with previous gestational diabetes?

    DEFF Research Database (Denmark)

    Pedersen, Anne Louise Winkler; Terkildsen Maindal, Helle; Juul, Lise

    2017-01-01

    OBJECTIVES: Women with previous gestational diabetes (GDM) have a seven times higher risk of developing type 2 diabetes (T2DM) than women without. We aimed to review the evidence of effective behavioural interventions seeking to prevent T2DM in this high-risk group. METHODS: A systematic review...... of RCTs in several databases in March 2016. RESULTS: No specific intervention or intervention components were found superior. The pooled effect on diabetes incidence (four trials) was estimated to: -5.02 per 100 (95% CI: -9.24; -0.80). CONCLUSIONS: This study indicates that intervention is superior...... to no intervention in prevention of T2DM among women with previous GDM....

  18. Why envy outperforms admiration.

    Science.gov (United States)

    van de Ven, Niels; Zeelenberg, Marcel; Pieters, Rik

    2011-06-01

    Four studies tested the hypothesis that the emotion of benign envy, but not the emotions of admiration or malicious envy, motivates people to improve themselves. Studies 1 to 3 found that only benign envy was related to the motivation to study more (Study 1) and to actual performance on the Remote Associates Task (which measures intelligence and creativity; Studies 2 and 3). Study 4 found that an upward social comparison triggered benign envy and subsequent better performance only when people thought self-improvement was attainable. When participants thought self-improvement was hard, an upward social comparison led to more admiration and no motivation to do better. Implications of these findings for theories of social emotions such as envy, social comparisons, and for understanding the influence of role models are discussed.

  19. Do bilinguals outperform monolinguals?

    OpenAIRE

    Sejdi Sejdiu

    2016-01-01

    The relationship between second dialect acquisition and the psychological capacity of the learner is still a divisive topic that generates a lot of debate. A few researchers contend that the acquisition of the second dialect tends to improve the cognitive abilities in various individuals, but at the same time it could hinder the same abilities in other people. Currently, immersion is a common occurrence in some countries. In the recent past, it has significantly increased in its popularity, w...

  20. Non-invasive genetics outperforms morphological methods in faecal dietary analysis, revealing wild boar as a considerable conservation concern for ground-nesting birds.

    Science.gov (United States)

    Oja, Ragne; Soe, Egle; Valdmann, Harri; Saarma, Urmas

    2017-01-01

    Capercaillie (Tetrao urogallus) and other grouse species represent conservation concerns across Europe due to their negative abundance trends. In addition to habitat deterioration, predation is considered a major factor contributing to population declines. While the role of generalist predators on grouse predation is relatively well known, the impact of the omnivorous wild boar has remained elusive. We hypothesize that wild boar is an important predator of ground-nesting birds, but has been neglected as a bird predator because traditional morphological methods underestimate the proportion of birds in wild boar diet. To distinguish between different mammalian predator species, as well as different grouse prey species, we developed a molecular method based on the analysis of mitochondrial DNA that allows accurate species identification. We collected 109 wild boar faeces at protected capercaillie leks and surrounding areas and analysed bird consumption using genetic methods and classical morphological examination. Genetic analysis revealed that the proportion of birds in wild boar faeces was significantly higher (17.3%; 4.5×) than indicated by morphological examination (3.8%). Moreover, the genetic method allowed considerably more precise taxonomic identification of consumed birds compared to morphological analysis. Our results demonstrate: (i) the value of using genetic approaches in faecal dietary analysis due to their higher sensitivity, and (ii) that wild boar is an important predator of ground-nesting birds, deserving serious consideration in conservation planning for capercaillie and other grouse.

  1. Analysis of previous perceptual and motor experience in breaststroke kick learning

    Directory of Open Access Journals (Sweden)

    Ried Bettina

    2015-12-01

    Full Text Available One of the variables that influence motor learning is the learner’s previous experience, which may provide perceptual and motor elements to be transferred to a novel motor skill. For swimming skills, several motor experiences may prove effective. Purpose. The aim was to analyse the influence of previous experience in playing in water, swimming lessons, and music or dance lessons on learning the breaststroke kick. Methods. The study involved 39 Physical Education students possessing basic swimming skills, but not the breaststroke, who performed 400 acquisition trials followed by 50 retention and 50 transfer trials, during which the stroke index as well as rhythmic and spatial configuration indices were mapped, and who answered a yes/no questionnaire regarding previous experience. Data were analysed by ANOVA (p = 0.05) and the effect size (Cohen’s d ≥ 0.8 indicating a large effect size). Results. The whole sample improved their stroke index and spatial configuration index, but not their rhythmic configuration index. Although differences between groups were not significant, two types of experience showed large practical effects on learning: childhood experience of playing in water showed major, practically relevant positive effects, whereas having no experience in any of the three fields hampered the learning process. Conclusions. The results point towards a diverse impact of previous experience with rhythmic activities, swimming lessons, and especially playing in water during childhood, on learning the breaststroke kick.

  2. A practical comparison of methods to assess sum-of-products

    International Nuclear Information System (INIS)

    Rauzy, A.; Chatelet, E.; Dutuit, Y.; Berenguer, C.

    2003-01-01

    Many methods have been proposed in the literature to assess the probability of a sum-of-products. This problem has been shown to be computationally hard (namely, #P-hard). Therefore, algorithms can be compared only from a practical point of view. In this article, we first propose an efficient implementation of the pivotal decomposition method. This kind of algorithm is widely used in the Artificial Intelligence framework, but it is unfortunately almost never considered in the reliability engineering framework, except as a pedagogical tool. We report experimental results showing that this method is in general much more efficient than classical methods, which rewrite the sum-of-products under study into an equivalent sum of disjoint products. We then derive from our method a factorization algorithm to be used as a preprocessing method for binary decision diagrams. We show by means of experimental results that this latter approach outperforms the former ones.
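
    For reference, the pivotal (Shannon) decomposition P(f) = p_x P(f|x=1) + (1 - p_x) P(f|x=0) for the probability of a sum-of-products over independent Boolean variables can be written recursively; representing products as frozensets is an implementation choice, not the article's code:

      def prob(products, p):
          # products: set of frozensets of variable names (a sum-of-products)
          # p: dict mapping each variable to its probability of being true
          if not products:
              return 0.0                          # the empty sum is false
          if frozenset() in products:
              return 1.0                          # an empty product is true
          x = next(iter(next(iter(products))))    # pivot on any occurring variable
          on = {prod - {x} for prod in products}              # f restricted to x=1
          off = {prod for prod in products if x not in prod}  # f restricted to x=0
          return p[x] * prob(on, p) + (1 - p[x]) * prob(off, p)

      # f = ab + ac over independent variables a, b, c:
      f = {frozenset("ab"), frozenset("ac")}
      p = {"a": 0.5, "b": 0.2, "c": 0.3}
      print(prob(f, p))   # 0.5 * (0.2 + 0.3 - 0.2 * 0.3) = 0.22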

  3. Hydrological and environmental variables outperform spatial factors in structuring species, trait composition, and beta diversity of pelagic algae.

    Science.gov (United States)

    Wu, Naicheng; Qu, Yueming; Guse, Björn; Makarevičiūtė, Kristė; To, Szewing; Riis, Tenna; Fohrer, Nicola

    2018-03-01

    There has been increasing interest in algae-based bioassessment; in particular, trait-based approaches are increasingly suggested. However, the main drivers, especially the contribution of hydrological variables, of the species composition, trait composition, and beta diversity of algae communities are less studied. To link species and trait composition to multiple factors (i.e., hydrological variables, local environmental variables, and spatial factors) that potentially control species occurrence/abundance, and to determine their relative roles in shaping the species composition, trait composition, and beta diversities of pelagic algae communities, samples were collected from a German lowland catchment, where a well-proven ecohydrological model enabled prediction of long-term discharges at each sampling site. Both trait and species composition showed significant correlations with hydrological, environmental, and spatial variables, and variation partitioning revealed that the hydrological and local environmental variables outperformed spatial variables. A higher proportion of the variation in trait composition (57.0%) than in species composition (37.5%) could be explained by abiotic factors. Mantel tests showed that both species- and trait-based beta diversities were mostly related to hydrological and environmental heterogeneity, with hydrological variables contributing more than environmental ones, while the purely spatial impact was less important. Our findings revealed the relative importance of hydrological variables in shaping the pelagic algae community and the spatial patterns of its beta diversities, emphasizing the need to include hydrological variables in long-term biomonitoring campaigns and biodiversity conservation or restoration. A key implication for biodiversity conservation is that maintaining the instream flow regime and preserving various habitats among rivers are of vital importance. However, further investigations at multiple spatial and temporal scales are greatly needed.

  4. A clinically driven variant prioritization framework outperforms purely computational approaches for the diagnostic analysis of singleton WES data.

    Science.gov (United States)

    Stark, Zornitza; Dashnow, Harriet; Lunke, Sebastian; Tan, Tiong Y; Yeung, Alison; Sadedin, Simon; Thorne, Natalie; Macciocca, Ivan; Gaff, Clara; Oshlack, Alicia; White, Susan M; James, Paul A

    2017-11-01

    Rapid identification of clinically significant variants is key to the successful application of next generation sequencing technologies in clinical practice. The Melbourne Genomics Health Alliance (MGHA) variant prioritization framework employs a gene prioritization index based on clinician-generated a priori gene lists, and a variant prioritization index (VPI) based on rarity, conservation and protein effect. We used data from 80 patients who underwent singleton whole exome sequencing (WES) to test the ability of the framework to rank causative variants highly, and compared it against the performance of other gene and variant prioritization tools. Causative variants were identified in 59 of the patients. Using the MGHA prioritization framework the average rank of the causative variant was 2.24, with 76% ranked as the top priority variant, and 90% ranked within the top five. Using clinician-generated gene lists resulted in ranking causative variants an average of 8.2 positions higher than prioritization based on variant properties alone. This clinically driven prioritization approach significantly outperformed purely computational tools, placing a greater proportion of causative variants at the top or in the top five (permutation P-value=0.001). Clinicians included 40 of the 49 WES diagnoses in their a priori list of differential diagnoses (81%). The lists generated by PhenoTips and Phenomizer contained 14 (29%) and 18 (37%) of these diagnoses respectively. These results highlight the benefits of clinically led variant prioritization in increasing the efficiency of singleton WES data analysis and have important implications for developing models for the funding and delivery of genomic services.
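
    The two-index ranking is easy to state schematically in code; the gene-index values, VPI values, and the lexicographic combination rule below are assumptions for illustration, not the MGHA weights:

      # Rank variants by a clinician-driven gene priority index, breaking
      # ties with a variant priority index (rarity/conservation/effect).
      variants = [
          {"id": "v1", "gene": "BRCA1", "vpi": 0.7},
          {"id": "v2", "gene": "TTN",   "vpi": 0.9},
          {"id": "v3", "gene": "MYH7",  "vpi": 0.6},
      ]
      a_priori_genes = {"BRCA1": 1.0, "MYH7": 0.8}   # hypothetical clinician list

      def rank(variants, gene_index):
          # Genes on the clinician list dominate; VPI orders variants within.
          key = lambda v: (gene_index.get(v["gene"], 0.0), v["vpi"])
          return sorted(variants, key=key, reverse=True)

      print([v["id"] for v in rank(variants, a_priori_genes)])  # v1, v3, v2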

  5. Figure-ground segmentation based on class-independent shape priors

    Science.gov (United States)

    Li, Yang; Liu, Yang; Liu, Guojun; Guo, Maozu

    2018-01-01

    We propose a method to generate figure-ground segmentation by incorporating shape priors into the graph-cuts algorithm. Given an image, we first obtain a linear representation of an image and then apply directional chamfer matching to generate class-independent, nonparametric shape priors, which provide shape clues for the graph-cuts algorithm. We then enforce shape priors in a graph-cuts energy function to produce object segmentation. In contrast to previous segmentation methods, the proposed method shares shape knowledge for different semantic classes and does not require class-specific model training. Therefore, the approach obtains high-quality segmentation for objects. We experimentally validate that the proposed method outperforms previous approaches using the challenging PASCAL VOC 2010/2012 and Berkeley (BSD300) segmentation datasets.

  6. Dual linear structured support vector machine tracking method via scale correlation filter

    Science.gov (United States)

    Li, Weisheng; Chen, Yanquan; Xiao, Bin; Feng, Chen

    2018-01-01

    Adaptive tracking-by-detection methods based on the structured support vector machine (SVM) have performed well on recent visual tracking benchmarks. However, these methods did not adopt an effective strategy for object scale estimation, which limits overall tracking performance. We present a tracking method based on a dual linear structured support vector machine (DLSSVM) with a discriminative scale correlation filter. The collaborative tracker, comprising a DLSSVM model and a scale correlation filter, obtains good results in tracking target position and estimating scale. The fast Fourier transform is applied for detection. Extensive experiments show that our tracking approach outperforms many popular top-ranking trackers. On a benchmark including 100 challenging video sequences, the average precision of the proposed method is 82.8%.

  7. Real-time biscuit tile image segmentation method based on edge detection.

    Science.gov (United States)

    Matić, Tomislav; Aleksi, Ivan; Hocenski, Željko; Kraus, Dieter

    2018-05-01

    In this paper we propose a novel real-time Biscuit Tile Segmentation (BTS) method for images from a ceramic tile production line. The BTS method is based on signal change detection and contour tracing, with the main goal of separating tile pixels from the background in images captured on the production line. Usually, human operators visually inspect and classify the produced ceramic tiles. Computer vision and image processing techniques can automate the visual inspection process if they fulfill real-time requirements. An important step in this process is real-time segmentation of tile pixels. The BTS method is implemented for parallel execution on a GPU device to satisfy the real-time constraints of the tile production line. The BTS method outperforms 2D threshold-based methods, 1D edge detection methods and contour-based methods. The proposed BTS method is in use on the biscuit tile production line. Copyright © 2018 ISA. Published by Elsevier Ltd. All rights reserved.
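
    A much simplified stand-in for the pipeline, edge detection followed by contour tracing to isolate tile pixels, can be written with OpenCV; the synthetic image and thresholds are assumptions, and the real BTS method uses custom GPU-parallel signal-change detection:

      # Separate a bright "tile" from a dark background: detect edges,
      # trace the outer contour, and fill it to obtain the tile mask.
      import cv2
      import numpy as np

      img = np.full((200, 300), 40, np.uint8)            # dark conveyor background
      cv2.rectangle(img, (60, 50), (240, 150), 180, -1)  # bright synthetic "tile"
      img = cv2.GaussianBlur(img, (5, 5), 0)

      edges = cv2.Canny(img, 50, 150)                    # signal-change detection
      contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL,
                                     cv2.CHAIN_APPROX_SIMPLE)
      mask = np.zeros_like(img)
      cv2.drawContours(mask, contours, -1, 255, thickness=cv2.FILLED)
      print("tile pixels:", int((mask > 0).sum()))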

  8. Cause Information Extraction from Financial Articles Concerning Business Performance

    Science.gov (United States)

    Sakai, Hiroyuki; Masuyama, Shigeru

    We propose a method of extracting cause information from Japanese financial articles concerning business performance. Our method acquires cause information, e.g., "zidousya no uriage ga koutyou" ("Sales of cars were good"). Cause information is useful for investors in selecting companies to invest in. Our method extracts cause information in the form of causal expressions by using statistical information and initial clue expressions automatically. Our method can extract causal expressions without predetermined patterns or complex rules given by hand, and it is expected to be applicable to other tasks of acquiring phrases that have a particular meaning, not limited to cause information. We compared our method with our previous one, originally proposed for extracting phrases concerning traffic accident causes, and experimental results showed that our new method outperforms our previous one.

  9. Computational Methods for ChIP-seq Data Analysis and Applications

    KAUST Repository

    Ashoor, Haitham

    2017-04-25

    The development of Chromatin immunoprecipitation followed by sequencing (ChIP-seq) technology has enabled the construction of genome-wide maps of protein-DNA interaction. Such maps provide information about transcriptional regulation at the epigenetic level (histone modifications and histone variants) and at the level of transcription factor (TF) activity. This dissertation presents novel computational methods for ChIP-seq data analysis and applications. The work of this dissertation addresses four main challenges. First, I address the problem of detecting histone modifications from ChIP-seq cancer samples. The presence of copy number variations (CNVs) in cancer samples results in statistical biases that lead to inaccurate predictions when standard methods are used. To overcome this issue I developed HMCan, a specially designed algorithm to handle ChIP-seq cancer data by accounting for the presence of CNVs. When using ChIP-seq data from cancer cells, HMCan demonstrates unbiased and accurate predictions compared to the standard state-of-the-art methods. Second, I address the problem of identifying changes in histone modifications between two ChIP-seq samples with different genetic backgrounds (for example cancer vs. normal). In addition to CNVs, different antibody efficiency between samples and the presence of sample replicates are challenges for this problem. To overcome these issues, I developed the HMCan-diff algorithm as an extension to HMCan. HMCan-diff implements robust normalization methods to address the challenges listed above. HMCan-diff significantly outperforms other state-of-the-art methods on data containing cancer samples. Third, I investigate and analyze predictions of different methods for enhancer prediction based on ChIP-seq data. The analysis shows that predictions generated by different methods overlap poorly. To overcome this issue, I developed DENdb, a database that integrates enhancer predictions from different methods. DENdb also

  10. Machine Learning Algorithms Outperform Conventional Regression Models in Predicting Development of Hepatocellular Carcinoma

    Science.gov (United States)

    Singal, Amit G.; Mukherjee, Ashin; Elmunzer, B. Joseph; Higgins, Peter DR; Lok, Anna S.; Zhu, Ji; Marrero, Jorge A; Waljee, Akbar K

    2015-01-01

    Background Predictive models for hepatocellular carcinoma (HCC) have been limited by modest accuracy and lack of validation. Machine learning algorithms offer a novel methodology, which may improve HCC risk prognostication among patients with cirrhosis. Our study's aim was to develop and compare predictive models for HCC development among cirrhotic patients, using conventional regression analysis and machine learning algorithms. Methods We enrolled 442 patients with Child A or B cirrhosis at the University of Michigan between January 2004 and September 2006 (UM cohort) and prospectively followed them until HCC development, liver transplantation, death, or study termination. Regression analysis and machine learning algorithms were used to construct predictive models for HCC development, which were tested on an independent validation cohort from the Hepatitis C Antiviral Long-term Treatment against Cirrhosis (HALT-C) Trial. Both models were also compared to the previously published HALT-C model. Discrimination was assessed using receiver operating characteristic curve analysis, and diagnostic accuracy was assessed with net reclassification improvement and integrated discrimination improvement statistics. Results After a median follow-up of 3.5 years, 41 patients developed HCC. The UM regression model had a c-statistic of 0.61 (95%CI 0.56-0.67), whereas the machine learning algorithm had a c-statistic of 0.64 (95%CI 0.60–0.69) in the validation cohort. The machine learning algorithm had significantly better diagnostic accuracy as assessed by net reclassification improvement, and the previously published HALT-C model was also outperformed by the machine learning algorithm (p=0.047). Conclusion Machine learning algorithms improve the accuracy of risk stratifying patients with cirrhosis and can be used to accurately identify patients at high risk for developing HCC. PMID:24169273

  11. Human Splice-Site Prediction with Deep Neural Networks.

    Science.gov (United States)

    Naito, Tatsuhiko

    2018-04-18

    Accurate splice-site prediction is essential to delineate gene structures from sequence data. Several computational techniques have been applied to create systems that predict canonical splice sites. For classification tasks, deep neural networks (DNNs) have achieved record-breaking results and often outperformed other supervised learning techniques. In this study, a new method of splice-site prediction using DNNs was proposed. The proposed system receives an input sequence and returns an answer as to whether it is a splice site. The length of the input is 140 nucleotides, with the consensus sequence (i.e., "GT" and "AG" for the donor and acceptor sites, respectively) in the middle. Each input sequence is applied to the pretrained DNN model, which determines the probability that the input is a splice site. The model consists of convolutional layers and bidirectional long short-term memory network layers. The pretraining and validation were conducted using the data set tested in previously reported methods. The performance evaluation results showed that the proposed method can outperform the previous methods. In addition, the patterns learned by the DNNs were visualized as position frequency matrices (PFMs). Some of the PFMs were very similar to the consensus sequence. The trained DNN model and the brief source code for the prediction system have been uploaded. Further improvement will be achieved following the further development of DNNs.
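
    A sketch of the described architecture, convolutional layers feeding a bidirectional LSTM over a 140-nucleotide one-hot window, is given below in Keras; the filter counts, kernel size, and toy random data are assumptions, not the paper's exact configuration:

      # Conv1D + bidirectional LSTM classifier for 140-nt one-hot windows.
      import numpy as np
      from tensorflow import keras
      from tensorflow.keras import layers

      model = keras.Sequential([
          layers.Input(shape=(140, 4)),                 # 140 nt, one-hot A/C/G/T
          layers.Conv1D(64, 9, activation="relu", padding="same"),
          layers.MaxPooling1D(2),
          layers.Bidirectional(layers.LSTM(32)),
          layers.Dense(1, activation="sigmoid"),        # P(window is a splice site)
      ])
      model.compile(optimizer="adam", loss="binary_crossentropy",
                    metrics=["accuracy"])

      # Toy random windows standing in for real donor/acceptor training data.
      X = np.eye(4)[np.random.randint(0, 4, size=(256, 140))].astype("float32")
      y = np.random.randint(0, 2, size=(256, 1))
      model.fit(X, y, epochs=1, batch_size=32, verbose=0)
      print(model.predict(X[:2], verbose=0).ravel())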

  12. Lettuce (Lactuca sativa L. var. Sucrine) growth performance in complemented aquaponic solution outperforms hydroponics

    NARCIS (Netherlands)

    Delaide, Boris; Goddek, Simon; Gott, James; Soyeurt, Hélène; Jijakli, M.H.

    2016-01-01

    Plant growth performance is optimized under hydroponic conditions. The comparison between aquaponics and hydroponics has attracted considerable attention recently, particularly regarding plant yield. However, previous research has not focused on the potential of using aquaponic solution

  13. A Dictionary Learning Method with Total Generalized Variation for MRI Reconstruction.

    Science.gov (United States)

    Lu, Hongyang; Wei, Jingbo; Liu, Qiegen; Wang, Yuhao; Deng, Xiaohua

    2016-01-01

    Reconstructing images from noisy and incomplete measurements is always a challenge, especially for medical MR images with important details and features. This work proposes a novel dictionary learning model that integrates two sparse regularization methods: the total generalized variation (TGV) approach and adaptive dictionary learning (DL). In the proposed method, the TGV term selectively regularizes different image regions at different levels, largely avoiding oil-painting artifacts, while dictionary learning adaptively provides sparse representations of image features and effectively recovers image details. The proposed model is solved by a variable-splitting technique and the alternating direction method of multipliers. Extensive simulation results demonstrate that the proposed method consistently recovers MR images efficiently and outperforms the current state-of-the-art approaches in terms of higher PSNR and lower HFEN values.
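
    The dictionary-learning half of such a pipeline can be illustrated with scikit-learn's patch-based tools. This sketch denoises a magnitude image by sparse-coding its patches over a learned dictionary; it deliberately omits the TGV term and the ADMM splitting, so it shows only the DL component.

        # Illustrative sketch of the dictionary-learning (DL) component only:
        # learn an overcomplete dictionary from image patches and reconstruct
        # by sparse coding. The TGV regularizer and ADMM solver are omitted.
        import numpy as np
        from sklearn.decomposition import MiniBatchDictionaryLearning
        from sklearn.feature_extraction.image import (extract_patches_2d,
                                                      reconstruct_from_patches_2d)

        def dl_denoise(img, patch=8, n_atoms=128):
            patches = extract_patches_2d(img, (patch, patch))
            shape = patches.shape
            X = patches.reshape(shape[0], -1).astype(np.float64)
            mean = X.mean(axis=1, keepdims=True)
            X -= mean                                  # zero-mean patches
            dico = MiniBatchDictionaryLearning(n_components=n_atoms,
                                               transform_algorithm="omp",
                                               transform_n_nonzero_coefs=5)
            code = dico.fit(X).transform(X)            # sparse codes per patch
            X_hat = code @ dico.components_ + mean     # sparse reconstruction
            return reconstruct_from_patches_2d(X_hat.reshape(shape), img.shape)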

  14. Locating previously unknown patterns in data-mining results: a dual data- and knowledge-mining method

    Directory of Open Access Journals (Sweden)

    Knaus William A

    2006-03-01

    Full Text Available Abstract Background Data mining can be utilized to automate analysis of substantial amounts of data produced in many organizations. However, data mining produces large numbers of rules and patterns, many of which are not useful. Existing methods for pruning uninteresting patterns have only begun to automate the knowledge acquisition step (which is required for subjective measures of interestingness), hence leaving a serious bottleneck. In this paper we propose a method for automatically acquiring knowledge to shorten the pattern list by locating the novel and interesting ones. Methods The dual-mining method is based on automatically comparing the strength of patterns mined from a database with the strength of equivalent patterns mined from a relevant knowledgebase. When these two estimates of pattern strength do not match, a high "surprise score" is assigned to the pattern, identifying it as potentially interesting. The surprise score captures the degree of novelty or interestingness of the mined pattern. In addition, we show how to compute p values for each surprise score, thus filtering out noise and attaching statistical significance. Results We have implemented the dual-mining method using scripts written in Perl and R. We applied the method to a large patient database and a biomedical literature citation knowledgebase. The system estimated association scores for 50,000 patterns, composed of disease entities and lab results, by querying the database and the knowledgebase. It then computed the surprise scores by comparing the pairs of association scores. Finally, the system estimated the statistical significance of the scores. Conclusion The dual-mining method eliminates more than 90% of patterns with strong associations, thus identifying them as uninteresting. We found that the pruning of patterns using the surprise score matched the biomedical evidence in the 100 cases that were examined by hand. The method automates the acquisition of
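
    A toy rendering of the surprise-score idea under simplifying assumptions: association strengths are estimated as co-occurrence rates in the database and the knowledgebase, and the mismatch is scored with a two-proportion z-test. The paper's actual scoring and significance machinery is more elaborate.

        # Toy surprise score: compare a pattern's association strength in a
        # patient database with the equivalent pattern's strength in a
        # literature knowledgebase. A large mismatch => potentially interesting.
        from math import sqrt
        from scipy.stats import norm

        def surprise(db_hits, db_total, kb_hits, kb_total):
            p_db = db_hits / db_total        # strength in the database
            p_kb = kb_hits / kb_total        # strength in the knowledgebase
            p = (db_hits + kb_hits) / (db_total + kb_total)   # pooled rate
            se = sqrt(p * (1 - p) * (1 / db_total + 1 / kb_total))
            z = (p_db - p_kb) / se           # signed surprise score
            return z, 2 * norm.sf(abs(z))    # score and two-sided p value

        score, pval = surprise(db_hits=120, db_total=5000,
                               kb_hits=4, kb_total=9000)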

  15. Statistical methods for detecting differentially abundant features in clinical metagenomic samples.

    Directory of Open Access Journals (Sweden)

    James Robert White

    2009-04-01

    Full Text Available Numerous studies are currently underway to characterize the microbial communities inhabiting our world. These studies aim to dramatically expand our understanding of the microbial biosphere and, more importantly, hope to reveal the secrets of the complex symbiotic relationship between us and our commensal bacterial microflora. An important prerequisite for such discoveries is computational tools that are able to rapidly and accurately compare large datasets generated from complex bacterial communities to identify features that distinguish them. We present a statistical method for comparing clinical metagenomic samples from two treatment populations on the basis of count data (e.g., as obtained through sequencing) to detect differentially abundant features. Our method, Metastats, employs the false discovery rate to improve specificity in high-complexity environments, and separately handles sparsely-sampled features using Fisher's exact test. Under a variety of simulations, we show that Metastats performs well compared to previously used methods, and significantly outperforms other methods for features with sparse counts. We demonstrate the utility of our method on several datasets including a 16S rRNA survey of obese and lean human gut microbiomes, COG functional profiles of infant and mature gut microbiomes, and bacterial and viral metabolic subsystem data inferred from random sequencing of 85 metagenomes. The application of our method to the obesity dataset reveals differences between obese and lean subjects not reported in the original study. For the COG and subsystem datasets, we provide the first statistically rigorous assessment of the differences between these populations. The methods described in this paper are the first to address clinical metagenomic datasets comprising samples from multiple subjects. Our methods are robust across datasets of varied complexity and sampling level. While designed for metagenomic applications, our software
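
    The sparse-count path of such a test can be sketched as follows: features with few counts go through Fisher's exact test, and all p values are corrected with a Benjamini-Hochberg FDR procedure. This is a schematic reading of the abstract, not the Metastats implementation itself.

        # Schematic of the sparse-feature branch: Fisher's exact test per
        # feature, then Benjamini-Hochberg FDR across features.
        import numpy as np
        from scipy.stats import fisher_exact
        from statsmodels.stats.multitest import multipletests

        def sparse_feature_test(counts_a, counts_b, totals_a, totals_b):
            pvals = []
            for ca, cb in zip(counts_a, counts_b):
                table = [[ca, totals_a - ca], [cb, totals_b - cb]]
                pvals.append(fisher_exact(table)[1])
            reject, qvals, _, _ = multipletests(pvals, alpha=0.05,
                                                method="fdr_bh")
            return np.array(qvals), reject

        q, sig = sparse_feature_test([3, 0, 7], [0, 5, 1],
                                     totals_a=2000, totals_b=1800)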

  16. Invasive Acer negundo outperforms native species in non-limiting resource environments due to its higher phenotypic plasticity.

    Science.gov (United States)

    Porté, Annabel J; Lamarque, Laurent J; Lortie, Christopher J; Michalet, Richard; Delzon, Sylvain

    2011-11-24

    To identify the determinants of invasiveness, comparisons of traits of invasive and native species are commonly performed. Invasiveness is generally linked to higher values of reproductive, physiological and growth-related traits of the invasives relative to the natives in the introduced range. Phenotypic plasticity of these traits has also been cited as increasing the success of invasive species but has been little studied in invasive tree species. In a greenhouse experiment, we compared ecophysiological traits between Acer negundo, a species invasive in Europe, and co-occurring early- and late-successional native species, under different light, nutrient availability and disturbance regimes. We also compared species of the same species groups in situ, in riparian forests. Under non-limiting resources, A. negundo seedlings showed higher growth rates than the native species. However, A. negundo displayed equivalent or lower photosynthetic capacities and nitrogen content per unit leaf area compared to the native species; these findings were observed both in the seedlings in the greenhouse experiment and in adult trees in situ. These physiological traits were mostly conservative across the different light, nutrient and disturbance environments. Overall, under non-limiting light and nutrient conditions, the specific leaf area and total leaf area of A. negundo were substantially larger. The invasive species presented higher plasticity in allocation to foliage, and therefore in growth, with increasing nutrient and light availability relative to the native species. This higher plasticity in foliage allocation in response to light and nutrient availability induced better growth in non-limiting resource environments. These results shed more light on the invasiveness of A. negundo and suggest that such behaviour could explain its ability to outperform native tree species and contributes to its spread in European resource

  17. Alteration of Box-Jenkins methodology by implementing genetic algorithm method

    Science.gov (United States)

    Ismail, Zuhaimy; Maarof, Mohd Zulariffin Md; Fadzli, Mohammad

    2015-02-01

    A time series is a set of values observed sequentially through time. The Box-Jenkins methodology is a systematic method of identifying, fitting, checking and using integrated autoregressive moving average (ARIMA) time series models for forecasting. The Box-Jenkins method is appropriate for medium to long time series (at least 50 observations). When modelling such series, the difficulty lies in choosing the correct model order at the identification stage and in finding the right parameter estimates. This paper presents the development of a Genetic Algorithm heuristic for solving the identification and estimation problems in Box-Jenkins modelling. Data on international tourist arrivals to Malaysia were used to illustrate the effectiveness of the proposed method. The forecasts generated by the proposed model outperformed the single traditional Box-Jenkins model.
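
    A compact sketch of the idea under obvious assumptions: a genetic algorithm searches over (p, d, q) orders, with the AIC of a fitted ARIMA model as fitness. The paper's encoding, operators and fitness function may well differ.

        # Sketch: GA search over ARIMA(p, d, q) orders, fitness = AIC of fit.
        # Encoding and operators are illustrative, not the paper's design.
        import random
        import numpy as np
        from statsmodels.tsa.arima.model import ARIMA

        def fitness(series, order):
            try:
                return ARIMA(series, order=order).fit().aic  # lower is better
            except Exception:
                return np.inf                                # infeasible order

        def ga_arima(series, pop_size=12, gens=15, pmax=5, dmax=2, qmax=5):
            rnd = lambda: (random.randint(0, pmax), random.randint(0, dmax),
                           random.randint(0, qmax))
            pop = [rnd() for _ in range(pop_size)]
            for _ in range(gens):
                pop.sort(key=lambda o: fitness(series, o))
                survivors = pop[: pop_size // 2]
                children = []
                while len(survivors) + len(children) < pop_size:
                    a, b = random.sample(survivors, 2)
                    child = tuple(random.choice(p) for p in zip(a, b))  # crossover
                    if random.random() < 0.3:                           # mutation
                        child = rnd()
                    children.append(child)
                pop = survivors + children
            return min(pop, key=lambda o: fitness(series, o))

        # best_order = ga_arima(my_series)   # e.g. monthly tourist arrivals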

  18. Randomized Block Cubic Newton Method

    KAUST Repository

    Doikov, Nikita; Richtarik, Peter

    2018-01-01

    We study the problem of minimizing the sum of three convex functions: a differentiable, a twice-differentiable and a non-smooth term in a high dimensional setting. To this effect we propose and analyze a randomized block cubic Newton (RBCN) method, which in each iteration builds a model of the objective function formed as the sum of the natural models of its three components: a linear model with a quadratic regularizer for the differentiable term, a quadratic model with a cubic regularizer for the twice differentiable term, and a perfect (proximal) model for the nonsmooth term. Our method in each iteration minimizes the model over a random subset of blocks of the search variable. RBCN is the first algorithm with these properties, generalizing several existing methods and matching the best known bounds in all special cases. We establish $\mathcal{O}(1/\epsilon)$, $\mathcal{O}(1/\sqrt{\epsilon})$ and $\mathcal{O}(\log(1/\epsilon))$ rates under different assumptions on the component functions. Lastly, we show numerically that our method outperforms the state-of-the-art on a variety of machine learning problems, including cubically regularized least-squares, logistic regression with constraints, and Poisson regression.
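
    A schematic of one RBCN-style iteration under strong simplifying assumptions: a random coordinate block is chosen, and a cubically regularized second-order model over that block is minimized numerically. This ignores the three-way model split and the proximal term of the actual method.

        # Schematic single iteration of a randomized block cubic Newton step:
        # pick a random block S, build the local model
        #   m(h) = g_S.h + 0.5 h' H_SS h + (M/6)||h||^3, and minimize over h.
        # The real RBCN combines linear/quadratic/proximal models of 3 terms.
        import numpy as np
        from scipy.optimize import minimize

        def rbcn_step(f_grad, f_hess, x, block_size=5, M=10.0,
                      rng=np.random.default_rng()):
            n = x.size
            S = rng.choice(n, size=min(block_size, n), replace=False)
            g = f_grad(x)[S]
            H = f_hess(x)[np.ix_(S, S)]

            def model(h):
                return g @ h + 0.5 * h @ H @ h + (M / 6.0) * np.linalg.norm(h) ** 3

            h_star = minimize(model, np.zeros(S.size), method="BFGS").x
            x_new = x.copy()
            x_new[S] += h_star      # update only the chosen block
            return x_new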

  19. Randomized Block Cubic Newton Method

    KAUST Repository

    Doikov, Nikita

    2018-02-12

    We study the problem of minimizing the sum of three convex functions: a differentiable, a twice-differentiable and a non-smooth term in a high dimensional setting. To this effect we propose and analyze a randomized block cubic Newton (RBCN) method, which in each iteration builds a model of the objective function formed as the sum of the natural models of its three components: a linear model with a quadratic regularizer for the differentiable term, a quadratic model with a cubic regularizer for the twice differentiable term, and a perfect (proximal) model for the nonsmooth term. Our method in each iteration minimizes the model over a random subset of blocks of the search variable. RBCN is the first algorithm with these properties, generalizing several existing methods and matching the best known bounds in all special cases. We establish $\mathcal{O}(1/\epsilon)$, $\mathcal{O}(1/\sqrt{\epsilon})$ and $\mathcal{O}(\log(1/\epsilon))$ rates under different assumptions on the component functions. Lastly, we show numerically that our method outperforms the state-of-the-art on a variety of machine learning problems, including cubically regularized least-squares, logistic regression with constraints, and Poisson regression.

  20. Semiempirical Quantum-Chemical Orthogonalization-Corrected Methods: Benchmarks for Ground-State Properties.

    Science.gov (United States)

    Dral, Pavlo O; Wu, Xin; Spörkel, Lasse; Koslowski, Axel; Thiel, Walter

    2016-03-08

    The semiempirical orthogonalization-corrected OMx methods (OM1, OM2, and OM3) go beyond the standard MNDO model by including additional interactions in the electronic structure calculation. When augmented with empirical dispersion corrections, the resulting OMx-Dn approaches offer a fast and robust treatment of noncovalent interactions. Here we evaluate the performance of the OMx and OMx-Dn methods for a variety of ground-state properties using a large and diverse collection of benchmark sets from the literature, with a total of 13,035 original and derived reference data points. Extensive comparisons are made with the results from established semiempirical methods (MNDO, AM1, PM3, PM6, and PM7) that also use the NDDO (neglect of diatomic differential overlap) integral approximation. Statistical evaluations show that the OMx and OMx-Dn methods outperform the other methods for most of the benchmark sets.

  1. A multi-sample based method for identifying common CNVs in normal human genomic structure using high-resolution aCGH data.

    Directory of Open Access Journals (Sweden)

    Chihyun Park

    Full Text Available BACKGROUND: It is difficult to identify copy number variations (CNV in normal human genomic data due to noise and non-linear relationships between different genomic regions and signal intensity. A high-resolution array comparative genomic hybridization (aCGH containing 42 million probes, which is very large compared to previous arrays, was recently published. Most existing CNV detection algorithms do not work well because of noise associated with the large amount of input data and because most of the current methods were not designed to analyze normal human samples. Normal human genome analysis often requires a joint approach across multiple samples. However, the majority of existing methods can only identify CNVs from a single sample. METHODOLOGY AND PRINCIPAL FINDINGS: We developed a multi-sample-based genomic variations detector (MGVD that uses segmentation to identify common breakpoints across multiple samples and a k-means-based clustering strategy. Unlike previous methods, MGVD simultaneously considers multiple samples with different genomic intensities and identifies CNVs and CNV zones (CNVZs; CNVZ is a more precise measure of the location of a genomic variant than the CNV region (CNVR. CONCLUSIONS AND SIGNIFICANCE: We designed a specialized algorithm to detect common CNVs from extremely high-resolution multi-sample aCGH data. MGVD showed high sensitivity and a low false discovery rate for a simulated data set, and outperformed most current methods when real, high-resolution HapMap datasets were analyzed. MGVD also had the fastest runtime compared to the other algorithms evaluated when actual, high-resolution aCGH data were analyzed. The CNVZs identified by MGVD can be used in association studies for revealing relationships between phenotypes and genomic aberrations. Our algorithm was developed with standard C++ and is available in Linux and MS Windows format in the STL library. It is freely available at: http://embio.yonsei.ac.kr/~Park/mgvd.php.

  2. Typing DNA profiles from previously enhanced fingerprints using direct PCR.

    Science.gov (United States)

    Templeton, Jennifer E L; Taylor, Duncan; Handt, Oliva; Linacre, Adrian

    2017-07-01

    Fingermarks are a source of human identification both through the ridge patterns and DNA profiling. Typing nuclear STR DNA markers from previously enhanced fingermarks provides an alternative method of utilising the limited fingermark deposit that can be left behind during a criminal act. Dusting with fingerprint powders is a standard method used in classical fingermark enhancement and can affect DNA data. The ability to generate informative DNA profiles from powdered fingerprints using direct PCR swabs was investigated. Direct PCR was used because the opportunity to generate usable DNA profiles after performing any of the standard DNA extraction processes is minimal. Omitting the extraction step will, for many samples, be the key to success if there is limited sample DNA. DNA profiles were generated by direct PCR from 160 fingermarks after treatment with one of the following dactyloscopic fingerprint powders: white hadonite; silver aluminium; HiFi Volcano silk black; or black magnetic fingerprint powder. This was achieved by a combination of an optimised double-swabbing technique and swab media, omission of the extraction step to minimise loss of critical low-template DNA, and additional AmpliTaq Gold® DNA polymerase to boost the PCR. Ninety-eight of 160 samples (61%) were considered 'up-loadable' to the Australian National Criminal Investigation DNA Database (NCIDD). The method described required a minimum of working steps, equipment and reagents, and was completed within 4 h. Direct PCR allows the generation of DNA profiles from enhanced prints without the need to increase PCR cycle numbers beyond the manufacturer's recommendations. Particular emphasis was placed on preventing contamination by applying strict protocols and avoiding the use of previously used fingerprint brushes. Based on this extensive survey, the data provided indicate minimal effects of any of these four powders on the chance of obtaining DNA profiles from enhanced fingermarks. Copyright © 2017

  3. A new method for mobile phone image denoising

    Science.gov (United States)

    Jin, Lianghai; Jin, Min; Li, Xiang; Xu, Xiangyang

    2015-12-01

    Images captured by mobile phone cameras via pipeline processing usually contain various kinds of noise, especially granular noise with different shapes and sizes in both luminance and chrominance channels. In the chrominance channels, noise is closely related to image brightness. To improve image quality, this paper presents a new method to denoise such mobile phone images. The proposed scheme converts the noisy RGB image to luminance and chrominance images, which are then denoised by a common filtering framework. The common filtering framework processes a noisy pixel by first excluding the neighborhood pixels that significantly deviate from the (vector) median and then utilizing the remaining neighborhood pixels to restore the current pixel. In the framework, the strength of chrominance image denoising is controlled by image brightness. The experimental results show that the proposed method clearly outperforms several representative denoising methods in terms of both objective measures and visual evaluation.
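
    An illustrative NumPy rendering of the common filtering framework as the abstract describes it: neighbours that deviate strongly from the neighbourhood median are excluded, and the survivors are averaged to restore the pixel. The window size and deviation threshold are assumptions.

        # Sketch of the common filtering framework: per pixel, drop neighbours
        # far from the neighbourhood median, then average the survivors.
        # Threshold and window size are illustrative choices.
        import numpy as np

        def median_gated_filter(channel, win=3, thresh=20.0):
            pad = win // 2
            padded = np.pad(channel.astype(np.float64), pad, mode="reflect")
            out = np.empty_like(channel, dtype=np.float64)
            for i in range(channel.shape[0]):
                for j in range(channel.shape[1]):
                    nbhd = padded[i:i + win, j:j + win].ravel()
                    med = np.median(nbhd)
                    keep = nbhd[np.abs(nbhd - med) <= thresh]  # gate outliers
                    out[i, j] = keep.mean() if keep.size else med
            return out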

  4. VBAC Scoring: Successful vaginal delivery in previous one caesarean section in induced labour

    International Nuclear Information System (INIS)

    Raja, J.F.; Bangash, K.T.; Mahmud, G.

    2013-01-01

    Objective: To develop a scoring system for the prediction of successful vaginal birth after caesarean section, following induction of labour with intra-vaginal E2 gel (Glandin). Methods: The cross-sectional study was conducted from January 2010 to August 2011, at the Pakistan Institute of Medical Sciences in Islamabad. Trial of labour after one previous caesarean section, undergoing induction with intra-vaginal E2 gel, was attempted in 100 women. They were scored according to six variables: maternal age; gestation; indication for the previous caesarean; history of vaginal birth either before or after the previous caesarean; Bishop score; and body mass index. Multivariate and univariate logistic regression analysis was used to develop the scoring system. Results: Of the total, 67 (67%) women delivered vaginally, while 33 (33%) ended in repeat caesarean delivery. Among the subjects, 55 (55%) women had no history of vaginal delivery either before or after the previous caesarean section; 15 (15%) had a history of vaginal births both before and after the previous caesarean; while 30 (30%) had vaginal delivery only after the previous caesarean section. Rates of successful vaginal birth after caesarean increased from 38% in women having a score of 0-3 to 58% in patients scoring 4-6. Among those having a score of 7-9 and 10-12, the success rates were 71% and 86% respectively. Conclusion: Increasing scores correlated with increasing probability of vaginal birth after caesarean following induction of labour. The admission VBAC scoring system is useful in counselling women with a previous caesarean about the option of induction of labour versus repeat caesarean delivery. (author)

  5. Interaction sorting method for molecular dynamics on multi-core SIMD CPU architecture.

    Science.gov (United States)

    Matvienko, Sergey; Alemasov, Nikolay; Fomin, Eduard

    2015-02-01

    Molecular dynamics (MD) is widely used in computational biology for studying binding mechanisms of molecules, molecular transport, conformational transitions, protein folding, etc. The method is computationally expensive; thus, the demand for the development of novel, much more efficient algorithms is still high. Therefore, the new algorithm designed in 2007 and called interaction sorting (IS) clearly attracted interest, as it outperformed the most efficient MD algorithms. In this work, a new IS modification is proposed which allows the algorithm to utilize SIMD processor instructions. This paper shows that the improvement provides an additional gain in performance, 9% to 45% in comparison to the original IS method.

  6. 49 CFR 173.23 - Previously authorized packaging.

    Science.gov (United States)

    2010-10-01

    ... 49 Transportation 2 2010-10-01 2010-10-01 false Previously authorized packaging. 173.23 Section... REQUIREMENTS FOR SHIPMENTS AND PACKAGINGS Preparation of Hazardous Materials for Transportation § 173.23 Previously authorized packaging. (a) When the regulations specify a packaging with a specification marking...

  7. Supplementary Material for: DASPfind: new efficient method to predict drug–target interactions

    KAUST Repository

    Ba Alawi, Wail

    2016-01-01

    Abstract Background Identification of novel drug–target interactions (DTIs) is important for drug discovery. Experimental determination of such DTIs is costly and time consuming, hence it necessitates the development of efficient computational methods for the accurate prediction of potential DTIs. To-date, many computational methods have been proposed for this purpose, but they suffer the drawback of a high rate of false positive predictions. Results Here, we developed a novel computational DTI prediction method, DASPfind. DASPfind uses simple paths of particular lengths inferred from a graph that describes DTIs, similarities between drugs, and similarities between the protein targets of drugs. We show that on average, over the four gold standard DTI datasets, DASPfind significantly outperforms other existing methods when the single top-ranked predictions are considered, resulting in 46.17 % of these predictions being correct, and it achieves 49.22 % correct single top ranked predictions when the set of all DTIs for a single drug is tested. Furthermore, we demonstrate that our method is best suited for predicting DTIs in cases of drugs with no known targets or with few known targets. We also show the practical use of DASPfind by generating novel predictions for the Ion Channel dataset and validating them manually. Conclusions DASPfind is a computational method for finding reliable new interactions between drugs and proteins. We show over six different DTI datasets that DASPfind outperforms other state-of-the-art methods when the single top-ranked predictions are considered, or when a drug with no known targets or with few known targets is considered. We illustrate the usefulness and practicality of DASPfind by predicting novel DTIs for the Ion Channel dataset. The validated predictions suggest that DASPfind can be used as an efficient method to identify correct DTIs, thus reducing the cost of necessary experimental verifications in the process of drug discovery
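
    A toy version of path-based scoring on a heterogeneous graph, in the spirit of DASPfind's description: drugs, targets and known interactions form one graph, and a candidate pair is scored by summing contributions of simple paths connecting them, damped by path length. The edge weights, damping scheme and cutoff are assumptions.

        # Toy path-based drug-target scoring on a heterogeneous graph (in the
        # spirit of DASPfind's description; weights and damping are made up).
        import networkx as nx

        G = nx.Graph()
        G.add_edge("drugA", "drugB", w=0.8)    # drug-drug similarity
        G.add_edge("targX", "targY", w=0.6)    # target-target similarity
        G.add_edge("drugB", "targX", w=1.0)    # known interaction
        G.add_edge("drugA", "targY", w=1.0)

        def score(G, drug, target, cutoff=3, damping=2.0):
            """Sum over simple paths of (product of edge weights) / len**damping."""
            total = 0.0
            for path in nx.all_simple_paths(G, drug, target, cutoff=cutoff):
                w = 1.0
                for u, v in zip(path, path[1:]):
                    w *= G[u][v]["w"]
                total += w / (len(path) - 1) ** damping
            return total

        print(score(G, "drugA", "targX"))   # rank candidate pairs by this score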

  8. Hypothesis testing on the fractal structure of behavioral sequences: the Bayesian assessment of scaling methodology.

    Science.gov (United States)

    Moscoso del Prado Martín, Fermín

    2013-12-01

    I introduce the Bayesian assessment of scaling (BAS), a simple but powerful Bayesian hypothesis contrast methodology that can be used to test hypotheses on the scaling regime exhibited by a sequence of behavioral data. Rather than comparing parametric models, as typically done in previous approaches, the BAS offers a direct, nonparametric way to test whether a time series exhibits fractal scaling. The BAS provides a simpler and faster test than do previous methods, and the code for making the required computations is provided. The method also enables testing of finely specified hypotheses on the scaling indices, something that was not possible with the previously available methods. I then present 4 simulation studies showing that the BAS methodology outperforms the other methods used in the psychological literature. I conclude with a discussion of methodological issues on fractal analyses in experimental psychology. PsycINFO Database Record (c) 2014 APA, all rights reserved.

  9. Performance of human observers and an automatic 3-dimensional computer-vision-based locomotion scoring method to detect lameness and hoof lesions in dairy cows

    NARCIS (Netherlands)

    Schlageter-Tello, Andrés; Hertem, Van Tom; Bokkers, Eddie A.M.; Viazzi, Stefano; Bahr, Claudia; Lokhorst, Kees

    2018-01-01

    The objective of this study was to determine if a 3-dimensional computer vision automatic locomotion scoring (3D-ALS) method was able to outperform human observers for classifying cows as lame or nonlame and for detecting cows affected and nonaffected by specific type(s) of hoof lesion. Data

  10. 22 CFR 40.91 - Certain aliens previously removed.

    Science.gov (United States)

    2010-04-01

    ... 22 Foreign Relations 1 2010-04-01 2010-04-01 false Certain aliens previously removed. 40.91... IMMIGRANTS UNDER THE IMMIGRATION AND NATIONALITY ACT, AS AMENDED Aliens Previously Removed § 40.91 Certain aliens previously removed. (a) 5-year bar. An alien who has been found inadmissible, whether as a result...

  11. A website evaluation model by integration of previous evaluation models using a quantitative approach

    Directory of Open Access Journals (Sweden)

    Ali Moeini

    2015-01-01

    Full Text Available Given the growth of e-commerce, websites play an essential role in business success. Therefore, many authors have offered website evaluation models since 1995. However, the multiplicity and diversity of evaluation models makes it difficult to integrate them into a single comprehensive model. In this paper a quantitative method is used to integrate previous models into a comprehensive model that is compatible with them. In this approach the researcher's judgment plays no role in the integration of the models, and the new model derives its validity from the 93 previous models and the systematic quantitative approach.

  12. Outcome of trial of scar in patients with previous caesarean section

    International Nuclear Information System (INIS)

    Khan, B.; Bashir, R.; Khan, W.

    2016-01-01

    Medical evidence indicates that 60-80% of women can achieve vaginal delivery after a previous lower segment caesarean section. Proper selection of patients for trial of scar and vigilant monitoring during labour will achieve successful maternal and perinatal outcomes. The objective of our study was to establish that vaginal delivery after one caesarean section has a high success rate in patients with one previous caesarean section for a non-recurrent cause. Methods: The study was conducted in the Gynae-B Unit of Ayub Teaching Hospital, Abbottabad. All labouring patients during the five-year study period with one previous caesarean section, between 37 and 41 weeks of gestation, for a non-recurrent cause were included in the study. Data were recorded on a proforma designed for the purpose. Patients who had a previous classical caesarean section, more than one caesarean section, or a previous caesarean section with severe wound infection, and those with transverse lie or placenta praevia in the present pregnancy were excluded. Foetal macrosomia (weight >4 kg) and severe IUGR with compromised blood flow on Doppler in the present pregnancy were also not considered suitable for the study. Patients who had any absolute contraindication to vaginal delivery were also excluded. Results: There were 12,505 deliveries during the study period. Total vaginal deliveries were 8,790 and total caesarean sections were 3,715, giving a caesarean section rate of 29.7%. Of the 8,790 patients, 764 were given a trial of scar and 535 delivered vaginally (70%). Women who presented with spontaneous onset of labour were more likely to deliver vaginally (74.8%) compared with the induction group (27.1%). Conclusion: Trial of vaginal birth after caesarean (VBAC) in selected cases has great importance in the present era of a rising rate of primary caesarean section. (author)

  13. Study of functional-performance deficits in athletes with previous ankle sprains

    Directory of Open Access Journals (Sweden)

    hamid Babaee

    2008-04-01

    Full Text Available Abstract Background: Despite the importance of functional-performance deficits in athletes with a history of ankle sprain, few studies have been carried out in this area. The aim of this research was to study the relationship between previous ankle sprains and functional-performance deficits in athletes. Materials and methods: The subjects were 40 professional athletes selected through random sampling among volunteer participants in soccer, basketball, volleyball and handball teams of Lorestan province. The subjects were divided into 2 groups: an injured group (athletes with previous ankle sprains) and a healthy group (athletes without previous ankle sprains). In this descriptive study we used functional-performance tests (the figure-8 hop test and the side hop test) to determine ankle deficits and limitations. Subjects participated in the figure-8 hop test, which involved hopping around a figure-8 course 5 meters in length, and the side hop test, which involved 10 repeated side hops over a course 30 centimeters in length. Times were recorded via stopwatch. Results: After data gathering and assessment of the distributions, Pearson correlation was used to assess relationships, and the independent t test to assess differences between variables. The results showed that there is a significant relationship between previous ankle sprains and functional-performance deficits in athletes. Conclusion: Athletes who had previous ankle sprains showed more functional-performance deficits than healthy athletes on the mentioned functional-performance tests. The functional-performance tests (figure-8 hop test and side hop test) are sensitive and suitable for assessing and detecting functional-performance deficits in athletes. The figure-8 hop and side hop tests can therefore be used for purposes such as prevention, assessment and rehabilitation of ankle sprains without spending too much money and time.

  14. Regression Methods for Virtual Metrology of Layer Thickness in Chemical Vapor Deposition

    DEFF Research Database (Denmark)

    Purwins, Hendrik; Barak, Bernd; Nagi, Ahmed

    2014-01-01

    The quality of wafer production in semiconductor manufacturing cannot always be monitored by a costly physical measurement. Instead of measuring a quantity directly, it can be predicted by a regression method (Virtual Metrology). In this paper, a survey on regression methods is given to predict...... average Silicon Nitride cap layer thickness for the Plasma Enhanced Chemical Vapor Deposition (PECVD) dual-layer metal passivation stack process. Process and production equipment Fault Detection and Classification (FDC) data are used as predictor variables. Various variable sets are compared: one most...... algorithm, and Support Vector Regression (SVR). On a test set, SVR outperforms the other methods by a large margin, being more robust towards changes in the production conditions. The method performs better on high-dimensional multivariate input data than on the most predictive variables alone. Process...
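
    A minimal scikit-learn sketch of the virtual-metrology setup described: FDC process variables as predictors, layer thickness as the target, with a scaled SVR model. The synthetic data and model settings are placeholders, not the paper's configuration.

        # Minimal virtual-metrology sketch: predict layer thickness from FDC
        # process variables with Support Vector Regression. Settings are
        # placeholders, not the paper's tuned configuration.
        import numpy as np
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler
        from sklearn.svm import SVR
        from sklearn.model_selection import train_test_split
        from sklearn.metrics import mean_absolute_error

        rng = np.random.default_rng(0)
        X = rng.normal(size=(500, 40))           # stand-in FDC variables
        y = X[:, :3] @ np.array([0.5, -0.2, 0.1]) \
            + rng.normal(scale=0.05, size=500)   # stand-in thickness

        X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2,
                                                  random_state=0)
        model = make_pipeline(StandardScaler(), SVR(C=10.0, epsilon=0.01))
        model.fit(X_tr, y_tr)
        print("MAE:", mean_absolute_error(y_te, model.predict(X_te)))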

  15. AptRank: an adaptive PageRank model for protein function prediction on bi-relational graphs.

    Science.gov (United States)

    Jiang, Biaobin; Kloster, Kyle; Gleich, David F; Gribskov, Michael

    2017-06-15

    Diffusion-based network models are widely used for protein function prediction using protein network data and have been shown to outperform neighborhood-based and module-based methods. Recent studies have shown that integrating the hierarchical structure of the Gene Ontology (GO) data dramatically improves prediction accuracy. However, previous methods usually either used the GO hierarchy to refine the prediction results of multiple classifiers, or flattened the hierarchy into a function-function similarity kernel. No study has taken the GO hierarchy into account together with the protein network as a two-layer network model. We first construct a Bi-relational graph (Birg) model comprised of both protein-protein association and function-function hierarchical networks. We then propose two diffusion-based methods, BirgRank and AptRank, both of which use PageRank to diffuse information on this two-layer graph model. BirgRank is a direct application of traditional PageRank with fixed decay parameters. In contrast, AptRank utilizes an adaptive diffusion mechanism to improve the performance of BirgRank. We evaluate the ability of both methods to predict protein function on yeast, fly and human protein datasets, and compare with four previous methods: GeneMANIA, TMC, ProteinRank and clusDCA. We design four different validation strategies: missing function prediction, de novo function prediction, guided function prediction and newly discovered function prediction to comprehensively evaluate predictability of all six methods. We find that both BirgRank and AptRank outperform the previous methods, especially in missing function prediction when using only 10% of the data for training. The MATLAB code is available at https://github.rcac.purdue.edu/mgribsko/aptrank . gribskov@purdue.edu. Supplementary data are available at Bioinformatics online. © The Author 2017. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com
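
    The BirgRank half is described as ordinary PageRank diffusion on the combined protein-function graph, which can be sketched directly with networkx. The graph contents and restart vector here are toy assumptions.

        # Toy BirgRank-style diffusion: personalized PageRank on a two-layer
        # graph (protein network + GO hierarchy, bridged by annotations),
        # restarting from the query protein. Graph contents are illustrative.
        import networkx as nx

        G = nx.Graph()
        G.add_edges_from([("p1", "p2"), ("p2", "p3")])          # protein network
        G.add_edges_from([("go:a", "go:b"), ("go:b", "go:c")])  # GO hierarchy
        G.add_edges_from([("p1", "go:a"), ("p3", "go:c")])      # annotations

        pers = {n: (1.0 if n == "p2" else 0.0) for n in G}      # restart at p2
        ranks = nx.pagerank(G, alpha=0.85, personalization=pers)
        candidates = {n: r for n, r in ranks.items() if n.startswith("go:")}
        print(max(candidates, key=candidates.get))  # top predicted function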

  16. Digital image analysis outperforms manual biomarker assessment in breast cancer

    DEFF Research Database (Denmark)

    Stålhammar, Gustav; Fuentes Martinez, Nelson; Lippert, Michael

    2016-01-01

    In the spectrum of breast cancers, categorization according to the four gene expression-based subtypes 'Luminal A,' 'Luminal B,' 'HER2-enriched,' and 'Basal-like' is the method of choice for prognostic and predictive value. As gene expression assays are not yet universally available, routine immu...

  17. Pressurized water reactor in-core nuclear fuel management by tabu search

    International Nuclear Information System (INIS)

    Hill, Natasha J.; Parks, Geoffrey T.

    2015-01-01

    Highlights: • We develop a tabu search implementation for PWR reload core design. • We conduct computational experiments to find optimal parameter values. • We test the performance of the algorithm on two representative PWR geometries. • We compare this performance with that given by established optimization methods. • Our tabu search implementation outperforms these methods in all cases. - Abstract: Optimization of the arrangement of fuel assemblies and burnable poisons when reloading pressurized water reactors has, in the past, been performed with many different algorithms in an attempt to make reactors more economic and fuel efficient. The use of the tabu search algorithm in tackling reload core design problems is investigated further here after limited, but promising, previous investigations. The performance of the tabu search implementation developed was compared with established genetic algorithm and simulated annealing optimization routines. Tabu search outperformed these existing programs for a number of different objective functions on two different representative core geometries
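
    A generic tabu-search loop over permutations (a stand-in for a loading-pattern arrangement), to make the algorithm concrete; the move set, tabu tenure and objective are placeholders rather than anything from the study.

        # Generic tabu search over permutations (stand-in for a reload-core
        # loading pattern). Objective, moves, and tenure are placeholders.
        import random

        def tabu_search(objective, n, iters=500, tenure=20):
            current = list(range(n))
            random.shuffle(current)
            best, best_val = current[:], objective(current)
            tabu = {}                                # move -> expiry iteration
            for it in range(iters):
                candidates = []
                for _ in range(30):                  # sample swap moves
                    i, j = random.sample(range(n), 2)
                    move = (min(i, j), max(i, j))
                    nb = current[:]
                    nb[i], nb[j] = nb[j], nb[i]
                    val = objective(nb)
                    # accept tabu moves only if they beat the best (aspiration)
                    if tabu.get(move, -1) < it or val < best_val:
                        candidates.append((val, nb, move))
                if not candidates:
                    continue
                val, current, move = min(candidates, key=lambda c: c[0])
                tabu[move] = it + tenure             # forbid reversing soon
                if val < best_val:
                    best, best_val = current[:], val
            return best, best_val

        # Example: minimize a toy 'power peaking' proxy over 12 positions
        best, val = tabu_search(lambda p: sum(abs(p[i] - i)
                                              for i in range(len(p))), n=12)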

  18. A Dictionary Learning Method with Total Generalized Variation for MRI Reconstruction

    Directory of Open Access Journals (Sweden)

    Hongyang Lu

    2016-01-01

    Full Text Available Reconstructing images from noisy and incomplete measurements is always a challenge, especially for medical MR images with important details and features. This work proposes a novel dictionary learning model that integrates two sparse regularization methods: the total generalized variation (TGV) approach and adaptive dictionary learning (DL). In the proposed method, the TGV term selectively regularizes different image regions at different levels, largely avoiding oil-painting artifacts, while dictionary learning adaptively provides sparse representations of image features and effectively recovers image details. The proposed model is solved by a variable-splitting technique and the alternating direction method of multipliers. Extensive simulation results demonstrate that the proposed method consistently recovers MR images efficiently and outperforms the current state-of-the-art approaches in terms of higher PSNR and lower HFEN values.

  19. Physics exam preparation: A comparison of three methods

    Directory of Open Access Journals (Sweden)

    Witat Fakcharoenphol

    2014-03-01

    Full Text Available In this clinical study on helping students prepare for an exam, we compared three different treatments. All students were asked to take a practice exam. One group was then given worked-out solutions for that exam, another group was given the solutions and targeted exercises to do as homework based on the result of their practice exam, and the third group was given the solutions, homework, and also an hour of one-on-one tutoring. Participants from all three conditions significantly outperformed the control group on the midterm exam. However, participants that had one-on-one tutoring did not outperform the other two participant groups.

  20. Trephine Transverse Colostomy Is Effective for Patients Who Have Previously Undergone Rectal Surgery

    Science.gov (United States)

    Yeom, Seung-Seop; Jung, Sung Woo; Oh, Se Heon; Lee, Jong Lyul; Yoon, Yong Sik; Park, In Ja; Lim, Seok-Byung; Yu, Chang Sik; Kim, Jin Cheon

    2018-01-01

    Purpose Colostomy creation is an essential procedure for colorectal surgeons, but the preferred method of colostomy varies by surgeon. We compared the outcomes of trephine colostomy creation with those of the open (laparotomy) and laparoscopic methods and evaluated appropriate indications for a trephine colostomy and the advantages of the technique. Methods We retrospectively evaluated 263 patients who had undergone colostomy creation by trephine, open or laparoscopic approaches between April 2006 and March 2016. We compared the clinical features and the operative and postoperative outcomes according to the approach used for stoma creation. Results One hundred sixty-three patients (62%) underwent colostomy surgery for obstructive causes and 100 (38%) for fistulous problems. The mean operative time was significantly shorter with the trephine approach (trephine, 46.0 ± 1.9 minutes; open, 78.7 ± 3.9 minutes; laparoscopic, 63.5 ± 5.0 minutes), and a trephine colostomy was feasible for a diversion colostomy. Conclusion A trephine colostomy is safe and can be implemented quickly in various situations, and compared to other colostomy procedures, the patient's recovery is faster. Previous laparotomy history was not a contraindication for a trephine colostomy, and a trephine transverse colostomy is feasible for patients who have undergone previous rectal surgery. PMID:29742862

  1. Global positioning method based on polarized light compass system

    Science.gov (United States)

    Liu, Jun; Yang, Jiangtao; Wang, Yubo; Tang, Jun; Shen, Chong

    2018-05-01

    This paper presents a global positioning method based on a polarized light compass system. A main limitation of polarization positioning is the environment, such as weak or locally destroyed polarization conditions; the solution to the positioning problem given in this paper is polarization image de-noising and segmentation. To this end, a pulse coupled neural network is employed to enhance positioning performance. The prominent advantages of the present positioning technique are as follows: (i) compared to existing positioning methods based on polarized light, better sun-tracking accuracy can be achieved, and (ii) the robustness and accuracy of positioning under weak and locally destroyed polarization environments, such as cloud cover or building shielding, are improved significantly. Finally, field experiments are presented to demonstrate the effectiveness and applicability of the proposed global positioning technique. The experiments show that the proposed method outperforms the conventional polarization positioning method, yielding real-time longitude and latitude with accuracies of up to 0.0461° and 0.0911°, respectively.

  2. Nuclear introns outperform mitochondrial DNA in inter-specific phylogenetic reconstruction: Lessons from horseshoe bats (Rhinolophidae: Chiroptera).

    Science.gov (United States)

    Dool, Serena E; Puechmaille, Sebastien J; Foley, Nicole M; Allegrini, Benjamin; Bastian, Anna; Mutumi, Gregory L; Maluleke, Tinyiko G; Odendaal, Lizelle J; Teeling, Emma C; Jacobs, David S

    2016-04-01

    Despite many studies illustrating the perils of utilising mitochondrial DNA in phylogenetic studies, it remains one of the most widely used genetic markers for this purpose. Over the last decade, nuclear introns have been proposed as alternative markers for phylogenetic reconstruction. However, the resolution capabilities of mtDNA and nuclear introns have rarely been quantified and compared. In the current study we generated a novel ∼5 kb dataset comprising six nuclear introns and a mtDNA fragment. We assessed the relative resolution capabilities of the six intronic fragments with respect to each other, when used in various combinations together, and when compared to the traditionally used mtDNA. We focused on a major clade in the horseshoe bat family (the Afro-Palaearctic clade; Rhinolophidae) as our case study. This old, widely distributed and speciose group contains a high level of conserved morphology. This morphological stasis makes reconstructing the phylogeny of this group with traditional morphological characters complex. We sampled multiple individuals per species to represent their geographic distributions as well as possible (122 individuals, 24 species, 68 localities). We reconstructed the species phylogeny using several complementary methods (partitioned Maximum Likelihood, partitioned Bayesian, and Bayesian multispecies-coalescent) and made inferences based on consensus across these methods. We computed pairwise comparisons based on the Robinson-Foulds tree distance metric between all Bayesian topologies generated (27,000) for every gene and gene combination and visualised the tree space using multidimensional scaling (MDS) plots. Using our supported species phylogeny we estimated the ancestral states of key traits of interest within this group, e.g. echolocation peak frequency, which has been implicated in speciation. Our results revealed many potential cryptic species within this group, even in taxa where this was not suspected a priori, and also found evidence for mt
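
    The tree-space visualization step can be sketched as follows: pairwise Robinson-Foulds distances between sampled topologies (here via dendropy) are embedded in two dimensions with multidimensional scaling from scikit-learn. The input file name and number of trees are placeholders.

        # Sketch of the tree-space analysis: pairwise Robinson-Foulds
        # distances between posterior topologies, embedded in 2D with MDS.
        # The input file is a placeholder.
        import numpy as np
        import dendropy
        from dendropy.calculate import treecompare
        from sklearn.manifold import MDS

        trees = dendropy.TreeList.get(path="posterior_sample.nex",
                                      schema="nexus")
        n = len(trees)
        D = np.zeros((n, n))
        for i in range(n):
            for j in range(i + 1, n):
                # symmetric difference = unweighted Robinson-Foulds distance
                d = treecompare.symmetric_difference(trees[i], trees[j])
                D[i, j] = D[j, i] = d

        coords = MDS(n_components=2, dissimilarity="precomputed").fit_transform(D)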

  3. Tactile Acuity in the Blind: A Closer Look Reveals Superiority over the Sighted in Some but Not All Cutaneous Tasks

    Science.gov (United States)

    Alary, Flamine; Duquette, Marco; Goldstein, Rachel; Chapman, C. Elaine; Voss, Patrice; La Buissonniere-Ariza, Valerie; Lepore, Franco

    2009-01-01

    Previous studies have shown that blind subjects may outperform the sighted on certain tactile discrimination tasks. We recently showed that blind subjects outperformed the sighted in a haptic 2D-angle discrimination task. The purpose of this study was to compare the performance of the same blind (n = 16) and sighted (n = 17, G1) subjects in three…

  4. A systematic model identification method for chemical transformation pathways – the case of heroin biomarkers in wastewater

    DEFF Research Database (Denmark)

    Ramin, Pedram; Valverde Pérez, Borja; Polesel, Fabio

    2017-01-01

    This study presents a novel statistical approach for identifying sequenced chemical transformation pathways in combination with reaction kinetics models. The proposed method relies on sound uncertainty propagation, by considering the parameter ranges and associated probability distributions obtained...... at any given transformation pathway level as priors for parameter estimation at subsequent transformation levels. The method was applied to calibrate a model predicting the transformation in untreated wastewater of six biomarkers, excreted following human metabolism of heroin and codeine. The method....... Results obtained suggest that the method developed has the potential to outperform conventional approaches in terms of prediction accuracy, transformation pathway identification and parameter identifiability. This method can be used in conjunction with optimal experimental designs to effectively identify

  5. Statistically Consistent k-mer Methods for Phylogenetic Tree Reconstruction.

    Science.gov (United States)

    Allman, Elizabeth S; Rhodes, John A; Sullivant, Seth

    2017-02-01

    Frequencies of k-mers in sequences are sometimes used as a basis for inferring phylogenetic trees without first obtaining a multiple sequence alignment. We show that a standard approach of using the squared Euclidean distance between k-mer vectors to approximate a tree metric can be statistically inconsistent. To remedy this, we derive model-based distance corrections for orthologous sequences without gaps, which lead to consistent tree inference. The identifiability of model parameters from k-mer frequencies is also studied. Finally, we report simulations showing that the corrected distance outperforms many other k-mer methods, even when sequences are generated with an insertion and deletion process. These results have implications for multiple sequence alignment as well since k-mer methods are usually the first step in constructing a guide tree for such algorithms.
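
    A small sketch of the baseline the paper analyzes: k-mer frequency vectors per sequence and squared Euclidean distances between them, the quantity shown to need a model-based correction. The correction itself is not reproduced here.

        # Baseline analyzed in the paper: k-mer frequency vectors and squared
        # Euclidean distances between them (before model-based correction).
        from itertools import product
        import numpy as np

        def kmer_vector(seq, k=3):
            kmers = ["".join(p) for p in product("ACGT", repeat=k)]
            index = {km: i for i, km in enumerate(kmers)}
            v = np.zeros(len(kmers))
            for i in range(len(seq) - k + 1):
                km = seq[i:i + k]
                if km in index:
                    v[index[km]] += 1
            return v / max(v.sum(), 1)      # normalize to frequencies

        a = kmer_vector("ACGTACGTGGCT")
        b = kmer_vector("ACGTTCGTGGAT")
        d = float(np.sum((a - b) ** 2))     # squared Euclidean k-mer distance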

  6. Groin Problems in Male Soccer Players Are More Common Than Previously Reported

    DEFF Research Database (Denmark)

    Harøy, Joar; Clarsen, Ben; Thorborg, Kristian

    2017-01-01

    BACKGROUND: The majority of surveillance studies in soccer have used a time-loss injury definition, and many groin problems result from overuse, leading to gradually increasing pain and/or reduced performance without necessarily causing an absence from soccer training or match play. Thus, the magnitude of groin problems in soccer has probably been underestimated in previous studies based on traditional injury surveillance methods. PURPOSE: To investigate the prevalence of groin problems among soccer players of both sexes and among male soccer players at different levels of play through a new surveillance method developed to capture acute and overuse problems. STUDY DESIGN: Descriptive epidemiology study. METHODS: We registered groin problems during a 6-week period of match congestion using the Oslo Sports Trauma Research Center Overuse Injury Questionnaire. A total of 240 players from 15 teams...

  7. Comparison of Heuristic Methods Applied for Optimal Operation of Water Resources

    Directory of Open Access Journals (Sweden)

    Alireza Borhani Dariane

    2009-01-01

    Full Text Available Water resources optimization problems are usually complex and hard to solve using ordinary optimization methods, or at least cannot be solved by them economically. A great number of studies have been conducted in quest of suitable methods capable of handling such problems. In recent years, some new heuristic methods such as genetic and ant algorithms have been introduced in systems engineering. Preliminary applications of these methods to water resources problems have shown that some of them are powerful tools, capable of solving complex problems. In this paper, the application of heuristic methods such as the Genetic Algorithm (GA) and Ant Colony Optimization (ACO) is studied for optimizing reservoir operation. The Dez Dam reservoir in Iran was chosen for a case study. The methods were applied and compared using short-term (one year) and long-term models. Comparison of the results showed that GA outperforms both dynamic programming (DP) and ACO in finding true global optimum solutions and operating rules.

  8. Digital Image Stabilization Method Based on Variational Mode Decomposition and Relative Entropy

    Directory of Open Access Journals (Sweden)

    Duo Hao

    2017-11-01

    Full Text Available Cameras mounted on vehicles frequently suffer from image shake due to the vehicles' motion. To remove jitter motions and preserve intentional motions, a hybrid digital image stabilization method is proposed that uses variational mode decomposition (VMD) and relative entropy (RE). In this paper, the global motion vector (GMV) is initially decomposed into several narrow-banded modes by VMD. REs, which capture the difference in probability distribution between two modes, are then calculated to identify the intentional and jitter motion modes. Finally, the summation of the jitter motion modes constitutes the jitter motion, whereas subtracting this sum from the GMV yields the intentional motion. The proposed stabilization method is compared with several known methods, namely the median filter (MF), Kalman filter (KF), wavelet decomposition (WD) method, empirical mode decomposition (EMD)-based method, and enhanced EMD-based method, to evaluate stabilization performance. Experimental results show that the proposed method outperforms the other stabilization methods.
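
    A sketch of the jitter/intention split under stated assumptions: the third-party vmdpy package (an assumption; any VMD implementation would do) decomposes the global motion vector, relative entropy against a reference mode flags jitter modes, and their sum is subtracted. The thresholds, mode count and choice of reference mode are illustrative, not the paper's.

        # Sketch: decompose the GMV with VMD, flag jitter modes by relative
        # entropy against the slowest mode, subtract their sum. Uses the
        # third-party vmdpy package (an assumption); parameters are toy.
        import numpy as np
        from scipy.stats import entropy
        from vmdpy import VMD   # pip install vmdpy

        def stabilize(gmv, K=5, re_thresh=0.5):
            # positional args: f, alpha, tau, K, DC, init, tol
            modes, _, _ = VMD(gmv, 2000, 0.0, K, 0, 1, 1e-7)

            def hist(x):
                h, _ = np.histogram(x, bins=32, density=True)
                return h + 1e-12             # avoid log(0) in KL divergence

            ref = hist(modes[0])             # assume slowest mode ~ intentional
            jitter = sum(m for m in modes[1:]
                         if entropy(hist(m), ref) > re_thresh)
            return gmv - jitter              # intentional motion estimate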

  9. Hybrid statistics-simulations based method for atom-counting from ADF STEM images.

    Science.gov (United States)

    De Wael, Annelies; De Backer, Annick; Jones, Lewys; Nellist, Peter D; Van Aert, Sandra

    2017-06-01

    A hybrid statistics-simulations based method for atom-counting from annular dark field scanning transmission electron microscopy (ADF STEM) images of monotype crystalline nanostructures is presented. Different atom-counting methods already exist for model-like systems. However, the increasing relevance of radiation damage in the study of nanostructures demands a method that allows atom-counting from low dose images with a low signal-to-noise ratio. Therefore, the hybrid method directly includes prior knowledge from image simulations into the existing statistics-based method for atom-counting, and accounts in this manner for possible discrepancies between actual and simulated experimental conditions. It is shown by means of simulations and experiments that this hybrid method outperforms the statistics-based method, especially for low electron doses and small nanoparticles. The analysis of a simulated low dose image of a small nanoparticle suggests that this method allows for far more reliable quantitative analysis of beam-sensitive materials. Copyright © 2017 Elsevier B.V. All rights reserved.

  10. KFC2: a knowledge-based hot spot prediction method based on interface solvation, atomic density, and plasticity features.

    Science.gov (United States)

    Zhu, Xiaolei; Mitchell, Julie C

    2011-09-01

    Hot spots constitute a small fraction of protein-protein interface residues, yet they account for a large fraction of the binding affinity. Based on our previous method (KFC), we present two new methods (KFC2a and KFC2b) that outperform other methods at hot spot prediction. A number of improvements were made in developing these new methods. First, we created a training data set that contained a similar number of hot spot and non-hot spot residues. In addition, we generated 47 different features, and different numbers of features were used to train the models to avoid over-fitting. Finally, two feature combinations were selected: One (used in KFC2a) is composed of eight features that are mainly related to solvent accessible surface area and local plasticity; the other (KFC2b) is composed of seven features, only two of which are identical to those used in KFC2a. The two models were built using support vector machines (SVM). The two KFC2 models were then tested on a mixed independent test set, and compared with other methods such as Robetta, FOLDEF, HotPoint, MINERVA, and KFC. KFC2a showed the highest predictive accuracy for hot spot residues (True Positive Rate: TPR = 0.85); however, the false positive rate was somewhat higher than for other models. KFC2b showed the best predictive accuracy for hot spot residues (True Positive Rate: TPR = 0.62) among all methods other than KFC2a, and the False Positive Rate (FPR = 0.15) was comparable with other highly predictive methods. Copyright © 2011 Wiley-Liss, Inc.
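
    A bare-bones version of the modelling step as described: an SVM trained on a balanced set of interface-residue feature vectors. The feature values here are random stand-ins for the solvation, atomic-density and plasticity features the paper computes.

        # Bare-bones hot-spot classifier: SVM on per-residue interface
        # features. Values are random stand-ins for the paper's features.
        import numpy as np
        from sklearn.svm import SVC
        from sklearn.preprocessing import StandardScaler
        from sklearn.pipeline import make_pipeline
        from sklearn.model_selection import cross_val_score

        rng = np.random.default_rng(1)
        X = rng.normal(size=(200, 8))      # 8 features, as in KFC2a
        y = rng.integers(0, 2, size=200)   # balanced hot spot / non-hot spot

        clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
        print(cross_val_score(clf, X, y, cv=5, scoring="recall").mean())  # ~TPR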

  11. Contrast-based fully automatic segmentation of white matter hyperintensities: method and validation.

    Directory of Open Access Journals (Sweden)

    Thomas Samaille

    Full Text Available White matter hyperintensities (WMH) on T2 or FLAIR sequences have been commonly observed on MR images of elderly people. They have been associated with various disorders and have been shown to be a strong risk factor for stroke and dementia. WMH studies have usually required visual evaluation of WMH load or time-consuming manual delineation. This paper introduces WHASA (White matter Hyperintensities Automated Segmentation Algorithm), a new method for automatically segmenting WMH from FLAIR and T1 images in multicentre studies. Contrary to previous approaches that were based on intensities, this method relies on contrast: non-linear diffusion filtering alternated with watershed segmentation to obtain piecewise constant images with increased contrast between WMH and surrounding tissues. WMH are then selected based on a subject-dependent, automatically computed threshold and anatomical information. WHASA was evaluated on 67 patients from two studies, acquired on six different MRI scanners and displaying a wide range of lesion loads. Accuracy of the segmentation was assessed through volume and spatial agreement measures with respect to manual segmentation; an intraclass correlation coefficient (ICC) of 0.96 and a mean similarity index (SI) of 0.72 were obtained. WHASA was compared to four other approaches: Freesurfer and a thresholding approach as unsupervised methods, and k-nearest neighbours (kNN) and support vector machines (SVM) as supervised ones. For the latter, the influence of the training set was also investigated. WHASA clearly outperformed both unsupervised methods, while performing at least as well as the supervised approaches (ICC range: 0.87-0.91 for kNN, 0.89-0.94 for SVM; mean SI: 0.63-0.71 for kNN, 0.67-0.72 for SVM), and did not need any training set.
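
    A rough scikit-image rendering of the contrast-based idea: edge-preserving smoothing (TV denoising here, as a stand-in for the paper's nonlinear diffusion) combined with watershed on the gradient to obtain piecewise-constant regions, then a subject-specific threshold on region means. Every parameter is a placeholder.

        # Rough sketch of contrast-based WMH segmentation: edge-preserving
        # smoothing (TV denoising as a stand-in for nonlinear diffusion),
        # watershed on the gradient for piecewise-constant regions, then a
        # data-driven threshold on region means. Parameters are placeholders.
        import numpy as np
        from skimage.restoration import denoise_tv_chambolle
        from skimage.filters import sobel
        from skimage.segmentation import watershed

        def segment_wmh(flair, n_markers=200, k=3.0):
            smooth = denoise_tv_chambolle(flair, weight=0.1)
            labels = watershed(sobel(smooth), markers=n_markers,
                               compactness=0.001)
            ids = np.unique(labels)
            means = np.array([smooth[labels == l].mean() for l in ids])
            thr = means.mean() + k * means.std()      # subject-specific cut
            bright = [l for l, m in zip(ids, means) if m > thr]
            return np.isin(labels, bright)            # boolean WMH mask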

  12. Survival after early-stage breast cancer of women previously treated for depression

    DEFF Research Database (Denmark)

    Suppli, Nis Frederik Palm; Johansen, Christoffer; Kessing, Lars Vedel

    2017-01-01

    Purpose The aim of this nationwide, register-based cohort study was to determine whether women treated for depression before primary early-stage breast cancer are at increased risk for receiving treatment that is not in accordance with national guidelines and for poorer survival. Material and Methods We identified 45,325 women with early breast cancer diagnosed in Denmark from 1998 to 2011. Of these, 744 women (2%) had had a previous hospital contact (as an inpatient or outpatient) for depression and another 6,068 (13%) had been treated with antidepressants. Associations between previous treatment of depression and risk of receiving nonguideline treatment of breast cancer were assessed in multivariable logistic regression analyses. We compared the overall survival, breast cancer-specific survival, and risk of death by suicide of women who were and were not treated for depression before...

  13. Impact of previously disadvantaged land-users on sustainable ...

    African Journals Online (AJOL)

    Impact of previously disadvantaged land-users on sustainable agricultural ... about previously disadvantaged land users involved in communal farming systems ... of input, capital, marketing, information and land use planning, with effect on ...

  14. Previous induced abortion among young women seeking abortion-related care in Kenya: a cross-sectional analysis.

    Science.gov (United States)

    Kabiru, Caroline W; Ushie, Boniface A; Mutua, Michael M; Izugbara, Chimaraoke O

    2016-05-14

    Unsafe abortion is a leading cause of death among young women aged 10-24 years in sub-Saharan Africa. Although having multiple induced abortions may exacerbate the risk for poor health outcomes, there has been minimal research on young women in this region who have multiple induced abortions. The objective of this study was therefore to assess the prevalence and correlates of reporting a previous induced abortion among young females aged 12-24 years seeking abortion-related care in Kenya. We used data on 1,378 young women aged 12-24 years who presented for abortion-related care in 246 health facilities in a nationwide survey conducted in 2012. Socio-demographic characteristics, reproductive and clinical histories, and physical examination assessment data were collected from women during a one-month data collection period using an abortion case capture form. Nine percent (n = 98) of young women reported an induced abortion prior to the index pregnancy for which they were receiving care. Statistically significant differences by previous history of induced abortion were observed for area of residence, religion and occupation at the bivariate level. Urban dwellers and unemployed/other young women were more likely to report a previous induced abortion. A greater proportion of young women reporting a previous induced abortion stated that they were using a contraceptive method at the time of the index pregnancy (47%) compared with those reporting no previous induced abortion (23%). Not surprisingly, a greater proportion of young women reporting a previous induced abortion (82%) reported their index pregnancy as unintended (not wanted at all or mistimed) compared with women reporting no previous induced abortion (64%). Our study results show that about one in every ten young women seeking abortion-related care in Kenya reports a previous induced abortion. Comprehensive post-abortion care services targeting young women are needed. In particular, post

  15. Oversimplifying quantum factoring.

    Science.gov (United States)

    Smolin, John A; Smith, Graeme; Vargo, Alexander

    2013-07-11

    Shor's quantum factoring algorithm exponentially outperforms known classical methods. Previous experimental implementations have used simplifications dependent on knowing the factors in advance. However, as we show here, all composite numbers admit simplification of the algorithm to a circuit equivalent to flipping coins. The difficulty of a particular experiment therefore depends on the level of simplification chosen, not the size of the number factored. Valid implementations should not make use of the answer sought.

  16. ChIPnorm: a statistical method for normalizing and identifying differential regions in histone modification ChIP-seq libraries.

    Science.gov (United States)

    Nair, Nishanth Ulhas; Sahu, Avinash Das; Bucher, Philipp; Moret, Bernard M E

    2012-01-01

    The advent of high-throughput technologies such as ChIP-seq has made possible the study of histone modifications. A problem of particular interest is the identification of regions of the genome where different cell types from the same organism exhibit different patterns of histone enrichment. This problem turns out to be surprisingly difficult, even in simple pairwise comparisons, because of the significant level of noise in ChIP-seq data. In this paper we propose a two-stage statistical method, called ChIPnorm, to normalize ChIP-seq data, and to find differential regions in the genome, given two libraries of histone modifications of different cell types. We show that the ChIPnorm method removes most of the noise and bias in the data and outperforms other normalization methods. We correlate the histone marks with gene expression data and confirm that histone modifications H3K27me3 and H3K4me3 act, respectively, as a repressor and an activator of genes. Compared to what was previously reported in the literature, we find that a substantially higher fraction of bivalent marks in ES cells for H3K27me3 and H3K4me3 move into a K27-only state. We find that most of the promoter regions in protein-coding genes have differential histone-modification sites. The software for this work can be downloaded from http://lcbb.epfl.ch/software.html.
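
    As a rough illustration of the normalization task (not the ChIPnorm procedure itself), the sketch below quantile-normalizes the binned counts of one library onto another before a crude fold-change call; the bin counts and the threshold are synthetic assumptions.

        import numpy as np

        rng = np.random.default_rng(0)
        lib1 = rng.poisson(10, size=10000).astype(float)   # binned counts, cell type 1
        lib2 = rng.poisson(20, size=10000).astype(float)   # deeper-sequenced cell type 2

        # quantile-normalize lib2 onto lib1's count distribution
        order = np.argsort(lib2)
        lib2_norm = np.empty_like(lib2)
        lib2_norm[order] = np.sort(lib1)

        log_ratio = np.log2((lib2_norm + 1) / (lib1 + 1))
        differential = np.abs(log_ratio) > 1.0             # crude differential call
        print(differential.sum(), "candidate differential bins")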

  17. Influence of Previous Knowledge, Language Skills and Domain-specific Interest on Observation Competency

    Science.gov (United States)

    Kohlhauf, Lucia; Rutke, Ulrike; Neuhaus, Birgit

    2011-10-01

    Many epoch-making biological discoveries (e.g. Darwinian Theory) were based upon observations. Nevertheless, observation is often regarded as `just looking' rather than a basic scientific skill. As observation is one of the main research methods in the biological sciences, it must be considered an independent research method, and systematic practice of this method is necessary. Because observation skills form the basis of further scientific methods (e.g. experiments or comparisons) and children from the age of 4 years are able to independently generate questions and hypotheses, it seems possible to foster observation competency at a preschool level. To provide development-adequate individual fostering of this competency, it is first necessary to assess each child's competency. Therefore, drawing on the recent literature, we developed in this study a competency model that was empirically evaluated with learners (N = 110) from different age groups, from kindergarten to university. In addition, we collected data on language skills, domain-specific interest and previous knowledge to analyse coherence between these skills and observation competency. The study showed, as expected, that previous knowledge had a high impact on observation competency, whereas the influence of domain-specific interest was nonexistent. Language skills were shown to have a weak influence. By utilising the empirically validated model consisting of three dimensions (`Describing', `Scientific reasoning' and `Interpreting') and three skill levels, it was possible to assess each child's competency level and to develop and evaluate guided play activities to individually foster a child's observation competency.

  18. A New Online Calibration Method Based on Lord's Bias-Correction.

    Science.gov (United States)

    He, Yinhong; Chen, Ping; Li, Yong; Zhang, Shumei

    2017-09-01

    Online calibration techniques have been widely employed to calibrate new items due to their advantages. Method A is the simplest online calibration method and has recently attracted much attention from researchers. However, a key assumption of Method A is that it treats person-parameter estimates θ̂s (obtained by maximum likelihood estimation [MLE]) as their true values θs, so deviation of the estimated θ̂s from their true values might yield inaccurate item calibration when the deviation is nonignorable. To improve the performance of Method A, a new method, MLE-LBCI-Method A, is proposed. This new method combines a modified Lord's bias-correction method (named maximum likelihood estimation-Lord's bias-correction with iteration [MLE-LBCI]) with the original Method A in an effort to correct the deviation of the θ̂s which may adversely affect the item calibration precision. Two simulation studies were carried out to explore the performance of both MLE-LBCI and MLE-LBCI-Method A under several scenarios. Simulation results showed that MLE-LBCI could make a significant improvement over the ML ability estimates, and MLE-LBCI-Method A did outperform Method A in almost all experimental conditions.
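
    The quantity Method A takes at face value can be sketched concretely: the MLE of ability θ under a Rasch model, obtained by Newton-Raphson. MLE-LBCI would additionally subtract an estimate of the finite-test bias of this θ̂ before item calibration; that correction step is not reproduced here, and the item difficulties and response pattern below are made up (a mixed response pattern is assumed so that the MLE is finite).

        import numpy as np

        def rasch_mle(responses, b, n_iter=20):
            """responses: 0/1 vector; b: item difficulties (Rasch model)."""
            theta = 0.0
            for _ in range(n_iter):
                p = 1.0 / (1.0 + np.exp(-(theta - b)))   # P(correct | theta)
                score = np.sum(responses - p)            # d logL / d theta
                info = np.sum(p * (1.0 - p))             # Fisher information
                theta += score / info                    # Newton-Raphson step
            return theta

        print(rasch_mle(np.array([1, 1, 0, 1, 0]),
                        np.array([-1.0, -0.5, 0.0, 0.5, 1.0])))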

  19. Lettuce (Lactuca sativa L. var. Sucrine) growth performance in complemented aquaponic solution outperforms hydroponics

    OpenAIRE

    Delaide, Boris; Goddek, Simon; Gott, James; Soyeurt, Hélène; Jijakli, M.H.

    2016-01-01

    Plant growth performance is optimized under hydroponic conditions. The comparison between aquaponics and hydroponics has attracted considerable attention recently, particularly regarding plant yield. However, previous research has not focused on the potential of using aquaponic solution complemented with mineral elements to commercial hydroponic levels in order to increase yield. For this purpose, lettuce plants were put into AeroFlo installations and exposed to hydroponic (HP), aquaponic (AP...

  20. Determining root correspondence between previously and newly detected objects

    Science.gov (United States)

    Paglieroni, David W.; Beer, N Reginald

    2014-06-17

    A system that applies attribute- and topology-based change detection to networks of objects that were detected on previous scans of a structure, roadway, or area of interest. The attributes capture properties or characteristics of the previously detected objects, such as location, time of detection, size, elongation, orientation, etc. The topology of the network of previously detected objects is maintained in a constellation database that stores attributes of previously detected objects and implicitly captures the geometrical structure of the network. A change detection system detects change by comparing the attributes and topology of new objects detected on the latest scan to the constellation database of previously detected objects.
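
    A hypothetical sketch of the matching step: each newly detected object is linked to the stored object with the nearest attribute vector, and detections with no sufficiently close match are flagged as changes. The attribute layout and the threshold below are assumptions for illustration, not the patented system.

        import numpy as np

        prev = np.array([[0.0, 0.0, 3.0], [10.0, 5.0, 2.0]])   # stored attribute vectors
        new = np.array([[0.2, -0.1, 3.1], [40.0, 9.0, 1.0]])   # latest-scan detections

        d = np.linalg.norm(new[:, None, :] - prev[None, :, :], axis=2)
        nearest = d.argmin(axis=1)                 # best-matching stored object
        is_change = d.min(axis=1) > 1.0            # threshold is an assumption
        print(list(zip(nearest, is_change)))       # matched indices and change flags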

  1. Deep Hashing Based Fusing Index Method for Large-Scale Image Retrieval

    Directory of Open Access Journals (Sweden)

    Lijuan Duan

    2017-01-01

    Hashing has been widely deployed to perform the Approximate Nearest Neighbor (ANN) search for large-scale image retrieval, addressing the problem of storage and retrieval efficiency. Recently, deep hashing methods have been proposed to perform simultaneous feature learning and hash code learning with deep neural networks. Even though deep hashing has shown better performance than traditional hashing methods with handcrafted features, the compact hash code learned from one deep hashing network may not provide a full representation of an image. In this paper, we propose a novel hashing indexing method, called the Deep Hashing based Fusing Index (DHFI), to generate a more compact hash code which has stronger expression ability and distinction capability. In our method, we train two deep hashing subnetworks with different architectures and fuse the hash codes generated by the two subnetworks to represent each image. Experiments on two real datasets show that our method can outperform state-of-the-art image retrieval applications.
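
    The fusion step can be pictured with a toy sketch: binary codes from two subnetworks (randomly generated stand-ins here) are concatenated per image, and retrieval ranks the database by Hamming distance to the query's fused code.

        import numpy as np

        rng = np.random.default_rng(1)
        codes_a = rng.integers(0, 2, size=(1000, 48))   # subnetwork 1: 48-bit codes
        codes_b = rng.integers(0, 2, size=(1000, 48))   # subnetwork 2: 48-bit codes
        fused = np.concatenate([codes_a, codes_b], axis=1)   # 96-bit fused code

        query = fused[0]
        hamming = np.count_nonzero(fused != query, axis=1)
        print(np.argsort(hamming)[:10])                 # top-10 nearest images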

  2. Influence of Previous Crop on Durum Wheat Yield and Yield Stability in a Long-term Experiment

    Directory of Open Access Journals (Sweden)

    Anna Maria Stellacci

    2011-02-01

    Long-term experiments are leading indicators of sustainability and serve as an early warning system to detect problems that may compromise future productivity. The stability of yield is therefore an important parameter to be considered when judging the value of a cropping system relative to others. In a long-term rotation experiment set up in 1972, the influence of different crop sequences on the yields and on yield stability of durum wheat (Triticum durum Desf.) was studied. The complete field experiment is a split-split plot in a randomized complete block design with two replications; the whole experiment considers three crop sequences: (1) three-year crop rotation: sugar-beet, wheat + catch crop, wheat; (2) one-year crop rotation: wheat + catch crop; (3) wheat continuous crop. The split treatments are two different crop residue managements; the split-split plot treatments are 18 different fertilization formulas. Each phase of every crop rotation occurred every year. In this paper only one crop residue management and only one fertilization treatment have been analysed. Wheat crops in different rotations are coded as follows: F1: wheat after sugar-beet in three-year crop rotation; F2: wheat after wheat in three-year crop rotation; Fc+i: wheat in wheat + catch crop rotation; Fc: continuous wheat. The following two variables were analysed: grain yield and hectolitre weight. Repeated measures analyses of variance and stability analyses were performed for the two variables. The stability analysis was conducted using: three variance methods, namely the coefficient of variability of Francis and Kannenberg, the ecovalence index of Wricke and the stability variance index of Shukla; the regression method of Eberhart and Russell; and a method, proposed by Piepho, that computes the probability of one system outperforming another system. Each of the stability methods used enriched the simple variance analysis with additional information. The Piepho
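
    Two of the cited variance-based stability statistics are easy to state concretely. The sketch below computes the Francis-Kannenberg coefficient of variability and Wricke's ecovalence for a made-up system-by-year yield table; it is illustrative only and uses none of the experiment's data.

        import numpy as np

        yields = np.array([[3.1, 2.8, 3.5, 2.9],    # rows: cropping systems
                           [2.7, 2.9, 3.0, 2.8],    # columns: years
                           [3.4, 2.2, 3.8, 2.5]])

        # Francis-Kannenberg: coefficient of variability per system
        cv = 100 * yields.std(axis=1, ddof=1) / yields.mean(axis=1)

        # Wricke's ecovalence: each system's share of the GxE interaction sum of squares
        g = yields.mean(axis=1, keepdims=True)      # system means
        e = yields.mean(axis=0, keepdims=True)      # year means
        interaction = yields - g - e + yields.mean()
        ecovalence = (interaction ** 2).sum(axis=1)

        print(cv.round(1), ecovalence.round(3))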

  3. Classifying medical relations in clinical text via convolutional neural networks.

    Science.gov (United States)

    He, Bin; Guan, Yi; Dai, Rui

    2018-05-16

    Deep learning research on relation classification has achieved solid performance in the general domain. This study proposes a convolutional neural network (CNN) architecture with a multi-pooling operation for medical relation classification on clinical records and explores a loss function with a category-level constraint matrix. Experiments using the 2010 i2b2/VA relation corpus demonstrate that these models, which do not depend on any external features, outperform previous single-model methods, and that our best model is competitive with the existing ensemble-based method. Copyright © 2018. Published by Elsevier B.V.
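
    A minimal PyTorch sketch of the multi-pooling idea (an interpretation, not the authors' released model): the convolution output is split at the two entity positions and each segment is max-pooled separately, so coarse positional structure survives pooling. Entity positions are assumed to satisfy 0 <= e1 < e2 < seq_len - 1 so that no segment is empty.

        import torch
        import torch.nn as nn

        class MultiPoolCNN(nn.Module):
            def __init__(self, vocab_size, emb_dim=100, n_filters=128, n_classes=8):
                super().__init__()
                self.emb = nn.Embedding(vocab_size, emb_dim)
                self.conv = nn.Conv1d(emb_dim, n_filters, kernel_size=3, padding=1)
                self.fc = nn.Linear(3 * n_filters, n_classes)

            def forward(self, tokens, e1_pos, e2_pos):
                # tokens: (batch, seq_len); assumes 0 <= e1 < e2 < seq_len - 1
                h = self.conv(self.emb(tokens).transpose(1, 2)).relu()  # (B, F, L)
                pooled = []
                for b in range(tokens.size(0)):
                    cuts = [0, e1_pos[b].item() + 1, e2_pos[b].item() + 1, h.size(2)]
                    segs = [h[b, :, cuts[i]:cuts[i + 1]].max(dim=1).values
                            for i in range(3)]      # max-pool each segment
                    pooled.append(torch.cat(segs))
                return self.fc(torch.stack(pooled))

        model = MultiPoolCNN(vocab_size=100)
        logits = model(torch.randint(0, 100, (2, 12)),
                       torch.tensor([2, 3]), torch.tensor([6, 8]))
        print(logits.shape)   # torch.Size([2, 8])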

  4. Sex ratio at birth in India, its relation to birth order, sex of previous children and use of indigenous medicine.

    Directory of Open Access Journals (Sweden)

    Samiksha Manchanda

    OBJECTIVE: Sex ratio at birth in families with previous girls is worse than in those with a boy. Our aim was to prospectively study, in a large maternal and child unit, sex ratio against previous birth sex and use of traditional medicines for sex selection. MAIN OUTCOME MEASURES: Sex ratio among mothers in families with a previous girl and in those with a previous boy, prevalence of indigenous medicine use, and sex ratio in those using medicines for sex selection. RESULTS: Overall there were 806 girls to 1000 boys. The sex ratio was 720:1000 if there was one previous girl and 178:1000 if there were two previous girls. In second children of families with a previous boy, 1017 girls were born per 1000 boys. Sex ratio in those with one previous girl, who were taking traditional medicines for sex selection, was 928:1000. CONCLUSION: Evidence from the second children clearly shows the sex ratio is being manipulated by human interventions. More mothers with previous girls tend to use traditional medicines for sex selection in their subsequent pregnancies. Those taking such medication do not seem to be helped according to expectations. They seem to rely on this method and so are less likely to use more definitive methods like sex selective abortions. This is the first such prospective investigation of sex ratio in second children examined against the sex of previous children. More studies are needed to confirm the findings.

  5. Cut Based Method for Comparing Complex Networks.

    Science.gov (United States)

    Liu, Qun; Dong, Zhishan; Wang, En

    2018-03-23

    Revealing the underlying similarity of various complex networks has become both a popular and interdisciplinary topic, with a plethora of relevant application domains. The essence of the similarity here is that network features of the same network type are highly similar, while the features of different kinds of networks present low similarity. In this paper, we introduce and explore a new method for comparing various complex networks based on the cut distance. We show correspondence between the cut distance and the similarity of two networks. This correspondence allows us to consider a broad range of complex networks and explicitly compare various networks with high accuracy. Various machine learning technologies such as genetic algorithms, nearest neighbor classification, and model selection are employed during the comparison process. Our cut method is shown to be suited for comparisons of undirected networks and directed networks, as well as weighted networks. In the model selection process, the results demonstrate that our approach outperforms other state-of-the-art methods with respect to accuracy.
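
    For two graphs on the same labelled vertex set, the cut distance maximizes a normalized edge-count difference over pairs of vertex subsets. Exact maximization is intractable, so the hedged sketch below merely lower-bounds it with randomly sampled subset pairs; the paper's machinery (genetic algorithms, nearest-neighbour classification, model selection) is far more sophisticated.

        import numpy as np

        def cut_distance_estimate(A, B, n_samples=2000, seed=0):
            """Lower-bound the cut distance of adjacency matrices A and B."""
            rng = np.random.default_rng(seed)
            n = A.shape[0]
            best = 0.0
            for _ in range(n_samples):
                s = rng.integers(0, 2, size=n).astype(bool)   # random subset S
                t = rng.integers(0, 2, size=n).astype(bool)   # random subset T
                diff = abs(A[np.ix_(s, t)].sum() - B[np.ix_(s, t)].sum()) / n ** 2
                best = max(best, diff)
            return best

        rng = np.random.default_rng(1)
        A = (rng.random((50, 50)) < 0.2).astype(float)
        B = (rng.random((50, 50)) < 0.3).astype(float)
        print(cut_distance_estimate(A, B))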

  6. A New Ensemble Method with Feature Space Partitioning for High-Dimensional Data Classification

    Directory of Open Access Journals (Sweden)

    Yongjun Piao

    2015-01-01

    Ensemble data mining methods, also known as classifier combination, are often used to improve the performance of classification. Various classifier combination methods such as bagging, boosting, and random forest have been devised and have received considerable attention in the past. However, data dimensionality is increasing rapidly, and these methods are not suitable for direct application to high-dimensional datasets. In this paper, we propose an ensemble method for classification of high-dimensional data, with each classifier constructed from a different set of features determined by partitioning of redundant features. In our method, the redundancy of features is considered to divide the original feature space. Then, each generated feature subset is trained by a support vector machine, and the results of each classifier are combined by majority voting. The efficiency and effectiveness of our method are demonstrated through comparisons with other ensemble techniques, and the results show that our method outperforms other methods.
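
    A minimal sketch of the overall scheme, with one substitution named plainly: redundant features are grouped here by k-means on the absolute feature-correlation matrix (a stand-in for the paper's redundancy-based partitioning), one SVM is trained per group, and predictions are combined by majority vote. Accuracy is measured on the training data purely for illustration.

        import numpy as np
        from sklearn.cluster import KMeans
        from sklearn.datasets import make_classification
        from sklearn.svm import SVC

        X, y = make_classification(n_samples=300, n_features=60, random_state=0)

        # group correlated (redundant) features; assumes every group is non-empty
        groups = KMeans(n_clusters=5, n_init=10, random_state=0).fit_predict(
            np.abs(np.corrcoef(X.T)))

        members = [np.where(groups == g)[0] for g in range(5)]
        clfs = [SVC().fit(X[:, idx], y) for idx in members]
        votes = np.stack([clf.predict(X[:, idx]) for clf, idx in zip(clfs, members)])
        majority = (votes.mean(axis=0) > 0.5).astype(int)   # binary majority vote
        print("training accuracy:", (majority == y).mean())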

  7. Erlotinib-induced rash spares previously irradiated skin

    International Nuclear Information System (INIS)

    Lips, Irene M.; Vonk, Ernest J.A.; Koster, Mariska E.Y.; Houwing, Ronald H.

    2011-01-01

    Erlotinib is an epidermal growth factor receptor inhibitor prescribed to patients with locally advanced or metastasized non-small cell lung carcinoma after failure of at least one earlier chemotherapy treatment. Approximately 75% of the patients treated with erlotinib develop acneiform skin rashes. A patient treated with erlotinib 3 months after finishing concomitant treatment with chemotherapy and radiotherapy for non-small cell lung cancer is presented. Unexpectedly, the part of the skin that had been included in his previous radiotherapy field was completely spared from the erlotinib-induced acneiform skin rash. The exact mechanism of erlotinib-induced rash sparing in previously irradiated skin is unclear. The underlying mechanism of this phenomenon needs to be explored further, because the number of patients being treated with a combination of both therapeutic modalities is increasing. The therapeutic effect of erlotinib in the area of the previously irradiated lesion should be assessed. (orig.)

  8. A comparison of morbidity associated with placenta previa with and without previous caesarean sections

    International Nuclear Information System (INIS)

    Baqai, S.; Siraj, A.; Noor, N.

    2018-01-01

    To compare the morbidity associated with placenta previa with and without previous caesarean sections. Study Design: Retrospective comparative study. Place and Duration of Study: From March 2014 till March 2016 in the department of Obstetrics and Gynaecology at PNS Shifa hospital Karachi. Material and Methods: After approval from the hospital ethical committee, antenatal patients with singleton pregnancy of gestational age >32 weeks, in the age group of 20-40 years, diagnosed with placenta previa were included in the study. All patients with twin pregnancy, or aged less than 20 years or more than 40 years, were excluded. The records of all patients fulfilling the inclusion criteria were reviewed. Data were collected for demographic and maternal variables, placenta previa, history of previous lower segment caesarean section (LSCS), complications associated with placenta previa, and techniques used to control blood loss. Results: During the study period, 6879 patients were delivered in PNS Shifa; of these, 2060 (29.9%) had caesarean section, and of these, 47.3% of patients had a previous history of LSCS. Thirty three (1.6%) patients were diagnosed to have placenta previa, and the frequency of placenta previa was significantly higher in patients with a previous history of LSCS than in those with a previous normal delivery, i.e. 22 vs. 11 (p=0.023). It was observed that the frequencies of morbidly adherent placenta (MAP) and intensive care unit (ICU) stay were significantly higher in patients with a previous history of LSCS than in those with a previous history of normal delivery. Conclusion: The frequency of placenta previa was significantly higher in patients with a history of LSCS. Placenta previa also remains a major risk factor for various maternal complications. (author)

  9. 3D Fourier synthesis of a new X-ray picture identical in projection to a previous picture

    International Nuclear Information System (INIS)

    Carlsson, P.E.

    1993-01-01

    A central problem in diagnostic radiology is to compare a new X-ray picture with a previous picture and from this comparison decide whether anatomical changes have occurred in the patient. It is of primary interest that these pictures are identical in projection; if not, it is difficult to decide with confidence whether differences between the pictures are due to anatomical changes or to differences in their projection geometry. In this thesis we present a non-invasive method that makes it possible to find the relative changes in the projection geometry between the exposure of a previous picture and a new picture. The method presented is based on the projection slice theorem (central section theorem). Instead of an elaborate search for a single new picture, a pre-planned set of pictures is exposed from a circular orbit above the patient. By using 3D Fourier transform techniques we are able to synthesize a new X-ray picture from this set of pictures that is identical in projection to the previous one. The method has certain limits, which are as follows: * The X-ray focus position must always be at a fixed distance from the image plane. * The object may only be translated parallel to the image plane and rotated around axes perpendicular to this plane. Under those restrictions, we may treat divergent projection pictures as if they were generated by a parallel projection of a scaled object. The unknown rotation and translation of the object in the previous case are both retrieved in two different procedures and compensated for. Experiments on synthetic data have shown that the method works even in the presence of severe noise
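
    The theorem the method rests on is easy to verify numerically: the 1D Fourier transform of a parallel projection equals the central slice of the object's 2D Fourier transform along the same direction (the 0-degree case is shown below).

        import numpy as np

        img = np.zeros((64, 64))
        img[20:40, 25:45] = 1.0                         # simple test object

        projection = img.sum(axis=0)                    # parallel projection onto x
        ft_projection = np.fft.fft(projection)
        central_slice = np.fft.fft2(img)[0, :]          # k_y = 0 row of the 2D FT
        print(np.allclose(ft_projection, central_slice))    # True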

  10. Systematic Evaluation of Methods for Integration of Transcriptomic Data into Constraint-Based Models of Metabolism

    DEFF Research Database (Denmark)

    Machado, Daniel; Herrgard, Markus

    2014-01-01

    The performance of these methods has not been critically evaluated and compared. This work presents a survey of recently published methods that use transcript levels to try to improve metabolic flux predictions either by generating flux distributions or by creating context-specific models. A subset of these methods is then systematically evaluated using published data from three different case studies in E. coli and S. cerevisiae. The flux predictions made by different methods using transcriptomic data are compared against experimentally determined extracellular and intracellular fluxes (from 13C-labeling data). The sensitivity of the results to method-specific parameters is also evaluated, as well as their robustness to noise in the data. The results show that none of the methods outperforms the others for all cases. Also, it is observed that for many conditions, the predictions obtained by simple flux balance analysis using growth...
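
    The flux balance analysis baseline mentioned in the results can be written down in a few lines: maximize a growth flux subject to steady-state mass balance S v = 0 and flux bounds. The 2-metabolite, 4-reaction network below is a made-up toy, not one of the study's models.

        import numpy as np
        from scipy.optimize import linprog

        S = np.array([[1, -1,  0,  0],   # metabolite A: made by uptake v1, used by v2
                      [0,  1, -1, -1]])  # metabolite B: made by v2, used by v3 and v4

        c = np.array([0, 0, -1, 0])      # maximize biomass flux v3 => minimize -v3
        res = linprog(c, A_eq=S, b_eq=np.zeros(2), bounds=[(0, 10)] * 4)
        print(res.x)                     # optimal flux distribution, e.g. [10 10 10 0]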

  11. Effectiveness of disinfection with alcohol 70% (w/v) of contaminated surfaces not previously cleaned

    Directory of Open Access Journals (Sweden)

    Maurício Uchikawa Graziano

    2013-04-01

    OBJECTIVE: To evaluate the disinfectant effectiveness of alcohol 70% (w/v) applied with friction, without previous cleaning, on work surfaces, as a concurrent disinfecting procedure in Health Services. METHOD: An experimental, randomized and single-blinded laboratory study was undertaken. The samples were enamelled surfaces, intentionally contaminated with Serratia marcescens (ATCC 14756) at 10^6 CFU/mL with 10% human saliva added, and were submitted to the procedure of disinfection WITHOUT previous cleaning. The results were compared to disinfection preceded by cleaning. RESULTS: There was a reduction of six logarithms of the initial microbial population, equal in the groups WITH and WITHOUT previous cleaning (p=0.440), and a residual microbial load ≤ 10^2 CFU. CONCLUSION: The research demonstrated the acceptability of the practice evaluated, bringing an important response to the area of health, in particular to Nursing, which most undertakes procedures of concurrent cleaning/disinfecting of these work surfaces.


  12. Treatment response in psychotic patients classified according to social and clinical needs, drug side effects, and previous treatment; a method to identify functional remission.

    Science.gov (United States)

    Alenius, Malin; Hammarlund-Udenaes, Margareta; Hartvig, Per; Sundquist, Staffan; Lindström, Leif

    2009-01-01

    Various approaches have been made over the years to classify psychotic patients according to inadequate treatment response, using terms such as treatment resistant or treatment refractory. Existing classifications have been criticized for overestimating positive symptoms; underestimating residual symptoms, negative symptoms, and side effects; or being too open to individual interpretation. The aim of this study was to present and evaluate a new method of classification according to treatment response and, thus, to identify patients in functional remission. A naturalistic, cross-sectional study was performed using patient interviews and information from patient files. The new classification method CANSEPT, which combines the Camberwell Assessment of Need rating scale, the Udvalg for Kliniske Undersøgelser side effect rating scale (SE), and the patient's previous treatment history (PT), was used to group the patients according to treatment response. CANSEPT was evaluated by comparison of expected and observed results. In the patient population (n = 123), the patients in functional remission, as defined by CANSEPT, had higher quality of life, fewer hospitalizations, fewer psychotic symptoms, and a higher rate of workers than those with the worst treatment outcome. In the evaluation, CANSEPT showed validity in discriminating the patients of interest and was well tolerated by the patients. CANSEPT could secure inclusion of correct patients in the clinic or in research.

  13. Progeny Clustering: A Method to Identify Biological Phenotypes

    Science.gov (United States)

    Hu, Chenyue W.; Kornblau, Steven M.; Slater, John H.; Qutub, Amina A.

    2015-01-01

    Estimating the optimal number of clusters is a major challenge in applying cluster analysis to any type of dataset, especially biomedical datasets, which are high-dimensional and complex. Here, we introduce an improved method, Progeny Clustering, which is stability-based and exceptionally efficient in computing, to find the ideal number of clusters. The algorithm employs a novel Progeny Sampling method to reconstruct cluster identity, a co-occurrence probability matrix to assess the clustering stability, and a set of reference datasets to overcome inherent biases in the algorithm and data space. Our method was shown to be successful and robust when applied to two synthetic datasets (a two-dimensional dataset and a ten-dimensional dataset containing eight dimensions of pure noise), two standard biological datasets (the Iris dataset and Rat CNS dataset) and two further biological datasets (a cell phenotype dataset and an acute myeloid leukemia (AML) reverse phase protein array (RPPA) dataset). Progeny Clustering outperformed some popular clustering evaluation methods in the ten-dimensional synthetic dataset as well as in the cell phenotype dataset, and it was the only method that successfully discovered clinically meaningful patient groupings in the AML RPPA dataset. PMID:26267476
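
    The flavour of stability-based selection of the cluster number can be conveyed with a much simpler stand-in than Progeny Clustering: for each candidate k, cluster two random subsamples, predict labels for the full dataset from each, and score their agreement; reproducibility tends to peak near a well-supported k. All data and settings below are synthetic assumptions.

        import numpy as np
        from sklearn.cluster import KMeans
        from sklearn.datasets import make_blobs
        from sklearn.metrics import adjusted_rand_score

        X, _ = make_blobs(n_samples=300, centers=4, random_state=0)
        rng = np.random.default_rng(0)

        for k in range(2, 7):
            scores = []
            for _ in range(10):
                i1 = rng.choice(len(X), size=len(X) // 2, replace=False)
                i2 = rng.choice(len(X), size=len(X) // 2, replace=False)
                a = KMeans(n_clusters=k, n_init=10).fit(X[i1]).predict(X)
                b = KMeans(n_clusters=k, n_init=10).fit(X[i2]).predict(X)
                scores.append(adjusted_rand_score(a, b))
            print(k, round(float(np.mean(scores)), 3))   # agreement per candidate k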

  14. Previous experience in manned space flight: A survey of human factors lessons learned

    Science.gov (United States)

    Chandlee, George O.; Woolford, Barbara

    1993-01-01

    Previous experience in manned space flight programs can be used to compile a data base of human factors lessons learned for the purpose of developing aids for the future design of inhabited spacecraft. The objectives are to gather information available from relevant sources, to develop a taxonomy of human factors data, and to produce a data base that can be used in the future by those involved in the design of manned spacecraft operations. A study is currently underway at the Johnson Space Center with the objective of compiling, classifying, and summarizing relevant human factors data bearing on the lessons learned from previous manned space flights. The research reported here defines sources of data and methods for collection, and proposes a classification for human factors data that may serve as a model for other human factors disciplines.

  15. The Relationship of Lumbar Multifidus Muscle Morphology to Previous, Current, and Future Low Back Pain

    DEFF Research Database (Denmark)

    Hebert, Jeffrey J; Kjær, Per; Fritz, Julie M

    2014-01-01

    of LBP after five and nine years. Summary of Background Data: Although low back pain (LBP) is a major source of disease burden, the biologic determinants of LBP are poorly understood. Methods: Participants were 40-year-old adults randomly sampled from a Danish population and followed up at ages 45 and 49. At each time point, participants underwent magnetic resonance imaging and reported ever having had LBP, LBP in the previous year, non-trivial LBP in the previous year, or a history of pain radiating into the legs. Pixel intensity and frequencies from T1-weighted magnetic resonance images identified

  16. Heuristic methods using grasp, path relinking and variable neighborhood search for the clustered traveling salesman problem

    Directory of Open Access Journals (Sweden)

    Mário Mestria

    2013-08-01

    The Clustered Traveling Salesman Problem (CTSP) is a generalization of the Traveling Salesman Problem (TSP) in which the set of vertices is partitioned into disjoint clusters and the objective is to find a minimum cost Hamiltonian cycle such that the vertices of each cluster are visited contiguously. The CTSP is NP-hard and, in this context, we propose heuristic methods for the CTSP using GRASP, Path Relinking and Variable Neighborhood Descent (VND). The heuristic methods were tested using Euclidean instances with up to 2000 vertices and clusters varying between 4 and 150 vertices. Computational tests were performed to compare the performance of the heuristic methods with an exact algorithm using the Parallel CPLEX software. The computational results showed that the hybrid heuristic method using VND outperforms the other heuristic methods.
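
    A pure-Python construction heuristic conveys the problem's defining constraint: clusters are visited one at a time, in nearest-neighbour order, with vertices chained greedily inside each cluster, so every cluster is traversed contiguously. GRASP would randomize this construction and VND would then improve it with local search; neither is reproduced here, and the start position is an assumption.

        import math

        def dist(p, q):
            return math.hypot(p[0] - q[0], p[1] - q[1])

        def ctsp_tour(clusters, start=(0.0, 0.0)):
            tour, pos, todo = [], start, [list(c) for c in clusters]
            while todo:
                # pick the cluster whose closest vertex is nearest to the current position
                ci = min(range(len(todo)),
                         key=lambda i: min(dist(pos, v) for v in todo[i]))
                cluster = todo.pop(ci)
                while cluster:                   # nearest-neighbour chain inside cluster
                    v = min(cluster, key=lambda u: dist(pos, u))
                    cluster.remove(v)
                    tour.append(v)
                    pos = v
            return tour

        clusters = [[(0, 1), (1, 1)], [(5, 5), (6, 5)], [(0, 6), (1, 6)]]
        print(ctsp_tour(clusters))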

  17. Automatic electromagnetic valve for previous vacuum

    International Nuclear Information System (INIS)

    Granados, C. E.; Martin, F.

    1959-01-01

    A valve which maintains the vacuum of an installation when the electric current fails is described. It also admits air into the backing (previous) vacuum pump to prevent oil from rising into the vacuum tubes. (Author)

  18. A REVIEW ON EFFICACIOUS METHODS TO DECOLORIZE REACTIVE AZO DYE

    Directory of Open Access Journals (Sweden)

    Jagadeesan Vijayaraghavan

    2013-01-01

    This paper presents an intensive review of the decolorization of the reactive azo dye Reactive Black 5. Various physicochemical methods, namely photocatalysis, electrochemical treatment, adsorption and hydrolysis, and biological methods such as microbial degradation, biosorption and bioaccumulation, have been analyzed thoroughly, along with the merits and demerits of each method. Among these various methods, biological treatment methods are found to be the best for decolorization of Reactive Black 5. With respect to dye biosorption, microbial biomass (bacteria, fungi, microalgae, etc.) outperformed the macroscopic materials (seaweeds, crab shell, etc.) used for the decolorization process. The use of living organisms may not be an option for the continuous treatment of highly toxic organic/inorganic contaminants. Once the toxicant concentration becomes too high or the process is operated for a long time, the amount of toxicant accumulated will reach saturation. Beyond this point, an organism's metabolism may be interrupted, resulting in death of the organism. This limitation does not exist in the case of dead biomass, which is flexible to environmental conditions and toxicant concentrations. Thus, owing to its favorable characteristics, biosorption has received much attention in recent years.

  19. Sinusoidal Order Estimation Using Angles between Subspaces

    Directory of Open Access Journals (Sweden)

    Søren Holdt Jensen

    2009-01-01

    We consider the problem of determining the order of a parametric model from a noisy signal based on the geometry of the space. More specifically, we do this using the nontrivial angles between the candidate signal subspace model and the noise subspace. The proposed principle is closely related to the subspace orthogonality property known from the MUSIC algorithm, and we study its properties and compare it to other related measures. For the problem of estimating the number of complex sinusoids in white noise, a computationally efficient implementation exists, and this problem is therefore considered in detail. In computer simulations, we compare the proposed method to various well-known methods for order estimation. These show that the proposed method outperforms the other previously published subspace methods and that it is more robust to colored noise than the previously published methods.
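
    The core ingredient can be demonstrated with SciPy's subspace_angles: principal angles between a candidate signal-subspace basis and the noise subspace taken from the eigendecomposition of a sample covariance matrix. A generic random subspace stands in here for the sinusoidal model of the paper.

        import numpy as np
        from scipy.linalg import subspace_angles

        rng = np.random.default_rng(0)
        m, k, n = 8, 2, 500                                 # dimension, order, snapshots
        basis = np.linalg.qr(rng.normal(size=(m, k)))[0]    # true signal subspace
        X = basis @ rng.normal(size=(k, n)) + 0.1 * rng.normal(size=(m, n))

        R = X @ X.T / n                                     # sample covariance
        w, V = np.linalg.eigh(R)                            # ascending eigenvalues
        noise = V[:, : m - k]                               # noise subspace estimate
        print(np.rad2deg(subspace_angles(basis, noise)))    # near 90 deg: orthogonal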

  1. ℓ0TV: A new method for image restoration in the presence of impulse noise

    KAUST Repository

    Yuan, Ganzhao; Ghanem, Bernard

    2015-01-01

    In this paper, we propose a new method, called L0TV-PADMM, which solves the TV-based restoration problem with L0-norm data fidelity. To effectively deal with the resulting non-convex nonsmooth optimization problem, we first reformulate it as an equivalent MPEC (Mathematical Program with Equilibrium Constraints), and then solve it using a proximal Alternating Direction Method of Multipliers (PADMM). Our L0TV-PADMM method finds a desirable solution to the original L0-norm optimization problem and is proven to be convergent under mild conditions. We apply L0TV-PADMM to the problems of image denoising and deblurring in the presence of impulse noise. Our extensive experiments demonstrate that L0TV-PADMM outperforms state-of-the-art image restoration methods.
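
    Not the paper's algorithm, but a useful baseline for context: off-the-shelf total-variation denoising with an L2 data fidelity applied to salt-and-pepper (impulse) noise, the regime in which the L0-norm fidelity of L0TV-PADMM is designed to do better. The noise level and weight are arbitrary choices.

        import numpy as np
        from skimage import data, util
        from skimage.restoration import denoise_tv_chambolle

        clean = util.img_as_float(data.camera())
        noisy = util.random_noise(clean, mode="s&p", amount=0.1)   # impulse noise
        denoised = denoise_tv_chambolle(noisy, weight=0.15)        # L2-TV baseline
        print(np.mean((denoised - clean) ** 2))                    # baseline MSE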

  2. Can Morphing Methods Predict Intermediate Structures?

    Science.gov (United States)

    Weiss, Dahlia R.; Levitt, Michael

    2009-01-01

    Movement is crucial to the biological function of many proteins, yet crystallographic structures of proteins can give us only a static snapshot. The protein dynamics that are important to biological function often happen on a timescale that is unattainable through detailed simulation methods such as molecular dynamics, as they often involve crossing high-energy barriers. To address this coarse-grained motion, several methods have been implemented as web servers in which a set of coordinates is usually linearly interpolated from an initial crystallographic structure to a final crystallographic structure. We present a new morphing method that does not interpolate linearly and can therefore go around high-energy barriers, and which can produce different trajectories between the same two end structures. In this work, we evaluate our method and other established coarse-grained methods according to an objective measure: how close a coarse-grained dynamics method comes to a crystallographically determined intermediate structure when calculating a trajectory between the initial and final crystal protein structures. We test this with a set of five proteins with at least three crystallographically determined on-pathway high-resolution intermediate structures from the Protein Data Bank. For simple hinging motions involving a small conformational change, segmentation of the protein into two rigid sections outperforms other more computationally involved methods. However, large-scale conformational change is best addressed using a nonlinear approach, and we suggest that there is merit in further developing such methods. PMID:18996395
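
    The linear baseline that nonlinear morphing improves on is one line of arithmetic: straight Cartesian interpolation between two aligned conformations. The coordinates below are random stand-ins for superposed C-alpha coordinates.

        import numpy as np

        start = np.random.default_rng(0).normal(size=(100, 3))   # stand-in conformation A
        end = start + 2.0                                        # displaced conformation B

        frames = [(1 - t) * start + t * end for t in np.linspace(0.0, 1.0, 11)]
        print(len(frames), frames[5].shape)   # 11 morph frames; the midpoint structure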

  3. Optic disk localization by a robust fusion method

    Science.gov (United States)

    Zhang, Jielin; Yin, Fengshou; Wong, Damon W. K.; Liu, Jiang; Baskaran, Mani; Cheng, Ching-Yu; Wong, Tien Yin

    2013-02-01

    The localization of the optic disk plays an important role in developing computer-aided diagnosis (CAD) systems for ocular diseases such as glaucoma, diabetic retinopathy and age-related macula degeneration. In this paper, we propose an intelligent fusion of methods for the localization of the optic disk in retinal fundus images. Three different approaches are developed to detect the location of the optic disk separately. The first is the maximum vessel crossing method, which finds the region with the largest number of blood vessel crossing points. The second is the multichannel thresholding method, targeting the area with the highest intensity. The third searches the vertical and horizontal regions-of-interest separately on the basis of blood vessel structure and neighborhood entropy profile. Finally, these three methods are combined using an intelligent fusion method to improve the overall accuracy. The proposed algorithm was tested on the STARE database and the ORIGAlight database, each consisting of images with various pathologies. The preliminary result on the STARE database reaches 81.5%, while a higher result of 99% is obtained for the ORIGAlight database. The proposed method outperforms each individual approach and a state-of-the-art method which utilizes an intensity-based approach. The results indicate a high potential for this method to be used in retinal CAD systems.

  4. Prevalence of pain in the head, back and feet in refugees previously exposed to torture: a ten-year follow-up study

    DEFF Research Database (Denmark)

    Olsen, Dorthe Reff; Montgomery, Edith; Bøjholm, Søren

    2007-01-01

    AIM: To estimate change over 10 years concerning the prevalence of pain in the head, back and feet among previously tortured refugees settled in Denmark, and to compare associations between methods of torture and prevalent pain at baseline and at 10-year follow-up. METHODS: 139 refugees previously... associated with the type and bodily focus of the torture. This presents a considerable challenge to future evidence-based development of effective treatment programs.

  5. Validation of the Online version of the Previous Day Food Questionnaire for schoolchildren

    Directory of Open Access Journals (Sweden)

    Raquel ENGEL

    Objective: To evaluate the validity of the web-based version of the Previous Day Food Questionnaire Online for schoolchildren from the 2nd to 5th grades of elementary school. Methods: Participants were 312 schoolchildren aged 7 to 12 years from a public school in the city of Florianópolis, Santa Catarina, Brazil. Validity was assessed by sensitivity and specificity, as well as by agreement rates (match, omission, and intrusion rates) of food items reported by children on the Previous Day Food Questionnaire Online, using direct observation of foods/beverages eaten during school meals (mid-morning snack or afternoon snack) on the previous day as the reference. Multivariate multinomial logistic regression analysis was used to evaluate the influence of participants' characteristics on omission and intrusion rates. Results: The results showed adequate sensitivity (67.7%) and specificity (95.2%). There were low omission and intrusion rates of 22.8% and 29.5%, respectively, when all food items were analyzed. Pizza/hamburger showed the highest omission rate, whereas milk and milk products showed the highest intrusion rate. The participants who attended school in the afternoon shift presented a higher probability of intrusion compared to their peers who attended school in the morning. Conclusion: The Previous Day Food Questionnaire Online possessed satisfactory validity for the assessment of food intake at the group level in schoolchildren from the 2nd to 5th grades of public school.

  6. Stochastic Least-Squares Petrov--Galerkin Method for Parameterized Linear Systems

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Kookjin [Univ. of Maryland, College Park, MD (United States). Dept. of Computer Science; Carlberg, Kevin [Sandia National Lab. (SNL-CA), Livermore, CA (United States); Elman, Howard C. [Univ. of Maryland, College Park, MD (United States). Dept. of Computer Science and Inst. for Advanced Computer Studies

    2018-03-29

    Here, we consider the numerical solution of parameterized linear systems where the system matrix, the solution, and the right-hand side are parameterized by a set of uncertain input parameters. We explore spectral methods in which the solutions are approximated in a chosen finite-dimensional subspace. It has been shown that the stochastic Galerkin projection technique fails to minimize any measure of the solution error. As a remedy for this, we propose a novel stochastic least-squares Petrov--Galerkin (LSPG) method. The proposed method is optimal in the sense that it produces the solution that minimizes a weighted $\ell^2$-norm of the residual over all solutions in a given finite-dimensional subspace. Moreover, the method can be adapted to minimize the solution error in different weighted $\ell^2$-norms by simply applying a weighting function within the least-squares formulation. In addition, a goal-oriented seminorm induced by an output quantity of interest can be minimized by defining a weighting function as a linear functional of the solution. We establish optimality and error bounds for the proposed method, and extensive numerical experiments show that the weighted LSPG method outperforms other spectral methods in minimizing corresponding target weighted norms.
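
    A hedged sketch of the weighted least-squares idea on parameter samples: choose reduced-basis coefficients that minimize a weighted residual stacked over sampled systems, so that changing the weights retargets the norm being minimized. The basis, the weighting function, and the systems below are arbitrary assumptions, not the paper's formulation in a stochastic basis.

        import numpy as np

        rng = np.random.default_rng(0)
        n, n_samples, r = 50, 20, 5
        Phi = np.linalg.qr(rng.normal(size=(n, r)))[0]      # reduced basis (assumed)

        rows, rhs = [], []
        for _ in range(n_samples):                          # sampled parameter values
            mu = rng.uniform(0.5, 2.0)
            A = mu * np.eye(n) + 0.01 * rng.normal(size=(n, n))
            b = rng.normal(size=n)
            w = 1.0 / mu                                    # weighting function (a choice)
            rows.append(w * (A @ Phi))
            rhs.append(w * b)

        coeffs = np.linalg.lstsq(np.vstack(rows), np.concatenate(rhs), rcond=None)[0]
        print(Phi @ coeffs)                                 # reduced-order solution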

  7. HEART TRANSPLANTATION IN PATIENTS WITH PREVIOUS OPEN HEART SURGERY

    Directory of Open Access Journals (Sweden)

    R. Sh. Saitgareev

    2016-01-01

    Heart Transplantation (HTx) to date remains the most effective and radical method of treatment of patients with end-stage heart failure. The deficit of donor hearts is forcing increasing resort to different long-term mechanical circulatory support systems, including as a «bridge» to follow-up HTx. According to the ISHLT Registry, the number of recipients who had previously undergone cardiopulmonary bypass surgery increased from 40% in the period from 2004 to 2008 to 49.6% for the period from 2009 to 2015. HTx performed in such patients, on the one hand, involves considerable technical difficulties and high risks; on the other hand, there is often no alternative medical intervention to HTx, and if not dictated by absolute contraindications, denial of the surgery is equivalent to 100% mortality. This review summarizes the results of a number of published studies aimed at understanding the immediate and late results of HTx in patients who previously underwent open heart surgery. The effect of resternotomy during HTx, the specific features associated with its implementation in recipients previously operated on open heart, and its effects on immediate and long-term survival are considered in this review. Results of studies analyzing the risk factors for perioperative complications in such recipients are also presented. Separately, HTx risks after implantation of prolonged mechanical circulatory support systems are examined. The literature does not allow a clear definition of the impact of earlier open heart surgery on the course of the perioperative period and on the prognosis of survival in recipients who underwent HTx. On the other hand, provided that the HTx and the perioperative period follow a regular course, the risks in this clinical situation are justified, and the long-term prognosis of recipients with previous open heart surgery is comparable to that of patients who underwent primary HTx. Studies

  8. An iterative method for selecting degenerate multiplex PCR primers.

    Science.gov (United States)

    Souvenir, Richard; Buhler, Jeremy; Stormo, Gary; Zhang, Weixiong

    2007-01-01

    Single-nucleotide polymorphism (SNP) genotyping is an important molecular genetics process, which can produce results that will be useful in the medical field. Because of inherent complexities in DNA manipulation and analysis, many different methods have been proposed for a standard assay. One of the proposed techniques for performing SNP genotyping requires amplifying regions of DNA surrounding a large number of SNP loci. To automate a portion of this particular method, it is necessary to select a set of primers for the experiment. Selecting these primers can be formulated as the Multiple Degenerate Primer Design (MDPD) problem. The Multiple, Iterative Primer Selector (MIPS) is an iterative beam-search algorithm for MDPD. Theoretical and experimental analyses show that this algorithm performs well compared with the limits of degenerate primer design. Furthermore, MIPS outperforms an existing algorithm that was designed for a related degenerate primer selection problem.
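
    A simplified beam-search flavour of degenerate primer selection (not the published MIPS implementation): candidate primers are merged position-wise by taking unions of allowed bases, and only the few candidates covering the most sites within a degeneracy budget are kept each round. Sites are assumed pre-aligned and of equal length.

        from itertools import combinations

        def degeneracy(primer):                     # product of per-position set sizes
            out = 1
            for pos in primer:
                out *= len(pos)
            return out

        def merge(p, q):                            # position-wise union of bases
            return tuple(a | b for a, b in zip(p, q))

        def beam_search(sites, beam=3, max_degeneracy=8):
            cands = [(tuple(frozenset(c) for c in s), {i}) for i, s in enumerate(sites)]
            while True:
                new = []
                for (p, cp), (q, cq) in combinations(cands, 2):
                    m = merge(p, q)
                    # keep merges that stay in budget and strictly grow coverage
                    if degeneracy(m) <= max_degeneracy and len(cp | cq) > max(len(cp), len(cq)):
                        new.append((m, cp | cq))
                if not new:
                    return max(cands, key=lambda t: len(t[1]))
                new.sort(key=lambda t: len(t[1]), reverse=True)
                cands = new[:beam]                  # beam: best few candidates survive

        sites = ["ACGT", "ACGA", "TCGT"]
        primer, covered = beam_search(sites)
        print([sorted(pos) for pos in primer], covered)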

  9. Underestimation of Severity of Previous Whiplash Injuries

    Science.gov (United States)

    Naqui, SZH; Lovell, SJ; Lovell, ME

    2008-01-01

    INTRODUCTION: We noted a report suggesting that more significant symptoms may be expressed after second whiplash injuries through a cumulative effect, including degeneration. We wondered if patients were underestimating the severity of their earlier injury. PATIENTS AND METHODS: We studied recent medicolegal reports to assess subjects with a second whiplash injury. They had been asked whether their earlier injury was worse, the same or lesser in severity. RESULTS: From the study cohort, 101 patients (87%) felt that they had fully recovered from their first injury and 15 (13%) had not. Seventy-six subjects considered their first injury of lesser severity, 24 worse and 16 the same. Of the 24 who felt the violence of their first accident was worse, only 8 had worse symptoms, and 16 felt their symptoms were mainly the same as or less than their symptoms from their second injury. Statistical analysis of the data revealed that the proportion of those claiming a difference who said the previous injury was lesser was 76% (95% CI 66–84%). The observed proportion with a lesser injury was considerably higher than the 50% anticipated. CONCLUSIONS: We feel that subjects may underestimate the severity of an earlier injury and associated symptoms. Reasons for this may include secondary gain rather than any proposed cumulative effect. PMID:18201501

  10. Secondary recurrent miscarriage is associated with previous male birth.

    LENUS (Irish Health Repository)

    Ooi, Poh Veh

    2012-01-31

    Secondary recurrent miscarriage (RM) is defined as three or more consecutive pregnancy losses after delivery of a viable infant. Previous reports suggest that a firstborn male child is associated with less favourable subsequent reproductive potential, possibly due to maternal immunisation against male-specific minor histocompatibility antigens. In a retrospective cohort study of 85 cases of secondary RM we aimed to determine if secondary RM was associated with (i) gender of previous child, maternal age, or duration of miscarriage history, and (ii) increased risk of pregnancy complications. Fifty-three women (62.0%; 53/85) gave birth to a male child prior to RM compared to 32 (38.0%; 32/85) who gave birth to a female child (p=0.002). The majority (91.7%; 78/85) had uncomplicated, term deliveries and normal birth weight neonates, with one quarter of the women previously delivered by Caesarean section. All had routine RM investigations and 19.0% (16/85) had an abnormal result. Fifty-seven women conceived again and 33.3% (19/57) miscarried, but there was no significant difference in failure rates between those with a previous male or female child (13/32 vs. 6/25, p=0.2). When patients with abnormal results were excluded, or when women with only one previous child were considered, there was still no difference in these rates. A previous male birth may be associated with an increased risk of secondary RM but numbers preclude concluding whether this increases recurrence risk. The suggested association with previous male birth provides a basis for further investigations at a molecular level.


  11. Performance Analysis of a Maximum Power Point Tracking Technique using Silver Mean Method

    Directory of Open Access Journals (Sweden)

    Shobha Rani Depuru

    2018-01-01

    This paper presents a simple and particularly efficacious Maximum Power Point Tracking (MPPT) algorithm based on the Silver Mean Method (SMM). This method operates by choosing a search interval from the P-V characteristics of the given solar array and converges to the MPP of the Solar Photo-Voltaic (SPV) system by shrinking this interval. After achieving the maximum power, the algorithm stops shrinking and maintains a constant voltage until the next interval is decided. The tracking capability, efficiency, and performance of the proposed algorithm are validated by simulation and experimental results with a 100 W solar panel under variable temperature and irradiance conditions. The results obtained confirm that even without any perturbation and observation process, the proposed method still outperforms the traditional perturb and observe (P&O) method by demonstrating far better steady-state output, more accuracy and higher efficiency.
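
    A hedged sketch of an SMM-style tracker: a golden-section-like interval search over panel voltage, but with the interior points placed using the silver ratio 1 + sqrt(2). The P-V curve below is a toy quadratic model, and the update logic is an interpretation of the interval-shrinking description above, not the authors' controller.

        SILVER = 1.0 + 2.0 ** 0.5          # silver ratio, ~2.414

        def panel_power(v):                # toy unimodal P-V model, MPP near 16.7 V
            return v * (5.0 - 0.15 * v)

        def track_mpp(lo=0.0, hi=25.0, tol=0.05):
            while hi - lo > tol:
                step = (hi - lo) / SILVER  # interior points ~0.414 from each end
                x1, x2 = lo + step, hi - step
                if panel_power(x1) > panel_power(x2):
                    hi = x2                # maximum lies left of x2
                else:
                    lo = x1                # maximum lies right of x1
            return 0.5 * (lo + hi)

        print(track_mpp())                 # ~16.7, the toy MPP voltage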

  12. A Channelization-Based DOA Estimation Method for Wideband Signals

    Directory of Open Access Journals (Sweden)

    Rui Guo

    2016-07-01

    In this paper, we propose a novel direction of arrival (DOA) estimation method for wideband signals with sensor arrays. The proposed method splits the wideband array output into multiple frequency sub-channels and estimates the signal parameters using a digital channelization receiver. Based on the output sub-channels, a channelization-based incoherent signal subspace method (Channelization-ISM) and a channelization-based test of orthogonality of projected subspaces method (Channelization-TOPS) are proposed. Channelization-ISM applies narrowband signal subspace methods on each sub-channel independently; the arithmetic or geometric mean of the DOAs estimated from each sub-channel then gives the final result. Channelization-TOPS measures the orthogonality between the signal and the noise subspaces of the output sub-channels to estimate DOAs. The proposed channelization-based method isolates signals in different bandwidths reasonably and improves the output SNR. It outperforms the conventional ISM and TOPS methods in estimation accuracy and dynamic range, especially in real environments. Besides, the parallel processing architecture makes it easy to implement on hardware. A wideband digital array radar (DAR) using direct wideband radio frequency (RF) digitization is presented. Experiments carried out in a microwave anechoic chamber with the wideband DAR are presented to demonstrate the performance. The results verify the effectiveness of the proposed method.
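
    A much-simplified skeleton of the channelization front end: each sensor's wideband record is split into narrow sub-channels with an STFT, a narrowband MUSIC scan is run per sub-channel, and the spectra are averaged, mimicking the incoherent combination of Channelization-ISM. The ULA geometry, single-source assumption, and all parameters are made up for the toy example.

        import numpy as np
        from scipy.signal import stft

        rng = np.random.default_rng(0)
        m, fs, d_over_c = 8, 1000.0, 5e-4          # sensors, sample rate, d/c in seconds
        theta = np.deg2rad(20.0)                   # true DOA (assumption)
        t = np.arange(4096) / fs
        delays = np.arange(m) * d_over_c * np.sin(theta)

        # wideband source: several tones across the band, delayed per sensor
        x = np.stack([sum(np.cos(2 * np.pi * f * (t - tau)) for f in (110.0, 200.0, 310.0))
                      for tau in delays]) + 0.1 * rng.normal(size=(m, len(t)))

        f_bins, _, X = stft(x, fs=fs, nperseg=256)     # X: (m, n_bins, n_frames)
        angles = np.deg2rad(np.arange(-90, 91))
        spectrum = np.zeros(len(angles))
        for k in range(1, len(f_bins)):                # per sub-channel, skipping DC
            S = X[:, k, :]
            R = S @ S.conj().T                         # sub-channel covariance
            w, V = np.linalg.eigh(R)
            En = V[:, :-1]                             # noise subspace (one source)
            a = np.exp(-2j * np.pi * f_bins[k] * d_over_c *
                       np.outer(np.arange(m), np.sin(angles)))
            spectrum += 1.0 / np.linalg.norm(En.conj().T @ a, axis=0) ** 2
        print(np.rad2deg(angles[np.argmax(spectrum)])) # close to 20 degrees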

  13. The frequency of previously undetectable deletions involving 3' Exons of the PMS2 gene.

    Science.gov (United States)

    Vaughn, Cecily P; Baker, Christine L; Samowitz, Wade S; Swensen, Jeffrey J

    2013-01-01

    Lynch syndrome is characterized by mutations in one of four mismatch repair genes, MLH1, MSH2, MSH6, or PMS2. Clinical mutation analysis of these genes includes sequencing of exonic regions and deletion/duplication analysis. However, detection of deletions and duplications in PMS2 has previously been confined to Exons 1-11 due to gene conversion between PMS2 and the pseudogene PMS2CL in the remaining 3' exons (Exons 12-15). We have recently described an MLPA-based method that permits detection of deletions of PMS2 Exons 12-15; however, the frequency of such deletions has not yet been determined. To address this question, we tested for 3' deletions in 58 samples that were reported to be negative for PMS2 mutations using previously available methods. All samples were from individuals whose tumors exhibited loss of PMS2 immunohistochemical staining without concomitant loss of MLH1 immunostaining. We identified seven samples in this cohort with deletions in the 3' region of PMS2, including three previously reported samples with deletions of Exons 13-15 (two samples) and Exons 14-15. Also detected were deletions of Exons 12-15, Exon 13, and Exon 14 (two samples). Breakpoint analysis of the intragenic deletions suggests they occurred through Alu-mediated recombination. Our results indicate that ∼12% of samples suspected of harboring a PMS2 mutation based on immunohistochemical staining, for which mutations have not yet been identified, would benefit from testing using the new methodology. Copyright © 2012 Wiley Periodicals, Inc.
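
    The dosage logic behind MLPA-style deletion calling can be sketched in a few lines: probe peaks are normalized and compared against reference samples, and a dosage quotient near 0.5 suggests a heterozygous exon deletion. Real analyses normalize against control probes and replicate controls; the numbers and the 0.75 cutoff below are illustrative assumptions.

        import numpy as np

        sample = np.array([1.0, 0.95, 0.52, 0.48, 1.05])   # probe peaks, exons 11-15
        reference = np.array([1.0, 1.0, 1.0, 1.0, 1.0])    # mean of normal controls

        dq = (sample / sample.sum()) / (reference / reference.sum())
        deleted = dq < 0.75                # common heterozygous-deletion cutoff
        print(np.where(deleted)[0])        # candidate deleted probe indices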

  15. Cultivation-based multiplex phenotyping of human gut microbiota allows targeted recovery of previously uncultured bacteria

    DEFF Research Database (Denmark)

    Rettedal, Elizabeth; Gumpert, Heidi; Sommer, Morten

    2014-01-01

    The human gut microbiota is linked to a variety of human health issues and implicated in antibiotic resistance gene dissemination. Most of these associations rely on culture-independent methods, since it is commonly believed that gut microbiota cannot be easily or sufficiently cultured. Here, we...... microbiota. Based on the phenotypic mapping, we tailor antibiotic combinations to specifically select for previously uncultivated bacteria. Utilizing this method we cultivate and sequence the genomes of four isolates, one of which apparently belongs to the genus Oscillibacter; uncultivated Oscillibacter...

  16. Estimating the effect of current, previous and never use of drugs in studies based on prescription registries

    DEFF Research Database (Denmark)

    Nielsen, Lars Hougaard; Løkkegaard, Ellen; Andreasen, Anne Helms

    2009-01-01

    PURPOSE: Many studies which investigate the effect of drugs categorize the exposure variable into never, current, and previous use of the study drug. When prescription registries are used to make this categorization, the exposure variable may be misclassified, since the registries do not carry any information on the time of discontinuation of treatment. In this study, we investigated the amount of misclassification of exposure (never, current, previous use) to hormone therapy (HT) when the exposure variable was based on prescription data. Furthermore, we evaluated the significance of this misclassification for analysing the risk of breast cancer. MATERIALS AND METHODS: Prescription data were obtained from the Danish Registry of Medicinal Products Statistics and we applied various methods to approximate treatment episodes. We analysed the duration of HT episodes to study the ability to identify...
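
    The never/current/previous categorization from prescription data can be illustrated with a small sketch. The pack-coverage and grace-period values below are hypothetical conventions; the study's point is precisely that such episode definitions, not the registry itself, drive the misclassification.

```python
import pandas as pd

def ht_exposure(prescriptions, index_date, days_covered=90, grace=90):
    """Classify never/current/previous HT use at index_date (one possible
    convention; `days_covered` and `grace` are assumed values)."""
    before = prescriptions.loc[prescriptions["date"] <= index_date]
    if before.empty:
        return "never"
    covered_until = before["date"].max() + pd.Timedelta(days=days_covered + grace)
    return "current" if index_date <= covered_until else "previous"
```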

  17. Reoperative sentinel lymph node biopsy after previous mastectomy.

    Science.gov (United States)

    Karam, Amer; Stempel, Michelle; Cody, Hiram S; Port, Elisa R

    2008-10-01

    Sentinel lymph node (SLN) biopsy is the standard of care for axillary staging in breast cancer, but many clinical scenarios questioning the validity of SLN biopsy remain. Here we describe our experience with reoperative SLN (re-SLN) biopsy after previous mastectomy. Review of the SLN database from September 1996 to December 2007 yielded 20 procedures performed in the setting of previous mastectomy. SLN biopsy was performed using radioisotope, with or without blue dye, injected superior to the mastectomy incision into the skin flap in all patients. In 17 of 20 patients (85%), re-SLN biopsy was performed for local or regional recurrence after mastectomy. Re-SLN biopsy was successful in 13 of 20 patients (65%). Of these 13 patients, 2 had a positive re-SLN, and completion axillary dissection was performed, with 1 having additional positive nodes. Of the 11 patients with a negative re-SLN, 2 underwent completion axillary dissection demonstrating additional negative nodes. One patient with a negative re-SLN experienced chest wall recurrence combined with axillary recurrence 11 months after re-SLN biopsy; all others remained free of local or axillary recurrence. Re-SLN biopsy was unsuccessful in 7 of 20 patients (35%). In three of these seven patients, axillary dissection was performed, yielding positive nodes in two. The remaining four patients had all undergone previous modified radical mastectomy and so received no additional axillary surgery. In this small series, re-SLN biopsy was successful after previous mastectomy, and this procedure may play a role when axillary staging is warranted after mastectomy.

  18. A survey of formal methods for determining functional joint axes.

    Science.gov (United States)

    Ehrig, Rainald M; Taylor, William R; Duda, Georg N; Heller, Markus O

    2007-01-01

    Axes of rotation, e.g. at the knee, are often generated from clinical gait analysis data to be used in the assessment of kinematic abnormalities, the diagnosis of disease, or the ongoing monitoring of a patient's condition. They are additionally used in musculoskeletal models to aid in the description of joint and segment kinematics for patient-specific analyses. Currently available methods to describe joint axes from segment marker positions share the problem that when one segment is transformed into the coordinate system of another, artefacts associated with motion of the markers relative to the bone can become magnified. In an attempt to address this problem, a symmetrical axis of rotation approach (SARA) is presented here to determine a unique axis of rotation that considers the movement of two dynamic body segments simultaneously, and its performance is compared in a survey against a number of previously proposed techniques. Using a generated virtual joint, with superimposed marker error conditions to represent skin movement artefacts, fitting methods (geometric axis fit, cylinder axis fit, algebraic axis fit) and transformation techniques (axis transformation technique, mean helical axis, Schwartz approach) were classified and compared with the SARA. Nearly all approaches were able to estimate the axis of rotation to within an RMS error of 0.1 cm at large ranges of motion (90 degrees). Although the geometric axis fit produced the lowest RMS error (approximately 1.2 cm) at small ranges of motion (5 degrees) with a stationary axis, the SARA and the axis transformation technique outperformed all other approaches under the most demanding marker artefact conditions for all ranges of motion. The cylinder and algebraic axis fit approaches were unable to compute competitive AoR estimates. Whilst these initial results using the SARA are promising and the computations are fast enough to be performed "on-line", the technique must now be proven in a clinical environment.
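
    The core of a symmetrical formulation of this kind can be written as one linear least-squares problem: find points c1 and c2, fixed in the two segment frames, that coincide in the lab frame across all time samples. The sketch below assumes per-frame rotation matrices and translations for both segments are already available from marker data; it is an illustration of the idea, not the paper's implementation.

```python
import numpy as np

def sara_fixed_points(R1, t1, R2, t2):
    """Find points c1, c2 (local to each segment) that coincide in the lab
    frame, by least squares on R1[i] c1 - R2[i] c2 = t2[i] - t1[i]."""
    n = len(R1)
    A = np.zeros((3 * n, 6))
    b = np.zeros(3 * n)
    for i in range(n):
        A[3*i:3*i+3, :3] = R1[i]
        A[3*i:3*i+3, 3:] = -R2[i]
        b[3*i:3*i+3] = t2[i] - t1[i]
    sol, *_ = np.linalg.lstsq(A, b, rcond=None)
    # For a hinge joint A is rank deficient along the axis; the axis direction
    # can be read off the right singular vectors of A with small singular values.
    return sol[:3], sol[3:]
```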

  19. Robust Methods for Moderation Analysis with a Two-Level Regression Model.

    Science.gov (United States)

    Yang, Miao; Yuan, Ke-Hai

    2016-01-01

    Moderation analysis has many applications in social sciences. Most widely used estimation methods for moderation analysis assume that errors are normally distributed and homoscedastic. When these assumptions are not met, the results from a classical moderation analysis can be misleading. For more reliable moderation analysis, this article proposes two robust methods with a two-level regression model when the predictors do not contain measurement error. One method is based on maximum likelihood with Student's t distribution and the other is based on M-estimators with Huber-type weights. An algorithm for obtaining the robust estimators is developed. Consistent estimates of standard errors of the robust estimators are provided. The robust approaches are compared against normal-distribution-based maximum likelihood (NML) with respect to power and accuracy of parameter estimates through a simulation study. Results show that the robust approaches outperform NML under various distributional conditions. Application of the robust methods is illustrated through a real data example. An R program is developed and documented to facilitate the application of the robust methods.
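
    A minimal sketch of the second approach, M-estimation with Huber-type weights, via iteratively reweighted least squares. The tuning constant 1.345 and the MAD scale estimate are standard choices assumed here, not taken from the paper.

```python
import numpy as np

def huber_irls(X, y, c=1.345, n_iter=50):
    """Huber-type M-estimation via iteratively reweighted least squares (sketch)."""
    X1 = np.column_stack([np.ones(len(y)), X])        # add intercept
    beta = np.linalg.lstsq(X1, y, rcond=None)[0]      # OLS starting values
    for _ in range(n_iter):
        r = y - X1 @ beta
        s = np.median(np.abs(r - np.median(r))) / 0.6745   # MAD scale estimate
        u = r / (s + 1e-12)
        w = np.where(np.abs(u) <= c, 1.0, c / np.abs(u))   # Huber weights
        sw = np.sqrt(w)
        beta = np.linalg.lstsq(X1 * sw[:, None], y * sw, rcond=None)[0]
    return beta
```

    For a moderation model, the columns of X would be the predictor x, the moderator m, and their product x*m, whose coefficient carries the interaction effect.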

  20. 77 FR 70176 - Previous Participation Certification

    Science.gov (United States)

    2012-11-23

    ... participants' previous participation in government programs and ensure that the past record is acceptable prior to granting approval to participate... information is designed to be 100 percent automated and digital submission of all data and certifications is...

  1. Risks of cardiovascular adverse events and death in patients with previous stroke undergoing emergency noncardiac, nonintracranial surgery

    DEFF Research Database (Denmark)

    Christiansen, Mia N.; Andersson, Charlotte; Gislason, Gunnar H.

    2017-01-01

    Background: The outcomes of emergent noncardiac, nonintracranial surgery in patients with previous stroke remain unknown. Methods: All emergency surgeries performed in Denmark (2005 to 2011) were analyzed according to time elapsed between previous ischemic stroke and surgery. The risks of 30-day mortality and major adverse cardiovascular events were estimated as odds ratios (ORs) and 95% CIs using adjusted logistic regression models in a priori defined groups (reference was no previous stroke). In patients undergoing surgery immediately (within 1 to 3 days) or early after stroke (within 4 to 14 days)... and general anesthesia was less frequent in patients with previous stroke (all P...). Risks of major adverse cardiovascular events and mortality were high for patients with stroke less than 3 months before surgery (20.7 and 16.4% events; OR = 4.71 [95% CI, 4.18 to 5.32] and 1.65 [95% CI, 1.45 to 1.88]), and remained...

  2. Comparative study of SVM methods combined with voxel selection for object category classification on fMRI data.

    Science.gov (United States)

    Song, Sutao; Zhan, Zhichao; Long, Zhiying; Zhang, Jiacai; Yao, Li

    2011-02-16

    Support vector machine (SVM) has been widely used as an accurate and reliable method to decipher brain patterns from functional MRI (fMRI) data. Previous studies have not found a clear benefit for non-linear (polynomial kernel) SVM versus linear SVM. Here, a more effective non-linear SVM using a radial basis function (RBF) kernel is compared with linear SVM. Unlike traditional studies, which focused either merely on the evaluation of different types of SVM or on voxel selection methods, we aimed to investigate the overall performance of linear and RBF SVM for fMRI classification together with voxel selection schemes, in terms of classification accuracy and computational time. Six different voxel selection methods were employed to decide which voxels of the fMRI data would be included in SVM classifiers with linear and RBF kernels in classifying 4-category objects. The overall performance of the voxel selection and classification methods was then compared. Results showed that: (1) voxel selection had an important impact on the classification accuracy of the classifiers: in a relatively low-dimensional feature space, RBF SVM significantly outperformed linear SVM; in a relatively high-dimensional space, linear SVM performed better than its counterpart; (2) considering classification accuracy and computational time holistically, linear SVM with relatively more voxels as features and RBF SVM with a small set of voxels (after PCA) achieved better accuracy at lower time cost. The present work provides the first empirical results on linear and RBF SVM for the classification of fMRI data combined with voxel selection methods. Based on the findings, if only classification accuracy is of concern, RBF SVM with an appropriately small set of voxels and linear SVM with relatively more voxels are two suggested solutions; if computational time matters more, RBF SVM with a relatively small set of voxels, keeping part of the principal components as features, is the better choice.
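
    A sketch of this kind of comparison using scikit-learn, with synthetic stand-in data (80 trials, 5000 "voxels", 4 categories) in place of real fMRI features; PCA stands in for the dimensionality-reducing voxel selection step.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(80, 5000))     # stand-in for 80 trials x 5000 voxels
y = rng.integers(0, 4, size=80)     # 4 object categories (random here)

for name, clf in [
    ("linear SVM, all voxels", make_pipeline(StandardScaler(), SVC(kernel="linear"))),
    ("RBF SVM, PCA-reduced", make_pipeline(StandardScaler(), PCA(n_components=20),
                                           SVC(kernel="rbf", gamma="scale"))),
]:
    acc = cross_val_score(clf, X, y, cv=5).mean()
    print(f"{name}: accuracy {acc:.2f}")
```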

  3. Fast neuromimetic object recognition using FPGA outperforms GPU implementations.

    Science.gov (United States)

    Orchard, Garrick; Martin, Jacob G; Vogelstein, R Jacob; Etienne-Cummings, Ralph

    2013-08-01

    Recognition of objects in still images has traditionally been regarded as a difficult computational problem. Although modern automated methods for visual object recognition have achieved steadily increasing recognition accuracy, even the most advanced computational vision approaches are unable to obtain performance equal to that of humans. This has led to the creation of many biologically inspired models of visual object recognition, among them the hierarchical model and X (HMAX) model. HMAX is traditionally known to achieve high accuracy in visual object recognition tasks at the expense of significant computational complexity. Increasing complexity, in turn, increases computation time, reducing the number of images that can be processed per unit time. In this paper we describe how the computationally intensive and biologically inspired HMAX model for visual object recognition can be modified for implementation on a commercial Field-Programmable Gate Array (FPGA), specifically the Xilinx Virtex 6 ML605 evaluation board with an XC6VLX240T FPGA. We show that with minor modifications to the traditional HMAX model we can perform recognition on images of size 128 × 128 pixels at a rate of 190 images per second with a less than 1% loss in recognition accuracy in both binary and multiclass visual object recognition tasks.

  4. Better Metrics to Automatically Predict the Quality of a Text Summary

    Directory of Open Access Journals (Sweden)

    Judith D. Schlesinger

    2012-09-01

    In this paper we demonstrate a family of metrics for estimating the quality of a text summary relative to one or more human-generated summaries. The improved metrics are based on features automatically computed from the summaries to measure content and linguistic quality. The features are combined using one of three methods: robust regression, non-negative least squares, or canonical correlation (an eigenvalue method). The new metrics significantly outperform the previous standard for automatic text summarization evaluation, ROUGE.
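
    Of the three combination schemes, non-negative least squares is the easiest to show compactly. The feature matrix and human scores below are made-up placeholders; only the fitting pattern is the point.

```python
import numpy as np
from scipy.optimize import nnls

# Rows: summaries; columns: automatic features (hypothetical values);
# target: human quality scores for the same summaries.
F = np.array([[0.41, 3.2, 0.70],
              [0.55, 2.1, 0.80],
              [0.33, 4.0, 0.60],
              [0.62, 1.8, 0.90]])
human = np.array([3.1, 3.8, 2.5, 4.2])

w, _ = nnls(F, human)     # non-negative feature weights
pred = F @ w              # combined automatic quality metric
print(w, pred)
```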

  5. The Source Equivalence Acceleration Method

    International Nuclear Information System (INIS)

    Everson, Matthew S.; Forget, Benoit

    2015-01-01

    Highlights: • We present a new acceleration method, the Source Equivalence Acceleration Method. • SEAM forms an equivalent coarse group problem for any spatial method. • Equivalence is also formed across different spatial methods and angular quadratures. • Testing is conducted using OpenMOC and performance is compared with CMFD. • Results show that SEAM is preferable for very expensive transport calculations. - Abstract: Fine-group whole-core reactor analysis remains one of the long sought goals of the reactor physics community. Such a detailed analysis is typically too computationally expensive to be realized on anything except the largest of supercomputers. Recondensation using the Discrete Generalized Multigroup (DGM) method, though, offers a relatively cheap alternative to solving the fine group transport problem. DGM, however, suffered from inconsistencies when applied to high-order spatial methods. While an exact spatial recondensation method was developed and provided full spatial consistency with the fine group problem, this approach substantially increased memory requirements for realistic problems. The method described in this paper, called the Source Equivalence Acceleration Method (SEAM), forms a coarse-group problem which preserves the fine-group problem even when using higher order spatial methods. SEAM allows recondensation to converge to the fine-group solution with minimal memory requirements and little additional overhead. This method also provides for consistency when using different spatial methods and angular quadratures between the coarse group and fine group problems. SEAM was implemented in OpenMOC, a 2D MOC code developed at MIT, and its performance tested against Coarse Mesh Finite Difference (CMFD) acceleration on the C5G7 benchmark problem and on a 361 group version of the problem. For extremely expensive transport calculations, SEAM was able to outperform CMFD, resulting in speed-ups of 20–45 relative to the normal power

  6. Surgical Results of Trabeculectomy and Ahmed Valve Implantation Following a Previous Failed Trabeculectomy in Primary Congenital Glaucoma Patients

    OpenAIRE

    Lee, Naeun; Ma, Kyoung Tak; Bae, Hyoung Won; Hong, Samin; Seong, Gong Je; Hong, Young Jae; Kim, Chan Yun

    2015-01-01

    Purpose To compare the surgical results of trabeculectomy and Ahmed glaucoma valve implantation after a previous failed trabeculectomy. Methods A retrospective comparative case series review was performed on 31 eye surgeries in 20 patients with primary congenital glaucoma who underwent trabeculectomy or Ahmed glaucoma valve implantation after a previous failed trabeculectomy with mitomycin C. Results The preoperative mean intraocular pressure was 25.5 mmHg in the trabeculectomy group and 26.9...

  7. FBP and BPF reconstruction methods for circular X-ray tomography with off-center detector

    International Nuclear Information System (INIS)

    Schaefer, Dirk; Grass, Michael; Haar, Peter van de

    2011-01-01

    Purpose: Circular scanning with an off-center planar detector is an acquisition scheme that saves detector area while keeping a large field of view (FOV). Several filtered back-projection (FBP) algorithms have been proposed for it earlier. The purpose of this work is to present two newly developed back-projection filtration (BPF) variants and to evaluate their image quality compared to the existing state-of-the-art FBP methods. Methods: The first new BPF algorithm applies redundancy weighting of overlapping opposite projections before differentiation in a single projection. The second uses Katsevich-type differentiation involving two neighboring projections, followed by redundancy weighting and back-projection. An averaging scheme is presented to mitigate streak artifacts inherent to circular BPF algorithms along the Hilbert filter lines in the off-center transaxial slices of the reconstructions. Image quality is assessed visually on reconstructed slices of simulated and clinical data. Quantitative evaluation studies are performed with the Forbild head phantom by calculating root-mean-squared deviations (RMSDs) from the voxelized phantom for different detector overlap settings, and by investigating the noise-resolution trade-off with a wire phantom in the full-detector and off-center scenarios. Results: The noise-resolution behavior of all off-center reconstruction methods corresponds to their full-detector performance, with the best resolution for the FDK-based methods in the given imaging geometry. With respect to RMSD and visual inspection, the proposed BPF with Katsevich-type differentiation outperforms all other methods for the smallest chosen detector overlap of about 15 mm. The best FBP method is the algorithm that is also based on Katsevich-type differentiation and subsequent redundancy weighting. For wider overlaps of about 40-50 mm, these two algorithms produce similar results, outperforming the other three methods. The clinical...

  8. State space Newton's method for topology optimization

    DEFF Research Database (Denmark)

    Evgrafov, Anton

    2014-01-01

    ...0/1-type constraints on the design field through penalties in many topology optimization approaches. We test the algorithm on the benchmark problems of dissipated power minimization for Stokes flows, and in all cases the algorithm outperforms the traditional first order reduced space/nested approaches...

  9. [Prevalence of previously diagnosed diabetes mellitus in Mexico.

    Science.gov (United States)

    Rojas-Martínez, Rosalba; Basto-Abreu, Ana; Aguilar-Salinas, Carlos A; Zárate-Rojas, Emiliano; Villalpando, Salvador; Barrientos-Gutiérrez, Tonatiuh

    2018-01-01

    To compare the prevalence of previously diagnosed diabetes in 2016 with previous national surveys and to describe treatment and its complications. Mexico's national surveys Ensa 2000 and Ensanut 2006, 2012 and 2016 were used. For 2016, logistic regression models and measures of central tendency and dispersion were obtained. The prevalence of previously diagnosed diabetes in 2016 was 9.4%. The increase of 2.2% relative to 2012 was not significant and was only observed in patients older than 60 years. While preventive measures have increased, access to medical treatment and lifestyle have not changed. Treatment has been modified, with an increase in insulin use and a decrease in hypoglycaemic agents. Population aging, lack of screening actions and the increase in diabetes complications will lead to an increase in the burden of disease. Policy measures targeting primary and secondary prevention of diabetes are crucial.

  10. An Initialization Method Based on Hybrid Distance for k-Means Algorithm.

    Science.gov (United States)

    Yang, Jie; Ma, Yan; Zhang, Xiangfen; Li, Shunbao; Zhang, Yuping

    2017-11-01

    The traditional k-means algorithm has been widely used as a simple and efficient clustering method. However, the performance of this algorithm is highly dependent on the selection of initial cluster centers, so the method adopted for choosing initial cluster centers is extremely important. In this letter, we redefine the density of points according to the number of their neighbors, as well as the distance between points and their neighbors. In addition, we define a new distance measure that considers both Euclidean distance and density. Based on that, we propose an algorithm for selecting initial cluster centers that can dynamically adjust the weighting parameter. Furthermore, we propose a new internal clustering validation measure, the clustering validation index based on the neighbors (CVN), which can be exploited to select the optimal result among multiple clustering results. Experimental results show that the proposed algorithm outperforms existing initialization methods on real-world data sets and demonstrate the adaptability of the proposed algorithm to data sets with various characteristics.
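
    A sketch of a density-and-separation-based seeding rule in the spirit of the abstract. The kNN-based density and the score product below are assumptions for illustration; the paper's exact density definition and dynamic weighting are not reproduced.

```python
import numpy as np
from scipy.spatial.distance import cdist

def density_init(X, k, n_neighbors=10):
    """Pick k initial centers: densest point first, then points that are both
    dense and far from already chosen centers."""
    D = cdist(X, X)
    knn_dist = np.sort(D, axis=1)[:, 1:n_neighbors + 1]   # exclude self-distance
    density = 1.0 / (knn_dist.mean(axis=1) + 1e-12)       # close neighbors => dense
    centers = [int(np.argmax(density))]
    for _ in range(1, k):
        score = density * D[:, centers].min(axis=1)       # dense AND well separated
        score[centers] = -np.inf                          # never reuse a center
        centers.append(int(np.argmax(score)))
    return X[centers]
```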

  11. An Invocation Cost Optimization Method for Web Services in Cloud Environment

    Directory of Open Access Journals (Sweden)

    Lianyong Qi

    2017-01-01

    The advent of cloud computing technology has enabled users to invoke various web services in a “pay-as-you-go” manner. However, due to the flexible pricing model of web services in the cloud environment, a cloud user's service invocation cost may be influenced by many factors (e.g., service invocation time), which brings a great challenge for cost-effective web service invocation. In view of this challenge, in this paper we first investigate the multiple factors that influence the invocation cost of a cloud service, for example, the user's job size, the service invocation time, and the service quality level; afterwards, a novel Cloud Service Cost Optimization Method named CS-COM is put forward that considers the above multiple impact factors. Finally, a set of experiments are designed, deployed, and run to validate the feasibility of our proposal in terms of cost optimization. The experimental results show that the proposed CS-COM method outperforms other related methods.

  12. Advanced Steel Microstructural Classification by Deep Learning Methods.

    Science.gov (United States)

    Azimi, Seyed Majid; Britz, Dominik; Engstler, Michael; Fritz, Mario; Mücklich, Frank

    2018-02-01

    The inner structure of a material is called its microstructure. It stores the genesis of the material and determines all its physical and chemical properties. While microstructural characterization is widespread and well known, microstructural classification is mostly done manually by human experts, which gives rise to uncertainties due to subjectivity. Since the microstructure can be a combination of different phases or constituents with complex substructures, its automatic classification is very challenging, and only a few prior studies exist. Prior works focused on features designed and engineered by experts and classified microstructures separately from the feature extraction step. Recently, Deep Learning methods have shown strong performance in vision applications by learning the features from data together with the classification step. In this work, we propose a Deep Learning method for microstructural classification, using the example of certain microstructural constituents of low carbon steel. This novel method employs pixel-wise segmentation via a Fully Convolutional Neural Network (FCNN) accompanied by a max-voting scheme. Our system achieves 93.94% classification accuracy, drastically outperforming the state-of-the-art method's 48.89% accuracy. Beyond the strong performance of our method, this line of research offers a more robust and, above all, objective approach to the difficult task of steel quality assessment.

  13. Support vector inductive logic programming outperforms the naive Bayes classifier and inductive logic programming for the classification of bioactive chemical compounds.

    Science.gov (United States)

    Cannon, Edward O; Amini, Ata; Bender, Andreas; Sternberg, Michael J E; Muggleton, Stephen H; Glen, Robert C; Mitchell, John B O

    2007-05-01

    We investigate the classification performance of circular fingerprints in combination with the Naive Bayes Classifier (MP2D), Inductive Logic Programming (ILP) and Support Vector Inductive Logic Programming (SVILP) on a standard molecular benchmark dataset comprising 11 activity classes and about 102,000 structures. The Naive Bayes Classifier treats features independently, while ILP combines structural fragments to create new features with higher predictive power. SVILP is a recently presented method which adds a support vector machine after the usual ILP procedure. The performance of the methods is evaluated via a number of statistical measures, namely recall, specificity, precision, F-measure, Matthews Correlation Coefficient, area under the Receiver Operating Characteristic (ROC) curve and enrichment factor (EF). According to the F-measure, which takes both recall and precision into account, SVILP is the superior method for seven of the 11 classes. The results show that the Bayes Classifier gives the best recall performance for eight of the 11 targets, but has much lower precision, specificity and F-measure. The SVILP model, on the other hand, has the highest recall for only three of the 11 classes, but generally far superior specificity and precision. To evaluate the statistical significance of the SVILP superiority, we employ McNemar's test, which shows that SVILP performs significantly (p < 5%) better than both other methods for six of the 11 activity classes, and is superior with less significance for three of the remaining classes. While the Bayes Classifier was previously shown to perform very well in molecular classification studies, these results suggest that SVILP is able to extract additional knowledge from the data, thus improving classification results further.
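
    McNemar's test compares two classifiers on their paired disagreements. A minimal sketch with statsmodels; the counts in the table are made-up placeholders, not the paper's results.

```python
from statsmodels.stats.contingency_tables import mcnemar

# Hypothetical paired outcomes on one activity class:
# rows = SVILP correct/wrong, cols = Bayes correct/wrong.
table = [[850, 120],
         [ 40,  90]]
print(mcnemar(table, exact=False, correction=True))   # chi-square version
```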

  14. A multi-frequency fatigue testing method for wind turbine rotor blades

    DEFF Research Database (Denmark)

    Eder, Martin Alexander; Belloni, Federico; Tesauro, Angelo

    2017-01-01

    Rotor blades are among the most delicate components of modern wind turbines. Reliability is a crucial aspect, since blades shall ideally remain free of failure under ultra-high cycle loading conditions throughout their designated lifetime of 20–25 years. Full-scale blade tests are the most accurate means to experimentally simulate damage evolution under operating conditions, and are therefore used to demonstrate that a blade type fulfils the reliability requirements to an acceptable degree of confidence. The state-of-the-art testing method for rotor blades in industry is based on resonance... higher modes contribute more significantly due to their higher cycle count. A numerical feasibility study based on a publicly available large utility rotor blade is used to demonstrate the ability of the proposed approach to outperform the state-of-the-art testing method without compromising fatigue test...

  15. Data Analytics of Mobile Serious Games: Applying Bayesian Data Analysis Methods

    Directory of Open Access Journals (Sweden)

    Heide Lukosch

    2018-03-01

    Traditional teaching methods in the field of resuscitation training have some limitations, while teaching the right actions in critical situations could increase the number of people saved after a cardiac arrest. For our study, we developed a mobile game to support the transfer of theoretical knowledge on resuscitation. The game was tested at three schools of further education, and data were collected from 171 players. To analyze this large data set, varying in source and quality, different types of data modeling and analysis had to be applied. This approach showed its usefulness in analyzing the large set of data from different sources. It revealed some interesting findings, such as that female players outperformed male ones, and that the game, fostering informal, self-directed learning, is as efficient as the traditional formal learning method.

  16. Spectrophotometric determination of uranium by previous extraction chromatography separation in polymetallic minerals, phosphorites and technological liquors

    International Nuclear Information System (INIS)

    Moreno Bermudez, J.; Cabrera Quevedo, C.; Alfonso Mendez, L.; Rodriguez Aguilera, M.

    1994-01-01

    The development of an analytical procedure for the spectrophotometric determination of uranium in polymetallic minerals, phosphorites and technological liquors is described. The method is based on the prior separation of interfering elements by extraction chromatography and on the spectrophotometric determination of uranium(IV) with arsenazo III in concentrated hydrochloric acid. Tributyl phosphate impregnated on polytetrafluoroethylene is used as the stationary phase and 5.5 M nitric acid as the mobile phase. The influence of matrix-component elements was studied. The developed procedure was applied to real samples, and the results were compared with those obtained by other well-established analytical methods such as gamma-spectrometry, laser fluorimetry, and spectrophotometry after uranium separation by liquid-liquid extraction and anion exchange. The reproducibility was evaluated and the detection limit established for each matrix studied. A procedure for correcting the thorium interference has been developed for samples with a Th/U3O8 ratio higher than 0.2

  17. A Bayesian estimate of the concordance correlation coefficient with skewed data.

    Science.gov (United States)

    Feng, Dai; Baumgartner, Richard; Svetnik, Vladimir

    2015-01-01

    Concordance correlation coefficient (CCC) is one of the most popular scaled indices used to evaluate agreement. Most commonly, it is used under the assumption that data are normally distributed. This assumption, however, does not apply to skewed data sets. While methods for the estimation of the CCC of skewed data sets have been introduced and studied, the Bayesian approach and its comparison with the previous methods have been lacking. In this study, we propose a Bayesian method for the estimation of the CCC of skewed data sets and compare it with the best method previously investigated. The proposed method has certain advantages: it tends to outperform the best previously studied method when the variation of the data comes mainly from the random subject effect instead of error, and it allows for greater flexibility in application by enabling incorporation of missing data, confounding covariates, and replications, which was not considered previously. The superiority of this new approach is demonstrated using simulation as well as real-life biomarker data sets from an electroencephalography clinical study. The implementation of the Bayesian method is accessible through the Comprehensive R Archive Network. Copyright © 2015 John Wiley & Sons, Ltd.
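
    For reference, the sample version of Lin's CCC that such estimators generalize can be computed in a few lines (ordinary frequentist version, not the paper's Bayesian estimator):

```python
import numpy as np

def lin_ccc(x, y):
    """Sample version of Lin's concordance correlation coefficient."""
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()                  # population (1/n) variances
    sxy = ((x - mx) * (y - my)).mean()
    return 2.0 * sxy / (vx + vy + (mx - my) ** 2)

x = np.array([10.1, 11.9, 14.2, 15.8, 18.1])
y = np.array([10.4, 12.2, 13.9, 16.3, 17.9])
print(lin_ccc(x, y))    # close to 1 => strong agreement
```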

  18. Relation between financial market structure and the real economy: comparison between clustering methods.

    Science.gov (United States)

    Musmeci, Nicoló; Aste, Tomaso; Di Matteo, T

    2015-01-01

    We quantify the amount of information filtered by different hierarchical clustering methods on correlations between stock returns, comparing the clustering structure with the underlying industrial activity classification. We apply, for the first time to financial data, a novel hierarchical clustering approach, the Directed Bubble Hierarchical Tree, and we compare it with other methods including the Linkage and k-medoids. By taking the industrial sector classification of stocks as a benchmark partition, we evaluate how the different methods retrieve this classification. The results show that the Directed Bubble Hierarchical Tree can outperform other methods, being able to retrieve more information with fewer clusters. Moreover, we show that the economic information is hidden at different levels of the hierarchical structures depending on the clustering method. The dynamical analysis on a rolling window also reveals that the different methods show different degrees of sensitivity to events affecting financial markets, like crises. These results can be of interest for all the applications of clustering methods to portfolio optimization and risk hedging [corrected].

  20. A novel collaborative representation and SCAD based classification method for fibrosis and inflammatory activity analysis of chronic hepatitis C

    Science.gov (United States)

    Cai, Jiaxin; Chen, Tingting; Li, Yan; Zhu, Nenghui; Qiu, Xuan

    2018-03-01

    In order to analyse the fibrosis stage and inflammatory activity grade of chronic hepatitis C, a novel classification method based on collaborative representation (CR) with a smoothly clipped absolute deviation (SCAD) penalty term, called the CR-SCAD classifier, is proposed for pattern recognition. After that, an auto-grading system based on the CR-SCAD classifier is introduced for the prediction of fibrosis stage and inflammatory activity grade of chronic hepatitis C. The proposed method has been tested on 123 clinical cases of chronic hepatitis C based on serological indexes. Experimental results show that the proposed method outperforms the state-of-the-art baselines for the classification of fibrosis stage and inflammatory activity grade of chronic hepatitis C.
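
    A sketch of the collaborative-representation part of such a classifier. For brevity it uses a plain ridge (l2) penalty in place of SCAD, which would require an iterative solver; dictionary columns are training samples, and classification is by class-wise reconstruction residual.

```python
import numpy as np

def crc_classify(D, labels, y, lam=0.01):
    """Code y over the whole training dictionary D (columns = samples), then
    assign the class whose columns best reconstruct y. Ridge penalty stands
    in for SCAD here."""
    n = D.shape[1]
    P = np.linalg.solve(D.T @ D + lam * np.eye(n), D.T)   # ridge coding operator
    alpha = P @ y
    classes = np.unique(labels)
    residuals = [np.linalg.norm(y - D[:, labels == c] @ alpha[labels == c])
                 for c in classes]
    return classes[int(np.argmin(residuals))]
```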

  1. Research on the range side lobe suppression method for modulated stepped frequency radar signals

    Science.gov (United States)

    Liu, Yinkai; Shan, Tao; Feng, Yuan

    2018-05-01

    The magnitude of the time-domain range sidelobes of modulated stepped frequency radar affects the imaging quality of inverse synthetic aperture radar (ISAR). In this paper, the cause of high sidelobes in modulated stepped frequency radar imaging under real conditions is analyzed first. Then, chaos particle swarm optimization (CPSO) is used to select the amplitude and phase compensation factors according to a minimum-sidelobe criterion. Finally, the compensated one-dimensional range images are obtained. Experimental results show that the amplitude-phase compensation method based on the CPSO algorithm can effectively reduce the peak sidelobe level of one-dimensional range images, outperforming common sidelobe suppression methods and preventing weak scattering points from being masked by the high sidelobes of strong scattering points.
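
    A sketch of phase-only compensation chosen by a plain particle swarm optimizer under a minimum peak-sidelobe criterion. The signal model is a toy stand-in, amplitude compensation is omitted, and the chaotic (logistic-map) initialization that distinguishes CPSO from standard PSO is not reproduced.

```python
import numpy as np

def peak_sidelobe_db(phases, s):
    """Peak sidelobe (dB) of the range profile after phase compensation (toy model)."""
    prof = np.abs(np.fft.ifft(s * np.exp(1j * phases)))
    main = int(np.argmax(prof))
    return 20 * np.log10(np.delete(prof, main).max() / prof[main])

def pso_minimize(f, dim, n=30, iters=100, seed=0):
    rng = np.random.default_rng(seed)
    x = rng.uniform(-np.pi, np.pi, (n, dim))
    v = np.zeros((n, dim))
    p, pf = x.copy(), np.array([f(xi) for xi in x])     # personal bests
    g = p[int(np.argmin(pf))]                           # global best
    for _ in range(iters):
        r1, r2 = rng.random((2, n, dim))
        v = 0.7 * v + 1.5 * r1 * (p - x) + 1.5 * r2 * (g - x)
        x = x + v
        fx = np.array([f(xi) for xi in x])
        better = fx < pf
        p[better], pf[better] = x[better], fx[better]
        g = p[int(np.argmin(pf))]
    return g

s = np.exp(1j * 0.05 * np.arange(64) ** 2)    # hypothetical distorted spectrum
best = pso_minimize(lambda ph: peak_sidelobe_db(ph, s), dim=64)
```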

  2. A study of active learning methods for named entity recognition in clinical text.

    Science.gov (United States)

    Chen, Yukun; Lasko, Thomas A; Mei, Qiaozhu; Denny, Joshua C; Xu, Hua

    2015-12-01

    Named entity recognition (NER), a sequential labeling task, is one of the fundamental tasks for building clinical natural language processing (NLP) systems. Machine learning (ML) based approaches can achieve good performance, but they often require large amounts of annotated samples, which are expensive to build due to the requirement of domain experts in annotation. Active learning (AL), a sample selection approach integrated with supervised ML, aims to minimize the annotation cost while maximizing the performance of ML-based models. In this study, our goal was to develop and evaluate both existing and new AL methods for a clinical NER task identifying concepts of medical problems, treatments, and lab tests in clinical notes. Using the annotated NER corpus from the 2010 i2b2/VA NLP challenge, which contained 349 clinical documents with 20,423 unique sentences, we simulated AL experiments using a number of existing and novel algorithms in three categories: uncertainty-based, diversity-based, and baseline sampling strategies. They were compared with passive learning, which uses random sampling. Learning curves plotting the performance of the NER model against the estimated annotation cost (based on the number of sentences or words in the training set) were generated to evaluate the different active learning methods and passive learning, and the area under the learning curve (ALC) score was computed. Based on the learning curves of F-measure vs. number of sentences, uncertainty sampling algorithms outperformed all other methods in ALC. Most diversity-based methods also performed better than random sampling in ALC. To achieve an F-measure of 0.80, the best uncertainty sampling method could save 66% of the annotated sentences compared to random sampling. For the learning curves of F-measure vs. number of words, uncertainty sampling methods again outperformed all other methods in ALC. To achieve 0.80 in F-measure, in comparison to random...
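
    The uncertainty-based family can be sketched with a least-confidence query step; a logistic regression stands in for the sequential (CRF-style) NER model actually used in such studies.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def least_confidence_query(X_pool, X_labeled, y_labeled, batch=10):
    """One active-learning step: train, then pick the pool items whose top
    predicted class has the lowest probability (least confidence)."""
    clf = LogisticRegression(max_iter=1000).fit(X_labeled, y_labeled)
    confidence = clf.predict_proba(X_pool).max(axis=1)
    return np.argsort(confidence)[:batch]     # indices to send for annotation
```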

  3. Space-partition method for the variance-based sensitivity analysis: Optimal partition scheme and comparative study

    International Nuclear Information System (INIS)

    Zhai, Qingqing; Yang, Jun; Zhao, Yu

    2014-01-01

    Variance-based sensitivity analysis has been widely studied and has asserted itself among practitioners. Monte Carlo simulation methods are well developed for the calculation of variance-based sensitivity indices, but they do not make full use of each model run. Recently, several works proposed a scatter-plot partitioning method to estimate the variance-based sensitivity indices from given data, where a single bunch of samples is sufficient to estimate all the sensitivity indices. This paper focuses on the space-partition method for the estimation of variance-based sensitivity indices, and its convergence and other performance characteristics are investigated. Since the method heavily depends on the partition scheme, the influence of the partition scheme is discussed and an optimal partition scheme is proposed, based on minimizing the estimator's variance. A decomposition and integration procedure is proposed to improve the estimation quality of higher order sensitivity indices. The proposed space-partition method is compared with the more traditional method, and test cases show that it outperforms the traditional one
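
    The scatter-plot partitioning idea for a first-order index can be sketched directly: partition the axis of one input, and compare the variance of the per-bin conditional means of the output with the total variance. Equal-count quantile bins and a continuous input are assumed here.

```python
import numpy as np

def first_order_index(x, y, n_bins=20):
    """Estimate S_i = Var(E[Y|X_i]) / Var(Y) by partitioning the X_i axis."""
    edges = np.quantile(x, np.linspace(0.0, 1.0, n_bins + 1))
    bins = np.clip(np.digitize(x, edges[1:-1]), 0, n_bins - 1)
    y_mean = y.mean()
    var_cond = sum((bins == b).mean() * (y[bins == b].mean() - y_mean) ** 2
                   for b in range(n_bins))
    return var_cond / y.var()

rng = np.random.default_rng(0)
x1, x2 = rng.uniform(size=(2, 10_000))
y = x1 + 0.2 * x2                       # x1 should dominate
print(first_order_index(x1, y), first_order_index(x2, y))
```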

  4. Incidence of Acneform Lesions in Previously Chemically Damaged Persons-2004

    Directory of Open Access Journals (Sweden)

    N Dabiri

    2008-04-01

    Introduction & Objective: Chemical gas weapons, especially nitrogen mustard, which were used in the Iraq-Iran war against Iranian troops, have several harmful effects on the skin. Some other chemical agents can also cause acneform lesions on the skin. The purpose of this study was to compare the incidence of acneform lesions in previously chemically injured soldiers and non-injured persons. Materials & Methods: In this descriptive and analytical study, 180 chemically injured soldiers, who had been referred to a dermatology clinic between 2000 and 2004, and forty non-injured people were chosen randomly and examined for acneform lesions. SPSS software was used for statistical analysis of the data. Results: The mean age of the experimental group was 37.5 ± 5.2 and that of the control group was 38.7 ± 5.9 years. The mean percentage of chemical damage in cases was 31 percent and the time since the chemical injury was 15.2 ± 1.1 years. Ninety-seven (53.9 percent) of the subjects and 19 (47.5 percent) of the control group had some degree of acne. No significant difference was found in incidence, degree of lesions, site of lesions, or age between the two groups. No significant correlation was noted between percentage of chemical damage and incidence or degree of lesions in the case group. Conclusion: The incidence of acneform lesions among previously chemically injured people was not higher than in normal cases.

  5. Horizontal and Vertical Rule Bases Method in Fuzzy Controllers

    Directory of Open Access Journals (Sweden)

    Sadegh Aminifar

    2013-01-01

    The concept of horizontal and vertical rule bases is introduced. Using this method enables designers to first capture the main behaviors of a system and describe them with coarse approximations. The rules that describe the system in this first stage are called the horizontal rule base. In the second stage, the designer modulates the obtained control surface by describing the changes needed on the first surface to handle the real behaviors of the system; the rules used in this second stage are called the vertical rule base. The horizontal and vertical rule bases method greatly eases the extraction of the optimum control surface, using far fewer rules than traditional fuzzy systems. This research involves the control of a highly nonlinear system that is difficult to model with classical methods. As a case study testing the proposed method under real conditions, the designed controller is applied to a steaming room with uncertain data and variable parameters. A comparison between PID and traditional fuzzy counterparts and our proposed system shows that our system outperforms both in terms of the number of valve switchings and surface following. The evaluations were done both with model simulation and a DSP implementation.

  6. 75 FR 76056 - FEDERAL REGISTER CITATION OF PREVIOUS ANNOUNCEMENT:

    Science.gov (United States)

    2010-12-07

    ... SECURITIES AND EXCHANGE COMMISSION Sunshine Act Meeting FEDERAL REGISTER CITATION OF PREVIOUS ANNOUNCEMENT: STATUS: Closed meeting. PLACE: 100 F Street, NE., Washington, DC. DATE AND TIME OF PREVIOUSLY ANNOUNCED MEETING: Thursday, December 9, 2010 at 2 p.m. CHANGE IN THE MEETING: Time change. The closed...

  7. Methods for determining unimpeded aircraft taxiing time and evaluating airport taxiing performance

    Directory of Open Access Journals (Sweden)

    Yu Zhang

    2017-04-01

    The objective of this study is to improve the methods of determining unimpeded (nominal) taxiing time, which is the reference time used for estimating taxiing delay, a widely accepted performance indicator of airport surface movement. After reviewing existing methods used widely by different air navigation service providers (ANSPs), new methods relying on computer software, statistical tools, and econometric regression models are proposed. Regression models are highly recommended because they require less detailed data and can serve the needs of general performance analysis of airport surface operations. The proposed econometric model outperforms existing ones by introducing more explanatory variables, notably by taking aircraft passing and over-passing into account in the queue length calculation and by including runway configuration, ground delay programs, and weather factors. The length of the aircraft queue in the taxiway system and the interaction between queues are major contributors to long taxi-out times. The proposed method provides a consistent and more accurate way of calculating taxiing delay, and it can be used for ATM-related performance analysis and international comparison.

  8. Urinary incontinence and vaginal squeeze pressure two years post-cesarean delivery in primiparous women with previous gestational diabetes mellitus

    OpenAIRE

    Barbosa, Angélica Mércia Pascon; Dias, Adriano; Marini, Gabriela; Calderon, Iracema Mattos Paranhos; Witkin, Steven; Rudge, Marilza Vieira Cunha

    2011-01-01

    OBJECTIVE: To assess the prevalence of urinary incontinence and associated vaginal squeeze pressure in primiparous women with and without previous gestational diabetes mellitus two years post-cesarean delivery. METHODS: Primiparous women who delivered by cesarean two years previously were interviewed about the delivery and the occurrence of incontinence. Incontinence was reported by the women and vaginal pressure evaluated by a Perina perineometer. Sixty-three women with gestational diabetes ...

  9. Implant breast reconstruction after salvage mastectomy in previously irradiated patients.

    Science.gov (United States)

    Persichetti, Paolo; Cagli, Barbara; Simone, Pierfranco; Cogliandro, Annalisa; Fortunato, Lucio; Altomare, Vittorio; Trodella, Lucio

    2009-04-01

    The most common surgical approach in case of local tumor recurrence after quadrantectomy and radiotherapy is salvage mastectomy. Breast reconstruction is the subsequent phase of treatment, and the plastic surgeon has to operate on previously irradiated and manipulated tissues. The medical literature highlights that breast reconstruction with tissue expanders is generally not considered a pursuable option, previous radiotherapy being regarded as a contraindication. The purpose of this retrospective study is to evaluate the influence of previous radiotherapy on two-stage breast reconstruction (tissue expander/implant). Only patients with analogous timing of radiation therapy and the same demolitive and reconstructive procedures were recruited. The results of this study show that, after salvage mastectomy in previously irradiated patients, implant reconstruction is still possible. Further comparative studies are, of course, advisable to draw any conclusion on the possibility of performing implant reconstruction in previously irradiated patients.

  10. A MODIFIED EMBEDDED ZERO-TREE WAVELET METHOD FOR MEDICAL IMAGE COMPRESSION

    Directory of Open Access Journals (Sweden)

    T. Celine Therese Jenny

    2010-11-01

    The Embedded Zero-tree Wavelet (EZW) is a lossy compression method that allows for progressive transmission of a compressed image. By exploiting the natural zero-trees found in a wavelet-decomposed image, the EZW algorithm is able to encode large portions of the insignificant regions of a still image with a minimal number of bits. The upshot of this encoding is an algorithm that is able to achieve relatively high peak signal-to-noise ratios (PSNR) at high compression levels. Vector Quantization (VQ) can be performed as a post-processing step to reduce the coded file size; it reduces the redundancy of the image data so that the data can be stored or transmitted in an efficient form. Experimental results demonstrate that the proposed method outperforms several well-known lossless image compression techniques for still images that contain 256 colors or fewer.

  11. Regional frequency analysis of extreme rainfalls using partial L moments method

    Science.gov (United States)

    Zakaria, Zahrahtul Amani; Shabri, Ani

    2013-07-01

    An approach based on regional frequency analysis using L moments and LH moments is revisited in this study. Subsequently, an alternative regional frequency analysis using the partial L moments (PL moments) method is employed, and a new relationship for homogeneity analysis is developed. The results were then compared with those obtained using the methods of L moments and LH moments of order two. The Selangor catchment, consisting of 37 sites and located on the west coast of Peninsular Malaysia, is chosen as a case study. PL moments for the generalized extreme value (GEV), generalized logistic (GLO), and generalized Pareto distributions were derived and used to develop the regional frequency analysis procedure. The PL moment ratio diagram and the Z test were employed to determine the best-fit distribution. Comparison between the three approaches showed that the GLO and GEV distributions are suitable for representing the statistical properties of extreme rainfall in Selangor. Monte Carlo simulation used for performance evaluation shows that the method of PL moments outperforms the L and LH moments methods for the estimation of large return period events.
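
    For background, the first two sample L-moments can be computed from probability-weighted moments as below; PL moments apply the same machinery to a sample whose smallest observations are censored, which is what gives them their emphasis on large events. This is a standard textbook version, not the paper's code.

```python
import numpy as np

def sample_l_moments(x):
    """First two sample L-moments via probability-weighted moments."""
    x = np.sort(np.asarray(x, dtype=float))
    n = len(x)
    b0 = x.mean()
    b1 = np.sum(np.arange(n) / (n - 1) * x) / n
    return b0, 2.0 * b1 - b0       # l1 (location), l2 (scale)
```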

  12. Missing value imputation in DNA microarrays based on conjugate gradient method.

    Science.gov (United States)

    Dorri, Fatemeh; Azmi, Paeiz; Dorri, Faezeh

    2012-02-01

    Analysis of gene expression profiles needs a complete matrix of gene array values; consequently, imputation methods have been suggested. In this paper, an algorithm based on the conjugate gradient (CG) method is proposed to estimate missing values. The k-nearest neighbors of the missing entry are first selected based on the absolute values of their Pearson correlation coefficients. Then a subset of genes among the k-nearest neighbors is labeled as the most similar ones. The CG algorithm, with this subset as its input, is then used to estimate the missing values. Our proposed CG-based algorithm (CGimpute) is evaluated on different data sets. The results are compared with the sequential local least squares (SLLSimpute), Bayesian principal component analysis (BPCAimpute), local least squares imputation (LLSimpute), iterated local least squares imputation (ILLSimpute) and adaptive k-nearest neighbors imputation (KNNKimpute) methods. The average normalized root mean squared error (NRMSE) and relative NRMSE on different data sets with various missing rates show that CGimpute outperforms the other methods. Copyright © 2011 Elsevier Ltd. All rights reserved.
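
    A sketch of the regress-on-neighbors idea, with the least-squares weights obtained by conjugate gradients on the normal equations. The neighbor selection and any details of the paper's CG setup are not reproduced; the neighbor rows are assumed to have no missing values in the observed columns.

```python
import numpy as np
from scipy.sparse.linalg import cg

def impute_with_cg(target, neighbors):
    """Predict the missing entries of `target` from k similar genes (rows of
    `neighbors`) by solving the normal equations with conjugate gradients."""
    obs = ~np.isnan(target)
    A = neighbors[:, obs].T            # observed columns, one row per array column
    b = target[obs]
    w, info = cg(A.T @ A, A.T @ b)     # A^T A is symmetric positive semidefinite
    if info != 0:
        raise RuntimeError("CG did not converge")
    return neighbors[:, ~obs].T @ w    # estimates for the missing columns
```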

  13. View-invariant gait recognition method by three-dimensional convolutional neural network

    Science.gov (United States)

    Xing, Weiwei; Li, Ying; Zhang, Shunli

    2018-01-01

    Gait, as an important biometric feature, can identify a human at a long distance. View change is one of the most challenging factors for gait recognition. To address cross-view issues in gait recognition, we propose a view-invariant gait recognition method based on a three-dimensional (3-D) convolutional neural network. First, the 3-D convolutional neural network (3DCNN) is introduced to learn view-invariant features, capturing spatial and temporal information simultaneously on normalized silhouette sequences. Second, a network training method based on cross-domain transfer learning is proposed to address the problem of limited gait training samples: we choose C3D as the basic model, pretrained on Sports-1M, and then fine-tune the C3D model to adapt it to gait recognition. In the recognition stage, we use the fine-tuned model to extract gait features and use the Euclidean distance to measure the similarity of gait sequences. Extensive experiments carried out on the CASIA-B dataset demonstrate that our method outperforms many other methods.

  14. Previously unidentified changes in renal cell carcinoma gene expression identified by parametric analysis of microarray data

    International Nuclear Information System (INIS)

    Lenburg, Marc E; Liou, Louis S; Gerry, Norman P; Frampton, Garrett M; Cohen, Herbert T; Christman, Michael F

    2003-01-01

    Renal cell carcinoma is a common malignancy that often presents as metastatic disease, for which there are no effective treatments. To gain insights into the mechanism of renal cell carcinogenesis, a number of genome-wide expression profiling studies have been performed. Surprisingly, there is very poor agreement among these studies as to which genes are differentially regulated. To better understand this lack of agreement, we profiled renal cell tumor gene expression using genome-wide microarrays (45,000 probe sets) and compared our analysis to previous microarray studies. We hybridized total RNA isolated from renal cell tumors and adjacent normal tissue to Affymetrix U133A and U133B arrays. We removed samples with technical defects and removed probe sets that failed to exhibit sequence-specific hybridization in any of the samples. We detected differential gene expression in the resulting dataset with parametric methods and identified keywords that are overrepresented in the differentially expressed genes with the Fisher exact test. We identify 1,234 genes that are more than three-fold changed in renal tumors by t-test, 800 of which have not been previously reported to be altered in renal cell tumors. Of the only 37 genes that have been identified as differentially expressed in three or more of five previous microarray studies of renal tumor gene expression, our analysis finds 33 (89%). A key to the sensitivity and power of our analysis is filtering out defective samples and genes that are not reliably detected. The widespread use of sample-wise voting schemes for detecting differential expression that do not control for false positives likely accounts for the poor overlap among previous studies. Among the many genes we identified using parametric methods that were not previously reported as differentially expressed in renal cell tumors are several oncogenes and tumor suppressor genes that likely play important roles in renal cell...

  15. 28 CFR 10.5 - Incorporation of papers previously filed.

    Science.gov (United States)

    2010-07-01

    CARRYING ON ACTIVITIES WITHIN THE UNITED STATES, Registration Statement. § 10.5 Incorporation of papers previously filed. Papers and documents already filed with the Attorney General pursuant to the said act and...

  16. Denoising of Microscopy Images: A Review of the State-of-the-Art, and a New Sparsity-Based Method.

    Science.gov (United States)

    Meiniel, William; Olivo-Marin, Jean-Christophe; Angelini, Elsa D

    2018-08-01

    This paper reviews the state of the art in denoising methods for biological microscopy images and introduces a new and original sparsity-based algorithm. The proposed method combines total variation (TV) spatial regularization, enhancement of low-frequency information, and aggregation of sparse estimators, and is able to handle simple and complex types of noise (Gaussian, Poisson, and mixed) without any a priori model and with a single set of parameter values. An extended comparison is also presented that evaluates the denoising performance of thirteen state-of-the-art denoising methods (including ours) specifically designed to handle the different types of noise found in bioimaging. Quantitative and qualitative results on synthetic and real images show that the proposed method outperforms the others in the majority of the tested scenarios.
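
    Of the ingredients listed, the TV regularization step is easy to illustrate with scikit-image; the toy image and noise level below are placeholders, and this shows only that one component, not the paper's full aggregation scheme.

```python
import numpy as np
from skimage.restoration import denoise_tv_chambolle

rng = np.random.default_rng(1)
clean = np.zeros((64, 64))
clean[16:48, 16:48] = 1.0                       # toy fluorescent "object"
noisy = rng.poisson(clean * 30) / 30.0          # Poisson-like shot noise
denoised = denoise_tv_chambolle(noisy, weight=0.1)
```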

  17. Application of Beyond Bound Decoding for High Speed Optical Communications

    DEFF Research Database (Denmark)

    Li, Bomin; Larsen, Knud J.; Vegas Olmos, Juan José

    2013-01-01

    This paper studies the application of the beyond bound decoding method for high speed optical communications. This hard-decision decoding method outperforms the traditional minimum distance decoding method, with a total net coding gain of 10.36 dB.

  18. No discrimination against previous mates in a sexually cannibalistic spider

    Science.gov (United States)

    Fromhage, Lutz; Schneider, Jutta M.

    2005-09-01

    In several animal species, females discriminate against previous mates in subsequent mating decisions, increasing the potential for multiple paternity. In spiders, female choice may take the form of selective sexual cannibalism, which has been shown to bias paternity in favor of particular males. If cannibalistic attacks function to restrict a male's paternity, females may have little interest in remating with males that have survived such an attack. We therefore studied the possibility of female discrimination against previous mates in the sexually cannibalistic Argiope bruennichi, where females almost always attack their mate at the onset of copulation. We compared mating latency and copulation duration of males that had experienced a previous copulation either with the same or with a different female, but found no evidence for discrimination against previous mates. However, males copulated for a significantly shorter time when inserting into a used, compared to a previously unused, genital pore of the female.

  19. Brain deactivation in the outperformance in bimodal tasks: an FMRI study.

    Directory of Open Access Journals (Sweden)

    Tzu-Ching Chiang

    Full Text Available While it is known that some individuals can effectively perform two tasks simultaneously, other individuals cannot. How the brain deals with performing simultaneous tasks remains unclear. In the present study, we aimed to assess which brain areas corresponded to various phenomena in task performance. Nineteen subjects were requested to sequentially perform three blocks of tasks, including two unimodal tasks and one bimodal task. The unimodal tasks measured either visual feature binding or auditory pitch comparison, while the bimodal task required performance of the two tasks simultaneously. The functional magnetic resonance imaging (fMRI) results are compatible with previous studies showing that distinct brain areas, such as the visual cortices, frontal eye field (FEF), lateral parietal lobe (BA7), and medial and inferior frontal lobe, are involved in processing of visual unimodal tasks. In addition, the temporal lobes and Brodmann area 43 (BA43) were involved in processing of auditory unimodal tasks. These results lend support to concepts of modality-specific attention. Compared to the unimodal tasks, the bimodal task required activation of additional brain areas. Furthermore, while deactivated brain areas were related to good performance in the bimodal task, these areas were not deactivated when the subject performed well in only one of the two simultaneous tasks. These results indicate that efficient information processing does not require some brain areas to be overly active; rather, specific brain areas need to be relatively deactivated to remain alert and perform well on two tasks simultaneously. These findings may also offer a neural basis for biofeedback in training courses, such as those teaching how to perform multiple tasks simultaneously.

  20. Why Did the Bear Cross the Road? Comparing the Performance of Multiple Resistance Surfaces and Connectivity Modeling Methods

    Directory of Open Access Journals (Sweden)

    Samuel A. Cushman

    2014-12-01

    Full Text Available There have been few assessments of the performance of alternative resistance surfaces, and little is known about how connectivity modeling approaches differ in their ability to predict organism movements. In this paper, we evaluate the performance of four connectivity modeling approaches applied to two resistance surfaces in predicting the locations of highway crossings by American black bears in the northern Rocky Mountains, USA. We found that a resistance surface derived directly from movement data greatly outperformed a resistance surface produced from analysis of genetic differentiation, despite their heuristic similarities. Our analysis also suggested differences in the performance of different connectivity modeling approaches. Factorial least cost paths appeared to slightly outperform other methods on the movement-derived resistance surface, but had very poor performance on the resistance surface obtained from multi-model landscape genetic analysis. Cumulative resistant kernels appeared to offer the best combination of high predictive performance and sensitivity to differences in resistance surface parameterization. Our analysis highlights that even when two resistance surfaces include the same variables and have a high spatial correlation of resistance values, they may perform very differently in predicting animal movement and population connectivity.

  1. Robust Vacuum-/Air-Dried Graphene Aerogels and Fast Recoverable Shape-Memory Hybrid Foams.

    Science.gov (United States)

    Li, Chenwei; Qiu, Ling; Zhang, Baoqing; Li, Dan; Liu, Chen-Yang

    2016-02-17

    New graphene aerogels can be fabricated by vacuum/air drying, and, because of the mechanical robustness of the graphene aerogels, shape-memory polymer/graphene hybrid foams can be fabricated by a simple infiltration-air-drying-crosslinking method. Due to the superelasticity, high strength, and good electrical conductivity of the as-prepared graphene aerogels, the shape-memory hybrid foams exhibit excellent thermally and electrically triggered shape-memory properties, outperforming previously reported shape-memory polymer foams. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  2. Efficient Data Gathering Methods in Wireless Sensor Networks Using GBTR Matrix Completion

    Directory of Open Access Journals (Sweden)

    Donghao Wang

    2016-09-01

    Full Text Available To obtain efficient data gathering methods for wireless sensor networks (WSNs), a novel graph based transform regularized (GBTR) matrix completion algorithm is proposed. The graph based transform sparsity of the sensed data is explored and is considered as a penalty term in the matrix completion problem. The proposed GBTR-ADMM algorithm utilizes the alternating direction method of multipliers (ADMM) in an iterative procedure to solve the constrained optimization problem. Since the performance of the ADMM method is sensitive to the number of constraints, the GBTR-A2DM2 algorithm is derived to accelerate the convergence of GBTR-ADMM. GBTR-A2DM2 benefits from merging two constraint conditions into one as well as from using a restart rule. The theoretical analysis shows that the proposed algorithms achieve satisfactory time complexity. Extensive simulation results verify that our proposed algorithms outperform state-of-the-art algorithms for data collection problems in WSNs with respect to recovery accuracy, convergence rate, and energy consumption.
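
    For intuition, a generic low-rank matrix completion iteration of the kind solved here can be sketched with singular-value soft-thresholding (a SoftImpute-style proximal scheme); the graph-based transform penalty and the restart rule of GBTR-ADMM/GBTR-A2DM2 are not reproduced, and the threshold tau and iteration count are illustrative.

    ```python
    # Generic sketch: complete a partially observed low-rank "sensor" matrix by
    # iterating singular-value soft-thresholding on the filled-in matrix.
    import numpy as np

    def svt(A, tau):
        """Singular-value soft-thresholding: proximal operator of the nuclear norm."""
        U, s, Vt = np.linalg.svd(A, full_matrices=False)
        return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

    def complete(M_obs, mask, tau=5.0, n_iter=200):
        """SoftImpute-style iteration: keep observed entries, shrink the rest."""
        X = np.zeros_like(M_obs)
        for _ in range(n_iter):
            X = svt(np.where(mask, M_obs, X), tau)
        return X

    rng = np.random.default_rng(0)
    M = rng.normal(size=(40, 5)) @ rng.normal(size=(5, 40))  # rank-5 ground truth
    mask = rng.random(M.shape) < 0.5                         # 50% of readings gathered
    X = complete(M * mask, mask)
    print("relative error:", np.linalg.norm(X - M) / np.linalg.norm(M))
    ```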

  3. Highly efficient parallel direct solver for solving dense complex matrix equations from method of moments

    Directory of Open Access Journals (Sweden)

    Yan Chen

    2017-03-01

    Full Text Available Based on a vectorised and cache optimised kernel, a parallel lower-upper (LU) decomposition with a novel communication-avoiding pivoting scheme is developed to solve dense complex matrix equations generated by the method of moments. Fine-grain data rearrangement and assembly instructions are adopted to reduce memory access times and improve CPU cache utilisation, which also facilitates vectorisation of the code. Through grouping processes in a binary tree, a parallel pivoting scheme is designed to optimise the communication pattern and thus reduce the solving time of the proposed solver. Two large electromagnetic radiation problems are solved on two supercomputers, respectively, and the numerical results demonstrate that the proposed method outperforms those in open source and commercial libraries.

  4. Can motto-goals outperform learning and performance goals? Influence of goal setting on performance and affect in a complex problem solving task

    Directory of Open Access Journals (Sweden)

    Miriam S. Rohe

    2016-09-01

    Full Text Available In this paper, we bring together research on complex problem solving with that of motivational psychology on goal setting. Complex problems require motivational effort because of their inherent difficulties. Goal Setting Theory has shown, for simple tasks, that high, specific performance goals lead to better performance outcomes than do-your-best goals. However, in complex tasks, learning goals have proven more effective than performance goals. Based on the Zurich Resource Model (Storch & Krause, 2014), so-called motto-goals (e.g., "I breathe happiness") should activate a person's resources through positive affect. It was found that motto-goals are effective with unpleasant duties. Therefore, we tested the hypothesis that motto-goals outperform learning and performance goals in the case of complex problems. A total of N = 123 subjects participated in the experiment. Depending on their goal condition, subjects developed a personal motto, learning, or performance goal. This goal was adapted for the computer-simulated complex scenario Tailorshop, where subjects worked as managers in a small fictional company. Contrary to expectations, there was no main effect of goal condition on management performance. As hypothesized, motto-goals led to higher positive and lower negative affect than the other two goal types. Even though positive affect decreased and negative affect increased in all three groups during Tailorshop completion, participants with motto-goals reported the lowest rates of negative affect over time. Exploratory analyses investigated the role of affect in complex problem solving via mediational analyses and the influence of goal type on perceived goal attainment.

  5. [Meningitis and white matter lesions due to Streptococcus mitis in a previously healthy child].

    Science.gov (United States)

    Yiş, Reyhan; Yüksel, Ciğdem Nükhet; Derundere, Umit; Yiş, Uluç

    2011-10-01

    Streptococcus mitis, an important member of the viridans streptococci, is found in the normal flora of the oropharynx, gastrointestinal tract, female genital tract and skin. Although it is of low pathogenicity and virulence, it may cause serious infections in immunocompromised patients. Meningitis caused by S.mitis has been described in patients with previous spinal anesthesia, neurosurgical procedures, malignancy, bacterial endocarditis with neurological complications, and alcoholism, but it is rare in previously healthy patients. In this report, a rare case of meningoencephalitis caused by S.mitis in a previously healthy child is presented. A previously healthy eight-year-old girl who presented with fever, altered state of consciousness, and headache was hospitalized in the intensive care unit with the diagnosis of meningitis. Her history revealed that she had been treated with amoxicillin-clavulanate for acute sinusitis ten days before admission. Whole blood count revealed the following: hemoglobin 13 g/dl, white blood cell count 18.6 x 10^9/L (90% neutrophils), platelet count 200 x 10^9/L; 150 leucocytes were detected on cerebrospinal fluid (CSF) examination. Protein and glucose levels of CSF were 80 mg/dl and 40 mg/dl (concomitant blood glucose 100 mg/dl), respectively. Brain magnetic resonance imaging (MRI) revealed widespread white matter lesions, and alpha-hemolytic streptococci were grown in CSF culture. The isolate was identified as S.mitis by conventional methods, and confirmed by the VITEK2 (bioMerieux, France) and API 20 STREP (bioMerieux, France) systems. The isolate was found susceptible to penicillin, erythromycin, clindamycin, tetracycline, cefotaxime, vancomycin and chloramphenicol. Regarding the etiology, echocardiography revealed neither vegetation nor valve pathology, and peripheral blood smear showed no abnormality. Immunoglobulin and complement levels were within normal limits. Ongoing inflammation in maxillary sinuses detected in

  6. Effective Diagnosis of Alzheimer's Disease by Means of Association Rules

    Science.gov (United States)

    Chaves, R.; Ramírez, J.; Górriz, J. M.; López, M.; Salas-Gonzalez, D.; Illán, I.; Segovia, F.; Padilla, P.

    In this paper we present a novel classification method of SPECT images for the early diagnosis of Alzheimer's disease (AD). The proposed method is based on Association Rules (ARs), aiming to discover interesting associations between attributes contained in the database. The system first uses voxel-as-features (VAF) and Activation Estimation (AE) to find three-dimensional activated brain regions of interest (ROIs) for each patient. These ROIs then act as inputs for mining ARs between activated blocks for controls, with a specified minimum support and minimum confidence. ARs are mined in supervised mode, using information previously extracted from the most discriminant rules to center interest on the relevant brain areas, reducing the computational requirements of the system. Finally, classification is performed depending on the number of previously mined rules verified by each subject, yielding up to 95.87% classification accuracy, thus outperforming recently developed methods for AD diagnosis.
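
    A toy version of mining ARs with a minimum support and minimum confidence over binarized activation features can be written with the mlxtend package; the block names, thresholds, and data below are invented for illustration and do not reflect the paper's SPECT features.

    ```python
    # Toy association-rule mining over hypothetical "activated block" indicators.
    import pandas as pd
    from mlxtend.frequent_patterns import apriori, association_rules

    # One row per control subject; True = the brain block was activated.
    data = pd.DataFrame(
        [[1, 1, 1, 0], [1, 1, 0, 0], [1, 1, 1, 1], [0, 1, 1, 0], [1, 1, 1, 0]],
        columns=["block_A", "block_B", "block_C", "block_D"],
    ).astype(bool)

    itemsets = apriori(data, min_support=0.6, use_colnames=True)   # min support
    rules = association_rules(itemsets, metric="confidence", min_threshold=0.8)
    print(rules[["antecedents", "consequents", "support", "confidence"]])
    ```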

  7. Efficient nonparametric and asymptotic Bayesian model selection methods for attributed graph clustering

    KAUST Repository

    Xu, Zhiqiang

    2017-02-16

    Attributed graph clustering, also known as community detection on attributed graphs, has attracted much interest recently due to the ubiquity of attributed graphs in real life. Many existing algorithms have been proposed for this problem, which are either distance based or model based. However, model selection in attributed graph clustering has not been well addressed; that is, most existing algorithms assume the cluster number to be known a priori. In this paper, we propose two efficient approaches for attributed graph clustering with automatic model selection. The first approach is a popular Bayesian nonparametric method, while the second is an asymptotic method based on a recently proposed model selection criterion, the factorized information criterion. Experimental results on both synthetic and real datasets demonstrate that our approaches for attributed graph clustering with automatic model selection significantly outperform the state-of-the-art algorithm.

  8. Efficient nonparametric and asymptotic Bayesian model selection methods for attributed graph clustering

    KAUST Repository

    Xu, Zhiqiang; Cheng, James; Xiao, Xiaokui; Fujimaki, Ryohei; Muraoka, Yusuke

    2017-01-01

    Attributed graph clustering, also known as community detection on attributed graphs, has attracted much interest recently due to the ubiquity of attributed graphs in real life. Many existing algorithms have been proposed for this problem, which are either distance based or model based. However, model selection in attributed graph clustering has not been well addressed; that is, most existing algorithms assume the cluster number to be known a priori. In this paper, we propose two efficient approaches for attributed graph clustering with automatic model selection. The first approach is a popular Bayesian nonparametric method, while the second is an asymptotic method based on a recently proposed model selection criterion, the factorized information criterion. Experimental results on both synthetic and real datasets demonstrate that our approaches for attributed graph clustering with automatic model selection significantly outperform the state-of-the-art algorithm.

  9. Use of (D, MUF) and maximum-likelihood methods for detecting falsification and diversion in data-verification problems

    International Nuclear Information System (INIS)

    Goldman, A.S.; Beedgen, R.

    1982-01-01

    The investigation of data falsification and/or diversion is of major concern in nuclear materials accounting procedures used in international safeguards. In this paper, two procedures, denoted by (D,MUF) and LR (Likelihood Ratio), are discussed and compared when testing the hypothesis that neither diversion nor falsification has taken place versus the one-sided alternative that at least one of these parameters is positive. Critical regions and detection probabilities are given for both tests. It is shown that the LR method outperforms (D,MUF) when diversion and falsification take place.
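
    As a simplified illustration: under i.i.d. Gaussian measurement errors with known variance, a one-sided likelihood-ratio test for a positive shift in a MUF sequence reduces to a z-test on the mean. The sketch below shows only that reduced case, with invented numbers; the paper's actual (D,MUF) and LR procedures are more elaborate.

    ```python
    # Toy one-sided test for a positive shift in a MUF sequence under assumed
    # i.i.d. Gaussian errors with known sigma; the LR statistic reduces to z.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(1)
    sigma = 2.0
    muf = rng.normal(loc=0.8, scale=sigma, size=12)   # 12 balance periods, true loss 0.8

    z = muf.mean() / (sigma / np.sqrt(len(muf)))      # test statistic under H0: shift = 0
    p_value = 1 - stats.norm.cdf(z)                   # one-sided alternative: shift > 0
    print(f"z = {z:.2f}, one-sided p = {p_value:.3f}")
    ```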

  10. A previous hamstring injury affects kicking mechanics in soccer players.

    Science.gov (United States)

    Navandar, Archit; Veiga, Santiago; Torres, Gonzalo; Chorro, David; Navarro, Enrique

    2018-01-10

    Although the kicking skill is influenced by limb dominance and sex, how a previous hamstring injury affects kicking has not been studied in detail. Thus, the objective of this study was to evaluate the effect of sex and limb dominance on kicking in limbs with and without a previous hamstring injury. 45 professional players (males: n=19, previously injured players=4, age=21.16 ± 2.00 years; females: n=19, previously injured players=10, age=22.15 ± 4.50 years) performed 5 kicks each with their preferred and non-preferred limb at a target 7 m away, which were recorded with a three-dimensional motion capture system. Kinematic and kinetic variables were extracted for the backswing, leg cocking, leg acceleration and follow through phases. A shorter backswing (20.20 ± 3.49% vs 25.64 ± 4.57%), and differences in knee flexion angle (58 ± 10° vs 72 ± 14°) and hip flexion velocity (8 ± 0 rad/s vs 10 ± 2 rad/s), were observed in previously injured, non-preferred limb kicks for females. A lower peak hip linear velocity (3.50 ± 0.84 m/s vs 4.10 ± 0.45 m/s) was observed in previously injured, preferred limb kicks of females. These differences occurred in the backswing and leg-cocking phases, where the hamstring muscles are most active. A variation in the functioning of the hamstring muscles, and of the gluteus maximus and iliopsoas, in the case of a previous injury could account for the differences observed in the kicking pattern. Therefore, the effects of a previous hamstring injury must be considered when designing rehabilitation programs to re-educate the kicking movement.

  11. Panel presentation: Should some type of incentive regulation replace traditional methods for LDC's?

    International Nuclear Information System (INIS)

    Richard, O.G.

    1992-01-01

    This paper discusses the problems with existing fixed-rate price regulation and argues that deregulation of both the pipeline and gas utility companies is needed to enhance competition. The paper suggests alternative methods to traditional regulation, including a financial incentive package that allows or encourages utilities to invest in more efficient energy management, to improve load factors to balance energy demands between industrial and residential users, and to reward purchases of gas supplies that outperform an agreed-upon cost index. Other incentive programs are proposed by the author, with a relatively detailed discussion of each topic.

  12. A probability-based multi-cycle sorting method for 4D-MRI: A simulation study.

    Science.gov (United States)

    Liang, Xiao; Yin, Fang-Fang; Liu, Yilin; Cai, Jing

    2016-12-01

    To develop a novel probability-based sorting method capable of generating multiple breathing cycles of 4D-MRI images, and to evaluate the performance of this new method against conventional phase-based methods in terms of image quality and tumor motion measurement. Based on previous findings that the breathing motion probability density function (PDF) of a single breathing cycle differs dramatically from the true stabilized PDF that results from many breathing cycles, it is expected that a probability-based sorting method capable of generating multiple breathing cycles of 4D images may capture breathing variation information missed by conventional single-cycle sorting methods. The overall idea is to identify a few main breathing cycles (and their corresponding weightings) that best represent the main breathing patterns of the patient, and then reconstruct a set of 4D images for each of the identified main breathing cycles. This method is implemented in three steps: (1) the breathing signal is decomposed into individual breathing cycles, each characterized by amplitude and period; (2) individual breathing cycles are grouped based on amplitude and period to determine the main breathing cycles; if a group contains more than 10% of all breathing cycles in a breathing signal, it is determined to be a main breathing pattern group and is represented by the average of the individual breathing cycles in the group; (3) for each main breathing cycle, a set of 4D images is reconstructed using a result-driven sorting method adapted from our previous study. The probability-based sorting method was first tested on 26 patients' breathing signals to evaluate its feasibility for improving the target motion PDF. The new method was subsequently tested for a sequential image acquisition scheme on the 4D digital extended cardiac torso (XCAT) phantom. Performance of the probability-based and conventional sorting methods was evaluated in terms of target volume precision and accuracy as measured
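
    Steps (1) and (2) above can be sketched as follows: split the breathing trace into peak-to-peak cycles, characterize each by amplitude and period, and keep groups containing more than 10% of cycles as main breathing patterns. The peak detector, bin widths, and toy trace are illustrative stand-ins, not the authors' implementation.

    ```python
    # Sketch of cycle decomposition and main-pattern grouping for a breathing trace.
    import numpy as np
    from scipy.signal import find_peaks

    t = np.linspace(0, 120, 6000)                      # 2 min of breathing at 50 Hz
    signal = np.sin(2 * np.pi * t / 4.0) * (1 + 0.2 * np.sin(0.05 * t))

    peaks, _ = find_peaks(signal)                      # end-inhale points
    cycles = []
    for a, b in zip(peaks[:-1], peaks[1:]):            # one cycle per peak-to-peak span
        seg = signal[a:b]
        cycles.append((seg.max() - seg.min(), t[b] - t[a]))   # (amplitude, period)

    cycles = np.array(cycles)
    amp_bin = np.round(cycles[:, 0] / 0.2)             # coarse amplitude bins
    per_bin = np.round(cycles[:, 1] / 0.5)             # coarse period bins
    keys, counts = np.unique(np.stack([amp_bin, per_bin], 1), axis=0, return_counts=True)
    main = keys[counts > 0.10 * len(cycles)]           # groups holding >10% of cycles
    print(f"{len(cycles)} cycles, {len(main)} main breathing pattern(s)")
    ```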

  13. Comparing DIF methods for data with dual dependency

    Directory of Open Access Journals (Sweden)

    Ying Jin

    2016-09-01

    Full Text Available Abstract Background The current study compared four differential item functioning (DIF) methods in a simulation study to examine their performance in accounting for dual dependency (i.e., person and item clustering effects simultaneously), which has not been sufficiently studied in the current DIF literature. The four methods compared are logistic regression, accounting for neither person nor item clustering effects; hierarchical logistic regression, accounting for the person clustering effect; the testlet model, accounting for the item clustering effect; and the multilevel testlet model, accounting for both person and item clustering effects. The secondary goal of the current study was to evaluate the trade-off between simple models and complex models in the accuracy of DIF detection. An empirical example analyzing the 2011 TIMSS Mathematics data was also included to demonstrate the differential performance of the four DIF methods. A number of DIF analyses have been done on the TIMSS data, and rarely have these analyses accounted for the dual dependence of the data. Results Results indicated the complex models did not outperform simple models under certain conditions, especially when DIF parameters were considered in addition to significance tests. Conclusions Results of the current study could provide supporting evidence for applied researchers in selecting the appropriate DIF methods under various conditions.
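
    The simplest of the four methods, logistic-regression DIF that ignores both clustering effects, can be sketched as follows: regress the item response on a matching variable plus a group indicator, and flag uniform DIF when the group coefficient is significant. The data, effect size, and matching proxy are synthetic.

    ```python
    # Minimal logistic-regression DIF check on synthetic data (no clustering).
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(0)
    n = 500
    group = rng.integers(0, 2, n)                     # focal vs reference group
    ability = rng.normal(size=n)
    # Item response with a built-in group effect (uniform DIF).
    p = 1 / (1 + np.exp(-(ability + 0.6 * group)))
    y = (rng.random(n) < p).astype(float)
    total_score = ability + rng.normal(scale=0.5, size=n)   # proxy matching variable

    X = sm.add_constant(np.column_stack([total_score, group]))
    fit = sm.Logit(y, X).fit(disp=0)
    print("group coefficient:", fit.params[2], "p =", fit.pvalues[2])
    ```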

  14. Subsequent childbirth after a previous traumatic birth.

    Science.gov (United States)

    Beck, Cheryl Tatano; Watson, Sue

    2010-01-01

    Nine percent of new mothers in the United States who participated in the Listening to Mothers II Postpartum Survey screened positive for meeting the Diagnostic and Statistical Manual of Mental Disorders, Fourth Edition criteria for posttraumatic stress disorder after childbirth. Women who have had a traumatic birth experience report fewer subsequent children and a longer length of time before their second baby. Childbirth-related posttraumatic stress disorder impacts couples' physical relationship, communication, conflict, emotions, and bonding with their children. The purpose of this study was to describe the meaning of women's experiences of a subsequent childbirth after a previous traumatic birth. Phenomenology was the research design used. An international sample of 35 women participated in this Internet study. Women were asked, "Please describe in as much detail as you can remember your subsequent pregnancy, labor, and delivery following your previous traumatic birth." Colaizzi's phenomenological data analysis approach was used to analyze the stories of the 35 women. Data analysis yielded four themes: (a) riding the turbulent wave of panic during pregnancy; (b) strategizing: attempts to reclaim their body and complete the journey to motherhood; (c) bringing reverence to the birthing process and empowering women; and (d) still elusive: the longed-for healing birth experience. Subsequent childbirth after a previous birth trauma has the potential to either heal or retraumatize women. During pregnancy, women need permission and encouragement to grieve their prior traumatic births to help remove the burden of their invisible pain.

  15. High-resolution melting (HRM) re-analysis of a polyposis patients cohort reveals previously undetected heterozygous and mosaic APC gene mutations.

    Science.gov (United States)

    Out, Astrid A; van Minderhout, Ivonne J H M; van der Stoep, Nienke; van Bommel, Lysette S R; Kluijt, Irma; Aalfs, Cora; Voorendt, Marsha; Vossen, Rolf H A M; Nielsen, Maartje; Vasen, Hans F A; Morreau, Hans; Devilee, Peter; Tops, Carli M J; Hes, Frederik J

    2015-06-01

    Familial adenomatous polyposis is most frequently caused by pathogenic variants in either the APC gene or the MUTYH gene. The detection rate of pathogenic variants depends on the severity of the phenotype and the sensitivity of the screening method, including its sensitivity for mosaic variants. For 171 patients with multiple colorectal polyps and no previously detectable pathogenic variant, APC was reanalyzed in leukocyte DNA by one uniform technique: high-resolution melting (HRM) analysis. Serial dilution of heterozygous DNA resulted in a lowest detectable allelic fraction of 6% for the majority of variants. HRM analysis and subsequent sequencing detected pathogenic fully heterozygous APC variants in 10 (6%) of the patients and pathogenic mosaic variants in 2 (1%). All these variants had previously been missed by various conventional scanning methods. In parallel, HRM APC scanning was applied to DNA isolated from polyp tissue of two additional patients with apparently sporadic polyposis and no detectable pathogenic APC variant in leukocyte DNA. In both patients a pathogenic mosaic APC variant was present in multiple polyps. The detection of pathogenic APC variants in 7% of the patients, including mosaics, illustrates the usefulness of a complete APC gene reanalysis of previously tested patients by a supplementary scanning method. HRM is a sensitive and fast pre-screening method for reliable detection of heterozygous and mosaic variants, and it can be applied to leukocyte- and polyp-derived DNA.

  16. Conhecimento, atitude e prática sobre métodos anticoncepcionais entre adolescentes gestantes Knowledge, attitudes, and practices on previous use of contraceptive methods among pregnant teenagers

    Directory of Open Access Journals (Sweden)

    Márcio Alves Vieira Belo

    2004-08-01

    OBJECTIVE: To describe the knowledge, attitudes and practices related to contraceptive methods previously used among pregnant teenagers, and to outline some of their sociodemographic characteristics and sexual practices. METHODS: An observational study associated with a KAP (Knowledge, Attitudes, and Practices) survey was carried out among 156 pregnant teenagers aged 19 years or younger. A structured questionnaire was applied before their first prenatal visit, from October 1999 to August 2000. Univariate and bivariate analyses were performed using Pearson's and Yates' chi-square tests and logistic regression. RESULTS: The adolescents had an average age of 16.1 years and most were in their first pregnancy (78.8%). Average age at menarche was 12.2 years, and first sexual intercourse occurred at an average age of 14.5 years. Condoms (99.4%) and oral contraceptives (98%) were the most commonly known contraceptive methods. Of all, 67.3% were not using any contraceptive method before getting pregnant. The main reason reported for not using any contraceptive method was wanting to get pregnant (24.5%). Older adolescents, those reporting religious beliefs, and those of higher socioeconomic status had better knowledge of contraceptive methods. Teenagers who had had previous pregnancies more often reported use of contraceptive methods before getting pregnant. CONCLUSIONS: The pregnant teenagers showed adequate knowledge of contraceptive methods and agreed with using them throughout their teenage years. Religion, age group, and socioeconomic status were directly related to their knowledge of contraceptive methods, and multiple pregnancies brought more awareness of them. Of all, 54% had used a contraceptive method at first sexual intercourse, but use decreased over time, and the studied teenagers became pregnant shortly after their first intercourse.

  17. Max-out-in pivot rule with Dantzig's safeguarding rule for the simplex method

    International Nuclear Information System (INIS)

    Tipawanna, Monsicha; Sinapiromsaran, Krung

    2014-01-01

    The simplex method is used to solve linear programming problems by improving the current basic feasible solution. It uses a pivot rule to guide the search in the feasible region; the pivot rule selects the entering index in the simplex method. Many pivot rules have been presented, but none shows consistently superior performance over the others, so this remains an active research area in linear programming. In this research, we present the max-out-in pivot rule with Dantzig's safeguarding for the simplex method. This rule is based on the maximum improvement of the objective value at the current basic feasible point, similar to Dantzig's rule. We illustrate on the Klee-Minty problems that our rule outperforms Dantzig's rule in the number of iterations required to solve linear programming problems.
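
    For reference, a compact tableau simplex using Dantzig's classic rule (enter the column with the most negative reduced cost) is sketched below; this is the baseline rule the paper compares against, not the proposed max-out-in rule, and it assumes a bounded problem of the form max c.x subject to Ax <= b with b >= 0.

    ```python
    # Tableau simplex with Dantzig's entering rule and the minimum-ratio test.
    import numpy as np

    def simplex_dantzig(c, A, b):
        m, n = A.shape
        # Tableau with slack variables; last row holds reduced costs (-c).
        T = np.zeros((m + 1, n + m + 1))
        T[:m, :n], T[:m, n:n + m], T[:m, -1] = A, np.eye(m), b
        T[-1, :n] = -c
        basis = list(range(n, n + m))
        iters = 0
        while True:
            j = int(np.argmin(T[-1, :-1]))             # Dantzig: most negative cost
            if T[-1, j] >= -1e-9:
                break                                   # optimal
            col = T[:m, j]
            if not np.any(col > 1e-9):
                raise ValueError("problem is unbounded")
            ratios = np.where(col > 1e-9, T[:m, -1] / np.where(col > 1e-9, col, 1), np.inf)
            i = int(np.argmin(ratios))                  # minimum-ratio leaving row
            T[i] /= T[i, j]
            for r in range(m + 1):
                if r != i:
                    T[r] -= T[r, j] * T[i]
            basis[i] = j
            iters += 1
        x = np.zeros(n + m)
        x[basis] = T[:m, -1]
        return x[:n], T[-1, -1], iters

    # max 3x1 + 2x2  s.t.  x1 + x2 <= 4,  x1 <= 2  ->  optimum (2, 2), value 10.
    x, z, iters = simplex_dantzig(np.array([3.0, 2.0]),
                                  np.array([[1.0, 1.0], [1.0, 0.0]]),
                                  np.array([4.0, 2.0]))
    print("x* =", x, "objective =", z, "iterations =", iters)
    ```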

  18. Urethrotomy has a much lower success rate than previously reported.

    Science.gov (United States)

    Santucci, Richard; Eisenberg, Lauren

    2010-05-01

    We evaluated the success rate of direct vision internal urethrotomy as a treatment for simple male urethral strictures. A retrospective chart review was performed on 136 patients who underwent urethrotomy from January 1994 through March 2009. The Kaplan-Meier method was used to analyze stricture-free probability after the first, second, third, fourth and fifth urethrotomy. Patients with complex strictures (36) were excluded from the study for reasons including previous urethroplasty, neophallus or previous radiation, and 24 patients were lost to followup. Data were available for 76 patients. The stricture-free rate after the first urethrotomy was 8% with a median time to recurrence of 7 months. For the second urethrotomy stricture-free rate was 6% with a median time to recurrence of 9 months. For the third urethrotomy stricture-free rate was 9% with a median time to recurrence of 3 months. For procedures 4 and 5 stricture-free rate was 0% with a median time to recurrence of 20 and 8 months, respectively. Urethrotomy is a popular treatment for male urethral strictures. However, the performance characteristics are poor. Success rates were no higher than 9% in this series for first or subsequent urethrotomy during the observation period. Most of the patients in this series will be expected to experience failure with longer followup and the expected long-term success rate from any (1 through 5) urethrotomy approach is 0%. Urethrotomy should be considered a temporizing measure until definitive curative reconstruction can be planned. 2010 American Urological Association Education and Research, Inc. Published by Elsevier Inc. All rights reserved.

  19. A Hyper-Heuristic Ensemble Method for Static Job-Shop Scheduling.

    Science.gov (United States)

    Hart, Emma; Sim, Kevin

    2016-01-01

    We describe a new hyper-heuristic method NELLI-GP for solving job-shop scheduling problems (JSSP) that evolves an ensemble of heuristics. The ensemble adopts a divide-and-conquer approach in which each heuristic solves a unique subset of the instance set considered. NELLI-GP extends an existing ensemble method called NELLI by introducing a novel heuristic generator that evolves heuristics composed of linear sequences of dispatching rules: each rule is represented using a tree structure and is itself evolved. Following a training period, the ensemble is shown to outperform both existing dispatching rules and a standard genetic programming algorithm on a large set of new test instances. In addition, it obtains superior results on a set of 210 benchmark problems from the literature when compared to two state-of-the-art hyper-heuristic approaches. Further analysis of the relationship between heuristics in the evolved ensemble and the instances each solves provides new insights into features that might describe similar instances.

  20. Predicting Radiation Pneumonitis After Stereotactic Ablative Radiation Therapy in Patients Previously Treated With Conventional Thoracic Radiation Therapy

    International Nuclear Information System (INIS)

    Liu Hui; Zhang Xu; Vinogradskiy, Yevgeniy Y.; Swisher, Stephen G.; Komaki, Ritsuko; Chang, Joe Y.

    2012-01-01

    Purpose: To determine the incidence of and risk factors for radiation pneumonitis (RP) after stereotactic ablative radiation therapy (SABR) to the lung in patients who had previously undergone conventional thoracic radiation therapy. Methods and Materials: Seventy-two patients who had previously received conventionally fractionated radiation therapy to the thorax were treated with SABR (50 Gy in 4 fractions) for recurrent disease or secondary parenchymal lung cancer (T 10 and mean lung dose (MLD) of the previous plan and the V10-V40 and MLD of the composite plan were also related to RP. Multivariate analysis revealed that ECOG PS scores of 2-3 before SABR (P=.009), FEV1 ≤65% before SABR (P=.012), V20 ≥30% of the composite plan (P=.021), and an initial PTV in the bilateral mediastinum (P=.025) were all associated with RP. Conclusions: We found that severe RP was relatively common, occurring in 20.8% of patients, and could be predicted by an ECOG PS score of 2-3, an FEV1 ≤65%, a previous PTV spanning the bilateral mediastinum, and V20 ≥30% on composite (previous RT+SABR) plans. Prospective studies are needed to validate these predictors and the scoring system on which they are based.

  1. A novel visual saliency detection method for infrared video sequences

    Science.gov (United States)

    Wang, Xin; Zhang, Yuzhen; Ning, Chen

    2017-12-01

    Infrared video applications such as target detection and recognition, moving target tracking, and so forth can benefit a lot from visual saliency detection, which is essentially a method to automatically localize the "important" content in videos. In this paper, a novel visual saliency detection method for infrared video sequences is proposed. Specifically, for infrared video saliency detection, both the spatial saliency and temporal saliency are considered. For spatial saliency, we adopt a mutual consistency-guided spatial cues combination-based method to capture the regions with obvious luminance contrast and contour features. For temporal saliency, a multi-frame symmetric difference approach is proposed to discriminate salient moving regions of interest from background motions. Then, the spatial saliency and temporal saliency are combined to compute the spatiotemporal saliency using an adaptive fusion strategy. Besides, to highlight the spatiotemporal salient regions uniformly, a multi-scale fusion approach is embedded into the spatiotemporal saliency model. Finally, a Gestalt theory-inspired optimization algorithm is designed to further improve the reliability of the final saliency map. Experimental results demonstrate that our method outperforms many state-of-the-art saliency detection approaches for infrared videos under various backgrounds.
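
    One plausible reading of the multi-frame symmetric difference used for temporal saliency is sketched below: a pixel is kept as moving only if it differs from frames both before and after it, which suppresses slowly varying background. The frame offset k and the toy sequence are illustrative; the spatial-cue, fusion, and Gestalt optimization stages are not reproduced.

    ```python
    # Temporal-saliency sketch: symmetric difference against past and future frames.
    import numpy as np

    def symmetric_difference(frames, t, k=2):
        f = frames[t]
        back = np.abs(f - frames[t - k])
        fwd = np.abs(f - frames[t + k])
        return np.minimum(back, fwd)   # salient only if it moved in both directions

    rng = np.random.default_rng(0)
    frames = rng.normal(size=(10, 64, 64)) * 0.05        # static noisy IR background
    for t in range(10):                                  # small bright target drifting right
        frames[t, 30:34, 10 + 3 * t:14 + 3 * t] += 1.0

    sal = symmetric_difference(frames, t=5, k=2)
    print("peak temporal saliency at:", np.unravel_index(sal.argmax(), sal.shape))
    ```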

  2. Multimodal biometric method that combines veins, prints, and shape of a finger

    Science.gov (United States)

    Kang, Byung Jun; Park, Kang Ryoung; Yoo, Jang-Hee; Kim, Jeong Nyeo

    2011-01-01

    Multimodal biometrics provides high recognition accuracy and population coverage by using various biometric features. A single finger contains finger veins, fingerprints, and finger geometry features; by using multimodal biometrics, information on these multiple features can be simultaneously obtained in a short time and their fusion can outperform the use of a single feature. This paper proposes a new finger recognition method based on the score-level fusion of finger veins, fingerprints, and finger geometry features. This research is novel in the following four ways. First, the performances of the finger-vein and fingerprint recognition are improved by using a method based on a local derivative pattern. Second, the accuracy of the finger geometry recognition is greatly increased by combining a Fourier descriptor with principal component analysis. Third, a fuzzy score normalization method is introduced; its performance is better than the conventional Z-score normalization method. Fourth, finger-vein, fingerprint, and finger geometry recognitions are combined by using three support vector machines and a weighted SUM rule. Experimental results showed that the equal error rate of the proposed method was 0.254%, which was lower than those of the other methods.
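
    Score-level fusion with a weighted SUM rule can be sketched as follows, using plain z-score normalization in place of the paper's fuzzy normalization and omitting the SVM stage; the matcher score distributions and weights below are invented for illustration.

    ```python
    # Weighted-SUM score fusion over three hypothetical matchers
    # (vein, fingerprint, geometry), with z-score normalization.
    import numpy as np

    rng = np.random.default_rng(0)
    # Rows = comparison attempts; columns = (vein, fingerprint, geometry) scores.
    genuine = rng.normal([0.8, 0.7, 0.6], 0.1, size=(200, 3))
    impostor = rng.normal([0.4, 0.4, 0.4], 0.1, size=(200, 3))
    scores = np.vstack([genuine, impostor])

    z = (scores - scores.mean(0)) / scores.std(0)        # z-score normalization
    weights = np.array([0.5, 0.3, 0.2])                  # e.g., veins weighted highest
    fused = z @ weights                                   # weighted SUM rule

    labels = np.r_[np.ones(200), np.zeros(200)]
    acc = ((fused > 0) == labels).mean()                  # crude threshold at 0
    print(f"fused accuracy: {acc:.3f}")
    ```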

  3. 22 CFR 40.93 - Aliens unlawfully present after previous immigration violation.

    Science.gov (United States)

    2010-04-01

    ... 22 Foreign Relations 1 2010-04-01 2010-04-01 false Aliens unlawfully present after previous... TO BOTH NONIMMIGRANTS AND IMMIGRANTS UNDER THE IMMIGRATION AND NATIONALITY ACT, AS AMENDED Aliens Previously Removed § 40.93 Aliens unlawfully present after previous immigration violation. An alien described...

  4. Revisiting chlorophyll extraction methods in biological soil crusts - methodology for determination of chlorophyll a and chlorophyll a + b as compared to previous methods

    Science.gov (United States)

    Caesar, Jennifer; Tamm, Alexandra; Ruckteschler, Nina; Lena Leifke, Anna; Weber, Bettina

    2018-03-01

    Chlorophyll concentrations of biological soil crust (biocrust) samples are commonly determined to quantify the relevance of photosynthetically active organisms within these surface soil communities. Whereas chlorophyll extraction methods for freshwater algae and leaf tissues of vascular plants are well established, there is still some uncertainty regarding the optimal extraction method for biocrusts, where organism composition is highly variable and samples comprise major amounts of soil. In this study we analyzed the efficiency of two different chlorophyll extraction solvents, the effect of grinding the soil samples prior to the extraction procedure, and the impact of shaking as an intermediate step during extraction. The analyses were conducted on four different types of biocrusts. Our results show that for all biocrust types chlorophyll contents obtained with ethanol were significantly lower than those obtained using dimethyl sulfoxide (DMSO) as a solvent. Grinding of biocrust samples prior to analysis caused a highly significant decrease in chlorophyll content for green algal lichen- and cyanolichen-dominated biocrusts, and a tendency towards lower values for moss- and algae-dominated biocrusts. Shaking of the samples after each extraction step had a significant positive effect on the chlorophyll content of green algal lichen- and cyanolichen-dominated biocrusts. Based on our results we confirm a DMSO-based chlorophyll extraction method without grinding pretreatment and suggest the addition of an intermediate shaking step for complete chlorophyll extraction (see Supplement S6 for detailed manual). Determination of a universal chlorophyll extraction method for biocrusts is essential for the inter-comparability of publications conducted across all continents.

  5. A study of several CAD methods for classification of clustered microcalcifications

    Science.gov (United States)

    Wei, Liyang; Yang, Yongyi; Nishikawa, Robert M.; Jiang, Yulei

    2005-04-01

    In this paper we investigate several state-of-the-art machine-learning methods for automated classification of clustered microcalcifications (MCs), aimed at assisting radiologists in more accurate diagnosis of breast cancer in a computer-aided diagnosis (CADx) scheme. The methods we consider include: support vector machine (SVM), kernel Fisher discriminant (KFD), and committee machines (ensemble averaging and AdaBoost), most of which have been developed recently in statistical learning theory. We formulate differentiation of malignant from benign MCs as a supervised learning problem, and apply these learning methods to develop the classification algorithms. As input, these methods use image features automatically extracted from clustered MCs. We test these methods using a database of 697 clinical mammograms from 386 cases, which include a wide spectrum of difficult-to-classify cases. We use receiver operating characteristic (ROC) analysis to evaluate and compare the classification performance of the different methods. In addition, we also investigate how to combine information from multiple-view mammograms of the same case so that the best decision can be made by a classifier. In our experiments, the kernel-based methods (i.e., SVM, KFD) yield the best performance, significantly outperforming a well-established CADx approach based on neural network learning.
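
    A minimal version of the kernel-SVM classification with ROC evaluation described above might look like the following, using scikit-learn on synthetic stand-ins for the extracted MC-cluster features; the feature generator and labels are invented for the example.

    ```python
    # Kernel SVM with ROC-AUC evaluation on synthetic "MC cluster" features.
    import numpy as np
    from sklearn.svm import SVC
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import roc_auc_score

    rng = np.random.default_rng(0)
    n = 400
    X = rng.normal(size=(n, 8))                     # 8 hypothetical features per cluster
    y = (X[:, 0] + 0.5 * X[:, 1] ** 2 + rng.normal(scale=0.8, size=n) > 0.5).astype(int)

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
    clf = SVC(kernel="rbf", C=1.0, probability=True).fit(X_tr, y_tr)
    auc = roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1])
    print(f"ROC AUC (Az): {auc:.3f}")
    ```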

  6. Adaptive and robust statistical methods for processing near-field scanning microwave microscopy images.

    Science.gov (United States)

    Coakley, K J; Imtiaz, A; Wallis, T M; Weber, J C; Berweger, S; Kabos, P

    2015-03-01

    Near-field scanning microwave microscopy offers great potential to facilitate characterization, development and modeling of materials. By acquiring microwave images at multiple frequencies and amplitudes (along with the other modalities) one can study material and device physics at different lateral and depth scales. Images are typically noisy and contaminated by artifacts that can vary from scan line to scan line, as well as by planar-like trends due to sample tilt errors. Here, we level images based on an estimate of a smooth 2-d trend determined with a robust implementation of a local regression method. In this robust approach, features and outliers which are not due to the trend are automatically downweighted. We denoise images with the Adaptive Weights Smoothing method, which smooths out additive noise while preserving edge-like features in images. We demonstrate the feasibility of our methods on topography images and microwave |S11| images. For one challenging test case, we demonstrate that our method outperforms alternative methods from the scanning probe microscopy data analysis software package Gwyddion. Our methods should be useful for massive image data sets where manual selection of landmarks or image subsets by a user is impractical. Published by Elsevier B.V.
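
    The leveling step can be illustrated with a generic robust trend fit: estimate a smooth 2-D polynomial surface by iteratively reweighted least squares, so that features and outliers are downweighted, then subtract it. This is a stand-in under stated assumptions (global polynomial, Huber-style weights), not the authors' exact local-regression estimator.

    ```python
    # Robust 2-D trend removal by iteratively reweighted least squares.
    import numpy as np

    def robust_level(img, n_iter=10):
        ny, nx = img.shape
        yy, xx = np.mgrid[0:ny, 0:nx]
        # Second-order polynomial basis in (x, y) for the planar-like trend.
        B = np.column_stack([np.ones(img.size), xx.ravel(), yy.ravel(),
                             xx.ravel() ** 2, yy.ravel() ** 2, (xx * yy).ravel()])
        z, w = img.ravel(), np.ones(img.size)
        for _ in range(n_iter):
            coef, *_ = np.linalg.lstsq(B * w[:, None], z * w, rcond=None)
            r = z - B @ coef
            s = 1.4826 * np.median(np.abs(r)) + 1e-12        # robust scale (MAD)
            w = 1.0 / np.maximum(1.0, np.abs(r) / (2 * s))   # downweight outliers
        return img - (B @ coef).reshape(img.shape)

    rng = np.random.default_rng(0)
    yy, xx = np.mgrid[0:128, 0:128]
    img = 0.01 * xx + 0.02 * yy + rng.normal(scale=0.1, size=(128, 128))  # tilt + noise
    img[40:60, 40:60] += 3.0                                 # a real feature, not tilt
    print("residual tilt std:", robust_level(img).std())
    ```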

  7. Predicting protein complexes using a supervised learning method combined with local structural information.

    Science.gov (United States)

    Dong, Yadong; Sun, Yongqi; Qin, Chao

    2018-01-01

    The existing protein complex detection methods can be broadly divided into two categories: unsupervised and supervised learning methods. Most of the unsupervised learning methods assume that protein complexes are in dense regions of protein-protein interaction (PPI) networks even though many true complexes are not dense subgraphs. Supervised learning methods utilize the informative properties of known complexes; they often extract features from existing complexes and then use the features to train a classification model. The trained model is used to guide the search process for new complexes. However, insufficient extracted features, noise in the PPI data and the incompleteness of complex data make the classification model imprecise. Consequently, the classification model is not sufficient for guiding the detection of complexes. Therefore, we propose a new robust score function that combines the classification model with local structural information. Based on the score function, we provide a search method that works both forwards and backwards. The results from experiments on six benchmark PPI datasets and three protein complex datasets show that our approach can achieve better performance compared with the state-of-the-art supervised, semi-supervised and unsupervised methods for protein complex detection, occasionally significantly outperforming such methods.

  8. Comparison between ARIMA and DES Methods of Forecasting Population for Housing Demand in Johor

    Directory of Open Access Journals (Sweden)

    Alias Ahmad Rizal

    2016-01-01

    Full Text Available Forecasting accuracy is a primary criterion in selecting an appropriate method of prediction. Even though there are various methods of forecasting, not all of them are able to predict with good accuracy. This paper presents an evaluation of two methods of population forecasting for housing demand: Autoregressive Integrated Moving Average (ARIMA) and Double Exponential Smoothing (DES). Both methods principally adopt univariate time series analysis, which uses past and present data for forecasting. Secondary data obtained from the Department of Statistics, Malaysia, was used to forecast population for housing demand in Johor. The forecasting process generated 14 models for each of the methods, and these models were evaluated using the Mean Absolute Percentage Error (MAPE). The 14 Double Exponential Smoothing models and the 14 ARIMA models yielded average MAPE values of 1.674% and 5.524%, respectively. Hence, the Double Exponential Smoothing method outperformed the ARIMA method, reducing the average error by roughly 4 percentage points in forecasting the population of Johor state. These findings help researchers and government agencies in selecting an appropriate forecasting model for housing demand.
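
    Double Exponential Smoothing itself is compact enough to sketch directly (Holt's linear-trend form), together with the MAPE criterion used for evaluation; the smoothing constants and the toy population series below are illustrative, not the study's fitted models.

    ```python
    # Holt's double exponential smoothing with one-step-ahead fit and MAPE.
    import numpy as np

    def des_forecast(y, alpha=0.5, beta=0.3, horizon=3):
        level, trend = y[0], y[1] - y[0]
        fitted = [y[0]]
        for t in range(1, len(y)):
            fitted.append(level + trend)               # one-step-ahead forecast
            new_level = alpha * y[t] + (1 - alpha) * (level + trend)
            trend = beta * (new_level - level) + (1 - beta) * trend
            level = new_level
        future = [level + h * trend for h in range(1, horizon + 1)]
        return np.array(fitted), np.array(future)

    population = np.array([2.74, 2.85, 2.97, 3.06, 3.17, 3.23, 3.35, 3.48])  # toy series
    fitted, future = des_forecast(population)
    mape = np.mean(np.abs((population - fitted) / population)) * 100
    print(f"MAPE = {mape:.3f}%, next forecasts: {np.round(future, 2)}")
    ```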

  9. Modified automatic term selection v2: A faster algorithm to calculate inelastic scattering cross-sections

    Energy Technology Data Exchange (ETDEWEB)

    Rusz, Ján, E-mail: jan.rusz@fysik.uu.se

    2017-06-15

    Highlights: • New algorithm for calculating the double differential scattering cross-section. • Shows good convergence properties. • Outperforms the older MATS algorithm, particularly in zone axis calculations. - Abstract: We present a new algorithm for calculating inelastic scattering cross-sections for fast electrons. Compared to the previous Modified Automatic Term Selection (MATS) algorithm (Rusz et al. [18]), it has far better convergence properties in zone axis calculations and it allows contributions of individual atoms to be identified. One can think of it as a blend of the MATS algorithm and a method described by Weickenmeier and Kohl [10].

  10. 18 CFR 154.302 - Previously submitted material.

    Science.gov (United States)

    2010-04-01

    ... 18 Conservation of Power and Water Resources 1 2010-04-01 2010-04-01 false Previously submitted material. 154.302 Section 154.302 Conservation of Power and Water Resources FEDERAL ENERGY REGULATORY... concurrently with the rate change filing. There must be furnished to the Director, Office of Energy Market...

  11. Subsequent pregnancy outcome after previous foetal death

    NARCIS (Netherlands)

    Nijkamp, J. W.; Korteweg, F. J.; Holm, J. P.; Timmer, A.; Erwich, J. J. H. M.; van Pampus, M. G.

    Objective: A history of foetal death is a risk factor for complications and foetal death in subsequent pregnancies as most previous risk factors remain present and an underlying cause of death may recur. The purpose of this study was to evaluate subsequent pregnancy outcome after foetal death and to

  12. An Improved Method for Solving Multiobjective Integer Linear Fractional Programming Problem

    Directory of Open Access Journals (Sweden)

    Meriem Ait Mehdi

    2014-01-01

    Full Text Available We describe an improvement of Chergui and Moulaï's method (2008) that generates the whole efficient set of a multiobjective integer linear fractional program based on the branch-and-cut concept. The general step of this method consists in optimizing (maximizing, without loss of generality) one of the fractional objective functions over a subset of the original continuous feasible set; then, if necessary, a branching process is carried out until an integer feasible solution is obtained. At this stage, an efficient cut is built from the criteria's growth directions in order to discard a part of the feasible domain containing only nonefficient solutions. Our contribution concerns, firstly, the optimization process, where a linear program that we define later is solved at each step rather than a fractional linear program. Secondly, local ideal and nadir points are used as bounds to prune branches leading to nonefficient solutions. The computational experiments show that the new method outperforms the old one in all the treated instances.

  13. Multispecies Coevolution Particle Swarm Optimization Based on Previous Search History

    Directory of Open Access Journals (Sweden)

    Danping Wang

    2017-01-01

    Full Text Available A hybrid coevolution particle swarm optimization algorithm with a dynamic multispecies strategy based on K-means clustering and a nonrevisit strategy based on a Binary Space Partitioning fitness tree (called MCPSO-PSH) is proposed. Previous search history, memorized in the Binary Space Partitioning fitness tree, can effectively restrain the revisiting of previously explored solutions. The whole population is partitioned into several subspecies, and cooperative coevolution is realized by an information communication mechanism between subspecies, which can enhance the global search ability of particles and avoid premature convergence to local optima. To demonstrate the power of the method, comparisons between the proposed algorithm and state-of-the-art algorithms are grouped into three categories: 10 basic benchmark functions (10-dimensional and 30-dimensional), 10 CEC2005 benchmark functions (30-dimensional), and a real-world problem (multilevel image segmentation). Experimental results show that MCPSO-PSH displays competitive performance compared to the other swarm-based or evolutionary algorithms in terms of solution accuracy and statistical tests.

  14. A hybrid filtering method based on a novel empirical mode decomposition for friction signals

    International Nuclear Information System (INIS)

    Li, Chengwei; Zhan, Liwei

    2015-01-01

    During a measurement, the measured signal usually contains noise. To remove the noise and preserve the important features of the signal, we introduce a hybrid filtering method that uses a new intrinsic mode function (NIMF) and a modified Hausdorff distance. The NIMF is defined as the difference between the noisy signal and each intrinsic mode function (IMF), which is obtained by empirical mode decomposition (EMD), ensemble EMD, complementary ensemble EMD, or complete ensemble EMD with adaptive noise (CEEMDAN). The relevant mode selection is based on the similarity between the first NIMF and the rest of the NIMFs. With this filtering method, EMD and its improved versions are used to filter the simulation and friction signals. The friction signal between an airplane tire and the runway is recorded during a simulated airplane touchdown and features spikes of various amplitudes plus noise. The filtering effectiveness of the four hybrid filtering methods is compared and discussed. The results show that the filtering method based on CEEMDAN outperforms the other signal filtering methods. (paper)
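
    One plausible reading of the NIMF construction is sketched below with the PyEMD package: each NIMF is the noisy signal minus one IMF, and modes whose NIMF closely resembles the first NIMF are treated as noise. Ordinary correlation stands in for the paper's modified Hausdorff distance, and the signal and threshold are illustrative.

    ```python
    # NIMF-based mode selection sketch (correlation instead of Hausdorff distance).
    import numpy as np
    from PyEMD import EMD

    rng = np.random.default_rng(0)
    t = np.linspace(0, 1, 1000)
    signal = np.sin(2 * np.pi * 5 * t) + 0.4 * rng.normal(size=t.size)  # tone + noise

    imfs = EMD()(signal)                       # IMF_1 ... IMF_K via sifting
    nimfs = signal - imfs                      # NIMF_k = x - IMF_k, one per row

    # Modes whose NIMF resembles the first NIMF are treated as noise-like.
    ref = nimfs[0]
    sim = [np.corrcoef(ref, n)[0, 1] for n in nimfs]
    noise_modes = [k for k, s in enumerate(sim) if s > 0.95]
    filtered = signal - imfs[noise_modes].sum(axis=0)   # subtract the noise modes
    print(f"{len(noise_modes)} of {len(imfs)} modes treated as noise")
    ```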

  15. An Efficient Ensemble Learning Method for Gene Microarray Classification

    Directory of Open Access Journals (Sweden)

    Alireza Osareh

    2013-01-01

    Full Text Available Gene microarray analysis and classification have proved an effective way to diagnose diseases and cancers. However, it has also been revealed that basic classification techniques have intrinsic drawbacks in achieving accurate gene classification and cancer diagnosis. On the other hand, classifier ensembles have received increasing attention in various applications. Here, we address the gene classification issue using the RotBoost ensemble methodology. This method is a combination of the Rotation Forest and AdaBoost techniques, which in turn preserves both desirable features of an ensemble architecture, that is, accuracy and diversity. To select a concise subset of informative genes, 5 different feature selection algorithms are considered. To assess the efficiency of RotBoost, other non-ensemble/ensemble techniques, including Decision Trees, Support Vector Machines, Rotation Forest, AdaBoost, and Bagging, are also deployed. Experimental results have revealed that the combination of the fast correlation-based feature selection method with the ICA-based RotBoost ensemble is highly effective for gene classification. In fact, the proposed method can create ensemble classifiers which outperform not only the classifiers produced by conventional machine learning but also the classifiers generated by two widely used conventional ensemble learning methods, that is, Bagging and AdaBoost.

  16. Left ventricular asynergy score as an indicator of previous myocardial infarction

    International Nuclear Information System (INIS)

    Backman, C.; Jacobsson, K.A.; Linderholm, H.; Osterman, G.

    1986-01-01

    Sixty-eight patients with coronary heart disease (CHD), i.e. a history of angina of effort and/or previous 'possible infarction', were examined inter alia with ECG and cinecardioangiography. A system of scoring was designed which allowed a semiquantitative estimate of left ventricular asynergy from cinecardioangiography - the left ventricular motion score (LVMS). The LVMS was associated with the presence of a previous myocardial infarction (MI), as indicated by the history and ECG findings. ECG changes specific for a previous MI were associated with high LVMS values, and unspecific or absent ECG changes with low LVMS values. Decision thresholds for ECG changes and asynergy in diagnosing a previous MI were evaluated by means of a ROC analysis. The accuracy of ECG in detecting a previous MI was slightly higher when asynergy indicated a 'true MI' than when the autopsy result did so in a comparable group. Therefore, the accuracy of asynergy (LVMS ≥ 1) in detecting a previous MI or myocardial fibrosis in patients with CHD should be at least comparable with that of autopsy (scar > 1 cm). (orig.)

  17. In vitro fertilization outcome in women with endometriosis & previous ovarian surgery

    Directory of Open Access Journals (Sweden)

    Sonja Pop-Trajkovic

    2014-01-01

    Full Text Available Background & objectives: Women with endometriosis often need in vitro fertilization (IVF) to conceive. There are conflicting data on the results of IVF in patients with endometriosis. This study was undertaken to elucidate the influence of endometriosis on IVF outcome, to provide the best counselling for infertile patients with this problem. Methods: The outcome measures in 78 patients with surgically confirmed endometriosis were compared with those in 157 patients with tubal factor infertility, all of whom had undergone IVF. The groups were matched for age and follicle stimulating hormone (FSH) levels. Outcome measures included number of follicles, number of oocytes, peak oestradiol (E2) concentrations and mean number of ampoules of gonadotropins. Cumulative pregnancy, miscarriage and live birth rates were calculated in both groups. Results: Higher cancellation rates, higher total gonadotropin requirements, lower peak E2 levels and lower oocyte yield were found in women with endometriosis and previous surgery compared with those with tubal factor infertility. However, no differences were found in fertilization, implantation, pregnancy, miscarriage, multiple birth and delivery rates between the endometriosis and tubal factor infertility groups. Interpretation & conclusions: The present findings showed that women with endometriosis and previous surgery responded less well to gonadotropins during ovarian stimulation, and hence the cost of treatment to achieve pregnancy was higher in this group compared with those with tubal factor infertility. However, the outcome of IVF treatment in patients with endometriosis was as good as in women with tubal factor infertility.

  18. The Effect of Cooperative Learning with DSLM on Conceptual Understanding and Scientific Reasoning among Form Four Physics Students with Different Motivation Levels

    Directory of Open Access Journals (Sweden)

    M.S. Hamzah

    2010-11-01

    Full Text Available The purpose of this study was to investigate the effect of Cooperative Learning with a Dual Situated Learning Model (CLDSLM) and a Dual Situated Learning Model (DSLM) on (a) conceptual understanding (CU) and (b) scientific reasoning (SR) among Form Four students. The study further investigated the effect of the CLDSLM and DSLM methods on performance in conceptual understanding and scientific reasoning among students with different motivation levels. A quasi-experimental method with a 3 x 2 factorial design was applied in the study. The sample consisted of 240 students in six Form Four classes selected from three different schools, i.e. two classes from each school, with students randomly selected and assigned to the treatment groups. The results showed that students in the CLDSLM group outperformed their counterparts in the DSLM group, who in turn significantly outperformed the students in the traditional instructional method (T) group, in scientific reasoning and conceptual understanding. Also, high-motivation (HM) students in the CLDSLM group significantly outperformed their counterparts in the T group in conceptual understanding and scientific reasoning. Furthermore, HM students in the CLDSLM group significantly outperformed their counterparts in the DSLM group in scientific reasoning but did not significantly outperform them in conceptual understanding. Also, the DSLM instructional method has significant positive effects on highly motivated students' (a) conceptual understanding and (b) scientific reasoning. The results also showed that low-motivation (LM) students in the CLDSLM group significantly outperformed their counterparts in the DSLM group and the T group in scientific reasoning and conceptual understanding. However, the low-motivation students taught via the DSLM instructional method significantly outperformed the low-motivation students taught via the T method in scientific reasoning. Nevertheless, they did not

  19. Emphysema and bronchiectasis in COPD patients with previous pulmonary tuberculosis: computed tomography features and clinical implications

    Directory of Open Access Journals (Sweden)

    Jin J

    2018-01-01

    Full Text Available Jianmin Jin,1 Shuling Li,2 Wenling Yu,2 Xiaofang Liu,1 Yongchang Sun1,3 1Department of Respiratory and Critical Care Medicine, Beijing Tongren Hospital, Capital Medical University, Beijing, 2Department of Radiology, Beijing Tongren Hospital, Capital Medical University, Beijing, 3Department of Respiratory and Critical Care Medicine, Peking University Third Hospital, Beijing, China Background: Pulmonary tuberculosis (PTB) is a risk factor for COPD, but the clinical characteristics and the chest imaging features (emphysema and bronchiectasis) of COPD with previous PTB have not been studied well. Methods: The presence, distribution, and severity of emphysema and bronchiectasis in COPD patients with and without previous PTB were evaluated by high-resolution computed tomography (HRCT) and compared. Demographic data, respiratory symptoms, lung function, and sputum culture of Pseudomonas aeruginosa were also compared between patients with and without previous PTB. Results: A total of 231 COPD patients (82.2% ex- or current smokers, 67.5% male) were consecutively enrolled. Patients with previous PTB (45.0%) had more severe (p=0.045) and longer history (p=0.008) of dyspnea, more exacerbations in the previous year (p=0.011), and more positive cultures of P. aeruginosa (p=0.001), compared with those without PTB. Patients with previous PTB showed a higher prevalence of bronchiectasis (p<0.001), which was more significant in lungs with tuberculosis (TB) lesions, and a higher percentage of more severe bronchiectasis (Bhalla score ≥2, p=0.031), compared with those without previous PTB. The overall prevalence of emphysema was not different between patients with and without previous PTB, but in those with previous PTB, a higher number of subjects with middle (p=0.001) and lower (p=0.019) lobe emphysema, higher severity score (p=0.028), higher prevalence of panlobular emphysema (p=0.013), and more extensive centrilobular emphysema (p=0.039) were observed. Notably, in patients with

  20. Computer-assisted photo identification outperforms visible implant elastomers in an endangered salamander, Eurycea tonkawae.

    Directory of Open Access Journals (Sweden)

    Nathan F Bendik

    Full Text Available Despite recognition that nearly one-third of the 6300 amphibian species are threatened with extinction, our understanding of the general ecology and population status of many amphibians is relatively poor. A widely used method for monitoring amphibians involves injecting captured individuals with unique combinations of colored visible implant elastomer (VIE). We compared VIE identification to a less-invasive method, computer-assisted photographic identification (photoID), in endangered Jollyville Plateau salamanders (Eurycea tonkawae), a species with a known range limited to eight stream drainages in central Texas. We based photoID on the unique pigmentation patterns on the dorsal head region of 1215 individual salamanders using the identification software Wild-ID. We compared the performance of photoID methods to VIEs using both 'high-quality' and 'low-quality' images, which were taken using two different camera types and technologies. For high-quality images, the photoID method had a false rejection rate of 0.76% compared to 1.90% for VIEs. Using a comparable dataset of lower-quality images, the false rejection rate was much higher (15.9%). Photo matching scores were negatively correlated with time between captures, suggesting that evolving natural marks could increase misidentification rates in longer term capture-recapture studies. Our study demonstrates the utility of large-scale capture-recapture using photo identification methods for Eurycea and other species with stable natural marks that can be reliably photographed.

  1. A Feature Subset Selection Method Based On High-Dimensional Mutual Information

    Directory of Open Access Journals (Sweden)

    Chee Keong Kwoh

    2011-04-01

    Full Text Available Feature selection is an important step in building accurate classifiers and provides better understanding of the data sets. In this paper, we propose a feature subset selection method based on high-dimensional mutual information. We also propose to use the entropy of the class attribute as a criterion to determine the appropriate subset of features when building classifiers. We prove that if the mutual information between a feature set X and the class attribute Y equals the entropy of Y, then X is a Markov Blanket of Y. We show that in some cases, it is infeasible to approximate the high-dimensional mutual information with algebraic combinations of pairwise mutual information in any form. In addition, an exhaustive search of all combinations of features is a prerequisite for finding the optimal feature subsets for classifying these kinds of data sets. We show that our approach outperforms existing filter feature subset selection methods for most of the 24 selected benchmark data sets.
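    As an illustration of the stopping criterion described above (add features until the joint mutual information with the class reaches the entropy of the class, i.e. the selected set is an empirical Markov Blanket), here is a small greedy sketch for discrete-valued data using plug-in entropy estimates; the helper names are ours, not the paper's.

    # Greedy sketch of the MI(X;Y) = H(Y) stopping rule for discrete data.
    # X: (n, d) integer array of features; y: (n,) integer class labels.
    import numpy as np
    from collections import Counter

    def entropy(rows):
        counts = np.array(list(Counter(map(tuple, rows)).values()), float)
        p = counts / counts.sum()
        return -(p * np.log2(p)).sum()

    def joint_mi(X_sub, y):
        # I(X;Y) = H(X) + H(Y) - H(X,Y), estimated from frequencies.
        Xy = np.column_stack([X_sub, y])
        return entropy(X_sub) + entropy(y.reshape(-1, 1)) - entropy(Xy)

    def select_features(X, y, tol=1e-9):
        selected, remaining = [], list(range(X.shape[1]))
        h_y = entropy(y.reshape(-1, 1))
        while remaining:
            best = max(remaining,
                       key=lambda j: joint_mi(X[:, selected + [j]], y))
            selected.append(best); remaining.remove(best)
            # Stop once the set is (empirically) a Markov Blanket of Y.
            if joint_mi(X[:, selected], y) >= h_y - tol:
                break
        return selected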

  2. Search method for long-duration gravitational-wave transients from neutron stars

    International Nuclear Information System (INIS)

    Prix, R.; Giampanis, S.; Messenger, C.

    2011-01-01

    We introduce a search method for a new class of gravitational-wave signals, namely, long-duration O(hours-weeks) transients from spinning neutron stars. We discuss the astrophysical motivation from glitch relaxation models and we derive a rough estimate for the maximal expected signal strength based on the superfluid excess rotational energy. The transient signal model considered here extends the traditional class of infinite-duration continuous-wave signals by a finite start-time and duration. We derive a multidetector Bayes factor for these signals in Gaussian noise using F-statistic amplitude priors, which simplifies the detection statistic and allows for an efficient implementation. We consider both a fully coherent statistic, which is computationally limited to directed searches for known pulsars, and a cheaper semicoherent variant, suitable for wide parameter-space searches for transients from unknown neutron stars. We have tested our method by Monte-Carlo simulation, and we find that it outperforms orthodox maximum-likelihood approaches both in sensitivity and in parameter-estimation quality.

  3. Reasoning with Previous Decisions: Beyond the Doctrine of Precedent

    DEFF Research Database (Denmark)

    Komárek, Jan

    2013-01-01

    Practices of reasoning with previous decisions outside the Common law tradition may not follow the 'case law method', but they are no less rational and intellectually sophisticated. The reason for the rather conceited attitude of some comparatists is in the dominance of the common law paradigm of precedent and the accompanying 'case law method'. If we want to understand how courts and lawyers in different jurisdictions use previous judicial decisions in their argument, we need to move beyond the concept of precedent to a wider notion, which would embrace practices and theories in legal systems outside the Common law tradition. This article presents the concept of 'reasoning with previous decisions' as such an alternative and develops its basic models. The article first points out several shortcomings inherent in limiting the inquiry into reasoning with previous decisions by the common law paradigm (1). On the basis of numerous examples provided in section (1), I will present two basic models of reasoning with previous decisions.

  4. On the Tengiz petroleum deposit previous study

    International Nuclear Information System (INIS)

    Nysangaliev, A.N.; Kuspangaliev, T.K.

    1997-01-01

    A previous study of the Tengiz petroleum deposit is described. Some considerations about the structure of the productive formation and the specific characteristic properties of its petroleum-bearing collectors are presented. Recommendations are given on their detailed study and on using experience from the exploration and development of petroleum deposits that are analogous in the most important geological and industrial parameters. (author)

  5. Correlation Filter Learning Toward Peak Strength for Visual Tracking.

    Science.gov (United States)

    Sui, Yao; Wang, Guanghui; Zhang, Li

    2018-04-01

    This paper presents a novel visual tracking approach to correlation filter learning toward peak strength of correlation response. Previous methods leverage all features of the target and the immediate background to learn a correlation filter. Some features, however, may be distractive to tracking, like those from occlusion and local deformation, resulting in unstable tracking performance. This paper aims at solving this issue and proposes a novel algorithm to learn the correlation filter. The proposed approach, by imposing an elastic net constraint on the filter, can adaptively eliminate those distractive features in the correlation filtering. A new peak strength metric is proposed to measure the discriminative capability of the learned correlation filter. It is demonstrated that the proposed approach effectively strengthens the peak of the correlation response, leading to more discriminative performance than previous methods. Extensive experiments on a challenging visual tracking benchmark demonstrate that the proposed tracker outperforms most state-of-the-art methods.

  6. An effective suggestion method for keyword search of databases

    KAUST Repository

    Huang, Hai

    2016-09-09

    This paper solves the problem of providing high-quality suggestions for user keyword queries over databases. With the assumption that the returned suggestions are independent, existing query suggestion methods over databases score candidate suggestions individually and return the top-k best of them. However, the top-k suggestions have high redundancy with respect to the topics. To provide informative suggestions, the returned k suggestions are expected to be diverse, i.e., simultaneously maximizing the relevance to the user query and the diversity with respect to topics that the user might be interested in. In this paper, an objective function considering both factors is defined for evaluating a suggestion set. We show that maximizing the objective function is a submodular function maximization problem subject to n matroid constraints, which is an NP-hard problem. A greedy approximate algorithm with a provable approximation ratio is also proposed. Experimental results show that our suggestion method outperforms other methods on providing relevant and diverse suggestions. © 2016 Springer Science+Business Media New York
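    A minimal sketch of the greedy selection step follows, assuming hypothetical per-candidate relevance scores and topic sets; the objective combines relevance with submodular topic coverage, which is what gives a greedy algorithm its approximation guarantee.

    # Illustrative greedy selection of k diverse suggestions (hypothetical
    # scoring inputs): maximize relevance plus coverage of new topics.
    def greedy_diverse_topk(candidates, k, relevance, topics, lam=0.5):
        """candidates: list of suggestion strings;
        relevance: dict suggestion -> score; topics: dict suggestion -> set."""
        chosen, covered = [], set()
        while candidates and len(chosen) < k:
            def gain(s):
                # Marginal gain: relevance + newly covered topics (submodular).
                return relevance[s] + lam * len(topics[s] - covered)
            best = max(candidates, key=gain)
            chosen.append(best)
            covered |= topics[best]
            candidates = [c for c in candidates if c != best]
        return chosen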

  7. Prediction of successful trial of labour in patients with a previous caesarean section

    International Nuclear Information System (INIS)

    Shaheen, N.; Khalil, S.; Iftikhar, P.

    2014-01-01

    Objective: To determine the prediction rate of success in trial of labour after one previous caesarean section. Methods: The cross-sectional study was conducted at the Department of Obstetrics and Gynaecology, Cantonment General Hospital, Rawalpindi, from January 1, 2012 to January 31, 2013, and comprised women with one previous Caesarean section and with a single live foetus at 37-41 weeks of gestation. Women with more than one Caesarean section, unknown site of uterine scar, bony pelvic deformity, placenta previa, intra-uterine growth restriction, deep transverse arrest in previous labour and non-reassuring foetal status at the time of admission were excluded. Intrapartum risk assessment included Bishop score at admission, rate of cervical dilatation and scar tenderness. SPSS 21 was used for statistical analysis. Results: Out of a total of 95 women, the trial was successful in 68 (71.6%). Estimated foetal weight and number of prior vaginal deliveries had a high predictive value for successful trial of labour after Caesarean section. Estimated foetal weight had an odds ratio of 0.46 (p<0.001), while number of prior vaginal deliveries had an odds ratio of 0.85 (p=0.010). Other factors found to be predictive of a successful trial included Bishop score at the time of admission (p<0.037) and rate of cervical dilatation in the first stage of labour (p<0.021). Conclusion: History of prior vaginal deliveries, higher Bishop score at the time of admission, rapid rate of cervical dilatation and lower estimated foetal weight were predictive of a successful trial of labour after Caesarean section. (author)

  8. A purely Lagrangian method for simulating the shallow water equations on a sphere using smooth particle hydrodynamics

    Science.gov (United States)

    Capecelatro, Jesse

    2018-03-01

    It has long been suggested that a purely Lagrangian solution to global-scale atmospheric/oceanic flows can potentially outperform traditional Eulerian schemes. Meanwhile, a demonstration of a scalable and practical framework remains elusive. Motivated by recent progress in particle-based methods when applied to convection dominated flows, this work presents a fully Lagrangian method for solving the inviscid shallow water equations on a rotating sphere in a smooth particle hydrodynamics framework. To avoid singularities at the poles, the governing equations are solved in Cartesian coordinates, augmented with a Lagrange multiplier to ensure that fluid particles are constrained to the surface of the sphere. An underlying grid in spherical coordinates is used to facilitate efficient neighbor detection and parallelization. The method is applied to a suite of canonical test cases, and conservation, accuracy, and parallel performance are assessed.
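    A highly simplified sketch of the constrained particle update is shown below: after an unconstrained step, positions are projected back onto the sphere and the radial velocity component is removed. The paper enforces the constraint with a Lagrange multiplier, so this projection is only an illustration of the geometric idea.

    # Simplified sketch of advancing particles constrained to a sphere:
    # unconstrained update, then project positions to radius R and keep
    # velocities tangent (illustrative stand-in for the Lagrange multiplier).
    import numpy as np

    def step_on_sphere(x, v, a, dt, R=6.371e6):
        """x, v, a: (N, 3) Cartesian position, velocity, acceleration."""
        v = v + dt * a
        x = x + dt * v
        # Project positions onto the sphere of radius R.
        r = np.linalg.norm(x, axis=1, keepdims=True)
        x = R * x / r
        # Remove the radial velocity component so motion stays tangential.
        n = x / R
        v = v - np.sum(v * n, axis=1, keepdims=True) * n
        return x, v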

  9. Functional MRI of the visual cortex and visual testing in patients with previous optic neuritis

    DEFF Research Database (Denmark)

    Langkilde, Annika Reynberg; Frederiksen, J.L.; Rostrup, Egill

    2002-01-01

    The volume of cortical activation as detected by functional magnetic resonance imaging (fMRI) in the visual cortex has previously been shown to be reduced following optic neuritis (ON). In order to understand the cause of this change, we studied the cortical activation, both the size of the activated area and the signal change following ON, and compared the results with results of neuroophthalmological testing. We studied nine patients with previous acute ON, and 10 healthy persons served as controls, using fMRI with visual stimulation. In addition to a reduced activated volume, patients showed ... to both the results of the contrast sensitivity test and to the Snellen visual acuity. Our results indicate that fMRI is a useful method for the study of ON, even in cases where the visual acuity is severely impaired. The reduction in activated volume could be explained as a reduced neuronal input ...

  10. Bayesian and Classical Machine Learning Methods: A Comparison for Tree Species Classification with LiDAR Waveform Signatures

    Directory of Open Access Journals (Sweden)

    Tan Zhou

    2017-12-01

    Full Text Available A plethora of information contained in full-waveform (FW) Light Detection and Ranging (LiDAR) data offers prospects for characterizing vegetation structures. This study aims to investigate the capacity of FW LiDAR data alone for tree species identification through the integration of waveform metrics with machine learning methods and Bayesian inference. Specifically, we first conducted automatic tree segmentation based on the waveform-based canopy height model (CHM) using three approaches including TreeVaW, watershed algorithms and the combination of TreeVaW and watershed (TW) algorithms. Subsequently, the Random Forests (RF) and Conditional Inference Forests (CF) models were employed to identify important tree-level waveform metrics derived from three distinct sources, namely raw waveforms, composite waveforms, the waveform-based point cloud and the combined variables from these three sources. Further, we discriminated tree (gray pine, blue oak, interior live oak) and shrub species through the RF, CF and Bayesian multinomial logistic regression (BMLR) using important waveform metrics identified in this study. Results of the tree segmentation demonstrated that the TW algorithms outperformed other algorithms for delineating individual tree crowns. The CF model overcomes the waveform metrics selection bias caused by the RF model, which favors correlated metrics, and enhances the accuracy of subsequent classification. We also found that composite waveforms are more informative than raw waveforms and the waveform-based point cloud for characterizing tree species in our study area. Both classical machine learning methods (the RF and CF) and the BMLR generated satisfactory average overall accuracy (74% for the RF, 77% for the CF and 81% for the BMLR) and the BMLR slightly outperformed the other two methods. However, these three methods suffered from low individual classification accuracy for the blue oak which is prone to being misclassified as the interior live oak due

  11. Convex Hull Aided Registration Method (CHARM).

    Science.gov (United States)

    Fan, Jingfan; Yang, Jian; Zhao, Yitian; Ai, Danni; Liu, Yonghuai; Wang, Ge; Wang, Yongtian

    2017-09-01

    Non-rigid registration finds many applications such as photogrammetry, motion tracking, model retrieval, and object recognition. In this paper we propose a novel convex hull aided registration method (CHARM) to match two point sets subject to a non-rigid transformation. First, two convex hulls are extracted from the source and target respectively. Then, all points of the point sets are projected onto the reference plane through each triangular facet of the hulls. From these projections, invariant features are extracted and matched optimally. The matched feature point pairs are mapped back onto the triangular facets of the convex hulls to remove outliers that are outside any relevant triangular facet. The rigid transformation from the source to the target is robustly estimated by the random sample consensus (RANSAC) scheme through minimizing the distance between the matched feature point pairs. Finally, these feature points are utilized as the control points to achieve non-rigid deformation in the form of thin-plate spline of the entire source point set towards the target one. The experimental results based on both synthetic and real data show that the proposed algorithm outperforms several state-of-the-art ones with respect to sampling, rotational angle, and data noise. In addition, the proposed CHARM algorithm also shows higher computational efficiency compared to these methods.
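    Two of the ingredients (convex hull extraction and the final thin-plate-spline warp) can be sketched with SciPy as below, assuming the matched control-point pairs have already been obtained by the feature matching and RANSAC stages, which are omitted here.

    # Sketch of two CHARM ingredients (matching and RANSAC omitted):
    # extract convex hull vertices, then warp the whole source point set
    # with a thin-plate spline driven by matched control-point pairs.
    import numpy as np
    from scipy.spatial import ConvexHull
    from scipy.interpolate import RBFInterpolator

    def hull_vertices(points):
        """Vertices of the convex hull of an (N, 3) point set."""
        return points[ConvexHull(points).vertices]

    def tps_warp(source, ctrl_src, ctrl_dst):
        """Warp all source points using matched control pairs, assumed
        given (e.g. from feature matching plus RANSAC outlier removal)."""
        tps = RBFInterpolator(ctrl_src, ctrl_dst, kernel='thin_plate_spline')
        return tps(source)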

  12. A model-based approach for identifying signatures of ancient balancing selection in genetic data.

    Science.gov (United States)

    DeGiorgio, Michael; Lohmueller, Kirk E; Nielsen, Rasmus

    2014-08-01

    While much effort has focused on detecting positive and negative directional selection in the human genome, relatively little work has been devoted to balancing selection. This lack of attention is likely due to the paucity of sophisticated methods for identifying sites under balancing selection. Here we develop two composite likelihood ratio tests for detecting balancing selection. Using simulations, we show that these methods outperform competing methods under a variety of assumptions and demographic models. We apply the new methods to whole-genome human data, and find a number of previously-identified loci with strong evidence of balancing selection, including several HLA genes. Additionally, we find evidence for many novel candidates, the strongest of which is FANK1, an imprinted gene that suppresses apoptosis, is expressed during meiosis in males, and displays marginal signs of segregation distortion. We hypothesize that balancing selection acts on this locus to stabilize the segregation distortion and negative fitness effects of the distorter allele. Thus, our methods are able to reproduce many previously-hypothesized signals of balancing selection, as well as discover novel interesting candidates.

  13. A model-based approach for identifying signatures of ancient balancing selection in genetic data.

    Directory of Open Access Journals (Sweden)

    Michael DeGiorgio

    2014-08-01

    Full Text Available While much effort has focused on detecting positive and negative directional selection in the human genome, relatively little work has been devoted to balancing selection. This lack of attention is likely due to the paucity of sophisticated methods for identifying sites under balancing selection. Here we develop two composite likelihood ratio tests for detecting balancing selection. Using simulations, we show that these methods outperform competing methods under a variety of assumptions and demographic models. We apply the new methods to whole-genome human data, and find a number of previously-identified loci with strong evidence of balancing selection, including several HLA genes. Additionally, we find evidence for many novel candidates, the strongest of which is FANK1, an imprinted gene that suppresses apoptosis, is expressed during meiosis in males, and displays marginal signs of segregation distortion. We hypothesize that balancing selection acts on this locus to stabilize the segregation distortion and negative fitness effects of the distorter allele. Thus, our methods are able to reproduce many previously-hypothesized signals of balancing selection, as well as discover novel interesting candidates.

  14. A special purpose knowledge-based face localization method

    Science.gov (United States)

    Hassanat, Ahmad; Jassim, Sabah

    2008-04-01

    This paper is concerned with face localization for a visual speech recognition (VSR) system. Face detection and localization have received a great deal of attention in the last few years, because they are an essential pre-processing step in many techniques that handle or deal with faces (e.g. age, face, gender, race and visual speech recognition). We present an efficient method for localizing human faces in video images captured on constrained mobile devices, under a wide variation in lighting conditions. We use a multiphase method that may include all or some of the following steps, starting with image pre-processing, followed by a special-purpose edge detection, then an image refinement step. The output image is passed through a discrete wavelet decomposition procedure, and the computed LL sub-band at a certain level is transformed into a binary image that is scanned using a special template to select a number of possible candidate locations. Finally, we fuse the scores from the wavelet step with scores determined by color information for the candidate locations and employ a form of fuzzy logic to distinguish face from non-face locations. We present results of a large number of experiments to demonstrate that the proposed face localization method is efficient and achieves a high level of accuracy that outperforms existing general-purpose face detection methods.

  15. A Single Image Dehazing Method Using Average Saturation Prior

    Directory of Open Access Journals (Sweden)

    Zhenfei Gu

    2017-01-01

    Full Text Available Outdoor images captured in bad weather are prone to yield poor visibility, which is a fatal problem for most computer vision applications. The majority of existing dehazing methods rely on an atmospheric scattering model and therefore share a common limitation; that is, the model is only valid when the atmosphere is homogeneous. In this paper, we propose an improved atmospheric scattering model to overcome this inherent limitation. By adopting the proposed model, a corresponding dehazing method is also presented. In this method, we first create a haze density distribution map of a hazy image, which enables us to segment the hazy image into scenes according to the haze density similarity. Then, in order to improve the atmospheric light estimation accuracy, we define an effective weight assignment function to locate a candidate scene based on the scene segmentation results and therefore avoid most potential errors. Next, we propose a simple but powerful prior named the average saturation prior (ASP), which is a statistic of extensive high-definition outdoor images. Using this prior combined with the improved atmospheric scattering model, we can directly estimate the scene atmospheric scattering coefficient and restore the scene albedo. The experimental results verify that our model is physically valid, and the proposed method outperforms several state-of-the-art single image dehazing methods in terms of both robustness and effectiveness.
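    For context, the classical homogeneous atmospheric scattering model that the paper sets out to improve is I = J*t + A*(1 - t); given estimates of the atmospheric light A and the transmission t, the scene radiance J is recovered by inverting it, as in this sketch (the improved model and the ASP-based estimation themselves are not reproduced here).

    # Invert the classical scattering model I = J*t + A*(1 - t) to recover
    # the scene radiance J from estimated atmospheric light and transmission.
    import numpy as np

    def recover_radiance(I, A, t, t_min=0.1):
        """I: (H, W, 3) hazy image in [0, 1]; A: (3,) atmospheric light;
        t: (H, W) transmission map."""
        t = np.clip(t, t_min, 1.0)[..., None]  # floor t to avoid blow-up
        return np.clip((I - A) / t + A, 0.0, 1.0)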

  16. Should previous mammograms be digitised in the transition to digital mammography?

    International Nuclear Information System (INIS)

    Taylor-Phillips, S.; Gale, A.G.; Wallis, M.G.

    2009-01-01

    Breast screening specificity is improved if previous mammograms are available, which presents a challenge when converting to digital mammography. Two display options were investigated: mounting previous film mammograms on a multiviewer adjacent to the workstation, or digitising them for soft copy display. Eight qualified screen readers were videotaped undertaking routine screen reading for two 45-min sessions in each scenario. Analysis of gross eye and head movements showed that when digitised, previous mammograms were examined a greater number of times per case (p=0.03), due to a combination of being used in 19% more cases (p=0.04) and where used, looked at a greater number of times (28% increase, p=0.04). Digitising previous mammograms reduced both the average time taken per case by 18% (p=0.04) and the participants' perceptions of workload (p < 0.05). Digitising previous analogue mammograms may be advantageous, in particular in increasing their level of use. (orig.)

  17. Learning gene regulatory networks from only positive and unlabeled data

    Directory of Open Access Journals (Sweden)

    Elkan Charles

    2010-05-01

    Full Text Available Abstract Background Recently, supervised learning methods have been exploited to reconstruct gene regulatory networks from gene expression data. The reconstruction of a network is modeled as a binary classification problem for each pair of genes. A statistical classifier is trained to recognize the relationships between the activation profiles of gene pairs. This approach has been proven to outperform previous unsupervised methods. However, the supervised approach raises open questions. In particular, although known regulatory connections can safely be assumed to be positive training examples, obtaining negative examples is not straightforward, because definite knowledge is typically not available that a given pair of genes do not interact. Results A recent advance in research on data mining is a method capable of learning a classifier from only positive and unlabeled examples, that does not need labeled negative examples. Applied to the reconstruction of gene regulatory networks, we show that this method significantly outperforms the current state of the art of machine learning methods. We assess the new method using both simulated and experimental data, and obtain major performance improvement. Conclusions Compared to unsupervised methods for gene network inference, supervised methods are potentially more accurate, but for training they need a complete set of known regulatory connections. A supervised method that can be trained using only positive and unlabeled data, as presented in this paper, is especially beneficial for the task of inferring gene regulatory networks, because only an incomplete set of known regulatory connections is available in public databases such as RegulonDB, TRRD, KEGG, Transfac, and IPA.
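    A minimal sketch of this positive-unlabeled learning scheme, in the spirit the abstract describes: train a probabilistic classifier to separate labeled positives from unlabeled pairs, estimate the labeling frequency c on held-out positives, and rescale the scores. The sketch assumes scikit-learn; feature construction for gene pairs is left abstract.

    # Positive-unlabeled (PU) learning sketch: f separates labeled-positive
    # from unlabeled examples; c = E[f(x) | x positive] estimated on held-out
    # positives; P(interaction | x) is approximated by f(x) / c.
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split

    def fit_pu(X, s):
        """X: feature vectors for gene pairs; s: 1 if known regulatory
        connection (labeled positive), 0 if unlabeled."""
        X_tr, X_ho, s_tr, s_ho = train_test_split(
            X, s, test_size=0.2, stratify=s, random_state=0)
        f = LogisticRegression(max_iter=1000).fit(X_tr, s_tr)
        c = f.predict_proba(X_ho[s_ho == 1])[:, 1].mean()  # labeling freq.
        return f, c

    def predict_interaction_prob(f, c, X):
        return np.clip(f.predict_proba(X)[:, 1] / c, 0.0, 1.0)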

  18. AN EFFICIENT DATA MINING METHOD TO FIND FREQUENT ITEM SETS IN LARGE DATABASE USING TR- FCTM

    Directory of Open Access Journals (Sweden)

    Saravanan Suba

    2016-01-01

    Full Text Available Mining association rules in large databases is one of the most popular data mining techniques for business decision makers. Discovering frequent item sets is the core process in association rule mining. Numerous algorithms are available in the literature to find frequent patterns. Apriori and FP-tree are the most common methods for finding frequent items. Apriori finds significant frequent items using candidate generation with a larger number of database scans. FP-tree uses two database scans to find significant frequent items without using candidate generation. The proposed TR-FCTM (Transaction Reduction-Frequency Count Table Method) discovers significant frequent items by generating full candidates once to form a frequency count table with one database scan. Experimental results of TR-FCTM show that this algorithm outperforms both Apriori and FP-tree.
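    The core idea of counting candidate itemsets in a single database scan via a frequency count table can be sketched as follows; this is an illustration of the idea only, not the exact TR-FCTM algorithm (the transaction-reduction step is omitted).

    # Illustrative single-scan counting: enumerate each transaction's small
    # itemsets once, accumulate a frequency count table, filter by support.
    from collections import Counter
    from itertools import combinations

    def frequent_itemsets(transactions, min_support, max_size=3):
        table = Counter()
        for t in transactions:                  # one pass over the database
            items = sorted(set(t))
            for k in range(1, max_size + 1):
                table.update(combinations(items, k))
        return {s: n for s, n in table.items() if n >= min_support}

    # Example:
    # frequent_itemsets([['a','b','c'], ['a','c'], ['b','c']], min_support=2)
    # -> {('a',): 2, ('b',): 2, ('c',): 3, ('a','c'): 2, ('b','c'): 2}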

  19. Determination of the Boltzmann constant with cylindrical acoustic gas thermometry: new and previous results combined

    Science.gov (United States)

    Feng, X. J.; Zhang, J. T.; Lin, H.; Gillis, K. A.; Mehl, J. B.; Moldover, M. R.; Zhang, K.; Duan, Y. N.

    2017-10-01

    We report a new determination of the Boltzmann constant k_B using a cylindrical acoustic gas thermometer. We determined the length of the copper cavity from measurements of its microwave resonance frequencies. This contrasts with our previous work (Zhang et al 2011 Int. J. Thermophys. 32 1297; Lin et al 2013 Metrologia 50 417; Feng et al 2015 Metrologia 52 S343) that determined the length of a different cavity using two-color optical interferometry. In this new study, the half-widths of the acoustic resonances are closer to their theoretical values than in our previous work. Despite significant changes in resonator design and the way in which the cylinder length is determined, the value of k_B is substantially unchanged. We combined this result with our four previous results to calculate a global weighted mean of our k_B determinations. The calculation follows CODATA's method (Mohr and Taylor 2000 Rev. Mod. Phys. 72 351) for obtaining the weighted mean value of k_B that accounts for the correlations among the measured quantities in this work and in our four previous determinations of k_B. The weighted mean k̂_B is 1.380 6484(28) × 10^-23 J K^-1 with a relative standard uncertainty of 2.0 × 10^-6. The corresponding value of the universal gas constant is 8.314 459(17) J K^-1 mol^-1 with a relative standard uncertainty of 2.0 × 10^-6.
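    The correlated weighted mean referred to here has the standard generalized-least-squares form (our notation, not CODATA's): given the vector of determinations x with covariance matrix V,

    \hat{k}_B = \frac{\mathbf{1}^{\mathsf{T}} V^{-1} \mathbf{x}}
                     {\mathbf{1}^{\mathsf{T}} V^{-1} \mathbf{1}},
    \qquad
    u^2(\hat{k}_B) = \left( \mathbf{1}^{\mathsf{T}} V^{-1} \mathbf{1} \right)^{-1},

    where 1 is a vector of ones; with a diagonal V this reduces to the familiar inverse-variance weighting.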

  20. Do attitudes of families concerned influence features of children who claim to remember previous lives?

    Science.gov (United States)

    Pasricha, Satwant K

    2011-01-01

    Reported cases of nearly 2600 children (subjects) who claim to remember previous lives have been investigated in cultures with and without belief in reincarnation. The authenticity of most cases has been established. The aim was to study the influence on the features of the cases of the attitudes of the subjects' parents, of the families of the deceased persons with whom the subjects are identified, and of attention paid by others. The study is based on field investigations. Data are derived from analysis of a larger series of an ongoing project. Information on initial and subsequent attitudes of subjects' mothers was available for 292 and 136 cases, respectively; on attitudes of 227 families of the deceased person (previous personality) with whom the subject is identified; and on the extent of attention received from outsiders for 252 cases. Observations and interviews with multiple firsthand informants on both sides of the case, as well as some neutral informants, supplemented by examination of objective data, were the chief methods of investigation. The initial attitude of mothers varied from encouragement (21%) to neutrality or tolerance (51%) to discouragement (28%). However, it changed significantly from neutrality to taking measures to induce amnesia in their children for previous-life memories, due to various psychosocial pressures and prevalent beliefs. Families of the previous personalities, once convinced, showed complete acceptance in a majority of cases. Outside attention was received in 58% of cases. The positive attitude of parents might facilitate expression of memories, but subsequently the attitudes of persons concerned do not seem to alter features of the cases.

  1. Is Previous Respiratory Disease a Risk Factor for Lung Cancer?

    Science.gov (United States)

    Denholm, Rachel; Schüz, Joachim; Straif, Kurt; Stücker, Isabelle; Jöckel, Karl-Heinz; Brenner, Darren R.; De Matteis, Sara; Boffetta, Paolo; Guida, Florence; Brüske, Irene; Wichmann, Heinz-Erich; Landi, Maria Teresa; Caporaso, Neil; Siemiatycki, Jack; Ahrens, Wolfgang; Pohlabeln, Hermann; Zaridze, David; Field, John K.; McLaughlin, John; Demers, Paul; Szeszenia-Dabrowska, Neonila; Lissowska, Jolanta; Rudnai, Peter; Fabianova, Eleonora; Dumitru, Rodica Stanescu; Bencko, Vladimir; Foretova, Lenka; Janout, Vladimir; Kendzia, Benjamin; Peters, Susan; Behrens, Thomas; Vermeulen, Roel; Brüning, Thomas; Kromhout, Hans

    2014-01-01

    Rationale: Previous respiratory diseases have been associated with increased risk of lung cancer. Respiratory conditions often co-occur and few studies have investigated multiple conditions simultaneously. Objectives: Investigate lung cancer risk associated with chronic bronchitis, emphysema, tuberculosis, pneumonia, and asthma. Methods: The SYNERGY project pooled information on previous respiratory diseases from 12,739 case subjects and 14,945 control subjects from 7 case–control studies conducted in Europe and Canada. Multivariate logistic regression models were used to investigate the relationship between individual diseases adjusting for co-occurring conditions, and patterns of respiratory disease diagnoses and lung cancer. Analyses were stratified by sex, and adjusted for age, center, ever-employed in a high-risk occupation, education, smoking status, cigarette pack-years, and time since quitting smoking. Measurements and Main Results: Chronic bronchitis and emphysema were positively associated with lung cancer, after accounting for other respiratory diseases and smoking (e.g., in men: odds ratio [OR], 1.33; 95% confidence interval [CI], 1.20–1.48 and OR, 1.50; 95% CI, 1.21–1.87, respectively). A positive relationship was observed between lung cancer and pneumonia diagnosed 2 years or less before lung cancer (OR, 3.31; 95% CI, 2.33–4.70 for men), but not longer. Co-occurrence of chronic bronchitis and emphysema and/or pneumonia had a stronger positive association with lung cancer than chronic bronchitis “only.” Asthma had an inverse association with lung cancer, the association being stronger with an asthma diagnosis 5 years or more before lung cancer compared with shorter. Conclusions: Findings from this large international case–control consortium indicate that after accounting for co-occurring respiratory diseases, chronic bronchitis and emphysema continue to have a positive association with lung cancer. PMID:25054566

  2. 14 CFR 121.406 - Credit for previous CRM/DRM training.

    Science.gov (United States)

    2010-01-01

    Title 14 (Aeronautics and Space), revised as of 2010-01-01. § 121.406 Credit for previous CRM/DRM training. (a) For flightcrew members, the Administrator may credit CRM training received before March 19, 1998 toward all or part of the initial ground CRM training required by § 121.419. (b) ...

  3. Previous medical history of diseases in children with attention deficit hyperactivity disorder and their parents

    Directory of Open Access Journals (Sweden)

    Ayyoub Malek

    2014-02-01

    Full Text Available Introduction: The etiology of attention deficit hyperactivity disorder (ADHD) is complex and most likely includes genetic and environmental factors. This study was conducted to evaluate the role of previous medical history of diseases in ADHD children and their parents during the earlier years of the ADHD children's lives. Methods: In this case-control study, 164 ADHD children attending the Child and Adolescent Psychiatric Clinics of Tabriz University of Medical Sciences, Iran, were compared with 166 normal children selected in a random-cluster method from primary and guidance schools. The ADHD rating scale (parents version) and a clinical interview based on the Schedule for Affective Disorders and Schizophrenia for School-Age Children-Present and Lifetime Version (K-SADS) were used to diagnose ADHD cases and to select the control group. The two groups were compared for the existence of previous medical history of diseases in children and parents. Fisher's exact test and a logistic regression model were used for data analysis. Results: The frequency of maternal history of medical disorders (28.7% vs. 12.0%; P = 0.001) was significantly higher in children with ADHD compared with the control group. The frequencies of jaundice, dysentery, epilepsy, asthma, allergy, and head trauma in the medical history of children did not differ significantly between the two groups. Conclusion: According to this preliminary study, it may be concluded that the maternal history of medical disorders is one of the contributing risk factors for ADHD.

  4. Are previous episodes of bacterial vaginosis a predictor for vaginal symptoms in breast cancer patients treated with aromatase inhibitors?

    DEFF Research Database (Denmark)

    Gade, Malene R; Goukasian, Irina; Panduro, Nathalie

    2018-01-01

    Objective To estimate the prevalence of vaginal symptoms in postmenopausal women with breast cancer exposed to aromatase inhibitors, and to investigate if the risk of vaginal symptoms is associated with previous episodes of bacterial vaginosis. Methods Patients from Rigshospitalet and Herlev University Hospital, Denmark, were identified through the register of the Danish Breast Cancer Cooperation Group, and 78 patients participated in the study. Semiquantitative questionnaires and telephone interviews were used to assess the prevalence of vaginal symptoms and previous episode(s) of bacterial vaginosis. Multivariable logistic regression models were used to assess the association between vaginal symptoms and previous episodes of bacterial vaginosis. Results Moderate to severe symptoms due to vaginal itching/irritation were experienced by 6.4% (95% CI: 2.8-14.1%), vaginal dryness by 28.4% (95% CI: 19

  5. Challenging previous conceptions of vegetarianism and eating disorders.

    Science.gov (United States)

    Fisak, B; Peterson, R D; Tantleff-Dunn, S; Molnar, J M

    2006-12-01

    The purpose of this study was to replicate and expand upon previous research that has examined the potential association between vegetarianism and disordered eating. Limitations of previous research studies are addressed, including possible low reliability of measures of eating pathology within vegetarian samples, use of only a few dietary restraint measures, and a paucity of research examining potential differences in body image and food choice motives of vegetarians versus nonvegetarians. Two hundred and fifty-six college students completed a number of measures of eating pathology and body image, and a food choice motives questionnaire. Interestingly, no significant differences were found between vegetarians and nonvegetarians in measures of eating pathology or body image. However, significant differences in food choice motives were found. Implications for both researchers and clinicians are discussed.

  6. Hybrid statistics-simulations based method for atom-counting from ADF STEM images

    Energy Technology Data Exchange (ETDEWEB)

    De wael, Annelies, E-mail: annelies.dewael@uantwerpen.be [Electron Microscopy for Materials Science (EMAT), University of Antwerp, Groenenborgerlaan 171, 2020 Antwerp (Belgium); De Backer, Annick [Electron Microscopy for Materials Science (EMAT), University of Antwerp, Groenenborgerlaan 171, 2020 Antwerp (Belgium); Jones, Lewys; Nellist, Peter D. [Department of Materials, University of Oxford, Parks Road, OX1 3PH Oxford (United Kingdom); Van Aert, Sandra, E-mail: sandra.vanaert@uantwerpen.be [Electron Microscopy for Materials Science (EMAT), University of Antwerp, Groenenborgerlaan 171, 2020 Antwerp (Belgium)

    2017-06-15

    A hybrid statistics-simulations based method for atom-counting from annular dark field scanning transmission electron microscopy (ADF STEM) images of monotype crystalline nanostructures is presented. Different atom-counting methods already exist for model-like systems. However, the increasing relevance of radiation damage in the study of nanostructures demands a method that allows atom-counting from low dose images with a low signal-to-noise ratio. Therefore, the hybrid method directly includes prior knowledge from image simulations into the existing statistics-based method for atom-counting, and accounts in this manner for possible discrepancies between actual and simulated experimental conditions. It is shown by means of simulations and experiments that this hybrid method outperforms the statistics-based method, especially for low electron doses and small nanoparticles. The analysis of a simulated low dose image of a small nanoparticle suggests that this method allows for far more reliable quantitative analysis of beam-sensitive materials. - Highlights: • A hybrid method for atom-counting from ADF STEM images is introduced. • Image simulations are incorporated into a statistical framework in a reliable manner. • Limits of the existing methods for atom-counting are far exceeded. • Reliable counting results from an experimental low dose image are obtained. • Progress towards reliable quantitative analysis of beam-sensitive materials is made.

  7. Distinctive distribution of lymphocytes in unruptured and previously untreated brain arteriovenous malformation

    Directory of Open Access Journals (Sweden)

    Yi Guo

    2014-12-01

    Full Text Available Aim: To test the hypothesis that lymphocyte infiltration in brain arteriovenous malformation (bAVM) is not associated with iron deposition (an indicator of micro-hemorrhage). Methods: Sections of unruptured, previously untreated bAVM specimens (n = 19) were stained immunohistochemically for T-lymphocytes (CD3+), B-lymphocytes (CD20+), plasma cells (CD138+) and macrophages (CD68+). Iron deposition was assessed by hematoxylin and eosin and Prussian blue stains. Superficial temporal arteries (STA) were used as controls. Results: Both T-lymphocytes and macrophages were present in unruptured, previously untreated bAVM specimens, whereas few B cells and plasma cells were detected. Iron deposition was detected in 8 specimens (42%; 95% confidence interval = 20-67%). The samples with iron deposition tended to have more macrophages than those without (666 ± 313 vs. 478 ± 174 cells/mm²; P = 0.11). T-cells were clustered on the luminal side of the endothelial surface, on the vessel wall, and in the perivascular regions. There was no correlation between T-lymphocyte load and iron deposition (P = 0.88). No macrophages or lymphocytes were detected in STA controls. Conclusion: T-lymphocytes were present in bAVM specimens. Unlike macrophages, the load and location of T-lymphocytes were not associated with iron deposition, suggesting the possibility of an independent cell-mediated immunological mechanism in bAVM pathogenesis.

  8. Comparison of statistical sampling methods with ScannerBit, the GAMBIT scanning module

    Energy Technology Data Exchange (ETDEWEB)

    Martinez, Gregory D. [University of California, Physics and Astronomy Department, Los Angeles, CA (United States); McKay, James; Scott, Pat [Imperial College London, Department of Physics, Blackett Laboratory, London (United Kingdom); Farmer, Ben; Conrad, Jan [AlbaNova University Centre, Oskar Klein Centre for Cosmoparticle Physics, Stockholm (Sweden); Stockholm University, Department of Physics, Stockholm (Sweden); Roebber, Elinore [McGill University, Department of Physics, Montreal, QC (Canada); Putze, Antje [LAPTh, Universite de Savoie, CNRS, Annecy-le-Vieux (France); Collaboration: The GAMBIT Scanner Workgroup

    2017-11-15

    We introduce ScannerBit, the statistics and sampling module of the public, open-source global fitting framework GAMBIT. ScannerBit provides a standardised interface to different sampling algorithms, enabling the use and comparison of multiple computational methods for inferring profile likelihoods, Bayesian posteriors, and other statistical quantities. The current version offers random, grid, raster, nested sampling, differential evolution, Markov Chain Monte Carlo (MCMC) and ensemble Monte Carlo samplers. We also announce the release of a new standalone differential evolution sampler, Diver, and describe its design, usage and interface to ScannerBit. We subject Diver and three other samplers (the nested sampler MultiNest, the MCMC GreAT, and the native ScannerBit implementation of the ensemble Monte Carlo algorithm T-Walk) to a battery of statistical tests. For this we use a realistic physical likelihood function, based on the scalar singlet model of dark matter. We examine the performance of each sampler as a function of its adjustable settings, and the dimensionality of the sampling problem. We evaluate performance on four metrics: optimality of the best fit found, completeness in exploring the best-fit region, number of likelihood evaluations, and total runtime. For Bayesian posterior estimation at high resolution, T-Walk provides the most accurate and timely mapping of the full parameter space. For profile likelihood analysis in less than about ten dimensions, we find that Diver and MultiNest score similarly in terms of best fit and speed, outperforming GreAT and T-Walk; in ten or more dimensions, Diver substantially outperforms the other three samplers on all metrics. (orig.)

  9. Transferability of hydrological models and ensemble averaging methods between contrasting climatic periods

    Science.gov (United States)

    Broderick, Ciaran; Matthews, Tom; Wilby, Robert L.; Bastola, Satish; Murphy, Conor

    2016-10-01

    Understanding hydrological model predictive capabilities under contrasting climate conditions enables more robust decision making. Using Differential Split Sample Testing (DSST), we analyze the performance of six hydrological models for 37 Irish catchments under climate conditions unlike those used for model training. Additionally, we consider four ensemble averaging techniques when examining interperiod transferability. DSST is conducted using 2/3 year noncontinuous blocks of (i) the wettest/driest years on record based on precipitation totals and (ii) years with a more/less pronounced seasonal precipitation regime. Model transferability between contrasting regimes was found to vary depending on the testing scenario, catchment, and evaluation criteria considered. As expected, the ensemble average outperformed most individual ensemble members. However, averaging techniques differed considerably in the number of times they surpassed the best individual model member. Bayesian Model Averaging (BMA) and the Granger-Ramanathan Averaging (GRA) method were found to outperform the simple arithmetic mean (SAM) and Akaike Information Criteria Averaging (AICA). Here GRA performed better than the best individual model in 51%-86% of cases (according to the Nash-Sutcliffe criterion). When assessing model predictive skill under climate change conditions we recommend (i) setting up DSST to select the best available analogues of expected annual mean and seasonal climate conditions; (ii) applying multiple performance criteria; (iii) testing transferability using a diverse set of catchments; and (iv) using a multimodel ensemble in conjunction with an appropriate averaging technique. Given the computational efficiency and performance of GRA relative to BMA, the former is recommended as the preferred ensemble averaging technique for climate assessment.
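    Granger-Ramanathan averaging amounts to fitting the member combination weights by least squares against the observations, which is why it is cheap relative to BMA. A minimal sketch follows (unconstrained variant; some formulations add an intercept or constrain the weights to be non-negative or sum to one).

    # Granger-Ramanathan averaging sketch: least-squares weights for the
    # ensemble members' simulations against observed flows.
    import numpy as np

    def gra_weights(sims, obs):
        """sims: (T, M) matrix of M members' simulations over T time steps;
        obs: (T,) observations. Returns (M,) combination weights."""
        w, *_ = np.linalg.lstsq(sims, obs, rcond=None)
        return w

    def gra_predict(sims, w):
        return sims @ w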

  10. An Extended Two-Phase Method for Accessing Sections of Out-of-Core Arrays

    Directory of Open Access Journals (Sweden)

    Rajeev Thakur

    1996-01-01

    Full Text Available A number of applications on parallel computers deal with very large data sets that cannot fit in main memory. In such applications, data must be stored in files on disks and fetched into memory during program execution. Parallel programs with large out-of-core arrays stored in files must read/write smaller sections of the arrays from/to files. In this article, we describe a method for accessing sections of out-of-core arrays efficiently. Our method, the extended two-phase method, uses collective I/O: Processors cooperate to combine several I/O requests into fewer larger granularity requests, to reorder requests so that the file is accessed in proper sequence, and to eliminate simultaneous I/O requests for the same data. In addition, the I/O workload is divided among processors dynamically, depending on the access requests. We present performance results obtained from two real out-of-core parallel applications – matrix multiplication and a Laplace's equation solver – and several synthetic access patterns, all on the Intel Touchstone Delta. These results indicate that the extended two-phase method significantly outperformed a direct (noncollective) method for accessing out-of-core array sections.

  11. Measurement of thermally ablated lesions in sonoelastographic images using level set methods

    Science.gov (United States)

    Castaneda, Benjamin; Tamez-Pena, Jose Gerardo; Zhang, Man; Hoyt, Kenneth; Bylund, Kevin; Christensen, Jared; Saad, Wael; Strang, John; Rubens, Deborah J.; Parker, Kevin J.

    2008-03-01

    The capability of sonoelastography to detect lesions based on elasticity contrast can be applied to monitor the creation of thermally ablated lesions. Currently, segmentation of lesions depicted in sonoelastographic images is performed manually, which can be a time-consuming process and prone to significant intra- and inter-observer variability. This work presents a semi-automated segmentation algorithm for sonoelastographic data. The user starts by planting a seed in the perceived center of the lesion. Fast marching methods use this information to create an initial estimate of the lesion. Subsequently, level set methods refine its final shape by attaching the segmented contour to edges in the image while maintaining smoothness. The algorithm is applied to in vivo sonoelastographic images from twenty-five thermally ablated lesions created in porcine livers. The estimated area is compared to results from manual segmentation and gross pathology images. Results show that the algorithm outperforms manual segmentation in accuracy, inter- and intra-observer variability. The processing time per image is significantly reduced.

  12. An unsupervised text mining method for relation extraction from biomedical literature.

    Directory of Open Access Journals (Sweden)

    Changqin Quan

    Full Text Available The wealth of interaction information provided in biomedical articles motivated the implementation of text mining approaches to automatically extract biomedical relations. This paper presents an unsupervised method based on pattern clustering and sentence parsing to deal with biomedical relation extraction. The pattern clustering algorithm is based on the Polynomial Kernel method, which identifies interaction words from unlabeled data; these interaction words are then used in relation extraction between entity pairs. Dependency parsing and phrase structure parsing are combined for relation extraction. Based on the semi-supervised KNN algorithm, we extend the proposed unsupervised approach to a semi-supervised approach by combining pattern clustering, dependency parsing and phrase structure parsing rules. We evaluated the approaches on two different tasks: (1) protein-protein interaction extraction, and (2) gene-suicide association extraction. The evaluation of task (1) on the benchmark dataset (AImed corpus) showed that our proposed unsupervised approach outperformed three supervised methods, which are rule-based, SVM-based, and kernel-based, respectively. The proposed semi-supervised approach is superior to the existing semi-supervised methods. The evaluation on gene-suicide association extraction on a smaller dataset from the Genetic Association Database and a larger dataset from publicly available PubMed showed that the proposed unsupervised and semi-supervised methods achieved much higher F-scores than the co-occurrence based method.

  13. Evaluation of signal energy calculation methods for a light-sharing SiPM-based PET detector

    Energy Technology Data Exchange (ETDEWEB)

    Wei, Qingyang [School of Automation and Electrical Engineering, University of Science & Technology Beijing, Beijing 100083 (China); Beijing Engineering Research Center of Industrial Spectrum Imaging, University of Science and Technology Beijing, Beijing 100083 (China); Ma, Tianyu; Xu, Tianpeng; Liu, Yaqiang; Wang, Shi [Department of Engineering Physics, Tsinghua University, Beijing 100084 (China); Gu, Yu, E-mail: guyu@ustb.edu.cn [School of Automation and Electrical Engineering, University of Science & Technology Beijing, Beijing 100083 (China)

    2017-03-11

    Signals of a light-sharing positron emission tomography (PET) detector are commonly multiplexed to three analog pulses (E, X, and Y) and then digitally sampled. From this procedure, the signal energy that is critical to detector performance is obtained. In this paper, different signal energy calculation strategies for a self-developed SiPM-based PET detector, including pulse height and different integration methods, are evaluated in terms of energy resolution and spread of the crystal response in the flood histogram using a root-mean-squared (RMS) index. Results show that integrations outperform the pulse height. Integration using the maximum derivative value of the pulse E as the landmark point and 28 integrated points (448 ns) has the best performance among the evaluated methods for our detector. Detector performance in terms of energy and position is improved with this integration method. The proposed methodology is expected to be applicable to other light-sharing PET detectors.
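    The best-performing strategy reported above can be sketched in a few lines: subtract a baseline, locate the landmark at the maximum derivative of the sampled pulse E, and sum a fixed 28-sample window (448 ns at the implied 16 ns sample spacing). The baseline window length here is an assumption for illustration.

    # Max-derivative landmark + fixed-window integration energy estimate.
    import numpy as np

    def pulse_energy(E, n_int=28, baseline_len=8):
        """E: 1-D array of digitized pulse samples."""
        baseline = E[:baseline_len].mean()      # pre-pulse baseline estimate
        s = E - baseline
        start = int(np.argmax(np.diff(s)))      # max-derivative landmark
        return s[start:start + n_int].sum()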

  14. A Multifeatures Fusion and Discrete Firefly Optimization Method for Prediction of Protein Tyrosine Sulfation Residues.

    Science.gov (United States)

    Guo, Song; Liu, Chunhua; Zhou, Peng; Li, Yanling

    2016-01-01

    Tyrosine sulfation is one of the ubiquitous protein posttranslational modifications, where sulfate groups are added to tyrosine residues. It plays significant roles in various physiological processes in eukaryotic cells. To explore the molecular mechanism of tyrosine sulfation, one of the prerequisites is to correctly identify possible protein tyrosine sulfation residues. In this paper, a novel method was presented to predict protein tyrosine sulfation residues from primary sequences. By means of informative feature construction and an elaborate feature selection and parameter optimization scheme, the proposed predictor achieved promising results and outperformed many other state-of-the-art predictors. Using the optimal feature subset, the proposed method achieved a mean MCC of 94.41% on the benchmark dataset, and an MCC of 90.09% on the independent dataset. The experimental performance indicated that our new proposed method could be effective in identifying important protein posttranslational modifications, and the feature selection scheme would be powerful in protein functional residue prediction research fields.

  15. Sensitivity-based virtual fields for the non-linear virtual fields method

    Science.gov (United States)

    Marek, Aleksander; Davis, Frances M.; Pierron, Fabrice

    2017-09-01

    The virtual fields method is an approach to inversely identify material parameters using full-field deformation data. In this manuscript, a new set of automatically defined virtual fields for non-linear constitutive models is proposed. These new sensitivity-based virtual fields reduce the influence of noise on the parameter identification. The sensitivity-based virtual fields were applied to a numerical example involving small-strain plasticity; however, the general formulation derived for these virtual fields is applicable to any non-linear constitutive model. To quantify the improvement offered by these new virtual fields, they were compared with stiffness-based and manually defined virtual fields. The proposed sensitivity-based virtual fields consistently identified the plastic model parameters and outperformed the stiffness-based and manually defined virtual fields when the data were corrupted by noise.

  16. An effective image classification method with the fusion of invariant feature and a new color descriptor

    Science.gov (United States)

    Mansourian, Leila; Taufik Abdullah, Muhamad; Nurliyana Abdullah, Lili; Azman, Azreen; Mustaffa, Mas Rina

    2017-02-01

    The Pyramid Histogram of Words (PHOW) combines Bag of Visual Words (BoVW) with spatial pyramid matching (SPM) in order to add location information to the extracted features. However, although PHOW variants have been extracted from various color spaces, they do not capture color information individually; that is, they discard color information, an important characteristic of any image and one motivated by human vision. This article concatenates a PHOW Multi-Scale Dense Scale-Invariant Feature Transform (MSDSIFT) histogram with a proposed color histogram to improve the performance of existing image classification algorithms. Performance evaluation on several datasets shows that the new approach outperforms existing state-of-the-art methods.
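    The fusion step itself amounts to concatenating the two descriptors into one feature vector before classification; a minimal sketch (the histogram sizes and the L1 normalization are assumptions for illustration, not the paper's exact pipeline):

    ```python
    import numpy as np

    def fuse_descriptors(phow_hist, color_hist):
        """L1-normalize each descriptor, then concatenate into one feature vector."""
        p = phow_hist / max(phow_hist.sum(), 1e-12)
        c = color_hist / max(color_hist.sum(), 1e-12)
        return np.concatenate([p, c])

    phow_hist = np.random.rand(1200)   # stand-in for a PHOW/MSDSIFT histogram
    color_hist = np.random.rand(96)    # stand-in for the proposed color histogram
    feature = fuse_descriptors(phow_hist, color_hist)  # fed to the classifier
    ```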

  17. Failure Mode and Effect Analysis using Soft Set Theory and COPRAS Method

    Directory of Open Access Journals (Sweden)

    Ze-Ling Wang

    2017-01-01

    Full Text Available Failure mode and effect analysis (FMEA) is a risk management technique frequently applied to enhance system performance and safety. In recent years, many researchers have shown an intense interest in improving FMEA due to inherent weaknesses associated with the classical risk priority number (RPN) method. In this study, we develop a new risk ranking model for FMEA based on soft set theory and the COPRAS method, which can deal with the limitations and enhance the performance of conventional FMEA. First, the trapezoidal fuzzy soft set is adopted to manage FMEA team members' linguistic assessments of failure modes. Then, a modified COPRAS method is utilized for determining the ranking order of the failure modes recognized in FMEA. In particular, we treat the risk factors as interdependent and employ the Choquet integral to obtain the aggregate risk of failures in the new FMEA approach. Finally, a practical FMEA problem is analyzed via the proposed approach to demonstrate its applicability and effectiveness. The result shows that the FMEA model developed in this study outperforms the traditional RPN method and provides a more reasonable risk assessment of failure modes.
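    The COPRAS ranking at the core of the approach can be sketched in a few lines; this minimal version uses a crisp decision matrix with equal treatment of criteria and omits the fuzzy soft set and Choquet-integral aggregation described in the paper (the matrix values and weights are made-up illustrations):

    ```python
    import numpy as np

    def copras_scores(X, weights, benefit):
        """COPRAS relative significance Q for alternatives (rows) over criteria (cols)."""
        D = X / X.sum(axis=0) * weights          # weighted, column-normalized matrix
        s_plus = D[:, benefit].sum(axis=1)       # sums over benefit criteria
        s_minus = D[:, ~benefit].sum(axis=1)     # sums over cost criteria
        return s_plus + s_minus.sum() / (s_minus * (1.0 / s_minus).sum())

    # Three failure modes scored on severity, occurrence, detection (all cost-type).
    X = np.array([[7.0, 4.0, 3.0],
                  [5.0, 6.0, 2.0],
                  [8.0, 3.0, 5.0]])
    weights = np.array([0.4, 0.35, 0.25])
    benefit = np.array([False, False, False])    # risk factors act as cost criteria
    priority = np.argsort(copras_scores(X, weights, benefit))  # ascending Q: riskiest first
    ```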

  18. Response to health insurance by previously uninsured rural children.

    Science.gov (United States)

    Tilford, J M; Robbins, J M; Shema, S J; Farmer, F L

    1999-08-01

    To examine the healthcare utilization and costs of previously uninsured rural children. Four years of claims data from a school-based health insurance program located in the Mississippi Delta. All children who were not Medicaid-eligible or were uninsured were eligible for limited benefits under the program. The 1987 National Medical Expenditure Survey (NMES) was used to compare utilization of services. The study represents a natural experiment in the provision of insurance benefits to a previously uninsured population. Premiums for the claims cost were set with little or no information on expected use of services. Claims from the insurer were used to form a panel data set. Mixed-model logistic and linear regressions were estimated to determine the response to insurance for several categories of health services. The use of services increased over time and approached the level of utilization in the NMES. Conditional medical expenditures also increased over time. Actuarial estimates of claims cost greatly exceeded actual claims cost. The provision of a limited medical, dental, and optical benefit package cost approximately $20-$24 per member per month in claims paid. An important uncertainty in providing health insurance to previously uninsured populations is whether a pent-up demand exists for health services. Evidence of a pent-up demand for medical services was not found in this study of rural school-age children. States considering partnerships with private insurers to implement the State Children's Health Insurance Program could lower premium costs by assembling basic data on previously uninsured children.

  19. Improvements on the seismic catalog previous to the 2011 El Hierro eruption.

    Science.gov (United States)

    Domínguez Cerdeña, Itahiza; del Fresno, Carmen

    2017-04-01

    Precursors of the submarine eruption of El Hierro (Canary Islands) in 2011 included 10,000 low-magnitude earthquakes and 5 cm of crustal deformation within the 81 days preceding the eruption onset on 10 October. Seismicity revealed a 20 km horizontal migration from the north to the south of the island, with depths ranging from 10 to 17 km and deeper events occurring further south. The earthquakes of the seismic catalog were manually picked by the IGN almost in real time, but there has been no subsequent revision yet to check for new unlocated events, and the completeness magnitude of the seismic catalog changes strongly during the swarm owing to the variable number of events per day. In this work we used different techniques to improve the quality of the seismic catalog. First, we applied different automatic algorithms to detect new events, including the STA/LTA method. Then, we performed a semiautomatic procedure to correlate the new P and S detections with known phases from the original catalog. The newly detected earthquakes were also located using the Hypoellipse algorithm. The resulting new catalog includes 15,000 new events, mainly concentrated in the last weeks of the swarm, and we assure a completeness magnitude of 1.2 during the whole series. As the seismicity from the original catalog had already been relocated using the hypoDD algorithm, we improved the locations of the new events using a master-cluster relocation. This method consists in relocating earthquakes towards a cluster of well-located events instead of a single event, as in the master-event method. In our case this cluster corresponds to the relocated earthquakes from the original catalog. Finally, we obtained a new equation for the local magnitude estimation which allows us to include corrections for each seismic station in order to avoid local effects. The resulting magnitude catalog fits better with the moment magnitude catalog obtained for the strong earthquakes of this series in previous studies.
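    The STA/LTA trigger mentioned for event detection compares a short-term average of signal amplitude against a long-term average; a minimal sketch (the window lengths, sampling rate, and threshold are illustrative assumptions, not values from the study):

    ```python
    import numpy as np

    def sta_lta(signal, fs, sta_win=1.0, lta_win=30.0):
        """Classic STA/LTA ratio on the squared amplitude (characteristic function)."""
        cf = signal ** 2
        n_sta, n_lta = int(sta_win * fs), int(lta_win * fs)
        sta = np.convolve(cf, np.ones(n_sta) / n_sta, mode="same")
        lta = np.convolve(cf, np.ones(n_lta) / n_lta, mode="same")
        return sta / np.maximum(lta, 1e-12)

    # Trigger candidate events where the ratio exceeds a threshold, e.g. 3.0:
    # candidates = np.where(sta_lta(trace, fs=100.0) > 3.0)[0]
    ```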

  20. Abiraterone in metastatic prostate cancer without previous chemotherapy

    NARCIS (Netherlands)

    Ryan, Charles J.; Smith, Matthew R.; de Bono, Johann S.; Molina, Arturo; Logothetis, Christopher J.; de Souza, Paul; Fizazi, Karim; Mainwaring, Paul; Piulats, Josep M.; Ng, Siobhan; Carles, Joan; Mulders, Peter F. A.; Basch, Ethan; Small, Eric J.; Saad, Fred; Schrijvers, Dirk; van Poppel, Hendrik; Mukherjee, Som D.; Suttmann, Henrik; Gerritsen, Winald R.; Flaig, Thomas W.; George, Daniel J.; Yu, Evan Y.; Efstathiou, Eleni; Pantuck, Allan; Winquist, Eric; Higano, Celestia S.; Taplin, Mary-Ellen; Park, Youn; Kheoh, Thian; Griffin, Thomas; Scher, Howard I.; Rathkopf, Dana E.; Boyce, A.; Costello, A.; Davis, I.; Ganju, V.; Horvath, L.; Lynch, R.; Marx, G.; Parnis, F.; Shapiro, J.; Singhal, N.; Slancar, M.; van Hazel, G.; Wong, S.; Yip, D.; Carpentier, P.; Luyten, D.; de Reijke, T.

    2013-01-01

    Abiraterone acetate, an androgen biosynthesis inhibitor, improves overall survival in patients with metastatic castration-resistant prostate cancer after chemotherapy. We evaluated this agent in patients who had not received previous chemotherapy. In this double-blind study, we randomly assigned

  1. Multi-National Banknote Classification Based on Visible-light Line Sensor and Convolutional Neural Network.

    Science.gov (United States)

    Pham, Tuyen Danh; Lee, Dong Eun; Park, Kang Ryoung

    2017-07-08

    Automatic recognition of banknotes is applied in payment facilities, such as automated teller machines (ATMs) and banknote counters. Besides the popular approaches that focus on studying the methods applied to various individual types of currencies, there have been studies conducted on simultaneous classification of banknotes from multiple countries. However, their methods were conducted with limited numbers of banknote images, national currencies, and denominations. To address this issue, we propose a multi-national banknote classification method based on visible-light banknote images captured by a one-dimensional line sensor and classified by a convolutional neural network (CNN) considering the size information of each denomination. Experiments conducted on the combined banknote image database of six countries with 62 denominations gave a classification accuracy of 100%, and results show that our proposed algorithm outperforms previous methods.
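    As a rough illustration of the kind of classifier described, a small CNN over visible-light banknote images assembled from a line sensor, here is a minimal Keras sketch (the input size and layer widths are assumptions for illustration; only the 62-class output reflects the abstract, and the paper's actual architecture is not reproduced here):

    ```python
    import tensorflow as tf

    num_classes = 62  # 62 denominations across six countries

    model = tf.keras.Sequential([
        tf.keras.layers.Input(shape=(64, 256, 1)),        # grayscale banknote image
        tf.keras.layers.Conv2D(16, 5, activation="relu"),
        tf.keras.layers.MaxPooling2D(2),
        tf.keras.layers.Conv2D(32, 5, activation="relu"),
        tf.keras.layers.MaxPooling2D(2),
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(128, activation="relu"),
        tf.keras.layers.Dense(num_classes, activation="softmax"),
    ])
    model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    # model.fit(images, labels, epochs=10)  # images: (N, 64, 256, 1), labels: (N,)
    ```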

  2. Squamous cell carcinoma arising in previously burned or irradiated skin

    International Nuclear Information System (INIS)

    Edwards, M.J.; Hirsch, R.M.; Broadwater, J.R.; Netscher, D.T.; Ames, F.C.

    1989-01-01

    Squamous cell carcinoma (SCC) arising in previously burned or irradiated skin was reviewed in 66 patients treated between 1944 and 1986. Healing of the initial injury was complicated in 70% of patients. Mean interval from initial injury to diagnosis of SCC was 37 years. The overwhelming majority of patients presented with a chronic intractable ulcer in previously injured skin. The regional relapse rate after surgical excision was very high, 58% of all patients. Predominant patterns of recurrence were in local skin and regional lymph nodes (93% of recurrences). Survival rates at 5, 10, and 20 years were 52%, 34%, and 23%, respectively. Five-year survival rates in previously burned and irradiated patients were not significantly different (53% and 50%, respectively). This review, one of the largest reported series, better defines SCC arising in previously burned or irradiated skin as a locally aggressive disease that is distinct from SCC arising in sunlight-damaged skin. An increased awareness of the significance of chronic ulceration in scar tissue may allow earlier diagnosis. Regional disease control and survival depend on surgical resection of all known disease and may require radical lymph node dissection or amputation

  3. Sunburn and sun-protective behaviors among adults with and without previous nonmelanoma skin cancer: a population-based study

    Science.gov (United States)

    Fischer, Alexander H.; Wang, Timothy S.; Yenokyan, Gayane; Kang, Sewon; Chien, Anna L.

    2016-01-01

    Background Individuals with previous nonmelanoma skin cancer (NMSC) are at increased risk for subsequent skin cancer, and should therefore limit UV exposure. Objective To determine whether individuals with previous NMSC engage in better sun protection than those with no skin cancer history. Methods We pooled self-reported data (2005 and 2010 National Health Interview Surveys) from US non-Hispanic white adults (758 with and 34,161 without previous NMSC). We calculated adjusted prevalence odds ratios (aPOR) and 95% confidence intervals (95% CI), taking into account the complex survey design. Results Individuals with previous NMSC versus no history of NMSC had higher rates of frequent use of shade (44.3% versus 27.0%; aPOR=1.41; 1.16–1.71), long sleeves (20.5% versus 7.7%; aPOR=1.55; 1.21–1.98), a wide-brimmed hat (26.1% versus 10.5%; aPOR=1.52; 1.24–1.87), and sunscreen (53.7% versus 33.1%; aPOR=2.11; 95% CI=1.73–2.59), but did not have significantly lower odds of recent sunburn (29.7% versus 40.7%; aPOR=0.95; 0.77–1.17). Among subjects with previous NMSC, recent sunburn was inversely associated with age, sun avoidance, and shade but not sunscreen. Limitations Self-reported cross-sectional data and unavailable information quantifying regular sun exposure. Conclusion Physicians should emphasize sunburn prevention when counseling patients with previous NMSC, especially younger adults, focusing on shade and sun avoidance over sunscreen. PMID:27198078

  4. The cost-constrained traveling salesman problem

    Energy Technology Data Exchange (ETDEWEB)

    Sokkappa, P.R.

    1990-10-01

    The Cost-Constrained Traveling Salesman Problem (CCTSP) is a variant of the well-known Traveling Salesman Problem (TSP). In the TSP, the goal is to find a tour of a given set of cities such that the total cost of the tour is minimized. In the CCTSP, each city is given a value, and a fixed cost-constraint is specified. The objective is to find a subtour of the cities that achieves maximum value without exceeding the cost-constraint. Thus, unlike the TSP, the CCTSP requires both selection and sequencing. As a consequence, most results for the TSP cannot be extended to the CCTSP. We show that the CCTSP is NP-hard and that no K-approximation algorithm or fully polynomial approximation scheme exists, unless P = NP. We also show that several special cases are polynomially solvable. Algorithms for the CCTSP, which outperform previous methods, are developed in three areas: upper bounding methods, exact algorithms, and heuristics. We found that a bounding strategy based on the knapsack problem performs better, both in speed and in the quality of the bounds, than methods based on the assignment problem. Likewise, we found that a branch-and-bound approach using the knapsack bound was superior to a method based on a common branch-and-bound method for the TSP. In our study of heuristic algorithms, we found that, when selecting nodes for inclusion in the subtour, it is important to consider the "neighborhood" of the nodes. A node with low value that brings the subtour near many other nodes may be more desirable than an isolated node of high value. We found two types of repetition to be desirable: repetitions based on randomization in the subtour building process, and repetitions encouraging the inclusion of different subsets of the nodes. By varying the number and type of repetitions, we can adjust the computation time required by our method to obtain algorithms that outperform previous methods.
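    The knapsack bound the study found superior can be illustrated simply: ignoring sequencing, relax the CCTSP to a knapsack in which each city costs at least its cheapest incident edge, and take the fractional-knapsack optimum as an upper bound on achievable value. A minimal sketch of that relaxation (the cost model here is a simplifying assumption, not the dissertation's exact bound):

    ```python
    def knapsack_upper_bound(values, min_costs, budget):
        """Fractional-knapsack relaxation: upper bound on total value within budget."""
        items = sorted(zip(values, min_costs),
                       key=lambda vc: vc[0] / vc[1], reverse=True)
        total = 0.0
        for value, cost in items:
            if budget >= cost:
                total, budget = total + value, budget - cost
            else:
                total += value * budget / cost   # take a fraction of the last item
                break
        return total

    # Cities with values and cheapest-incident-edge costs, cost budget of 10.
    print(knapsack_upper_bound([8, 5, 3], [4, 3, 5], budget=10))  # 14.8
    ```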

  5. Outcomes of Ahmed glaucoma valve implantation in advanced primary congenital glaucoma with previous surgical failure

    Science.gov (United States)

    Huang, Jingjing; Lin, Jialiu; Wu, Ziqiang; Xu, Hongzhi; Zuo, Chengguo; Ge, Jian

    2015-01-01

    Purpose The purpose of this study was to evaluate the intermediate surgical results of Ahmed glaucoma valve (AGV) implantation in patients less than 7 years of age with advanced primary congenital glaucoma who have failed previous surgeries. Patients and methods Consecutive patients with advanced primary congenital glaucoma that failed previous operations and had undergone subsequent AGV implantation were evaluated retrospectively. Surgical success was defined as 1) intraocular pressure (IOP) ≥6 and ≤21 mmHg; 2) IOP reduction of at least 30% relative to preoperative values; and 3) no need for additional surgical intervention for IOP control, loss of light perception, or serious complications. Results Fourteen eyes of eleven patients were studied. Preoperatively, the average axial length was 27.71±1.52 (25.56–30.80) mm, corneal diameter was 14.71±1.07 (13.0–16.0) mm, cup-to-disc ratio was 0.95±0.04 (0.9–1.0), and IOP was 39.5±5.7 (30–55) mmHg. The mean follow-up time was 18.29±10.96 (5–44, median 18) months. There were significant reductions in IOP and in the number of glaucoma medications. AGV implantation appears effective for advanced primary congenital glaucoma unresponsive to previous surgical intervention, despite a relatively high incidence of severe surgical complications. PMID:26082610

  6. Adolescents' physical activity is associated with previous and current physical activity practice by their parents

    Directory of Open Access Journals (Sweden)

    Diego Giulliano Destro Christofaro

    Full Text Available Abstract Objective: The purpose of this study was to determine whether parents' current and previous physical activity practice is associated with adolescents' physical activity. Methods: The sample was composed of 1231 adolescents (14-17 years), and 1202 mothers and 871 fathers were interviewed. Weight and height of the adolescents were measured. Self-reported parents' weight and height were obtained. The current and previous physical activity levels (Baecke's questionnaire) of parents (during childhood and adolescence) and adolescents' physical activity levels were obtained using a questionnaire. The magnitude of the associations between parent and adolescent physical activity levels was determined by binary logistic regression (adjusted by sex, age, and socioeconomic level of adolescents and education level of parents). Results: The current physical activity practice by parents was associated with adolescents' physical activity (p < 0.001). The physical activities reported by parents in their childhood and adolescence were also associated with higher physical activity levels among adolescents. Adolescents whose parents were both physically active in the past and present were six times (OR = 6.67 [CI = 1.94-22.79]) more likely to be physically active compared to adolescents whose parents were not physically active in the past. Conclusions: The current and previous physical activities of parents were associated with higher levels of physical activity in adolescents, even after controlling for confounding factors.

  7. Dissociation in decision bias mechanism between probabilistic information and previous decision

    Directory of Open Access Journals (Sweden)

Yoshiyuki Kaneko

    2015-05-01

    Full Text Available Target detection performance is known to be influenced by events in the previous trials. It has not been clear, however, whether this bias effect is due to the previous sensory stimulus, motor response, or decision. Also it remains open whether or not the previous trial effect emerges via the same mechanism as the effect of knowledge about the target probability. In the present study, we asked normal human subjects to make a decision about the presence or absence of a visual target. We presented a pre-cue indicating the target probability before the stimulus, and also a decision-response mapping cue after the stimulus so as to tease apart the effect of decision from that of motor response. We found that the target detection performance was significantly affected by the probability cue in the current trial and also by the decision in the previous trial. While the information about the target probability modulated the decision criteria, the previous decision modulated the sensitivity to target-relevant sensory signals (d-prime). Using functional magnetic resonance imaging, we also found that activation in the left intraparietal sulcus was decreased when the probability cue indicated a high probability of the target. By contrast, activation in the right inferior frontal gyrus was increased when the subjects made a target-present decision in the previous trial, but this change was observed specifically when the target was present in the current trial. Activation in these regions was associated with individual differences in the decision computation parameters. We argue that the previous decision biases the target detection performance by modulating the processing of target-selective information, and this mechanism is distinct from modulation of decision criteria due to expectation of a target.

  8. Dissociation in decision bias mechanism between probabilistic information and previous decision

    Science.gov (United States)

    Kaneko, Yoshiyuki; Sakai, Katsuyuki

    2015-01-01

    Target detection performance is known to be influenced by events in the previous trials. It has not been clear, however, whether this bias effect is due to the previous sensory stimulus, motor response, or decision. Also it remains open whether or not the previous trial effect emerges via the same mechanism as the effect of knowledge about the target probability. In the present study, we asked normal human subjects to make a decision about the presence or absence of a visual target. We presented a pre-cue indicating the target probability before the stimulus, and also a decision-response mapping cue after the stimulus so as to tease apart the effect of decision from that of motor response. We found that the target detection performance was significantly affected by the probability cue in the current trial and also by the decision in the previous trial. While the information about the target probability modulated the decision criteria, the previous decision modulated the sensitivity to target-relevant sensory signals (d-prime). Using functional magnetic resonance imaging (fMRI), we also found that activation in the left intraparietal sulcus (IPS) was decreased when the probability cue indicated a high probability of the target. By contrast, activation in the right inferior frontal gyrus (IFG) was increased when the subjects made a target-present decision in the previous trial, but this change was observed specifically when the target was present in the current trial. Activation in these regions was associated with individual differences in the decision computation parameters. We argue that the previous decision biases the target detection performance by modulating the processing of target-selective information, and this mechanism is distinct from modulation of decision criteria due to expectation of a target. PMID:25999844

  9. A Heuristic and Hybrid Method for the Tank Allocation Problem in Maritime Bulk Shipping

    DEFF Research Database (Denmark)

    Vilhelmsen, Charlotte; Larsen, Jesper; Lusby, Richard Martin

    Many bulk ships have multiple tanks and can thereby carry multiple inhomogeneous products at a time. A major challenge when operating such ships is how to best allocate cargoes to available tanks while taking tank capacity, safety restrictions, ship stability and strength as well as other...... ship route. We have developed a randomised heuristic for efficiently finding feasible allocations and computational results show that it can solve 99% of the considered instances within 0.5 seconds and all of them if allowed longer time. The heuristic is designed to work as an efficient subproblem solver...... and in such a setting with running times below e.g. 5 seconds, the heuristic clearly outperforms an earlier method by consistently solving more instances and effectively cutting 84% of the average running time. Furthermore, we have combined our heuristic with a modified version of the earlier method to derive a hybrid...

  10. Sequential Probability Ratio Testing with Power Projective Base Method Improves Decision-Making for BCI

    Science.gov (United States)

    Liu, Rong

    2017-01-01

    Obtaining a fast and reliable decision is an important issue in brain-computer interfaces (BCI), particularly in practical real-time applications such as wheelchair or neuroprosthetic control. In this study, the EEG signals were first analyzed with a power projective base method. We then applied a decision-making model, sequential probability ratio testing (SPRT), for single-trial classification of motor imagery movement events. The unique strength of this proposed classification method lies in its accumulative process, which increases the discriminative power as more and more evidence is observed over time. The properties of the method were illustrated on thirteen subjects' recordings from three datasets. Results showed that our proposed power projective method outperformed two benchmark methods for every subject. Moreover, with the sequential classifier, the accuracies across subjects were significantly higher than with nonsequential ones. The average maximum accuracy of the SPRT method was 84.1%, as compared with 82.3% for the sequential Bayesian (SB) method. The proposed SPRT method provides an explicit relationship between stopping time, thresholds, and error, which is important for balancing the time-accuracy trade-off. These results suggest SPRT would be useful in speeding up decision-making while trading off errors in BCI. PMID:29348781
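    SPRT accumulates log-likelihood ratios over successive observations and stops as soon as a decision threshold is crossed, which is where the explicit stopping-time/error relationship comes from. A minimal sketch for a two-class Gaussian case (the class means, noise level, and error-rate settings are illustrative assumptions, not the study's features):

    ```python
    import math
    import random

    def sprt(observations, mu0=0.0, mu1=1.0, sigma=1.0, alpha=0.05, beta=0.05):
        """Return (+1 for H1, -1 for H0, 0 if undecided) and the stopping index."""
        upper = math.log((1 - beta) / alpha)   # accept H1 above this
        lower = math.log(beta / (1 - alpha))   # accept H0 below this
        llr = 0.0
        for i, x in enumerate(observations, 1):
            # log p(x|H1) - log p(x|H0) for equal-variance Gaussians
            llr += ((x - mu0) ** 2 - (x - mu1) ** 2) / (2 * sigma ** 2)
            if llr >= upper:
                return +1, i
            if llr <= lower:
                return -1, i
        return 0, len(observations)

    random.seed(0)
    xs = [random.gauss(1.0, 1.0) for _ in range(50)]  # evidence actually from H1
    print(sprt(xs))  # typically decides +1 after only a handful of samples
    ```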

  11. Cardiovascular magnetic resonance in adults with previous cardiovascular surgery.

    Science.gov (United States)

    von Knobelsdorff-Brenkenhoff, Florian; Trauzeddel, Ralf Felix; Schulz-Menger, Jeanette

    2014-03-01

    Cardiovascular magnetic resonance (CMR) is a versatile non-invasive imaging modality that serves a broad spectrum of indications in clinical cardiology, supported by proven evidence. Most of its numerous applications are appropriate in patients with previous cardiovascular surgery in the same manner as in non-surgical subjects. However, some specifics have to be considered. This review article is intended to provide information about the application of CMR in adults with previous cardiovascular surgery. In particular, the two main scenarios, i.e. following coronary artery bypass surgery and following heart valve surgery, are highlighted. Furthermore, several pictorial descriptions of other potential indications for CMR after cardiovascular surgery are given.

  12. AN EFFICIENT INITIALIZATION METHOD FOR K-MEANS CLUSTERING OF HYPERSPECTRAL DATA

    Directory of Open Access Journals (Sweden)

    A. Alizade Naeini

    2014-10-01

    Full Text Available K-means is definitely the most frequently used partitional clustering algorithm in the remote sensing community. Unfortunately, due to its gradient descent nature, this algorithm is highly sensitive to the initial placement of the cluster centers. This problem worsens for high-dimensional data such as hyperspectral remotely sensed imagery. To tackle this problem, in this paper, the spectral signatures of the endmembers in the image scene are extracted and used as the initial positions of the cluster centers. For this purpose, in the first step, a Neyman–Pearson detection theory based eigen-thresholding method (i.e., the HFC method) is employed to estimate the number of endmembers in the image. Afterwards, the spectral signatures of the endmembers are obtained using the Minimum Volume Enclosing Simplex (MVES) algorithm. Eventually, these spectral signatures are used to initialize the k-means clustering algorithm. The proposed method is implemented on a hyperspectral dataset acquired by the ROSIS sensor with 103 spectral bands over the Pavia University campus, Italy. For comparative evaluation, two other commonly used initialization methods (i.e., the Bradley & Fayyad (BF) and random methods) are implemented and compared. The confusion matrix, overall accuracy and Kappa coefficient are employed to assess the methods’ performance. The evaluations demonstrate that the proposed solution outperforms the other initialization methods and can be applied for unsupervised classification of hyperspectral imagery for landcover mapping.
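    In scikit-learn terms, seeding k-means with endmember signatures amounts to passing the signature matrix as the initial centers; a minimal sketch (the endmember-extraction step is stubbed out with randomly chosen pixels, standing in for HFC + MVES, and the data are random placeholders):

    ```python
    import numpy as np
    from sklearn.cluster import KMeans

    rng = np.random.default_rng(0)
    pixels = rng.random((10000, 103))   # hyperspectral cube flattened to (N, bands)

    # Stand-in for HFC (estimate k) + MVES (extract endmember signatures):
    k = 9
    endmembers = pixels[rng.choice(len(pixels), size=k, replace=False)]

    km = KMeans(n_clusters=k, init=endmembers, n_init=1).fit(pixels)
    labels = km.labels_                 # unsupervised land-cover map
    ```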

  13. A GPS Satellite Clock Offset Prediction Method Based on Fitting Clock Offset Rates Data

    Directory of Open Access Journals (Sweden)

    WANG Fuhong

    2016-12-01

    Full Text Available A satellite atomic clock offset prediction method based on fitting and modeling clock offset rate data is proposed. The method builds a quadratic or linear model combined with periodic terms to fit the time series of clock offset rates, and computes the trend coefficients of the model by best estimation. The clock offset precisely estimated at the initial prediction epoch is directly adopted as the constant coefficient of the model. The clock offsets in the rapid ephemeris (IGR) provided by IGS are used as the modeling data sets in experiments on different types of GPS satellite clocks. The results show that the prediction accuracies of the proposed method for 3, 6, 12 and 24 h reach 0.43, 0.58, 0.90 and 1.47 ns respectively, outperforming the traditional prediction method based on fitting the original clock offsets by 69.3%, 61.8%, 50.5% and 37.2%. Compared with the IGU real-time clock products provided by IGS, the prediction accuracies of the new method improve by about 15.7%, 23.7%, 27.4% and 34.4%, respectively.
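    The idea, fitting a trend-plus-periodic model to the clock-rate series and then anchoring the integrated prediction at a precisely known initial offset, can be sketched with ordinary least squares (the 12 h period and all numbers here are illustrative assumptions, not the paper's settings):

    ```python
    import numpy as np

    # Toy clock-offset-rate series sampled every 15 min over one day (ns/s).
    t = np.arange(0, 86400, 900.0)
    rng = np.random.default_rng(1)
    w = 2 * np.pi / 43200   # one periodic term with a 12 h period (assumption)
    rates = (5e-4 + 1e-9 * t + 2e-5 * np.sin(w * t)
             + rng.normal(0, 1e-6, t.size))

    # Design matrix: constant + linear trend + sine/cosine periodic terms.
    A = np.column_stack([np.ones_like(t), t, np.sin(w * t), np.cos(w * t)])
    coef, *_ = np.linalg.lstsq(A, rates, rcond=None)

    def predict_offset(tp, offset0):
        """Integrate the fitted rate model from 0 to tp and add the known offset."""
        a0, a1, b, c = coef
        integral = (a0 * tp + a1 * tp**2 / 2
                    - b * (np.cos(w * tp) - 1) / w + c * np.sin(w * tp) / w)
        return offset0 + integral

    pred = predict_offset(3 * 3600.0, offset0=12.0)  # 3 h ahead of a 12 ns offset
    ```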

  14. High Titers of Chlamydia trachomatis Antibodies in Brazilian Women with Tubal Occlusion or Previous Ectopic Pregnancy

    Directory of Open Access Journals (Sweden)

    A. C. S. Machado

    2007-01-01

    Full Text Available Objective. To evaluate serum chlamydia antibody titers (CATs) in tubal occlusion or previous ectopic pregnancy and the associated risk factors. Methods. The study population consisted of 55 women with tubal damage and 55 parous women. CAT was measured using the whole-cell inclusion immunofluorescence test and cervical chlamydial DNA was detected by PCR. Odds ratios were calculated to assess variables associated with C. trachomatis infection. Results. The prevalence of chlamydial antibodies and antibody titers in women with tubal occlusion or previous ectopic pregnancy was significantly higher (P < .01) than in parous women. Stepwise logistic regression analysis showed that chlamydia IgG antibodies were associated with tubal damage and with a larger number of lifetime sexual partners. Conclusions. Chlamydia antibody titers were associated with tubal occlusion, prior ectopic pregnancy, and with sexual behavior, suggesting that a chlamydia infection was the major contributor to the tubal damage in these women.

  15. A Hybrid Method for Interpolating Missing Data in Heterogeneous Spatio-Temporal Datasets

    Directory of Open Access Journals (Sweden)

    Min Deng

    2016-02-01

    Full Text Available Space-time interpolation is widely used to estimate missing or unobserved values in a dataset integrating both spatial and temporal records. Although space-time interpolation plays a key role in space-time modeling, existing methods were mainly developed for space-time processes that exhibit stationarity in space and time. It remains challenging to model the heterogeneity of space-time data within the interpolation model. To overcome this limitation, in this study, a novel space-time interpolation method considering both spatial and temporal heterogeneity is developed for estimating missing data in space-time datasets. The interpolation operation is first implemented in the spatial and temporal dimensions. Heterogeneous covariance functions are constructed to obtain the best linear unbiased estimates in the spatial and temporal dimensions. Spatial and temporal correlations are then considered to combine the interpolation results in the spatial and temporal dimensions to estimate the missing data. The proposed method is tested on annual average temperature and precipitation data in China (1984–2009). Experimental results show that, for these datasets, the proposed method outperforms three state-of-the-art methods: spatio-temporal kriging, spatio-temporal inverse distance weighting, and the point estimation model of biased hospitals-based area disease estimation.
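    The overall scheme, interpolating separately along the spatial and temporal dimensions and then blending the two estimates, can be illustrated with inverse-distance weighting as a stand-in interpolator (the blending weight and all data are made up; the paper's heterogeneous covariance functions are not reproduced):

    ```python
    import numpy as np

    def idw(coords, values, target, power=2.0):
        """Inverse-distance-weighted estimate at a target coordinate."""
        d = np.linalg.norm(coords - target, axis=-1)
        if np.any(d == 0):
            return float(values[np.argmin(d)])
        w = 1.0 / d ** power
        return float(np.sum(w * values) / np.sum(w))

    # Estimate a missing reading at station s0 in year 2000.
    stations = np.array([[0.0, 0.0], [1.0, 0.5], [0.5, 1.0]])
    vals_2000 = np.array([14.2, 15.1, 13.8])   # same year, neighboring stations
    years = np.array([[1998.0], [1999.0], [2001.0]])
    vals_s0 = np.array([14.0, 14.3, 14.6])     # same station, neighboring years

    spatial_est = idw(stations, vals_2000, np.array([0.4, 0.4]))
    temporal_est = idw(years, vals_s0, np.array([2000.0]))
    alpha = 0.5                                 # blend weight (assumption)
    estimate = alpha * spatial_est + (1 - alpha) * temporal_est
    ```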

  16. Multiagent scheduling method with earliness and tardiness objectives in flexible job shops.

    Science.gov (United States)

    Wu, Zuobao; Weng, Michael X

    2005-04-01

    Flexible job-shop scheduling problems are an important extension of the classical job-shop scheduling problems and present additional complexity. Such complexity is mainly due to the considerable overlap in the capabilities of modern machines. Classical scheduling methods are generally incapable of addressing such capacity overlapping. We propose a multiagent scheduling method with job earliness and tardiness objectives in a flexible job-shop environment. The earliness and tardiness objectives are consistent with the just-in-time production philosophy, which has attracted significant attention in both industry and the academic community. A new job-routing and sequencing mechanism is proposed. In this mechanism, two kinds of jobs are defined to distinguish jobs with one operation left from jobs with more than one operation left. Different criteria are proposed to route these two kinds of jobs. Job sequencing makes it possible to hold a job that would otherwise be completed too early. Two heuristic algorithms for job sequencing are developed to deal with these two kinds of jobs. The computational experiments show that the proposed multiagent scheduling method significantly outperforms the existing scheduling methods in the literature. In addition, the proposed method is quite fast: the simulation time to find a complete schedule with over 2000 jobs on ten machines is less than 1.5 min.
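    The earliness and tardiness objective underlying the method is a weighted deviation of each job's completion time from its due date; a minimal sketch (the weights and toy schedule are illustrative assumptions):

    ```python
    def earliness_tardiness_cost(completion, due, w_early=1.0, w_tardy=2.0):
        """Just-in-time penalty: pay for finishing early as well as late."""
        return sum(w_early * max(d - c, 0) + w_tardy * max(c - d, 0)
                   for c, d in zip(completion, due))

    # Three jobs: one finishes 1 unit early, one on time, one 2 units late.
    print(earliness_tardiness_cost(completion=[5, 9, 14], due=[6, 9, 12]))  # 5.0
    ```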

  17. An effective trust-based recommendation method using a novel graph clustering algorithm

    Science.gov (United States)

    Moradi, Parham; Ahmadian, Sajad; Akhlaghian, Fardin

    2015-10-01

    Recommender systems are programs that aim to provide personalized recommendations to users for specific items (e.g. music, books) in online sharing communities or on e-commerce sites. Collaborative filtering methods are important and widely accepted types of recommender systems that generate recommendations based on the ratings of like-minded users. On the other hand, these systems confront several inherent issues such as the data sparsity and cold start problems, caused by there being too few ratings relative to the unknowns that need to be predicted. Incorporating trust information into collaborative filtering systems is an attractive approach to resolving these problems. In this paper, we present a model-based collaborative filtering method that applies a novel graph clustering algorithm and also considers trust statements. In the proposed method, the problem space is first represented as a graph, and a sparsest-subgraph-finding algorithm is applied to the graph to find the initial cluster centers. Then, the proposed graph clustering algorithm is performed to obtain the appropriate user/item clusters. Finally, the identified clusters are used as a set of neighbors to recommend unseen items to the current active user. Experimental results based on three real-world datasets demonstrate that the proposed method outperforms several state-of-the-art recommender system methods.

  18. Automatic Detection of Wild-type Mouse Cranial Sutures

    DEFF Research Database (Denmark)

    Ólafsdóttir, Hildur; Darvann, Tron Andre; Hermann, Nuno V.

    , automatic detection of the cranial sutures becomes important. We have previously built a craniofacial, wild-type mouse atlas from a set of 10 Micro CT scans using a B-spline-based nonrigid registration method by Rueckert et al. Subsequently, all volumes were registered nonrigidly to the atlas. Using......, the observer traced the sutures on each of the mouse volumes as well. The observer outperforms the automatic approach by approximately 0.1 mm. All mice have similar errors while the suture error plots reveal that suture 1 and 2 are cumbersome, both for the observer and the automatic approach. These sutures can...

  19. Method Points: towards a metric for method complexity

    Directory of Open Access Journals (Sweden)

    Graham McLeod

    1998-11-01

    Full Text Available A metric for method complexity is proposed as an aid to choosing between competing methods, as well as in validating the effects of method integration or the products of method engineering work. It is based upon a generic method representation model previously developed by the author and an adaptation of concepts used in the popular Function Point metric for system size. The proposed technique is illustrated by comparing two popular I.E. deliverables with counterparts in the object-oriented Unified Modeling Language (UML). The paper recommends ways to improve the practical adoption of new methods.

  20. Statistical assignment of DNA sequences using Bayesian phylogenetics

    DEFF Research Database (Denmark)

    Terkelsen, Kasper Munch; Boomsma, Wouter Krogh; Huelsenbeck, John P.

    2008-01-01

    We provide a new automated statistical method for DNA barcoding based on a Bayesian phylogenetic analysis. The method is based on automated database sequence retrieval, alignment, and phylogenetic analysis using a custom-built program for Bayesian phylogenetic analysis. We show on real data that the method outperforms Blast searches as a measure of confidence and can help eliminate 80% of all false assignments based on the best Blast hit. However, the most important advance of the method is that it provides statistically meaningful measures of confidence. We apply the method to a re-analysis of previously published ancient DNA data and show that, with high statistical confidence, most of the published sequences are in fact of Neanderthal origin. However, there are several cases of chimeric sequences that comprise a combination of both Neanderthal and modern human DNA.

  1. Previously unreported abnormalities in Wolfram Syndrome Type 2.

    Science.gov (United States)

    Akturk, Halis Kaan; Yasa, Seda

    2017-01-01

    Wolfram syndrome (WFS) is a rare autosomal recessive disease with non-autoimmune childhood onset insulin dependent diabetes and optic atrophy. WFS type 2 (WFS2) differs from WFS type 1 (WFS1) by upper intestinal ulcers, a bleeding tendency, and the lack of diabetes insipidus. Lifespan is short due to related comorbidities. Only a few families have been reported with this syndrome with the CISD2 mutation. Here we report two siblings with a clinical diagnosis of WFS2, previously misdiagnosed with type 1 diabetes mellitus and diabetic retinopathy-related blindness. We report possible additional clinical and laboratory findings that have not been previously reported, such as asymptomatic hypoparathyroidism, osteomalacia, growth hormone (GH) deficiency and hepatomegaly. Even though not currently a requirement for the diagnosis of WFS2, our case series confirms hypogonadotropic hypogonadism to be also a feature of this syndrome, as reported before. © Polish Society for Pediatric Endocrinology and Diabetology.

  2. Space use by Black-tailed Godwits Limosa limosa limosa during settlement at a previous or a new nest location

    NARCIS (Netherlands)

    Van Den Brink, Valentijn; Schroeder, Julia; Both, Christiaan; Lourenco, Pedro M.; Piersma, Theunis; Hooijmeijer, Jos C.E.W.

    Capsule Black-tailed Godwits first return to the nest location of the previous year, even when moving to a different nest location later that season. Aims To examine the use of space by Black-tailed Godwits during the two months before egg-laying to two weeks afterwards. Methods We compare the

  3. Outcome Of Pregnancy Following A Previous Lower Segment ...

    African Journals Online (AJOL)

    Background: A previous caesarean section is an important variable that influences patient management in subsequent pregnancies. A trial of vaginal delivery in such patients is a feasible alternative to a repeat section, thus helping to reduce the caesarean section rate and its associated co-morbidities. Objective: To ...

  4. Cryptococcal meningitis in a previously healthy child | Chimowa ...

    African Journals Online (AJOL)

    An 8-year-old previously healthy female presented with a 3 weeks history of headache, neck stiffness, deafness, fever and vomiting and was diagnosed with cryptococcal meningitis. She had documented hearing loss and was referred to tertiary-level care after treatment with fluconazole did not improve her neurological ...

  5. 24 CFR 1710.552 - Previously accepted state filings.

    Science.gov (United States)

    2010-04-01

    ... of Substantially Equivalent State Law § 1710.552 Previously accepted state filings. (a) Materials... and contracts or agreements contain notice of purchaser's revocation rights. In addition see § 1715.15..., unless the developer is obligated to do so in the contract. (b) If any such filing becomes inactive or...

  6. Sudden unexpected death in children with a previously diagnosed cardiovascular disorder

    NARCIS (Netherlands)

    Polderman, Florens N.; Cohen, Joeri; Blom, Nico A.; Delhaas, Tammo; Helbing, Wim A.; Lam, Jan; Sobotka-Plojhar, Marta A.; Temmerman, Arno M.; Sreeram, Narayanswani

    2004-01-01

    BACKGROUND: It is known that children with previously diagnosed heart defects die suddenly. The causes of death are often unknown. OBJECTIVE: The aim of the study was to identify all infants and children within the Netherlands with previously diagnosed heart disease who had a sudden unexpected death

  7. Sudden unexpected death in children with a previously diagnosed cardiovascular disorder

    NARCIS (Netherlands)

    Polderman, F.N.; Cohen, Joeri; Blom, N.A.; Delhaas, T.; Helbing, W.A.; Lam, J.; Sobotka-Plojhar, M.A.; Temmerman, Arno M.; Sreeram, N.

    2004-01-01

    Background: It is known that children with previously diagnosed heart defects die suddenly. The causes of death are often unknown. Objective: The aim of the study was to identify all infants and children within the Netherlands with previously diagnosed heart disease who had a sudden unexpected death

  8. Exact Outage Probability of Dual-Hop CSI-Assisted AF Relaying Over Nakagami-m Fading Channels

    KAUST Repository

    Xia, Minghua; Aissa, Sonia; Wu, Yik-Chung

    2012-01-01

    to evaluate the outage performance of the system under study. The analytical results of outage probability coincide exactly with Monte-Carlo simulation results and outperform the previously reported upper bounds in the low and medium SNR regions.

  9. Large-scale Comparative Study of Hi-C-based Chromatin 3D Structure Modeling Methods

    KAUST Repository

    Wang, Cheng

    2018-05-17

    Chromatin is a complex polymer molecule in eukaryotic cells, primarily consisting of DNA and histones. Many works have shown that the 3D folding of chromatin structure plays an important role in DNA expression. The recently proposed Chromosome Conformation Capture technologies, especially the Hi-C assays, provide us with an opportunity to study how the 3D structures of the chromatin are organized. Based on the data from Hi-C experiments, many chromatin 3D structure modeling methods have been proposed. However, there is limited ground truth to validate these methods and there are no robust chromatin structure alignment algorithms to evaluate their performance. In our work, we first made a thorough literature review of 25 publicly available population Hi-C-based chromatin 3D structure modeling methods. Furthermore, to evaluate and compare the performance of these methods, we proposed a novel data simulation method, which combines population Hi-C data and single-cell Hi-C data without ad hoc parameters. We also designed global and local alignment algorithms to measure the similarity between the templates and the chromatin structures predicted by the different modeling methods. Finally, the results from large-scale comparative tests indicated that our alignment algorithms significantly outperform the algorithms in the literature.

  10. Incidence of cancer in adolescent idiopathic scoliosis patients treated 25 years previously

    DEFF Research Database (Denmark)

    Simony, Ane; Hansen, Emil Jesper; Christensen, Steen Bach

    2016-01-01

    PURPOSE: To report the incidence of cancer in a cohort of adolescent idiopathic scoliosis (AIS) patients treated 25 years previously. METHODS: 215 consecutive AIS patients treated between 1983 and 1990 were identified and requested to return for clinical and radiographic examination. The incidence... RESULTS: From the original cohort of 215 consecutive AIS patients, radiation information was available for 211 of the patients, and medical charts were available for 209 AIS patients. 170 (83 %) of the 205 AIS patients participated in the follow-up study with questionnaires. The calculated mean total... is comparable to modern equipment. This is, to our knowledge, the first study to report increased rates of endometrial cancer in a cohort of AIS patients, and future attention is needed to reduce the radiation dose delivered to AIS patients both pre-operatively and during surgery.

  11. 75 FR 20933 - Airworthiness Directives; Arrow Falcon Exporters, Inc. (previously Utah State University...

    Science.gov (United States)

    2010-04-22

    ... Hawkins and Powers Aviation, Inc.); S.M.&T. Aircraft (previously US Helicopters, Inc., UNC Helicopter, Inc... Joaquin Helicopters (previously Hawkins and Powers Aviation, Inc.); S.M.&T. Aircraft (previously US...

  12. Predicting sample size required for classification performance

    Directory of Open Access Journals (Sweden)

    Figueroa Rosa L

    2012-02-01

    Full Text Available Abstract Background Supervised learning methods need annotated data in order to generate efficient models. Annotated data, however, is a relatively scarce resource and can be expensive to obtain. For both passive and active learning methods, there is a need to estimate the size of the annotated sample required to reach a performance target. Methods We designed and implemented a method that fits an inverse power law model to points of a given learning curve created using a small annotated training set. Fitting is carried out using nonlinear weighted least squares optimization. The fitted model is then used to predict the classifier's performance and confidence interval for larger sample sizes. For evaluation, the nonlinear weighted curve fitting method was applied to a set of learning curves generated using clinical text and waveform classification tasks with active and passive sampling methods, and predictions were validated using standard goodness-of-fit measures. As a control we used an un-weighted fitting method. Results A total of 568 models were fitted and the model predictions were compared with the observed performances. Depending on the data set and sampling method, it took between 80 and 560 annotated samples to achieve mean absolute and root mean squared error below 0.01. Results also show that our weighted fitting method outperformed the baseline un-weighted method. Conclusions This paper describes a simple and effective sample size prediction algorithm that conducts weighted fitting of learning curves. The algorithm outperformed an un-weighted algorithm described in previous literature. It can help researchers determine the annotation sample size for supervised machine learning.
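    The core of the algorithm, fitting an inverse power law to early points of a learning curve with weighted nonlinear least squares and then extrapolating, fits in a few lines with SciPy (the functional form shown, the weighting scheme, and all numbers are illustrative assumptions, not the paper's exact choices):

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def inv_power_law(x, a, b, c):
        """Learning curve: accuracy approaches asymptote a as sample size grows."""
        return a - b * x ** (-c)

    # Observed accuracies at small annotated-sample sizes (toy data).
    sizes = np.array([50, 100, 150, 200, 250, 300], dtype=float)
    accs = np.array([0.71, 0.78, 0.81, 0.83, 0.845, 0.85])
    sigma = 1.0 / np.sqrt(sizes)   # weight later (more stable) points more heavily

    params, _ = curve_fit(inv_power_law, sizes, accs, p0=[0.9, 1.0, 0.5], sigma=sigma)
    print(inv_power_law(2000.0, *params))  # predicted accuracy at 2000 samples
    ```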

  13. Swarm intelligence in animal groups: when can a collective out-perform an expert?

    Directory of Open Access Journals (Sweden)

    Konstantinos V Katsikopoulos

    Full Text Available An important potential advantage of group-living that has been mostly neglected by life scientists is that individuals in animal groups may cope more effectively with unfamiliar situations. Social interaction can provide a solution to a cognitive problem that is not available to single individuals via two potential mechanisms: (i) individuals can aggregate information, thus augmenting their 'collective cognition', or (ii) interaction with conspecifics can allow individuals to follow specific 'leaders', those experts with information particularly relevant to the decision at hand. However, a priori, theory-based expectations about which of these decision rules should be preferred are lacking. Using a set of simple models, we present theoretical conditions (involving group size and diversity of individual information) under which groups should aggregate information or follow an expert when faced with a binary choice. We found that, in single-shot decisions, experts are almost always more accurate than the collective across a range of conditions. However, for repeated decisions - where individuals are able to consider the success of previous decision outcomes - the collective's aggregated information is almost always superior. The results improve our understanding of how social animals may process information and make decisions when accuracy is a key component of individual fitness, and provide a solid theoretical framework for future experimental tests where group size, diversity of individual information, and the repeatability of decisions can be measured and manipulated.
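    The aggregation mechanism has a classic Condorcet flavor: if each of n individuals is independently correct with probability p > 0.5, a majority vote can beat a single expert of higher individual accuracy once the group is large enough. A quick simulation illustrating this (the group size and accuracies are made-up parameters, not the paper's models):

    ```python
    import random

    def majority_accuracy(n, p, trials=20000):
        """Fraction of trials in which a majority of n voters (each correct w.p. p) is right."""
        wins = 0
        for _ in range(trials):
            correct = sum(random.random() < p for _ in range(n))
            wins += correct > n / 2
        return wins / trials

    random.seed(42)
    print(majority_accuracy(n=1, p=0.75))    # lone expert: ~0.75
    print(majority_accuracy(n=15, p=0.60))   # mediocre crowd of 15: ~0.79
    ```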

  14. Chinese tallow trees (Triadica sebifera) from the invasive range outperform those from the native range with an active soil community or phosphorus fertilization.

    Science.gov (United States)

    Zhang, Ling; Zhang, Yaojun; Wang, Hong; Zou, Jianwen; Siemann, Evan

    2013-01-01

    Two mechanisms that have been proposed to explain the success of invasive plants are unusual biotic interactions, such as enemy release or enhanced mutualisms, and increased resource availability. However, while these mechanisms are usually considered separately, both may be involved in successful invasions. Biotic interactions may be positive or negative and may interact with nutritional resources in determining invasion success. In addition, the effects of different nutrients on invasions may vary. Finally, genetic variation in traits between populations located in introduced versus native ranges may be important for biotic interactions and/or resource use. Here, we investigated the roles of soil biota, resource availability, and plant genetic variation using seedlings of Triadica sebifera in an experiment in the native range (China). We manipulated nitrogen (control or 4 g/m²), phosphorus (control or 0.5 g/m²), soil biota (untreated or sterilized field soil), and plant origin (4 populations from the invasive range, 4 populations from the native range) in a full factorial experiment. Phosphorus addition increased root, stem, and leaf masses. Leaf mass and height growth depended on population origin and soil sterilization. Invasive populations had higher leaf mass and growth rates than native populations did in fresh soil but comparable leaf mass and growth rates in sterilized soil. Invasive populations had higher growth rates with phosphorus addition but native ones did not. Soil sterilization decreased specific leaf area in both native and exotic populations. The negative effects of soil sterilization suggest that soil pathogens may not be as important as soil mutualists for T. sebifera performance. Moreover, the interactive effects of sterilization and origin suggest that invasive T. sebifera may have evolved more beneficial relationships with the soil biota. Overall, seedlings from the invasive range outperformed those from the native range, however

  15. Medical students who decompress during the M-1 year outperform those who fail and repeat it: A study of M-1 students at the University of Illinois College of Medicine at Urbana-Champaign 1988–2000

    Directory of Open Access Journals (Sweden)

    Freund Gregory G

    2005-05-01

    Full Text Available Abstract Background All medical schools must counsel poor-performing students, address their problems, and assist them in developing into competent physicians. The objective of this study was to determine whether students with academic deficiencies in their M-1 year graduate more often, take less time to complete the curriculum, and need fewer attempts to pass USMLE Step 1 and Step 2 if they enter the Decompressed Program prior to failure of the M-1 year rather than failing the M-1 year and then repeating it. Method The authors reviewed the performance of M-1 students in the Decompressed Program and compared their outcomes to M-1 students who failed and fully repeated the M-1 year. To compare the groups upon admission, t-tests comparing the Cognitive Index of students and MCAT scores from both groups were performed. Performance of the two groups after matriculation was also analyzed. Results Decompressed students were 2.1 times more likely to graduate. Decompressed students were 2.5 times more likely to pass USMLE Step 1 on the first attempt than the repeat students. In addition, 46% of those in the decompressed group completed the program in five years compared to 18% of the repeat group. Conclusion Medical students who decompress their M-1 year prior to M-1 year failure outperform those who fail their first year and then repeat it. These findings indicate the need for careful monitoring of M-1 student performance and early intervention and counseling of struggling students.

  16. 40 CFR 152.93 - Citation of a previously submitted valid study.

    Science.gov (United States)

    2010-07-01

    ... Data Submitters' Rights § 152.93 Citation of a previously submitted valid study. An applicant may demonstrate compliance for a data requirement by citing a valid study previously submitted to the Agency. The... the original data submitter, the applicant may cite the study only in accordance with paragraphs (b...

  17. Investigation of previously derived Hyades, Coma, and M67 reddenings

    International Nuclear Information System (INIS)

    Taylor, B.J.

    1980-01-01

    New Hyades polarimetry and field star photometry have been obtained to check the Hyades reddening, which was found to be nonzero in a previous paper. The new Hyades polarimetry implies essentially zero reddening; this is also true of polarimetry published by Behr (which was incorrectly interpreted in the previous paper). Four photometric techniques which are presumed to be insensitive to blanketing are used to compare the Hyades to nearby field stars; these four techniques also yield essentially zero reddening. When all of these results are combined with others which the author has previously published and a simultaneous solution for the Hyades, Coma, and M67 reddenings is made, the results are E(B−V) = 3 ± 2 (σ) mmag, −1 ± 3 (σ) mmag, and 46 ± 6 (σ) mmag, respectively. No support for a nonzero Hyades reddening is offered by the new results. When the newly obtained reddenings for the Hyades, Coma, and M67 are compared with results from techniques given by Crawford and by users of the David Dunlap Observatory photometric system, no differences between the new and other reddenings are found which are larger than about 2σ. The author had previously found that the M67 main-sequence stars have about the same blanketing as that of Coma and less blanketing than the Hyades; this conclusion is essentially unchanged by the revised reddenings.

  18. Process cells dismantling of EUREX plant: previous activities

    International Nuclear Information System (INIS)

    Gili, M.

    1998-01-01

    In the '98-'99 period some process cells of the EUREX plant will be dismantled in order to install the liquid waste conditioning plant 'CORA' there. This report summarizes the previous activities (plant rinsing campaigns and the dismantling of inactive Cell 014) carried out over the past three years and the experience gained.

  19. Total brain, cortical and white matter volumes in children previously treated with glucocorticoids

    DEFF Research Database (Denmark)

    Holm, Sara K; Madsen, Kathrine S; Vestergaard, Martin

    2018-01-01

    BACKGROUND: Perinatal exposure to glucocorticoids and elevated endogenous glucocorticoid-levels during childhood can have detrimental effects on the developing brain. Here, we examined the impact of glucocorticoid-treatment during childhood on brain volumes. METHODS: Thirty children and adolescents with rheumatic or nephrotic disease previously treated with glucocorticoids and 30 controls matched on age, sex, and parent education underwent magnetic resonance imaging (MRI) of the brain. Total cortical grey and white matter, brain, and intracranial volume, and total cortical thickness and surface area were... were mainly driven by the children with rheumatic disease. Total cortical thickness and cortical surface area did not significantly differ between groups. We found no significant associations between glucocorticoid-treatment variables and volumetric measures. CONCLUSION: Observed smaller total brain...

  20. Connecting clinical and actuarial prediction with rule-based methods.

    Science.gov (United States)

    Fokkema, Marjolein; Smits, Niels; Kelderman, Henk; Penninx, Brenda W J H

    2015-06-01

    Meta-analyses comparing the accuracy of clinical versus actuarial prediction have shown actuarial methods to outperform clinical methods, on average. However, actuarial methods are still not widely used in clinical practice, and there has been a call for the development of actuarial prediction methods for clinical practice. We argue that rule-based methods may be more useful than the linear main effect models usually employed in prediction studies, from a data and decision analytic as well as a practical perspective. In addition, decision rules derived with rule-based methods can be represented as fast and frugal trees, which, unlike main effects models, can be used in a sequential fashion, reducing the number of cues that have to be evaluated before making a prediction. We illustrate the usability of rule-based methods by applying RuleFit, an algorithm for deriving decision rules for classification and regression problems, to a dataset on prediction of the course of depressive and anxiety disorders from Penninx et al. (2011). The RuleFit algorithm provided a model consisting of 2 simple decision rules, requiring evaluation of only 2 to 4 cues. Predictive accuracy of the 2-rule model was very similar to that of a logistic regression model incorporating 20 predictor variables, originally applied to the dataset. In addition, the 2-rule model required, on average, evaluation of only 3 cues. Therefore, the RuleFit algorithm appears to be a promising method for creating decision tools that are less time consuming and easier to apply in psychological practice, and with accuracy comparable to traditional actuarial methods. (c) 2015 APA, all rights reserved.
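
    As a rough illustration of the general idea of extracting a handful of intelligible decision rules, the sketch below fits a shallow scikit-learn decision tree and prints its rules. It is a simplified stand-in, not the RuleFit pipeline of the record above (RuleFit additionally fits a sparse linear model over rules harvested from a tree ensemble), and the data are synthetic placeholders rather than the Penninx et al. cohort.

        # Minimal sketch: a shallow tree yields a few short, intelligible
        # decision rules, analogous to the 2-rule model with 2-4 cues above.
        from sklearn.datasets import make_classification
        from sklearn.tree import DecisionTreeClassifier, export_text

        # Hypothetical stand-in data, not the depression/anxiety cohort.
        X, y = make_classification(n_samples=500, n_features=20, random_state=0)

        tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)
        print(export_text(tree, feature_names=[f"cue_{i}" for i in range(20)]))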

  1. 75 FR 39143 - Airworthiness Directives; Arrow Falcon Exporters, Inc. (previously Utah State University); AST...

    Science.gov (United States)

    2010-07-08

    ... (previously Precision Helicopters, LLC); Robinson Air Crane, Inc.; San Joaquin Helicopters (previously Hawkins... (Previously Hawkins & Powers Aviation); S.M. &T. Aircraft (Previously Us Helicopter Inc., UNC Helicopters, Inc...

  2. A Novel Transfer Learning Method Based on Common Space Mapping and Weighted Domain Matching

    KAUST Repository

    Liang, Ru-Ze; Xie, Wei; Li, Weizhi; Wang, Hongqi; Wang, Jim Jing-Yan; Taylor, Lisa

    2017-01-01

    In this paper, we propose a novel learning framework for the problem of domain transfer learning. We map the data of the two domains to one single common space, and learn a classifier in this common space. Then we adapt the common classifier to the two domains by adding two adaptive functions to it, respectively. In the common space, the source domain data points are weighted and matched to the target domain in terms of distribution. The weighting terms of the source domain data points and the target domain classification responses are also regularized by the local reconstruction coefficients. The novel transfer learning framework is evaluated over some benchmark cross-domain data sets, and it outperforms the existing state-of-the-art transfer learning methods.

  3. A Novel Transfer Learning Method Based on Common Space Mapping and Weighted Domain Matching

    KAUST Repository

    Liang, Ru-Ze

    2017-01-17

    In this paper, we propose a novel learning framework for the problem of domain transfer learning. We map the data of the two domains to one single common space, and learn a classifier in this common space. Then we adapt the common classifier to the two domains by adding two adaptive functions to it, respectively. In the common space, the source domain data points are weighted and matched to the target domain in terms of distribution. The weighting terms of the source domain data points and the target domain classification responses are also regularized by the local reconstruction coefficients. The novel transfer learning framework is evaluated over some benchmark cross-domain data sets, and it outperforms the existing state-of-the-art transfer learning methods.
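
    Since records 2 and 3 describe weighting source-domain points to match the target distribution, a minimal sketch of the related importance-weighting idea follows: a logistic "domain classifier" estimates how target-like each source point is, and those density-ratio weights are passed to the final classifier. This is a generic stand-in, not the common-space framework of the paper; all data and sizes are hypothetical.

        # Minimal sketch of distribution matching by importance weighting.
        import numpy as np
        from sklearn.linear_model import LogisticRegression

        rng = np.random.default_rng(0)
        Xs = rng.normal(0.0, 1.0, (300, 5))          # source domain (toy)
        ys = (Xs[:, 0] > 0).astype(int)
        Xt = rng.normal(0.5, 1.0, (300, 5))          # shifted target domain

        # Domain classifier: label source 0, target 1.
        dom = LogisticRegression().fit(np.vstack([Xs, Xt]),
                                       np.r_[np.zeros(300), np.ones(300)])
        p_t = dom.predict_proba(Xs)[:, 1]
        weights = p_t / (1.0 - p_t + 1e-9)           # density-ratio estimate

        # Source classifier trained with target-matching sample weights.
        clf = LogisticRegression().fit(Xs, ys, sample_weight=weights)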

  4. Maximum Power Point Tracking Control of a Thermoelectric Generation System Using the Extremum Seeking Control Method

    Directory of Open Access Journals (Sweden)

    Ssennoga Twaha

    2017-12-01

    This study proposes and implements maximum power point tracking (MPPT) control on a thermoelectric generation (TEG) system using an extremum seeking control (ESC) algorithm. The MPPT is applied to guarantee maximum power extraction from the TEG system. The work has been carried out through modelling of the thermoelectric generator/dc-dc converter system using Matlab/Simulink. The effectiveness of the ESC technique has been assessed by comparing the results with those of the Perturb and Observe (P&O) MPPT method under the same operating conditions. Results indicate that the ESC MPPT method extracts more power than the P&O technique, where the output power of the ESC technique is higher than that of P&O by 0.47 W or 6.1% at a hot side temperature of 200 °C. It is also noted that the ESC MPPT based model is almost fourfold faster than the P&O method. This is attributed to the smaller MPPT circuit of ESC compared to that of P&O, hence we conclude that the ESC MPPT method outperforms the P&O technique.
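
    For contrast with the ESC results above, a minimal sketch of the Perturb-and-Observe (P&O) baseline is given below: perturb the operating voltage, keep the perturbation direction if power rose, and reverse it otherwise. The power curve is a toy quadratic, not a physical TEG model.

        # Minimal P&O MPPT sketch over a hypothetical power-voltage curve.
        def teg_power(v, v_mpp=6.0, p_max=8.0):
            # Toy stand-in for the TEG power curve; peak at v_mpp.
            return max(p_max - (v - v_mpp) ** 2, 0.0)

        v, step, p_prev = 2.0, 0.1, 0.0
        for _ in range(200):
            p = teg_power(v)
            if p < p_prev:        # power dropped: reverse perturbation
                step = -step
            p_prev = p
            v += step             # perturb and observe again
        print(f"converged near v = {v:.2f} V, p = {p_prev:.2f} W")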

  5. Obstructive pulmonary disease in patients with previous tuberculosis ...

    African Journals Online (AJOL)

    Obstructive pulmonary disease in patients with previous tuberculosis: Pathophysiology of a community-based cohort. B.W. Allwood, R Gillespie, M Galperin-Aizenberg, M Bateman, H Olckers, L Taborda-Barata, G.L. Calligaro, Q Said-Hartley, R van Zyl-Smit, C.B. Cooper, E van Rikxoort, J Goldin, N Beyers, E.D. Bateman ...

  6. Association of Aortic Valve Sclerosis with Previous Coronary Artery Disease and Risk Factors

    Directory of Open Access Journals (Sweden)

    Filipe Carvalho Marmelo

    2014-11-01

    Background: Aortic valve sclerosis (AVS) is characterized by increased thickness, calcification and stiffness of the aortic leaflets without fusion of the commissures. Several studies show an association between AVS and the presence of coronary artery disease. Objective: The aim of this study is to investigate the association between the presence of AVS and the occurrence of previous coronary artery disease and classical risk factors. Methods: The sample was composed of 2,493 individuals who underwent transthoracic echocardiography between August 2011 and December 2012. The mean age of the cohort was 67.5 ± 15.9 years, and 50.7% were female. Results: The most frequent clinical indication for Doppler echocardiography was the presence of stroke (28.8%), and the most common risk factor was hypertension (60.8%). The most prevalent pathological findings on Doppler echocardiography were mitral valve sclerosis (37.1%) and AVS (36.7%). There was a statistically significant association of AVS with hypertension (p < 0.001), myocardial infarction (p = 0.007), diabetes (p = 0.006) and compromised left ventricular systolic function (p < 0.001). Conclusion: Patients with AVS have higher prevalences of hypertension, stroke, hypercholesterolemia, myocardial infarction, diabetes and compromised left ventricular systolic function when compared with patients without AVS. We conclude that there is an association between the presence of AVS and previous coronary artery disease and classical risk factors.

  7. A Comparison of Sequential and GPU Implementations of Iterative Methods to Compute Reachability Probabilities

    Directory of Open Access Journals (Sweden)

    Elise Cormie-Bowins

    2012-10-01

    We consider the problem of computing reachability probabilities: given a Markov chain, an initial state of the Markov chain, and a set of goal states of the Markov chain, what is the probability of reaching any of the goal states from the initial state? This problem can be reduced to solving a linear equation Ax = b for x, where A is a matrix and b is a vector. We consider two iterative methods to solve the linear equation: the Jacobi method and the biconjugate gradient stabilized (BiCGStab) method. For both methods, a sequential and a parallel version have been implemented. The parallel versions have been implemented on the compute unified device architecture (CUDA) so that they can be run on an NVIDIA graphics processing unit (GPU). From our experiments we conclude that as the size of the matrix increases, the CUDA implementations outperform the sequential implementations. Furthermore, the BiCGStab method performs better than the Jacobi method for dense matrices, whereas the Jacobi method does better for sparse ones. Since the reachability probabilities problem plays a key role in probabilistic model checking, we also compared the implementations for matrices obtained from a probabilistic model checker. Our experiments support the conjecture by Bosnacki et al. that the Jacobi method is superior to Krylov subspace methods, a class to which the BiCGStab method belongs, for probabilistic model checking.
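
    A minimal NumPy sketch of the Jacobi iteration for Ax = b, the sequential baseline of the comparison above, follows; the GPU versions apply the same elementwise update across many threads. The small diagonally dominant system is illustrative only.

        # Jacobi iteration: x_{k+1} = D^{-1} (b - R x_k), with A = D + R.
        import numpy as np

        def jacobi(A, b, iters=100):
            D = np.diag(A)                  # diagonal part of A
            R = A - np.diagflat(D)          # off-diagonal remainder
            x = np.zeros_like(b)
            for _ in range(iters):
                x = (b - R @ x) / D
            return x

        A = np.array([[4.0, 1.0], [2.0, 5.0]])   # diagonally dominant example
        b = np.array([1.0, 2.0])
        print(jacobi(A, b), np.linalg.solve(A, b))  # should agree closely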

  8. TU-AB-202-10: How Effective Are Current Atlas Selection Methods for Atlas-Based Auto-Contouring in Radiotherapy Planning?

    Energy Technology Data Exchange (ETDEWEB)

    Peressutti, D; Schipaanboord, B; Kadir, T; Gooding, M [Mirada Medical Limited, Science and Medical Technology, Oxford (United Kingdom); Soest, J van; Lustberg, T; Elmpt, W van; Dekker, A [Maastricht University Medical Centre, Department of Radiation Oncology MAASTRO - GROW School for Oncology Developmental Biology, Maastricht (Netherlands)

    2016-06-15

    Purpose: To investigate the effectiveness of atlas selection methods for improving atlas-based auto-contouring in radiotherapy planning. Methods: 275 H&N clinically delineated cases were employed as an atlas database from which atlases would be selected. A further 40 previously contoured cases were used as test patients against which atlas selection could be performed and evaluated. 26 variations of selection methods proposed in the literature and used in commercial systems were investigated. Atlas selection methods comprised either global or local image similarity measures, computed after rigid or deformable registration, combined with direct atlas search or with an intermediate template image. Workflow Box (Mirada-Medical, Oxford, UK) was used for all auto-contouring. Results on brain, brainstem, parotids and spinal cord were compared to random selection, a fixed set of 10 “good” atlases, and optimal selection by an “oracle” with knowledge of the ground truth. The Dice score and the average ranking with respect to the “oracle” were employed to assess the performance of the top 10 atlases selected by each method. Results: The fixed set of “good” atlases outperformed all of the atlas-patient image similarity-based selection methods (mean Dice 0.715 cf. 0.603 to 0.677). In general, methods based on exhaustive comparison of local similarity measures showed better average Dice scores (0.658 to 0.677) compared to the use of either a template image (0.655 to 0.672) or global similarity measures (0.603 to 0.666). The performance of image-based selection methods was found to be only slightly better than random selection (0.645). Dice scores given relate to the left parotid, but similar result patterns were observed for all organs. Conclusion: Intuitively, atlas selection based on the patient CT is expected to improve auto-contouring performance. However, it was found that published approaches performed marginally better than random and use of a fixed set of

  9. Scalable Content Authentication in H.264/SVC Videos Using Perceptual Hashing based on Dempster-Shafer theory

    Directory of Open Access Journals (Sweden)

    Ye Dengpan

    2012-09-01

    The content authenticity of multimedia delivery is an important issue with the rapid development and wide use of multimedia technology. To date, many authentication solutions have been proposed, such as cryptology- and watermarking-based methods. However, in the latest heterogeneous networks video streams are coded in a scalable way such as H.264/SVC, and there is still no good authentication solution. In this paper, we first summarize related works and propose a scalable content authentication scheme using a ratio of different energy (RDE) based perceptual hashing in the Q/S dimension, which uses Dempster-Shafer theory and is combined with the latest scalable video coding (H.264/SVC) construction. The idea of "sign once and verify in a scalable way" can be realized. Compared with previous methods, the proposed scheme based on perceptual hashing outperforms previous works in uncertainty (robustness) and efficiency in H.264/SVC video streams. Finally, the experimental results verified the performance of our scheme.

  10. Matched cohort study of external cephalic version in women with previous cesarean delivery.

    Science.gov (United States)

    Keepanasseril, Anish; Anand, Keerthana; Soundara Raghavan, Subrahmanian

    2017-07-01

    To evaluate the efficacy and safety of external cephalic version (ECV) among women with previous cesarean delivery. A retrospective study was conducted using data for women with previous cesarean delivery and breech presentation who underwent ECV at or after 36 weeks of pregnancy during 2011-2016. For every case, two multiparous women without previous cesarean delivery who underwent ECV and were matched for age and pregnancy duration were included. Characteristics and outcomes were compared between groups. ECV was successful for 32 (84.2%) of 38 women with previous cesarean delivery and 62 (81.6%) in the control group (P=0.728). Multivariate regression analysis confirmed that previous cesarean was not associated with ECV success (odds ratio 1.89, 95% confidence interval 0.19-18.47; P=0.244). Successful vaginal delivery after successful ECV was reported for 19 (59.4%) women in the previous cesarean delivery group and 52 (83.9%) in the control group (P<0.001). No ECV-associated complications occurred in women with previous cesarean delivery. To avoid a repeat cesarean delivery, ECV can be offered to women with breech presentation and previous cesarean delivery who are otherwise eligible for a trial of labor. © 2017 International Federation of Gynecology and Obstetrics.

  11. A deep learning-based multi-model ensemble method for cancer prediction.

    Science.gov (United States)

    Xiao, Yawen; Wu, Jun; Lin, Zongli; Zhao, Xiaodong

    2018-01-01

    Cancer is a complex worldwide health problem associated with high mortality. With the rapid development of the high-throughput sequencing technology and the application of various machine learning methods that have emerged in recent years, progress in cancer prediction has been increasingly made based on gene expression, providing insight into effective and accurate treatment decision making. Thus, developing machine learning methods, which can successfully distinguish cancer patients from healthy persons, is of great current interest. However, among the classification methods applied to cancer prediction so far, no one method outperforms all the others. In this paper, we demonstrate a new strategy, which applies deep learning to an ensemble approach that incorporates multiple different machine learning models. We supply informative gene data selected by differential gene expression analysis to five different classification models. Then, a deep learning method is employed to ensemble the outputs of the five classifiers. The proposed deep learning-based multi-model ensemble method was tested on three public RNA-seq data sets of three kinds of cancers, Lung Adenocarcinoma, Stomach Adenocarcinoma and Breast Invasive Carcinoma. The test results indicate that it increases the prediction accuracy of cancer for all the tested RNA-seq data sets as compared to using a single classifier or the majority voting algorithm. By taking full advantage of different classifiers, the proposed deep learning-based multi-model ensemble method is shown to be accurate and effective for cancer prediction. Copyright © 2017 Elsevier B.V. All rights reserved.
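
    As a rough sketch of the multi-model ensemble idea (several heterogeneous base classifiers combined by a neural-network meta-learner), scikit-learn's stacking API can be used as below. The base models, sizes and synthetic data are placeholders, not the paper's RNA-seq pipeline or its exact deep-learning combiner.

        # Minimal stacking sketch: five base classifiers, one NN combiner.
        from sklearn.datasets import make_classification
        from sklearn.ensemble import StackingClassifier, RandomForestClassifier
        from sklearn.linear_model import LogisticRegression
        from sklearn.naive_bayes import GaussianNB
        from sklearn.neighbors import KNeighborsClassifier
        from sklearn.neural_network import MLPClassifier
        from sklearn.svm import SVC

        X, y = make_classification(n_samples=400, n_features=50, random_state=0)
        ensemble = StackingClassifier(
            estimators=[("knn", KNeighborsClassifier()),
                        ("svm", SVC(probability=True)),
                        ("rf", RandomForestClassifier(random_state=0)),
                        ("nb", GaussianNB()),
                        ("lr", LogisticRegression(max_iter=1000))],
            final_estimator=MLPClassifier(max_iter=1000, random_state=0),
        )
        ensemble.fit(X, y)   # the MLP learns to weigh the five outputs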

  12. Road Lane Detection by Discriminating Dashed and Solid Road Lanes Using a Visible Light Camera Sensor.

    Science.gov (United States)

    Hoang, Toan Minh; Hong, Hyung Gil; Vokhidov, Husan; Park, Kang Ryoung

    2016-08-18

    With the increasing need for road lane detection used in lane departure warning systems and autonomous vehicles, many studies have been conducted to turn road lane detection into a virtual assistant to improve driving safety and reduce car accidents. Most of the previous research approaches detect the central line of a road lane and not the accurate left and right boundaries of the lane. In addition, they do not discriminate between dashed and solid lanes when detecting the road lanes. However, this discrimination is necessary for the safety of autonomous vehicles and the safety of vehicles driven by human drivers. To overcome these problems, we propose a method for road lane detection that distinguishes between dashed and solid lanes. Experimental results with the Caltech open database showed that our method outperforms conventional methods.

  13. Single image super-resolution via regularized extreme learning regression for imagery from microgrid polarimeters

    Science.gov (United States)

    Sargent, Garrett C.; Ratliff, Bradley M.; Asari, Vijayan K.

    2017-08-01

    The advantage of division of focal plane imaging polarimeters is their ability to obtain temporally synchronized intensity measurements across a scene; however, they sacrifice spatial resolution in doing so due to their spatially modulated arrangement of the pixel-to-pixel polarizers and often result in aliased imagery. Here, we propose a super-resolution method based upon two previously trained extreme learning machines (ELM) that attempt to recover missing high frequency and low frequency content beyond the spatial resolution of the sensor. This method yields a computationally fast and simple way of recovering lost high and low frequency content from demosaicing raw microgrid polarimetric imagery. The proposed method outperforms other state-of-the-art single-image super-resolution algorithms in terms of structural similarity and peak signal-to-noise ratio.
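
    A minimal NumPy sketch of regularized extreme learning regression, the core learning machinery named above, is given below: a fixed random hidden layer followed by a ridge-regularized linear readout solved in closed form. The feature and target shapes are placeholders; the polarimetric demosaicing specifics are omitted.

        # Regularized ELM regression: random hidden weights, ridge readout.
        import numpy as np

        rng = np.random.default_rng(0)
        X = rng.normal(size=(200, 16))     # e.g. low-resolution patch features
        y = rng.normal(size=(200, 4))      # e.g. high-frequency targets

        W = rng.normal(size=(16, 100))     # random input weights (never trained)
        H = np.tanh(X @ W)                 # hidden-layer activations
        lam = 1e-2                         # ridge regularization strength
        beta = np.linalg.solve(H.T @ H + lam * np.eye(100), H.T @ y)

        y_hat = np.tanh(X @ W) @ beta      # ELM prediction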

  14. Attribute and topology based change detection in a constellation of previously detected objects

    Science.gov (United States)

    Paglieroni, David W.; Beer, Reginald N.

    2016-01-19

    A system that applies attribute and topology based change detection to networks of objects that were detected on previous scans of a structure, roadway, or area of interest. The attributes capture properties or characteristics of the previously detected objects, such as location, time of detection, size, elongation, orientation, etc. The topology of the network of previously detected objects is maintained in a constellation database that stores attributes of previously detected objects and implicitly captures the geometrical structure of the network. A change detection system detects change by comparing the attributes and topology of new objects detected on the latest scan to the constellation database of previously detected objects.

  15. Room-temperature and temperature-dependent QSRR modelling for predicting the nitrate radical reaction rate constants of organic chemicals using ensemble learning methods.

    Science.gov (United States)

    Gupta, S; Basant, N; Mohan, D; Singh, K P

    2016-07-01

    Experimental determinations of the rate constants of the reaction of NO3 with a large number of organic chemicals are tedious and time- and resource-intensive, and the development of computational methods has been widely advocated. In this study, we have developed room-temperature (298 K) and temperature-dependent quantitative structure-reactivity relationship (QSRR) models based on ensemble learning approaches (decision tree forest (DTF) and decision treeboost (DTB)) for predicting the rate constant of the reaction of NO3 radicals with diverse organic chemicals, under OECD guidelines. The predictive powers of the developed models were established in terms of statistical coefficients. In the test phase, the QSRR models yielded a correlation (r²) of >0.94 between experimental and predicted rate constants. The applicability domains of the constructed models were determined. An attempt has been made to provide a mechanistic interpretation of the selected features for QSRR development. The proposed QSRR models outperformed the previous reports, and the temperature-dependent models offered a much wider applicability domain. This is the first report presenting a temperature-dependent QSRR model for predicting the nitrate radical reaction rate constant at different temperatures. The proposed models can be useful tools in predicting the reactivities of chemicals towards NO3 radicals in the atmosphere, and hence their persistence and exposure risk assessment.

  16. Methods to maximise recovery of environmental DNA from water samples.

    Directory of Open Access Journals (Sweden)

    Rheyda Hinlo

    The environmental DNA (eDNA) method is a detection technique that is rapidly gaining credibility as a sensitive tool useful in the surveillance and monitoring of invasive and threatened species. Because eDNA analysis often deals with small quantities of short and degraded DNA fragments, methods that maximize eDNA recovery are required to increase detectability. In this study, we performed experiments at different stages of the eDNA analysis to show which combinations of methods give the best recovery rate for eDNA. Using the Oriental weatherloach (Misgurnus anguillicaudatus) as a study species, we show that various combinations of DNA capture, preservation and extraction methods can significantly affect DNA yield. Filtration using cellulose nitrate filter paper preserved in ethanol or stored in a -20°C freezer and extracted with the Qiagen DNeasy kit outperformed other combinations in terms of cost and efficiency of DNA recovery. Our results support the recommendation to filter water samples within 24 hours, but if this is not possible, our results suggest that refrigeration may be a better option than freezing for short-term storage (i.e., 3-5 days). This information is useful in designing eDNA detection of low-density invasive or threatened species, where small variations in DNA recovery can signify the difference between detection success or failure.

  17. Eight previously unidentified mutations found in the OA1 ocular albinism gene

    Directory of Open Access Journals (Sweden)

    Dufier Jean-Louis

    2006-04-01

    Background: Ocular albinism type 1 (OA1) is an X-linked ocular disorder characterized by a severe reduction in visual acuity, nystagmus, hypopigmentation of the retinal pigmented epithelium, foveal hypoplasia, macromelanosomes in pigmented skin and eye cells, and misrouting of the optical tracts. This disease is primarily caused by mutations in the OA1 gene. Methods: The ophthalmologic phenotype of the patients and their family members was characterized. We screened for mutations in the OA1 gene by direct sequencing of the nine PCR-amplified exons, and for genomic deletions by PCR-amplification of large DNA fragments. Results: We sequenced the nine exons of the OA1 gene in 72 individuals and found ten different mutations in seven unrelated families and three sporadic cases. The ten mutations include an amino acid substitution and a premature stop codon previously reported by our team, and eight previously unidentified mutations: three amino acid substitutions, a duplication, a deletion, an insertion and two splice-site mutations. The use of a novel Taq polymerase enabled us to amplify large genomic fragments covering the OA1 gene and to detect, very likely, six distinct large deletions. Furthermore, we were able to confirm that there was no deletion in twenty-one patients in whom no mutation had been found. Conclusion: The identified mutations affect highly conserved amino acids, cause frameshifts or alternative splicing, thus affecting folding of the OA1 G protein coupled receptor, interactions of OA1 with its G protein and/or binding with its ligand.

  18. A Bipartite Network-based Method for Prediction of Long Non-coding RNA–protein Interactions

    Directory of Open Access Journals (Sweden)

    Mengqu Ge

    2016-02-01

    As one large class of non-coding RNAs (ncRNAs), long ncRNAs (lncRNAs) have gained considerable attention in recent years. Mutations and dysfunction of lncRNAs have been implicated in human disorders. Many lncRNAs exert their effects through interactions with the corresponding RNA-binding proteins. Several computational approaches have been developed, but only a few are able to perform the prediction of these interactions from a network-based point of view. Here, we introduce a computational method named lncRNA–protein bipartite network inference (LPBNI). LPBNI aims to identify potential lncRNA–interacting proteins by making full use of the known lncRNA–protein interactions. A leave-one-out cross validation (LOOCV) test shows that LPBNI significantly outperforms other network-based methods, including random walk (RWR) and protein-based collaborative filtering (ProCF). Furthermore, a case study was performed to demonstrate the performance of LPBNI using real data in predicting potential lncRNA–interacting proteins.
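
    LPBNI builds on bipartite network inference; as a rough sketch of that general idea (not the published algorithm), the snippet below performs two-step resource allocation on a toy lncRNA–protein interaction matrix and scores candidate interactions.

        # Two-step resource allocation on a bipartite interaction matrix.
        import numpy as np

        A = np.array([[1, 0, 1],        # rows: lncRNAs, cols: proteins
                      [1, 1, 0],
                      [0, 1, 1]], dtype=float)

        k_prot = A.sum(axis=0)          # protein degrees
        k_lnc = A.sum(axis=1)           # lncRNA degrees

        # W[i, j]: fraction of protein j's resource that reaches protein i
        # after flowing protein -> lncRNA -> protein.
        W = (A / k_lnc[:, None]).T @ (A / k_prot[None, :])
        scores = A @ W.T                # candidate interaction scores per lncRNA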

  19. Predicting human splicing branchpoints by combining sequence-derived features and multi-label learning methods.

    Science.gov (United States)

    Zhang, Wen; Zhu, Xiaopeng; Fu, Yu; Tsuji, Junko; Weng, Zhiping

    2017-12-01

    Alternative splicing is the critical process in a single gene coding, which removes introns and joins exons, and splicing branchpoints are indicators for the alternative splicing. Wet experiments have identified a great number of human splicing branchpoints, but many branchpoints are still unknown. In order to guide wet experiments, we develop computational methods to predict human splicing branchpoints. Considering the fact that an intron may have multiple branchpoints, we transform the branchpoint prediction as the multi-label learning problem, and attempt to predict branchpoint sites from intron sequences. First, we investigate a variety of intron sequence-derived features, such as sparse profile, dinucleotide profile, position weight matrix profile, Markov motif profile and polypyrimidine tract profile. Second, we consider several multi-label learning methods: partial least squares regression, canonical correlation analysis and regularized canonical correlation analysis, and use them as the basic classification engines. Third, we propose two ensemble learning schemes which integrate different features and different classifiers to build ensemble learning systems for the branchpoint prediction. One is the genetic algorithm-based weighted average ensemble method; the other is the logistic regression-based ensemble method. In the computational experiments, two ensemble learning methods outperform benchmark branchpoint prediction methods, and can produce high-accuracy results on the benchmark dataset.
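
    As a rough sketch of casting site prediction as multi-label learning, the snippet below trains one binary classifier per candidate position with scikit-learn. It swaps in logistic regression for the PLS/CCA engines used in the paper, and the features and labels are random placeholders for the sequence-derived profiles described above.

        # Multi-label site prediction: one binary label per candidate position.
        import numpy as np
        from sklearn.linear_model import LogisticRegression
        from sklearn.multioutput import MultiOutputClassifier

        rng = np.random.default_rng(0)
        X = rng.normal(size=(300, 40))            # e.g. sparse/dinucleotide profiles
        Y = rng.integers(0, 2, size=(300, 5))     # 5 candidate branchpoint positions

        model = MultiOutputClassifier(LogisticRegression(max_iter=1000)).fit(X, Y)
        probs = model.predict_proba(X[:3])        # per-position probability arrays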

  20. Does more exposure to the language of instruction lead to higher academic achievement? A cross-national examination

    NARCIS (Netherlands)

    Agirdag, O.; Vanlaar, G.

    2018-01-01

    Aims and objectives: As some Programme for International Student Assessment (PISA) studies claimed that native speaking (NS) students outperform language minority (LMi) students, far-reaching inferences have been drawn by policymakers. However, previous PISA assessments were not appropriate because

  1. 75 FR 66009 - Airworthiness Directives; Cessna Aircraft Company (Type Certificate Previously Held by Columbia...

    Science.gov (United States)

    2010-10-27

    ... Company (Type Certificate Previously Held by Columbia Aircraft Manufacturing (Previously the Lancair... Company (Type Certificate Previously Held by Columbia Aircraft Manufacturing (Previously The Lancair...-15895. Applicability (c) This AD applies to the following Cessna Aircraft Company (type certificate...

  2. Gradient plasticity crack tip characterization by means of the extended finite element method

    DEFF Research Database (Denmark)

    Martínez Pañeda, Emilio; Natarajan, S.; Bordas, S.

    2017-01-01

    of the displacement field is enriched with the stress singularity of the gradient-dominated solution. Results reveal that the proposed numerical methodology largely outperforms the standard finite element approach. The present work could have important implications on the use of microstructurally-motivated models in large scale...

  3. A Bayesian method for assessing multiscale species-habitat relationships

    Science.gov (United States)

    Stuber, Erica F.; Gruber, Lutz F.; Fontaine, Joseph J.

    2017-01-01

    Context: Scientists face several theoretical and methodological challenges in appropriately describing fundamental wildlife-habitat relationships in models. The spatial scales of habitat relationships are often unknown, and are expected to follow a multi-scale hierarchy. Typical frequentist or information theoretic approaches often suffer under collinearity in multi-scale studies, fail to converge when models are complex or represent an intractable computational burden when candidate model sets are large. Objectives: Our objective was to implement an automated, Bayesian method for inference on the spatial scales of habitat variables that best predict animal abundance. Methods: We introduce Bayesian latent indicator scale selection (BLISS), a Bayesian method to select spatial scales of predictors using latent scale indicator variables that are estimated with reversible-jump Markov chain Monte Carlo sampling. BLISS does not suffer from collinearity, and substantially reduces the computation time of studies. We present a simulation study to validate our method and apply our method to a case study of land cover predictors for ring-necked pheasant (Phasianus colchicus) abundance in Nebraska, USA. Results: Our method returns accurate descriptions of the explanatory power of multiple spatial scales, and unbiased and precise parameter estimates under commonly encountered data limitations including spatial scale autocorrelation, effect size, and sample size. BLISS outperforms commonly used model selection methods including stepwise and AIC, and reduces runtime by 90%. Conclusions: Given the pervasiveness of scale-dependency in ecology, and the implications of mismatches between the scales of analyses and ecological processes, identifying the spatial scales over which species are integrating habitat information is an important step in understanding species-habitat relationships. BLISS is a widely applicable method for identifying important spatial scales, propagating scale uncertainty, and
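
    BLISS samples latent scale indicators with reversible-jump MCMC; as a much cheaper illustration of the underlying scale-selection problem, the sketch below simply scores one candidate spatial scale per covariate by BIC and keeps the best. It is an enumeration for intuition only, under assumed toy data, and does not propagate scale uncertainty the way BLISS does.

        # Pick the spatial scale whose covariate best predicts abundance (by BIC).
        import numpy as np

        rng = np.random.default_rng(0)
        n, true_scale = 200, 2
        covariate_at_scale = {s: rng.normal(size=n) for s in range(5)}  # 5 buffer radii
        y = 1.5 * covariate_at_scale[true_scale] + rng.normal(size=n)

        def bic(x, y):
            X = np.column_stack([np.ones_like(x), x])
            beta, *_ = np.linalg.lstsq(X, y, rcond=None)
            rss = np.sum((y - X @ beta) ** 2)
            return len(y) * np.log(rss / len(y)) + 2 * np.log(len(y))  # 2 params

        best = min(covariate_at_scale, key=lambda s: bic(covariate_at_scale[s], y))
        print("selected scale:", best)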

  4. Exploiting the Error-Correcting Capabilities of Low Density Parity Check Codes in Distributed Video Coding using Optical Flow

    DEFF Research Database (Denmark)

    Rakêt, Lars Lau; Søgaard, Jacob; Salmistraro, Matteo

    2012-01-01

    We consider Distributed Video Coding (DVC) in the presence of communication errors. First, we present DVC side information generation based on a new method of optical flow driven frame interpolation, where a highly optimized TV-L1 algorithm is used for the flow calculations and three flows are combined. Thereafter, methods for exploiting the error-correcting capabilities of the LDPCA code in DVC are investigated. The proposed frame interpolation adds a symmetric flow constraint to the standard forward-backward frame interpolation scheme, which improves quality and handling of large motion. The three flows are combined in one solution. The proposed frame interpolation method consistently outperforms an overlapped block motion compensation scheme and a previous TV-L1 optical flow frame interpolation method, with average PSNR improvements of 1.3 dB and 2.3 dB respectively. For a GOP size of 2...

  5. A DATA FIELD METHOD FOR URBAN REMOTELY SENSED IMAGERY CLASSIFICATION CONSIDERING SPATIAL CORRELATION

    Directory of Open Access Journals (Sweden)

    Y. Zhang

    2016-06-01

    Spatial correlation between pixels is important information for remotely sensed imagery classification. Data field methods and spatial autocorrelation statistics have been utilized to describe and model the spatial information of local pixels. The original data field method can represent the spatial interactions of neighbourhood pixels effectively. However, its focus on measuring the grey level change between the central pixel and the neighbourhood pixels results in exaggerating the contribution of the central pixel to the whole local window. Besides, Geary’s C has also been proven to well characterise and qualify the spatial correlation between each pixel and its neighbourhood pixels. But the extracted object is badly delineated, with the distracting salt-and-pepper effect of isolated misclassified pixels. To correct this defect, we introduce the data field method for filtering and noise limitation. Moreover, the original data field method is enhanced by considering each pixel in the window as the central pixel to compute statistical characteristics between it and its neighbourhood pixels. The last step employs a support vector machine (SVM) for the classification of multi-features (e.g. the spectral feature and spatial correlation feature). In order to validate the effectiveness of the developed method, experiments are conducted on different remotely sensed images containing multiple complex object classes. The results show that the developed method outperforms the traditional method in terms of classification accuracies.
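
    As a rough sketch of combining a spectral value with a local spatial statistic before an SVM, in the spirit of the method above, the snippet below uses a simple local mean in place of the data-field potential; the image and labels are synthetic placeholders.

        # Spectral value + local-neighbourhood statistic -> SVM classification.
        import numpy as np
        from scipy.ndimage import uniform_filter
        from sklearn.svm import SVC

        rng = np.random.default_rng(0)
        img = rng.normal(size=(64, 64))                    # single-band toy image
        labels = (uniform_filter(img, 5) > 0).astype(int)  # toy ground truth

        spatial = uniform_filter(img, size=3)              # local spatial feature
        X = np.column_stack([img.ravel(), spatial.ravel()])
        y = labels.ravel()

        clf = SVC(kernel="rbf").fit(X[::8], y[::8])        # subsample for speed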

  6. The application of particle filters in single trial event-related potential estimation

    International Nuclear Information System (INIS)

    Mohseni, Hamid R; Nazarpour, Kianoush; Sanei, Saeid; Wilding, Edward L

    2009-01-01

    In this paper, an approach for the estimation of single trial event-related potentials (ST-ERPs) using particle filters (PFs) is presented. The method is based on recursive Bayesian mean square estimation of ERP wavelet coefficients using their previous estimates as prior information. To enable a performance evaluation of the approach under Gaussian and non-Gaussian distributed noise conditions, we added Gaussian white noise (GWN) and real electroencephalogram (EEG) signals recorded during rest to the simulated ERPs. The results were compared to those of the Kalman filtering (KF) approach, demonstrating the robustness of the PF over the KF to the added GWN. The proposed method also outperforms the KF when the assumption about the Gaussianity of the noise is violated. We also applied this technique to real EEG potentials recorded in an odd-ball paradigm and investigated the correlation between the amplitude and the latency of the estimated ERP components. Unlike the KF method, for the PF there was a statistically significant negative correlation between the amplitude and latency of the estimated ERPs, matching previous neurophysiological findings.
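
    A minimal bootstrap particle filter sketch for a scalar state is given below to illustrate the recursive Bayesian estimation described above; the paper filters ERP wavelet coefficients, whereas this toy uses a random-walk state and heavy-tailed observation noise to mimic a non-Gaussian setting.

        # Bootstrap particle filter: propagate, weight by likelihood, resample.
        import numpy as np

        rng = np.random.default_rng(0)
        T, N = 50, 500                                 # time steps, particles
        true_x = np.cumsum(rng.normal(0, 0.5, T))      # latent random-walk state
        obs = true_x + rng.standard_t(df=3, size=T)    # heavy-tailed noise

        particles = rng.normal(0, 1, N)
        estimates = []
        for t in range(T):
            particles += rng.normal(0, 0.5, N)             # propagate
            w = np.exp(-0.5 * (obs[t] - particles) ** 2)   # (Gaussian) weights
            w /= w.sum()
            estimates.append(np.sum(w * particles))        # posterior mean
            particles = particles[rng.choice(N, size=N, p=w)]  # resample

        rmse = np.sqrt(np.mean((np.array(estimates) - true_x) ** 2))
        print(f"filter RMSE: {rmse:.3f}")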

  7. [Fatal amnioinfusion with previous choriocarcinoma in a parturient woman].

    Science.gov (United States)

    Hrgović, Z; Bukovic, D; Mrcela, M; Hrgović, I; Siebzehnrübl, E; Karelovic, D

    2004-04-01

    The case of a 36-year-old tertipara is described who developed choriocarcinoma in a previous pregnancy. During her first term labour the patient developed cardiac arrest, so resuscitation was carried out and a cesarean section was performed. A male newborn was delivered in good condition, but even after intensive therapy and resuscitation the parturient died with a picture of disseminated intravascular coagulopathy (DIC). At autopsy and on histology there was no sign of malignant disease, so it was not possible to connect the previous choriocarcinoma with the amniotic fluid embolism. The site of the choriocarcinoma may have been a 'locus minoris resistentiae' that later resulted in a failure of placentation, although this was hard to prove. At autopsy we found pulmonary embolism with microthrombosis of the terminal circulation and punctiform bleeding in the mucosa, consistent with DIC.

  8. Impact of previous virological treatment failures and adherence on the outcome of antiretroviral therapy in 2007.

    Directory of Open Access Journals (Sweden)

    Marie Ballif

    BACKGROUND: Combination antiretroviral treatment (cART) has been very successful, especially among selected patients in clinical trials. The aim of this study was to describe outcomes of cART at the population level in a large national cohort. METHODS: Characteristics of participants of the Swiss HIV Cohort Study on stable cART at two semiannual visits in 2007 were analyzed with respect to era of treatment initiation, number of previous virologically failed regimens and self-reported adherence. Starting ART in the mono/dual era before HIV-1 RNA assays became available was counted as one failed regimen. Logistic regression was used to identify risk factors for virological failure between the two consecutive visits. RESULTS: Of 4541 patients, 31.2% and 68.8% had initiated therapy in the mono/dual and cART era, respectively, and had been on treatment for a median of 11.7 vs. 5.7 years. At visit 1 in 2007, the mean number of previous failed regimens was 3.2 vs. 0.5 and the viral load was undetectable (...). From the mono/dual era, odds ratios with a history of ... >4 previous failures compared to 1 were 0.9 (95% CI 0.4-1.7), 0.8 (0.4-1.6), 1.6 (0.8-3.2) and 3.3 (1.7-6.6), respectively, and 2.3 (1.1-4.8) for >2 missed cART doses during the last month, compared to perfect adherence. From the cART era, odds ratios with a history of 1, 2 and >2 previous failures compared to none were 1.8 (95% CI 1.3-2.5), 2.8 (1.7-4.5) and 7.8 (4.5-13.5), respectively, and 2.8 (1.6-4.8) for >2 missed cART doses during the last month, compared to perfect adherence. CONCLUSIONS: A higher number of previous virologically failed regimens and imperfect adherence to therapy were independent predictors of imminent virological failure.

  9. Reproductive outcomes in adolescents who had a previous birth or an induced abortion compared to adolescents' first pregnancies

    Directory of Open Access Journals (Sweden)

    Wenzlaff Paul

    2008-01-01

    Background: Recently, attention has been focused on subsequent pregnancies among teenage mothers. Previous studies that compared the reproductive outcomes of teenage nulliparae and multiparae often did not consider the adolescents' reproductive histories. Thus, the authors compared the risks for adverse reproductive outcomes of adolescent nulliparae to teenagers who either have had an induced abortion or a previous birth. Methods: In this retrospective cohort study we used perinatal data prospectively collected by obstetricians and midwives from 1990–1999 (participation rate 87–98% of all hospitals in Lower Saxony, Germany). From the 9742 eligible births among adolescents, women with multiple births, >1 previous pregnancies, or a previous spontaneous miscarriage were excluded, and 8857 women remained in the analysis. Results: In bivariate logistic regression analyses, compared to nulliparous teenagers, adolescents with a previous birth had higher risks for perinatal [OR = 2.08, CI = 1.11,3.89] and neonatal [OR = 4.31, CI = 1.77,10.52] mortality, and adolescents with a previous abortion had higher risks for stillbirths [OR = 3.31, CI = 1.01,10.88] and preterm births [OR = 2.21, CI = 1.07,4.58]. After adjusting for maternal nationality, partner status, smoking, prenatal care and pre-pregnancy BMI, adolescents with a previous birth were at higher risk for perinatal [OR = 2.35, CI = 1.14,4.86] and neonatal mortality [OR = 4.70, CI = 1.60,13.81], and adolescents with a previous abortion had a higher risk for very low birthweight infants [OR = 2.74, CI = 1.06,7.09] than nulliparous teenagers. Conclusion: The results suggest that teenagers who give birth twice as adolescents have worse outcomes in their second pregnancy compared to those teenagers who are giving birth for the first time. The prevention of a second pregnancy during adolescence is an important public health objective and should be addressed by health care providers who attend the first birth or the abortion.

  10. Numerical simulation of the shot peening process under previous loading conditions

    International Nuclear Information System (INIS)

    Romero-Ángeles, B; Urriolagoitia-Sosa, G; Torres-San Miguel, C R; Molina-Ballinas, A; Benítez-García, H A; Vargas-Bustos, J A; Urriolagoitia-Calderón, G

    2015-01-01

    This research presents a numerical simulation of the shot peening process and determines the residual stress field induced into a component with a previous loading history. The importance of this analysis is based on the fact that mechanical elements subjected to shot peening have also undergone manufacturing processes, which convert raw material into finished product. The material is thus not provided in a virgin state; it has a previous loading history caused by the manner in which it is fabricated. This condition could alter some beneficial aspects of the residual stress induced by shot peening and could accelerate crack nucleation and propagation. Studies were performed on beams subjected to strain hardening in tension (5ε_y) before shot peening was applied. These results were then compared with a numerical assessment of the residual stress field induced by shot peening in a component (beam) without any previous loading history. This paper clearly shows the detrimental or beneficial effect that previous loading history can have on the mechanical component and how it can be controlled to improve the mechanical behavior of the material.

  11. Does previous use affect litter box appeal in multi-cat households?

    Science.gov (United States)

    Ellis, J J; McGowan, R T S; Martin, F

    2017-08-01

    It is commonly assumed that cats actively avoid eliminated materials (especially in multi-cat homes), suggesting regular litter box cleaning as the best defense against out-of-box elimination. The relationship between previous use and litter box appeal to familiar subsequent users is currently unknown. The purpose of this study was to investigate the relationship between previous litter box use and the identity of the previous user, type of elimination, odor, and presence of physical/visual obstructions in a multi-cat household scenario. Cats preferred a clean litter box to a dirty one, but the identity of the previous user had no impact on preferences. While the presence of odor from urine and/or feces did not impact litter box preferences, the presence of odorless faux-urine and/or feces did - with the presence of faux-feces being preferred over faux-urine. Results suggest neither malodor nor chemical communication play a role in litter box preferences, and instead emphasize the importance of regular removal of physical/visual obstructions as the key factor in promoting proper litter box use. Copyright © 2017 The Authors. Published by Elsevier B.V. All rights reserved.

  12. Dexamethasone intravitreal implant in previously treated patients with diabetic macular edema : Subgroup analysis of the MEAD study

    OpenAIRE

    Augustin, A.J.; Kuppermann, B.D.; Lanzetta, P.; Loewenstein, A.; Li, X.; Cui, H.; Hashad, Y.; Whitcup, S.M.; Abujamra, S.; Acton, J.; Ali, F.; Antoszyk, A.; Awh, C.C.; Barak, A.; Bartz-Schmidt, K.U.

    2015-01-01

    Background: Dexamethasone intravitreal implant 0.7 mg (DEX 0.7) was approved for treatment of diabetic macular edema (DME) after demonstration of its efficacy and safety in the MEAD registration trials. We performed subgroup analysis of MEAD study results to evaluate the efficacy and safety of DEX 0.7 treatment in patients with previously treated DME. Methods: Three-year, randomized, sham-controlled phase 3 study in patients with DME, best-corrected visual acuity (BCVA) of 34-68 Early Treatment...

  13. Logic Learning Machine and standard supervised methods for Hodgkin's lymphoma prognosis using gene expression data and clinical variables.

    Science.gov (United States)

    Parodi, Stefano; Manneschi, Chiara; Verda, Damiano; Ferrari, Enrico; Muselli, Marco

    2018-03-01

    This study evaluates the performance of a set of machine learning techniques in predicting the prognosis of Hodgkin's lymphoma using clinical factors and gene expression data. Analysed samples from 130 Hodgkin's lymphoma patients included a small set of clinical variables and more than 54,000 gene features. Machine learning classifiers included three black-box algorithms (k-nearest neighbour, Artificial Neural Network, and Support Vector Machine) and two methods based on intelligible rules (Decision Tree and the innovative Logic Learning Machine method). Support Vector Machine clearly outperformed any of the other methods. Among the two rule-based algorithms, Logic Learning Machine performed better and identified a set of simple intelligible rules based on a combination of clinical variables and gene expressions. Decision Tree identified a non-coding gene (XIST) involved in the early phases of X chromosome inactivation that was overexpressed in females and in non-relapsed patients. XIST expression might be responsible for the better prognosis of female Hodgkin's lymphoma patients.

  14. A Method of Data Aggregation for Wearable Sensor Systems

    Directory of Open Access Journals (Sweden)

    Bo Shen

    2016-06-01

    Data aggregation has been considered an effective way to decrease the data to be transferred in sensor networks. Particularly for wearable sensor systems, a smaller battery has less energy, which makes energy conservation in data transmission more important. Nevertheless, wearable sensor systems usually have features like frequently and dynamically changing topologies and data over a large range, which current aggregation methods cannot accommodate. In this paper, we study systems composed of many wearable devices with sensors, such as the network of a tactical unit, and introduce an energy-consumption-balanced method of data aggregation, named LDA-RT. In the proposed method, we develop a query algorithm based on the idea of ‘happened-before’ to construct a dynamic, energy-balancing routing tree. We also present a distributed data aggregating and sorting algorithm to execute top-k queries and decrease the data that must be transferred among wearable devices. Combining these algorithms, LDA-RT tries to balance energy consumption to prolong the lifetime of wearable sensor systems. Results of evaluation indicate that LDA-RT performs well in constructing routing trees and balancing energy. It also outperforms the filter-based top-k monitoring approach in energy consumption, load balance, and the network’s lifetime, especially for highly dynamic data sources.
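
    As a rough sketch of the distributed top-k idea underlying LDA-RT's aggregation step (not the published algorithm), the snippet below has each node send only its local top-k, which the aggregator merges; the global top-k is always contained in the union of the local lists, so transfer per node is bounded by k.

        # Distributed top-k: merge per-node local top-k candidate lists.
        import heapq

        def local_top_k(readings, k):
            return heapq.nlargest(k, readings)

        def aggregate_top_k(nodes, k):
            merged = [r for node in nodes for r in local_top_k(node, k)]
            return heapq.nlargest(k, merged)

        nodes = [[3, 9, 1, 7], [8, 2, 6], [5, 5, 10]]   # per-device sensor values
        print(aggregate_top_k(nodes, k=3))               # [10, 9, 8]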

  15. Robust and efficient method for matching features in omnidirectional images

    Science.gov (United States)

    Zhu, Qinyi; Zhang, Zhijiang; Zeng, Dan

    2018-04-01

    Binary descriptors have been widely used in many real-time applications due to their efficiency. These descriptors are commonly designed for perspective images but perform poorly on omnidirectional images, which are severely distorted. To address this issue, this paper proposes tangent plane BRIEF (TPBRIEF) and adapted log polar grid-based motion statistics (ALPGMS). TPBRIEF projects keypoints to a unit sphere and applies the fixed test set in the BRIEF descriptor on the tangent plane of the unit sphere. The fixed test set is then backprojected onto the original distorted images to construct the distortion invariant descriptor. TPBRIEF directly enables keypoint detecting and feature describing on original distorted images, whereas other approaches correct the distortion through image resampling, which introduces artifacts and adds time cost. With ALPGMS, omnidirectional images are divided into circular arches named adapted log polar grids. Whether a match is true or false is then determined by simply thresholding the match numbers in a grid pair where the two matched points are located. Experiments show that TPBRIEF greatly improves the feature matching accuracy and ALPGMS robustly removes wrong matches. Our proposed method outperforms the state-of-the-art methods.

  16. US line-ups outperform UK line-ups

    OpenAIRE

    Seale-Carlisle, Travis M.; Mickes, Laura

    2016-01-01

    In the USA and the UK, many thousands of police suspects are identified by eyewitnesses every year. Unfortunately, many of those suspects are innocent, which becomes evident when they are exonerated by DNA testing, often after having been imprisoned for years. It is, therefore, imperative to use identification procedures that best enable eyewitnesses to discriminate innocent from guilty suspects. Although police investigators in both countries often administer line-up procedures, the details ...

  17. Effectiveness of sound therapy in patients with tinnitus resistant to previous treatments: importance of adjustments

    Directory of Open Access Journals (Sweden)

    Flavia Alencar de Barros Suzuki

    INTRODUCTION: The difficulty in choosing the appropriate therapy for chronic tinnitus relates to its variable impact on the quality of life of affected patients, and thus requires individualization of treatment. OBJECTIVE: To evaluate the effectiveness of using sound generators with individual adjustments to relieve tinnitus in patients unresponsive to previous treatments. METHODS: A prospective study of 10 patients with chronic tinnitus who were unresponsive to previous drug treatments, five males and five females, with ages ranging from 41 to 78 years. Bilateral sound generators (Reach 62 or Mind 9 models) were used daily for at least 6 h during 18 months. The patients were evaluated at the beginning, after 1 month, and every 3 months until 18 months through acuphenometry, minimum masking level, the Tinnitus Handicap Inventory, a visual analog scale, and the Hospital Anxiety and Depression Scale. The sound generators were adjusted at each visit. RESULTS: There was a reduction of the Tinnitus Handicap Inventory in nine patients using a protocol with a customized approach, independent of the psychoacoustic characteristics of the tinnitus. The best response to treatment occurred in those with whistle-type tinnitus. A correlation among the adjustments, tinnitus loudness and minimum masking level was found. Only one patient, who had an indication of depression by the Hospital Anxiety and Depression Scale, did not respond to sound therapy. CONCLUSION: There was improvement in quality of life (Tinnitus Handicap Inventory), with good response to sound therapy using customized settings, in patients who did not respond to previous treatments for tinnitus.

  18. Pituitary-adrenocortical adjustments to transport stress in horses with previous different handling and transport conditions

    Directory of Open Access Journals (Sweden)

    E. Fazio

    2016-08-01

    Aim: Changes in the hypothalamic-pituitary-adrenal (HPA) axis response to long-distance transportation result in increased adrenocorticotropic hormone (ACTH) and cortisol levels. The purpose of the study was to quantify the level of short-term road transport stress on circulating ACTH and cortisol concentrations, related to the effect of previous handling and transport experience of horses. Materials and Methods: The study was performed on 56 healthy horses after short-term road transport of 30 km. The horses were divided into four groups, Groups A, B, C, and D, with respect to the handling quality: good (Groups A and B), bad (Group D), and minimal handling (Group C) conditions. According to previous transport experience, the horses were divided as follows: horses of Groups A and D had experienced long-distance transportation before; horses of Groups B and C had limited experience of transportation. Results: One-way RM-ANOVA showed significant effects of transport on ACTH changes in Groups B and C and on cortisol changes in both Groups A and B. Groups A and B showed lower baseline ACTH and cortisol values than Groups C and D; Groups A and B showed lower post-transport ACTH values than Groups C and D. Groups A, B, and C showed lower post-transport cortisol values than Group D. Only horses of Groups A and B showed an adequate capacity of stress response to transportation. Conclusion: Previous transport experience and quality of handling could influence the HPA axis physiological responses of horses after short-term road transport.

  19. The effect of warm-up, static stretching and dynamic stretching on hamstring flexibility in previously injured subjects

    Directory of Open Access Journals (Sweden)

    Murray Elaine

    2009-04-01

    Background: Warm-up and stretching are suggested to increase hamstring flexibility and reduce the risk of injury. This study examined the short-term effects of warm-up, static stretching and dynamic stretching on hamstring flexibility in individuals with previous hamstring injury and uninjured controls. Methods: A randomised crossover study design, over 2 separate days. Hamstring flexibility was assessed using passive knee extension range of motion (PKE ROM). 18 previously injured individuals and 18 uninjured controls participated. On both days, four measurements of PKE ROM were recorded: (1) at baseline; (2) after warm-up; (3) after stretch (static or dynamic); and (4) after a 15-minute rest. Participants carried out both static and dynamic stretches, but on different days. Data were analysed using ANOVA. Results: Across both groups, there was a significant main effect for time (p < 0.05). Using ANCOVA to adjust for the non-significant (p = 0.141) baseline difference between groups, the previously injured group demonstrated a greater response to warm-up and static stretching; however, this was not statistically significant (p = 0.05). Conclusion: Warm-up significantly increased hamstring flexibility. Static stretching also increased hamstring flexibility, whereas dynamic stretching did not, in agreement with previous findings on uninjured controls. The effect of warm-up and static stretching on flexibility was greater in those with reduced flexibility post-injury, but this did not reach statistical significance. Further prospective research is required to validate the hypothesis that increased flexibility improves outcomes. Trial Registration: ACTRN12608000638336

  20. Effect of Previous Irradiation on Vascular Thrombosis of Microsurgical Anastomosis: A Preclinical Study in Rats

    Science.gov (United States)

    Gallardo-Calero, Irene; López-Fernández, Alba; Romagosa, Cleofe; Vergés, Ramona; Aguirre-Canyadell, Marius; Soldado, Francisco; Velez, Roberto

    2016-01-01

    Background: The objective of the present investigation was to compare the effect of neoadjuvant irradiation on the microvascular anastomosis in cervical bundle using an experimental model in rats. Methods: One hundred forty male Sprague–Dawley rats were allocated into 4 groups: group I, control, arterial microanastomosis; group II, control, venous microanastomosis; group III, arterial microanastomosis with previous irradiation (20 Gy); and group IV, venous microanastomosis with previous irradiation (20 Gy). Clinical parameters, technical values of anastomosis, patency, and histopathological parameters were evaluated. Results: Irradiated groups (III and IV) and vein anastomosis groups (II and IV) showed significantly increased technical difficulties. Group IV showed significantly reduced patency rates (7/35) when compared with the control group (0/35). Radiotherapy significantly decreased the patency rates of the vein (7/35) when compared with the artery (1/35). Groups III and IV showed significantly reduced number of endothelial cells and also showed the presence of intimal thickening and adventitial fibrosis as compared with the control group. Conclusion: Neoadjuvant radiotherapy reduces the viability of the venous anastomosis in a preclinical rat model with a significant increase in the incidence of vein thrombosis. PMID:27975009

  1. l0TV: A Sparse Optimization Method for Impulse Noise Image Restoration

    KAUST Repository

    Yuan, Ganzhao; Ghanem, Bernard

    2017-01-01

    Total Variation (TV) is an effective and popular prior model in the field of regularization-based image processing. This paper focuses on total variation for removing impulse noise in image restoration. This type of noise frequently arises in data acquisition and transmission due to many reasons, e.g. a faulty sensor or analog-to-digital converter errors. Removing this noise is an important task in image restoration. State-of-the-art methods such as Adaptive Outlier Pursuit (AOP), which is based on TV with l02-norm data fidelity, only give sub-optimal performance. In this paper, we propose a new sparse optimization method, called l0TV-PADMM, which solves the TV-based restoration problem with l0-norm data fidelity. To effectively deal with the resulting non-convex non-smooth optimization problem, we first reformulate it as an equivalent biconvex Mathematical Program with Equilibrium Constraints (MPEC), and then solve it using a proximal Alternating Direction Method of Multipliers (PADMM). Our l0TV-PADMM method finds a desirable solution to the original l0-norm optimization problem and is proven to be convergent under mild conditions. We apply l0TV-PADMM to the problems of image denoising and deblurring in the presence of impulse noise. Our extensive experiments demonstrate that l0TV-PADMM outperforms state-of-the-art image restoration methods.
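
    The abstract does not spell out the optimization machinery, but the character of an l0-norm data-fidelity term is easy to convey through its elementwise proximal operator: each pixel either agrees with the observation at no cost, or pays a fixed penalty to keep its own value. A minimal numpy sketch (the function name and usage are illustrative, not the authors' l0TV-PADMM code):

      import numpy as np

      def prox_l0_fidelity(v, b, lam):
          # prox of  lam * ||x - b||_0  at v, solved elementwise:
          # keeping x = v costs lam; snapping x = b costs (v - b)^2 / 2
          return np.where(lam < 0.5 * (v - b) ** 2, v, b)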

  3. Robust gene selection methods using weighting schemes for microarray data analysis.

    Science.gov (United States)

    Kang, Suyeon; Song, Jongwoo

    2017-09-02

    A common task in microarray data analysis is to identify informative genes that are differentially expressed between two different states. Owing to the high-dimensional nature of microarray data, identification of significant genes has been essential in analyzing the data. However, the performances of many gene selection techniques are highly dependent on the experimental conditions, such as the presence of measurement error or a limited number of sample replicates. We have proposed new filter-based gene selection techniques, by applying a simple modification to significance analysis of microarrays (SAM). To prove the effectiveness of the proposed method, we considered a series of synthetic datasets with different noise levels and sample sizes along with two real datasets. The following findings were made. First, our proposed methods outperform conventional methods for all simulation set-ups. In particular, our methods are much better when the given data are noisy and sample size is small. They showed relatively robust performance regardless of noise level and sample size, whereas the performance of SAM became significantly worse as the noise level became high or sample size decreased. When sufficient sample replicates were available, SAM and our methods showed similar performance. Finally, our proposed methods are competitive with traditional methods in classification tasks for microarrays. The results of simulation study and real data analysis have demonstrated that our proposed methods are effective for detecting significant genes and classification tasks, especially when the given data are noisy or have few sample replicates. By employing weighting schemes, we can obtain robust and reliable results for microarray data analysis.
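
    The weighting schemes themselves are not given in the abstract, but the SAM statistic that they modify has a compact form; the sketch below is that generic baseline (the fudge constant s0 is what stabilises genes with tiny variance), not the authors' weighted variant:

      import numpy as np

      def sam_like_statistic(x, y, s0=0.1):
          # x, y: genes-by-replicates expression matrices for the two states
          nx, ny = x.shape[1], y.shape[1]
          mx, my = x.mean(axis=1), y.mean(axis=1)
          # pooled per-gene standard error, as in the original SAM statistic
          ss = ((x - mx[:, None]) ** 2).sum(axis=1) + ((y - my[:, None]) ** 2).sum(axis=1)
          s = np.sqrt(ss / (nx + ny - 2) * (1.0 / nx + 1.0 / ny))
          return (mx - my) / (s + s0)   # large |d| flags differential expression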

  4. A Registration Method Based on Contour Point Cloud for 3D Whole-Body PET and CT Images

    Directory of Open Access Journals (Sweden)

    Zhiying Song

    2017-01-01

    Full Text Available The PET and CT fusion image, combining the anatomical and functional information, has important clinical meaning. An effective registration of PET and CT images is the basis of image fusion. This paper presents a multithread registration method based on contour point cloud for 3D whole-body PET and CT images. Firstly, a geometric feature-based segmentation (GFS) method and a dynamic threshold denoising (DTD) method are creatively proposed to preprocess CT and PET images, respectively. Next, a new automated trunk slices extraction method is presented for extracting feature point clouds. Finally, the multithread Iterative Closest Point algorithm is adopted to drive an affine transform. We compare our method with a multiresolution registration method based on Mattes Mutual Information on 13 pairs (246~286 slices per pair) of 3D whole-body PET and CT data. Experimental results demonstrate the registration effectiveness of our method with lower negative normalization correlation (NC = −0.933) on feature images and less Euclidean distance error (ED = 2.826) on landmark points, outperforming the source data (NC = −0.496, ED = 25.847) and the compared method (NC = −0.614, ED = 16.085). Moreover, our method is about ten times faster than the compared one.
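
    The GFS/DTD preprocessing is specific to the paper, but the Iterative Closest Point loop that drives an affine transform can be sketched in a few lines; the plain least-squares affine fit and single-threaded loop below are simplifying assumptions, not the authors' multithreaded implementation:

      import numpy as np
      from scipy.spatial import cKDTree

      def icp_affine(source, target, n_iter=50):
          # source: (N, 3) and target: (M, 3) contour point clouds
          src = source.copy()
          tree = cKDTree(target)
          for _ in range(n_iter):
              _, idx = tree.query(src)                 # nearest target point per source point
              matched = target[idx]
              # solve matched ~= [src, 1] @ M by homogeneous least squares
              H = np.hstack([src, np.ones((len(src), 1))])
              M, *_ = np.linalg.lstsq(H, matched, rcond=None)
              src = H @ M                              # apply the refreshed affine map
          return src, M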

  5. Qualitative Research Methods to Advance Research on Health Inequities among Previously Incarcerated Women Living with HIV in Alabama

    Science.gov (United States)

    Sprague, Courtenay; Scanlon, Michael L.; Pantalone, David W.

    2017-01-01

    Justice-involved HIV-positive women have poor health outcomes that constitute health inequities. Researchers have yet to embrace the range of qualitative methods to elucidate how psychosocial histories are connected to pathways of vulnerability to HIV and incarceration for this key population. We used life course narratives and…

  6. Impacts of previous crops on Fusarium foot and root rot, and on yields of durum wheat in North West Tunisia

    Directory of Open Access Journals (Sweden)

    Samia CHEKALI

    2016-07-01

    Full Text Available The impacts of ten previous crop rotations (cereals, legumes and fallow) on Fusarium foot and root rot of durum wheat were investigated for three cropping seasons in a trial established in 2004 in Northwest Tunisia. Fungi isolated from the roots and stem bases were identified using morphological and molecular methods, and were primarily Fusarium culmorum and F. pseudograminearum. Under low rainfall conditions, the previous crop affected F. pseudograminearum incidence on durum wheat roots but not F. culmorum. Compared to continuous cropping of durum wheat, barley as a previous crop increased disease incidence more than fivefold, while legumes and fallow tended to reduce incidence. Barley as a previous crop increased wheat disease severity by 47%, compared to other rotations. Grain yield was negatively correlated with the incidence of F. culmorum infection, both in roots and stem bases, and fitted an exponential model (R2 = -0.61 for roots and -0.77 for stem bases, P < 0.0001). Fusarium pseudograminearum was also negatively correlated with yield and fitted an exponential model (R2 = -0.53 on roots and -0.71 on stem bases, P < 0.0001) but was not correlated with severity.

  7. Statistical learning methods for aero-optic wavefront prediction and adaptive-optic latency compensation

    Science.gov (United States)

    Burns, W. Robert

    Since the early 1970s, research in airborne laser systems has been the subject of continued interest. Airborne laser applications depend on being able to propagate a near diffraction-limited laser beam from an airborne platform. Turbulent air flowing over the aircraft produces density fluctuations through which the beam must propagate. Because the index of refraction of the air is directly related to the density, the turbulent flow imposes aberrations on the beam passing through it. This problem is referred to as Aero-Optics. Aero-Optics is recognized as a major technical issue that needs to be solved before airborne optical systems can become routinely fielded. This dissertation research specifically addresses an approach to mitigating the deleterious effects imposed on an airborne optical system by aero-optics. A promising technology is adaptive optics: a feedback control method that measures optical aberrations and imprints the conjugate aberrations onto an outgoing beam. The challenge is that it is a computationally difficult problem, since aero-optic disturbances are on the order of kilohertz for practical applications. High control loop frequencies and high disturbance frequencies mean that adaptive-optic systems are sensitive to latency in sensors, mirrors, amplifiers, and computation. These latencies build up to result in a dramatic reduction in the system's effective bandwidth. This work presents two variations of an algorithm that uses model reduction and data-driven predictors to estimate the evolution of measured wavefronts over a short temporal horizon and thus compensate for feedback latency. The efficacies of the two methods are compared in this research and evaluated against similar algorithms that have been previously developed. The best version achieved over 75% disturbance rejection in simulation in the most optically active flow region in the wake of a turret, considerably outperforming conventional approaches. The algorithm is shown to be
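
    The abstract only names the ingredients, model reduction plus a data-driven predictor, so the following is a plausible reduced-order sketch rather than the dissertation's algorithm: POD modes from an SVD of wavefront snapshots, with a least-squares AR(p) model advancing the modal coefficients one step ahead:

      import numpy as np

      def pod_ar_predictor(snapshots, r=10, p=4):
          # snapshots: (n_pixels, n_time) matrix of measured wavefronts
          U, s, Vt = np.linalg.svd(snapshots, full_matrices=False)
          modes = U[:, :r]                                 # POD spatial modes
          coeffs = np.diag(s[:r]) @ Vt[:r]                 # (r, n_time) modal history
          X = np.hstack([coeffs[:, t - p:t].reshape(-1, 1)
                         for t in range(p, coeffs.shape[1])]).T   # samples x (r*p)
          Y = coeffs[:, p:].T                              # next-step coefficients
          A, *_ = np.linalg.lstsq(X, Y, rcond=None)        # AR(p) map in modal space

          def predict(history):                            # history: (r, p) latest coefficients
              return history.reshape(1, -1) @ A            # predicted next coefficients
          return modes, predict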

  8. CompaRNA: a server for continuous benchmarking of automated methods for RNA secondary structure prediction

    Science.gov (United States)

    Puton, Tomasz; Kozlowski, Lukasz P.; Rother, Kristian M.; Bujnicki, Janusz M.

    2013-01-01

    We present a continuous benchmarking approach for the assessment of RNA secondary structure prediction methods implemented in the CompaRNA web server. As of 3 October 2012, the performance of 28 single-sequence and 13 comparative methods has been evaluated on RNA sequences/structures released weekly by the Protein Data Bank. We also provide a static benchmark generated on RNA 2D structures derived from the RNAstrand database. Benchmarks on both data sets offer insight into the relative performance of RNA secondary structure prediction methods on RNAs of different size and with respect to different types of structure. According to our tests, on the average, the most accurate predictions obtained by a comparative approach are generated by CentroidAlifold, MXScarna, RNAalifold and TurboFold. On the average, the most accurate predictions obtained by single-sequence analyses are generated by CentroidFold, ContextFold and IPknot. The best comparative methods typically outperform the best single-sequence methods if an alignment of homologous RNA sequences is available. This article presents the results of our benchmarks as of 3 October 2012, whereas the rankings presented online are continuously updated. We will gladly include new prediction methods and new measures of accuracy in the new editions of CompaRNA benchmarks. PMID:23435231
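
    The abstract does not state the accuracy measures, but benchmarks of this kind conventionally report base-pair sensitivity and positive predictive value; a minimal version of such a metric, assuming structures are encoded as sets of (i, j) index pairs with i < j (not CompaRNA's actual code), is:

      def bp_metrics(pred, ref):
          # pred, ref: sets of predicted / reference base pairs
          tp = len(pred & ref)                       # correctly predicted pairs
          sens = tp / len(ref) if ref else 0.0       # fraction of true pairs found
          ppv = tp / len(pred) if pred else 0.0      # fraction of predictions correct
          return sens, ppv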

  9. A Timed Colored Petri Net Simulation-Based Self-Adaptive Collaboration Method for Production-Logistics Systems

    Directory of Open Access Journals (Sweden)

    Zhengang Guo

    2017-03-01

    Full Text Available Complex and customized manufacturing requires a high level of collaboration between production and logistics in a flexible production system. With the widespread use of Internet of Things technology in manufacturing, a great amount of real-time and multi-source manufacturing and logistics data is created that can be used to support production-logistics collaboration. To exploit these data, this paper proposes a timed colored Petri net simulation-based self-adaptive collaboration method for Internet of Things-enabled production-logistics systems. The method combines the schedule of token sequences in the timed colored Petri net with the real-time status of key production and logistics equipment. The key equipment is made ‘smart’ so that it can actively publish or request logistics tasks. An integrated framework based on a cloud service platform is introduced to provide the basis for self-adaptive collaboration of production-logistics systems. A simulation experiment is conducted using CPN Tools to validate the performance and applicability of the proposed method. Computational experiments demonstrate that the proposed method outperforms the event-driven method in terms of reductions of waiting time, makespan, and electricity consumption. The proposed method is also applicable to other manufacturing systems for implementing production-logistics collaboration.
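
    As a toy illustration of the modeling style involved (not the authors' CPN Tools model), a timed Petri net simulator fits in a few lines: an enabled transition consumes its input tokens immediately and delivers its output tokens after a firing delay:

      import heapq

      def simulate(net, marking, horizon=100.0):
          # net: {name: (input_places, output_places, delay)}; marking: {place: tokens}
          clock, pending = 0.0, []
          while clock <= horizon:
              fired = False
              for name, (ins, outs, delay) in net.items():
                  if all(marking.get(p, 0) > 0 for p in ins):
                      for p in ins:                          # consume input tokens
                          marking[p] -= 1
                      heapq.heappush(pending, (clock + delay, name, outs))
                      fired = True
              if not fired:
                  if not pending:                            # nothing enabled, nothing scheduled
                      break
                  clock, name, outs = heapq.heappop(pending)
                  for p in outs:                             # deliver output tokens
                      marking[p] = marking.get(p, 0) + 1
          return marking

      # e.g. simulate({'move': (['buffer'], ['machine'], 2.0)}, {'buffer': 3})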

  11. New Hybrid Monte Carlo methods for efficient sampling. From physics to biology and statistics

    International Nuclear Information System (INIS)

    Akhmatskaya, Elena; Reich, Sebastian

    2011-01-01

    We introduce a class of novel hybrid methods for detailed simulations of large complex systems in physics, biology, materials science and statistics. These generalized shadow Hybrid Monte Carlo (GSHMC) methods combine the advantages of stochastic and deterministic simulation techniques. They utilize a partial momentum update to retain some of the dynamical information, employ modified Hamiltonians to overcome exponential performance degradation with the system's size, and make use of the multi-scale nature of complex systems. Variants of GSHMC were developed for atomistic simulation, particle simulation and statistics: GSHMC (thermodynamically consistent implementation of constant-temperature molecular dynamics), MTS-GSHMC (multiple-time-stepping GSHMC), meso-GSHMC (Metropolis-corrected dissipative particle dynamics (DPD) method), and a generalized shadow Hamiltonian Monte Carlo, GSHmMC (a GSHMC for statistical simulations). All of these are compatible with other enhanced sampling techniques and suitable for massively parallel computing, allowing for a range of multi-level parallel strategies. A brief description of the GSHMC approach, examples of its application on high-performance computers and comparison with other existing techniques are given. Our approach is shown to resolve such problems as resonance instabilities of the MTS methods and non-preservation of thermodynamic equilibrium properties in DPD, and to outperform known methods in sampling efficiency by an order of magnitude. (author)
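
    The distinguishing ingredient named in the abstract, the partial momentum update, can be sketched on top of an ordinary leapfrog HMC step; the toy version below omits the modified (shadow) Hamiltonians and reweighting that define full GSHMC, so it is an illustration of the momentum-mixing idea only:

      import numpy as np
      rng = np.random.default_rng(0)

      def ghmc_step(q, p, logpi, grad_logpi, eps=0.1, L=10, phi=0.3):
          # partial momentum refresh: mix old momentum with fresh Gaussian noise
          p = np.cos(phi) * p + np.sin(phi) * rng.standard_normal(p.shape)
          q_new, p_new = q.copy(), p.copy()
          for _ in range(L):                                 # leapfrog integration
              p_new = p_new + 0.5 * eps * grad_logpi(q_new)
              q_new = q_new + eps * p_new
              p_new = p_new + 0.5 * eps * grad_logpi(q_new)
          # Metropolis test on the total energy change
          dH = (logpi(q_new) - 0.5 * p_new @ p_new) - (logpi(q) - 0.5 * p @ p)
          if np.log(rng.random()) < dH:
              return q_new, p_new
          return q, -p                                       # flip momentum on rejection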

  12. A two-stage method for microcalcification cluster segmentation in mammography by deformable models

    International Nuclear Information System (INIS)

    Arikidis, N.; Kazantzi, A.; Skiadopoulos, S.; Karahaliou, A.; Costaridou, L.; Vassiou, K.

    2015-01-01

    -validation methodology. A previously developed B-spline active rays segmentation method was also considered for comparison purposes. Results: Interobserver and intraobserver segmentation agreements (median and [25%, 75%] quartile range) were substantial with respect to the distance metrics HDIST cluster (2.3 [1.8, 2.9] and 2.5 [2.1, 3.2] pixels) and AMINDIST cluster (0.8 [0.6, 1.0] and 1.0 [0.8, 1.2] pixels), while moderate with respect to AOM cluster (0.64 [0.55, 0.71] and 0.59 [0.52, 0.66]). The proposed segmentation method outperformed (0.80 ± 0.04) statistically significantly (Mann-Whitney U-test, p < 0.05) the B-spline active rays segmentation method (0.69 ± 0.04), suggesting the significance of the proposed semiautomated method. Conclusions: Results indicate a reliable semiautomated segmentation method for MC clusters offered by deformable models, which could be utilized in MC cluster quantitative image analysis

  13. Large-Scale Portfolio Optimization Using Multiobjective Evolutionary Algorithms and Preselection Methods

    Directory of Open Access Journals (Sweden)

    B. Y. Qu

    2017-01-01

    Full Text Available Portfolio optimization problems involve selection of different assets to invest in order to maximize the overall return and minimize the overall risk simultaneously. The complexity of the optimal asset allocation problem increases with an increase in the number of assets available to select from for investing. The optimization problem becomes computationally challenging when there are more than a few hundreds of assets to select from. To reduce the complexity of large-scale portfolio optimization, two asset preselection procedures that consider return and risk of individual asset and pairwise correlation to remove assets that may not potentially be selected into any portfolio are proposed in this paper. With these asset preselection methods, the number of assets considered to be included in a portfolio can be increased to thousands. To test the effectiveness of the proposed methods, a Normalized Multiobjective Evolutionary Algorithm based on Decomposition (NMOEA/D) and several other commonly used multiobjective evolutionary algorithms are applied and compared. Six experiments with different settings are carried out. The experimental results show that with the proposed methods the simulation time is reduced while return-risk trade-off performances are significantly improved. Meanwhile, the NMOEA/D is able to outperform other compared algorithms on all experiments according to the comparative analysis.
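
    The abstract describes the preselection criteria, individual return/risk plus pairwise correlation, without formulas, so the filter below is one plausible reading with illustrative thresholds and ranking rules, not the authors' exact procedure:

      import numpy as np

      def preselect(returns, corr_cut=0.95):
          # returns: (n_periods, n_assets) matrix of historical returns
          mu, sigma = returns.mean(axis=0), returns.std(axis=0)
          n = returns.shape[1]
          keep = np.ones(n, bool)
          for i in range(n):
              # drop assets dominated by some other asset (higher return AND lower risk)
              if np.any((mu > mu[i]) & (sigma < sigma[i])):
                  keep[i] = False
          idx = np.where(keep)[0]
          C = np.corrcoef(returns[:, idx], rowvar=False)
          drop = set()
          for a in range(len(idx)):
              for b in range(a + 1, len(idx)):
                  if C[a, b] > corr_cut and b not in drop:
                      # of a highly correlated pair, keep the better return-to-risk asset
                      worse = a if mu[idx[a]] / sigma[idx[a]] < mu[idx[b]] / sigma[idx[b]] else b
                      drop.add(worse)
          return idx[[k for k in range(len(idx)) if k not in drop]]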

  14. A semantics-based method for clustering of Chinese web search results

    Science.gov (United States)

    Zhang, Hui; Wang, Deqing; Wang, Li; Bi, Zhuming; Chen, Yong

    2014-01-01

    Information explosion is a critical challenge to the development of modern information systems. In particular, when the application of an information system is over the Internet, the amount of information over the web has been increasing exponentially and rapidly. Search engines, such as Google and Baidu, are essential tools for people to find the information from the Internet. Valuable information, however, is still likely submerged in the ocean of search results from those tools. By clustering the results into different groups based on subjects automatically, a search engine with the clustering feature allows users to select the most relevant results quickly. In this paper, we propose an online semantics-based method to cluster Chinese web search results. First, we employ the generalised suffix tree to extract the longest common substrings (LCSs) from search snippets. Second, we use the HowNet to calculate the similarities of the words derived from the LCSs, and extract the most representative features by constructing the vocabulary chain. Third, we construct a vector of text features and calculate snippets' semantic similarities. Finally, we improve the Chameleon algorithm to cluster snippets. Extensive experimental results have shown that the proposed algorithm outperforms the suffix tree clustering method and other traditional clustering methods.
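
    The paper extracts longest common substrings with a generalised suffix tree for efficiency; purely for illustration, the same quantity for a single pair of snippets can be obtained from the standard library (a quadratic-time stand-in, not the paper's data structure):

      from difflib import SequenceMatcher

      def longest_common_substring(a, b):
          # returns the longest contiguous substring shared by a and b
          m = SequenceMatcher(None, a, b).find_longest_match(0, len(a), 0, len(b))
          return a[m.a:m.a + m.size]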

  15. Traffic Flow Prediction with Rainfall Impact Using a Deep Learning Method

    Directory of Open Access Journals (Sweden)

    Yuhan Jia

    2017-01-01

    Full Text Available Accurate traffic flow prediction is increasingly essential for successful traffic modeling, operation, and management. Traditional data-driven traffic flow prediction approaches have largely assumed restrictive (shallow) model architectures and do not leverage the large amount of environmental data available. Inspired by deep learning methods with more complex model architectures and effective data mining capabilities, this paper introduces the deep belief network (DBN) and long short-term memory (LSTM) to predict urban traffic flow considering the impact of rainfall. The rainfall-integrated DBN and LSTM can learn the features of traffic flow under various rainfall scenarios. Experimental results indicate that, with the consideration of the additional rainfall factor, the deep learning predictors have better accuracy than existing predictors and also yield improvements over the original deep learning models without rainfall input. Furthermore, the LSTM can outperform the DBN in capturing the time series characteristics of traffic flow data.
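
    Whatever the network architecture, the rainfall-integrated setup reduces to building supervised samples in which lagged flow and lagged rainfall jointly predict the next flow value; a framework-agnostic sketch of that windowing (the array layout is an assumption):

      import numpy as np

      def make_windows(flow, rain, lag=12):
          # flow, rain: aligned 1-D series; each sample stacks the last `lag`
          # flow readings with the matching rainfall readings
          X, y = [], []
          for t in range(lag, len(flow)):
              X.append(np.concatenate([flow[t - lag:t], rain[t - lag:t]]))
              y.append(flow[t])
          return np.array(X), np.array(y)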

  16. Accelerated gradient methods for total-variation-based CT image reconstruction

    Energy Technology Data Exchange (ETDEWEB)

    Joergensen, Jakob H.; Hansen, Per Christian [Technical Univ. of Denmark, Lyngby (Denmark). Dept. of Informatics and Mathematical Modeling; Jensen, Tobias L.; Jensen, Soeren H. [Aalborg Univ. (Denmark). Dept. of Electronic Systems; Sidky, Emil Y.; Pan, Xiaochuan [Chicago Univ., Chicago, IL (United States). Dept. of Radiology

    2011-07-01

    Total-variation (TV)-based CT image reconstruction has been shown experimentally to be capable of producing accurate reconstructions from sparse-view data. In particular, TV-based reconstruction is well suited for images with piecewise nearly constant regions. Computationally, however, TV-based reconstruction is demanding, especially for 3D imaging, and the reconstruction from clinical data sets is far from being close to real-time. This is undesirable from a clinical perspective, and thus there is an incentive to accelerate the solution of the underlying optimization problem. The TV reconstruction can in principle be found by any optimization method, but in practice the large scale of the systems arising in CT image reconstruction precludes the use of memory-intensive methods such as Newton's method. The simple gradient method has much lower memory requirements, but exhibits prohibitively slow convergence. In the present work we address the question of how to reduce the number of gradient method iterations needed to achieve a high-accuracy TV reconstruction. We consider the use of two accelerated gradient-based methods, GPBB and UPN, to solve the 3D-TV minimization problem in CT image reconstruction. The former incorporates several heuristics from the optimization literature such as Barzilai-Borwein (BB) step size selection and nonmonotone line search. The latter uses a cleverly chosen sequence of auxiliary points to achieve a better convergence rate. The methods are memory efficient and equipped with a stopping criterion to ensure that the TV reconstruction has indeed been found. An implementation of the methods (in C with interface to Matlab) is available for download from http://www2.imm.dtu.dk/~pch/TVReg/. We compare the proposed methods with the standard gradient method, applied to a 3D test problem with synthetic few-view data. We find experimentally that for realistic parameters the proposed methods significantly outperform the standard gradient method. (orig.)
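
    The key ingredient of the GPBB variant, the Barzilai-Borwein step size, is easy to demonstrate on a smoothed TV-regularized least-squares objective; this simplified 2D sketch omits the nonmonotone line search, the nonnegativity projection and the stopping criterion of the released TVReg code, and keeps the boundary handling deliberately crude:

      import numpy as np

      def tv_grad(img, eps=1e-6):
          # gradient of a smoothed isotropic TV term on a 2-D image
          dx = np.diff(img, axis=0, append=img[-1:, :])
          dy = np.diff(img, axis=1, append=img[:, -1:])
          mag = np.sqrt(dx ** 2 + dy ** 2 + eps)
          px, py = dx / mag, dy / mag
          return -((px - np.roll(px, 1, axis=0)) + (py - np.roll(py, 1, axis=1)))

      def bb_gradient(A, b, shape, lam=0.05, n_iter=200):
          # BB step sizes for  min 0.5*||Ax - b||^2 + lam*TV(x), A a dense matrix
          x, x_old, g_old, step = np.zeros(A.shape[1]), None, None, 1e-3
          for _ in range(n_iter):
              g = A.T @ (A @ x - b) + lam * tv_grad(x.reshape(shape)).ravel()
              if g_old is not None:
                  s, y = x - x_old, g - g_old
                  step = (s @ s) / (s @ y + 1e-12)        # BB1 step size
              x_old, g_old = x, g
              x = x - step * g
          return x.reshape(shape)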

  17. bNEAT: a Bayesian network method for detecting epistatic interactions in genome-wide association studies

    Directory of Open Access Journals (Sweden)

    Chen Xue-wen

    2011-07-01

    Full Text Available Abstract Background Detecting epistatic interactions plays a significant role in improving pathogenesis, prevention, diagnosis and treatment of complex human diseases. A recent study in automatic detection of epistatic interactions shows that Markov Blanket-based methods are capable of finding genetic variants strongly associated with common diseases and reducing false positives when the number of instances is large. Unfortunately, a typical dataset from genome-wide association studies consists of a very limited number of examples, where current methods including Markov Blanket-based methods may perform poorly. Results To address small sample problems, we propose a Bayesian network-based approach (bNEAT) to detect epistatic interactions. The proposed method also employs a Branch-and-Bound technique for learning. We apply the proposed method to simulated datasets based on four disease models and a real dataset. Experimental results show that our method outperforms Markov Blanket-based methods and other commonly-used methods, especially when the number of samples is small. Conclusions Our results show bNEAT can obtain a strong power regardless of the number of samples and is especially suitable for detecting epistatic interactions with slight or no marginal effects. The merits of the proposed approach lie in two aspects: a suitable score for Bayesian network structure learning that can reflect higher-order epistatic interactions and a heuristic Bayesian network structure learning method.
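
    bNEAT itself scores Bayesian network structures; as a far simpler stand-in that conveys what an epistatic interaction with weak marginal effects looks like, one can rank SNP pairs by the information their joint genotype carries about disease status beyond the two marginals (an illustrative brute-force scan, assuming genotypes coded 0/1/2 and a binary phenotype):

      import numpy as np
      from itertools import combinations

      def mutual_info(x, y):
          # empirical mutual information between two discrete integer arrays
          joint = np.zeros((x.max() + 1, y.max() + 1))
          for a, b in zip(x, y):
              joint[a, b] += 1
          joint /= joint.sum()
          px, py = joint.sum(1, keepdims=True), joint.sum(0, keepdims=True)
          nz = joint > 0
          return (joint[nz] * np.log(joint[nz] / (px @ py)[nz])).sum()

      def epistasis_scan(genotypes, phenotype, top=10):
          # brute force over SNP pairs; fine for small panels only
          scores = []
          for i, j in combinations(range(genotypes.shape[1]), 2):
              pair = genotypes[:, i] * 3 + genotypes[:, j]   # joint genotype code 0..8
              gain = (mutual_info(pair, phenotype)
                      - mutual_info(genotypes[:, i], phenotype)
                      - mutual_info(genotypes[:, j], phenotype))
              scores.append((gain, i, j))
          return sorted(scores, reverse=True)[:top]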

  18. Cultural differences in early math skills among U.S., Taiwanese, Dutch, and Peruvian preschoolers

    NARCIS (Netherlands)

    Paik, J.H.; van Gelderen, L.; Gonzales, M.; de Jong, P.F.; Hayes, M.

    2011-01-01

    East Asian children have consistently outperformed children from other nations on mathematical tests. However, most previous cross-cultural studies mainly compared East Asian countries and the United States and have largely ignored cultures from other parts of the world. The present study explored

  19. Research Note Effects of previous cultivation on regeneration of ...

    African Journals Online (AJOL)

    We investigated the effects of previous cultivation on regeneration potential under miombo woodlands in a resettlement area, a spatial product of Zimbabwe's land reforms. We predicted that cultivation would affect population structure, regeneration, recruitment and potential grazing capacity of rangelands. Plant attributes ...

  20. Lettuce (Lactuca sativa L. var. Sucrine) Growth Performance in Complemented Aquaponic Solution Outperforms Hydroponics

    Directory of Open Access Journals (Sweden)

    Boris Delaide

    2016-10-01

    Full Text Available Plant growth performance is optimized under hydroponic conditions. The comparison between aquaponics and hydroponics has attracted considerable attention recently, particularly regarding plant yield. However, previous research has not focused on the potential of using aquaponic solution complemented with mineral elements to commercial hydroponic levels in order to increase yield. For this purpose, lettuce plants were put into AeroFlo installations and exposed to hydroponic (HP), aquaponic (AP), or complemented aquaponic (CAP) solutions. The principal finding of this research was that AP and HP treatments exhibited similar (p > 0.05) plant growth, whereas the shoot weight of the CAP treatment showed a significant (p < 0.05) growth rate increase of 39% on average compared to the HP and AP treatments. Additionally, the root weight was similar (p > 0.05) in AP and CAP treatments, and both were significantly higher (p < 0.05) than that observed in the HP treatment. The results highlight the beneficial effect of recirculating aquaculture system (RAS) water on plant growth. The findings represent a further step toward developing decoupled aquaponic systems (i.e., two- or multi-loops) that have the potential to establish a more productive alternative to hydroponic systems. Microorganisms and dissolved organic matter are suspected to play an important role in RAS water for promoting plant roots and shoots growth.

  1. Analysis of 60 706 Exomes Questions the Role of De Novo Variants Previously Implicated in Cardiac Disease

    DEFF Research Database (Denmark)

    Paludan-Müller, Christian; Ahlberg, Gustav; Ghouse, Jonas

    2017-01-01

    BACKGROUND: De novo variants in the exome occur at a rate of 1 per individual per generation, and because of the low reproductive fitness for de novo variants causing severe disease, the likelihood of finding these as standing variations in the general population is low. Therefore, this study...... sought to evaluate the pathogenicity of de novo variants previously associated with cardiac disease based on a large population-representative exome database. METHODS AND RESULTS: We performed a literature search for previous publications on de novo variants associated with severe arrhythmias...... trio studies (>1000 subjects). Of the monogenic variants, 11% (23/211) were present in ExAC, whereas 26% (802/3050) variants believed to increase susceptibility of disease were identified in ExAC. Monogenic de novo variants in ExAC had a total allele count of 109 and with ≈844 expected cases in Ex...

  2. Revisiting chlorophyll extraction methods in biological soil crusts – methodology for determination of chlorophyll a and chlorophyll a + b as compared to previous methods

    Directory of Open Access Journals (Sweden)

    J. Caesar

    2018-03-01

    Full Text Available Chlorophyll concentrations of biological soil crust (biocrust) samples are commonly determined to quantify the relevance of photosynthetically active organisms within these surface soil communities. Whereas chlorophyll extraction methods for freshwater algae and leaf tissues of vascular plants are well established, there is still some uncertainty regarding the optimal extraction method for biocrusts, where organism composition is highly variable and samples comprise major amounts of soil. In this study we analyzed the efficiency of two different chlorophyll extraction solvents, the effect of grinding the soil samples prior to the extraction procedure, and the impact of shaking as an intermediate step during extraction. The analyses were conducted on four different types of biocrusts. Our results show that for all biocrust types chlorophyll contents obtained with ethanol were significantly lower than those obtained using dimethyl sulfoxide (DMSO) as a solvent. Grinding of biocrust samples prior to analysis caused a highly significant decrease in chlorophyll content for green algal lichen- and cyanolichen-dominated biocrusts, and a tendency towards lower values for moss- and algae-dominated biocrusts. Shaking of the samples after each extraction step had a significant positive effect on the chlorophyll content of green algal lichen- and cyanolichen-dominated biocrusts. Based on our results we confirm a DMSO-based chlorophyll extraction method without grinding pretreatment and suggest the addition of an intermediate shaking step for complete chlorophyll extraction (see Supplement S6 for a detailed manual). Determination of a universal chlorophyll extraction method for biocrusts is essential for the inter-comparability of publications conducted across all continents.

  3. Females use self-referent cues to avoid mating with previous mates

    OpenAIRE

    Ivy, Tracie M; Weddle, Carie B; Sakaluk, Scott K

    2005-01-01

    Females of many species mate repeatedly throughout their lives, often with many different males (polyandry). Females can secure genetic benefits by maximizing their diversity of mating partners, and might be expected, therefore, to forego matings with previous partners in favour of novel males. Indeed, a female preference for novel mating partners has been shown in several taxa, but the mechanism by which females distinguish between novel males and previous mates remains unknown. We show that...

  4. Genetic Algorithms: A New Method for Neutron Beam Spectral Characterization

    International Nuclear Information System (INIS)

    David W. Freeman

    2000-01-01

    A revolutionary new concept for solving the neutron spectrum unfolding problem using genetic algorithms (GAs) has recently been introduced. GAs are part of a new field of evolutionary solution techniques that mimic living systems with computer-simulated chromosome solutions that mate, mutate, and evolve to create improved solutions. The original motivation for the research was to improve spectral characterization of neutron beams associated with boron neutron capture therapy (BNCT). The GA unfolding technique has been successfully applied to problems with moderate energy resolution (up to 47 energy groups). Initial research indicates that the GA unfolding technique may well be superior to popular unfolding methods in common use. Research now under way at Kansas State University is focused on optimizing the unfolding algorithm and expanding its energy resolution to unfold detailed beam spectra based on multiple foil measurements. Indications are that the final code will significantly outperform current, state-of-the-art codes in use by the scientific community
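
    A toy version of GA-based unfolding conveys the idea: evolve candidate spectra so that folding them through the detector response matrix reproduces the measured foil readings (truncation selection, one-point crossover and Gaussian mutation; all names and parameters illustrative, not the KSU code):

      import numpy as np
      rng = np.random.default_rng(0)

      def ga_unfold(R, m, pop=60, gens=300, sigma=0.05):
          # R: (n_detectors, n_groups) response matrix; m: measured readings
          n = R.shape[1]
          P = rng.random((pop, n))                          # random initial spectra
          for _ in range(gens):
              fitness = np.linalg.norm(P @ R.T - m, axis=1)
              parents = P[np.argsort(fitness)[:pop // 2]]   # truncation selection
              kids = parents.copy()
              cut = rng.integers(1, n, size=pop // 2)       # one-point crossover
              for k in range(pop // 2):
                  mate = parents[rng.integers(pop // 2)]
                  kids[k, cut[k]:] = mate[cut[k]:]
              kids += sigma * rng.standard_normal(kids.shape)   # mutation
              P = np.clip(np.vstack([parents, kids]), 0, None)  # spectra stay non-negative
          return P[np.argmin(np.linalg.norm(P @ R.T - m, axis=1))]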

  5. A novel word spotting method based on recurrent neural networks.

    Science.gov (United States)

    Frinken, Volkmar; Fischer, Andreas; Manmatha, R; Bunke, Horst

    2012-02-01

    Keyword spotting refers to the process of retrieving all instances of a given keyword from a document. In the present paper, a novel keyword spotting method for handwritten documents is described. It is derived from a neural network-based system for unconstrained handwriting recognition. As such it performs template-free spotting, i.e., it is not necessary for a keyword to appear in the training set. The keyword spotting is done using a modification of the CTC Token Passing algorithm in conjunction with a recurrent neural network. We demonstrate that the proposed systems outperform not only a classical dynamic time warping-based approach but also a modern keyword spotting system, based on hidden Markov models. Furthermore, we analyze the performance of the underlying neural networks when using them in a recognition task followed by keyword spotting on the produced transcription. We point out the advantages of keyword spotting when compared to classic text line recognition.

  6. Two new prediction rules for spontaneous pregnancy leading to live birth among subfertile couples, based on the synthesis of three previous models.

    NARCIS (Netherlands)

    C.C. Hunault; J.D.F. Habbema (Dik); M.J.C. Eijkemans (René); J.A. Collins (John); J.L.H. Evers (Johannes); E.R. te Velde (Egbert)

    2004-01-01

    textabstractBACKGROUND: Several models have been published for the prediction of spontaneous pregnancy among subfertile patients. The aim of this study was to broaden the empirical basis for these predictions by making a synthesis of three previously published models. METHODS:

  7. A simplified method to recover urinary vesicles for clinical applications, and sample banking.

    Science.gov (United States)

    Musante, Luca; Tataruch, Dorota; Gu, Dongfeng; Benito-Martin, Alberto; Calzaferri, Giulio; Aherne, Sinead; Holthofer, Harry

    2014-12-23

    Urinary extracellular vesicles provide a novel source of valuable biomarkers for kidney and urogenital diseases: current isolation protocols include laborious, sequential centrifugation steps which hamper their widespread research and clinical use. Furthermore, when large individual urine sample volumes or sizable target cohorts are to be processed (e.g. for biobanking), storage capacity is an additional problem. Thus, alternative methods are necessary to overcome such limitations. We have developed a practical vesicle isolation technique to yield easily manageable sample volumes in an exceptionally cost-efficient way, to facilitate their full utilization in less privileged environments and maximize the benefit of biobanking. Urinary vesicles were isolated by hydrostatic dialysis with minimal interference of soluble proteins or vesicle loss. Large volumes of urine were concentrated up to 1/100 of the original volume, and the dialysis step allowed equalization of urine physico-chemical characteristics. Vesicle fractions were found suitable for any application, including RNA analysis. In yield, our hydrostatic filtration dialysis system outperforms the conventional ultracentrifugation-based methods, and the labour-intensive and potentially hazardous ultracentrifugation steps are eliminated. Likewise, the need for trained laboratory personnel and heavy initial investment is avoided. Thus, our method qualifies as a method for laboratories working with urinary vesicles and for biobanking.

  8. Four points function fitted and first derivative procedure for determining the end points in potentiometric titration curves: statistical analysis and method comparison.

    Science.gov (United States)

    Kholeif, S A

    2001-06-01

    A new method that belongs to the differential category for determining the end points from potentiometric titration curves is presented. It uses a preprocessing step to find first-derivative values by fitting four data points in and around the region of inflection to a non-linear function, and then locates the end point, usually as a maximum or minimum, using an inverse parabolic interpolation procedure that has an analytical solution. The behavior and accuracy of the sigmoid and cumulative non-linear functions used are investigated against three factors. A statistical evaluation of the new method using linear least-squares method validation and multifactor data analysis is covered. The new method is generally applied to symmetrical and unsymmetrical potentiometric titration curves, and the end point is calculated using numerical procedures only. It outperforms the "parent" regular differential method in almost all factor levels and gives accurate results comparable to the true or estimated true end points. Calculated end points from selected experimental titration curves compatible with the equivalence point category of methods, such as Gran or Fortuin, are also compared with the new method.
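
    The closed-form step the abstract refers to, locating the derivative extremum by inverse parabolic interpolation, can be sketched as follows; a plain finite-difference derivative stands in for the paper's four-point function fit, so this is an illustration of the interpolation idea rather than the published procedure:

      import numpy as np

      def end_point(volume, emf):
          # volume: ascending titrant volumes; emf: measured potentials
          v = 0.5 * (volume[1:] + volume[:-1])        # midpoint abscissae
          d = np.diff(emf) / np.diff(volume)          # first derivative
          k = int(np.clip(np.argmax(np.abs(d)), 1, len(d) - 2))
          x0, x1, x2 = v[k - 1], v[k], v[k + 1]
          y0, y1, y2 = abs(d[k - 1]), abs(d[k]), abs(d[k + 1])
          # analytic vertex of the parabola through the three points
          num = (x1 - x0) ** 2 * (y1 - y2) - (x1 - x2) ** 2 * (y1 - y0)
          den = (x1 - x0) * (y1 - y2) - (x1 - x2) * (y1 - y0)
          return x1 - 0.5 * num / den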

  9. Semi-Lagrangian methods in air pollution models

    Directory of Open Access Journals (Sweden)

    A. B. Hansen

    2011-06-01

    Full Text Available Various semi-Lagrangian methods are tested with respect to advection in air pollution modeling. The aim is to find a method fulfilling as many of the desirable properties by Rasch and Williamson (1990) and Machenhauer et al. (2008) as possible. The focus in this study is on accuracy and local mass conservation.

    The methods tested are, first, classical semi-Lagrangian cubic interpolation, see e.g. Durran (1999); second, semi-Lagrangian cubic cascade interpolation, by Nair et al. (2002); third, semi-Lagrangian cubic interpolation with the modified interpolation weights, Locally Mass Conserving Semi-Lagrangian (LMCSL), by Kaas (2008); and last, semi-Lagrangian cubic interpolation with a locally mass conserving monotonic filter by Kaas and Nielsen (2010).

    Semi-Lagrangian (SL interpolation is a classical method for atmospheric modeling, cascade interpolation is more efficient computationally, modified interpolation weights assure mass conservation and the locally mass conserving monotonic filter imposes monotonicity.

    All schemes are tested with advection alone or with advection and chemistry together under both typical rural and urban conditions using different temporal and spatial resolution. The methods are compared with a current state-of-the-art scheme, Accurate Space Derivatives (ASD; see Frohn et al., 2002), presently used at the National Environmental Research Institute (NERI) in Denmark. To enable a consistent comparison only non-divergent flow configurations are tested.

    The test cases are based either on the traditional slotted cylinder or the rotating cone, where the schemes' ability to model both steep gradients and slopes is challenged.

    The tests showed that the locally mass conserving monotonic filter improved the results significantly for some of the test cases, though not for all. It was found that the semi-Lagrangian schemes, in almost every case, were not able to outperform the current ASD scheme.
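
    For readers unfamiliar with the scheme family, a one-dimensional classical semi-Lagrangian step (the first method listed above) is compact: trace each grid point back along the wind and interpolate the upstream value with a cubic Lagrange stencil. A sketch assuming a periodic domain and a uniform wind, for brevity:

      import numpy as np

      def semi_lagrangian_step(c, u, dt, dx):
          # c: concentrations on a periodic grid; u: constant wind speed
          n = len(c)
          x_dep = (np.arange(n) - u * dt / dx) % n     # departure points (grid units)
          j = np.floor(x_dep).astype(int)
          a = x_dep - j                                # fractional offset in [0, 1)
          w = [-a * (a - 1) * (a - 2) / 6,             # cubic Lagrange weights for
               (a + 1) * (a - 1) * (a - 2) / 2,        # stencil offsets -1, 0, 1, 2
               -(a + 1) * a * (a - 2) / 2,
               (a + 1) * a * (a - 1) / 6]
          return sum(wk * c[(j + k - 1) % n] for k, wk in enumerate(w))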

  10. Severe radiation morbidity in carcinoma of the cervix: impact of pretherapy surgical staging and previous surgery

    International Nuclear Information System (INIS)

    Fine, Bruce A.; Hempling, Ronald E.; Piver, M. Steven; Baker, Trudy R.; McAuley, Michael; Driscoll, Deborah

    1995-01-01

    Purpose: The purpose of this study is to delineate the factors which (a) contribute to an increase in the severe radiation-induced complication rate and (b) have a significant effect on survival in patients with International Federation of Gynecologists and Obstetricians (FIGO) Stage I-IVA cervical cancer undergoing pretherapy surgical staging. Methods and Materials: From 1971-1991, 189 patients underwent pretherapy surgical staging via a retroperitoneal approach (67) or transperitoneal approach (122). Seventy-nine patients had previously experienced a laparotomy. Patients subsequently received a median of 85 Gy to point A. In patients receiving paraaortic radiation, a median of 45 Gy was administered. One hundred and thirty-two (69.8%) patients received hydroxyurea as a radiation sensitizer. Results: Pretherapy surgical evaluation revealed that 21 of 89 (23.6%) Stage II patients and 32 of 85 (37.6%) Stage III patients had paraaortic lymph node metastases. Multivariate logistic regression analysis showed that the significant factors favorably influencing the radiation-induced complication rate were a retroperitoneal approach to pretherapy surgical staging and no previous laparotomy. Survival was significantly prolonged in patients receiving hydroxyurea, evaluated via a retroperitoneal incision, with negative paraaortic lymph nodes, and with an early stage of disease. Conclusion: A retroperitoneal approach to pretherapy surgical staging and absence of previous surgery reduced the incidence of subsequent radiation-induced complications. Despite improvements in the detection of occult disease, prolonged survival is impaired when the therapeutic measures currently available are used

  11. Does the patients' educational level and previous counseling affect their medication knowledge?

    Directory of Open Access Journals (Sweden)

    Abdulmalik M Alkatheri

    2013-01-01

    Conclusions: The education level of the patient and previous counseling are positively linked to medication knowledge. Knowledge of the medications' side effects proved to be the most difficult task for the participants in this study, requiring the highest level of education, and was improved by previous counseling.

  12. The A2iA French handwriting recognition system at the Rimes-ICDAR2011 competition

    Science.gov (United States)

    Menasri, Farès; Louradour, Jérôme; Bianne-Bernard, Anne-Laure; Kermorvant, Christopher

    2012-01-01

    This paper describes the system for the recognition of French handwriting submitted by A2iA to the competition organized at ICDAR2011 using the Rimes database. This system is composed of several recognizers based on three different recognition technologies, combined using a novel combination method. A multi-word recognition framework based on weighted finite-state transducers is presented, using an explicit word segmentation, a combination of isolated word recognizers and a language model. The system was tested both for isolated word recognition and for multi-word line recognition and submitted to the RIMES-ICDAR2011 competition. This system outperformed all previously proposed systems on these tasks.

  13. The job satisfaction of principals of previously disadvantaged schools

    African Journals Online (AJOL)

    The aim of this study was to identify influences on the job satisfaction of previously disadvantaged ..... I am still riding the cloud … I hope it lasts. .... as a way of creating a climate and culture in schools where individuals are willing to explore.

  14. Predictive factors for the development of diabetes in women with previous gestational diabetes mellitus

    DEFF Research Database (Denmark)

    Damm, P.; Kühl, C.; Bertelsen, Aksel

    1992-01-01

    OBJECTIVES: The purpose of this study was to determine the incidence of diabetes in women with previous dietary-treated gestational diabetes mellitus and to identify predictive factors for development of diabetes. STUDY DESIGN: Two to 11 years post partum, glucose tolerance was investigated in 241...... women with previous dietary-treated gestational diabetes mellitus and 57 women without previous gestational diabetes mellitus (control group). RESULTS: Diabetes developed in 42 (17.4%) women with previous gestational diabetes mellitus (3.7% insulin-dependent diabetes mellitus and 13.7% non...... of previous patients with gestational diabetes mellitus in whom plasma insulin was measured during an oral glucose tolerance test in late pregnancy a low insulin response at diagnosis was found to be an independent predictive factor for diabetes development. CONCLUSIONS: Women with previous dietary...

  15. Evaluating the performance of the Lee-Carter method and its variants in modelling and forecasting Malaysian mortality

    Science.gov (United States)

    Zakiyatussariroh, W. H. Wan; Said, Z. Mohammad; Norazan, M. R.

    2014-12-01

    This study investigated the performance of the Lee-Carter (LC) method and its variants in modeling and forecasting Malaysian mortality. These include the original LC, the Lee-Miller (LM) variant and the Booth-Maindonald-Smith (BMS) variant. These methods were evaluated using Malaysia's mortality data, measured as age-specific death rates (ASDR) for 1971 to 2009 for the overall population, while data for 1980-2009 were used in separate models for the male and female populations. The performance of the variants was examined in terms of the goodness of fit of the models and forecasting accuracy. Comparison was made based on several criteria, namely mean square error (MSE), root mean square error (RMSE), mean absolute deviation (MAD) and mean absolute percentage error (MAPE). The results indicate that the BMS method performed best in in-sample fitting, both for the overall population and when the models were fitted separately for the male and female populations. However, in the case of out-of-sample forecast accuracy, the BMS method was best only when the models were fitted to the overall population. When the models were fitted separately for males and females, LCnone performed better for the male population and the LM method for the female population.
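
    The original LC fit that all three variants build on is a rank-1 SVD of centred log death rates, with the period index usually forecast as a random walk with drift; a minimal sketch with the usual identifiability scaling (the jump-off adjustments that distinguish the LM and BMS variants are omitted):

      import numpy as np

      def lee_carter_fit(log_mx):
          # log_mx: ages-by-years matrix of log age-specific death rates
          ax = log_mx.mean(axis=1)                           # age pattern
          U, s, Vt = np.linalg.svd(log_mx - ax[:, None], full_matrices=False)
          bx, kt = U[:, 0], s[0] * Vt[0]                     # log m = ax + bx * kt
          scale = bx.sum()                                   # usual constraint: sum(bx) = 1
          return ax, bx / scale, kt * scale

      def forecast_kt(kt, horizon):
          # random walk with drift on the period index
          drift = (kt[-1] - kt[0]) / (len(kt) - 1)
          return kt[-1] + drift * np.arange(1, horizon + 1)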

  16. Spinal Arachnoiditis as a Complication of Cryptococcal Meningoencephalitis in Non-HIV Previously Healthy Adults

    Science.gov (United States)

    Komori, Mika; Kosa, Peter; Khan, Omar; Hammoud, Dima A.; Rosen, Lindsey B.; Browne, Sarah K.; Lin, Yen-Chih; Romm, Elena; Ramaprasad, Charu; Fries, Bettina C.; Bennett, John E.; Bielekova, Bibiana; Williamson, Peter R.

    2017-01-01

    Background. Cryptococcus can cause meningoencephalitis (CM) among previously healthy non-HIV adults. Spinal arachnoiditis is under-recognized, since diagnosis is difficult with concomitant central nervous system (CNS) pathology. Methods. We describe 6 cases of spinal arachnoiditis among 26 consecutively recruited CM patients with normal CD4 counts who achieved microbiologic control. We performed detailed neurological exams, cerebrospinal fluid (CSF) immunophenotyping and biomarker analysis before and after adjunctive immunomodulatory intervention with high dose pulse corticosteroids, affording causal inference into pathophysiology. Results. All 6 exhibited severe lower motor neuron involvement in addition to cognitive changes and gait disturbances from meningoencephalitis. Spinal involvement was associated with asymmetric weakness and urinary retention. Diagnostic specificity was improved by MRI, which demonstrated lumbar spinal nerve root enhancement and clumping or lesions. Despite negative fungal cultures, CSF inflammatory biomarkers, sCD27 and sCD21, as well as the neuronal damage biomarker, neurofilament light chain (NFL), were elevated compared to healthy donor (HD) controls. Elevations in these biomarkers were associated with clinical symptoms and showed improvement with adjunctive high dose pulse corticosteroids. Conclusions. These data suggest that a post-infectious spinal arachnoiditis is an important complication of CM in previously healthy individuals, requiring heightened clinician awareness. Despite microbiological control, this syndrome causes significant pathology likely due to increased inflammation and may be amenable to suppressive therapeutics. PMID:28011613

  17. Deep Web Search Interface Identification: A Semi-Supervised Ensemble Approach

    Directory of Open Access Journals (Sweden)

    Hong Wang

    2014-12-01

    Full Text Available To surface the Deep Web, one crucial task is to predict whether a given web page has a search interface (a searchable HyperText Markup Language (HTML) form) or not. Previous studies have focused on supervised classification with labeled examples. However, labeled data are scarce, hard to get and require tedious manual work, while unlabeled HTML forms are abundant and easy to obtain. In this research, we consider the plausibility of using both labeled and unlabeled data to train better models to identify search interfaces more effectively. We present a semi-supervised co-training ensemble learning approach using both neural networks and decision trees to deal with the search interface identification problem. We show that the proposed model outperforms previous methods using only labeled data. We also show that adding unlabeled data improves the effectiveness of the proposed model.
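
    A compact co-training loop in the spirit of the abstract can be written with two scikit-learn models standing in for the paper's neural networks and decision trees; the confidence-based selection rule and batch size here are assumptions, not the authors' ensemble:

      import numpy as np
      from sklearn.tree import DecisionTreeClassifier
      from sklearn.neural_network import MLPClassifier

      def co_train(X_lab, y_lab, X_unlab, rounds=10, per_round=20):
          X, y, pool = X_lab.copy(), y_lab.copy(), X_unlab.copy()
          views = [DecisionTreeClassifier(), MLPClassifier(max_iter=500)]
          for _ in range(rounds):
              for clf in views:
                  if len(pool) == 0:
                      return views
                  clf.fit(X, y)
                  proba = clf.predict_proba(pool)
                  pick = np.argsort(proba.max(axis=1))[-per_round:]   # most confident pages
                  X = np.vstack([X, pool[pick]])                      # pseudo-label and add
                  y = np.concatenate([y, clf.classes_[proba[pick].argmax(axis=1)]])
                  pool = np.delete(pool, pick, axis=0)
          return views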

  18. Iodine-131-induced hepatotoxicity in previously healthy patients with Graves' disease.

    Science.gov (United States)

    Jhummon, Navina Priya; Tohooloo, Bhavna; Qu, Shen

    2013-01-01

    To describe the association of the rare and serious complication of liver toxicity with radioactive iodine (131)I (RAI) treatment in previously healthy Graves' disease (GD) patients. We report the clinical, laboratory and pathologic findings of 2 cases of severe liver toxicity associated with RAI treatment in previously healthy patients with GD. Clinical examination and laboratory investigations excluded viral hepatitis, autoimmune hepatitis, granulomatous disease, primary biliary disease, extrahepatic biliary obstruction, and heart failure. Case 1: a previously healthy 52-year-old man with typical GD developed severe liver toxicity following RAI treatment that required 1 week of treatment in hospital. Case 2: a previously healthy 34-year-old woman with typical GD developed jaundice following RAI treatment that required several weeks of in-hospital treatment in the hepato-biliary department. In both cases, the liver dysfunction resolved after intensive treatment with hepato-protective agents. In this report the therapeutic considerations as well as the pathogenetic possibilities are reviewed. To the best of our knowledge, this is the first description of the association observed, which is rare but may be severe and should be considered in any case of thyrotoxicosis where liver dysfunction develops after treatment with radioactive iodine (131)I.

  19. Modified conjugate gradient method for diagonalizing large matrices.

    Science.gov (United States)

    Jie, Quanlin; Liu, Dunhuan

    2003-11-01

    We present an iterative method to diagonalize large matrices. The basic idea is the same as in the conjugate gradient (CG) method, i.e., minimizing the Rayleigh quotient via its gradient while avoiding reintroducing errors along the directions of previous gradients. Each iteration step finds the lowest eigenvector of the matrix in a subspace spanned by the current trial vector and the corresponding gradient of the Rayleigh quotient, as well as some previous trial vectors. The gradient, together with the previous trial vectors, plays a role similar to that of the conjugate gradient in the original CG algorithm. Our numeric tests indicate that this method converges significantly faster than the original CG method, while the computational cost of one iteration step is about the same. It is suitable for first-principles calculations.
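
    A direct reading of the abstract suggests the following numpy sketch: at each step, perform Rayleigh-Ritz in the small subspace spanned by the trial vector, the Rayleigh-quotient gradient and a few previous trial vectors (the subspace size and stopping rule are illustrative, not the paper's choices):

      import numpy as np

      def lowest_eigvec(A, n_iter=200, m=4, tol=1e-10):
          n = A.shape[0]
          x = np.random.default_rng(1).standard_normal(n)
          x /= np.linalg.norm(x)
          history = []
          for _ in range(n_iter):
              Ax = A @ x
              rho = x @ Ax                        # Rayleigh quotient
              g = Ax - rho * x                    # its gradient (up to a factor)
              if np.linalg.norm(g) < tol:
                  break
              basis = [x, g] + history[-(m - 2):] # current vector, gradient, past trials
              Q, _ = np.linalg.qr(np.column_stack(basis))
              T = Q.T @ A @ Q                     # small projected matrix
              w, V = np.linalg.eigh(T)
              history.append(x)
              x_new = Q @ V[:, 0]                 # Ritz vector for the lowest Ritz value
              x = x_new / np.linalg.norm(x_new)
          return rho, x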

  20. Impact of Previous Pharmacy Work Experience on Pharmacy School Academic Performance

    Science.gov (United States)

    Mar, Ellena; T-L Tang, Terrill; Sasaki-Hill, Debra; Kuperberg, James R.; Knapp, Katherine

    2010-01-01

    Objectives To determine whether students' previous pharmacy-related work experience was associated with their pharmacy school performance (academic and clinical). Methods The following measures of student academic performance were examined: pharmacy grade point average (GPA), scores on cumulative high-stakes examinations, and advanced pharmacy practice experience (APPE) grades. The quantity and type of pharmacy-related work experience each student performed prior to matriculation was solicited through a student survey instrument. Survey responses were correlated with academic measures, and demographic-based stratified analyses were conducted. Results No significant difference in academic or clinical performance between those students with prior pharmacy experience and those without was identified. Subanalyses by work setting, position type, and substantial pharmacy work experience did not reveal any association with student performance. A relationship was found, however, between age and work experience, ie, older students tended to have more work experience than younger students. Conclusions Prior pharmacy work experience did not affect students' overall academic or clinical performance in pharmacy school. The lack of significant findings may have been due to the inherent practice limitations of nonpharmacist positions, changes in pharmacy education, and the limitations of survey responses. PMID:20498735

  1. Road Lane Detection by Discriminating Dashed and Solid Road Lanes Using a Visible Light Camera Sensor

    Directory of Open Access Journals (Sweden)

    Toan Minh Hoang

    2016-08-01

    Full Text Available With the increasing need for road lane detection used in lane departure warning systems and autonomous vehicles, many studies have been conducted to turn road lane detection into a virtual assistant to improve driving safety and reduce car accidents. Most of the previous research approaches detect the central line of a road lane and not the accurate left and right boundaries of the lane. In addition, they do not discriminate between dashed and solid lanes when detecting the road lanes. However, this discrimination is necessary for the safety of autonomous vehicles and the safety of vehicles driven by human drivers. To overcome these problems, we propose a method for road lane detection that distinguishes between dashed and solid lanes. Experimental results with the Caltech open database showed that our method outperforms conventional methods.

  2. Joint Bayesian variable and graph selection for regression models with network-structured predictors

    Science.gov (United States)

    Peterson, C. B.; Stingo, F. C.; Vannucci, M.

    2015-01-01

    In this work, we develop a Bayesian approach to perform selection of predictors that are linked within a network. We achieve this by combining a sparse regression model relating the predictors to a response variable with a graphical model describing conditional dependencies among the predictors. The proposed method is well-suited for genomic applications since it allows the identification of pathways of functionally related genes or proteins which impact an outcome of interest. In contrast to previous approaches for network-guided variable selection, we infer the network among predictors using a Gaussian graphical model and do not assume that network information is available a priori. We demonstrate that our method outperforms existing methods in identifying network-structured predictors in simulation settings, and illustrate our proposed model with an application to inference of proteins relevant to glioblastoma survival. PMID:26514925
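
    In contrast to the paper's fully Bayesian joint inference, a rough frequentist analogue of its two ingredients (network inference among predictors, plus sparse regression on the response) can be sketched with scikit-learn. This decoupled sketch is illustrative only; the data, alphas, and threshold below are hypothetical, and the paper's contribution is precisely that the two steps inform each other:

        import numpy as np
        from sklearn.covariance import GraphicalLasso
        from sklearn.linear_model import Lasso

        rng = np.random.default_rng(0)
        n, p = 200, 20
        X = rng.normal(size=(n, p))
        y = X[:, 0] + X[:, 1] + rng.normal(scale=0.5, size=n)

        # Step 1: infer the conditional-dependence network among predictors
        # (frequentist stand-in for the paper's Gaussian graphical model).
        gl = GraphicalLasso(alpha=0.1).fit(X)
        network = np.abs(gl.precision_) > 1e-4   # nonzero partial correlations = edges
        np.fill_diagonal(network, False)

        # Step 2: sparse regression selecting predictors of the response
        # (stand-in for the paper's sparse regression component).
        selected = np.flatnonzero(Lasso(alpha=0.05).fit(X, y).coef_)
        print("selected predictors:", selected)
        print("their network neighbours:",
              {int(j): np.flatnonzero(network[j]).tolist() for j in selected})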

  3. Virtual fringe projection system with nonparallel illumination based on iteration

    International Nuclear Information System (INIS)

    Zhou, Duo; Wang, Zhangying; Gao, Nan; Zhang, Zonghua; Jiang, Xiangqian

    2017-01-01

    Fringe projection profilometry has been widely applied in many fields. To set up an ideal measuring system, a virtual fringe projection technique has been studied to assist in the design of hardware configurations. However, existing virtual fringe projection systems use parallel illumination and have a fixed optical framework. This paper presents a virtual fringe projection system with nonparallel illumination. Using an iterative method to calculate the intersection points between rays and reference planes or object surfaces, the proposed system can simulate projected fringe patterns and captured images. A new explicit calibration method is presented to validate the precision of the system. Simulated results indicate that the proposed iterative method outperforms previous systems. Our virtual system can be applied to error analysis and algorithm optimization, and can help operators find ideal system parameter settings for actual measurements.
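
    One plausible form of the iteration described above, intersecting a diverging (nonparallel) projector ray with a surface, is a fixed-point iteration on the ray parameter. A minimal sketch; the pinhole position, ray, and surface below are hypothetical, and convergence assumes the surface slope is small relative to the ray direction:

        import numpy as np

        def intersect_ray_surface(o, d, h, t=0.0, tol=1e-9, max_iter=100):
            # Solve o_z + t*d_z = h(o_x + t*d_x, o_y + t*d_y) by fixed point.
            for _ in range(max_iter):
                p = o + t * d                          # current point on the ray
                t_new = (h(p[0], p[1]) - o[2]) / d[2]  # re-intersect in z
                if abs(t_new - t) < tol:
                    break
                t = t_new
            return o + t * d

        surface = lambda x, y: 0.05 * (x**2 + y**2)    # gently curved test object
        o = np.array([0.0, 0.0, 1.0])                  # projector pinhole
        d = np.array([0.3, 0.1, -1.0]); d /= np.linalg.norm(d)
        print(intersect_ray_surface(o, d, surface))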

  4. A new methodology for non-contact accurate crack width measurement through photogrammetry for automated structural safety evaluation

    International Nuclear Information System (INIS)

    Jahanshahi, Mohammad R; Masri, Sami F

    2013-01-01

    In mechanical, aerospace, and civil structures, cracks are important defects that can cause catastrophes if neglected. Visual inspection is currently the predominant method for crack assessment, but this approach is tedious, labor-intensive, subjective, and highly qualitative. An inexpensive alternative to current monitoring methods is a robotic system that performs autonomous crack detection and quantification. Toward this goal, several image-based crack detection approaches have been developed; however, crack thickness quantification, an essential element of reliable structural condition assessment, has not been sufficiently investigated. In this paper, a new contact-less crack quantification methodology, based on computer vision and image processing concepts, is introduced and evaluated against a crack quantification approach previously developed by the authors. The proposed approach utilizes depth perception to quantify crack thickness and, unlike most previous studies, needs no scale attachment to the region under inspection, which makes it ideal for incorporation with autonomous or semi-autonomous mobile inspection systems. Validation tests show that the new approach outperforms the previously developed one.
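
    The geometric core of depth-based quantification is perspective back-projection: a feature spanning w pixels, imaged at distance Z by a pinhole camera with focal length f (expressed in pixels), has physical size w * Z / f. A minimal sketch with hypothetical numbers; this is the underlying geometry only, not the authors' full algorithm, which also handles crack orientation and perspective error:

        def crack_width_mm(width_px: float, depth_mm: float, focal_px: float) -> float:
            # Pinhole model: physical size = pixel span * depth / focal length.
            return width_px * depth_mm / focal_px

        # Example: a 4-pixel-wide crack seen from 500 mm with f = 2000 px.
        print(crack_width_mm(4, 500.0, 2000.0))   # -> 1.0 mm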

  5. The job satisfaction of principals of previously disadvantaged schools

    African Journals Online (AJOL)

    The aim of this study was to identify influences on the job satisfaction of previously disadvantaged school principals in North-West Province. Evans's theory of job satisfaction, morale and motivation was useful as a conceptual framework. A mixed-methods explanatory research design was important in discovering issues with ...

  6. 75 FR 57844 - Airworthiness Directives; Gulfstream Aerospace LP (Type Certificate Previously Held by Israel...

    Science.gov (United States)

    2010-09-23

    Airworthiness directive (Amendment 39-16438; Docket No. FAA-2010-0555) for Gulfstream Aerospace LP (type certificate previously held by Israel Aircraft Industries, Ltd.) Model Galaxy and Gulfstream 200 airplanes.

  7. 77 FR 64767 - Airworthiness Directives; Gulfstream Aerospace LP (Type Certificate Previously Held by Israel...

    Science.gov (United States)

    2012-10-23

    Airworthiness directive (AD) for certain Gulfstream Aerospace LP (type certificate previously held by Israel Aircraft Industries, Ltd.) Model Galaxy and Gulfstream 200 airplanes.

  8. 78 FR 11567 - Airworthiness Directives; Gulfstream Aerospace LP (Type Certificate Previously Held by Israel...

    Science.gov (United States)

    2013-02-19

    Airworthiness directive for Gulfstream Aerospace LP (type certificate previously held by Israel Aircraft Industries, Ltd.) Model Gulfstream G150 airplanes.

  9. 76 FR 70040 - Airworthiness Directives; Gulfstream Aerospace LP (Type Certificate Previously Held by Israel...

    Science.gov (United States)

    2011-11-10

    New airworthiness directive 2011-23-07 for Gulfstream Aerospace LP (type certificate previously held by Israel Aircraft Industries, Ltd.) Model Galaxy and Gulfstream 200 airplanes.

  10. 76 FR 6525 - Airworthiness Directives; Cessna Aircraft Company (Type Certificate Previously Held by Columbia...

    Science.gov (United States)

    2011-02-07

    New airworthiness directive 2011-03-04 for certain Cessna Aircraft Company (type certificate previously held by Columbia Aircraft...) airplanes.

  11. Augmenting Conceptual Design Trajectory Tradespace Exploration with Graph Theory

    Science.gov (United States)

    Dees, Patrick D.; Zwack, Mathew R.; Steffens, Michael; Edwards, Stephen

    2016-01-01

    Within conceptual design, changes occur rapidly due to a combination of uncertainty and shifting requirements. To stay relevant in this fluid time, trade studies must also be performed rapidly. To drive down analysis time while improving the information gained from these studies, surrogate models can be created to represent the complex output of a tool or tools within a specified tradespace. Creating such a model, however, requires collecting a large amount of data in a short amount of time, so the historical approach of relying on subject matter experts to generate the required data is infeasible on schedule grounds. By implementing automation and distributed analysis, the required data can instead be generated in a fraction of the time. Previous work focused on setting up a tool called multiPOST, capable of orchestrating many simultaneous runs of an analysis tool and assessing these automated analyses using heuristics gleaned from the best practices of current subject matter experts. In this update to the previous work, elements of graph theory are included to further drive down analysis time by leveraging previously gathered data. The approach is shown to outperform the previous method in both the time required and the quantity and quality of the data produced.
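
    The abstract does not spell out how graph theory is applied. One plausible reading, caching results on a tradespace graph so previously gathered data is reused rather than regenerated, can be sketched as follows; the grid tradespace and the stand-in analysis function are hypothetical:

        import networkx as nx

        # Toy 2-parameter tradespace: nodes are design points, edges connect
        # points that differ by one parameter step.
        G = nx.grid_2d_graph(5, 5)

        def expensive_analysis(point):
            return sum(point)              # stand-in for a full trajectory run

        def evaluate(G, point):
            node = G.nodes[point]
            if "result" not in node:       # reuse previously gathered data
                node["result"] = expensive_analysis(point)
            return node["result"]

        # Walk the tradespace; each design point is analyzed at most once.
        for point in nx.bfs_tree(G, (0, 0)):
            evaluate(G, point)
        print(sum("result" in d for _, d in G.nodes(data=True)), "points analyzed")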

  12. Cost-effectiveness of abiraterone treatment in patients with castration-resistant prostate cancer who previously received docetaxel therapy

    Directory of Open Access Journals (Sweden)

    A. V. Rudakova

    2014-01-01

    Background. Therapy for metastatic castration-resistant prostate cancer (CRPC) is a serious problem that requires significant public health care expenditures. Objective: to evaluate the cost-effectiveness of abiraterone treatment in patients with metastatic CRPC who previously received docetaxel, under the conditions of the budgetary public health system of the Russian Federation. Material and methods. A Markov model based on the COU-AA-301 randomized placebo-controlled phase III study was used. Survival analysis was performed for 70-year-old patients. The cost of abiraterone therapy corresponded to that of the 2013 auctions. Results. In patients who have previously received docetaxel, abiraterone therapy increases life expectancy by an average of 4.6 months and progression-free survival by 2.0 months. The cost per additional life-year is about 3.6 million rubles, and the cost per additional quality-adjusted life-year (QALY) about 5.45 million rubles. Conclusion. The cost-effectiveness of abiraterone therapy for metastatic CRPC in patients who have previously received docetaxel is similar to that of other medicines used in oncological practice under the budgetary public health system of the Russian Federation. Abiraterone may therefore be considered an economically acceptable medical intervention in this clinical situation.
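
    Markov cost-effectiveness models of this kind track a cohort through health states, accumulating costs and quality-adjusted survival each cycle. A minimal three-state sketch (progression-free, progressed, dead); every number below is illustrative and not taken from the study:

        import numpy as np

        P = np.array([[0.90, 0.08, 0.02],   # monthly transition probabilities
                      [0.00, 0.93, 0.07],
                      [0.00, 0.00, 1.00]])
        cost = np.array([3000.0, 1000.0, 0.0])    # cost per state per month
        utility = np.array([0.80, 0.60, 0.0])     # quality-of-life weights

        state = np.array([1.0, 0.0, 0.0])         # cohort starts progression-free
        total_cost = qalys = 0.0
        for _ in range(120):                      # 10-year horizon, monthly cycles
            total_cost += state @ cost
            qalys += (state @ utility) / 12.0     # convert utility-months to QALYs
            state = state @ P                     # advance the cohort one cycle
        print(f"cost per QALY: {total_cost / qalys:,.0f}")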

  14. Detecting Malware with an Ensemble Method Based on Deep Neural Network

    Directory of Open Access Journals (Sweden)

    Jinpei Yan

    2018-01-01

    Malware detection plays a crucial role in computer security. Recent research mainly uses machine-learning-based methods that rely heavily on domain knowledge for manually extracting malicious features. In this paper, we propose MalNet, a novel malware detection method that learns features automatically from the raw data. Concretely, we first generate a grayscale image from each malware file while extracting its opcode sequence with the disassembly tool IDA. MalNet then uses CNN and LSTM networks to learn from the grayscale image and the opcode sequence, respectively, and applies a stacking ensemble for malware classification. We perform experiments on more than 40,000 samples, including 20,650 benign files collected from online software providers and 21,736 malware samples provided by Microsoft. The evaluation shows that MalNet achieves 99.88% validation accuracy for malware detection. In addition, in a malware family classification experiment on 9 malware families, MalNet outperforms most related works with 99.36% detection accuracy and achieves a considerable speed-up in detection efficiency compared with two state-of-the-art results on the Microsoft malware dataset.
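
    The stacking step can be sketched with scikit-learn: two base learners feed a logistic-regression meta-learner trained on their cross-validated predictions. The MLPs below are simple stand-ins for the paper's image CNN and opcode LSTM, and the single synthetic feature matrix stands in for the two modalities:

        from sklearn.datasets import make_classification
        from sklearn.ensemble import StackingClassifier
        from sklearn.linear_model import LogisticRegression
        from sklearn.model_selection import train_test_split
        from sklearn.neural_network import MLPClassifier

        X, y = make_classification(n_samples=2000, n_features=64, random_state=0)
        X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

        stack = StackingClassifier(
            estimators=[
                ("image_branch", MLPClassifier(hidden_layer_sizes=(32,),
                                               max_iter=500, random_state=0)),
                ("opcode_branch", MLPClassifier(hidden_layer_sizes=(64,),
                                                max_iter=500, random_state=1)),
            ],
            final_estimator=LogisticRegression(),  # meta-learner on stacked outputs
        )
        stack.fit(X_tr, y_tr)
        print("held-out accuracy:", stack.score(X_te, y_te))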

  15. Word Sense Disambiguation with LSTM : Do We Really Need 100 Billion Words?

    NARCIS (Netherlands)

    Le, Minh; Postma, Marten; Urbani, Jacopo

    2017-01-01

    Recently, Yuan et al. (2016) have shown the effectiveness of using Long Short-Term Memory (LSTM) for performing Word Sense Disambiguation (WSD). Their proposed technique outperformed the previous state-of-the-art on several benchmarks, but neither the training data nor the source code was made publicly available.

  16. Streaming Parallel GPU Acceleration of Large-Scale filter-based Spiking Neural Networks

    NARCIS (Netherlands)

    L.P. Slazynski (Leszek); S.M. Bohte (Sander)

    2012-01-01

    The arrival of graphics processing unit (GPU) cards suitable for massively parallel computing promises affordable large-scale neural network simulation previously only available at supercomputing facilities. The raw numbers suggest that GPUs may outperform CPUs by at least an order of magnitude.

  17. Neural networks for link prediction in realistic biomedical graphs: a multi-dimensional evaluation of graph embedding-based approaches.

    Science.gov (United States)

    Crichton, Gamal; Guo, Yufan; Pyysalo, Sampo; Korhonen, Anna

    2018-05-21

    Link prediction in biomedical graphs has several important applications, including Drug-Target Interaction (DTI) prediction, Protein-Protein Interaction (PPI) prediction and Literature-Based Discovery (LBD). It can be done using a classifier that outputs the probability of link formation between nodes. Recently, several works have used neural networks to create node representations which allow rich inputs to neural classifiers. Preliminary work along these lines reports promising results, but did not use realistic settings like time-slicing, evaluate performance with comprehensive metrics, or explain when or why neural network methods outperform. We investigated how inputs from four node representation algorithms affect the performance of a neural link predictor on random- and time-sliced biomedical graphs of real-world sizes (∼ 6 million edges) containing information relevant to DTI, PPI and LBD. We compared the performance of the neural link predictor to those of established baselines and report performance across five metrics. In random- and time-sliced experiments, when the neural network methods were able to learn good node representations and there was a negligible amount of disconnected nodes, those approaches outperformed the baselines. In the smallest graph (∼ 15,000 edges) and in larger graphs with approximately 14% disconnected nodes, baselines such as Common Neighbours proved a justifiable choice for link prediction. At low recall levels (∼ 0.3) the approaches were mostly equal, but at higher recall levels, across all nodes and in average performance at individual nodes, neural network approaches were superior. Analysis showed that neural network methods performed well on links between nodes with no previous common neighbours, potentially the most interesting links. Additionally, while neural network methods benefit from large amounts of data, they require considerable amounts of computational resources to utilise them. Our results indicate that neural approaches are preferable when good node representations can be learned, while simple baselines remain competitive otherwise.
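
    The Common Neighbours baseline mentioned above scores a candidate link (u, v) by the number of neighbours the two nodes share. A minimal sketch on a stand-in graph (the karate-club toy graph rather than a biomedical network):

        import networkx as nx

        G = nx.karate_club_graph()     # stand-in for a biomedical graph

        def common_neighbours(G, u, v):
            return len(set(G[u]) & set(G[v]))

        # Rank all currently unlinked pairs; top-scoring pairs are the
        # predicted links.
        ranked = sorted(nx.non_edges(G),
                        key=lambda e: common_neighbours(G, *e),
                        reverse=True)
        print(ranked[:5])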

  18. Efficacy and safety of 10,600-nm carbon dioxide fractional laser on facial skin with previous volume injections

    Directory of Open Access Journals (Sweden)

    Josiane Hélou

    2013-01-01

    Background: Fractionated carbon dioxide (CO2) lasers are a new treatment modality for skin resurfacing. The cosmetic rejuvenation market abounds with injectable devices (poly-L-lactic acid, polymethyl-methacrylate, collagens, hyaluronic acids, silicone). The objective of this study is to examine the efficacy and safety of the 10,600-nm CO2 fractional laser on facial skin with previous volume injections. Materials and Methods: This is a retrospective study of 14 patients treated with fractional CO2 laser who had had previous facial volume restoration. The indication for laser therapy, patient age, previous facial volume restoration, and side effects were recorded from the medical files. Objective assessments were made through clinical physician global assessment records and improvement scores; patients' satisfaction rates were also recorded. Results: Review of the medical records of the 14 patients shows that five patients had polylactic acid injection prior to the laser session, eight had hyaluronic acid injection, two had fat injection, two had silicone injection, and one had a facial thread lift. Side effects included pain during the laser treatment, post-treatment scaling, post-treatment erythema, and hyperpigmentation that spontaneously resolved within a month. Concerning the previous facial volume restoration, no granulomatous reactions, facial shape deformation, or asymmetry were encountered, whatever the facial volume product. Conclusion: CO2 fractional laser treatments do not seem to affect facial skin that had previous facial volume restoration with polylactic acid for more than 6 years, hyaluronic acid for more than 0.5 year, silicone for more than 6 years, or fat for more than 1.4 years. Prospective larger studies focusing on many other variables (skin phototype, injected device type) are required to achieve better results.

  19. ATLANTIC DIP: simplifying the follow-up of women with previous gestational diabetes.

    LENUS (Irish Health Repository)

    Noctor, E

    2013-11-01

    Previous gestational diabetes (GDM) is associated with a significant lifetime risk of type 2 diabetes. In this study, we assessed the performance of HbA1c and fasting plasma glucose (FPG) measurements against that of 75 g oral glucose tolerance testing (OGTT) for the follow-up screening of women with previous GDM.

  20. IMPROVING NEAREST NEIGHBOUR SEARCH IN 3D SPATIAL ACCESS METHOD

    Directory of Open Access Journals (Sweden)

    A. Suhaibaha

    2016-10-01

    Nearest Neighbour (NN) search is one of the important queries and analyses for spatial applications. In normal practice, a spatial access method structure is used during Nearest Neighbour query execution to retrieve information from the database. However, most spatial access method structures still face unresolved issues such as overlap among nodes and repetitive data entries, which cause excessive Input/Output (I/O) operations and inefficient data retrieval. The situation becomes more critical when dealing with 3D data, which is usually large due to its detailed geometry and other attached information. In this research, a clustered 3D hierarchical structure is introduced as a 3D spatial access method. The structure is expected to improve the retrieval of Nearest Neighbour information for 3D objects. Several tests were performed for single Nearest Neighbour search and k Nearest Neighbour (kNN) search. The tests indicate that the clustered hierarchical structure handles Nearest Neighbour queries more efficiently than its competitor: it reduces repetitive data entries and the number of accessed pages, requires minimal I/O operations, and its query response time also outperforms the competitor's. Several possible future applications of this research are discussed and summarized.
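
    The general idea of hierarchical spatial indexing for NN queries, visiting only the few tree nodes that can contain the nearest points instead of scanning the whole dataset, can be illustrated with SciPy's k-d tree. This is a generic structure for illustration, not the paper's clustered hierarchy:

        import numpy as np
        from scipy.spatial import cKDTree

        rng = np.random.default_rng(0)
        points = rng.uniform(size=(100_000, 3))   # hypothetical 3D point data
        tree = cKDTree(points)                    # hierarchical spatial index

        query = np.array([0.5, 0.5, 0.5])
        dist, idx = tree.query(query, k=5)        # kNN search with k = 5
        print(idx, dist)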